https://en.wikipedia.org/wiki/Ozone%20layer
Ozone layer
The ozone layer or ozone shield is a region of Earth's stratosphere that absorbs most of the Sun's ultraviolet radiation. It contains a high concentration of ozone (O3) in relation to other parts of the atmosphere, although still small in relation to other gases in the stratosphere. The ozone layer contains less than 10 parts per million of ozone, while the average ozone concentration in Earth's atmosphere as a whole is about 0.3 parts per million. The ozone layer is mainly found in the lower portion of the stratosphere, from approximately 15 to 35 kilometres above Earth, although its thickness varies seasonally and geographically. The ozone layer was discovered in 1913 by French physicists Charles Fabry and Henri Buisson. Measurements of the Sun showed that the radiation sent out from its surface and reaching the ground on Earth is usually consistent with the spectrum of a black body with a temperature in the range of 5,500–6,000 K, except that there was no radiation below a wavelength of about 310 nm at the ultraviolet end of the spectrum. It was deduced that the missing radiation was being absorbed by something in the atmosphere. Eventually the spectrum of the missing radiation was matched to only one known chemical, ozone. Its properties were explored in detail by the British meteorologist G. M. B. Dobson, who developed a simple spectrophotometer (the Dobsonmeter) that could be used to measure stratospheric ozone from the ground. Between 1928 and 1958, Dobson established a worldwide network of ozone monitoring stations, which continue to operate to this day. The "Dobson Unit" (DU), a convenient measure of the amount of ozone overhead, is named in his honor. The ozone layer absorbs 97 to 99 percent of the Sun's medium-frequency ultraviolet light (from about 200 nm to 315 nm wavelength), which otherwise would potentially damage exposed life forms near the surface. In 1985, atmospheric research revealed that the ozone layer was being depleted by chemicals released by industry, mainly chlorofluorocarbons (CFCs). Concerns that increased UV radiation due to ozone depletion threatened life on Earth, including increased skin cancer in humans and other ecological problems, led to bans on the chemicals, and the latest evidence is that ozone depletion has slowed or stopped. The United Nations General Assembly has designated September 16 as the International Day for the Preservation of the Ozone Layer. Venus also has a thin ozone layer at an altitude of 100 kilometers above the planet's surface. Sources The photochemical mechanisms that give rise to the ozone layer were discovered by the British physicist Sydney Chapman in 1930. Ozone in the Earth's stratosphere is created by ultraviolet light striking ordinary oxygen molecules containing two oxygen atoms (O2), splitting them into individual oxygen atoms (atomic oxygen); the atomic oxygen then combines with unbroken O2 to create ozone, O3. The ozone molecule is unstable (although, in the stratosphere, long-lived) and when ultraviolet light hits ozone it splits into a molecule of O2 and an individual atom of oxygen, a continuing process called the ozone–oxygen cycle. Chemically, this can be described as: O2 + hν (UV) → 2 O, followed by O + O2 ↔ O3. About 90 percent of the ozone in the atmosphere is contained in the stratosphere. Ozone concentrations are greatest between about 20 and 40 kilometres in altitude, where they range from about 2 to 8 parts per million. If all of the ozone were compressed to the pressure of the air at sea level, it would form a layer only about 3 millimetres thick.
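As a rough illustration of the column arithmetic above, the following sketch converts an ozone column measured in Dobson units into the equivalent layer thickness at sea-level pressure; the 300 DU sample column and the ideal-gas constants are assumptions chosen for the example.

```python
# Back-of-envelope check that a typical ozone column (~300 Dobson units)
# corresponds to a layer only a few millimetres thick at sea-level pressure.
# Assumption: 1 DU is defined as a 0.01 mm thick layer of pure ozone at
# standard temperature and pressure (0 °C, 1 atm).

MM_PER_DU = 0.01          # millimetres of pure ozone at STP per Dobson unit
LOSCHMIDT = 2.687e25      # molecules per cubic metre of an ideal gas at STP

def column_thickness_mm(dobson_units: float) -> float:
    """Equivalent thickness (mm) of the ozone column compressed to STP."""
    return dobson_units * MM_PER_DU

def column_molecules_per_m2(dobson_units: float) -> float:
    """Number of ozone molecules above one square metre of surface."""
    thickness_m = column_thickness_mm(dobson_units) / 1000.0
    return LOSCHMIDT * thickness_m

if __name__ == "__main__":
    typical_column = 300.0  # DU, a representative global-average value (assumed)
    print(f"{typical_column:.0f} DU ≈ {column_thickness_mm(typical_column):.1f} mm at STP")
    print(f"≈ {column_molecules_per_m2(typical_column):.2e} O3 molecules per m²")
```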
Ultraviolet light Although the concentration of the ozone in the ozone layer is very small, it is vitally important to life because it absorbs biologically harmful ultraviolet (UV) radiation coming from the Sun. Extremely short or vacuum UV (10–100 nm) is screened out by nitrogen. UV radiation capable of penetrating nitrogen is divided into three categories, based on its wavelength; these are referred to as UV-A (400–315 nm), UV-B (315–280 nm), and UV-C (280–100 nm). UV-C, which is very harmful to all living things, is entirely screened out by a combination of dioxygen (< 200 nm) and ozone (> about 200 nm) by around 35 km altitude. UV-B radiation can be harmful to the skin and is the main cause of sunburn; excessive exposure can also cause cataracts, immune system suppression, and genetic damage, resulting in problems such as skin cancer. The ozone layer (which absorbs from about 200 nm to 310 nm with a maximal absorption at about 250 nm) is very effective at screening out UV-B; for radiation with a wavelength of 290 nm, the intensity at the top of the atmosphere is 350 million times stronger than at the Earth's surface. Nevertheless, some UV-B, particularly at its longest wavelengths, reaches the surface, and is important for the skin's production of vitamin D in mammals. Ozone is transparent to most UV-A, so most of this longer-wavelength UV radiation reaches the surface, and it constitutes most of the UV reaching the Earth. This type of UV radiation is significantly less harmful to DNA, although it may still potentially cause physical damage, premature aging of the skin, indirect genetic damage, and skin cancer. Distribution in the stratosphere The thickness of the ozone layer varies worldwide and is generally thinner near the equator and thicker near the poles. Thickness refers to how much ozone is in a column over a given area and varies from season to season. These variations are due to atmospheric circulation patterns and solar intensity. The majority of ozone is produced over the tropics and is transported towards the poles by stratospheric wind patterns. In the northern hemisphere these patterns, known as the Brewer–Dobson circulation, make the ozone layer thickest in the spring and thinnest in the fall. Ozone is produced over the tropics when circulation lifts ozone-poor air out of the troposphere and into the stratosphere, where solar UV photolyzes oxygen molecules and turns them into ozone. The ozone-rich air is then carried to higher latitudes and drops into lower layers of the atmosphere. Research has found that the ozone levels in the United States are highest in the spring months of April and May and lowest in October. While the total amount of ozone increases moving from the tropics to higher latitudes, the concentrations are greater in high northern latitudes than in high southern latitudes, with spring ozone columns in high northern latitudes occasionally exceeding 600 DU and averaging 450 DU, whereas 400 DU constituted a usual maximum in the Antarctic before anthropogenic ozone depletion. This difference occurred naturally because of the weaker polar vortex and stronger Brewer–Dobson circulation in the northern hemisphere, owing to that hemisphere's large mountain ranges and greater contrasts between land and ocean temperatures. The difference between high northern and southern latitudes has increased since the 1970s due to the ozone hole phenomenon.
The highest amounts of ozone are found over the Arctic during the spring months of March and April, while the Antarctic has the lowest amounts of ozone during its spring months of September and October. Depletion The ozone layer can be depleted by free radical catalysts, including nitric oxide (NO), nitrous oxide (N2O), hydroxyl (OH), atomic chlorine (Cl), and atomic bromine (Br). While there are natural sources for all of these species, the concentrations of chlorine and bromine increased markedly in recent decades because of the release of large quantities of man-made organohalogen compounds, especially chlorofluorocarbons (CFCs) and bromofluorocarbons. These highly stable compounds are capable of surviving the rise to the stratosphere, where Cl and Br radicals are liberated by the action of ultraviolet light. Each radical is then free to initiate and catalyze a chain reaction capable of breaking down over 100,000 ozone molecules. By 2009, nitrous oxide was the largest ozone-depleting substance (ODS) emitted through human activities. The breakdown of ozone in the stratosphere results in reduced absorption of ultraviolet radiation. Consequently, unabsorbed and dangerous ultraviolet radiation is able to reach the Earth's surface at a higher intensity. Ozone levels have dropped by a worldwide average of about 4 percent since the late 1970s. For approximately 5 percent of the Earth's surface, around the north and south poles, much larger seasonal declines have been seen, and are described as "ozone holes". "Ozone holes" are actually patches in the ozone layer in which the ozone is thinner. The thinnest parts of the ozone layer are at the polar points of Earth's axis. The discovery of the annual depletion of ozone above the Antarctic was first announced by Joe Farman, Brian Gardiner and Jonathan Shanklin, in a paper which appeared in Nature on May 16, 1985. Regulation attempts have included, but have not been limited to, the Clean Air Act implemented by the United States Environmental Protection Agency. The Clean Air Act introduced the requirement of National Ambient Air Quality Standards (NAAQS), with ozone pollution being one of the six criteria pollutants. This regulation has proven to be effective, since counties, cities and tribal regions must abide by these standards, and the EPA also provides assistance for each region to regulate contaminants. Effective presentation of information has also proven to be important in order to educate the general population about the existence and regulation of ozone depletion and contaminants. In a scientific paper, Sheldon Ungar explored how information about ozone depletion, climate change and related topics was communicated to the public. The ozone case was communicated to lay persons "with easy-to-understand bridging metaphors derived from the popular culture" and related to "immediate risks with everyday relevance". The specific metaphors used in the discussion (ozone shield, ozone hole) proved quite useful and, compared to global climate change, the ozone case was much more seen as a "hot issue" and imminent risk. Lay people were cautious about a depletion of the ozone layer and the risks of skin cancer. Satellites burning up upon re-entry into Earth's atmosphere produce aluminum oxide (Al2O3) nanoparticles that endure in the atmosphere for decades. Estimates for 2022 alone were ~17 metric tons (~30 kg of nanoparticles per ~250 kg satellite).
Increasing populations of satellite constellations can eventually lead to significant ozone depletion. "Bad" ozone can cause adverse health effects such as respiratory problems (difficulty breathing) and is known to aggravate respiratory illnesses such as asthma, COPD and emphysema. That is why many countries have put regulations in place to protect "good" ozone and prevent the increase of "bad" ozone in urban or residential areas. In terms of ozone protection (the preservation of "good" ozone), the European Union has strict guidelines on what products are allowed to be bought, distributed or used in specific areas. With effective regulation, the ozone layer is expected to heal over time. In 1978, the United States, Canada and Norway enacted bans on CFC-containing aerosol sprays that damage the ozone layer, but the European Community rejected a similar proposal. In the U.S., chlorofluorocarbons continued to be used in other applications, such as refrigeration and industrial cleaning, until after the discovery of the Antarctic ozone hole in 1985. After negotiation of an international treaty (the Montreal Protocol), CFC production was capped at 1986 levels with commitments to long-term reductions. This allowed for a ten-year phase-in for developing countries (identified in Article 5 of the protocol). Since then, the treaty was amended to ban CFC production after 1995 in developed countries, and later in developing countries. All of the world's 197 countries have signed the treaty. Beginning January 1, 1996, only recycled or stockpiled CFCs were available for use in developed countries like the US. The production phaseout was possible because of efforts to ensure that there would be substitute chemicals and technologies for all ODS uses. On August 2, 2003, scientists announced that the global depletion of the ozone layer might be slowing because of the international regulation of ozone-depleting substances. In a study organized by the American Geophysical Union, three satellites and three ground stations confirmed that the upper-atmosphere ozone-depletion rate slowed significantly over the previous decade. Some breakdown was expected to continue because of ODSs used by nations which have not banned them, and because of gases already in the stratosphere. Some ODSs, including CFCs, have very long atmospheric lifetimes, ranging from 50 to over 100 years. It has been estimated that the ozone layer will recover to 1980 levels near the middle of the 21st century. A gradual trend toward "healing" was reported in 2016. Compounds containing C–H bonds (such as hydrochlorofluorocarbons, or HCFCs) have been designed to replace CFCs in certain applications. These replacement compounds are more reactive and less likely to survive long enough in the atmosphere to reach the stratosphere where they could affect the ozone layer. While being less damaging than CFCs, HCFCs can have a negative impact on the ozone layer, so they are also being phased out. These in turn are being replaced by hydrofluorocarbons (HFCs) and other compounds that do not destroy stratospheric ozone at all. The residual effects of CFCs accumulating within the atmosphere lead to a concentration gradient between the atmosphere and the ocean. These organohalogen compounds are able to dissolve into the ocean's surface waters and act as a time-dependent tracer. This tracer helps scientists study ocean circulation by tracing biological, physical and chemical pathways.
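The depletion discussion above notes that a single chlorine radical can catalyze the destruction of over 100,000 ozone molecules before being removed from the cycle. The toy calculation below illustrates that chain arithmetic; the per-cycle sequestration probability is an assumed illustrative parameter, not a measured rate.

```python
import random

# Toy model of the catalytic chlorine cycle described above:
#   Cl  + O3 -> ClO + O2
#   ClO + O  -> Cl  + O2   (net: O3 + O -> 2 O2, with Cl regenerated)
# The chlorine atom keeps cycling until it is locked into a reservoir
# species (e.g. HCl or ClONO2). The per-cycle sequestration probability
# below is an assumed illustrative parameter, chosen so that one Cl atom
# destroys on the order of 100,000 ozone molecules, as quoted in the text.

P_SEQUESTRATION = 1e-5  # assumed chance per cycle that Cl leaves the cycle

def ozone_destroyed_by_one_cl(rng: random.Random) -> int:
    """Number of O3 molecules destroyed before the Cl atom is sequestered."""
    destroyed = 0
    while True:
        destroyed += 1                      # Cl + O3 -> ClO + O2
        if rng.random() < P_SEQUESTRATION:  # Cl captured into a reservoir
            return destroyed
        # otherwise ClO + O -> Cl + O2 and the cycle repeats

if __name__ == "__main__":
    rng = random.Random(42)
    runs = [ozone_destroyed_by_one_cl(rng) for _ in range(20)]
    print(f"expected O3 destroyed per Cl atom: {1 / P_SEQUESTRATION:,.0f}")
    print(f"simulated mean over {len(runs)} Cl atoms: {sum(runs) / len(runs):,.0f}")
```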
Implications for astronomy As ozone in the atmosphere prevents most energetic ultraviolet radiation from reaching the surface of the Earth, astronomical data at these wavelengths have to be gathered from satellites orbiting above the atmosphere and ozone layer. Most of the light from young hot stars is in the ultraviolet, so the study of these wavelengths is important for studying the origins of galaxies. The Galaxy Evolution Explorer (GALEX) was an orbiting ultraviolet space telescope launched on April 28, 2003, which operated until early 2012.
https://en.wikipedia.org/wiki/Planet
Planet
A planet is a large, rounded astronomical body that is generally required to be in orbit around a star, stellar remnant, or brown dwarf, and is not one itself. The Solar System has eight planets by the most restrictive definition of the term: the terrestrial planets Mercury, Venus, Earth, and Mars, and the giant planets Jupiter, Saturn, Uranus, and Neptune. The best available theory of planet formation is the nebular hypothesis, which posits that an interstellar cloud collapses out of a nebula to create a young protostar orbited by a protoplanetary disk. Planets grow in this disk by the gradual accumulation of material driven by gravity, a process called accretion. The word planet comes from the Greek planētai, meaning "wanderers". In antiquity, this word referred to the Sun, Moon, and five points of light visible to the naked eye that moved across the background of the stars—namely, Mercury, Venus, Mars, Jupiter, and Saturn. Planets have historically had religious associations: multiple cultures identified celestial bodies with gods, and these connections with mythology and folklore persist in the schemes for naming newly discovered Solar System bodies. Earth itself was recognized as a planet when heliocentrism supplanted geocentrism during the 16th and 17th centuries. With the development of the telescope, the meaning of planet broadened to include objects only visible with assistance: the moons of the planets beyond Earth; the ice giants Uranus and Neptune; Ceres and other bodies later recognized to be part of the asteroid belt; and Pluto, later found to be the largest member of the collection of icy bodies known as the Kuiper belt. The discovery of other large objects in the Kuiper belt, particularly Eris, spurred debate about how exactly to define a planet. In 2006, the International Astronomical Union (IAU) adopted a definition of a planet in the Solar System, placing the four terrestrial planets and the four giant planets in the planet category; Ceres, Pluto, and Eris are in the category of dwarf planet. Many planetary scientists have nonetheless continued to apply the term planet more broadly, including dwarf planets as well as rounded satellites like the Moon. Further advances in astronomy led to the discovery of over five thousand planets outside the Solar System, termed exoplanets. These often show unusual features that the Solar System planets do not show, such as hot Jupiters—giant planets that orbit close to their parent stars, like 51 Pegasi b—and extremely eccentric orbits, such as HD 20782 b. The discovery of brown dwarfs and planets larger than Jupiter also spurred debate on the definition, regarding where exactly to draw the line between a planet and a star. Multiple exoplanets have been found to orbit in the habitable zones of their stars (where liquid water can potentially exist on a planetary surface), but Earth remains the only planet known to support life. Formation It is not known with certainty how planets are formed. The prevailing theory is that they coalesce during the collapse of a nebula into a thin disk of gas and dust. A protostar forms at the core, surrounded by a rotating protoplanetary disk. Through accretion (a process of sticky collision) dust particles in the disk steadily accumulate mass to form ever-larger bodies. Local concentrations of mass known as planetesimals form, and these accelerate the accretion process by drawing in additional material by their gravitational attraction.
These concentrations become increasingly dense until they collapse inward under gravity to form protoplanets. After a planet reaches a mass somewhat larger than Mars's mass, it begins to accumulate an extended atmosphere, greatly increasing the capture rate of the planetesimals by means of atmospheric drag. Depending on the accretion history of solids and gas, a giant planet, an ice giant, or a terrestrial planet may result. It is thought that the regular satellites of Jupiter, Saturn, and Uranus formed in a similar way; however, Triton was likely captured by Neptune, and Earth's Moon and Pluto's Charon might have formed in collisions. When the protostar has grown such that it ignites to form a star, the surviving disk is removed from the inside outward by photoevaporation, the solar wind, Poynting–Robertson drag and other effects. Thereafter there still may be many protoplanets orbiting the star or each other, but over time many will collide, either to form a larger, combined protoplanet or release material for other protoplanets to absorb. Those objects that have become massive enough will capture most matter in their orbital neighbourhoods to become planets. Protoplanets that have avoided collisions may become natural satellites of planets through a process of gravitational capture, or remain in belts of other objects to become either dwarf planets or small bodies. The energetic impacts of the smaller planetesimals (as well as radioactive decay) will heat up the growing planet, causing it to at least partially melt. The interior of the planet begins to differentiate by density, with higher density materials sinking toward the core. Smaller terrestrial planets lose most of their atmospheres because of this accretion, but the lost gases can be replaced by outgassing from the mantle and from the subsequent impact of comets (smaller planets will lose any atmosphere they gain through various escape mechanisms). With the discovery and observation of planetary systems around stars other than the Sun, it is becoming possible to elaborate, revise or even replace this account. The level of metallicity—an astronomical term describing the abundance of chemical elements with an atomic number greater than 2 (helium)—appears to determine the likelihood that a star will have planets. Hence, a metal-rich population I star is more likely to have a substantial planetary system than a metal-poor, population II star. Planets in the Solar System According to the IAU definition, there are eight planets in the Solar System, which are (in increasing distance from the Sun): Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. Jupiter is the largest, at 318 Earth masses, whereas Mercury is the smallest, at 0.055 Earth masses. The planets of the Solar System can be divided into categories based on their composition. Terrestrials are similar to Earth, with bodies largely composed of rock and metal: Mercury, Venus, Earth, and Mars. Earth is the largest terrestrial planet. Giant planets are significantly more massive than the terrestrials: Jupiter, Saturn, Uranus, and Neptune. They differ from the terrestrial planets in composition. The gas giants, Jupiter and Saturn, are primarily composed of hydrogen and helium and are the most massive planets in the Solar System. Saturn is one third as massive as Jupiter, at 95 Earth masses. The ice giants, Uranus and Neptune, are primarily composed of low-boiling-point materials such as water, methane, and ammonia, with thick atmospheres of hydrogen and helium. 
They have a significantly lower mass than the gas giants (only 14 and 17 Earth masses). Dwarf planets are gravitationally rounded, but have not cleared their orbits of other bodies. In increasing order of average distance from the Sun, the ones generally agreed among astronomers are Ceres, Orcus, Pluto, Haumea, Quaoar, Makemake, Gonggong, Eris, and Sedna. Ceres is the largest object in the asteroid belt, located between the orbits of Mars and Jupiter. The other eight all orbit beyond Neptune. Orcus, Pluto, Haumea, Quaoar, and Makemake orbit in the Kuiper belt, which is a second belt of small Solar System bodies beyond the orbit of Neptune. Gonggong and Eris orbit in the scattered disc, which is somewhat further out and, unlike the Kuiper belt, is unstable towards interactions with Neptune. Sedna is the largest known detached object, a population that never comes close enough to the Sun to interact with any of the classical planets; the origins of their orbits are still being debated. All nine are similar to terrestrial planets in having a solid surface, but they are made of ice and rock rather than rock and metal. Moreover, all of them are smaller than Mercury, with Pluto being the largest known dwarf planet and Eris being the most massive. There are at least nineteen planetary-mass moons or satellite planets—moons large enough to take on ellipsoidal shapes: one satellite of Earth (the Moon); four satellites of Jupiter (Io, Europa, Ganymede, and Callisto); seven satellites of Saturn (Mimas, Enceladus, Tethys, Dione, Rhea, Titan, and Iapetus); five satellites of Uranus (Miranda, Ariel, Umbriel, Titania, and Oberon); one satellite of Neptune (Triton); and one satellite of Pluto (Charon). The Moon, Io, and Europa have compositions similar to the terrestrial planets; the others are made of ice and rock like the dwarf planets, with Tethys being made of almost pure ice. Europa is often considered an icy planet, though, because its surface ice layer makes it difficult to study its interior. Ganymede and Titan are larger than Mercury by radius, and Callisto almost equals it, but all three are much less massive. Mimas is the smallest object generally agreed to be a geophysical planet, at about six millionths of Earth's mass, though there are many larger bodies that may not be geophysical planets. Exoplanets An exoplanet is a planet outside the Solar System. Known exoplanets range in size from gas giants about twice as large as Jupiter down to just over the size of the Moon. Analysis of gravitational microlensing data suggests a minimum average of 1.6 bound planets for every star in the Milky Way. In early 1992, radio astronomers Aleksander Wolszczan and Dale Frail announced the discovery of two planets orbiting the pulsar PSR 1257+12. This discovery was confirmed and is generally considered to be the first definitive detection of exoplanets. Researchers suspect they formed from a disk remnant left over from the supernova that produced the pulsar. The first confirmed discovery of an exoplanet orbiting an ordinary main-sequence star occurred on 6 October 1995, when Michel Mayor and Didier Queloz of the University of Geneva announced the detection of 51 Pegasi b, an exoplanet around 51 Pegasi. From then until the Kepler space telescope mission, most of the known exoplanets were gas giants comparable in mass to Jupiter or larger, as they were more easily detected. The catalog of Kepler candidate planets consists mostly of planets the size of Neptune and smaller, down to smaller than Mercury.
In 2011, the Kepler space telescope team reported the discovery of the first Earth-sized exoplanets orbiting a Sun-like star, Kepler-20e and Kepler-20f. Since that time, more than 100 planets have been identified that are approximately the same size as Earth, 20 of which orbit in the habitable zone of their star—the range of orbits where a terrestrial planet could sustain liquid water on its surface, given enough atmospheric pressure. One in five Sun-like stars is thought to have an Earth-sized planet in its habitable zone, which suggests that the nearest would be expected to be within 12 light-years distance from Earth. The frequency of occurrence of such terrestrial planets is one of the variables in the Drake equation, which estimates the number of intelligent, communicating civilizations that exist in the Milky Way. There are types of planets that do not exist in the Solar System: super-Earths and mini-Neptunes, which have masses between that of Earth and Neptune. Objects less than about twice the mass of Earth are expected to be rocky like Earth; beyond that, they become a mixture of volatiles and gas like Neptune. The planet Gliese 581c, with a mass 5.5–10.4 times the mass of Earth, attracted attention upon its discovery for potentially being in the habitable zone, though later studies concluded that it is actually too close to its star to be habitable. Planets more massive than Jupiter are also known, extending seamlessly into the realm of brown dwarfs. Exoplanets have been found that are much closer to their parent star than any planet in the Solar System is to the Sun. Mercury, the closest planet to the Sun at 0.4 AU, takes 88 days for an orbit, but ultra-short period planets can orbit in less than a day. The Kepler-11 system has five of its planets in shorter orbits than Mercury's, all of them much more massive than Mercury. There are hot Jupiters, such as 51 Pegasi b, that orbit very close to their star and may evaporate to become chthonian planets, which are the leftover cores. There are also exoplanets that are much farther from their star. Neptune is 30 AU from the Sun and takes 165 years to orbit, but there are exoplanets that are thousands of AU from their star and take more than a million years to orbit (e.g. COCONUTS-2b). Attributes Although each planet has unique physical characteristics, a number of broad commonalities do exist among them. Some of these characteristics, such as rings or natural satellites, have only as yet been observed in planets in the Solar System, whereas others are commonly observed in exoplanets. Dynamic characteristics Orbit In the Solar System, all the planets orbit the Sun in the same direction as the Sun rotates: counter-clockwise as seen from above the Sun's north pole. At least one exoplanet, WASP-17b, has been found to orbit in the opposite direction to its star's rotation. The period of one revolution of a planet's orbit is known as its sidereal period or year. A planet's year depends on its distance from its star; the farther a planet is from its star, the longer the distance it must travel and the slower its speed, since it is less affected by its star's gravity. No planet's orbit is perfectly circular, and hence the distance of each from the host star varies over the course of its year. The closest approach to its star is called its periastron, or perihelion in the Solar System, whereas its farthest separation from the star is called its apastron (aphelion). 
As a planet approaches periastron, its speed increases as it trades gravitational potential energy for kinetic energy, just as a falling object on Earth accelerates as it falls. As the planet nears apastron, its speed decreases, just as an object thrown upwards on Earth slows down as it reaches the apex of its trajectory. Each planet's orbit is delineated by a set of elements: The eccentricity of an orbit describes the elongation of a planet's elliptical (oval) orbit. Planets with low eccentricities have more circular orbits, whereas planets with high eccentricities have more elliptical orbits. The planets and large moons in the Solar System have relatively low eccentricities, and thus nearly circular orbits. The comets and many Kuiper belt objects, as well as several exoplanets, have very high eccentricities, and thus exceedingly elliptical orbits. The semi-major axis gives the size of the orbit: it is half the longest diameter of the planet's elliptical orbit, measured from the centre of the ellipse. This distance is not the same as its apastron, because no planet's orbit has its star at its exact centre. The inclination of a planet tells how far above or below an established reference plane its orbit is tilted. In the Solar System, the reference plane is the plane of Earth's orbit, called the ecliptic. For exoplanets, the plane, known as the sky plane or plane of the sky, is the plane perpendicular to the observer's line of sight from Earth. The orbits of the eight major planets of the Solar System all lie very close to the ecliptic; however, some smaller objects like Pallas, Pluto, and Eris orbit at far more extreme angles to it, as do comets. The large moons are generally not very inclined to their parent planets' equators, but Earth's Moon, Saturn's Iapetus, and Neptune's Triton are exceptions. Triton is unique among the large moons in that it orbits retrograde, i.e. in the direction opposite to its parent planet's rotation. The points at which a planet crosses above and below its reference plane are called its ascending and descending nodes. The longitude of the ascending node is the angle between the reference plane's 0 longitude and the planet's ascending node. The argument of periapsis (or perihelion in the Solar System) is the angle between a planet's ascending node and its closest approach to its star. Axial tilt Planets have varying degrees of axial tilt; they spin at an angle to the plane of their stars' equators. This causes the amount of light received by each hemisphere to vary over the course of its year; when the Northern Hemisphere points away from its star, the Southern Hemisphere points towards it, and vice versa. Each planet therefore has seasons, resulting in changes to the climate over the course of its year. The time at which each hemisphere points farthest from or nearest to its star is known as its solstice. Each planet has two solstices in the course of its orbit; when one hemisphere has its summer solstice, with its day being the longest, the other has its winter solstice, when its day is shortest. The varying amount of light and heat received by each hemisphere creates annual changes in weather patterns for each half of the planet. Jupiter's axial tilt is very small, so its seasonal variation is minimal; Uranus, on the other hand, has an axial tilt so extreme it is virtually on its side, which means that its hemispheres are either continually in sunlight or continually in darkness around the time of its solstices.
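As a numerical illustration of the orbital elements described above, the short sketch below computes periastron and apastron distances from a semi-major axis and eccentricity and derives the orbital period from Kepler's third law; the sample values for Earth and Pluto are approximate.

```python
import math

# Illustrative use of the orbital elements described above: given a
# semi-major axis a and eccentricity e, the closest and farthest points
# of the orbit are a(1 - e) and a(1 + e), and Kepler's third law gives
# the orbital period around the Sun. Input values are approximate.

AU_M = 1.495978707e11        # metres per astronomical unit
GM_SUN = 1.32712440018e20    # gravitational parameter of the Sun, m^3/s^2

def periastron_apastron(a_au: float, e: float) -> tuple[float, float]:
    """Closest and farthest distances from the star, in AU."""
    return a_au * (1.0 - e), a_au * (1.0 + e)

def orbital_period_years(a_au: float) -> float:
    """Sidereal period from Kepler's third law, for a body orbiting the Sun."""
    a = a_au * AU_M
    period_s = 2.0 * math.pi * math.sqrt(a**3 / GM_SUN)
    return period_s / (365.25 * 86400.0)

if __name__ == "__main__":
    for name, a_au, e in [("Earth", 1.000, 0.017), ("Pluto", 39.5, 0.249)]:
        peri, apo = periastron_apastron(a_au, e)
        print(f"{name}: periastron {peri:.2f} AU, apastron {apo:.2f} AU, "
              f"period {orbital_period_years(a_au):.1f} yr")
```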
In the Solar System, Mercury, Venus, Ceres, and Jupiter have very small tilts; Pallas, Uranus, and Pluto have extreme ones; and Earth, Mars, Vesta, Saturn, and Neptune have moderate ones. Among exoplanets, axial tilts are not known for certain, though most hot Jupiters are believed to have a negligible axial tilt as a result of their proximity to their stars. Similarly, the axial tilts of the planetary-mass moons are near zero, with Earth's Moon at 6.687° as the biggest exception; additionally, Callisto's axial tilt varies between 0 and about 2 degrees on timescales of thousands of years. Rotation The planets rotate around invisible axes through their centres. A planet's rotation period is known as a stellar day. Most of the planets in the Solar System rotate in the same direction as they orbit the Sun, which is counter-clockwise as seen from above the Sun's north pole. The exceptions are Venus and Uranus, which rotate clockwise, though Uranus's extreme axial tilt means there are differing conventions on which of its poles is "north", and therefore whether it is rotating clockwise or anti-clockwise. Regardless of which convention is used, Uranus has a retrograde rotation relative to its orbit. The rotation of a planet can be induced by several factors during formation. A net angular momentum can be induced by the individual angular momentum contributions of accreted objects. The accretion of gas by the giant planets contributes to the angular momentum. Finally, during the last stages of planet building, a stochastic process of protoplanetary accretion can randomly alter the spin axis of the planet. There is great variation in the length of day between the planets, with Venus taking 243 days to rotate, and the giant planets only a few hours. The rotational periods of exoplanets are not known, but for hot Jupiters, their proximity to their stars means that they are tidally locked (that is, their orbits are in sync with their rotations). This means they always show one face to their stars, with one side in perpetual day, the other in perpetual night. Mercury and Venus, the closest planets to the Sun, similarly exhibit very slow rotation: Mercury is tidally locked into a 3:2 spin–orbit resonance (rotating three times for every two revolutions around the Sun), and Venus's rotation may be in equilibrium between tidal forces slowing it down and atmospheric tides created by solar heating speeding it up. All the large moons are tidally locked to their parent planets; Pluto and Charon are tidally locked to each other, as are Eris and Dysnomia, and probably Orcus and its moon Vanth. The other dwarf planets with known rotation periods rotate faster than Earth; Haumea rotates so fast that it has been distorted into a triaxial ellipsoid. The exoplanet Tau Boötis b and its parent star Tau Boötis appear to be mutually tidally locked. Orbital clearing The defining dynamic characteristic of a planet, according to the IAU definition, is that it has cleared its neighborhood. A planet that has cleared its neighborhood has accumulated enough mass to gather up or sweep away all the planetesimals in its orbit. In effect, it orbits its star in isolation, as opposed to sharing its orbit with a multitude of similar-sized objects. As described above, this characteristic was mandated as part of the IAU's official definition of a planet in August 2006.
Although to date this criterion only applies to the Solar System, a number of young extrasolar systems have been found in which evidence suggests orbital clearing is taking place within their circumstellar discs. Physical characteristics Size and shape Gravity causes planets to be pulled into a roughly spherical shape, so a planet's size can be expressed roughly by an average radius (for example, Earth radius or Jupiter radius). However, planets are not perfectly spherical; for example, the Earth's rotation causes it to be slightly flattened at the poles with a bulge around the equator. Therefore, a better approximation of Earth's shape is an oblate spheroid, whose equatorial diameter is larger than the pole-to-pole diameter. Generally, a planet's shape may be described by giving polar and equatorial radii of a spheroid or specifying a reference ellipsoid. From such a specification, the planet's flattening, surface area, and volume can be calculated; its normal gravity can be computed knowing its size, shape, rotation rate, and mass. Mass A planet's defining physical characteristic is that it is massive enough for the force of its own gravity to dominate over the electromagnetic forces binding its physical structure, leading to a state of hydrostatic equilibrium. This effectively means that all planets are spherical or spheroidal. Up to a certain mass, an object can be irregular in shape, but beyond that point, which varies depending on the chemical makeup of the object, gravity begins to pull an object towards its own centre of mass until the object collapses into a sphere. Mass is the prime attribute by which planets are distinguished from stars. No objects between the masses of the Sun and Jupiter exist in the Solar System, but there are exoplanets of this size. The lower stellar mass limit is estimated to be around 75 to 80 times the mass of Jupiter. Some authors advocate that this be used as the upper limit for planethood, on the grounds that the internal physics of objects does not change between approximately one Saturn mass (beginning of significant self-compression) and the onset of hydrogen burning and becoming a red dwarf star. Beyond roughly 13 Jupiter masses (at least for objects with solar-type isotopic abundance), an object achieves conditions suitable for nuclear fusion of deuterium: this has sometimes been advocated as a boundary, even though deuterium burning does not last very long and most brown dwarfs have long since finished burning their deuterium. This is not universally agreed upon: the Extrasolar Planets Encyclopaedia includes objects up to 60 Jupiter masses, and the Exoplanet Data Explorer up to 24 Jupiter masses. The smallest known exoplanet with an accurately known mass is PSR B1257+12A, one of the first exoplanets discovered, which was found in 1992 in orbit around a pulsar. Its mass is roughly half that of the planet Mercury. Even smaller is WD 1145+017 b, orbiting a white dwarf; its mass is roughly that of the dwarf planet Haumea, and it is typically termed a minor planet. The smallest known planet orbiting a main-sequence star other than the Sun is Kepler-37b, with a mass (and radius) that is probably slightly higher than that of the Moon. The smallest object in the Solar System generally agreed to be a geophysical planet is Saturn's moon Mimas, with a radius about 3.1% of Earth's and a mass about 0.00063% of Earth's.
Saturn's smaller moon Phoebe, currently an irregular body of 1.7% Earth's radius and 0.00014% Earth's mass, is thought to have attained hydrostatic equilibrium and differentiation early in its history before being battered out of shape by impacts. Some asteroids may be fragments of protoplanets that began to accrete and differentiate, but suffered catastrophic collisions, leaving only a metallic or rocky core today, or a reaccumulation of the resulting debris. Internal differentiation Every planet began its existence in an entirely fluid state; in early formation, the denser, heavier materials sank to the centre, leaving the lighter materials near the surface. Each therefore has a differentiated interior consisting of a dense planetary core surrounded by a mantle that either is or was a fluid. The terrestrial planets' mantles are sealed within hard crusts, but in the giant planets the mantle simply blends into the upper cloud layers. The terrestrial planets have cores of elements such as iron and nickel and mantles of silicates. Jupiter and Saturn are believed to have cores of rock and metal surrounded by mantles of metallic hydrogen. Uranus and Neptune, which are smaller, have rocky cores surrounded by mantles of water, ammonia, methane, and other ices. The fluid action within these planets' cores creates a geodynamo that generates a magnetic field. Similar differentiation processes are believed to have occurred on some of the large moons and dwarf planets, though the process may not always have been completed: Ceres, Callisto, and Titan appear to be incompletely differentiated. The asteroid Vesta, though not a dwarf planet because it was battered by impacts out of roundness, has a differentiated interior similar to that of Venus, Earth, and Mars. Atmosphere All of the Solar System planets except Mercury have substantial atmospheres because their gravity is strong enough to keep gases close to the surface. Saturn's largest moon Titan also has a substantial atmosphere thicker than that of Earth; Neptune's largest moon Triton and the dwarf planet Pluto have more tenuous atmospheres. The larger giant planets are massive enough to keep large amounts of the light gases hydrogen and helium, whereas the smaller planets lose these gases into space. Analysis of exoplanets suggests that the threshold for being able to hold on to these light gases lies at a mass not much greater than Earth's, so that Earth and Venus are near the maximum size for rocky planets. The composition of Earth's atmosphere is different from the other planets because the various life processes that have transpired on the planet have introduced free molecular oxygen. The atmospheres of Mars and Venus are both dominated by carbon dioxide, but differ drastically in density: the average surface pressure of Mars's atmosphere is less than 1% that of Earth's (too low to allow liquid water to exist), while the average surface pressure of Venus's atmosphere is about 92 times that of Earth's. It is likely that Venus's atmosphere was the result of a runaway greenhouse effect in its history, which today makes it the hottest planet by surface temperature, hotter even than Mercury. Despite hostile surface conditions, the temperature and pressure at about 50–55 km altitude in Venus's atmosphere are close to Earthlike conditions (the only place in the Solar System beyond Earth where this is so), and this region has been suggested as a plausible base for future human exploration. Titan has the only nitrogen-rich planetary atmosphere in the Solar System other than Earth's.
Just as Earth's conditions are close to the triple point of water, allowing it to exist in all three states on the planet's surface, so Titan's are to the triple point of methane. Planetary atmospheres are affected by the varying insolation or internal energy, leading to the formation of dynamic weather systems such as hurricanes (on Earth), planet-wide dust storms (on Mars), a greater-than-Earth-sized anticyclone on Jupiter (called the Great Red Spot), and holes in the atmosphere (on Neptune). Weather patterns detected on exoplanets include a hot region on HD 189733 b twice the size of the Great Red Spot, as well as clouds on the hot Jupiter Kepler-7b, the super-Earth Gliese 1214 b, and others. Hot Jupiters, due to their extreme proximities to their host stars, have been shown to be losing their atmospheres into space due to stellar radiation, much like the tails of comets. These planets may have vast differences in temperature between their day and night sides that produce supersonic winds, although multiple factors are involved and the details of the atmospheric dynamics that affect the day-night temperature difference are complex. Magnetosphere One important characteristic of the planets is their intrinsic magnetic moments, which in turn give rise to magnetospheres. The presence of a magnetic field indicates that the planet is still geologically alive. In other words, magnetized planets have flows of electrically conducting material in their interiors, which generate their magnetic fields. These fields significantly change the interaction of the planet and solar wind. A magnetized planet creates a cavity in the solar wind around itself called the magnetosphere, which the wind cannot penetrate. The magnetosphere can be much larger than the planet itself. In contrast, non-magnetized planets have only small magnetospheres induced by interaction of the ionosphere with the solar wind, which cannot effectively protect the planet. Of the eight planets in the Solar System, only Venus and Mars lack such a magnetic field. Of the magnetized planets, the magnetic field of Mercury is the weakest and is barely able to deflect the solar wind. Jupiter's moon Ganymede has a magnetic field several times stronger, and Jupiter's is the strongest in the Solar System (so intense in fact that it poses a serious health risk to future crewed missions to all its moons inward of Callisto). The magnetic fields of the other giant planets, measured at their surfaces, are roughly similar in strength to that of Earth, but their magnetic moments are significantly larger. The magnetic fields of Uranus and Neptune are strongly tilted relative to the planets' rotational axes and displaced from the planets' centres. In 2003, a team of astronomers in Hawaii observing the star HD 179949 detected a bright spot on its surface, apparently created by the magnetosphere of an orbiting hot Jupiter. Secondary characteristics Several planets or dwarf planets in the Solar System (such as Neptune and Pluto) have orbital periods that are in resonance with each other or with smaller bodies. This is common in satellite systems (e.g. the resonance between Io, Europa, and Ganymede around Jupiter, or between Enceladus and Dione around Saturn). All except Mercury and Venus have natural satellites, often called "moons". Earth has one, Mars has two, and the giant planets have numerous moons in complex planetary-type systems. Except for Ceres and Sedna, all the consensus dwarf planets are known to have at least one moon as well. 
Many moons of the giant planets have features similar to those on the terrestrial planets and dwarf planets, and some have been studied as possible abodes of life (especially Europa and Enceladus). The four giant planets are orbited by planetary rings of varying size and complexity. The rings are composed primarily of dust or particulate matter, but can host tiny 'moonlets' whose gravity shapes and maintains their structure. Although the origins of planetary rings are not precisely known, they are believed to be the result of natural satellites that fell below their parent planets' Roche limits and were torn apart by tidal forces. The dwarf planets Haumea and Quaoar also have rings. No secondary characteristics have been observed around exoplanets. The sub-brown dwarf Cha 110913−773444, which has been described as a rogue planet, is believed to be orbited by a tiny protoplanetary disc, and the sub-brown dwarf OTS 44 was shown to be surrounded by a substantial protoplanetary disk of at least 10 Earth masses. History and etymology The idea of planets has evolved over the history of astronomy, from the divine lights of antiquity to the earthly objects of the scientific age. The concept has expanded to include worlds not only in the Solar System, but in multitudes of other extrasolar systems. The consensus as to what counts as a planet, as opposed to other objects, has changed several times. It previously encompassed asteroids, moons, and dwarf planets like Pluto, and there continues to be some disagreement today. Ancient civilizations and classical planets The five classical planets of the Solar System, being visible to the naked eye, have been known since ancient times and have had a significant impact on mythology, religious cosmology, and ancient astronomy. In ancient times, astronomers noted how certain lights moved across the sky, as opposed to the "fixed stars", which maintained a constant relative position in the sky. Ancient Greeks called these lights planētes asteres ("wandering stars") or simply planētai ("wanderers"), from which today's word "planet" was derived. In ancient Greece, China, Babylon, and indeed all pre-modern civilizations, it was almost universally believed that Earth was the center of the Universe and that all the "planets" circled Earth. The reasons for this perception were that stars and planets appeared to revolve around Earth each day and the apparently common-sense perceptions that Earth was solid and stable and that it was not moving but at rest. Babylon The first civilization known to have a functional theory of the planets was the Babylonians, who lived in Mesopotamia in the first and second millennia BC. The oldest surviving planetary astronomical text is the Babylonian Venus tablet of Ammisaduqa, a 7th-century BC copy of a list of observations of the motions of the planet Venus, that probably dates as early as the second millennium BC. The MUL.APIN is a pair of cuneiform tablets dating from the 7th century BC that lays out the motions of the Sun, Moon, and planets over the course of the year. Late Babylonian astronomy is the origin of Western astronomy and indeed all Western efforts in the exact sciences. The Enuma anu enlil, written during the Neo-Assyrian period in the 7th century BC, comprises a list of omens and their relationships with various celestial phenomena including the motions of the planets. The inferior planets Venus and Mercury and the superior planets Mars, Jupiter, and Saturn were all identified by Babylonian astronomers.
These would remain the only known planets until the invention of the telescope in early modern times. Greco-Roman astronomy The ancient Greeks initially did not attach as much significance to the planets as the Babylonians. In the 6th and 5th centuries BC, the Pythagoreans appear to have developed their own independent planetary theory, which consisted of the Earth, Sun, Moon, and planets revolving around a "Central Fire" at the center of the Universe. Pythagoras or Parmenides is said to have been the first to identify the evening star (Hesperos) and morning star (Phosphoros) as one and the same (Aphrodite, Greek corresponding to Latin Venus), though this had long been known in Mesopotamia. In the 3rd century BC, Aristarchus of Samos proposed a heliocentric system, according to which Earth and the planets revolved around the Sun. The geocentric system remained dominant until the Scientific Revolution. By the 1st century BC, during the Hellenistic period, the Greeks had begun to develop their own mathematical schemes for predicting the positions of the planets. These schemes, which were based on geometry rather than the arithmetic of the Babylonians, would eventually eclipse the Babylonians' theories in complexity and comprehensiveness and account for most of the astronomical movements observed from Earth with the naked eye. These theories would reach their fullest expression in the Almagest written by Ptolemy in the 2nd century CE. So complete was the domination of Ptolemy's model that it superseded all previous works on astronomy and remained the definitive astronomical text in the Western world for 13 centuries. To the Greeks and Romans, there were seven known planets, each presumed to be circling Earth according to the complex laws laid out by Ptolemy. They were, in increasing order from Earth (in Ptolemy's order and using modern names): the Moon, Mercury, Venus, the Sun, Mars, Jupiter, and Saturn. Medieval astronomy After the fall of the Western Roman Empire, astronomy developed further in India and the medieval Islamic world. In 499 CE, the Indian astronomer Aryabhata propounded a planetary model that explicitly incorporated Earth's rotation about its axis, which he explains as the cause of what appears to be an apparent westward motion of the stars. He also theorized that the orbits of planets were elliptical. Aryabhata's followers were particularly strong in South India, where his principles of the diurnal rotation of Earth, among others, were followed and a number of secondary works were based on them. The astronomy of the Islamic Golden Age mostly took place in the Middle East, Central Asia, Al-Andalus, and North Africa, and later in the Far East and India. These astronomers, like the polymath Ibn al-Haytham, generally accepted geocentrism, although they did dispute Ptolemy's system of epicycles and sought alternatives. The 10th-century astronomer Abu Sa'id al-Sijzi accepted that the Earth rotates around its axis. In the 11th century, the transit of Venus was observed by Avicenna. His contemporary Al-Biruni devised a method of determining the Earth's radius using trigonometry that, unlike the older method of Eratosthenes, only required observations at a single mountain. 
Scientific Revolution and discovery of outer planets With the advent of the Scientific Revolution and the heliocentric model of Copernicus, Galileo, and Kepler, use of the term "planet" changed from something that moved around the sky relative to the fixed star to a body that orbited the Sun, directly (a primary planet) or indirectly (a secondary or satellite planet). Thus the Earth was added to the roster of planets, and the Sun was removed. The Copernican count of primary planets stood until 1781, when William Herschel discovered Uranus. When four satellites of Jupiter (the Galilean moons) and five of Saturn were discovered in the 17th century, they joined Earth's Moon in the category of "satellite planets" or "secondary planets" orbiting the primary planets, though in the following decades they would come to be called simply "satellites" for short. Scientists generally considered planetary satellites to also be planets until about the 1920s, although this usage was not common among non-scientists. In the first decade of the 19th century, four new 'planets' were discovered: Ceres (in 1801), Pallas (in 1802), Juno (in 1804), and Vesta (in 1807). It soon became apparent that they were rather different from previously known planets: they shared the same general region of space, between Mars and Jupiter (the asteroid belt), with sometimes overlapping orbits. This was an area where only one planet had been expected, and they were much smaller than all other planets; indeed, it was suspected that they might be shards of a larger planet that had broken up. Herschel called them asteroids (from the Greek for "starlike") because even in the largest telescopes they resembled stars, without a resolvable disk. The situation was stable for four decades, but in the 1840s several additional asteroids were discovered (Astraea in 1845; Hebe, Iris, and Flora in 1847; Metis in 1848; and Hygiea in 1849). New "planets" were discovered every year; as a result, astronomers began tabulating the asteroids (minor planets) separately from the major planets and assigning them numbers instead of abstract planetary symbols, although they continued to be considered as small planets. Neptune was discovered in 1846, its position having been predicted thanks to its gravitational influence upon Uranus. Because the orbit of Mercury appeared to be affected in a similar way, it was believed in the late 19th century that there might be another planet even closer to the Sun. However, the discrepancy between Mercury's orbit and the predictions of Newtonian gravity was instead explained by an improved theory of gravity, Einstein's general relativity. Pluto was discovered in 1930. After initial observations led to the belief that it was larger than Earth, the object was immediately accepted as the ninth major planet. Further monitoring found the body was actually much smaller: in 1936, Ray Lyttleton suggested that Pluto may be an escaped satellite of Neptune, and Fred Whipple suggested in 1964 that Pluto may be a comet. The discovery of its large moon Charon in 1978 showed that Pluto was only 0.2% the mass of Earth. As this was still substantially more massive than any known asteroid, and because no other trans-Neptunian objects had been discovered at that time, Pluto kept its planetary status, only officially losing it in 2006. In the 1950s, Gerard Kuiper published papers on the origin of the asteroids. 
He recognized that asteroids were typically not spherical, as had previously been thought, and that the asteroid families were remnants of collisions. Thus he differentiated between the largest asteroids as "true planets" versus the smaller ones as collisional fragments. From the 1960s onwards, the term "minor planet" was mostly displaced by the term "asteroid", and references to the asteroids as planets in the literature became scarce, except for the geologically evolved largest three: Ceres, and less often Pallas and Vesta. The beginning of Solar System exploration by space probes in the 1960s spurred a renewed interest in planetary science. A split in definitions regarding satellites occurred around then: planetary scientists began to reconsider the large moons as also being planets, but astronomers who were not planetary scientists generally did not. (This is not exactly the same as the definition used in the previous century, which classed all satellites as secondary planets, even non-round ones like Saturn's Hyperion or Mars's Phobos and Deimos.) All eight major planets and their planetary-mass moons have since been explored by spacecraft, as have many asteroids and the dwarf planets Ceres and Pluto; however, so far the only planetary-mass body beyond Earth that has been explored by humans is the Moon. Defining the term planet A growing number of astronomers argued for Pluto to be declassified as a planet, because many similar objects approaching its size had been found in the same region of the Solar System (the Kuiper belt) during the 1990s and early 2000s. Pluto was found to be just one "small" body in a population of thousands. They often referred to the demotion of the asteroids as a precedent, although that had been done based on their geophysical differences from planets rather than their being in a belt. Some of the larger trans-Neptunian objects, such as Quaoar, Sedna, Eris, and Haumea, were heralded in the popular press as the tenth planet. The announcement of Eris in 2005, an object 27% more massive than Pluto, created the impetus for an official definition of a planet, as considering Pluto a planet would logically have demanded that Eris be considered a planet as well. Since different procedures were in place for naming planets versus non-planets, this created an urgent situation because under the rules Eris could not be named without defining what a planet was. At the time, it was also thought that the size required for a trans-Neptunian object to become round was about the same as that required for the moons of the giant planets (about 400 km diameter), a figure that would have suggested about 200 round objects in the Kuiper belt and thousands more beyond. Many astronomers argued that the public would not accept a definition creating a large number of planets. To address the problem, the International Astronomical Union (IAU) set about creating the definition of planet and produced one in August 2006. Under this definition, a planet is a body that (a) orbits the Sun, (b) has sufficient mass to assume a nearly round shape under its own gravity, and (c) has cleared the neighbourhood around its orbit; the Solar System is thus considered to have eight planets (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune). Bodies that fulfill the first two conditions but not the third are classified as dwarf planets, provided they are not natural satellites of other planets. Originally an IAU committee had proposed a definition that would have included a larger number of planets as it did not include (c) as a criterion. After much discussion, it was decided via a vote that those bodies should instead be classified as dwarf planets.
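The 2006 definition can be read schematically as a three-way test, as in the illustrative sketch below; the boolean inputs are assumed to be already established for each body, and the encoding is a simplification rather than an IAU procedure.

```python
from dataclasses import dataclass

# Schematic encoding of the 2006 IAU Solar System definition discussed
# above. The three boolean inputs are assumed to be already established
# for the body in question; this is an illustration, not an IAU tool.

@dataclass
class Body:
    name: str
    orbits_sun: bool             # criterion (a)
    is_round: bool               # criterion (b): rounded by its own gravity
    cleared_neighbourhood: bool  # criterion (c)

def iau_class(body: Body) -> str:
    if not body.orbits_sun:
        return "not covered by the definition (e.g. a satellite)"
    if body.is_round and body.cleared_neighbourhood:
        return "planet"
    if body.is_round:
        return "dwarf planet"
    return "small Solar System body"

if __name__ == "__main__":
    for b in [Body("Earth", True, True, True),
              Body("Pluto", True, True, False),
              Body("Vesta", True, False, False)]:
        print(f"{b.name}: {iau_class(b)}")
```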
Criticisms and alternatives to IAU definition The IAU definition has not been universally used or accepted. In planetary geology, celestial objects are defined as planets by geophysical characteristics. A celestial body may acquire a dynamic (planetary) geology at approximately the mass required for its mantle to become plastic under its own weight. This leads to a state of hydrostatic equilibrium where the body acquires a stable, round shape, which is adopted as the hallmark of planethood by geophysical definitions. In the Solar System, this mass is generally less than the mass required for a body to clear its orbit; thus, some objects that are considered "planets" under geophysical definitions are not considered as such under the IAU definition, such as Ceres and Pluto. (In practice, the requirement for hydrostatic equilibrium is generally relaxed to a requirement for rounding and compaction under self-gravity; Mercury is not actually in hydrostatic equilibrium, but is universally included as a planet regardless.) Proponents of such definitions often argue that location should not matter and that planethood should be defined by the intrinsic properties of an object. Dwarf planets had been proposed as a category of small planet (as opposed to planetoids as sub-planetary objects) and planetary geologists continue to treat them as planets despite the IAU definition. The number of dwarf planets even among known objects is not certain. In 2019, Grundy et al. argued based on the low densities of some mid-sized trans-Neptunian objects that the limiting size required for a trans-Neptunian object to reach equilibrium was in fact much larger than it is for the icy moons of the giant planets, being about 900–1000 km diameter. There is general consensus on Ceres in the asteroid belt and on the eight trans-Neptunians that probably cross this threshold. Planetary geologists may include the nineteen known planetary-mass moons as "satellite planets", including Earth's Moon and Pluto's Charon, like the early modern astronomers. Some go even further and include as planets relatively large, geologically evolved bodies that are nonetheless not very round today, such as Pallas and Vesta; rounded bodies that were completely disrupted by impacts and re-accreted like Hygiea; or even everything at least the diameter of Saturn's moon Mimas, the smallest planetary-mass moon. (This may even include objects that are not round but happen to be larger than Mimas, like Neptune's moon Proteus.) Astronomer Jean-Luc Margot proposed a mathematical criterion that determines whether an object can clear its orbit during the lifetime of its host star, based on the mass of the planet, its semimajor axis, and the mass of its host star. The formula produces a value, called Π, that is greater than 1 for planets. The eight known planets and all known exoplanets have values above 100, while Ceres, Pluto, and Eris have values of 0.1 or less. Objects with values of 1 or more are expected to be approximately spherical, so that objects that fulfill the orbital-zone clearance requirement around Sun-like stars will also fulfill the roundness requirement – though this may not be the case around very low-mass stars. In 2024, Margot and collaborators proposed a revised version of the criterion with a uniform clearing timescale of 10 billion years (the approximate main-sequence lifetime of the Sun) or 13.8 billion years (the age of the Universe) to accommodate planets orbiting brown dwarfs.
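As a rough illustration of how such a clearing discriminant scales, the sketch below combines the body's mass, its semimajor axis, and the host star's mass in the way the criterion describes. The functional form and the normalization constant used here (about 807, chosen so that Earth comes out in the hundreds, consistent with the text's statement that all eight planets exceed 100) are assumptions for illustration only; consult Margot (2015) for the exact expression.

```python
def clearing_discriminant(m_body_earth_masses, a_au, m_star_solar=1.0, k=807.0):
    """Margot-style orbit-clearing parameter (illustrative sketch, not the published formula).

    m_body_earth_masses: body mass in Earth masses
    a_au: semimajor axis in astronomical units
    m_star_solar: host-star mass in solar masses
    k: assumed normalization constant
    Values well above 1 indicate the body can clear its orbital zone.
    """
    return k * m_body_earth_masses * m_star_solar ** -2.5 * a_au ** -1.125

print(round(clearing_discriminant(1.0, 1.0)))         # Earth: about 807
print(round(clearing_discriminant(0.0022, 39.5), 3))  # Pluto: about 0.03
```

With these inputs the eight major planets come out orders of magnitude above 1, while bodies like Pluto fall well below it, matching the gap described in the text.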
Exoplanets Even before the discovery of exoplanets, there were particular disagreements over whether an object should be considered a planet if it was part of a distinct population such as a belt, or if it was large enough to generate energy by the thermonuclear fusion of deuterium. Complicating the matter even further, bodies too small to generate energy by fusing deuterium can form by gas-cloud collapse just like stars and brown dwarfs, even down to the mass of Jupiter: there was thus disagreement about whether how a body formed should be taken into account. In 1992, astronomers Aleksander Wolszczan and Dale Frail announced the discovery of planets around a pulsar, PSR B1257+12. This discovery is generally considered to be the first definitive detection of a planetary system around another star. Then, on 6 October 1995, Michel Mayor and Didier Queloz of the Geneva Observatory announced the first definitive detection of an exoplanet orbiting an ordinary main-sequence star (51 Pegasi). The discovery of exoplanets led to another ambiguity in defining a planet: the point at which a planet becomes a star. Many known exoplanets are many times the mass of Jupiter, approaching that of stellar objects known as brown dwarfs. Brown dwarfs are generally considered stars due to their theoretical ability to fuse deuterium, a heavier isotope of hydrogen. Although objects more massive than 75 times that of Jupiter fuse simple hydrogen, objects of 13 Jupiter masses can fuse deuterium. Deuterium is quite rare, constituting less than 0.0026% of the hydrogen in the galaxy, and most brown dwarfs would have ceased fusing deuterium long before their discovery, making them effectively indistinguishable from supermassive planets. IAU working definition of exoplanets The 2006 IAU definition presents some challenges for exoplanets because the language is specific to the Solar System and the criteria of roundness and orbital zone clearance are not presently observable for exoplanets. In 2018, this definition was reassessed and updated as knowledge of exoplanets increased. The current official working definition of an exoplanet is as follows: The IAU noted that this definition could be expected to evolve as knowledge improves. A 2022 review article discussing the history and rationale of this definition suggested that the words "in young star clusters" should be deleted in clause 3, as such objects have now been found elsewhere, and that the term "sub-brown dwarfs" should be replaced by the more current "free-floating planetary mass objects". The term "planetary mass object" has also been used to refer to ambiguous situations concerning exoplanets, such as objects with mass typical for a planet that are free-floating or orbit a brown dwarf instead of a star. Free-floating objects of planetary mass have sometimes been called planets anyway, specifically rogue planets. The limit of 13 Jupiter masses is not universally accepted. Objects below this mass limit can sometimes burn deuterium, and the amount of deuterium that is burned depends on an object's composition. Furthermore, deuterium is quite scarce, so the stage of deuterium burning does not actually last very long; unlike hydrogen burning in a star, deuterium burning does not significantly affect the future evolution of an object. 
The relationship between mass and radius (or density) show no special feature at this limit, according to which brown dwarfs have the same physics and internal structure as lighter Jovian planets, and would more naturally be considered planets. Thus, many catalogues of exoplanets include objects heavier than 13 Jupiter masses, sometimes going up to 60 Jupiter masses. (The limit for hydrogen burning and becoming a red dwarf star is about 80 Jupiter masses.) The situation of main-sequence stars has been used to argue for such an inclusive definition of "planet" as well, as they also differ greatly along the two orders of magnitude that they cover, in their structure, atmospheres, temperature, spectral features, and probably formation mechanisms; yet they are all considered as one class, being all hydrostatic-equilibrium objects undergoing nuclear burning. Mythology and naming The naming of planets differs between planets of the Solar System and exoplanets (planets of other planetary systems). Exoplanets are commonly named after their parent star and their order of discovery within its planetary system, such as Proxima Centauri b. (The lettering starts at b, with a considered to represent the parent star.) The names for the planets of the Solar System (other than Earth) in the English language are derived from naming practices developed consecutively by the Babylonians, Greeks, and Romans of antiquity. The practice of grafting the names of gods onto the planets was almost certainly borrowed from the Babylonians by the ancient Greeks, and thereafter from the Greeks by the Romans. The Babylonians named Venus after the Sumerian goddess of love with the Akkadian name Ishtar; Mars after their god of war, Nergal; Mercury after their god of wisdom Nabu; and Jupiter after their chief god, Marduk. There are too many concordances between Greek and Babylonian naming conventions for them to have arisen separately. Given the differences in mythology, the correspondence was not perfect. For instance, the Babylonian Nergal was a god of war, and thus the Greeks identified him with Ares. Unlike Ares, Nergal was also a god of pestilence and ruler of the underworld. In ancient Greece, the two great luminaries, the Sun and the Moon, were called Helios and Selene, two ancient Titanic deities; the slowest planet, Saturn, was called Phainon, the shiner; followed by Phaethon, Jupiter, "bright"; the red planet, Mars was known as Pyroeis, the "fiery"; the brightest, Venus, was known as Phosphoros, the light bringer; and the fleeting final planet, Mercury, was called Stilbon, the gleamer. The Greeks assigned each planet to one among their pantheon of gods, the Olympians and the earlier Titans: Helios and Selene were the names of both planets and gods, both of them Titans (later supplanted by Olympians Apollo and Artemis); Phainon was sacred to Cronus, the Titan who fathered the Olympians; Phaethon was sacred to Zeus, Cronus's son who deposed him as king; Pyroeis was given to Ares, son of Zeus and god of war; Phosphoros was ruled by Aphrodite, the goddess of love; and Stilbon with its speedy motion, was ruled over by Hermes, messenger of the gods and god of learning and wit. Although modern Greeks still use their ancient names for the planets, other European languages, because of the influence of the Roman Empire and, later, the Catholic Church, use the Roman (Latin) names rather than the Greek ones. 
The Romans inherited Proto-Indo-European mythology as the Greeks did and shared with them a common pantheon under different names, but the Romans lacked the rich narrative traditions that Greek poetic culture had given their gods. During the later period of the Roman Republic, Roman writers borrowed much of the Greek narratives and applied them to their own pantheon, to the point where they became virtually indistinguishable. When the Romans studied Greek astronomy, they gave the planets their own gods' names: Mercurius (for Hermes), Venus (Aphrodite), Mars (Ares), Iuppiter (Zeus), and Saturnus (Cronus). However, there was not much agreement on which god a particular planet was associated with; according to Pliny the Elder, while Phainon's and Phaethon's associations with Saturn and Jupiter respectively were widely agreed upon, Pyroeis was also associated with the demi-god Hercules; Stilbon was also associated with Apollo, god of music, healing, and prophecy; and Phosphoros was also associated with the prominent goddesses Juno and Isis. Some Romans, following a belief possibly originating in Mesopotamia but developed in Hellenistic Egypt, believed that the seven gods after whom the planets were named took hourly shifts in looking after affairs on Earth. The order of shifts went Saturn, Jupiter, Mars, Sun, Venus, Mercury, Moon (from the farthest to the closest planet). Therefore, the first day was started by Saturn (1st hour), the second day by the Sun (25th hour), followed by the Moon (49th hour), Mars, Mercury, Jupiter, and Venus. Because each day was named by the god that started it, this became the order of the days of the week in the Roman calendar. In English, Saturday, Sunday, and Monday are straightforward translations of these Roman names. The other days were renamed after Tīw (Tuesday), Wōden (Wednesday), Þunor (Thursday), and Frīġ (Friday), the Anglo-Saxon gods considered similar or equivalent to Mars, Mercury, Jupiter, and Venus, respectively. Earth's name in English is not derived from Greco-Roman mythology. Because it was only generally accepted as a planet in the 17th century, there is no tradition of naming it after a god. (The same is true, in English at least, of the Sun and the Moon, though they are no longer generally considered planets.) The name originates from the Old English word eorþe, which was the word for "ground" and "dirt" as well as the world itself. As with its equivalents in the other Germanic languages, it derives ultimately from the Proto-Germanic word erþō, as can be seen in the English earth, the German Erde, the Dutch aarde, and the Scandinavian jord. Many of the Romance languages retain the old Roman word terra (or some variation of it) that was used with the meaning of "dry land" as opposed to "sea". The non-Romance languages use their own native words. The Greeks retain their original name, Γή (Ge). Non-European cultures use other planetary-naming systems. India uses a system based on the Navagraha, which incorporates the seven traditional planets and the ascending and descending lunar nodes Rahu and Ketu. The planets are Surya 'Sun', Chandra 'Moon', Budha for Mercury, Shukra ('bright') for Venus, Mangala (the god of war) for Mars, Bṛhaspati (counselor of the gods) for Jupiter, and Shani (symbolic of time) for Saturn. The native Persian names of most of the planets are based on identifications of the Mesopotamian gods with Iranian gods, analogous to the Greek and Latin names.
Mercury is Tir for the western Iranian god Tīriya (patron of scribes), analogous to Nabu; Venus is Nāhid for Anahita; Mars is Bahrām for Verethragna; and Jupiter is Hormoz for Ahura Mazda. The Persian name for Saturn, Keyvān, is a borrowing from Akkadian kajamānu, meaning "the permanent, steady". China and the countries of eastern Asia historically subject to Chinese cultural influence (such as Japan, Korea, and Vietnam) use a naming system based on the five Chinese elements: water (Mercury 水星 "water star"), metal (Venus 金星 "metal star"), fire (Mars 火星 "fire star"), wood (Jupiter 木星 "wood star"), and earth (Saturn 土星 "earth star"). The names of Uranus (天王星 "sky king star"), Neptune (海王星 "sea king star"), and Pluto (冥王星 "underworld king star") in Chinese, Korean, and Japanese are calques based on the roles of those gods in Roman and Greek mythology. In the 19th century, Alexander Wylie and Li Shanlan calqued the names of the first 117 asteroids into Chinese, and many of their names are still used today, e.g. Ceres (穀神星 "grain goddess star"), Pallas (智神星 "wisdom goddess star"), Juno (婚神星 "marriage goddess star"), Vesta (灶神星 "hearth goddess star"), and Hygiea (健神星 "health goddess star"). Such translations were extended to some later minor planets, including some of the dwarf planets discovered in the 21st century, e.g. Haumea (妊神星 "pregnancy goddess star"), Makemake (鳥神星 "bird goddess star"), and Eris (鬩神星 "quarrel goddess star"). However, except for the better-known asteroids and dwarf planets, many of them are rare outside Chinese astronomical dictionaries. In traditional Hebrew astronomy, the seven traditional planets have (for the most part) descriptive names—the Sun is חמה Ḥammah or "the hot one", the Moon is לבנה Levanah or "the white one", Venus is כוכב נוגה Kokhav Nogah or "the bright planet", Mercury is כוכב Kokhav or "the planet" (given its lack of distinguishing features), Mars is מאדים Ma'adim or "the red one", and Saturn is שבתאי Shabbatai or "the resting one" (in reference to its slow movement compared to the other visible planets). The odd one out is Jupiter, called צדק Tzedeq or "justice". These names, first attested in the Babylonian Talmud, are not the original Hebrew names of the planets. In 377 Epiphanius of Salamis recorded another set of names that seem to have pagan or Canaanite associations: those names, since replaced for religious reasons, were probably the historical Semitic names, and may have much earlier roots going back to Babylonian astronomy. Hebrew names were chosen for Uranus (אורון Oron, "small light") and Neptune (רהב Rahab, a Biblical sea monster) in 2009; prior to that the names "Uranus" and "Neptune" had simply been borrowed. The etymologies for the Arabic names of the planets are less well understood. Mostly agreed among scholars are Venus (az-Zuhara, "the bright one"), Earth (al-ʾArḍ, from the same root as eretz), and Saturn (Zuḥal, "withdrawer"). Multiple suggested etymologies exist for Mercury (ʿUṭārid), Mars (al-Mirrīkh), and Jupiter (al-Muštarī), but there is no agreement among scholars. When subsequent planets were discovered in the 18th and 19th centuries, Uranus was named for a Greek deity and Neptune for a Roman one (the counterpart of Poseidon).
The asteroids were initially named from mythology as well—Ceres, Juno, and Vesta are major Roman goddesses, and Pallas is an epithet of the major Greek goddess Athena—but as more and more were discovered, they first started being named after more minor goddesses, and the mythological restriction was dropped starting from the twentieth asteroid Massalia in 1852. Pluto (named after the Greek god of the underworld) was given a classical name, as it was considered a major planet when it was discovered. After more objects were discovered beyond Neptune, naming conventions depending on their orbits were put in place: those in the 2:3 resonance with Neptune (the plutinos) are given names from underworld myths, while others are given names from creation myths. Most of the trans-Neptunian planetoids are named after gods and goddesses from other cultures (e.g. Quaoar is named after a Tongva god). There are a few exceptions which continue the Roman and Greek scheme, notably including Eris as it had initially been considered a tenth planet. The moons (including the planetary-mass ones) are generally given names with some association with their parent planet. The planetary-mass moons of Jupiter are named after four of Zeus' lovers (or other sexual partners); those of Saturn are named after Cronus' brothers and sisters, the Titans; those of Uranus are named after characters from Shakespeare and Pope (originally specifically from fairy mythology, but that ended with the naming of Miranda). Neptune's planetary-mass moon Triton is named after the god's son; Pluto's planetary-mass moon Charon is named after the ferryman of the dead, who carries the souls of the newly deceased to the underworld (Pluto's domain). Symbols The written symbols for Mercury, Venus, Jupiter, Saturn, and possibly Mars have been traced to forms found in late Greek papyrus texts. The symbols for Jupiter and Saturn are identified as monograms of the corresponding Greek names, and the symbol for Mercury is a stylized caduceus. According to Annie Scott Dill Maunder, antecedents of the planetary symbols were used in art to represent the gods associated with the classical planets. Bianchini's planisphere, discovered by Francesco Bianchini in the 18th century but produced in the 2nd century, shows Greek personifications of planetary gods charged with early versions of the planetary symbols. Mercury has a caduceus; Venus has, attached to her necklace, a cord connected to another necklace; Mars, a spear; Jupiter, a staff; Saturn, a scythe; the Sun, a circlet with rays radiating from it; and the Moon, a headdress with a crescent attached. The modern shapes with the cross-marks first appeared around the 16th century. According to Maunder, the addition of crosses appears to be "an attempt to give a savour of Christianity to the symbols of the old pagan gods." Earth itself was not considered a classical planet; its symbol descends from a pre-heliocentric symbol for the four corners of the world. When further planets were discovered orbiting the Sun, symbols were invented for them. The most common astronomical symbol for Uranus, ⛢, was invented by Johann Gottfried Köhler, and was intended to represent the newly discovered metal platinum. An alternative symbol, ♅, was invented by Jérôme Lalande, and represents a globe with a H on top, for Uranus's discoverer Herschel. Today, ⛢ is mostly used by astronomers and ♅ by astrologers, though it is possible to find each symbol in the other context. 
The first few asteroids were considered to be planets when they were discovered, and were likewise given abstract symbols, e.g. Ceres' sickle (⚳), Pallas' spear (⚴), Juno's sceptre (⚵), and Vesta's hearth (⚶). However, as their number rose further and further, this practice stopped in favour of numbering them instead. (Massalia, the first asteroid not named from mythology, is also the first asteroid that was not assigned a symbol by its discoverer.) The symbols for the first four asteroids, Ceres through Vesta, remained in use for longer than the others, and even in the modern day NASA has used the Ceres symbol—Ceres being the only asteroid that is also a dwarf planet. Neptune's symbol (♆) represents the god's trident. The astronomical symbol for Pluto is a P-L monogram (♇), though it has become less common since the IAU definition reclassified Pluto. Since Pluto's reclassification, NASA has used the traditional astrological symbol of Pluto (⯓), a planetary orb over Pluto's bident. The IAU discourages the use of planetary symbols in modern journal articles in favour of one-letter or (to disambiguate Mercury and Mars) two-letter abbreviations for the major planets. The symbols for the Sun and Earth are nonetheless common, as solar mass, Earth mass, and similar units are common in astronomy. Other planetary symbols today are mostly encountered in astrology. Astrologers have resurrected the old astronomical symbols for the first few asteroids and continue to invent symbols for other objects. This includes relatively standard astrological symbols for the dwarf planets discovered in the 21st century, which were not given symbols by astronomers because planetary symbols had mostly fallen out of use in astronomy by the time they were discovered. Many astrological symbols are included in Unicode, and a few of these new inventions (the symbols of Haumea, Makemake, and Eris) have since been used by NASA in astronomy. The Eris symbol is a traditional one from Discordianism, a religion worshipping the goddess Eris. The other dwarf-planet symbols are mostly initialisms (except Haumea) in the native scripts of the cultures they come from; they also represent something associated with the corresponding deity or culture, e.g. Makemake's face or Gonggong's snake-tail.
Physical sciences
Astronomy
null
22934
https://en.wikipedia.org/wiki/Probability
Probability
Probability is the branch of mathematics and statistics concerning events and numerical descriptions of how likely they are to occur. The probability of an event is a number between 0 and 1; the larger the probability, the more likely an event is to occur. This number is often expressed as a percentage (%), ranging from 0% to 100%. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%). These concepts have been given an axiomatic mathematical formalization in probability theory, which is used widely in areas of study such as statistics, mathematics, science, finance, gambling, artificial intelligence, machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems. Etymology The word probability derives from the Latin probabilitas, which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, and often correlated with the witness's nobility. In a sense, this differs much from the modern meaning of probability, which in contrast is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference. Interpretations When dealing with random experiments – i.e., experiments that are random and well-defined – in a purely theoretical setting (like tossing a coin), probabilities can be numerically described by the number of desired outcomes, divided by the total number of all outcomes. This is referred to as theoretical probability (in contrast to empirical probability, dealing with probabilities in the context of real experiments). For example, tossing a coin twice will yield "head-head", "head-tail", "tail-head", and "tail-tail" outcomes. The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability: Objectivists assign numbers to describe some objective or physical state of affairs. The most popular version of objective probability is frequentist probability, which claims that the probability of a random event denotes the relative frequency of occurrence of an experiment's outcome when the experiment is repeated indefinitely. This interpretation considers probability to be the relative frequency "in the long run" of outcomes. A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome, even if it is performed only once. Subjectivists assign numbers per subjective probability, that is, as a degree of belief. The degree of belief has been interpreted as "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E", although that interpretation is not universally agreed upon. The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as experimental data to produce probabilities.
The expert knowledge is represented by some (subjective) prior probability distribution. These data are incorporated in a likelihood function. The product of the prior and the likelihood, when normalized, results in a posterior probability distribution that incorporates all the information known to date. By Aumann's agreement theorem, Bayesian agents whose prior beliefs are similar will end up with similar posterior beliefs. However, sufficiently different priors can lead to different conclusions, regardless of how much information the agents share. History The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability throughout history, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability. Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues are still obscured by superstitions. According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence. The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes). Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. See Ian Hacking's The Emergence of Probability and James Franklin's The Science of Conjecture for histories of the early development of the very concept of mathematical probability. The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve. The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The first law was published in 1774, and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error, disregarding sign. The second law of error was proposed in 1778 by Laplace, and stated that the frequency of the error is an exponential function of the square of the error. The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his well-known precocity had probably not made this discovery before he was two years old."
Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors. Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error, φ(x) = c·e^(−h²x²), where h is a constant depending on precision of observation, and c is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W.F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known. In the nineteenth century, authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory. In 1906, Andrey Markov introduced the notion of Markov chains, which played an important role in stochastic processes theory and its applications. The modern theory of probability based on measure theory was developed by Andrey Kolmogorov in 1931. On the geometric side, contributors to The Educational Times included Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin. See integral geometry for more information. Theory Like other theories, the theory of probability is a representation of its concepts in formal terms – that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain. There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see also probability space), sets are interpreted as events and probability as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (i.e., not further analyzed), and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details. There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the usually-understood laws of probability. Applications Probability theory is applied in everyday life in risk assessment and modeling. The insurance industry and markets use actuarial science to determine pricing and make trading decisions. Governments apply probabilistic methods in environmental regulation, entitlement analysis, and financial regulation. An example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole.
An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict. In addition to financial assessment, probability can be used to analyze trends in biology (e.g., disease spread) as well as ecology (e.g., biological Punnett squares). As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring, and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to design games of chance so that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play. Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in product design to reduce the probability of failure. Failure probability may influence a manufacturer's decisions on a product's warranty. The cache language model and other statistical language models that are used in natural language processing are also examples of applications of probability theory. Mathematical treatment Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment, sometimes denoted as Ω. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called "events". In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred. A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events (events with no common results, such as the events {1,6}, {3}, and {2,4}), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events. The probability of an event A is written as P(A), p(A), or Pr(A). This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure. The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring), often denoted as A′, Aᶜ, or ¬A; its probability is given by P(not A) = 1 − P(A). As an example, the chance of not rolling a six on a six-sided die is 1 − 1/6 = 5/6. For a more comprehensive treatment, see Complementary event.
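A minimal sketch of these definitions, using only the Python standard library: it assigns equal probability to the six outcomes of a fair die, sums probabilities over an event, and checks the complement rule just stated.

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}                 # Ω for one roll of a fair die
p = {outcome: Fraction(1, 6) for outcome in sample_space}

def prob(event):
    """P(event): sum of the probabilities of the outcomes in the event."""
    return sum(p[o] for o in event)

odd = {1, 3, 5}
print(prob(odd))              # 1/2, the event that the die shows an odd number
print(prob(sample_space))     # 1, the certain event
print(1 - prob({6}))          # 5/6, the chance of not rolling a six
```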
If two events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as P(A ∩ B). Independent events If two events, A and B, are independent then the joint probability is P(A ∩ B) = P(A)P(B). For example, if two coins are flipped, then the chance of both being heads is 1/2 × 1/2 = 1/4. Mutually exclusive events If either event A or event B can occur but never both simultaneously, then they are called mutually exclusive events. If two events are mutually exclusive, then the probability of both occurring is denoted as P(A ∩ B), and P(A ∩ B) = 0. If two events are mutually exclusive, then the probability of either occurring is denoted as P(A ∪ B), and P(A ∪ B) = P(A) + P(B). For example, the chance of rolling a 1 or 2 on a six-sided die is 1/6 + 1/6 = 1/3. Not (necessarily) mutually exclusive events If the events are not (necessarily) mutually exclusive then P(A or B) = P(A) + P(B) − P(A and B). Rewritten, P(A ∪ B) = P(A) + P(B) − P(A ∩ B). For example, when drawing a card from a deck of cards, the chance of getting a heart or a face card (J, Q, K) (or both) is 13/52 + 12/52 − 3/52 = 11/26, since among the 52 cards of a deck, 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards", but should only be counted once. This can be expanded further for multiple not (necessarily) mutually exclusive events. For three events, this proceeds as follows: P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C). It can be seen, then, that this pattern can be repeated for any number of events. Conditional probability Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written P(A | B), and is read "the probability of A, given B". It is defined by P(A | B) = P(A ∩ B) / P(B). If P(B) = 0 then P(A | B) is formally undefined by this expression. In this case A and B are independent, since P(A ∩ B) = P(A)P(B) = 0. However, it is possible to define a conditional probability for some zero-probability events, for example by using a σ-algebra of such events (such as those arising from a continuous random variable). For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is 1/2; however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken. For example, if a red ball was taken, then the probability of picking a red ball again would be 1/3, since only 1 red and 2 blue balls would have been remaining. And if a blue ball was taken previously, the probability of taking a red ball will be 2/3. Inverse probability In probability theory and applications, Bayes' rule relates the odds of event A1 to event A2, before (prior to) and after (posterior to) conditioning on another event B. The odds on A1 to event A2 is simply the ratio of the probabilities of the two events. When arbitrarily many events are of interest, not just two, the rule can be rephrased as posterior is proportional to prior times likelihood, where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as A varies, for fixed or given B (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005). Summary of probabilities Relation to randomness and probability in quantum mechanics In a deterministic universe, based on Newtonian concepts, there would be no probability if all conditions were known (Laplace's demon) (but there are situations in which sensitivity to initial conditions exceeds our ability to measure them, i.e. know them).
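The addition rule and the conditional-probability example above are easy to check numerically. The following sketch (plain Python, no external libraries) verifies the card-drawing calculation and the two-red, two-blue bag by brute-force enumeration.

```python
from fractions import Fraction
from itertools import permutations

# Addition rule: P(heart or face card) = 13/52 + 12/52 - 3/52
hearts, faces, both = Fraction(13, 52), Fraction(12, 52), Fraction(3, 52)
print(hearts + faces - both)                      # 11/26

# Conditional probability: bag with 2 red and 2 blue balls.
bag = ["red", "red", "blue", "blue"]
draws = list(permutations(bag, 2))                # all ordered pairs of draws
first_red = [d for d in draws if d[0] == "red"]
print(Fraction(len(first_red), len(draws)))       # P(first red) = 1/2
both_red = [d for d in first_red if d[1] == "red"]
print(Fraction(len(both_red), len(first_red)))    # P(second red | first red) = 1/3
```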
In the case of a roulette wheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled – as Thomas A. Bass' Newtonian Casino revealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness, and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically of the order of magnitude of the Avogadro constant) that only a statistical description of its properties is feasible. Probability theory is required to describe quantum phenomena. A revolutionary discovery of early 20th century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The objective wave function evolves deterministically but, according to the Copenhagen interpretation, it deals with probabilities of observing, the outcome being explained by a wave function collapse when an observation is made. However, the loss of determinism for the sake of instrumentalism did not meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am convinced that God does not play dice". Like Einstein, Erwin Schrödinger, who discovered the wave function, believed quantum mechanics is a statistical approximation of an underlying deterministic reality. In some modern interpretations of the statistical mechanics of measurement, quantum decoherence is invoked to account for the appearance of subjectively probabilistic experimental outcomes.
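Since the article repeatedly contrasts prior and posterior probabilities (in the Bayesian interpretation and in the inverse-probability rule), a small numerical sketch may help; the disease-test numbers below are invented purely for illustration.

```python
from fractions import Fraction

# Hypothetical numbers: 1% prevalence, a test with 90% sensitivity
# and a 5% false-positive rate.
p_disease = Fraction(1, 100)
p_pos_given_disease = Fraction(9, 10)
p_pos_given_healthy = Fraction(5, 100)

# posterior ∝ prior × likelihood, normalized over both hypotheses
joint_disease = p_disease * p_pos_given_disease
joint_healthy = (1 - p_disease) * p_pos_given_healthy
posterior = joint_disease / (joint_disease + joint_healthy)
print(posterior, float(posterior))   # 2/13 ≈ 0.154
```

Even with a fairly accurate test, the low prior keeps the posterior modest, which is exactly the prior-times-likelihood behaviour described above.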
Mathematics
Probability and statistics
null
22939
https://en.wikipedia.org/wiki/Physics
Physics
Physics is the scientific study of matter, its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force. Physics is one of the most fundamental scientific disciplines. A scientist who specializes in the field of physics is called a physicist. Physics is one of the oldest academic disciplines. Over much of the past two millennia, physics, chemistry, biology, and certain branches of mathematics were a part of natural philosophy, but during the Scientific Revolution in the 17th century, these natural sciences branched into separate research endeavors. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences and suggest new avenues of research in these and other academic disciplines such as mathematics and philosophy. Advances in physics often enable new technologies. For example, advances in the understanding of electromagnetism, solid-state physics, and nuclear physics led directly to the development of technologies that have transformed modern society, such as television, computers, domestic appliances, and nuclear weapons; advances in thermodynamics led to the development of industrialization; and advances in mechanics inspired the development of calculus. History The word physics comes from the Latin physica ('study of nature'), which itself is a borrowing of the Greek φυσική (phusikḗ, 'natural science'), a term derived from φύσις (phúsis, 'origin, nature, property'). Ancient astronomy Astronomy is one of the oldest natural sciences. Early civilizations dating before 3000 BCE, such as the Sumerians, ancient Egyptians, and the Indus Valley Civilisation, had a predictive knowledge and a basic awareness of the motions of the Sun, Moon, and stars. The stars and planets, believed to represent gods, were often worshipped. While the explanations for the observed positions of the stars were often unscientific and lacking in evidence, these early observations laid the foundation for later astronomy, as the stars were found to traverse great circles across the sky, which could not explain the positions of the planets. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. Egyptian astronomers left monuments showing knowledge of the constellations and the motions of the celestial bodies, while Greek poet Homer wrote of various celestial objects in his Iliad and Odyssey; later Greek astronomers provided names, which are still used today, for most constellations visible from the Northern Hemisphere. Natural philosophy Natural philosophy has its origins in Greece during the Archaic period (650 BCE – 480 BCE), when pre-Socratic philosophers like Thales rejected non-naturalistic explanations for natural phenomena and proclaimed that every event had a natural cause. They proposed ideas verified by reason and observation, and many of their hypotheses proved successful in experiment; for example, atomism was found to be correct approximately 2000 years after it was proposed by Leucippus and his pupil Democritus. Aristotle and Hellenistic physics During the classical period in Greece (6th, 5th and 4th centuries BCE) and in Hellenistic times, natural philosophy developed along many lines of inquiry.
Aristotle (Ἀριστοτέλης, Aristotélēs; 384–322 BCE), a student of Plato, wrote on many subjects, including a substantial treatise on "Physics", written in the 4th century BCE. Aristotelian physics was influential for about two millennia. His approach mixed some limited observation with logical deductive arguments, but did not rely on experimental verification of deduced statements. Aristotle's foundational work in Physics, though very imperfect, formed a framework against which later thinkers further developed the field. His approach is entirely superseded today. He explained ideas such as motion (and gravity) with the theory of four elements. Aristotle believed that each of the four classical elements (air, fire, water, earth) had its own natural place. Because of their differing densities, each element will revert to its own specific place in the atmosphere. So, because of their weights, fire would be at the top, air underneath fire, then water, then lastly earth. He also stated that when a small amount of one element enters the natural place of another, the less abundant element will automatically go towards its own natural place. For example, if there is a fire on the ground, the flames go up into the air in an attempt to go back to their natural place where they belong. His laws of motion included: that heavier objects fall faster, with speed proportional to weight; and that the speed of a falling object depends inversely on the density of the medium it is falling through (e.g. the density of air). He also stated that, when it comes to violent motion (motion of an object when a force is applied to it by a second object), the speed at which that object moves will only be as great as the measure of force applied to it. The problem of motion and its causes was studied carefully, leading to the philosophical notion of a "prime mover" as the ultimate source of all motion in the world (Book 8 of his treatise Physics). Medieval European and Islamic The Western Roman Empire fell to invaders and internal decay in the fifth century, resulting in a decline in intellectual pursuits in western Europe. By contrast, the Eastern Roman Empire (usually known as the Byzantine Empire) resisted the attacks from invaders and continued to advance various fields of learning, including physics. In the sixth century, Isidore of Miletus created an important compilation of Archimedes' works that are copied in the Archimedes Palimpsest. In sixth-century Europe John Philoponus, a Byzantine scholar, questioned Aristotle's teaching of physics and noted its flaws. He introduced the theory of impetus. Aristotle's physics was not scrutinized until Philoponus appeared; unlike Aristotle, who based his physics on verbal argument, Philoponus relied on observation. On Aristotle's physics Philoponus wrote: But this is completely erroneous, and our view may be corroborated by actual observation more effectively than by any sort of verbal argument. For if you let fall from the same height two weights of which one is many times as heavy as the other, you will see that the ratio of the times required for the motion does not depend on the ratio of the weights, but that the difference in time is a very small one.
And so, if the difference in the weights is not considerable, that is, if one is, let us say, double the other, there will be no difference, or else an imperceptible difference, in time, though the difference in weight is by no means negligible, with one body weighing twice as much as the other. Philoponus' criticism of Aristotelian principles of physics served as an inspiration for Galileo Galilei ten centuries later, during the Scientific Revolution. Galileo cited Philoponus substantially in his works when arguing that Aristotelian physics was flawed. In the 1300s Jean Buridan, a teacher in the faculty of arts at the University of Paris, developed the concept of impetus. It was a step toward the modern ideas of inertia and momentum. Islamic scholarship inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, especially placing emphasis on observation and a priori reasoning, developing early forms of the scientific method. The most notable innovations under Islamic scholarship were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics (also known as Kitāb al-Manāẓir), written by Ibn al-Haytham, in which he presented an alternative to the ancient Greek idea about vision. In his Treatise on Light as well as in his Kitāb al-Manāẓir, he presented a study of the phenomenon of the camera obscura (his thousand-year-old version of the pinhole camera) and delved further into the way the eye itself works. Using the knowledge of previous scholars, he began to explain how light enters the eye. He asserted that the light ray is focused, but the actual explanation of how light projected to the back of the eye had to wait until 1604. His Treatise on Light explained the camera obscura, hundreds of years before the modern development of photography. The seven-volume Book of Optics (Kitab al-Manathir) influenced thinking across disciplines from the theory of visual perception to the nature of perspective in medieval art, in both the East and the West, for more than 600 years. This included later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to Johannes Kepler. The translation of The Book of Optics had an impact on Europe. From it, later European scholars were able to build devices that replicated those Ibn al-Haytham had built and understand the way vision works.
The laws comprising classical physics remain widely used for objects on everyday scales travelling at non-relativistic speeds, since they provide a close approximation in such situations, and theories such as quantum mechanics and the theory of relativity simplify to their classical equivalents at such scales. Inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity. Both of these theories came about due to inaccuracies in classical mechanics in certain situations. Classical mechanics predicted that the speed of light depends on the motion of the observer, which could not be resolved with the constant speed predicted by Maxwell's equations of electromagnetism. This discrepancy was corrected by Einstein's theory of special relativity, which replaced classical mechanics for fast-moving bodies and allowed for a constant speed of light. Black-body radiation provided another problem for classical physics, which was corrected when Planck proposed that the excitation of material oscillators is possible only in discrete steps proportional to their frequency. This, along with the photoelectric effect and a complete theory predicting discrete energy levels of electron orbitals, led to the theory of quantum mechanics improving on classical physics at very small scales. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac. From this early work, and work in related fields, the Standard Model of particle physics was derived. Following the discovery of a particle with properties consistent with the Higgs boson at CERN in 2012, all fundamental particles predicted by the standard model, and no others, appear to exist; however, physics beyond the Standard Model, with theories such as supersymmetry, is an active area of research. Areas of mathematics in general are important to this field, such as the study of probabilities and groups. Core theories Physics deals with a wide variety of systems, although certain theories are used by all physicists. Each of these theories was experimentally tested numerous times and found to be an adequate approximation of nature. For instance, the theory of classical mechanics accurately describes the motion of objects, provided they are much larger than atoms and moving at a speed much less than the speed of light. These theories continue to be areas of active research today. Chaos theory, an aspect of classical mechanics, was discovered in the 20th century, three centuries after the original formulation of classical mechanics by Newton (1642–1727). These central theories are important tools for research into more specialized topics, and any physicist, regardless of their specialization, is expected to be literate in them. These include classical mechanics, quantum mechanics, thermodynamics and statistical mechanics, electromagnetism, and special relativity. Classical theory Classical physics includes the traditional branches and topics that were recognized and well-developed before the beginning of the 20th century—classical mechanics, acoustics, optics, thermodynamics, and electromagnetism. 
Classical mechanics is concerned with bodies acted on by forces and bodies in motion and may be divided into statics (study of the forces on a body or bodies not subject to an acceleration), kinematics (study of motion without regard to its causes), and dynamics (study of motion and the forces that affect it); mechanics may also be divided into solid mechanics and fluid mechanics (known together as continuum mechanics), the latter include such branches as hydrostatics, hydrodynamics and pneumatics. Acoustics is the study of how sound is produced, controlled, transmitted and received. Important modern branches of acoustics include ultrasonics, the study of sound waves of very high frequency beyond the range of human hearing; bioacoustics, the physics of animal calls and hearing, and electroacoustics, the manipulation of audible sound waves using electronics. Optics, the study of light, is concerned not only with visible light but also with infrared and ultraviolet radiation, which exhibit all of the phenomena of visible light except visibility, e.g., reflection, refraction, interference, diffraction, dispersion, and polarization of light. Heat is a form of energy, the internal energy possessed by the particles of which a substance is composed; thermodynamics deals with the relationships between heat and other forms of energy. Electricity and magnetism have been studied as a single branch of physics since the intimate connection between them was discovered in the early 19th century; an electric current gives rise to a magnetic field, and a changing magnetic field induces an electric current. Electrostatics deals with electric charges at rest, electrodynamics with moving charges, and magnetostatics with magnetic poles at rest. Modern theory Classical physics is generally concerned with matter and energy on the normal scale of observation, while much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on a very large or very small scale. For example, atomic and nuclear physics study matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale since it is concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in particle accelerators. On this scale, ordinary, commonsensical notions of space, time, matter, and energy are no longer valid. The two chief theories of modern physics present a different picture of the concepts of space, time, and matter from that presented by classical physics. Classical mechanics approximates nature as continuous, while quantum theory is concerned with the discrete nature of many phenomena at the atomic and subatomic level and with the complementary aspects of particles and waves in the description of such phenomena. The theory of relativity is concerned with the description of phenomena that take place in a frame of reference that is in motion with respect to an observer; the special theory of relativity is concerned with motion in the absence of gravitational fields and the general theory of relativity with motion and its connection with gravitation. Both quantum theory and the theory of relativity find applications in many areas of modern physics. 
Fundamental concepts in modern physics Fundamental concepts in modern physics include action, causality, covariance, particles, physical fields, physical interactions, quanta, statistical ensembles, symmetry, and waves. Distinction between classical and modern physics While physics itself aims to discover universal laws, its theories lie in explicit domains of applicability. Loosely speaking, the laws of classical physics accurately describe systems whose important length scales are greater than the atomic scale and whose motions are much slower than the speed of light. Outside of this domain, observations do not match predictions provided by classical mechanics. Einstein contributed the framework of special relativity, which replaced notions of absolute time and space with spacetime and allowed an accurate description of systems whose components have speeds approaching the speed of light. Planck, Schrödinger, and others introduced quantum mechanics, a probabilistic notion of particles and interactions that allowed an accurate description of atomic and subatomic scales. Later, quantum field theory unified quantum mechanics and special relativity. General relativity allowed for a dynamical, curved spacetime, with which highly massive systems and the large-scale structure of the universe can be well-described. General relativity has not yet been unified with the other fundamental descriptions; several candidate theories of quantum gravity are being developed. Philosophy and relation to other fields Physics, as with the rest of science, relies on the philosophy of science and its "scientific method" to advance knowledge of the physical world. The scientific method employs a priori and a posteriori reasoning as well as the use of Bayesian inference to measure the validity of a given theory. Study of the philosophical issues surrounding physics, the philosophy of physics, involves issues such as the nature of space and time, determinism, and metaphysical outlooks such as empiricism, naturalism, and realism. Many physicists have written about the philosophical implications of their work, for instance Laplace, who championed causal determinism, and Erwin Schrödinger, who wrote on quantum mechanics. The mathematical physicist Roger Penrose has been called a Platonist by Stephen Hawking, a view Penrose discusses in his book, The Road to Reality. Hawking referred to himself as an "unashamed reductionist" and took issue with Penrose's views. Mathematics provides a compact and exact language used to describe the order in nature. This was noted and advocated by Pythagoras, Plato, Galileo, and Newton. Some theorists, like Hilary Putnam and Penelope Maddy, hold that logical truths, and therefore mathematical reasoning, depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world, which may explain the peculiar relation between these fields. Physics uses mathematics to organise and formulate experimental results. From those results, precise or estimated solutions (quantitative results) are obtained, from which new predictions can be made and experimentally confirmed or negated. The results from physics experiments are numerical data, with their units of measure and estimates of the errors in the measurements. Technologies based on mathematics, like computation, have made computational physics an active area of research. Ontology is a prerequisite for physics, but not for mathematics. 
This means physics is ultimately concerned with descriptions of the real world, while mathematics is concerned with abstract patterns, even beyond the real world. Thus physics statements are synthetic, while mathematical statements are analytic. Mathematics contains hypotheses, while physics contains theories. Mathematical statements have to be only logically true, while predictions of physics statements must match observed and experimental data. The distinction is clear-cut, but not always obvious. For example, mathematical physics is the application of mathematics in physics. Its methods are mathematical, but its subject is physical. The problems in this field start with a "mathematical model of a physical situation" (system) and a "mathematical description of a physical law" that will be applied to that system. Every mathematical statement used for solving has a hard-to-find physical meaning. The final mathematical solution has an easier-to-find meaning, because it is what the solver is looking for. Distinction between fundamental and applied physics Physics is a branch of fundamental science (also called basic science). Physics is also called "the fundamental science" because all branches of natural science, including chemistry, astronomy, geology, and biology, are constrained by the laws of physics. Similarly, chemistry is often called the central science because of its role in linking the physical sciences. For example, chemistry studies properties, structures, and reactions of matter (chemistry's focus on the molecular and atomic scale distinguishes it from physics). Structures are formed because particles exert electrical forces on each other, properties include physical characteristics of given substances, and reactions are bound by laws of physics, like conservation of energy, mass, and charge. Fundamental physics seeks to better explain and understand phenomena in all spheres, without a specific practical application as a goal, other than the deeper insight into the phenomena themselves. Applied physics is a general term for physics research and development that is intended for a particular use. An applied physics curriculum usually contains a few classes in an applied discipline, like geology or electrical engineering. It usually differs from engineering in that an applied physicist may not be designing something in particular, but rather is using physics or conducting physics research with the aim of developing new technologies or solving a problem. The approach is similar to that of applied mathematics. Applied physicists use physics in scientific research. For instance, people working on accelerator physics might seek to build better particle detectors for research in theoretical physics. Physics is used heavily in engineering. For example, statics, a subfield of mechanics, is used in the building of bridges and other static structures. The understanding and use of acoustics results in sound control and better concert halls; similarly, the use of optics creates better optical devices. An understanding of physics makes for more realistic flight simulators, video games, and movies, and is often critical in forensic investigations. With the standard consensus that the laws of physics are universal and do not change with time, physics can be used to study things that would ordinarily be mired in uncertainty. 
For example, in the study of the origin of the Earth, a physicist can reasonably model Earth's mass, temperature, and rate of rotation, as a function of time allowing the extrapolation forward or backward in time and so predict future or prior events. It also allows for simulations in engineering that speed up the development of a new technology. There is also considerable interdisciplinarity, so many other important fields are influenced by physics (e.g., the fields of econophysics and sociophysics). Research Scientific method Physicists use the scientific method to test the validity of a physical theory. By using a methodical approach to compare the implications of a theory with the conclusions drawn from its related experiments and observations, physicists are better able to test the validity of a theory in a logical, unbiased, and repeatable way. To that end, experiments are performed and observations are made in order to determine the validity or invalidity of a theory. A scientific law is a concise verbal or mathematical statement of a relation that expresses a fundamental principle of some theory, such as Newton's law of universal gravitation. Theory and experiment Theorists seek to develop mathematical models that both agree with existing experiments and successfully predict future experimental results, while experimentalists devise and perform experiments to test theoretical predictions and explore new phenomena. Although theory and experiment are developed separately, they strongly affect and depend upon each other. Progress in physics frequently comes about when experimental results defy explanation by existing theories, prompting intense focus on applicable modelling, and when new theories generate experimentally testable predictions, which inspire the development of new experiments (and often related equipment). Physicists who work at the interplay of theory and experiment are called phenomenologists, who study complex phenomena observed in experiment and work to relate them to a fundamental theory. Theoretical physics has historically taken inspiration from philosophy; electromagnetism was unified this way. Beyond the known universe, the field of theoretical physics also deals with hypothetical issues, such as parallel universes, a multiverse, and higher dimensions. Theorists invoke these ideas in hopes of solving particular problems with existing theories; they then explore the consequences of these ideas and work toward making testable predictions. Experimental physics expands, and is expanded by, engineering and technology. Experimental physicists who are involved in basic research design and perform experiments with equipment such as particle accelerators and lasers, whereas those involved in applied research often work in industry, developing technologies such as magnetic resonance imaging (MRI) and transistors. Feynman has noted that experimentalists may seek areas that have not been explored well by theorists. Scope and aims Physics covers a wide range of phenomena, from elementary particles (such as quarks, neutrinos, and electrons) to the largest superclusters of galaxies. Included in these phenomena are the most basic objects composing all other things. Therefore, physics is sometimes called the "fundamental science". Physics aims to describe the various phenomena that occur in nature in terms of simpler phenomena. Thus, physics aims to both connect the things observable to humans to root causes, and then connect these causes together. 
For example, the ancient Chinese observed that certain rocks (lodestone and magnetite) were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. But even before the Chinese discovered magnetism, the ancient Greeks knew of other objects such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also first studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force—electromagnetism. This process of "unifying" forces continues today, and electromagnetism and the weak nuclear force are now considered to be two aspects of the electroweak interaction. Physics hopes to find an ultimate reason (theory of everything) for why nature is as it is (see section Current research below for more information). Research fields Contemporary research in physics can be broadly divided into nuclear and particle physics; condensed matter physics; atomic, molecular, and optical physics; astrophysics; and applied physics. Some physics departments also support physics education research and physics outreach. Since the 20th century, the individual fields of physics have become increasingly specialised, and today most physicists work in a single field for their entire careers. "Universalists" such as Einstein (1879–1955) and Lev Landau (1908–1968), who worked in multiple fields of physics, are now very rare. The major fields of physics, along with their subfields and the theories and concepts they employ, are shown in the following table. Nuclear and particle Particle physics is the study of the elementary constituents of matter and energy and the interactions between them. In addition, particle physicists design and develop the high-energy accelerators, detectors, and computer programs necessary for this research. The field is also called "high-energy physics" because many elementary particles do not occur naturally but are created only during high-energy collisions of other particles. Currently, the interactions of elementary particles and fields are described by the Standard Model. The model accounts for the 12 known particles of matter (quarks and leptons) that interact via the strong, weak, and electromagnetic fundamental forces. Dynamics are described in terms of matter particles exchanging gauge bosons (gluons, W and Z bosons, and photons, respectively). The Standard Model also predicts a particle known as the Higgs boson. In July 2012 CERN, the European laboratory for particle physics, announced the detection of a particle consistent with the Higgs boson, an integral part of the Higgs mechanism. Nuclear physics is the field of physics that studies the constituents and interactions of atomic nuclei. The most commonly known applications of nuclear physics are nuclear power generation and nuclear weapons technology, but the research has provided application in many fields, including those in nuclear medicine and magnetic resonance imaging, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology. Atomic, molecular, and optical Atomic, molecular, and optical physics (AMO) is the study of matter—matter and light—matter interactions on the scale of single atoms and molecules. 
The three areas are grouped together because of their interrelationships, the similarity of methods used, and the commonality of their relevant energy scales. All three areas include both classical, semi-classical and quantum treatments; they can treat their subject from a microscopic view (in contrast to a macroscopic view). Atomic physics studies the electron shells of atoms. Current research focuses on activities in quantum control, cooling and trapping of atoms and ions, low-temperature collision dynamics and the effects of electron correlation on structure and dynamics. Atomic physics is influenced by the nucleus (see hyperfine splitting), but intra-nuclear phenomena such as fission and fusion are considered part of nuclear physics. Molecular physics focuses on multi-atomic structures and their internal and external interactions with matter and light. Optical physics is distinct from optics in that it tends to focus not on the control of classical light fields by macroscopic objects but on the fundamental properties of optical fields and their interactions with matter in the microscopic realm. Condensed matter Condensed matter physics is the field of physics that deals with the macroscopic physical properties of matter. In particular, it is concerned with the "condensed" phases that appear whenever the number of particles in a system is extremely large and the interactions between them are strong. The most familiar examples of condensed phases are solids and liquids, which arise from the bonding by way of the electromagnetic force between atoms. More exotic condensed phases include the superfluid and the Bose–Einstein condensate found in certain atomic systems at very low temperature, the superconducting phase exhibited by conduction electrons in certain materials, and the ferromagnetic and antiferromagnetic phases of spins on atomic lattices. Condensed matter physics is the largest field of contemporary physics. Historically, condensed matter physics grew out of solid-state physics, which is now considered one of its main subfields. The term condensed matter physics was apparently coined by Philip Anderson when he renamed his research group—previously solid-state theory—in 1967. In 1978, the Division of Solid State Physics of the American Physical Society was renamed as the Division of Condensed Matter Physics. Condensed matter physics has a large overlap with chemistry, materials science, nanotechnology and engineering. Astrophysics Astrophysics and astronomy are the application of the theories and methods of physics to the study of stellar structure, stellar evolution, the origin of the Solar System, and related problems of cosmology. Because astrophysics is a broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics. The discovery by Karl Jansky in 1931 that radio signals were emitted by celestial bodies initiated the science of radio astronomy. Most recently, the frontiers of astronomy have been expanded by space exploration. Perturbations and interference from the Earth's atmosphere make space-based observations necessary for infrared, ultraviolet, gamma-ray, and X-ray astronomy. Physical cosmology is the study of the formation and evolution of the universe on its largest scales. Albert Einstein's theory of relativity plays a central role in all modern cosmological theories. 
In the early 20th century, Hubble's discovery that the universe is expanding, as shown by the Hubble diagram, prompted rival explanations known as the steady state universe and the Big Bang. The Big Bang was confirmed by the success of Big Bang nucleosynthesis and the discovery of the cosmic microwave background in 1964. The Big Bang model rests on two theoretical pillars: Albert Einstein's general relativity and the cosmological principle. Cosmologists have recently established the ΛCDM model of the evolution of the universe, which includes cosmic inflation, dark energy, and dark matter. Numerous possibilities and discoveries are anticipated to emerge from new data from the Fermi Gamma-ray Space Telescope over the upcoming decade and vastly revise or clarify existing models of the universe. In particular, the potential for a tremendous discovery surrounding dark matter is possible over the next several years. Fermi will search for evidence that dark matter is composed of weakly interacting massive particles, complementing similar experiments with the Large Hadron Collider and other underground detectors. IBEX is already yielding new astrophysical discoveries: "No one knows what is creating the ENA (energetic neutral atoms) ribbon" along the termination shock of the solar wind, "but everyone agrees that it means the textbook picture of the heliosphere—in which the Solar System's enveloping pocket filled with the solar wind's charged particles is plowing through the onrushing 'galactic wind' of the interstellar medium in the shape of a comet—is wrong." Current research Research in physics is continually progressing on a large number of fronts. In condensed matter physics, an important unsolved theoretical problem is that of high-temperature superconductivity. Many condensed matter experiments are aiming to fabricate workable spintronics and quantum computers. In particle physics, the first pieces of experimental evidence for physics beyond the Standard Model have begun to appear. Foremost among these are indications that neutrinos have non-zero mass. These experimental results appear to have solved the long-standing solar neutrino problem, and the physics of massive neutrinos remains an area of active theoretical and experimental research. The Large Hadron Collider has already found the Higgs boson, but future research aims to prove or disprove the supersymmetry, which extends the Standard Model of particle physics. Research on the nature of the major mysteries of dark matter and dark energy is also currently ongoing. Although much progress has been made in high-energy, quantum, and astronomical physics, many everyday phenomena involving complexity, chaos, or turbulence are still poorly understood. Complex problems that seem like they could be solved by a clever application of dynamics and mechanics remain unsolved; examples include the formation of sandpiles, nodes in trickling water, the shape of water droplets, mechanisms of surface tension catastrophes, and self-sorting in shaken heterogeneous collections. These complex phenomena have received growing attention since the 1970s for several reasons, including the availability of modern mathematical methods and computers, which enabled complex systems to be modeled in new ways. Complex physics has become part of increasingly interdisciplinary research, as exemplified by the study of turbulence in aerodynamics and the observation of pattern formation in biological systems. 
In 1932, Horace Lamb remarked: "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic."
https://en.wikipedia.org/wiki/Population
Population
Population is the term typically used to refer to the number of people in a single area. Governments conduct a census to quantify the size of a resident population within a given jurisdiction. The term is also applied to non-human animals, microorganisms, and plants, and has specific uses within such fields as ecology and genetics. Etymology The word population is derived from the Late Latin populatio (a people, a multitude), which itself is derived from the Latin word populus (a people). Use of the term Social sciences In sociology and population geography, population refers to a group of human beings with some predefined feature in common, such as location, race, ethnicity, nationality, or religion. Ecology In ecology, a population is a group of organisms of the same species which inhabit the same geographical area and are capable of interbreeding. The area of a sexual population is the area where interbreeding is possible between any opposite-sex pair within the area and more probable than cross-breeding with individuals from other areas. In humans, interbreeding is unrestricted by racial differences, as all humans belong to the same species of Homo sapiens. In ecology, the population of a certain species in a certain area can be estimated using the Lincoln index to calculate the total population of an area based on the number of individuals observed. Dynamics Genetics In genetics, a population is often defined as a set of organisms in which any pair of members can breed together. They can thus routinely exchange gametes in order to have usually fertile progeny, and such a breeding group is also known therefore as a gamodeme. This also implies that all members belong to the same species. If the gamodeme is very large (theoretically, approaching infinity), and all gene alleles are uniformly distributed by the gametes within it, the gamodeme is said to be panmictic. Under this state, allele (gamete) frequencies can be converted to genotype (zygote) frequencies by expanding an appropriate quadratic equation, as shown by Sir Ronald Fisher in his establishment of quantitative genetics. This seldom occurs in nature: localization of gamete exchange – through dispersal limitations, preferential mating, cataclysm, or other cause – may lead to small actual gamodemes which exchange gametes reasonably uniformly within themselves but are virtually separated from their neighboring gamodemes. However, there may be low frequencies of exchange with these neighbors. This may be viewed as the breaking up of a large sexual population (panmictic) into smaller overlapping sexual populations. This failure of panmixia leads to two important changes in overall population structure: (1) the component gamodemes vary (through gamete sampling) in their allele frequencies when compared with each other and with the theoretical panmictic original (this is known as dispersion, and its details can be estimated using expansion of an appropriate binomial equation); and (2) the level of homozygosity rises in the entire collection of gamodemes. The overall rise in homozygosity is quantified by the inbreeding coefficient (f or φ). All homozygotes are increased in frequency – both the deleterious and the desirable. The mean phenotype of the gamodemes collection is lower than that of the panmictic original – which is known as inbreeding depression. It is most important to note, however, that some dispersion lines will be superior to the panmictic original, while some will be about the same, and some will be inferior. 
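To make the panmictic conversion described above concrete, here is a minimal illustrative sketch (not part of the original article; the function name, the allele labels A/a, and the example values p = 0.7 and f = 0.25 are assumptions chosen for demonstration). With allele frequencies p and q = 1 − p, expanding the quadratic (p + q)^2 = p^2 + 2pq + q^2 gives the panmictic genotype frequencies, and an inbreeding coefficient f shifts frequency from heterozygotes to both homozygote classes:

# Illustrative sketch: converting allele frequencies to genotype frequencies (Python).
def genotype_frequencies(p, f=0.0):
    """Return (AA, Aa, aa) frequencies for allele frequency p of allele A.

    f = 0 reproduces the panmictic (Hardy-Weinberg) expansion of (p + q)^2;
    f > 0 raises both homozygote classes at the expense of heterozygotes.
    """
    q = 1.0 - p
    freq_AA = p * p + f * p * q
    freq_Aa = 2.0 * p * q * (1.0 - f)
    freq_aa = q * q + f * p * q
    return freq_AA, freq_Aa, freq_aa

print(genotype_frequencies(0.7))          # panmictic: approximately (0.49, 0.42, 0.09)
print(genotype_frequencies(0.7, f=0.25))  # dispersed gamodemes: homozygosity rises, heterozygosity falls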
The probabilities of each can be estimated from those binomial equations. In plant and animal breeding, procedures have been developed which deliberately utilize the effects of dispersion (such as line breeding, pure-line breeding, backcrossing). Dispersion-assisted selection leads to the greatest genetic advance (ΔG=change in the phenotypic mean), and is much more powerful than selection acting without attendant dispersion. This is so for both allogamous (random fertilization) and autogamous (self-fertilization) gamodemes. World human population According to the UN, the world's population surpassed 8 billion on 15 November 2022, an increase of 1 billion since 12 March 2012. According to a separate estimate by the United Nations, Earth's population exceeded seven billion in October 2011. According to UNFPA, growth to such an extent offers unprecedented challenges and opportunities to all of humanity. According to papers published by the United States Census Bureau, the world population hit 6.5 billion on 24 February 2006. The United Nations Population Fund designated 12 October 1999 as the approximate day on which world population reached 6 billion. This was about 12 years after the world population reached 5 billion in 1987, and six years after the world population reached 5.5 billion in 1993. The population of countries such as Nigeria is not even known to the nearest million, so there is a considerable margin of error in such estimates. Researcher Carl Haub calculated that a total of over 100 billion people have probably been born in the last 2000 years. Predicted growth and decline Population growth increased significantly as the Industrial Revolution gathered pace from 1700 onwards. The last 50 years have seen a yet more rapid increase in the rate of population growth due to medical advances and substantial increases in agricultural productivity, particularly beginning in the 1960s, made by the Green Revolution. In 2017 the United Nations Population Division projected that the world's population would reach about 9.8 billion in 2050 and 11.2 billion in 2100. In the future, the world's population is expected to peak at some point, after which it will decline due to economic reasons, health concerns, land exhaustion and environmental hazards. According to one report, it is very likely that the world's population will stop growing before the end of the 21st century. Further, there is some likelihood that population will actually decline before 2100. Population has already declined in the last decade or two in Eastern Europe, the Baltics and in the former Commonwealth of Independent States. The population pattern of less-developed regions of the world in recent years has been marked by gradually declining birth rates. These followed an earlier sharp reduction in death rates. This transition from high birth and death rates to low birth and death rates is often referred to as the demographic transition. Population planning Human population planning is the practice of altering the rate of growth of a human population. Historically, human population control has been implemented with the goal of limiting the rate of population growth. In the period from the 1950s to the 1980s, concerns about global population growth and its effects on poverty, environmental degradation, and political stability led to efforts to reduce population growth rates. 
While population control can involve measures that improve people's lives by giving them greater control of their reproduction, a few programs, most notably the Chinese government's one-child per family policy, have resorted to coercive measures. In the 1970s, tension grew between population control advocates and women's health activists who advanced women's reproductive rights as part of a human rights-based approach. Growing opposition to the narrow population control focus led to a significant change in population control policies in the early 1980s.
https://en.wikipedia.org/wiki/Sample%20space
Sample space
In probability theory, the sample space (also called sample description space, possibility space, or outcome space) of an experiment or random trial is the set of all possible outcomes or results of that experiment. A sample space is usually denoted using set notation, and the possible ordered outcomes, or sample points, are listed as elements in the set. It is common to refer to a sample space by the labels S, Ω, or U (for "universal set"). The elements of a sample space may be numbers, words, letters, or symbols. They can also be finite, countably infinite, or uncountably infinite. A subset of the sample space is an event, denoted by E. If the outcome of an experiment is included in E, then event E has occurred. For example, if the experiment is tossing a single coin, the sample space is the set {H, T}, where the outcome H means that the coin is heads and the outcome T means that the coin is tails. The possible events are {}, {H}, {T}, and {H, T}. For tossing two coins, the sample space is {HH, HT, TH, TT}, where the outcome is HH if both coins are heads, HT if the first coin is heads and the second is tails, TH if the first coin is tails and the second is heads, and TT if both coins are tails. The event that at least one of the coins is heads is given by {HH, HT, TH}. For tossing a single six-sided die one time, where the result of interest is the number of pips facing up, the sample space is {1, 2, 3, 4, 5, 6}. A well-defined, non-empty sample space S is one of three components in a probabilistic model (a probability space). The other two basic elements are a well-defined set of possible events (an event space), which is typically the power set of S if S is discrete or a σ-algebra on S if it is continuous, and a probability assigned to each event (a probability measure function). A sample space can be represented visually by a rectangle, with the outcomes of the sample space denoted by points within the rectangle. The events may be represented by ovals, where the points enclosed within the oval make up the event. Conditions of a sample space A set Ω with outcomes s1, s2, ..., sn (i.e. Ω = {s1, s2, ..., sn}) must meet some conditions in order to be a sample space: The outcomes must be mutually exclusive, i.e. if sj occurs, then no other si will take place (for all i ≠ j). The outcomes must be collectively exhaustive, i.e. on every experiment (or random trial) some outcome si ∈ Ω will always take place. The sample space (Ω) must have the right granularity depending on what the experimenter is interested in. Irrelevant information must be removed from the sample space and the right abstraction must be chosen. For instance, in the trial of tossing a coin, one possible sample space is Ω1 = {H, T}, where H is the outcome where the coin lands heads and T is for tails. Another possible sample space could be Ω2 = {(H, R), (H, NR), (T, R), (T, NR)}. Here, R denotes a rainy day and NR is a day where it is not raining. For most experiments, Ω1 would be a better choice than Ω2, as an experimenter likely does not care about how the weather affects the coin toss. Multiple sample spaces For many experiments, there may be more than one plausible sample space available, depending on what result is of interest to the experimenter. For example, when drawing a card from a standard deck of fifty-two playing cards, one possibility for the sample space could be the various ranks (Ace through King), while another could be the suits (clubs, diamonds, hearts, or spades). 
A more complete description of outcomes, however, could specify both the denomination and the suit, and a sample space describing each individual card can be constructed as the Cartesian product of the two sample spaces noted above (this space would contain fifty-two equally likely outcomes). Still other sample spaces are possible, such as right-side up or upside down, if some cards have been flipped when shuffling. Equally likely outcomes Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely. For any sample space with N equally likely outcomes, each outcome is assigned the probability 1/N. However, there are experiments that are not easily described by a sample space of equally likely outcomes—for example, if one were to toss a thumb tack many times and observe whether it landed with its point upward or downward, there is no physical symmetry to suggest that the two outcomes should be equally likely. Though most random phenomena do not have equally likely outcomes, it can be helpful to define a sample space in such a way that outcomes are at least approximately equally likely, since this condition significantly simplifies the computation of probabilities for events within the sample space. If each individual outcome occurs with the same probability, then the probability of any event becomes simply: P(event) = (number of outcomes in the event) / (number of outcomes in the sample space). For example, if two fair six-sided dice are thrown to generate two uniformly distributed integers, D1 and D2, each in the range from 1 to 6, inclusive, the 36 possible ordered pairs (D1, D2) of outcomes constitute a sample space of equally likely events. In this case, the above formula applies, such as calculating the probability of a particular sum of the two rolls in an outcome. The probability of the event that the sum D1 + D2 is five is 4/36 = 1/9, since four of the thirty-six equally likely pairs of outcomes sum to five. If the sample space was all of the possible sums obtained from rolling two six-sided dice, the above formula can still be applied because the dice rolls are fair, but the number of outcomes in a given event will vary. A sum of two can occur with the outcome (1, 1), so the probability is 1/36. For a sum of seven, the outcomes in the event are {(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)}, so the probability is 6/36 = 1/6. Simple random sample In statistics, inferences are made about characteristics of a population by studying a sample of that population's individuals. In order to arrive at a sample that presents an unbiased estimate of the true characteristics of the population, statisticians often seek to study a simple random sample—that is, a sample in which every individual in the population is equally likely to be included. The result of this is that every possible combination of individuals who could be chosen for the sample has an equal chance to be the sample that is selected (that is, the space of simple random samples of a given size from a given population is composed of equally likely outcomes). Infinitely large sample spaces In an elementary approach to probability, any subset of the sample space is usually called an event. However, this gives rise to problems when the sample space is continuous, so that a more precise definition of an event is necessary. Under this definition only measurable subsets of the sample space, constituting a σ-algebra over the sample space itself, are considered events. An example of an infinitely large sample space is measuring the lifetime of a light bulb. The corresponding sample space would be the interval [0, ∞).
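As a concrete illustration of the equally-likely-outcomes formula above (a minimal sketch, not part of the original article; the variable and function names are assumptions chosen for demonstration), the following Python fragment enumerates the 36 ordered pairs for two fair dice and recovers the probabilities quoted above:

# Illustrative sketch: probabilities over a sample space of equally likely outcomes.
from fractions import Fraction
from itertools import product

sample_space = list(product(range(1, 7), repeat=2))  # the 36 ordered pairs (D1, D2)

def prob(event):
    # P(event) = (number of outcomes in the event) / (number of outcomes in the sample space)
    return Fraction(len(event), len(sample_space))

sum_is_five = [pair for pair in sample_space if sum(pair) == 5]
sum_is_seven = [pair for pair in sample_space if sum(pair) == 7]

print(prob(sum_is_five))   # 1/9, i.e. 4/36
print(prob(sum_is_seven))  # 1/6, i.e. 6/36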
https://en.wikipedia.org/wiki/Event%20%28probability%20theory%29
Event (probability theory)
In probability theory, an event is a subset of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. A single outcome may be an element of many different events, and different events in an experiment are usually not equally likely, since they may include very different groups of outcomes. An event consisting of only a single outcome is called an elementary event or an atomic event; that is, it is a singleton set. An event that has more than one possible outcome is called a compound event. An event S is said to occur if S contains the outcome x of the experiment (or trial) (that is, if x ∈ S). The probability (with respect to some probability measure) that an event S occurs is the probability that S contains the outcome x of an experiment (that is, it is the probability that x ∈ S). An event S defines a complementary event, namely the complementary set (the event S not occurring), and together these define a Bernoulli trial: did the event occur or not? Typically, when the sample space is finite, any subset of the sample space is an event (that is, all elements of the power set of the sample space are defined as events). However, this approach does not work well in cases where the sample space is uncountably infinite. So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events (see Events in probability spaces, below). A simple example If we assemble a deck of 52 playing cards with no jokers, and draw a single card from the deck, then the sample space is a 52-element set, as each card is a possible outcome. An event, however, is any subset of the sample space, including any singleton set (an elementary event), the empty set (an impossible event, with probability zero) and the sample space itself (a certain event, with probability one). Other events are proper subsets of the sample space that contain multiple elements. So, for example, potential events include: "Red and black at the same time without being a joker" (0 elements), "The 5 of Hearts" (1 element), "A King" (4 elements), "A Face card" (12 elements), "A Spade" (13 elements), "A Face card or a red suit" (32 elements), "A card" (52 elements). Since all events are sets, they are usually written as sets (for example, {1, 2, 3}), and represented graphically using Venn diagrams. In the situation where each outcome in the sample space Ω is equally likely, the probability P(A) of an event A is given by the formula P(A) = |A| / |Ω|, the number of outcomes in A divided by the total number of outcomes in Ω. This rule can readily be applied to each of the example events above. Events in probability spaces Defining all subsets of the sample space as events works well when there are only finitely many outcomes, but gives rise to problems when the sample space is infinite. For many standard probability distributions, such as the normal distribution, the sample space is the set of real numbers or some subset of the real numbers. Attempts to define probabilities for all subsets of the real numbers run into difficulties when one considers 'badly behaved' sets, such as those that are nonmeasurable. Hence, it is necessary to restrict attention to a more limited family of subsets. For the standard tools of probability theory, such as joint and conditional probabilities, to work, it is necessary to use a σ-algebra, that is, a family closed under complementation and countable unions of its members. The most natural choice of σ-algebra is the Borel measurable sets derived from unions and intersections of intervals. However, the larger class of Lebesgue measurable sets proves more useful in practice. 
In the general measure-theoretic description of probability spaces, an event may be defined as an element of a selected σ-algebra of subsets of the sample space. Under this definition, any subset of the sample space that is not an element of the σ-algebra is not an event, and does not have a probability. With a reasonable specification of the probability space, however, all events of interest are elements of the σ-algebra. A note on notation Even though events are subsets of some sample space Ω, they are often written as predicates or indicators involving random variables. For example, if X is a real-valued random variable defined on the sample space Ω, the event {ω ∈ Ω : X(ω) ≤ a} can be written more conveniently as, simply, X ≤ a. This is especially common in formulas for a probability, such as Pr(X ≤ a). The set {X ≤ a} is an example of an inverse image under the mapping X, because ω ∈ X⁻¹((−∞, a]) if and only if X(ω) ≤ a.
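As a small illustration of the notation just described (a sketch, not part of the original article; the die example and the names used are assumptions), the event written X ≤ a is literally the set of outcomes that the random variable X maps into (−∞, a], which for a finite sample space can be computed directly as an inverse image:

# Illustrative sketch: an event expressed as the inverse image of a random variable.
sample_space = {1, 2, 3, 4, 5, 6}              # one roll of a fair die

def X(omega):
    return omega                               # a real-valued random variable on the space

a = 3
event = {omega for omega in sample_space if X(omega) <= a}  # the set {omega : X(omega) <= a}

print(event)                                   # {1, 2, 3}
print(len(event) / len(sample_space))          # P(X <= a) = 0.5 when outcomes are equally likely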
https://en.wikipedia.org/wiki/Primate
Primate
Primates is an order of mammals, which is further divided into the strepsirrhines, which include lemurs, galagos, and lorisids; and the haplorhines, which include tarsiers and simians (monkeys and apes). Primates arose 74–63 million years ago first from small terrestrial mammals, which adapted for life in tropical forests: many primate characteristics represent adaptations to the challenging environment among tree tops, including large brain sizes, binocular vision, color vision, vocalizations, shoulder girdles allowing a large degree of movement in the upper limbs, and opposable thumbs (in most but not all) that enable better grasping and dexterity. Primates range in size from Madame Berthe's mouse lemur, which weighs , to the eastern gorilla, weighing over . There are 376–524 species of living primates, depending on which classification is used. New primate species continue to be discovered: over 25 species were described in the 2000s, 36 in the 2010s, and six in the 2020s. Primates have large brains (relative to body size) compared to other mammals, as well as an increased reliance on visual acuity at the expense of the sense of smell, which is the dominant sensory system in most mammals. These features are more developed in monkeys and apes, and noticeably less so in lorises and lemurs. Some primates, including gorillas, humans and baboons, are primarily ground-dwelling rather than arboreal, but all species have adaptations for climbing trees. Arboreal locomotion techniques used include leaping from tree to tree and swinging between branches of trees (brachiation); terrestrial locomotion techniques include walking on two hindlimbs (bipedalism) and modified walking on four limbs (quadrupedalism) via knuckle-walking. Primates are among the most social of all animals, forming pairs or family groups, uni-male harems, and multi-male/multi-female groups. Non-human primates have at least four types of social systems, many defined by the amount of movement by adolescent females between groups. Primates have slower rates of development than other similarly sized mammals, reach maturity later, and have longer lifespans. Primates are also the most cognitively advanced animals, with humans (genus Homo) capable of creating complex languages and sophisticated civilizations, and non-human primates are recorded to use tools. They may communicate using facial and hand gestures, smells and vocalizations. Close interactions between humans and non-human primates (NHPs) can create opportunities for the transmission of zoonotic diseases, especially virus diseases including herpes, measles, ebola, rabies and hepatitis. Thousands of non-human primates are used in research around the world because of their psychological and physiological similarity to humans. About 60% of primate species are threatened with extinction. Common threats include deforestation, forest fragmentation, monkey drives, and primate hunting for use in medicines, as pets, and for food. Large-scale tropical forest clearing for agriculture most threatens primates. Etymology The English name primates is derived from Old French or French , from a noun use of Latin , from ('prime, first rank'). The name was given by Carl Linnaeus because he thought this the "highest" order of animals. The relationships among the different groups of primates were not clearly understood until relatively recently, so the commonly used terms are somewhat confused. 
For example, ape has been used either as an alternative for monkey or for any tailless, relatively human-like primate. Sir Wilfrid Le Gros Clark was one of the primatologists who developed the idea of trends in primate evolution and the methodology of arranging the living members of an order into an "ascending series" leading to humans. Commonly used names for groups of primates such as prosimians, monkeys, lesser apes, and great apes reflect this methodology. According to our current understanding of the evolutionary history of the primates, several of these groups are paraphyletic, or rather they do not include all the descendants of a common ancestor. In contrast with Clark's methodology, modern classifications typically identify (or name) only those groupings that are monophyletic; that is, such a named group includes all the descendants of the group's common ancestor. All groups with scientific names are clades, or monophyletic groups, and the sequence of scientific classification reflects the evolutionary history of the related lineages. Groups that are traditionally named are shown on the right; they form an "ascending series" (per Clark, see above), and several groups are paraphyletic: Prosimians contain two monophyletic groups (the suborder Strepsirrhini, or lemurs, lorises and allies, as well as the tarsiers of the suborder Haplorhini); it is a paraphyletic grouping because it excludes the Simiiformes, which also are descendants of the common ancestor Primates. Monkeys comprise two monophyletic groups, New World monkeys and Old World monkeys, but is paraphyletic because it excludes hominoids, superfamily Hominoidea, also descendants of the common ancestor Simiiformes. Apes as a whole, and the great apes, are paraphyletic if the terms are used such that they exclude humans. Thus, the members of the two sets of groups, and hence names, do not match, which causes problems in relating scientific names to common (usually traditional) names. Consider the superfamily Hominoidea: In terms of the common names on the right, this group consists of apes and humans and there is no single common name for all the members of the group. One remedy is to create a new common name, in this case hominoids. Another possibility is to expand the use of one of the traditional names. For example, in his 2005 book, the vertebrate palaeontologist Benton wrote, "The apes, Hominoidea, today include the gibbons and orangutan ... the gorilla and chimpanzee ... and humans"; thereby Benton was using apes to mean hominoids. In that case, the group heretofore called apes must now be identified as the non-human apes. , there is no consensus as to whether to accept traditional (that is, common), but paraphyletic, names or to use monophyletic names only; or to use 'new' common names or adaptations of old ones. Both competing approaches can be found in biological sources, often in the same work, and sometimes by the same author. Thus, Benton defines apes to include humans, then he repeatedly uses ape-like to mean 'like an ape rather than a human'; and when discussing the reaction of others to a new fossil he writes of "claims that Orrorin ... was an ape rather than a human". Classification of living primates Order Primates was established by Carl Linnaeus in 1758, in the tenth edition of his book Systema Naturae, for the genera Homo (humans), Simia (other apes and monkeys), Lemur (prosimians) and Vespertilio (bats). 
In the first edition of the same book (1735), he had used the name Anthropomorpha for Homo, Simia and Bradypus (sloths). In 1839, Henri Marie Ducrotay de Blainville, following Linnaeus and aping his nomenclature, established the orders Secundates (including the suborders Chiroptera, Insectivora and Carnivora), Tertiates (or Glires) and Quaternates (including Gravigrada, Pachydermata and Ruminantia), but these new taxa were not accepted. Before Anderson and Jones introduced the classification of Strepsirrhini and Haplorhini in 1984, (followed by McKenna and Bell's 1997 work Classification of Mammals: Above the species level), Primates was divided into two superfamilies: Prosimii and Anthropoidea. Prosimii included all of the prosimians: Strepsirrhini plus the tarsiers. Anthropoidea contained all of the simians. The cladogram below shows one possible classification sequence of the living primates: groups that use common (traditional) names are shown on the right. Phylogeny and genetics Order Primates is part of the clade Euarchontoglires, which is nested within the clade Eutheria of Class Mammalia. Recent molecular genetic research on primates, colugos, and treeshrews has shown that the two species of colugos are more closely related to primates than to treeshrews, even though treeshrews were at one time considered primates. These three orders make up the clade Euarchonta. The combination of this clade with the clade Glires (composed of Rodentia and Lagomorpha) forms the clade Euarchontoglires. Variously, both Euarchonta and Euarchontoglires are ranked as superorders. Some scientists consider Dermoptera to be a suborder of Primates and use the suborder Euprimates for the "true" primates. Evolutionary history The primate lineage is thought to go back at least near the Cretaceous–Paleogene boundary or around 74–63 (mya). The earliest possible primate/proto-primate may be Purgatorius, which dates back to Early Paleocene of North America ~66mya. The oldest known primates from the fossil record date to the Late Paleocene of Africa, c.57 mya (Altiatlasius) or the Paleocene-Eocene transition in the northern continents, c. 55 mya (Cantius, Donrussellia, Altanius, Plesiadapis and Teilhardina). Other studies, including molecular clock studies, have estimated the origin of the primate branch to have been in the mid-Cretaceous period, around 85 mya. By modern cladistic reckoning, the order Primates is monophyletic. The suborder Strepsirrhini, the "wet-nosed" primates, is generally thought to have split off from the primitive primate line about 63 mya, although earlier dates are also supported. The seven strepsirrhine families are the five related lemur families and the two remaining families that include the lorisids and the galagos. Older classification schemes wrap Lepilemuridae into Lemuridae and Galagidae into Lorisidae, yielding a four-one family distribution instead of five-two as presented here. During the Eocene, most of the northern continents were dominated by two groups, the adapiforms and the omomyids. The former are considered members of Strepsirrhini, but did not have a toothcomb like modern lemurs; recent analysis has demonstrated that Darwinius masillae fits into this grouping. The latter was closely related to tarsiers, monkeys, and apes. How these two groups relate to extant primates is unclear. Omomyids perished about 30 mya, while adapiforms survived until about 10 mya. According to genetic studies, the lemurs of Madagascar diverged from the lorisoids approximately 75 mya. 
These studies, as well as chromosomal and molecular evidence, also show that lemurs are more closely related to each other than to other strepsirrhine primates. However, Madagascar split from Africa 160 mya and from India 90 mya. To account for these facts, a founding lemur population of a few individuals is thought to have reached Madagascar from Africa via a single rafting event between 50 and 80 mya. Other colonization options have been suggested, such as multiple colonizations from Africa and India, but none are supported by the genetic and molecular evidence. Until recently, the aye-aye has been difficult to place within Strepsirrhini. Theories had been proposed that its family, Daubentoniidae, was either a lemuriform primate (meaning its ancestors split from the lemur line more recently than lemurs and lorises split) or a sister group to all the other strepsirrhines. In 2008, the aye-aye family was confirmed to be most closely related to the other Malagasy lemurs, likely having descended from the same ancestral population that colonized the island. Suborder Haplorhini, the simple-nosed or "dry-nosed" primates, is composed of two sister clades. Prosimian tarsiers in the family Tarsiidae (monotypic in its own infraorder Tarsiiformes), represent the most basal division, originating about 58 mya. The earliest known haplorhine skeleton, that of 55 MA old tarsier-like Archicebus, was found in central China, supporting an already suspected Asian origin for the group. The infraorder Simiiformes (simian primates, consisting of monkeys and apes) emerged about 40 mya, possibly also in Asia; if so, they dispersed across the Tethys Sea from Asia to Africa soon afterwards. There are two simian clades, both parvorders: Catarrhini, which developed in Africa, consisting of Old World monkeys, humans and the other apes, and Platyrrhini, which developed in South America, consisting of New World monkeys. A third clade, which included the eosimiids, developed in Asia, but became extinct millions of years ago. As in the case of lemurs, the origin of New World monkeys is unclear. Molecular studies of concatenated nuclear sequences have yielded a widely varying estimated date of divergence between platyrrhines and catarrhines, ranging from 33 to 70 mya, while studies based on mitochondrial sequences produce a narrower range of 35 to 43 mya. The anthropoid primates possibly traversed the Atlantic Ocean from Africa to South America during the Eocene by island hopping, facilitated by Atlantic Ocean ridges and a lowered sea level. Alternatively, a single rafting event may explain this transoceanic colonization. Due to continental drift, the Atlantic Ocean was not nearly as wide at the time as it is today. Research suggests that a small primate could have survived 13 days on a raft of vegetation. Given estimated current and wind speeds, this would have provided enough time to make the voyage between the continents. Apes and monkeys spread from Africa into Europe and Asia starting in the Miocene. Soon after, the lorises and tarsiers made the same journey. The first hominin fossils were discovered in northern Africa and date back 5–8 mya. Old World monkeys disappeared from Europe about 1.8 mya. Molecular and fossil studies generally show that modern humans originated in Africa 100,000–200,000 years ago. Although primates are well studied in comparison to other animal groups, several new species have been discovered recently, and genetic tests have revealed previously unrecognised species in known populations. 
Primate Taxonomy listed about 350 species of primates in 2001; the author, Colin Groves, increased that number to 376 for his contribution to the third edition of Mammal Species of the World (MSW3). However, publications since the taxonomy in MSW3 was compiled in 2003 have pushed the number to 522 species, or 708 including subspecies. Hybrids Primate hybrids usually arise in captivity, but there have also been examples in the wild. Hybridization occurs where two species' range overlap to form hybrid zones; hybrids may be created by humans when animals are placed in zoos or due to environmental pressures such as predation. Intergeneric hybridizations, hybrids of different genera, have also been found in the wild. Although they belong to genera that have been distinct for several million years, interbreeding still occurs between the gelada and the hamadryas baboon. Clones On 24 January 2018, scientists in China reported in the journal Cell the creation of two crab-eating macaque clones, named Zhong Zhong and Hua Hua, using the complex DNA transfer method that produced Dolly the sheep, for the first time. Anatomy and physiology Head The primate skull has a large, domed cranium, which is particularly prominent in anthropoids. The cranium protects the large brain, a distinguishing characteristic of this group. The endocranial volume (the volume within the skull) is three times greater in humans than in the greatest nonhuman primate, reflecting a larger brain size. The mean endocranial volume is 1,201 cubic centimeters in humans, 469 cm3 in gorillas, 400 cm3 in chimpanzees and 397 cm3 in orangutans. The primary evolutionary trend of primates has been the elaboration of the brain, in particular the neocortex (a part of the cerebral cortex), which is involved with sensory perception, generation of motor commands, spatial reasoning, conscious thought and, in humans, language. While other mammals rely heavily on their sense of smell, the arboreal life of primates has led to a tactile, visually dominant sensory system, a reduction in the olfactory region of the brain and increasingly complex social behavior. The visual acuity of humans and other hominids is exceptional; they have the most acute vision known among all vertebrates, with the exception of certain species of predatory birds. Primates have forward-facing eyes on the front of the skull; binocular vision allows accurate distance perception, useful for the brachiating ancestors of all great apes. A bony ridge above the eye sockets reinforces weaker bones in the face, which are put under strain during chewing. Strepsirrhines have a postorbital bar, a bone around the eye socket, to protect their eyes; in contrast, the higher primates, haplorhines, have evolved fully enclosed sockets. Primates show an evolutionary trend towards a reduced snout. Technically, Old World monkeys are distinguished from New World monkeys by the structure of the nose, and from apes by the arrangement of their teeth. In New World monkeys, the nostrils face sideways; in Old World monkeys, they face downwards. Dental pattern in primates vary considerably; although some have lost most of their incisors, all retain at least one lower incisor. In most strepsirrhines, the lower incisors form a toothcomb, which is used in grooming and sometimes foraging. Old World monkeys have eight premolars, compared with 12 in New World monkeys. 
The Old World species are divided into apes and monkeys depending on the number of cusps on their molars: monkeys have four and apes have five, although humans may have four or five. The main hominid molar cusp (hypocone) evolved in early primate history, while the cusp of the corresponding primitive lower molar (paraconid) was lost. Prosimians are distinguished by their immobilized upper lips, the moist tip of their noses and forward-facing lower front teeth. Body Primates generally have five digits on each limb (pentadactyly), with a characteristic type of keratin fingernail on the end of each finger and toe. The bottom sides of the hands and feet have sensitive pads on the fingertips. Most have opposable thumbs, a characteristic primate feature most developed in humans, though not limited to this order (opossums and koalas, for example, also have them). Thumbs allow some species to use tools. In primates, the combination of opposing thumbs, short fingernails (rather than claws) and long, inward-closing fingers is a relict of the ancestral practice of gripping branches, and has, in part, allowed some species to develop brachiation (swinging by the arms from tree limb to tree limb) as a significant means of locomotion. Prosimians have clawlike nails on the second toe of each foot, called toilet-claws, which they use for grooming. The primate collar bone is a prominent element of the pectoral girdle; this allows the shoulder joint broad mobility. Compared to Old World monkeys, apes have more mobile shoulder joints and arms due to the dorsal position of the scapula, broad ribcages that are flatter front to back, and a shorter, less mobile spine with greatly reduced lower vertebrae, resulting in tail loss in some species. Prehensile tails are found in the New World atelids, including the howler, spider, woolly spider and woolly monkeys, and in capuchins. Male primates have a low-hanging penis and testes descended into a scrotum. Sexual dimorphism Sexual dimorphism is often exhibited in simians, though to a greater degree in Old World species (apes and some monkeys) than New World species. Recent studies involve comparing DNA to examine both the variation in the expression of dimorphism among primates and the fundamental causes of sexual dimorphism. Primates usually have dimorphism in body mass and canine tooth size, along with pelage and skin color. The dimorphism can be attributed to and affected by different factors, including mating system, size, habitat and diet. Comparative analyses have generated a more complete understanding of the relationship between sexual selection, natural selection and mating systems in primates. Studies have shown that dimorphism is the product of changes in both male and female traits. Ontogenetic scaling, where relative extension of a common growth trajectory occurs, may give some insight into the relationship between sexual dimorphism and growth patterns. Some evidence from the fossil record suggests that there was convergent evolution of dimorphism, and some extinct hominids probably had greater dimorphism than any living primate. Locomotion Primate species move by brachiation, bipedalism, leaping, arboreal and terrestrial quadrupedalism, climbing, knuckle-walking or by a combination of these methods. Several prosimians are primarily vertical clingers and leapers. These include many bushbabies, all indriids (i.e., sifakas, avahis and indris), sportive lemurs, and all tarsiers. Other prosimians are arboreal quadrupeds and climbers. 
Some are also terrestrial quadrupeds, while some are leapers. Most monkeys are both arboreal and terrestrial quadrupeds and climbers. Gibbons, muriquis and spider monkeys all brachiate extensively, with gibbons sometimes doing so in remarkably acrobatic fashion. Woolly monkeys also brachiate at times. Orangutans use a similar form of locomotion called quadramanous climbing, in which they use their arms and legs to carry their heavy bodies through the trees. Chimpanzees and gorillas knuckle walk, and can move bipedally for short distances. Although numerous species, such as australopithecines and early hominids, have exhibited fully bipedal locomotion, humans are the only extant species with this trait. Vision The evolution of color vision in primates is unique among most eutherian mammals. While the remote vertebrate ancestors of the primates possessed three color vision (trichromaticism), the nocturnal, warm-blooded, mammalian ancestors lost one of three cones in the retina during the Mesozoic era. Fish, reptiles and birds are therefore trichromatic or tetrachromatic, while all mammals, with the exception of some primates and marsupials, are dichromats or monochromats (totally color blind). Nocturnal primates, such as the night monkeys and bush babies, are often monochromatic. Catarrhines are routinely trichromatic due to a gene duplication of the red-green opsin gene at the base of their lineage, 30 to 40 million years ago. Platyrrhines, on the other hand, are trichromatic in a few cases only. Specifically, individual females must be heterozygous for two alleles of the opsin gene (red and green) located on the same locus of the X chromosome. Males, therefore, can only be dichromatic, while females can be either dichromatic or trichromatic. Color vision in strepsirrhines is not as well understood; however, research indicates a range of color vision similar to that found in platyrrhines. Like catarrhines, howler monkeys (a family of platyrrhines) show routine trichromatism that has been traced to an evolutionarily recent gene duplication. Howler monkeys are one of the most specialized leaf-eaters of the New World monkeys; fruits are not a major part of their diets, and the type of leaves they prefer to consume (young, nutritive, and digestible) are detectable only by a red-green signal. Field work exploring the dietary preferences of howler monkeys suggests that routine trichromaticism was selected by environment. Behavior Social systems Richard Wrangham stated that social systems of primates are best classified by the amount of movement by females occurring between groups. He proposed four categories: Female transfer systems – females move away from the group in which they were born. Females of a group will not be closely related whereas males will have remained with their natal groups, and this close association may be influential in social behavior. The groups formed are generally quite small. This organization can be seen in chimpanzees, where the males, who are typically related, will cooperate in defense of the group's territory. Evidence of this social system has also been found among Neanderthal remains in Spain and in remains of Australopithecus and Paranthropus robustus groups in southern Africa. Among New World Monkeys, spider monkeys and muriquis use this system. Male transfer systems – while the females remain in their natal groups, the males will emigrate as adolescents. Group sizes are usually larger. 
This system is common among the ring-tailed lemur, capuchin monkeys and cercopithecine monkeys. Monogamous species – a male–female bond, sometimes accompanied by a juvenile offspring. There is shared responsibility of parental care and territorial defense. The offspring leaves the parents' territory during adolescence. Indri, lariang tarsiers, Callitrichidae monkeys and gibbons use this system, although "monogamy" in this context does not necessarily mean absolute sexual fidelity. These species do not live in larger groups. Solitary species – males and females live in overlapping home ranges. This type of organization is found in lorises, galagos, mouse lemurs, aye-ayes and orangutans. Other systems are known to occur as well. For example, with howler monkeys and gorillas both the males and females typically transfer from their natal group on reaching sexual maturity, resulting in groups in which neither the males nor females are typically related. Some prosimians, colobine monkeys and callitrichid monkeys also use this system. The transfer of females or males from their native group is likely an adaptation for avoiding inbreeding. An analysis of breeding records of captive primate colonies representing numerous different species indicates that the infant mortality of inbred young is generally higher than that of non-inbred young. This effect of inbreeding on infant mortality is probably largely a result of increased expression of deleterious recessive alleles (see Inbreeding depression). Primatologist Jane Goodall, who studied in the Gombe Stream National Park, noted fission-fusion societies in chimpanzees. There is fission when the main group splits up to forage during the day, then fusion when the group returns at night to sleep as a group. This social structure can also be observed in the hamadryas baboon, spider monkeys and the bonobo. The gelada has a similar social structure in which many smaller groups come together to form temporary herds of up to 600 monkeys. Humans also form fission-fusion societies. In hunter-gatherer societies, humans form groups which are made up of several individuals that may split up to obtain different resources. These social systems are affected by three main ecological factors: distribution of resources, group size, and predation. Within a social group there is a balance between cooperation and competition. Cooperative behaviors in many primates species include social grooming (removing skin parasites and cleaning wounds), food sharing, and collective defense against predators or of a territory. Aggressive behaviors often signal competition for food, sleeping sites or mates. Aggression is also used in establishing dominance hierarchies. In November 2023, scientists reported, for the first time, evidence that groups of primates, particularly bonobos, are capable of cooperating with each other. Interspecific associations Several species of primates are known to associate in the wild. Some of these associations have been extensively studied. In the Tai Forest of Africa, several species coordinate anti-predator behavior. These include the Diana monkey, Campbell's mona monkey, lesser spot-nosed monkey, western red colobus, king colobus (western black and white colobus), and sooty mangabey, which coordinate anti-predator alarm calls. Among the predators of these monkeys is the common chimpanzee. 
The red-tailed monkey associates with several species, including the western red colobus, blue monkey, Wolf's mona monkey, mantled guereza, black crested mangabey and Allen's swamp monkey. Several of these species are preyed upon by the common chimpanzee. In South America, squirrel monkeys associate with capuchin monkeys. This may have more to do with foraging benefits to the squirrel monkeys than anti-predation benefits. Mating The mating systems of primates vary between monogamy, polyandry, polygyny and polygynandry. In monogamous species, adult males and females form long-lasting pair bonds. Compared to other systems, there is little competition for mating rights, and males and females tend to be similar in size. Polyandry, which involves groups consisting of single females mating with multiple males, may arise as a secondary mating system in monogamous species. In the brown-mantled tamarin, a female may breed with one or two males. Polyandry may have developed due to the high frequency of twin births, which require more help to raise. Polygynous species include gorillas, Hanuman langurs, geladas, hamadryas baboons, proboscis monkeys and golden snub-nosed monkeys; in these species, one male mates with multiple females within a harem or one-male unit. Sexual dimorphism tends to be higher in these species, and males may also develop prominent secondary sex characteristics. In the patriarchal hamadryas baboon, the males aggressively herd females into their groups and violently discipline those that wander. By contrast, in gelada society, which is based on female kinship, a male is dependent on the support of the females in his unit. Males of these species must defend their harems from other males, who may try to take over. In some species, such as ring-tailed lemurs, sifakas, macaques, most baboons, mangabeys, squirrel monkeys, woolly monkeys, spider monkeys, woolly spider monkeys, chimpanzees and bonobos, both males and females mate with multiple partners. Polygynandry occurs in multimale-multifemale groups, and since females mate many times before conception, males have large testicles for sperm competition. Males may exist in a dominance hierarchy, and those at the top will try to monopolize access to the females. Consortships may occur in some species, but these are short-term. In solitary-living species, males and females mate with partners whose home ranges overlap their own. Copulation in primates typically involves the males mounting the females from behind, as with most mammals. Belly-to-belly copulation has been recorded in apes, both gibbons and the great apes. Human sex positions are modifications of these two positions. Primates may engage in sexual activity as part of social bonding, including homosexual behavior. Such behavior plays an important role in bonobo society in particular. Female bonobos engage in mutual genital-rubbing behavior, possibly to bond socially with each other, thus forming a female nucleus of bonobo society. The bonding among females enables them to dominate most of the males. Life history Primates have slower rates of development than other mammals. All primate infants are breastfed by their mothers (with the exception of some human cultures and various zoo-raised primates which are fed formula) and rely on them for grooming and transportation. In some species, infants are protected and transported by males in the group, particularly males who may be their fathers. 
Other relatives of the infant, such as siblings and aunts, may participate in its care as well. Most primate mothers cease ovulation while breastfeeding an infant; once the infant is weaned the mother can reproduce again. This often leads to weaning conflict with infants who attempt to continue breastfeeding. Infanticide is common in polygynous species such as gray langurs and gorillas. Adult males may kill dependent offspring that are not theirs so that the female returns to estrus and they can sire offspring of their own. Social monogamy in some species may have evolved to combat this behavior. Polygynandry may also lessen the risk of infanticide since paternity becomes uncertain. Primates have a longer juvenile period between weaning and sexual maturity than other mammals of similar size. Some primates, such as galagos and New World monkeys, use tree-holes for nesting and park juveniles in leafy patches while foraging. Other primates follow a strategy of "riding", i.e. carrying individuals on the body while feeding. Adults may construct or use nesting sites, sometimes accompanied by juveniles, for the purpose of resting, a behavior which has developed secondarily in the great apes. During the juvenile period, primates are more susceptible than adults to predation and starvation; they gain experience in feeding and avoiding predators during this time. They learn social and fighting skills, often through playing. Primates, especially females, have longer lifespans than other similarly sized mammals; this may be partially due to their slower metabolisms. Late in life, female catarrhine primates appear to undergo a cessation of reproductive function known as menopause; other groups are less studied. Diet and feeding Primates exploit a variety of food sources. It has been said that many characteristics of modern primates, including humans, derive from an early ancestor's practice of taking most of its food from the tropical canopy. Most primates include fruit in their diets to obtain easily digested nutrients, including carbohydrates and lipids, for energy. Primates in the suborder Strepsirrhini (non-tarsier prosimians) are able to synthesize vitamin C, like most other mammals, while primates of the suborder Haplorhini (tarsiers, monkeys and apes) have lost this ability and require the vitamin in their diet. Many primates have anatomical specializations that enable them to exploit particular foods, such as fruit, leaves, gum or insects. For example, leaf eaters such as howler monkeys, black-and-white colobuses and sportive lemurs have extended digestive tracts which enable them to absorb nutrients from leaves that can be difficult to digest. Marmosets, which are gum eaters, have strong incisor teeth, enabling them to open tree bark to get to the gum, and claws rather than nails, enabling them to cling to trees while feeding. The aye-aye combines rodent-like teeth with a long, thin middle finger to fill the same ecological niche as a woodpecker. It taps on trees to find insect larvae, then gnaws holes in the wood and inserts its elongated middle finger to pull the larvae out. Some species have additional specializations. For example, the grey-cheeked mangabey has thick enamel on its teeth, enabling it to open hard fruits and seeds that other monkeys cannot. The gelada is the only primate species that feeds primarily on grass. Hunting Tarsiers are the only extant obligate carnivorous primates, exclusively eating insects, crustaceans, small vertebrates and snakes (including venomous species). 
Capuchin monkeys can exploit many different types of plant matter, including fruit, leaves, flowers, buds, nectar and seeds, but also eat insects and other invertebrates, bird eggs, and small vertebrates such as birds, lizards, squirrels and bats. The common chimpanzee eats an omnivorous frugivorous diet. It prefers fruit above all other food items and even seeks out and eats them when they are not abundant. It also eats leaves and leaf buds, seeds, blossoms, stems, pith, bark and resin. Insects and meat make up a small proportion of their diet, estimated as 2%. The meat consumption includes predation on other primate species, such as the western red colobus monkey. The bonobo is an omnivorous frugivore – the majority of its diet is fruit, but it supplements this with leaves, meat from small vertebrates, such as anomalures, flying squirrels and duikers, and invertebrates. In some instances, bonobos have been shown to consume lower-order primates. Until the development of agriculture approximately 10,000 years ago, Homo sapiens employed a hunter-gatherer method as their sole means of food collection. This involved combining stationary food sources (such as fruits, grains, tubers, and mushrooms, insect larvae and aquatic mollusks) with wild game, which must be hunted and killed in order to be consumed. It has been proposed that humans have used fire to prepare and cook food since the time of Homo erectus. Around ten thousand years ago, humans developed agriculture, which substantially altered their diet. This change in diet may also have altered human biology; with the spread of dairy farming providing a new and rich source of food, leading to the evolution of the ability to digest lactose in some adults. As prey Predators of primates include various species of carnivorans, birds of prey, reptiles, and other primates. Even gorillas have been recorded as prey. Predators of primates have diverse hunting strategies and as such, primates have evolved several different antipredator adaptations including crypsis, alarm calls and mobbing. Several species have separate alarm calls for different predators such as air-borne or ground-dwelling predators. Predation may have shaped group size in primates as species exposed to higher predation pressures appear to live in larger groups. Communication Lemurs, lorises, tarsiers, and New World monkeys rely on olfactory signals for many aspects of social and reproductive behavior. Specialized glands are used to mark territories with pheromones, which are detected by the vomeronasal organ; this process forms a large part of the communication behavior of these primates. In Old World monkeys and apes this ability is mostly vestigial, having regressed as trichromatic eyes evolved to become the main sensory organ. Primates also use vocalizations, gestures, and facial expressions to convey psychological state. Facial musculature is very developed in primates, particularly in monkeys and apes, allowing for complex facial communication. Like humans, chimpanzees can distinguish the faces of familiar and unfamiliar individuals. Hand and arm gestures are also important forms of communication for great apes and a single gesture can have multiple functions. Chest-beating in male gorillas is a form of non-vocal sound communication that serves to show fitness to both rivals and females. The sounds produced may vary in frequency depending on the ape's size. Primates are a particularly vocal group of mammals. 
Indris and black-and-white ruffed lemurs make distinctive, loud songs and choruses which maintain territories and act as alarm calls. The Philippine tarsier has a high-frequency limit of auditory sensitivity of approximately 91 kHz with a dominant frequency of 70 kHz, among the highest recorded for any terrestrial mammal. For Philippine tarsiers, these ultrasonic vocalizations might represent a private channel of communication that subverts detection by predators, prey and competitors, enhances energetic efficiency, or improves detection against low-frequency background noise. Male howler monkeys are among the loudest land mammals as their roars can be heard up to , and relate to intergroup spacing, territorial protection and possibly mate-guarding. Roars are produced by a modified larynx and enlarged hyoid bone which contains an air sac. The vervet monkey gives a distinct alarm call for each of at least four different predators, and the reactions of other monkeys vary according to the call. Furthermore, many primate species, including chimpanzees, Campbell's mona monkeys and Diana monkeys, have been shown to combine vocalizations in sequences, suggesting that syntax may not be uniquely human, as previously thought, but rather evolutionarily ancient, and that its origins may be deeply rooted in the primate lineage. Male and female siamangs both possess inflatable throat pouches, which pair-bonded individuals use to sing "duets" to each other. Many non-human primates have the vocal anatomy to produce human speech but lack the proper brain wiring. Vowel-like vocal patterns have been recorded in baboons, which has implications for the origin of speech in humans. Consonant- and vowel-like sounds exist in some orangutan calls and they maintain their meaning over great distances. The time range for the evolution of human language and/or its anatomical prerequisites extends, at least in principle, from the phylogenetic divergence of Homo (2.3 to 2.4 million years ago) from Pan (5 to 6 million years ago) to the emergence of full behavioral modernity some 50,000–150,000 years ago. Few dispute that Australopithecus probably lacked vocal communication significantly more sophisticated than that of great apes in general. Intelligence and cognition Primates have advanced cognitive abilities: some make tools and use them to acquire food and for social displays; some can perform tasks requiring cooperation, influence and rank; they are status conscious, manipulative and capable of deception; they can recognize kin and conspecifics; and they can learn to use symbols and understand aspects of human language including some relational syntax and concepts of number and numerical sequence. Research in primate cognition explores problem solving, memory, social interaction, a theory of mind, and numerical, spatial, and abstract concepts. Comparative studies show a trend towards higher intelligence going from prosimians to New World monkeys to Old World monkeys, and significantly higher average cognitive abilities in the great apes. However, there is a great deal of variation in each group (e.g., among New World monkeys, both spider and capuchin monkeys have scored highly by some measures), as well as in the results of different studies. Tool use and manufacture In 1960, Jane Goodall observed a chimpanzee poking pieces of grass into a termite mound and then raising the grass to his mouth. After he left, Goodall approached the mound and repeated the behavior because she was unsure what the chimpanzee was doing. 
She found that the termites bit onto the grass with their jaws. The chimpanzee had been using the grass as a tool to "fish" or "dip" for termites. There are more limited reports of the closely related bonobo using tools in the wild; it has been claimed they rarely use tools in the wild although they use tools as readily as chimpanzees when in captivity. It has been reported that females, both chimpanzee and bonobo, use tools more avidly than males. Orangutans in Borneo scoop catfish out of small ponds. Over two years, anthropologist Anne Russon observed orangutans learning to jab sticks at catfish to scare them out of the ponds and in to their waiting hands. There are few reports of gorillas using tools in the wild. An adult female western lowland gorilla used a branch as a walking stick apparently to test water depth and to aid her in crossing a pool of water. Another adult female used a detached trunk from a small shrub as a stabilizer during food gathering, and another used a log as a bridge. The first direct observation of a non-ape primate using a tool in a wild environment occurred in 1988. Primatologist Sue Boinski watched an adult male white-faced capuchin beat a fer-de-lance snake to death with a dead branch. The black-striped capuchin was the first non-ape primate for which routine tool use was documented in the wild; individuals were observed cracking nuts by placing them on a stone anvil and hitting them with another large stone. In Thailand and Myanmar, crab-eating macaques use stone tools to open nuts, oysters and other bivalves, and various types of sea snails. Chacma baboons use stones as weapons; stoning by these baboons is done from the rocky walls of the canyon where they sleep and retreat to when they are threatened. Stones are lifted with one hand and dropped over the side whereupon they tumble down the side of the cliff or fall directly to the canyon floor. Although they have not been observed to use tools in the wild, lemurs in controlled settings have been shown to be capable of understanding the functional properties of the objects they had been trained to use as tools, performing as well as tool-using haplorhines. Soon after her initial discovery of tool use, Goodall observed other chimpanzees picking up leafy twigs, stripping off the leaves and using the stems to fish for insects. This change of a leafy twig into a tool was a major discovery. Prior to this, scientists thought that only humans manufactured and used tools, and that this ability was what separated humans from other animals. Chimpanzees have also been observed making "sponges" out of leaves and moss that suck up water. Sumatran orangutans have been observed making and using tools. They will break off a tree branch that is about 30 cm long, snap off the twigs, fray one end and then use the stick to dig in tree holes for termites. In the wild, mandrills have been observed to clean their ears with modified tools. Scientists filmed a large male mandrill at Chester Zoo (UK) stripping down a twig, apparently to make it narrower, and then using the modified stick to scrape dirt from underneath its toenails. Captive gorillas have made a variety of tools. Ecology Non-human primates primarily live in the tropical latitudes of Africa, Asia, and the Americas. Species that live outside of the tropics include the Japanese macaque which lives in the Japanese islands of Honshū and Hokkaido; the Barbary macaque which lives in North Africa and several species of langur which live in China. 
Primates tend to live in tropical rainforests but are also found in temperate forests, savannas, deserts, mountains and coastal areas. The number of primate species within tropical areas has been shown to be positively correlated to the amount of rainfall and the amount of rain forest area. Accounting for 25% to 40% of the fruit-eating animals (by weight) within tropical rainforests, primates play an important ecological role by dispersing seeds of many tree species. Primate habitats span a range of altitudes: the black snub-nosed monkey has been found living in the Hengduan Mountains at altitudes of 4,700 meters (15,400 ft), the mountain gorilla can be found at 4,200 meters (13,200 ft) crossing the Virunga Mountains, and the gelada has been found at elevations of up to in the Ethiopian Highlands. Some species interact with aquatic environments and may swim or even dive, including the proboscis monkey, De Brazza's monkey and Allen's swamp monkey. Some primates, such as the rhesus macaque and gray langurs, can exploit human-modified environments and even live in cities. Interactions between humans and other primates Disease transmission Close interactions between humans and non-human primates (NHPs) can create pathways for the transmission of zoonotic diseases. Viruses such as Herpesviridae (most notably Herpes B Virus), Poxviridae, measles, ebola, rabies, the Marburg virus and viral hepatitis can be transmitted to humans; in some cases the viruses produce potentially fatal diseases in both humans and non-human primates. Legal and social status Only humans are recognized as persons and protected in law by the United Nations Universal Declaration of Human Rights. The legal status of NHPs, on the other hand, is the subject of much debate, with organizations such as the Great Ape Project (GAP) campaigning to award at least some of them legal rights. In June 2008, Spain became the first country in the world to recognize the rights of some NHPs, when its parliament's cross-party environmental committee urged the country to comply with GAP's recommendations, which are that chimpanzees, orangutans and gorillas are not to be used for animal experiments. Many species of NHP are kept as pets by humans. The Allied Effort to Save Other Primates (AESOP) estimates that around 15,000 NHPs live as exotic pets in the United States. The expanding Chinese middle class has increased demand for NHPs as exotic pets in recent years. Although NHP import for the pet trade was banned in the U.S. in 1975, smuggling still occurs along the United States – Mexico border, with prices ranging from US$3000 for monkeys to $30,000 for apes. Primates are used as model organisms in laboratories and have been used in space missions. They serve as service animals for disabled humans. Capuchin monkeys can be trained to assist quadriplegic humans; their intelligence, memory, and manual dexterity make them ideal helpers. NHPs are kept in zoos around the globe. Historically, zoos were primarily a form of entertainment, but more recently have shifted their focus towards conservation, education and research. GAP does not insist that all NHPs should be released from zoos, primarily because captive-born primates lack the knowledge and experience to survive in the wild if released. Role in scientific research Thousands of non-human primates are used around the world in research because of their psychological and physiological similarity to humans. 
In particular, the brains and eyes of NHPs more closely parallel human anatomy than those of any other animals. NHPs are commonly used in preclinical trials, neuroscience, ophthalmology studies, and toxicity studies. Rhesus macaques are often used, as are other macaques, African green monkeys, chimpanzees, baboons, squirrel monkeys, and marmosets, both wild-caught and purpose-bred. In 2005, GAP reported that 1,280 of the 3,100 NHPs living in captivity in the United States were used for experiments. In 2004, the European Union used around 10,000 NHPs in such experiments; in 2005 in Great Britain, 4,652 experiments were conducted on 3,115 NHPs. Governments of many nations have strict care requirements of NHPs kept in captivity. In the US, federal guidelines extensively regulate aspects of NHP housing, feeding, enrichment, and breeding. European groups such as the European Coalition to End Animal Experiments are seeking a ban on all NHP use in experiments as part of the European Union's review of animal testing legislation. Extinction threats The International Union for Conservation of Nature (IUCN) lists more than a third of primates as critically endangered or vulnerable. About 60% of primate species are threatened with extinction, including: 87% of species in Madagascar, 73% in Asia, 37% in Africa, and 36% in South and Central America. Additionally, 75% of primate species have decreasing populations. Trade is regulated, as all species are listed by CITES in Appendix II, except 50 species and subspecies listed in Appendix I, which gain full protection from trade. Common threats to primate species include deforestation, forest fragmentation, monkey drives (resulting from primate crop raiding), and primate hunting for use in medicines, as pets, and for food. Large-scale tropical forest clearing is widely regarded as the process that most threatens primates. More than 90% of primate species occur in tropical forests. The main cause of forest loss is clearing for agriculture, although commercial logging, subsistence harvesting of timber, mining, and dam construction also contribute to tropical forest destruction. In Indonesia large areas of lowland forest have been cleared to increase palm oil production, and one analysis of satellite imagery concluded that during 1998 and 1999 there was a loss of 1,000 Sumatran orangutans per year in the Leuser Ecosystem alone. Primates with a large body size (over 5 kg) are at increased extinction risk due to their greater profitability to poachers compared to smaller primates. They reach sexual maturity later and have a longer period between births. Populations therefore recover more slowly after being depleted by poaching or the pet trade. Data for some African cities show that half of all protein consumed in urban areas comes from the bushmeat trade. Endangered primates such as guenons and the drill are hunted at levels that far exceed sustainable levels. This is due to their large body size, ease of transport and profitability per animal. As farming encroaches on forest habitats, primates feed on the crops, causing the farmers large economic losses. Primate crop raiding gives locals a negative impression of primates, hindering conservation efforts. Madagascar, home to five endemic primate families, has experienced the greatest extinction of the recent past; since human settlement 1,500 years ago, at least eight classes and fifteen of the larger species have become extinct due to hunting and habitat destruction. 
Among the primates wiped out were Archaeoindris (a lemur larger than a silverback gorilla) and the families Palaeopropithecidae and Archaeolemuridae. In Asia, Hinduism, Buddhism, and Islam prohibit eating primate meat; however, primates are still hunted for food. Some smaller traditional religions allow the consumption of primate meat. The pet trade and traditional medicine also increase demand for illegal hunting. The rhesus macaque, a model organism, was protected after excessive trapping threatened its numbers in the 1960s; the program was so effective that they are now viewed as a pest throughout their range. In Central and South America, forest fragmentation and hunting are the two main problems for primates. Large tracts of forest are now rare in Central America. This increases the amount of forest vulnerable to edge effects such as farmland encroachment, lower levels of humidity and a change in plant life. Movement restriction results in a greater amount of inbreeding, which can cause deleterious effects leading to a population bottleneck, whereby a significant percentage of the population is lost. There are 21 critically endangered primates, seven of which have remained on the IUCN's "The World's 25 Most Endangered Primates" list since the year 2000: the silky sifaka, Delacour's langur, the white-headed langur, the gray-shanked douc, the Tonkin snub-nosed monkey, the Cross River gorilla and the Sumatran orangutan. Miss Waldron's red colobus was recently declared extinct when no trace of the subspecies could be found from 1993 to 1999. A few hunters have found and killed individuals since then, but the subspecies' prospects remain bleak.
Biology and health sciences
Biology
null
23000
https://en.wikipedia.org/wiki/Polynomial
Polynomial
In mathematics, a polynomial is a mathematical expression consisting of indeterminates (also called variables) and coefficients, that involves only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms. An example of a polynomial of a single indeterminate is . An example with three indeterminates is . Polynomials appear in many areas of mathematics and science. For example, they are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated scientific problems; they are used to define polynomial functions, which appear in settings ranging from basic chemistry and physics to economics and social science; and they are used in calculus and numerical analysis to approximate other functions. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, which are central concepts in algebra and algebraic geometry. Etymology The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. That is, it means a sum of many terms (many monomials). The word polynomial was first used in the 17th century. Notation and terminology The x occurring in a polynomial is commonly called a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value (its value is "indeterminate"). However, when one considers the function defined by the polynomial, then x represents the argument of the function, and is therefore called a "variable". Many authors use these two words interchangeably. A polynomial P in the indeterminate x is commonly denoted either as P or as P(x). Formally, the name of the polynomial is P, not P(x), but the use of the functional notation P(x) dates from a time when the distinction between a polynomial and the associated function was unclear. Moreover, the functional notation is often useful for specifying, in a single phrase, a polynomial and its indeterminate. For example, "let P(x) be a polynomial" is a shorthand for "let P be a polynomial in the indeterminate x". On the other hand, when it is not necessary to emphasize the name of the indeterminate, many formulas are much simpler and easier to read if the name(s) of the indeterminate(s) do not appear at each occurrence of the polynomial. The ambiguity of having two notations for a single mathematical object may be formally resolved by considering the general meaning of the functional notation for polynomials. If a denotes a number, a variable, another polynomial, or, more generally, any expression, then P(a) denotes, by convention, the result of substituting a for x in P. Thus, the polynomial P defines the function which is the polynomial function associated to P. Frequently, when using this notation, one supposes that a is a number. However, one may use it over any domain where addition and multiplication are defined (that is, any ring). In particular, if a is a polynomial then P(a) is also a polynomial. More specifically, when a is the indeterminate x, then the image of x by this function is the polynomial P itself (substituting x for x does not change anything). In other words, which justifies formally the existence of two notations for the same polynomial. 
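To make the substitution convention concrete, the following is a minimal illustrative sketch in Python; the representation and names are ours, not part of the article. A univariate polynomial is stored as its list of coefficients, and P(a) is obtained by substituting the number a for the indeterminate and carrying out the indicated operations.

# Hedged sketch (assumed representation): a polynomial P is stored as its
# coefficient list [a0, a1, ..., an], where coeffs[k] is the coefficient of x**k.
def P(a, coeffs=(7, -4, 1)):
    # Substituting a for x in P(x) = 7 - 4x + x^2 and evaluating.
    return sum(c * a**k for k, c in enumerate(coeffs))

print(P(2))   # P(2) = 7 - 8 + 4 = 3
print(P(0))   # the constant term, 7

The same call works when a is any object supporting addition and multiplication, which mirrors the remark above that P(a) makes sense over any ring.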
Definition A polynomial expression is an expression that can be built from constants and symbols called variables or indeterminates by means of addition, multiplication and exponentiation to a non-negative integer power. The constants are generally numbers, but may be any expression that do not involve the indeterminates, and represent mathematical objects that can be added and multiplied. Two polynomial expressions are considered as defining the same polynomial if they may be transformed, one to the other, by applying the usual properties of commutativity, associativity and distributivity of addition and multiplication. For example and are two polynomial expressions that represent the same polynomial; so, one has the equality . A polynomial in a single indeterminate can always be written (or rewritten) in the form where are constants that are called the coefficients of the polynomial, and is the indeterminate. The word "indeterminate" means that represents no particular value, although any value may be substituted for it. The mapping that associates the result of this substitution to the substituted value is a function, called a polynomial function. This can be expressed more concisely by using summation notation: That is, a polynomial can either be zero or can be written as the sum of a finite number of non-zero terms. Each term consists of the product of a number called the coefficient of the term and a finite number of indeterminates, raised to non-negative integer powers. Classification The exponent on an indeterminate in a term is called the degree of that indeterminate in that term; the degree of the term is the sum of the degrees of the indeterminates in that term, and the degree of a polynomial is the largest degree of any term with nonzero coefficient. Because , the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial. The degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial 0 (which has no terms at all) is generally treated as not defined (but see below). For example: is a term. The coefficient is , the indeterminates are and , the degree of is two, while the degree of is one. The degree of the entire term is the sum of the degrees of each indeterminate in it, so in this example the degree is . Forming a sum of several terms produces a polynomial. For example, the following is a polynomial: It consists of three terms: the first is degree two, the second is degree one, and the third is degree zero. Polynomials of small degree have been given specific names. A polynomial of degree zero is a constant polynomial, or simply a constant. Polynomials of degree one, two or three are respectively linear polynomials, quadratic polynomials and cubic polynomials. For higher degrees, the specific names are not commonly used, although quartic polynomial (for degree four) and quintic polynomial (for degree five) are sometimes used. The names for the degrees may be applied to the polynomial or to its terms. For example, the term in is a linear term in a quadratic polynomial. The polynomial 0, which may be considered to have no terms at all, is called the zero polynomial. Unlike other constant polynomials, its degree is not zero. Rather, the degree of the zero polynomial is either left explicitly undefined, or defined as negative (either −1 or −∞). 
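As an illustration of the degree rules just described, here is a short sketch under an assumed representation of our own (not the article's): each term is a coefficient together with a map from indeterminates to exponents, the degree of a term is the sum of its exponents, and the degree of the polynomial is the largest degree among terms with nonzero coefficient, with the zero polynomial left undefined.

# Hedged sketch: terms as (coefficient, {indeterminate: exponent, ...}) pairs.
def degree(terms):
    degs = [sum(exps.values()) for coeff, exps in terms if coeff != 0]
    return max(degs) if degs else None   # degree of the zero polynomial left undefined

print(degree([(-5, {'x': 2, 'y': 1})]))                    # a single term of degree 2 + 1 = 3
print(degree([(3, {'x': 2}), (-5, {'x': 1}), (4, {})]))    # terms of degree 2, 1 and 0 -> 2
print(degree([]))                                          # None for the zero polynomial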
The zero polynomial is also unique in that it is the only polynomial in one indeterminate that has an infinite number of roots. The graph of the zero polynomial, , is the x-axis. In the case of polynomials in more than one indeterminate, a polynomial is called homogeneous of if all of its non-zero terms have . The zero polynomial is homogeneous, and, as a homogeneous polynomial, its degree is undefined. For example, is homogeneous of degree 5. For more details, see Homogeneous polynomial. The commutative law of addition can be used to rearrange terms into any preferred order. In polynomials with one indeterminate, the terms are usually ordered according to degree, either in "descending powers of ", with the term of largest degree first, or in "ascending powers of ". The polynomial is written in descending powers of . The first term has coefficient , indeterminate , and exponent . In the second term, the coefficient . The third term is a constant. Because the degree of a non-zero polynomial is the largest degree of any one term, this polynomial has degree two. Two terms with the same indeterminates raised to the same powers are called "similar terms" or "like terms", and they can be combined, using the distributive law, into a single term whose coefficient is the sum of the coefficients of the terms that were combined. It may happen that this makes the coefficient 0. Polynomials can be classified by the number of terms with nonzero coefficients, so that a one-term polynomial is called a monomial, a two-term polynomial is called a binomial, and a three-term polynomial is called a trinomial. A real polynomial is a polynomial with real coefficients. When it is used to define a function, the domain is not so restricted. However, a real polynomial function is a function from the reals to the reals that is defined by a real polynomial. Similarly, an integer polynomial is a polynomial with integer coefficients, and a complex polynomial is a polynomial with complex coefficients. A polynomial in one indeterminate is called a univariate polynomial, a polynomial in more than one indeterminate is called a multivariate polynomial. A polynomial with two indeterminates is called a bivariate polynomial. These notions refer more to the kind of polynomials one is generally working with than to individual polynomials; for instance, when working with univariate polynomials, one does not exclude constant polynomials (which may result from the subtraction of non-constant polynomials), although strictly speaking, constant polynomials do not contain any indeterminates at all. It is possible to further classify multivariate polynomials as bivariate, trivariate, and so on, according to the maximum number of indeterminates allowed. Again, so that the set of objects under consideration be closed under subtraction, a study of trivariate polynomials usually allows bivariate polynomials, and so on. It is also common to say simply "polynomials in , and ", listing the indeterminates allowed. Operations Addition and subtraction Polynomials can be added using the associative law of addition (grouping all their terms together into a single sum), possibly followed by reordering (using the commutative law) and combining of like terms. For example, if and then the sum can be reordered and regrouped as and then simplified to When polynomials are added together, the result is another polynomial. Subtraction of polynomials is similar. Multiplication Polynomials can also be multiplied. 
To expand the product of two polynomials into a sum of terms, the distributive law is repeatedly applied, which results in each term of one polynomial being multiplied by every term of the other. For example, if then Carrying out the multiplication in each term produces Combining similar terms yields which can be simplified to As in the example, the product of polynomials is always a polynomial. Composition Given a polynomial of a single variable and another polynomial of any number of variables, the composition is obtained by substituting each copy of the variable of the first polynomial by the second polynomial. For example, if and then A composition may be expanded to a sum of terms using the rules for multiplication and division of polynomials. The composition of two polynomials is another polynomial. Division The division of one polynomial by another is not typically a polynomial. Instead, such ratios are a more general family of objects, called rational fractions, rational expressions, or rational functions, depending on context. This is analogous to the fact that the ratio of two integers is a rational number, not necessarily an integer. For example, the fraction is not a polynomial, and it cannot be written as a finite sum of powers of the variable . For polynomials in one variable, there is a notion of Euclidean division of polynomials, generalizing the Euclidean division of integers. This notion of the division results in two polynomials, a quotient and a remainder , such that and . The quotient and remainder may be computed by any of several algorithms, including polynomial long division and synthetic division. When the denominator is monic and linear, that is, for some constant , then the polynomial remainder theorem asserts that the remainder of the division of by is the evaluation . In this case, the quotient may be computed by Ruffini's rule, a special case of synthetic division. Factoring All polynomials with coefficients in a unique factorization domain (for example, the integers or a field) also have a factored form in which the polynomial is written as a product of irreducible polynomials and a constant. This factored form is unique up to the order of the factors and their multiplication by an invertible constant. In the case of the field of complex numbers, the irreducible factors are linear. Over the real numbers, they have the degree either one or two. Over the integers and the rational numbers the irreducible factors may have any degree. For example, the factored form of is over the integers and the reals, and over the complex numbers. The computation of the factored form, called factorization is, in general, too difficult to be done by hand-written computation. However, efficient polynomial factorization algorithms are available in most computer algebra systems. Calculus Calculating derivatives and integrals of polynomials is particularly simple, compared to other kinds of functions. The derivative of the polynomial with respect to is the polynomial Similarly, the general antiderivative (or indefinite integral) of is where is an arbitrary constant. For example, antiderivatives of have the form . For polynomials whose coefficients come from more abstract settings (for example, if the coefficients are integers modulo some prime number , or elements of an arbitrary ring), the formula for the derivative can still be interpreted formally, with the coefficient understood to mean the sum of copies of . 
For example, over the integers modulo , the derivative of the polynomial is the polynomial . Polynomial functions A polynomial function is a function that can be defined by evaluating a polynomial. More precisely, a function of one argument from a given domain is a polynomial function if there exists a polynomial that evaluates to for all in the domain of (here, is a non-negative integer and are constant coefficients). Generally, unless otherwise specified, polynomial functions have complex coefficients, arguments, and values. In particular, a polynomial, restricted to have real coefficients, defines a function from the complex numbers to the complex numbers. If the domain of this function is also restricted to the reals, the resulting function is a real function that maps reals to reals. For example, the function , defined by is a polynomial function of one variable. Polynomial functions of several variables are similarly defined, using polynomials in more than one indeterminate, as in According to the definition of polynomial functions, there may be expressions that obviously are not polynomials but nevertheless define polynomial functions. An example is the expression which takes the same values as the polynomial on the interval , and thus both expressions define the same polynomial function on this interval. Every polynomial function is continuous, smooth, and entire. The evaluation of a polynomial is the computation of the corresponding polynomial function; that is, the evaluation consists of substituting a numerical value to each indeterminate and carrying out the indicated multiplications and additions. For polynomials in one indeterminate, the evaluation is usually more efficient (lower number of arithmetic operations to perform) using Horner's method, which consists of rewriting the polynomial as Graphs A polynomial function in one real variable can be represented by a graph. The graph of the zero polynomial is the -axis. The graph of a degree 0 polynomial is a horizontal line with The graph of a degree 1 polynomial (or linear function) is an oblique line with and slope . The graph of a degree 2 polynomial is a parabola. The graph of a degree 3 polynomial is a cubic curve. The graph of any polynomial with degree 2 or greater is a continuous non-linear curve. A non-constant polynomial function tends to infinity when the variable increases indefinitely (in absolute value). If the degree is higher than one, the graph does not have any asymptote. It has two parabolic branches with vertical direction (one branch for positive x and one for negative x). Polynomial graphs are analyzed in calculus using intercepts, slopes, concavity, and end behavior. Equations A polynomial equation, also called an algebraic equation, is an equation of the form For example, is a polynomial equation. When considering equations, the indeterminates (variables) of polynomials are also called unknowns, and the solutions are the possible values of the unknowns for which the equality is true (in general more than one solution may exist). A polynomial equation stands in contrast to a polynomial identity like , where both expressions represent the same polynomial in different forms, and as a consequence any evaluation of both members gives a valid equality. In elementary algebra, methods such as the quadratic formula are taught for solving all first degree and second degree polynomial equations in one variable. There are also formulas for the cubic and quartic equations. 
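The coefficient-list view also makes several of the operations and evaluation schemes described above concrete. The following Python fragment is an illustrative sketch only (the representation and function names are ours, not the article's): multiplication applies the distributive law term by term, the derivative sends the coefficient of x^k to k times that coefficient one power lower, Horner's method evaluates with one multiplication and one addition per coefficient, and the quadratic formula gives explicit expressions for the solutions of a degree-two equation.

# Hedged sketch: univariate polynomials as coefficient lists, coeffs[k] <-> x**k.
import cmath

def poly_mul(p, q):
    # Distributive law: every term of p multiplies every term of q.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_deriv(p):
    # The term a_k * x**k contributes k * a_k * x**(k-1) to the derivative.
    return [k * c for k, c in enumerate(p)][1:]

def horner(p, x):
    # Horner's method: (...((a_n*x + a_(n-1))*x + ...)*x + a_0.
    result = 0
    for c in reversed(p):
        result = result * x + c
    return result

def quadratic_roots(a, b, c):
    # Quadratic formula: x = (-b +/- sqrt(b^2 - 4ac)) / (2a); complex sqrt covers all cases.
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(poly_mul([1, 1], [2, 3]))     # (1 + x)(2 + 3x) = 2 + 5x + 3x^2  -> [2, 5, 3]
print(poly_deriv([2, 5, 3]))        # d/dx (2 + 5x + 3x^2) = 5 + 6x    -> [5, 6]
print(horner([1, -6, 0, 2], 3))     # 1 - 6x + 2x^3 at x = 3 gives 1 - 18 + 54 = 37
print(quadratic_roots(1, -1, -1))   # x^2 - x - 1 = 0: the golden ratio 1.618... and -0.618...

These sketches are for exposition only; computer algebra systems implement the same operations far more generally.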
For higher degrees, the Abel–Ruffini theorem asserts that there cannot exist a general formula in radicals. However, root-finding algorithms may be used to find numerical approximations of the roots of a polynomial expression of any degree. The number of solutions of a polynomial equation with real coefficients may not exceed the degree, and equals the degree when the complex solutions are counted with their multiplicity. This fact is called the fundamental theorem of algebra. Solving equations A root of a nonzero univariate polynomial is a value of such that . In other words, a root of is a solution of the polynomial equation or a zero of the polynomial function defined by . In the case of the zero polynomial, every number is a zero of the corresponding function, and the concept of root is rarely considered. A number is a root of a polynomial if and only if the linear polynomial divides , that is if there is another polynomial such that . It may happen that a power (greater than ) of divides ; in this case, is a multiple root of , and otherwise is a simple root of . If is a nonzero polynomial, there is a highest power such that divides , which is called the multiplicity of as a root of . The number of roots of a nonzero polynomial , counted with their respective multiplicities, cannot exceed the degree of , and equals this degree if all complex roots are considered (this is a consequence of the fundamental theorem of algebra). The coefficients of a polynomial and its roots are related by Vieta's formulas. Some polynomials, such as , do not have any roots among the real numbers. If, however, the set of accepted solutions is expanded to the complex numbers, every non-constant polynomial has at least one root; this is the fundamental theorem of algebra. By successively dividing out factors , one sees that any polynomial with complex coefficients can be written as a constant (its leading coefficient) times a product of such polynomial factors of degree 1; as a consequence, the number of (complex) roots counted with their multiplicities is exactly equal to the degree of the polynomial. There may be several meanings of "solving an equation". One may want to express the solutions as explicit numbers; for example, the unique solution of is . This is, in general, impossible for equations of degree greater than one, and, since ancient times, mathematicians have sought to express the solutions as algebraic expressions; for example, the golden ratio is the unique positive solution of x^2 - x - 1 = 0. In ancient times, they succeeded only for degrees one and two. For quadratic equations, the quadratic formula provides such expressions of the solutions. Since the 16th century, similar formulas (using cube roots in addition to square roots), although much more complicated, have been known for equations of degree three and four (see cubic equation and quartic equation). But formulas for degree 5 and higher eluded researchers for several centuries. In 1824, Niels Henrik Abel proved the striking result that there are equations of degree 5 whose solutions cannot be expressed by a (finite) formula involving only arithmetic operations and radicals (see Abel–Ruffini theorem). In 1830, Évariste Galois proved that most equations of degree higher than four cannot be solved by radicals, and showed that for each equation, one may decide whether it is solvable by radicals, and, if it is, solve it. This result marked the start of Galois theory and group theory, two important branches of modern algebra. 
Galois himself noted that the computations implied by his method were impracticable. Nevertheless, formulas for solvable equations of degrees 5 and 6 have been published (see quintic function and sextic equation). When there is no algebraic expression for the roots, and when such an algebraic expression exists but is too complicated to be useful, the unique way of solving it is to compute numerical approximations of the solutions. There are many methods for that; some are restricted to polynomials and others may apply to any continuous function. The most efficient algorithms allow solving easily (on a computer) polynomial equations of degree higher than 1,000 (see Root-finding algorithm). For polynomials with more than one indeterminate, the combinations of values for the variables for which the polynomial function takes the value zero are generally called zeros instead of "roots". The study of the sets of zeros of polynomials is the object of algebraic geometry. For a set of polynomial equations with several unknowns, there are algorithms to decide whether they have a finite number of complex solutions, and, if this number is finite, for computing the solutions. See System of polynomial equations. The special case where all the polynomials are of degree one is called a system of linear equations, for which another range of different solution methods exist, including the classical Gaussian elimination. A polynomial equation for which one is interested only in the solutions which are integers is called a Diophantine equation. Solving Diophantine equations is generally a very hard task. It has been proved that there cannot be any general algorithm for solving them, or even for deciding whether the set of solutions is empty (see Hilbert's tenth problem). Some of the most famous problems that have been solved during the last fifty years are related to Diophantine equations, such as Fermat's Last Theorem. Polynomial expressions Polynomials where indeterminates are substituted for some other mathematical objects are often considered, and sometimes have a special name. Trigonometric polynomials A trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions. If sin(nx) and cos(nx) are expanded in terms of sin(x) and cos(x), a trigonometric polynomial becomes a polynomial in the two variables sin(x) and cos(x) (using List of trigonometric identities#Multiple-angle formulae). Conversely, every polynomial in sin(x) and cos(x) may be converted, with Product-to-sum identities, into a linear combination of functions sin(nx) and cos(nx). This equivalence explains why linear combinations are called polynomials. For complex coefficients, there is no difference between such a function and a finite Fourier series. Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are also used in the discrete Fourier transform. Matrix polynomials A matrix polynomial is a polynomial with square matrices as variables. Given an ordinary, scalar-valued polynomial this polynomial evaluated at a matrix A is where I is the identity matrix. A matrix polynomial equation is an equality between two matrix polynomials, which holds for the specific matrices in question. 
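As a small illustration of the matrix polynomials described above, the sketch below evaluates a scalar polynomial at a square matrix using NumPy; the helper name matrix_poly, the coefficient convention, and the sample matrix are assumptions made for the example.

```python
import numpy as np

def matrix_poly(coeffs, A):
    """Evaluate a scalar polynomial at a square matrix A (Horner-style loop).

    `coeffs` lists coefficients from the highest power down to the constant,
    so [1, -3, 2] means A^2 - 3A + 2I, where I is the identity matrix.
    """
    n = A.shape[0]
    result = np.zeros((n, n))
    for c in coeffs:
        result = result @ A + c * np.eye(n)
    return result

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(matrix_poly([1, -3, 2], A))   # p(A) = A^2 - 3A + 2I, here [[0, 2], [0, 2]]
```

Evaluating with a Horner-style loop keeps the number of matrix multiplications equal to the degree of the polynomial.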
A matrix polynomial identity is a matrix polynomial equation which holds for all matrices A in a specified matrix ring Mn(R). Exponential polynomials A bivariate polynomial where the second variable is substituted for an exponential function applied to the first variable, for example , may be called an exponential polynomial. Related concepts Rational functions A rational fraction is the quotient (algebraic fraction) of two polynomials. Any algebraic expression that can be rewritten as a rational fraction is a rational function. While polynomial functions are defined for all values of the variables, a rational function is defined only for the values of the variables for which the denominator is not zero. The rational fractions include the Laurent polynomials, but do not limit denominators to powers of an indeterminate. Laurent polynomials Laurent polynomials are like polynomials, but allow negative powers of the variable(s) to occur. Power series Formal power series are like polynomials, but allow infinitely many non-zero terms to occur, so that they do not have finite degree. Unlike polynomials they cannot in general be explicitly and fully written down (just like irrational numbers cannot), but the rules for manipulating their terms are the same as for polynomials. Non-formal power series also generalize polynomials, but the multiplication of two power series may not converge. Polynomial ring A polynomial over a commutative ring is a polynomial all of whose coefficients belong to . It is straightforward to verify that the polynomials in a given set of indeterminates over form a commutative ring, called the polynomial ring in these indeterminates, denoted in the univariate case and in the multivariate case. One has So, most of the theory of the multivariate case can be reduced to an iterated univariate case. The map from to sending to itself considered as a constant polynomial is an injective ring homomorphism, by which is viewed as a subring of . In particular, is an algebra over . One can think of the ring as arising from by adding one new element x to R, and extending in a minimal way to a ring in which satisfies no other relations than the obligatory ones, plus commutation with all elements of (that is ). To do this, one must add all powers of and their linear combinations as well. Formation of the polynomial ring, together with forming factor rings by factoring out ideals, are important tools for constructing new rings out of known ones. For instance, the ring (in fact field) of complex numbers, which can be constructed from the polynomial ring over the real numbers by factoring out the ideal of multiples of the polynomial . Another example is the construction of finite fields, which proceeds similarly, starting out with the field of integers modulo some prime number as the coefficient ring (see modular arithmetic). If is commutative, then one can associate with every polynomial in a polynomial function with domain and range equal to . (More generally, one can take domain and range to be any same unital associative algebra over .) One obtains the value by substitution of the value for the symbol in . One reason to distinguish between polynomials and polynomial functions is that, over some rings, different polynomials may give rise to the same polynomial function (see Fermat's little theorem for an example where is the integers modulo ). This is not the case when is the real or complex numbers, whence the two concepts are not always distinguished in analysis. 
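The point above that different polynomials can define the same polynomial function over some rings can be checked directly. The short sketch below uses the modulus p = 5 as an illustrative choice: by Fermat's little theorem, the distinct polynomials x^5 and x take the same values on the integers modulo 5.

```python
p = 5  # a prime modulus; by Fermat's little theorem, a^p is congruent to a (mod p)

# x^p and x are different polynomials (they have different degrees),
# yet they define the same function on the integers modulo p:
same_function = all(pow(a, p, p) == a % p for a in range(p))
print(same_function)  # -> True
```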
An even more important reason to distinguish between polynomials and polynomial functions is that many operations on polynomials (like Euclidean division) require looking at what a polynomial is composed of as an expression rather than evaluating it at some constant value for . Divisibility If is an integral domain and and are polynomials in , it is said that divides or is a divisor of if there exists a polynomial in such that . If then is a root of if and only if divides . In this case, the quotient can be computed using polynomial long division. If is a field and and are polynomials in with , then there exist unique polynomials and in with and such that the degree of is smaller than the degree of (using the convention that the polynomial 0 has a negative degree). The polynomials and are uniquely determined by and . This is called Euclidean division, division with remainder or polynomial long division and shows that the ring is a Euclidean domain. Analogously, prime polynomials (more correctly, irreducible polynomials) can be defined as non-zero polynomials which cannot be factorized into the product of two non-constant polynomials. In the case of coefficients in a ring, "non-constant" must be replaced by "non-constant or non-unit" (both definitions agree in the case of coefficients in a field). Any polynomial may be decomposed into the product of an invertible constant by a product of irreducible polynomials. If the coefficients belong to a field or a unique factorization domain, this decomposition is unique up to the order of the factors and the multiplication of any non-unit factor by a unit (and division of the unit factor by the same unit). When the coefficients belong to the integers, the rational numbers or a finite field, there are algorithms to test irreducibility and to compute the factorization into irreducible polynomials (see Factorization of polynomials). These algorithms are not practicable for hand-written computation, but are available in any computer algebra system. Eisenstein's criterion can also be used in some cases to determine irreducibility. Applications Positional notation In modern positional number systems, such as the decimal system, the digits and their positions in the representation of an integer, for example, 45, are a shorthand notation for a polynomial in the radix or base, in this case, 4 × 10 + 5. As another example, in radix 5, a string of digits such as 132 denotes the (decimal) number 1 × 5² + 3 × 5 + 2 = 42. This representation is unique. Let b be a positive integer greater than 1. Then every positive integer a can be expressed uniquely in the form where m is a nonnegative integer and the r's are integers such that and for . Interpolation and approximation The simple structure of polynomial functions makes them quite useful in analyzing general functions using polynomial approximations. An important example in calculus is Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial function, and the Stone–Weierstrass theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial function. Practical methods of approximation include polynomial interpolation and the use of splines. Other applications Polynomials are frequently used to encode information about some other object. The characteristic polynomial of a matrix or linear operator contains information about the operator's eigenvalues. 
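Returning to the Euclidean division described earlier in this section, the following is a minimal sketch for polynomials with rational coefficients; the coefficient-list representation and the helper name poly_divmod are assumptions made for the example, not a standard library interface.

```python
from fractions import Fraction

def poly_divmod(num, den):
    """Divide polynomial num by den, returning (quotient, remainder).

    Polynomials are lists of coefficients from the highest degree down to the
    constant term, e.g. [1, -3, 2] is x^2 - 3x + 2. `den` must be nonzero.
    """
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        factor = num[0] / den[0]          # leading coefficient of the next quotient term
        quot.append(factor)
        # subtract factor * den, aligned with the leading term of num, then drop that term
        num = [a - factor * b for a, b in zip(num, den + [0] * (len(num) - len(den)))][1:]
    return quot, num                       # remainder has degree smaller than deg(den)

# (x^3 - 2x^2 - 4) divided by (x - 3): quotient x^2 + x + 3, remainder 5 (printed as Fractions)
print(poly_divmod([1, -2, 0, -4], [1, -3]))
```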
The minimal polynomial of an algebraic element records the simplest algebraic relation satisfied by that element. The chromatic polynomial of a graph counts the number of proper colourings of that graph. The term "polynomial", as an adjective, can also be used for quantities or functions that can be written in polynomial form. For example, in computational complexity theory the phrase polynomial time means that the time it takes to complete an algorithm is bounded by a polynomial function of some variable, such as the size of the input. History Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. However, the elegant and practical notation we use today only developed beginning in the 15th century. Before that, equations were written out in words. For example, an algebra problem from the Chinese Arithmetic in Nine Sections, , begins "Three sheafs of good crop, two sheafs of mediocre crop, and one sheaf of bad crop are sold for 29 dou." We would write . History of the notation The earliest known use of the equal sign is in Robert Recorde's The Whetstone of Witte, 1557. The signs + for addition, − for subtraction, and the use of a letter for an unknown appear in Michael Stifel's Arithemetica integra, 1544. René Descartes, in La géometrie'', 1637, introduced the concept of the graph of a polynomial equation. He popularized the use of letters from the beginning of the alphabet to denote constants and letters from the end of the alphabet to denote variables, as can be seen above, in the general formula for a polynomial in one variable, where the s denote constants and denotes a variable. Descartes introduced the use of superscripts to denote exponents as well.
Mathematics
Basics
23001
https://en.wikipedia.org/wiki/Polymer
Polymer
A polymer () is a substance or material that consists of very large molecules, or macromolecules, that are constituted by many repeating subunits derived from one or more species of monomers. Due to their broad spectrum of properties, both synthetic and natural polymers play essential and ubiquitous roles in everyday life. Polymers range from familiar synthetic plastics such as polystyrene to natural biopolymers such as DNA and proteins that are fundamental to biological structure and function. Polymers, both natural and synthetic, are created via polymerization of many small molecules, known as monomers. Their consequently large molecular mass, relative to small molecule compounds, produces unique physical properties including toughness, high elasticity, viscoelasticity, and a tendency to form amorphous and semicrystalline structures rather than crystals. Polymers are studied in the fields of polymer science (which includes polymer chemistry and polymer physics), biophysics and materials science and engineering. Historically, products arising from the linkage of repeating units by covalent chemical bonds have been the primary focus of polymer science. An emerging important area now focuses on supramolecular polymers formed by non-covalent links. Polyisoprene of latex rubber is an example of a natural polymer, and the polystyrene of styrofoam is an example of a synthetic polymer. In biological contexts, essentially all biological macromolecules—i.e., proteins (polyamides), nucleic acids (polynucleotides), and polysaccharides—are purely polymeric, or are composed in large part of polymeric components. Etymology The term "polymer" derives . The term was coined in 1833 by Jöns Jacob Berzelius, though with a definition distinct from the modern IUPAC definition. The modern concept of polymers as covalently bonded macromolecular structures was proposed in 1920 by Hermann Staudinger, who spent the next decade finding experimental evidence for this hypothesis. Common examples Polymers are of two types: naturally occurring and synthetic or man made. Natural Natural polymeric materials such as hemp, shellac, amber, wool, silk, and natural rubber have been used for centuries. A variety of other natural polymers exist, such as cellulose, which is the main constituent of wood and paper. Space polymer Hemoglycin (previously termed hemolithin) is a space polymer that is the first polymer of amino acids found in meteorites. Synthetic The list of synthetic polymers, roughly in order of worldwide demand, includes polyethylene, polypropylene, polystyrene, polyvinyl chloride, synthetic rubber, phenol formaldehyde resin (or Bakelite), neoprene, nylon, polyacrylonitrile, PVB, silicone, and many more. More than 330 million tons of these polymers are made every year (2015). Most commonly, the continuously linked backbone of a polymer used for the preparation of plastics consists mainly of carbon atoms. A simple example is polyethylene ('polythene' in British English), whose repeat unit or monomer is ethylene. Many other structures do exist; for example, elements such as silicon form familiar materials such as silicones, examples being Silly Putty and waterproof plumbing sealant. Oxygen is also commonly present in polymer backbones, such as those of polyethylene glycol, polysaccharides (in glycosidic bonds), and DNA (in phosphodiester bonds). Synthesis Polymerization is the process of combining many small molecules known as monomers into a covalently bonded chain or network. 
During the polymerization process, some chemical groups may be lost from each monomer. This happens in the polymerization of PET polyester. The monomers are terephthalic acid (HOOCC6H4COOH) and ethylene glycol (HOCH2CH2OH) but the repeating unit is OCC6H4COOCH2CH2O, which corresponds to the combination of the two monomers with the loss of two water molecules. The distinct piece of each monomer that is incorporated into the polymer is known as a repeat unit or monomer residue. Synthetic methods are generally divided into two categories, step-growth polymerization and chain polymerization. The essential difference between the two is that in chain polymerization, monomers are added to the chain one at a time only, such as in polystyrene, whereas in step-growth polymerization chains of monomers may combine with one another directly, such as in polyester. Step-growth polymerization can be divided into polycondensation, in which low-molar-mass by-product is formed in every reaction step, and polyaddition. Newer methods, such as plasma polymerization do not fit neatly into either category. Synthetic polymerization reactions may be carried out with or without a catalyst. Laboratory synthesis of biopolymers, especially of proteins, is an area of intensive research. Biological synthesis There are three main classes of biopolymers: polysaccharides, polypeptides, and polynucleotides. In living cells, they may be synthesized by enzyme-mediated processes, such as the formation of DNA catalyzed by DNA polymerase. The synthesis of proteins involves multiple enzyme-mediated processes to transcribe genetic information from the DNA to RNA and subsequently translate that information to synthesize the specified protein from amino acids. The protein may be modified further following translation in order to provide appropriate structure and functioning. There are other biopolymers such as rubber, suberin, melanin, and lignin. Modification of natural polymers Naturally occurring polymers such as cotton, starch, and rubber were familiar materials for years before synthetic polymers such as polyethene and perspex appeared on the market. Many commercially important polymers are synthesized by chemical modification of naturally occurring polymers. Prominent examples include the reaction of nitric acid and cellulose to form nitrocellulose and the formation of vulcanized rubber by heating natural rubber in the presence of sulfur. Ways in which polymers can be modified include oxidation, cross-linking, and end-capping. Structure The structure of a polymeric material can be described at different length scales, from the sub-nm length scale up to the macroscopic one. There is in fact a hierarchy of structures, in which each stage provides the foundations for the next one. The starting point for the description of the structure of a polymer is the identity of its constituent monomers. Next, the microstructure essentially describes the arrangement of these monomers within the polymer at the scale of a single chain. The microstructure determines the possibility for the polymer to form phases with different arrangements, for example through crystallization, the glass transition or microphase separation. These features play a major role in determining the physical and chemical properties of a polymer. Monomers and repeat units The identity of the repeat units (monomer residues, also known as "mers") comprising a polymer is its first and most important attribute. 
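The PET example near the start of this section, in which the two monomers combine with the loss of two water molecules per repeat unit, can be checked with a short mass balance. The approximate molar masses below are standard rounded values, and the script is only an illustrative sketch.

```python
# Approximate molar masses in g/mol (standard rounded values)
TEREPHTHALIC_ACID = 166.13   # HOOC-C6H4-COOH
ETHYLENE_GLYCOL   = 62.07    # HO-CH2CH2-OH
WATER             = 18.02

# One PET repeat unit forms from one molecule of each monomer,
# with the loss of two molecules of water (condensation).
repeat_unit_mass = TEREPHTHALIC_ACID + ETHYLENE_GLYCOL - 2 * WATER
print(f"PET repeat unit: about {repeat_unit_mass:.2f} g/mol")  # about 192 g/mol
```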
Polymer nomenclature is generally based upon the type of monomer residues comprising the polymer. A polymer which contains only a single type of repeat unit is known as a homopolymer, while a polymer containing two or more types of repeat units is known as a copolymer. A terpolymer is a copolymer which contains three types of repeat units. Polystyrene is composed only of styrene-based repeat units, and is classified as a homopolymer. Polyethylene terephthalate, even though produced from two different monomers (ethylene glycol and terephthalic acid), is usually regarded as a homopolymer because only one type of repeat unit is formed. Ethylene-vinyl acetate contains more than one variety of repeat unit and is a copolymer. Some biological polymers are composed of a variety of different but structurally related monomer residues; for example, polynucleotides such as DNA are composed of four types of nucleotide subunits. (Figure: examples of homopolymers and copolymers, including the homopolymer polystyrene; the homopolymer polydimethylsiloxane, a silicone whose main chain is formed of silicon and oxygen atoms; the homopolymer polyethylene terephthalate, which has only one repeat unit; and the copolymer styrene-butadiene rubber, whose repeat units based on styrene and 1,3-butadiene can alternate in any order in the macromolecule, making it a random copolymer.) A polymer containing ionizable subunits (e.g., pendant carboxylic groups) is known as a polyelectrolyte or an ionomer, according to whether the fraction of ionizable units is large or small, respectively. Microstructure The microstructure of a polymer (sometimes called configuration) relates to the physical arrangement of monomer residues along the backbone of the chain. These are the elements of polymer structure that require the breaking of a covalent bond in order to change. Various polymer structures can be produced depending on the monomers and reaction conditions: A polymer may consist of linear macromolecules, each containing only one unbranched chain. In the case of unbranched polyethylene, this chain is a long-chain n-alkane. There are also branched macromolecules with a main chain and side chains; in the case of polyethylene the side chains would be alkyl groups. Unbranched macromolecules, in particular, can be semi-crystalline in the solid state, with crystalline chain sections highlighted in red in the figure below. While branched and unbranched polymers are usually thermoplastics, many elastomers have a wide-meshed cross-linking between the "main chains". Close-meshed crosslinking, on the other hand, leads to thermosets. Cross-links and branches are shown as red dots in the figures. Highly branched polymers are amorphous and the molecules in the solid interact randomly. (Figure: schematic chain structures, showing a linear, unbranched macromolecule; a branched macromolecule; the semi-crystalline structure of an unbranched polymer; a slightly cross-linked polymer (elastomer); and a highly cross-linked polymer (thermoset).) Polymer architecture An important microstructural feature of a polymer is its architecture and shape, which relates to the way branch points lead to a deviation from a simple linear chain. 
A branched polymer molecule is composed of a main chain with one or more substituent side chains or branches. Types of branched polymers include star polymers, comb polymers, polymer brushes, dendronized polymers, ladder polymers, and dendrimers. There also exist two-dimensional polymers (2DP) which are composed of topologically planar repeat units. A polymer's architecture affects many of its physical properties including solution viscosity, melt viscosity, solubility in various solvents, glass-transition temperature and the size of individual polymer coils in solution. A variety of techniques may be employed for the synthesis of a polymeric material with a range of architectures, for example living polymerization. Chain length A common means of expressing the length of a chain is the degree of polymerization, which quantifies the number of monomers incorporated into the chain. As with other molecules, a polymer's size may also be expressed in terms of molecular weight. Since synthetic polymerization techniques typically yield a statistical distribution of chain lengths, the molecular weight is expressed in terms of weighted averages. The number-average molecular weight (Mn) and weight-average molecular weight (Mw) are most commonly reported. The ratio of these two values (Mw / Mn) is the dispersity (Đ), which is commonly used to express the width of the molecular weight distribution. The physical properties of a polymer strongly depend on the length (or equivalently, the molecular weight) of the polymer chain. One important example of the physical consequences of the molecular weight is the scaling of the viscosity (resistance to flow) in the melt. The influence of the weight-average molecular weight (Mw) on the melt viscosity depends on whether the polymer is above or below the onset of entanglements. Below the entanglement molecular weight, the melt viscosity scales roughly in proportion to Mw, whereas above the entanglement molecular weight it scales approximately as Mw raised to the power 3.4. In the latter case, increasing the polymer chain length 10-fold would increase the viscosity over 1000 times. Increasing chain length furthermore tends to decrease chain mobility, increase strength and toughness, and increase the glass-transition temperature (Tg). This is a result of the increase in chain interactions such as van der Waals attractions and entanglements that come with increased chain length. These interactions tend to fix the individual chains more strongly in position and resist deformations and matrix breakup, both at higher stresses and higher temperatures. Monomer arrangement in copolymers Copolymers are classified either as statistical copolymers, alternating copolymers, block copolymers, graft copolymers or gradient copolymers. In the schematic figure below, Ⓐ and Ⓑ symbolize the two repeat units. (Figure: schematic copolymer arrangements, showing a random copolymer, a gradient copolymer, a graft copolymer, an alternating copolymer, and a block copolymer.) Alternating copolymers possess two regularly alternating monomer residues: . An example is the equimolar copolymer of styrene and maleic anhydride formed by free-radical chain-growth polymerization. A step-growth copolymer such as Nylon 66 can also be considered a strictly alternating copolymer of diamine and diacid residues, but is often described as a homopolymer with the dimeric residue of one amine and one acid as a repeat unit. Periodic copolymers have more than two species of monomer units in a regular sequence. 
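The number-average molecular weight, weight-average molecular weight, and dispersity defined above can be computed directly from a chain-length distribution. The following sketch uses an invented distribution purely for illustration; the function name and the sample numbers are not taken from the article.

```python
def molecular_weight_averages(samples):
    """Compute (Mn, Mw, dispersity) from a list of (count, molar_mass) pairs.

    Mn = sum(N_i * M_i)    / sum(N_i)        (number-average molecular weight)
    Mw = sum(N_i * M_i**2) / sum(N_i * M_i)  (weight-average molecular weight)
    dispersity = Mw / Mn                     (always >= 1)
    """
    total_n   = sum(n for n, m in samples)
    total_nm  = sum(n * m for n, m in samples)
    total_nm2 = sum(n * m * m for n, m in samples)
    mn = total_nm / total_n
    mw = total_nm2 / total_nm
    return mn, mw, mw / mn

# Hypothetical distribution: counts of chains with a given molar mass (g/mol)
chains = [(100, 10_000), (200, 50_000), (50, 120_000)]
mn, mw, dispersity = molecular_weight_averages(chains)
print(f"Mn = {mn:.0f} g/mol, Mw = {mw:.0f} g/mol, dispersity = {dispersity:.2f}")
```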
Statistical copolymers have monomer residues arranged according to a statistical rule. A statistical copolymer in which the probability of finding a particular type of monomer residue at a particular point in the chain is independent of the types of surrounding monomer residues may be referred to as a truly random copolymer. For example, the chain-growth copolymer of vinyl chloride and vinyl acetate is random. Block copolymers have long sequences of different monomer units. Polymers with two or three blocks of two distinct chemical species (e.g., A and B) are called diblock copolymers and triblock copolymers, respectively. Polymers with three blocks, each of a different chemical species (e.g., A, B, and C) are termed triblock terpolymers. Graft or grafted copolymers contain side chains or branches whose repeat units have a different composition or configuration than the main chain. The branches are added on to a preformed main chain macromolecule. Monomers within a copolymer may be organized along the backbone in a variety of ways. A copolymer containing a controlled arrangement of monomers is called a sequence-controlled polymer. Alternating, periodic and block copolymers are simple examples of sequence-controlled polymers. Tacticity Tacticity describes the relative stereochemistry of chiral centers in neighboring structural units within a macromolecule. There are three types of tacticity: isotactic (all substituents on the same side), atactic (random placement of substituents), and syndiotactic (alternating placement of substituents). (Figure: the three types of tacticity, namely isotactic, syndiotactic, and atactic, i.e. random.) Morphology Polymer morphology generally describes the arrangement and microscale ordering of polymer chains in space. The macroscopic physical properties of a polymer are related to the interactions between the polymer chains. Disordered polymers: In the solid state, atactic polymers, polymers with a high degree of branching and random copolymers form amorphous (i.e., glassy) structures. In melt and solution, polymers tend to form a constantly changing "statistical cluster"; see the freely-jointed-chain model. In the solid state, the respective conformations of the molecules are frozen. Hooking and entanglement of chain molecules lead to a "mechanical bond" between the chains. Intermolecular and intramolecular attractive forces only occur at sites where molecule segments are close enough to each other. The irregular structures of the molecules prevent a closer arrangement. Linear polymers with a periodic structure, low branching and stereoregularity (i.e., not atactic) have a semi-crystalline structure in the solid state. In simple polymers (such as polyethylene), the chains are present in the crystal in a zigzag conformation. Several zigzag conformations form dense chain packs, called crystallites or lamellae. The lamellae are much thinner than the polymers are long (often about 10 nm). They are formed by more or less regular folding of one or more molecular chains. Amorphous structures exist between the lamellae. Individual molecules can lead to entanglements between the lamellae and can also be involved in the formation of two (or more) lamellae; such chains are then called tie molecules. Several lamellae form a superstructure, a spherulite, often with a diameter in the range of 0.05 to 1 mm. 
The type and arrangement of (functional) residues of the repeat units effects or determines the crystallinity and strength of the secondary valence bonds. In isotactic polypropylene, the molecules form a helix. Like the zigzag conformation, such helices allow a dense chain packing. Particularly strong intermolecular interactions occur when the residues of the repeating units allow the formation of hydrogen bonds, as in the case of p-aramid. The formation of strong intramolecular associations may produce diverse folded states of single linear chains with distinct circuit topology. Crystallinity and superstructure are always dependent on the conditions of their formation, see also: crystallization of polymers. Compared to amorphous structures, semi-crystalline structures lead to a higher stiffness, density, melting temperature and higher resistance of a polymer. Cross-linked polymers: Wide-meshed cross-linked polymers are elastomers and cannot be molten (unlike thermoplastics); heating cross-linked polymers only leads to decomposition. Thermoplastic elastomers, on the other hand, are reversibly "physically crosslinked" and can be molten. Block copolymers in which a hard segment of the polymer has a tendency to crystallize and a soft segment has an amorphous structure are one type of thermoplastic elastomers: the hard segments ensure wide-meshed, physical crosslinking. Crystallinity When applied to polymers, the term crystalline has a somewhat ambiguous usage. In some cases, the term crystalline finds identical usage to that used in conventional crystallography. For example, the structure of a crystalline protein or polynucleotide, such as a sample prepared for x-ray crystallography, may be defined in terms of a conventional unit cell composed of one or more polymer molecules with cell dimensions of hundreds of angstroms or more. A synthetic polymer may be loosely described as crystalline if it contains regions of three-dimensional ordering on atomic (rather than macromolecular) length scales, usually arising from intramolecular folding or stacking of adjacent chains. Synthetic polymers may consist of both crystalline and amorphous regions; the degree of crystallinity may be expressed in terms of a weight fraction or volume fraction of crystalline material. Few synthetic polymers are entirely crystalline. The crystallinity of polymers is characterized by their degree of crystallinity, ranging from zero for a completely non-crystalline polymer to one for a theoretical completely crystalline polymer. Polymers with microcrystalline regions are generally tougher (can be bent more without breaking) and more impact-resistant than totally amorphous polymers. Polymers with a degree of crystallinity approaching zero or one will tend to be transparent, while polymers with intermediate degrees of crystallinity will tend to be opaque due to light scattering by crystalline or glassy regions. For many polymers, crystallinity may also be associated with decreased transparency. Chain conformation The space occupied by a polymer molecule is generally expressed in terms of radius of gyration, which is an average distance from the center of mass of the chain to the chain itself. Alternatively, it may be expressed in terms of pervaded volume, which is the volume spanned by the polymer chain and scales with the cube of the radius of gyration. The simplest theoretical models for polymers in the molten, amorphous state are ideal chains. 
Properties Polymer properties depend of their structure and they are divided into classes according to their physical bases. Many physical and chemical properties describe how a polymer behaves as a continuous macroscopic material. They are classified as bulk properties, or intensive properties according to thermodynamics. Mechanical properties The bulk properties of a polymer are those most often of end-use interest. These are the properties that dictate how the polymer actually behaves on a macroscopic scale. Tensile strength The tensile strength of a material quantifies how much elongating stress the material will endure before failure. This is very important in applications that rely upon a polymer's physical strength or durability. For example, a rubber band with a higher tensile strength will hold a greater weight before snapping. In general, tensile strength increases with polymer chain length and crosslinking of polymer chains. Young's modulus of elasticity Young's modulus quantifies the elasticity of the polymer. It is defined, for small strains, as the ratio of rate of change of stress to strain. Like tensile strength, this is highly relevant in polymer applications involving the physical properties of polymers, such as rubber bands. The modulus is strongly dependent on temperature. Viscoelasticity describes a complex time-dependent elastic response, which will exhibit hysteresis in the stress-strain curve when the load is removed. Dynamic mechanical analysis or DMA measures this complex modulus by oscillating the load and measuring the resulting strain as a function of time. Transport properties Transport properties such as diffusivity describe how rapidly molecules move through the polymer matrix. These are very important in many applications of polymers for films and membranes. The movement of individual macromolecules occurs by a process called reptation in which each chain molecule is constrained by entanglements with neighboring chains to move within a virtual tube. The theory of reptation can explain polymer molecule dynamics and viscoelasticity. Phase behavior Crystallization and melting Depending on their chemical structures, polymers may be either semi-crystalline or amorphous. Semi-crystalline polymers can undergo crystallization and melting transitions, whereas amorphous polymers do not. In polymers, crystallization and melting do not suggest solid-liquid phase transitions, as in the case of water or other molecular fluids. Instead, crystallization and melting refer to the phase transitions between two solid states (i.e., semi-crystalline and amorphous). Crystallization occurs above the glass-transition temperature (Tg) and below the melting temperature (Tm). Glass transition All polymers (amorphous or semi-crystalline) go through glass transitions. The glass-transition temperature (Tg) is a crucial physical parameter for polymer manufacturing, processing, and use. Below Tg, molecular motions are frozen and polymers are brittle and glassy. Above Tg, molecular motions are activated and polymers are rubbery and viscous. The glass-transition temperature may be engineered by altering the degree of branching or crosslinking in the polymer or by the addition of plasticizers. Whereas crystallization and melting are first-order phase transitions, the glass transition is not. 
The glass transition shares features of second-order phase transitions (such as discontinuity in the heat capacity, as shown in the figure), but it is generally not considered a thermodynamic transition between equilibrium states. Mixing behavior In general, polymeric mixtures are far less miscible than mixtures of small molecule materials. This effect results from the fact that the driving force for mixing is usually entropy, not interaction energy. In other words, miscible materials usually form a solution not because their interaction with each other is more favorable than their self-interaction, but because of an increase in entropy and hence free energy associated with increasing the amount of volume available to each component. This increase in entropy scales with the number of particles (or moles) being mixed. Since polymeric molecules are much larger and hence generally have much higher specific volumes than small molecules, the number of molecules involved in a polymeric mixture is far smaller than the number in a small molecule mixture of equal volume. The energetics of mixing, on the other hand, is comparable on a per volume basis for polymeric and small molecule mixtures. This tends to increase the free energy of mixing for polymer solutions and thereby making solvation less favorable, and thereby making the availability of concentrated solutions of polymers far rarer than those of small molecules. Furthermore, the phase behavior of polymer solutions and mixtures is more complex than that of small molecule mixtures. Whereas most small molecule solutions exhibit only an upper critical solution temperature phase transition (UCST), at which phase separation occurs with cooling, polymer mixtures commonly exhibit a lower critical solution temperature phase transition (LCST), at which phase separation occurs with heating. In dilute solutions, the properties of the polymer are characterized by the interaction between the solvent and the polymer. In a good solvent, the polymer appears swollen and occupies a large volume. In this scenario, intermolecular forces between the solvent and monomer subunits dominate over intramolecular interactions. In a bad solvent or poor solvent, intramolecular forces dominate and the chain contracts. In the theta solvent, or the state of the polymer solution where the value of the second virial coefficient becomes 0, the intermolecular polymer-solvent repulsion balances exactly the intramolecular monomer-monomer attraction. Under the theta condition (also called the Flory condition), the polymer behaves like an ideal random coil. The transition between the states is known as a coil–globule transition. Inclusion of plasticizers Inclusion of plasticizers tends to lower Tg and increase polymer flexibility. Addition of the plasticizer will also modify dependence of the glass-transition temperature Tg on the cooling rate. The mobility of the chain can further change if the molecules of plasticizer give rise to hydrogen bonding formation. Plasticizers are generally small molecules that are chemically similar to the polymer and create gaps between polymer chains for greater mobility and fewer interchain interactions. A good example of the action of plasticizers is related to polyvinylchlorides or PVCs. A uPVC, or unplasticized polyvinylchloride, is used for things such as pipes. A pipe has no plasticizers in it, because it needs to remain strong and heat-resistant. Plasticized PVC is used in clothing for a flexible quality. 
Plasticizers are also put in some types of cling film to make the polymer more flexible. Chemical properties The attractive forces between polymer chains play a large part in determining the polymer's properties. Because polymer chains are so long, they have many such interchain interactions per molecule, amplifying the effect of these interactions on the polymer properties in comparison to attractions between conventional molecules. Different side groups on the polymer can lend the polymer to ionic bonding or hydrogen bonding between its own chains. These stronger forces typically result in higher tensile strength and higher crystalline melting points. The intermolecular forces in polymers can be affected by dipoles in the monomer units. Polymers containing amide or carbonyl groups can form hydrogen bonds between adjacent chains; the partially positively charged hydrogen atoms in N-H groups of one chain are strongly attracted to the partially negatively charged oxygen atoms in C=O groups on another. These strong hydrogen bonds, for example, result in the high tensile strength and melting point of polymers containing urethane or urea linkages. Polyesters have dipole-dipole bonding between the oxygen atoms in C=O groups and the hydrogen atoms in H-C groups. Dipole bonding is not as strong as hydrogen bonding, so a polyester's melting point and strength are lower than Kevlar's (Twaron), but polyesters have greater flexibility. Polymers with non-polar units such as polyethylene interact only through weak Van der Waals forces. As a result, they typically have lower melting temperatures than other polymers. When a polymer is dispersed or dissolved in a liquid, such as in commercial products like paints and glues, the chemical properties and molecular interactions influence how the solution flows and can even lead to self-assembly of the polymer into complex structures. When a polymer is applied as a coating, the chemical properties will influence the adhesion of the coating and how it interacts with external materials, such as superhydrophobic polymer coatings leading to water resistance. Overall the chemical properties of a polymer are important elements for designing new polymeric material products. Optical properties Polymers such as PMMA and HEMA:MMA are used as matrices in the gain medium of solid-state dye lasers, also known as solid-state dye-doped polymer lasers. These polymers have a high surface quality and are also highly transparent so that the laser properties are dominated by the laser dye used to dope the polymer matrix. These type of lasers, that also belong to the class of organic lasers, are known to yield very narrow linewidths which is useful for spectroscopy and analytical applications. An important optical parameter in the polymer used in laser applications is the change in refractive index with temperature also known as dn/dT. For the polymers mentioned here the (dn/dT) ~ −1.4 × 10−4 in units of K−1 in the 297 ≤ T ≤ 337 K range. Electrical properties Most conventional polymers such as polyethylene are electrical insulators, but the development of polymers containing π-conjugated bonds has led to a wealth of polymer-based semiconductors, such as polythiophenes. This has led to many applications in the field of organic electronics. Applications Nowadays, synthetic polymers are used in almost all walks of life. Modern society would look very different without them. 
The spreading of polymer use is connected to their unique properties: low density, low cost, good thermal/electrical insulation properties, high resistance to corrosion, low-energy demanding polymer manufacture and facile processing into final products. For a given application, the properties of a polymer can be tuned or enhanced by combination with other materials, as in composites. Their application allows to save energy (lighter cars and planes, thermally insulated buildings), protect food and drinking water (packaging), save land and lower use of fertilizers (synthetic fibres), preserve other materials (coatings), protect and save lives (hygiene, medical applications). A representative, non-exhaustive list of applications is given below. Clothing, sportswear and accessories: polyester and PVC clothing, spandex, sport shoes, wetsuits, footballs and billiard balls, skis and snowboards, rackets, parachutes, sails, tents and shelters. Electronic and photonic technologies: organic field effect transistors (OFET), light emitting diodes (OLED) and solar cells, television components, compact discs (CD), photoresists, holography. Packaging and containers: films, bottles, food packaging, barrels. Insulation: electrical and thermal insulation, spray foams. Construction and structural applications: garden furniture, PVC windows, flooring, sealing, pipes. Paints, glues and lubricants: varnish, adhesives, dispersants, anti-graffiti coatings, antifouling coatings, non-stick surfaces, lubricants. Car parts: tires, bumpers, windshields, windscreen wipers, fuel tanks, car seats. Household items: buckets, kitchenware, toys (e.g., construction sets and Rubik's cube). Medical applications: blood bag, syringes, rubber gloves, surgical suture, contact lenses, prosthesis, controlled drug delivery and release, matrices for cell growth. Personal hygiene and healthcare: diapers using superabsorbent polymers, toothbrushes, cosmetics, shampoo, condoms. Security: personal protective equipment, bulletproof vests, space suits, ropes. Separation technologies: synthetic membranes, fuel cell membranes, filtration, ion-exchange resins. Money: polymer banknotes and payment cards. 3D printing. Standardized nomenclature There are multiple conventions for naming polymer substances. Many commonly used polymers, such as those found in consumer products, are referred to by a common or trivial name. The trivial name is assigned based on historical precedent or popular usage rather than a standardized naming convention. Both the American Chemical Society (ACS) and IUPAC have proposed standardized naming conventions; the ACS and IUPAC conventions are similar but not identical. Examples of the differences between the various naming conventions are given in the table below: In both standardized conventions, the polymers' names are intended to reflect the monomer(s) from which they are synthesized (source based nomenclature) rather than the precise nature of the repeating subunit. For example, the polymer synthesized from the simple alkene ethene is called polyethene, retaining the -ene suffix even though the double bond is removed during the polymerization process: → However, IUPAC structure based nomenclature is based on naming of the preferred constitutional repeating unit. IUPAC has also issued guidelines for abbreviating new polymer names. 138 common polymer abbreviations are also standardized in the standard ISO 1043-1. 
Characterization Polymer characterization spans many techniques for determining the chemical composition, molecular weight distribution, and physical properties. Select common techniques include the following: Size-exclusion chromatography (also called gel permeation chromatography), sometimes coupled with static light scattering, can used to determine the number-average molecular weight, weight-average molecular weight, and dispersity. Scattering techniques, such as static light scattering and small-angle neutron scattering, are used to determine the dimensions (radius of gyration) of macromolecules in solution or in the melt. These techniques are also used to characterize the three-dimensional structure of microphase-separated block polymers, polymeric micelles, and other materials. Wide-angle X-ray scattering (also called wide-angle X-ray diffraction) is used to determine the crystalline structure of polymers (or lack thereof). Spectroscopy techniques, including Fourier-transform infrared spectroscopy, Raman spectroscopy, and nuclear magnetic resonance spectroscopy, can be used to determine the chemical composition. Differential scanning calorimetry is used to characterize the thermal properties of polymers, such as the glass-transition temperature, crystallization temperature, and melting temperature. The glass-transition temperature can also be determined by dynamic mechanical analysis. Thermogravimetry is a useful technique to evaluate the thermal stability of the polymer. Rheology is used to characterize the flow and deformation behavior. It can be used to determine the viscosity, modulus, and other rheological properties. Rheology is also often used to determine the molecular architecture (molecular weight, molecular weight distribution, branching) and to understand how the polymer can be processed. Degradation Polymer degradation is a change in the properties—tensile strength, color, shape, or molecular weight—of a polymer or polymer-based product under the influence of one or more environmental factors, such as heat, light, and the presence of certain chemicals, oxygen, and enzymes. This change in properties is often the result of bond breaking in the polymer backbone (chain scission) which may occur at the chain ends or at random positions in the chain. Although such changes are frequently undesirable, in some cases, such as biodegradation and recycling, they may be intended to prevent environmental pollution. Degradation can also be useful in biomedical settings. For example, a copolymer of polylactic acid and polyglycolic acid is employed in hydrolysable stitches that slowly degrade after they are applied to a wound. The susceptibility of a polymer to degradation depends on its structure. Epoxies and chains containing aromatic functionalities are especially susceptible to UV degradation while polyesters are susceptible to degradation by hydrolysis. Polymers containing an unsaturated backbone degrade via ozone cracking. Carbon based polymers are more susceptible to thermal degradation than inorganic polymers such as polydimethylsiloxane and are therefore not ideal for most high-temperature applications. The degradation of polyethylene occurs by random scission—a random breakage of the bonds that hold the atoms of the polymer together. When heated above 450 °C, polyethylene degrades to form a mixture of hydrocarbons. In the case of chain-end scission, monomers are released and this process is referred to as unzipping or depolymerization. 
Which mechanism dominates will depend on the type of polymer and temperature; in general, polymers with no or a single small substituent in the repeat unit will decompose via random-chain scission. The sorting of polymer waste for recycling purposes may be facilitated by the use of the resin identification codes developed by the Society of the Plastics Industry to identify the type of plastic. Product failure Failure of safety-critical polymer components can cause serious accidents, such as fire in the case of cracked and degraded polymer fuel lines. Chlorine-induced cracking of acetal resin plumbing joints and polybutylene pipes has caused many serious floods in domestic properties, especially in the US in the 1990s. Traces of chlorine in the water supply attacked polymers present in the plumbing, a problem which occurs faster if any of the parts have been poorly extruded or injection molded. Attack of the acetal joint occurred because of faulty molding, leading to cracking along the threads of the fitting where there is stress concentration. Polymer oxidation has caused accidents involving medical devices. One of the oldest known failure modes is ozone cracking caused by chain scission when ozone gas attacks susceptible elastomers, such as natural rubber and nitrile rubber. They possess double bonds in their repeat units which are cleaved during ozonolysis. Cracks in fuel lines can penetrate the bore of the tube and cause fuel leakage. If cracking occurs in the engine compartment, electric sparks can ignite the gasoline and can cause a serious fire. In medical use degradation of polymers can lead to changes of physical and chemical characteristics of implantable devices. Nylon 66 is susceptible to acid hydrolysis, and in one accident, a fractured fuel line led to a spillage of diesel into the road. If diesel fuel leaks onto the road, accidents to following cars can be caused by the slippery nature of the deposit, which is like black ice. Furthermore, the asphalt concrete road surface will suffer damage as a result of the diesel fuel dissolving the asphaltenes from the composite material, this resulting in the degradation of the asphalt surface and structural integrity of the road. History Polymers have been essential components of commodities since the early days of humankind. The use of wool (keratin), cotton and linen fibres (cellulose) for garments, paper reed (cellulose) for paper are just a few examples of how ancient societies exploited polymer-containing raw materials to obtain artefacts. The latex sap of "caoutchouc" trees (natural rubber) reached Europe in the 16th century from South America long after the Olmec, Maya and Aztec had started using it as a material to make balls, waterproof textiles and containers. The chemical manipulation of polymers dates back to the 19th century, although at the time the nature of these species was not understood. The behaviour of polymers was initially rationalised according to the theory proposed by Thomas Graham which considered them as colloidal aggregates of small molecules held together by unknown forces. Notwithstanding the lack of theoretical knowledge, the potential of polymers to provide innovative, accessible and cheap materials was immediately grasped. The work carried out by Braconnot, Parkes, Ludersdorf, Hayward and many others on the modification of natural polymers determined many significant advances in the field. 
Their contributions led to the discovery of materials such as celluloid, galalith, parkesine, rayon, vulcanised rubber and, later, Bakelite: all materials that quickly entered industrial manufacturing processes and reached households as garments components (e.g., fabrics, buttons), crockery and decorative items. In 1920, Hermann Staudinger published his seminal work "Über Polymerisation", in which he proposed that polymers were in fact long chains of atoms linked by covalent bonds. His work was debated at length, but eventually it was accepted by the scientific community. Because of this work, Staudinger was awarded the Nobel Prize in 1953. After the 1930s polymers entered a golden age during which new types were discovered and quickly given commercial applications, replacing naturally-sourced materials. This development was fuelled by an industrial sector with a strong economic drive and it was supported by a broad academic community that contributed innovative syntheses of monomers from cheaper raw material, more efficient polymerisation processes, improved techniques for polymer characterisation and advanced, theoretical understanding of polymers. Since 1953, six Nobel prizes have been awarded in the area of polymer science, excluding those for research on biological macromolecules. This further testifies to its impact on modern science and technology. As Lord Todd summarised in 1980, "I am inclined to think that the development of polymerization is perhaps the biggest thing that chemistry has done, where it has had the biggest effect on everyday life".
Physical sciences
Chemistry: General
23015
https://en.wikipedia.org/wiki/Programming%20language
Programming language
A programming language is a system of notation for writing computer programs. Programming languages are described in terms of their syntax (form) and semantics (meaning), usually defined by a formal language. Languages usually provide features such as a type system, variables, and mechanisms for error handling. An implementation of a programming language is required in order to execute programs, namely an interpreter or a compiler. An interpreter directly executes the source code, while a compiler produces an executable program. Computer architecture has strongly influenced the design of programming languages, with the most common type (imperative languages—which implement operations in a specified order) developed to perform well on the popular von Neumann architecture. While early programming languages were closely tied to the hardware, over time they have developed more abstraction to hide implementation details for greater simplicity. Thousands of programming languages—often classified as imperative, functional, logic, or object-oriented—have been developed for a wide variety of uses. Many aspects of programming language design involve tradeoffs—for example, exception handling simplifies error handling, but at a performance cost. Programming language theory is the subfield of computer science that studies the design, implementation, analysis, characterization, and classification of programming languages. Definitions Programming languages differ from natural languages in that natural languages are used for interaction between people, while programming languages are designed to allow humans to communicate instructions to machines. The term computer language is sometimes used interchangeably with "programming language". However, usage of these terms varies among authors. In one usage, programming languages are described as a subset of computer languages. Similarly, the term "computer language" may be used in contrast to the term "programming language" to describe languages used in computing but not considered programming languages – for example, markup languages. Some authors restrict the term "programming language" to Turing complete languages. Most practical programming languages are Turing complete, and as such are equivalent in what programs they can compute. Another usage regards programming languages as theoretical constructs for programming abstract machines and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources. John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats. History Early developments The first programmable computers were invented at the end of the 1940s, and with them, the first programming languages. The earliest computers were programmed in first-generation programming languages (1GLs), machine language (simple instructions that could be directly executed by the processor). This code was very difficult to debug and was not portable between different computer systems. 
In order to improve the ease of programming, assembly languages (or second-generation programming languages—2GLs) were invented, diverging from the machine language to make programs easier to understand for humans, although they did not increase portability. Initially, hardware resources were scarce and expensive, while human resources were cheaper. Therefore, cumbersome languages that were time-consuming to use but were closer to the hardware, and hence more efficient, were favored. The introduction of high-level programming languages (third-generation programming languages, or 3GLs) revolutionized programming. These languages abstracted away the details of the hardware, instead being designed to express algorithms that could be understood more easily by humans. For example, arithmetic expressions could now be written in symbolic notation and later translated into machine code that the hardware could execute. In 1957, Fortran (FORmula TRANslation) was invented. Often considered the first compiled high-level programming language, Fortran has remained in use into the twenty-first century. 1960s and 1970s Around 1960, the first mainframes—general-purpose computers—were developed, although they could only be operated by professionals and the cost was extreme. The data and instructions were input by punch cards, meaning that no input could be added while the program was running. The languages developed at this time were therefore designed for minimal interaction. After the invention of the microprocessor, computers in the 1970s became dramatically cheaper. New computers also allowed more user interaction, which was supported by newer programming languages. Lisp, implemented in 1958, was the first functional programming language. Unlike Fortran, it supported recursion and conditional expressions, and it also introduced dynamic memory management on a heap and automatic garbage collection. For the following decades, Lisp dominated artificial intelligence applications. In 1978, another functional language, ML, introduced inferred types and polymorphic parameters. After ALGOL (ALGOrithmic Language) was released in 1958 and 1960, it became the standard in computing literature for describing algorithms. Although its commercial success was limited, most popular imperative languages—including C, Pascal, Ada, C++, Java, and C#—are directly or indirectly descended from ALGOL 60. Its innovations, adopted by later programming languages, included greater portability and the first use of a context-free grammar, specified in BNF. Simula, the first language to support object-oriented programming (including subtypes, dynamic dispatch, and inheritance), also descends from ALGOL and achieved commercial success. C, another ALGOL descendant, has sustained popularity into the twenty-first century. C allows access to lower-level machine operations to a greater extent than other contemporary languages. Its power and efficiency, generated in part with flexible pointer operations, come at the cost of making it more difficult to write correct code. Prolog, designed in 1972, was the first logic programming language, communicating with a computer using formal logic notation. With logic programming, the programmer specifies a desired result and allows the interpreter to decide how to achieve it. 1980s to 2000s During the 1980s, the invention of the personal computer transformed the roles for which programming languages were used.
New languages introduced in the 1980s included C++, a superset of C that can compile C programs but also supports classes and inheritance. Ada and other new languages introduced support for concurrency. The Japanese government invested heavily in the so-called fifth-generation languages that added support for concurrency to logic programming constructs, but these languages were outperformed by other concurrency-supporting languages. Due to the rapid growth of the Internet and the World Wide Web in the 1990s, new programming languages were introduced to support Web pages and networking. Java, based on C++ and designed for increased portability across systems and security, enjoyed large-scale success because these features are essential for many Internet applications. Another development was that of dynamically typed scripting languages—Python, JavaScript, PHP, and Ruby—designed to quickly produce small programs that coordinate existing applications. Due to their integration with HTML, they have also been used for building web pages hosted on servers. 2000s to present During the 2000s, there was a slowdown in the development of new programming languages that achieved widespread popularity. One innovation was service-oriented programming, designed to exploit distributed systems whose components are connected by a network. Services are similar to objects in object-oriented programming, but run in a separate process. C# and F# cross-pollinated ideas between imperative and functional programming. After 2010, several new languages—Rust, Go, Swift, Zig, and Carbon—competed for the performance-critical software for which C had historically been used. Most of these new languages use static typing, while a few, such as Ring and Julia, use dynamic typing. Some new languages, such as Scratch, LabVIEW and PWCT, are classified as visual programming languages, and others, such as Ballerina, mix textual and visual programming. This trend has also led to projects, such as Google's Blockly, that help in developing new visual programming languages, and many game engines, such as Unreal and Unity, have added support for visual scripting. Elements Every programming language includes fundamental elements for describing data and the operations or transformations applied to them, such as adding two numbers or selecting an item from a collection. These elements are governed by syntactic and semantic rules that define their structure and meaning, respectively. Syntax A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, some programming languages are graphical, using visual relationships between symbols to specify a program. The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax. The programming language syntax is usually defined using a combination of regular expressions (for lexical structure) and Backus–Naur form (for grammatical structure).
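To make the lexical level concrete, the following sketch uses Python's standard re module to split the source text of a small Lisp-like language into tokens. The token names and patterns here are illustrative assumptions rather than part of any real implementation; the grammatical level is then specified separately, as in the grammar that follows.

```python
import re

# A sketch of a lexer for a small Lisp-like language. Token names and patterns
# are illustrative assumptions, not drawn from any real implementation.
TOKEN_PATTERNS = [
    ("NUMBER", r"[+-]?[0-9]+"),           # optional sign, then one or more digits
    ("SYMBOL", r"[A-Za-z][A-Za-z0-9]*"),  # a letter, then letters or digits
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),                   # whitespace carries no meaning here
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_PATTERNS))

def tokenize(text):
    """Yield (kind, value) pairs for the input text, or raise on unknown input."""
    pos = 0
    while pos < len(text):
        match = MASTER_RE.match(text, pos)
        if match is None:
            raise SyntaxError(f"unexpected character {text[pos]!r} at position {pos}")
        pos = match.end()
        if match.lastgroup != "SKIP":
            yield match.lastgroup, match.group()

print(list(tokenize("(a b c232 (1))")))
```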
Below is a simple grammar, based on Lisp:

expression ::= atom | list
atom ::= number | symbol
number ::= [+-]?['0'-'9']+
symbol ::= ['A'-'Z''a'-'z'].*
list ::= '(' expression* ')'

This grammar specifies the following: an expression is either an atom or a list; an atom is either a number or a symbol; a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign; a symbol is a letter followed by zero or more of any alphabetical characters (excluding whitespace); and a list is a matched pair of parentheses, with zero or more expressions inside it. The following are examples of well-formed token sequences in this grammar: 12345, () and (a b c232 (1)). Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language's rules, and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it. Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence, or the sentence may be false: "Colorless green ideas sleep furiously." is grammatically well-formed but has no generally accepted meaning. "John is a married bachelor." is grammatically well-formed but expresses a meaning that cannot be true. The following C language fragment is syntactically correct, but performs operations that are not semantically defined (the operation *p >> 4 has no meaning for a value having a complex type and p->im is not defined because the value of p is the null pointer):

complex *p = NULL;
complex abs_p = sqrt(*p >> 4 + p->im);

If the type declaration on the first line were omitted, the program would trigger an error on the undefined variable p during compilation. However, the program would still be syntactically correct since type declarations provide only semantic information. The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars. Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis an undecidable problem, and generally blur the distinction between parsing and execution. In contrast to Lisp's macro system and Perl's BEGIN blocks, which may contain general computations, C macros are merely string replacements and do not require code execution. Semantics The term semantics refers to the meaning of languages, as opposed to their form (syntax). Static semantics Static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms. For compiled languages, static semantics essentially include those semantic rules that can be checked at compile time. Examples include checking that every identifier is declared before it is used (in languages that require such declarations) or that the labels on the arms of a case statement are distinct. Many important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g.
not adding an integer to a function name), or that subroutine calls have the appropriate number and type of arguments, can be enforced by defining them as rules in a logic called a type system. Other forms of static analyses like data flow analysis may also be part of static semantics. Programming languages such as Java and C# have definite assignment analysis, a form of data flow analysis, as part of their respective static semantics. Dynamic semantics Once data has been specified, the machine must be instructed to perform operations on the data. For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements. The dynamic semantics (also known as execution semantics) of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research goes into formal semantics of programming languages, which allows execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia. Type system A data type is a set of allowable values and operations that can be performed on these values. Each programming language's type system defines which data types exist, the type of an expression, and how type equivalence and type compatibility function in the language. According to type theory, a language is fully typed if the specification of every operation defines types of data to which the operation is applicable. In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, generally sequences of bits of various lengths. In practice, while few languages are fully typed, most offer a degree of typing. Because different types (such as integers and floats) represent values differently, unexpected results will occur if one type is used when another is expected. Type checking will flag this error, usually at compile time (runtime type checking is more costly). With strong typing, type errors can always be detected unless variables are explicitly cast to a different type. Weak typing occurs when languages allow implicit casting—for example, to enable operations between variables of different types without the programmer making an explicit type conversion. The more cases in which this type coercion is allowed, the fewer type errors can be detected. Commonly supported types Early programming languages often supported only built-in, numeric types such as the integer (signed and unsigned) and floating point (to support operations on real numbers that are not integers). Most programming languages support multiple sizes of floats (often called float and double) and integers depending on the size and precision required by the programmer. Storing an integer in a type that is too small to represent it leads to integer overflow. The most common way of representing negative numbers with signed types is twos complement, although ones complement is also used. Other common types include Boolean—which is either true or false—and character—traditionally one byte, sufficient to represent all ASCII characters. Arrays are a data type whose elements, in many languages, must consist of a single type of fixed length. 
Other languages define arrays as references to data stored elsewhere and support elements of varying types. Depending on the programming language, sequences of multiple characters, called strings, may be supported as arrays of characters or their own primitive type. Strings may be of fixed or variable length, which enables greater flexibility at the cost of increased storage space and more complexity. Other data types that may be supported include lists, associative (unordered) arrays accessed via keys, records in which data is mapped to names in an ordered structure, and tuples—similar to records but without names for data fields. Pointers store memory addresses, typically referencing locations on the heap where other data is stored. The simplest user-defined type is an ordinal type whose values can be mapped onto the set of positive integers. Since the mid-1980s, most programming languages also support abstract data types, in which the representation of the data and operations are hidden from the user, who can only access an interface. The benefits of data abstraction can include increased reliability, reduced complexity, less potential for name collision, and allowing the underlying data structure to be changed without the client needing to alter its code. Static and dynamic typing In static typing, all expressions have their types determined before a program executes, typically at compile-time. Most widely used, statically typed programming languages require the types of variables to be specified explicitly. In some languages, types are implicit; one form of this is when the compiler can infer types based on context. The downside of implicit typing is the potential for errors to go undetected. Complete type inference has traditionally been associated with functional languages such as Haskell and ML. With dynamic typing, the type is not attached to the variable but only the value encoded in it. A single variable can be reused for a value of a different type. Although this provides more flexibility to the programmer, it is at the cost of lower reliability and less ability for the programming language to check for errors. Some languages allow variables of a union type to which any type of value can be assigned, in an exception to their usual static typing rules. Concurrency In computing, multiple instructions can be executed simultaneously. Many programming languages support instruction-level and subprogram-level concurrency. By the twenty-first century, additional processing power on computers was increasingly coming from the use of additional processors, which requires programmers to design software that makes use of multiple processors simultaneously to achieve improved performance. Interpreted languages such as Python and Ruby do not support the concurrent use of multiple processors. Other programming languages do support managing data shared between different threads by controlling the order of execution of key instructions via the use of semaphores, controlling access to shared data via monitor, or enabling message passing between threads. Exception handling Many programming languages include exception handlers, a section of code triggered by runtime errors that can deal with them in two main ways: Termination: shutting down and handing over control to the operating system. This option is considered the simplest. Resumption: resuming the program near where the exception occurred. 
This can trigger a repeat of the exception, unless the exception handler is able to modify values to prevent the exception from reoccurring. Some programming languages support dedicating a block of code to run regardless of whether an exception occurs before the code is reached; this is called finalization. There is a tradeoff between increased ability to handle exceptions and reduced performance. For example, even though array index errors are common, C does not check for them, for performance reasons. Although programmers can write code to catch user-defined exceptions, this can clutter a program. Standard libraries in some languages, such as C, use their return values to indicate an exception. Some languages and their compilers have the option of turning on and off error handling capability, either temporarily or permanently. Design and implementation One of the most important influences on programming language design has been computer architecture. Imperative languages, the most commonly used type, were designed to perform well on von Neumann architecture, the most common computer architecture. In von Neumann architecture, the memory stores both data and instructions, while the CPU that performs instructions on data is separate, and data must be piped back and forth to the CPU. The central elements in these languages are variables, assignment, and iteration, which is more efficient than recursion on these machines. Many programming languages have been designed from scratch, altered to meet new needs, and combined with other languages. Many have eventually fallen into disuse. The birth of programming languages in the 1950s was stimulated by the desire to make a universal programming language suitable for all machines and uses, avoiding the need to write code for different computers. By the early 1960s, the idea of a universal language was rejected due to the differing requirements of the variety of purposes for which code was written. Tradeoffs Desirable qualities of programming languages include readability, writability, and reliability. These features can reduce the cost of training programmers in a language, the amount of time needed to write and maintain programs in the language, and the cost of compiling the code, and can increase runtime performance. Although early programming languages often prioritized efficiency over readability, the latter has grown in importance since the 1970s. Having multiple operations to achieve the same result can be detrimental to readability, as is overloading operators so that the same operator can have multiple meanings. Another feature important to readability is orthogonality, limiting the number of constructs that a programmer has to learn. A syntax structure that is easily understood and special words that are immediately obvious also support readability. Writability is the ease of use for writing code to solve the desired problem. Along with the same features essential for readability, abstraction—interfaces that enable hiding details from the client—and expressivity—enabling more concise programs—additionally help the programmer write code. The earliest programming languages were tied very closely to the underlying hardware of the computer, but over time support for abstraction has increased, allowing programmers to express ideas that are more remote from simple translation into underlying hardware instructions. Because programmers are less tied to the complexity of the computer, their programs can do more computing with less effort from the programmer.
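As a small illustration of the abstraction just described, the following sketch (in Python, with a hypothetical Stack class chosen purely for illustration) hides the underlying representation behind a narrow interface, so client code depends only on the operations and not on how the data is stored.

```python
# A minimal sketch of data abstraction: clients see only the interface
# (push, pop, __len__); the hidden representation (a Python list here)
# could later be replaced without any client code changing.
class Stack:
    def __init__(self):
        self._items = []          # hidden representation

    def push(self, value):
        self._items.append(value)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def __len__(self):
        return len(self._items)

s = Stack()
s.push(1)
s.push(2)
assert s.pop() == 2 and len(s) == 1
```

Because clients only call push, pop and len, the list could later be swapped for a different structure without altering client code, which is one of the maintainability benefits of data abstraction mentioned above.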
Most programming languages come with a standard library of commonly used functions. Reliability means that a program performs as specified in a wide range of circumstances. Type checking, exception handling, and restricted aliasing (multiple variable names accessing the same region of memory) all can improve a program's reliability. Programming language design often involves tradeoffs. For example, features to improve reliability typically come at the cost of performance. Increased expressivity due to a large number of operators makes writing code easier but comes at the cost of readability. Natural-language programming has been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger W. Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs. Alan Perlis was similarly dismissive of the idea. Specification The specification of a programming language is an artifact that the language users and the implementors can use to agree upon whether a piece of source code is a valid program in that language, and if so what its behavior shall be. A programming language specification can take several forms, including the following: An explicit definition of the syntax, static semantics, and execution semantics of the language. While syntax is commonly specified using a formal grammar, semantic definitions may be written in natural language (e.g., as in the C language), or a formal semantics (e.g., as in Standard ML and Scheme specifications). A description of the behavior of a translator for the language (e.g., the C++ and Fortran specifications). The syntax and semantics of the language have to be inferred from this description, which may be written in natural or formal language. A reference or model implementation, sometimes written in the language being specified (e.g., Prolog or ANSI REXX). The syntax and semantics of the language are explicit in the behavior of the reference implementation. Implementation An implementation of a programming language is the conversion of a program into machine code that can be executed by the hardware. The machine code can then be executed with the help of the operating system. The most common form of implementation in production code is by a compiler, which translates the source code via an intermediate-level language into machine code, known as an executable. Once the program is compiled, it will run more quickly than with other implementation methods. Some compilers are able to provide further optimization to reduce memory or computation usage when the executable runs, at the cost of increased compilation time. Another implementation method is to run the program with an interpreter, which translates each line of software into machine code just before it executes. Although it can make debugging easier, the downside of interpretation is that it runs 10 to 100 times slower than a compiled executable. Hybrid interpretation methods provide some of the benefits of compilation and some of the benefits of interpretation via partial compilation. One form this takes is just-in-time compilation, in which the software is compiled ahead of time into an intermediate language, and then into machine code immediately before execution.
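To illustrate the interpretation strategy just described, the following toy sketch uses Python's standard ast module only as a ready-made parser and then evaluates the resulting syntax tree directly, handling each construct as it is encountered rather than first translating the whole program into machine code. It is an illustrative sketch under those assumptions, not a production interpreter.

```python
import ast
import operator

# A toy tree-walking interpreter for arithmetic expressions. The ast module is
# used purely as a convenient parser; names below are illustrative only.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    """Evaluate a parsed arithmetic expression node directly, node by node."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant):            # a numeric literal
        return node.value
    if isinstance(node, ast.BinOp):               # e.g. left + right
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    raise ValueError(f"unsupported construct: {ast.dump(node)}")

tree = ast.parse("2 * (3 + 4)", mode="eval")      # parsing builds a syntax tree
print(evaluate(tree))                             # prints 14
```

A compiler for the same little language would instead walk the tree once to emit lower-level instructions, which could then be executed repeatedly without re-traversing the tree.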
Proprietary languages Although most of the most commonly used programming languages have fully open specifications and implementations, many programming languages exist only as proprietary programming languages with the implementation available only from a single vendor, which may claim that such a proprietary language is their intellectual property. Proprietary programming languages are commonly domain-specific languages or internal scripting languages for a single product; some proprietary languages are used only internally within a vendor, while others are available to external users. Some programming languages exist on the border between proprietary and open; for example, Oracle Corporation asserts proprietary rights to some aspects of the Java programming language, and Microsoft's C# programming language, which has open implementations of most parts of the system, also has Common Language Runtime (CLR) as a closed environment. Many proprietary languages are widely used, in spite of their proprietary nature; examples include MATLAB, VBScript, and Wolfram Language. Some languages may make the transition from closed to open; for example, Erlang was originally Ericsson's internal programming language. Open source programming languages are particularly helpful for open science applications, enhancing the capacity for replication and code sharing. Use Thousands of different programming languages have been created, mainly in the computing field. Individual software projects commonly use five programming languages or more. Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness. When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program. On the other hand, ideas about an algorithm can be communicated to humans without the precision required for execution by using pseudocode, which interleaves natural language with code written in a programming language. A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives). Programming is the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment. Programs for a computer might be executed in a batch process without human interaction, or a user might type commands in an interactive session of an interpreter. In this case the "commands" are simply programs, whose execution is chained together. When a language can run its commands through an interpreter (such as a Unix shell or other command-line interface), without compiling, it is called a scripting language. 
Measuring language usage Determining which is the most widely used programming language is difficult since the definition of usage varies by context. One language may occupy the greater number of programmer hours, a different one may have more lines of code, and a third may consume the most CPU time. Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes; Fortran in scientific and engineering applications; Ada in aerospace, transportation, military, real-time, and embedded applications; and C in embedded applications and operating systems. Other languages are regularly used to write many different kinds of applications. Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed: counting the number of job advertisements that mention the language; the number of books sold that teach or describe the language; estimates of the number of existing lines of code written in the language, which may underestimate languages not often found in public searches; and counts of language references (i.e., to the name of the language) found using a web search engine. Combining and averaging information from various internet sites, stackify.com reported the ten most popular programming languages (in descending order by overall popularity): Java, C, C++, Python, C#, JavaScript, VB .NET, R, PHP, and MATLAB. As of June 2024, the top five programming languages as measured by the TIOBE index are Python, C++, C, Java and C#. TIOBE provides a list of the top 100 programming languages according to popularity and updates this list every month. Dialects, flavors and implementations A dialect of a programming language or a data exchange language is a (relatively small) variation or extension of the language that does not change its intrinsic nature. With languages such as Scheme and Forth, standards may be considered insufficient, inadequate, or illegitimate by implementors, so often they will deviate from the standard, making a new dialect. In other cases, a dialect is created for use in a domain-specific language, often a subset. In the Lisp world, most languages that use basic S-expression syntax and Lisp-like semantics are considered Lisp dialects, although they vary wildly as do, say, Racket and Clojure. As it is common for one language to have several dialects, it can become quite difficult for an inexperienced programmer to find the right documentation. The BASIC language has many dialects. Classifications Programming languages are often placed into four main categories: imperative, functional, logic, and object-oriented. Imperative languages are designed to implement an algorithm in a specified order; they include visual programming languages such as .NET for generating graphical user interfaces. Scripting languages, which are partly or fully interpreted rather than compiled, are sometimes considered a separate category but meet the definition of imperative languages. Functional programming languages work by successively applying functions to the given parameters. Although appreciated by many researchers for their simplicity and elegance, problems with efficiency have prevented them from being widely adopted. Logic languages are designed so that the software, rather than the programmer, decides the order in which the instructions are executed.
Object-oriented programming—whose characteristic features are data abstraction, inheritance, and dynamic dispatch—is supported by most popular imperative languages and some functional languages. Although markup languages are not programming languages, some have extensions that support limited programming. Additionally, there are special-purpose languages that are not easily compared to other programming languages.
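The paradigms in this classification can be contrasted by expressing the same computation in different styles. The sketch below does so within a single language (Python) purely for comparison, so the class and variable names are illustrative only.

```python
from functools import reduce

# The same computation -- summing the squares of some numbers -- written in three
# of the styles discussed above, within one language purely for comparison.
data = [1, 2, 3, 4]

# Imperative style: an explicit sequence of statements mutating an accumulator.
total_imperative = 0
for x in data:
    total_imperative += x * x

# Functional style: the result is built by applying functions, with no mutation.
total_functional = reduce(lambda acc, x: acc + x * x, data, 0)

# Object-oriented style: data and behaviour are bundled behind a class interface.
class SquareSummer:
    def __init__(self, values):
        self._values = list(values)

    def total(self):
        return sum(v * v for v in self._values)

total_object_oriented = SquareSummer(data).total()

assert total_imperative == total_functional == total_object_oriented == 30
```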
Technology
Programming
null
23048
https://en.wikipedia.org/wiki/Prion
Prion
A prion is a misfolded protein that induces misfolding in normal variants of the same protein, leading to cellular death. Prions are responsible for prion diseases, known as transmissible spongiform encephalopathies (TSEs), which are fatal and transmissible neurodegenerative diseases affecting both humans and animals. These proteins can misfold sporadically, due to genetic mutations, or by exposure to an already misfolded protein, leading to an abnormal three-dimensional structure that can propagate misfolding in other proteins. The term prion comes from "proteinaceous infectious particle". Unlike other infectious agents such as viruses, bacteria, and fungi, prions do not contain nucleic acids (DNA or RNA). Prions are mainly twisted isoforms of the major prion protein (PrP), a naturally occurring protein with an uncertain function. They are the hypothesized cause of various TSEs, including scrapie in sheep, chronic wasting disease (CWD) in deer, bovine spongiform encephalopathy (BSE) in cattle (mad cow disease), and Creutzfeldt–Jakob disease (CJD) in humans. All known prion diseases in mammals affect the structure of the brain or other neural tissues. These diseases are progressive, have no known effective treatment, and are invariably fatal. Most prion diseases were thought to be caused by PrP until 2015, when a prion form of alpha-synuclein was linked to multiple system atrophy (MSA). Prions are also linked to other neurodegenerative diseases like Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis (ALS), which are sometimes referred to as prion-like diseases. Prions are a type of intrinsically disordered protein that continuously changes conformation unless bound to a specific partner, such as another protein. Once a prion binds to another in the same conformation, it stabilizes and can form a fibril, leading to abnormal protein aggregates called amyloids. These amyloids accumulate in infected tissue, causing damage and cell death. The structural stability of prions makes them resistant to denaturation by chemical or physical agents, complicating disposal and containment, and raising concerns about iatrogenic spread through medical instruments. Etymology and pronunciation The word prion, coined in 1982 by Stanley B. Prusiner, is derived from protein and infection, hence prion, and is short for "proteinaceous infectious particle", in reference to its ability to self-propagate and transmit its conformation to other proteins. Its main pronunciation is "pree-on", although "pry-on", as the homographic name of the bird (prions or whalebirds) is pronounced, is also heard. In his 1982 paper introducing the term, Prusiner specified that it is "pronounced pree-on". Prion protein Structure Prions consist of a misfolded form of major prion protein (PrP), a protein that is a natural part of the bodies of humans and other animals. The PrP found in infectious prions has a different structure and is resistant to proteases, the enzymes in the body that can normally break down proteins. The normal form of the protein is called PrPC, while the infectious form is called PrPSc – the C refers to 'cellular' PrP, while the Sc refers to 'scrapie', the prototypic prion disease, occurring in sheep. PrP can also be induced to fold into other more-or-less well-defined isoforms in vitro; although their relationships to the form(s) that are pathogenic in vivo are often unclear, high-resolution structural analyses have begun to reveal structural features that correlate with prion infectivity.
PrPC PrPC is a normal protein found on the membranes of cells, "including several blood components of which platelets constitute the largest reservoir in humans." It has 209 amino acids (in humans), one disulfide bond, a molecular mass of 35–36 kDa and a mainly alpha-helical structure. Several topological forms exist: one cell-surface form anchored via glycolipid and two transmembrane forms. The normal protein is not sedimentable, meaning that it cannot be separated by centrifuging techniques. It has a complex function, which continues to be investigated. PrPC binds copper(II) ions (those in a +2 oxidation state) with high affinity. This property is thought to play a role in PrPC's anti-oxidative properties via reversible oxidation of N-terminal methionine residues to sulfoxide. Moreover, studies have suggested that, in vivo, due to PrPC's low selectivity for metallic substrates, the protein's anti-oxidative function is impaired when in contact with metals other than copper. PrPC is readily digested by proteinase K and can be liberated from the cell surface by the enzyme phosphoinositide phospholipase C (PI-PLC), which cleaves the glycophosphatidylinositol (GPI) glycolipid anchor. PrP plays an important role in cell-cell adhesion and intracellular signaling in vivo, and may therefore be involved in cell-cell communication in the brain. PrPSc The infectious isoform of PrP, known as PrPSc, or simply the prion, is able to convert normal PrPC proteins into the infectious isoform by changing their conformation, or shape; this, in turn, alters the way the proteins interconnect. PrPSc always causes prion disease. PrPSc has a higher proportion of β-sheet structure in place of the normal α-helix structure. Several highly infectious, brain-derived PrPSc structures have been discovered by cryo-electron microscopy. Another brain-derived fibril structure isolated from humans with Gerstmann–Sträussler–Scheinker syndrome has also been determined. All of the structures described in high resolution so far are amyloid fibers in which individual PrP molecules are stacked via intermolecular beta sheets. However, 2-D crystalline arrays have also been reported at lower resolution in ex vivo preparations of prions. In the prion amyloids, the glycolipid anchors and asparagine-linked glycans, when present, project outward from the lateral surfaces of the fiber cores. Often PrPSc is bound to cellular membranes, presumably via its array of glycolipid anchors; however, sometimes the fibers are dissociated from membranes and accumulate outside of cells in the form of plaques. The end of each fiber acts as a template onto which free protein molecules may attach, allowing the fiber to grow. This growth process requires complete refolding of PrPC. Different prion strains have distinct templates, or conformations, even when composed of PrP molecules of the same amino acid sequence, as occurs in a particular host genotype. Under most circumstances, only PrP molecules with an identical amino acid sequence to the infectious PrPSc are incorporated into the growing fiber. However, cross-species transmission also occurs, albeit rarely. PrPres Protease-resistant PrPSc-like protein (PrPres) is the name given to any isoform of PrPC that is structurally altered and converted into a misfolded proteinase K-resistant form. To model conversion of PrPC to PrPSc in vitro, Kocisko et al. showed that PrPSc could cause PrPC to convert to PrPres under cell-free conditions, and Soto et al.
demonstrated sustained amplification of PrPres and prion infectivity by a procedure involving cyclic amplification of protein misfolding. The term "PrPres" may refer either to protease-resistant forms of PrPSc, which are isolated from infectious tissue and associated with the transmissible spongiform encephalopathy agent, or to other protease-resistant forms of PrP that, for example, might be generated in vitro. Accordingly, unlike PrPSc, PrPres may not necessarily be infectious. Normal function of PrP The physiological function of the prion protein remains poorly understood. While data from in vitro experiments suggest many dissimilar roles, studies on PrP knockout mice have provided only limited information because these animals exhibit only minor abnormalities. In research done in mice, it was found that the cleavage of PrP in peripheral nerves causes the activation of myelin repair in Schwann cells and that the lack of PrP protein causes demyelination in those cells. PrP and regulated cell death MAVS, RIP1, and RIP3 are prion-like proteins found in other parts of the body. They also polymerise into filamentous amyloid fibers, which initiate regulated cell death in the case of a viral infection to prevent the spread of virions to other, surrounding cells. PrP and long-term memory A review of evidence in 2005 suggested that PrP may have a normal function in the maintenance of long-term memory. In addition, a 2004 study found that mice lacking genes for normal cellular PrP protein show altered hippocampal long-term potentiation. A more recent study, which also suggests why this might be the case, found that the neuronal protein CPEB has a genetic sequence similar to that of yeast prion proteins. The prion-like formation of CPEB is essential for maintaining long-term synaptic changes associated with long-term memory formation. PrP and stem cell renewal A 2006 article from the Whitehead Institute for Biomedical Research indicates that PrP expression on stem cells is necessary for an organism's self-renewal of bone marrow. The study showed that all long-term hematopoietic stem cells express PrP on their cell membrane and that hematopoietic tissues with PrP-null stem cells exhibit increased sensitivity to cell depletion. PrP and innate immunity There is some evidence that PrP may play a role in innate immunity, as the expression of PRNP, the PrP gene, is upregulated in many viral infections and PrP has antiviral properties against many viruses, including HIV. Replication The first hypothesis that tried to explain how prions replicate in a protein-only manner was the heterodimer model. This model assumed that a single PrPSc molecule binds to a single PrPC molecule and catalyzes its conversion into PrPSc. The two PrPSc molecules then come apart and can go on to convert more PrPC. However, a model of prion replication must explain both how prions propagate, and why their spontaneous appearance is so rare. Manfred Eigen showed that the heterodimer model requires PrPSc to be an extraordinarily effective catalyst, increasing the rate of the conversion reaction by a factor of around 10^15. This problem does not arise if PrPSc exists only in aggregated forms such as amyloid, where cooperativity may act as a barrier to spontaneous conversion. What is more, despite considerable effort, infectious monomeric PrPSc has never been isolated. An alternative model assumes that PrPSc exists only as fibrils, and that fibril ends bind PrPC and convert it into PrPSc.
If this were all, then the quantity of prions would increase linearly, forming ever longer fibrils. But exponential growth of both PrPSc and of the quantity of infectious particles is observed during prion disease. This can be explained by taking into account fibril breakage. A mathematical solution for the exponential growth rate resulting from the combination of fibril growth and fibril breakage has been found. The exponential growth rate depends largely on the square root of the PrPC concentration. The incubation period is determined by the exponential growth rate, and in vivo data on prion diseases in transgenic mice match this prediction. The same square root dependence is also seen in vitro in experiments with a variety of different amyloid proteins. The mechanism of prion replication has implications for designing drugs. Since the incubation period of prion diseases is so long, an effective drug does not need to eliminate all prions, but simply needs to slow down the rate of exponential growth. Models predict that the most effective way to achieve this, using a drug with the lowest possible dose, is to find a drug that binds to fibril ends and blocks them from growing any further. Researchers at Dartmouth College discovered that endogenous host cofactor molecules such as the phospholipid molecule (e.g. phosphatidylethanolamine) and polyanions (e.g. single stranded RNA molecules) are necessary to form PrPSc molecules with high levels of specific infectivity in vitro, whereas protein-only PrPSc molecules appear to lack significant levels of biological infectivity. Transmissible spongiform encephalopathies Prions cause neurodegenerative disease by aggregating extracellularly within the central nervous system to form plaques known as amyloids, which disrupt the normal tissue structure. This disruption is characterized by "holes" in the tissue with resultant spongy architecture due to the vacuole formation in the neurons. Other histological changes include astrogliosis and the absence of an inflammatory reaction. While the incubation period for prion diseases is relatively long (5 to 20 years), once symptoms appear the disease progresses rapidly, leading to brain damage and death. Neurodegenerative symptoms can include convulsions, dementia, ataxia (balance and coordination dysfunction), and behavioural or personality changes. Many different mammalian species can be affected by prion diseases, as the prion protein (PrP) is very similar in all mammals. Due to small differences in PrP between different species it is unusual for a prion disease to transmit from one species to another. The human prion disease variant Creutzfeldt–Jakob disease, however, is thought to be caused by a prion that typically infects cattle, causing bovine spongiform encephalopathy and is transmitted through infected meat. All known prion diseases are untreatable and fatal. Until 2015 all known mammalian prion diseases were considered to be caused by the prion protein, PrP; in 2015 multiple system atrophy was found to be transmissible and was hypothesized to be caused by a new prion, the misfolded form of a protein called alpha-synuclein. The endogenous, properly folded form of the prion protein is denoted PrPC (for Common or Cellular), whereas the disease-linked, misfolded form is denoted PrPSc (for Scrapie), after one of the diseases first linked to prions and neurodegeneration. 
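The fibril growth-and-breakage picture described above under Replication can be made quantitative with a minimal two-variable sketch; the rate constants below are hypothetical placeholders rather than values taken from the cited studies. Writing m for the total mass of PrPSc held in fibrils, p for the number of growing fibril ends, and C for the PrPC concentration (treated as constant), with an elongation rate constant k_e and a breakage rate constant k_b:

\frac{dm}{dt} = k_e C\, p, \qquad \frac{dp}{dt} = 2 k_b m

(each break of a fibril creates two new growing ends). The eigenvalues of this linear system are \pm\sqrt{2 k_e k_b C}, so both the fibril mass and the number of infectious particles grow exponentially with rate

\lambda = \sqrt{2 k_e k_b C},

which depends on the square root of the PrPC concentration, as stated above; the incubation period is then set roughly by 1/\lambda.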
The precise structure of the prion is not known, though they can be formed spontaneously by combining PrPC, homopolymeric polyadenylic acid, and lipids in a protein misfolding cyclic amplification (PMCA) reaction even in the absence of pre-existing infectious prions. This result is further evidence that prion replication does not require genetic information. Transmission It has been recognized that prion diseases can arise in three different ways: acquired, familial, or sporadic. It is often assumed that the diseased form directly interacts with the normal form to make it rearrange its structure. One idea, the "Protein X" hypothesis, is that an as-yet unidentified cellular protein (Protein X) enables the conversion of PrPC to PrPSc by bringing a molecule of each of the two together into a complex. The primary method of infection in animals is through ingestion. It is thought that prions may be deposited in the environment through the remains of dead animals and via urine, saliva, and other body fluids. They may then linger in the soil by binding to clay and other minerals. A University of California research team has provided evidence for the theory that infection can occur from prions in manure. And, since manure is present in many areas surrounding water reservoirs, as well as used on many crop fields, it raises the possibility of widespread transmission. Although it was initially reported in January 2011 that researchers had discovered prions spreading through airborne transmission on aerosol particles in an animal testing experiment focusing on scrapie infection in laboratory mice, this report was retracted in 2024. Preliminary evidence supporting the notion that prions can be transmitted through use of urine-derived human menopausal gonadotropin, administered for the treatment of infertility, was published in 2011. Genetic Susceptibility The majority of human prion diseases are classified as sporadic Creutzfeldt–Jakob disease (sCJD). Genetic research has identified an association between susceptibility to sCJD and a polymorphism at codon 129 in the PRNP gene, which encodes the prion protein (PrP). A homozygous methionine/methionine (MM) genotype at this position has been shown to significantly increase the risk of developing sCJD when compared to a heterozygous methionine/valine (MV) genotype. Analysis of multiple studies has shown that individuals with the MM genotype are approximately five times more likely to develop sCJD than those with the MV genotype. Prions in plants In 2015, researchers at The University of Texas Health Science Center at Houston found that plants can be a vector for prions. When researchers fed hamsters grass that grew on ground where a deer that died with chronic wasting disease (CWD) was buried, the hamsters became ill with CWD, suggesting that prions can bind to plants, which then take them up into the leaf and stem structure, where they can be eaten by herbivores, thus completing the cycle. It is thus possible that there is a progressively accumulating number of prions in the environment. Sterilization Infectious particles possessing nucleic acid are dependent upon it to direct their continued replication. Prions, however, are infectious by their effect on normal versions of the protein. Sterilizing prions, therefore, requires the denaturation of the protein to a state in which the molecule is no longer able to induce the abnormal folding of normal proteins. 
In general, prions are quite resistant to proteases, heat, ionizing radiation, and formaldehyde treatments, although their infectivity can be reduced by such treatments. Effective prion decontamination relies upon protein hydrolysis or reduction or destruction of protein tertiary structure. Examples include sodium hypochlorite, sodium hydroxide, and strongly acidic detergents such as LpH. The World Health Organization recommends any of the following three procedures for the sterilization of all heat-resistant surgical instruments to ensure that they are not contaminated with prions:

Immerse in 1N sodium hydroxide and place in a gravity-displacement autoclave at 121 °C for 30 minutes; clean; rinse in water; and then perform routine sterilization processes.
Immerse in 1N sodium hypochlorite (20,000 parts per million available chlorine) for 1 hour; transfer instruments to water; heat in a gravity-displacement autoclave at 121 °C for 1 hour; clean; and then perform routine sterilization processes.
Immerse in 1N sodium hydroxide or sodium hypochlorite (20,000 parts per million available chlorine) for 1 hour; remove and rinse in water, then transfer to an open pan and heat in a gravity-displacement (121 °C) or in a porous-load (134 °C) autoclave for 1 hour; clean; and then perform routine sterilization processes.

Treatment at 134 °C for 18 minutes in a pressurized steam autoclave has been found to be somewhat effective in deactivating the agent of disease. Ozone sterilization has been studied as a potential method for prion denaturation and deactivation. Other approaches being developed include thiourea-urea treatment, guanidinium chloride treatment, and special heat-resistant subtilisin combined with heat and detergent. A method sufficient for sterilizing prions on one material may fail on another. Renaturation of a completely denatured prion to infectious status has not yet been achieved; however, partially denatured prions can be renatured to an infective status under certain artificial conditions. Degradation resistance in nature Overwhelming evidence shows that prions resist degradation and persist in the environment for years, and proteases do not degrade them. Experimental evidence shows that unbound prions degrade over time, while soil-bound prions remain at stable or increasing levels, suggesting that prions likely accumulate in the environment. One 2015 study by US scientists found that repeated drying and wetting may render soil-bound prions less infectious, although this was dependent on the soil type they were bound to. Degradation by living beings More recent studies suggest scrapie prions can be degraded by diverse cellular machinery. Inhibition of autophagy accelerates prion accumulation, whereas encouragement of autophagy promotes prion clearance. The ubiquitin proteasome system appears to be able to degrade small enough aggregates. In addition, keratinase from B. licheniformis, alkaline serine protease from Streptomyces sp., subtilisin-like pernisine from Aeropyrum pernix, alkaline protease from Nocardiopsis sp., nattokinase from B. subtilis, engineered subtilisins from B. lentus and serine protease from three lichen species have been found to degrade PrPSc. Fungi Proteins showing prion-type behavior are also found in some fungi, which has been useful in helping to understand mammalian prions. Fungal prions do not always cause disease in their hosts. In yeast, protein refolding to the prion configuration is assisted by chaperone proteins such as Hsp104.
All known prions induce the formation of an amyloid fold, in which the protein polymerises into an aggregate consisting of tightly packed beta sheets. Amyloid aggregates are fibrils, growing at their ends, and replicate when breakage causes two growing ends to become four growing ends. The incubation period of prion diseases is determined by the exponential growth rate associated with prion replication, which is a balance between the linear growth and the breakage of aggregates. Fungal proteins exhibiting templated conformational change were discovered in the yeast Saccharomyces cerevisiae by Reed Wickner in the early 1990s. For their mechanistic similarity to mammalian prions, they were termed yeast prions. Subsequent to this, a prion has also been found in the fungus Podospora anserina. These prions behave similarly to PrP, but, in general, are nontoxic to their hosts. Susan Lindquist's group at the Whitehead Institute has argued some of the fungal prions are not associated with any disease state, but may have a useful role; however, researchers at the NIH have also provided arguments suggesting that fungal prions could be considered a diseased state. There is evidence that fungal proteins have evolved specific functions that are beneficial to the microorganism that enhance their ability to adapt to their diverse environments. Further, within yeasts, prions can act as vectors of epigenetic inheritance, transferring traits to offspring without any genomic change. Research into fungal prions has given strong support to the protein-only concept, since purified protein extracted from cells with a prion state has been demonstrated to convert the normal form of the protein into a misfolded form in vitro, and in the process, preserve the information corresponding to different strains of the prion state. It has also shed some light on prion domains, which are regions in a protein that promote the conversion into a prion. Fungal prions have helped to suggest mechanisms of conversion that may apply to all prions, though fungal prions appear distinct from infectious mammalian prions in the lack of cofactor required for propagation. The characteristic prion domains may vary between species – e.g., characteristic fungal prion domains are not found in mammalian prions. Treatments There are no effective treatments for prion diseases. Clinical trials in humans have not met with success and have been hampered by the rarity of prion diseases. Although some potential treatments have shown promise in the laboratory, none have been effective once the disease has commenced. In other diseases Prion-like domains have been found in a variety of other mammalian proteins. Some of these proteins have been implicated in the ontogeny of age-related neurodegenerative disorders such as amyotrophic lateral sclerosis (ALS), frontotemporal lobar degeneration with ubiquitin-positive inclusions (FTLD-U), Alzheimer's disease, Parkinson's disease, and Huntington's disease. They are also implicated in some forms of systemic amyloidosis including AA amyloidosis that develops in humans and animals with inflammatory and infectious diseases such as tuberculosis, Crohn's disease, rheumatoid arthritis, and HIV/AIDS. AA amyloidosis, like prion disease, may be transmissible. This has given rise to the 'prion paradigm', where otherwise harmless proteins can be converted to a pathogenic form by a small number of misfolded, nucleating proteins. The definition of a prion-like domain arises from the study of fungal prions. 
In yeast, prionogenic proteins have a portable prion domain that is both necessary and sufficient for self-templating and protein aggregation. This has been shown by attaching the prion domain to a reporter protein, which then aggregates like a known prion. Similarly, removing the prion domain from a fungal prion protein inhibits prionogenesis. This modular view of prion behaviour has led to the hypothesis that similar prion domains are present in animal proteins, in addition to PrP. These fungal prion domains have several characteristic sequence features. They are typically enriched in asparagine, glutamine, tyrosine and glycine residues, with an asparagine bias being particularly conducive to the aggregative property of prions. Historically, prionogenesis has been seen as independent of sequence and only dependent on relative residue content. However, this has been shown to be false, with the spacing of prolines and charged residues having been shown to be critical in amyloid formation. Bioinformatic screens have predicted that over 250 human proteins contain prion-like domains (PrLD). These domains are hypothesized to have the same transmissible, amyloidogenic properties of PrP and known fungal proteins. As in yeast, proteins involved in gene expression and RNA binding seem to be particularly enriched in PrLD's, compared to other classes of protein. In particular, 29 of the known 210 proteins with an RNA recognition motif also have a putative prion domain. Meanwhile, several of these RNA-binding proteins have been independently identified as pathogenic in cases of ALS, FTLD-U, Alzheimer's disease, and Huntington's disease. Role in neurodegenerative disease The pathogenicity of prions and proteins with prion-like domains is hypothesized to arise from their self-templating ability and the resulting exponential growth of amyloid fibrils. The presence of amyloid fibrils in patients with degenerative diseases has been well documented. These amyloid fibrils are seen as the result of pathogenic proteins that self-propagate and form highly stable, non-functional aggregates. While this does not necessarily imply a causal relationship between amyloid and degenerative diseases, the toxicity of certain amyloid forms and the overproduction of amyloid in familial cases of degenerative disorders supports the idea that amyloid formation is generally toxic. Specifically, aggregation of TDP-43, an RNA-binding protein, has been found in ALS/MND patients, and mutations in the genes coding for these proteins have been identified in familial cases of ALS/MND. These mutations promote the misfolding of the proteins into a prion-like conformation. The misfolded form of TDP-43 forms cytoplasmic inclusions in affected neurons, and is found depleted in the nucleus. In addition to ALS/MND and FTLD-U, TDP-43 pathology is a feature of many cases of Alzheimer's disease, Parkinson's disease and Huntington's disease. The misfolding of TDP-43 is largely directed by its prion-like domain. This domain is inherently prone to misfolding, while pathological mutations in TDP-43 have been found to increase this propensity to misfold, explaining the presence of these mutations in familial cases of ALS/MND. As in yeast, the prion-like domain of TDP-43 has been shown to be both necessary and sufficient for protein misfolding and aggregation. 
Pathogenic mutations have similarly been identified in the prion-like domains of the heterogeneous nuclear ribonucleoproteins hnRNPA2B1 and hnRNPA1 in familial cases of muscle, brain, bone and motor neuron degeneration. The wild-type forms of all of these proteins show a tendency to self-assemble into amyloid fibrils, while the pathogenic mutations exacerbate this behaviour and lead to excess accumulation. Weaponization Prions could theoretically be employed as a weaponized agent. With potential fatality rates of 100%, prions could be an effective bioweapon, sometimes described as a "biochemical weapon" because a prion is a biochemical rather than a living organism. An unfavorable aspect is prions' very long incubation periods, although persistent heavy exposure of the intestine to prions might shorten the time to onset. Another aspect of using prions in warfare is the difficulty of detection and decontamination. History In the 18th and 19th centuries, the exportation of sheep from Spain was observed to coincide with a disease called scrapie. This disease caused the affected animals to "lie down, bite at their feet and legs, rub their backs against posts, fail to thrive, stop feeding and finally become lame". The disease was also observed to have the long incubation period that is a key characteristic of transmissible spongiform encephalopathies (TSEs). Although the cause of scrapie was not known at the time, it is probably the first transmissible spongiform encephalopathy to have been recorded. In the 1950s, Carleton Gajdusek began research that eventually showed that kuru could be transmitted to chimpanzees by what was possibly a new infectious agent, work for which he won the 1976 Nobel Prize. During the 1960s, two London-based researchers, radiation biologist Tikvah Alper and biophysicist John Stanley Griffith, developed the hypothesis that the transmissible spongiform encephalopathies are caused by an infectious agent consisting solely of proteins. Earlier investigations by E.J. Field into scrapie and kuru had found evidence for the transfer of pathologically inert polysaccharides that become infectious only after transfer to the new host. Alper and Griffith wanted to account for the discovery that the mysterious infectious agent causing scrapie and Creutzfeldt–Jakob disease resisted ionizing radiation. Griffith proposed three ways in which a protein could be a pathogen. In the first hypothesis, he suggested that if the protein is the product of a normally suppressed gene, and if introducing the protein induces the gene's expression (that is, wakes the dormant gene up), then the result would be a process indistinguishable from replication, since the gene's expression would produce more of the protein, which would in turn wake the gene in other cells. His second hypothesis forms the basis of the modern prion theory: it proposed that an abnormal form of a cellular protein can convert normal proteins of the same type into its abnormal form, thus leading to replication. His third hypothesis proposed that the agent could be an antibody if the antibody was its own target antigen, as such an antibody would result in more and more antibody being produced against itself. However, Griffith acknowledged that this third hypothesis was unlikely to be true because of the lack of a detectable immune response.
Francis Crick recognized the potential significance of the Griffith protein-only hypothesis for scrapie propagation in the second edition of his "Central dogma of molecular biology" (1970): while asserting that the flow of sequence information from protein to protein, or from protein to RNA and DNA, was "precluded", he noted that Griffith's hypothesis was a potential contradiction (although it was not promoted as such by Griffith himself). The revised hypothesis was later formulated, in part, to accommodate reverse transcription (which Howard Temin and David Baltimore independently discovered in 1970). In 1982, Stanley B. Prusiner of the University of California, San Francisco, announced that his team had purified the hypothetical infectious protein, which did not appear to be present in healthy hosts, though they did not manage to isolate the protein until two years after Prusiner's announcement. The protein was named a prion, for "proteinaceous infectious particle", a term derived from the words protein and infection. When the prion was discovered, Griffith's first hypothesis, that the protein was the product of a normally silent gene, was favored by many. It was subsequently discovered, however, that the same protein exists in normal hosts but in a different form. Following this discovery, the specific protein that the prion was composed of was named the prion protein (PrP), and Griffith's second hypothesis, that an abnormal form of a host protein can convert other proteins of the same type into its abnormal form, became the dominant theory. Prusiner was awarded the Nobel Prize in Physiology or Medicine in 1997 for his research into prions.
Periodic table
The periodic table, also known as the periodic table of the elements, is an ordered arrangement of the chemical elements into rows ("periods") and columns ("groups"). It is an icon of chemistry and is widely used in physics and other sciences. It is a depiction of the periodic law, which states that when the elements are arranged in order of their atomic numbers an approximate recurrence of their properties is evident. The table is divided into four roughly rectangular areas called blocks. Elements in the same group tend to show similar chemical characteristics. Vertical, horizontal and diagonal trends characterize the periodic table. Metallic character increases going down a group and from right to left across a period. Nonmetallic character increases going from the bottom left of the periodic table to the top right. The first periodic table to become generally accepted was that of the Russian chemist Dmitri Mendeleev in 1869; he formulated the periodic law as a dependence of chemical properties on atomic mass. As not all elements were then known, there were gaps in his periodic table, and Mendeleev successfully used the periodic law to predict some properties of some of the missing elements. The periodic law was recognized as a fundamental discovery in the late 19th century. It was explained early in the 20th century, with the discovery of atomic numbers and associated pioneering work in quantum mechanics, both ideas serving to illuminate the internal structure of the atom. A recognisably modern form of the table was reached in 1945 with Glenn T. Seaborg's discovery that the actinides were in fact f-block rather than d-block elements. The periodic table and law are now a central and indispensable part of modern chemistry. The periodic table continues to evolve with the progress of science. In nature, only elements up to atomic number 94 exist; to go further, it was necessary to synthesize new elements in the laboratory. By 2010, the first 118 elements were known, thereby completing the first seven rows of the table; however, chemical characterization is still needed for the heaviest elements to confirm that their properties match their positions. New discoveries will extend the table beyond these seven rows, though it is not yet known how many more elements are possible; moreover, theoretical calculations suggest that this unknown region will not follow the patterns of the known part of the table. Some scientific discussion also continues regarding whether some elements are correctly positioned in today's table. Many alternative representations of the periodic law exist, and there is some discussion as to whether there is an optimal form of the periodic table. Structure Each chemical element has a unique atomic number (Z for "Zahl", German for "number") representing the number of protons in its nucleus. Each distinct atomic number therefore corresponds to a class of atom: these classes are called the chemical elements. The chemical elements are what the periodic table classifies and organizes. Hydrogen is the element with atomic number 1; helium, atomic number 2; lithium, atomic number 3; and so on. Each of these names can be further abbreviated by a one- or two-letter chemical symbol; those for hydrogen, helium, and lithium are respectively H, He, and Li. Neutrons do not affect the atom's chemical identity, but do affect its weight. Atoms with the same number of protons but different numbers of neutrons are called isotopes of the same chemical element. 
Naturally occurring elements usually occur as mixes of different isotopes; since each isotope usually occurs with a characteristic abundance, naturally occurring elements have well-defined atomic weights, defined as the average mass of a naturally occurring atom of that element. All elements have multiple isotopes, variants with the same number of protons but different numbers of neutrons. For example, carbon has three naturally occurring isotopes: all of its atoms have six protons and most have six neutrons as well, but about one per cent have seven neutrons, and a very small fraction have eight neutrons. Isotopes are never separated in the periodic table; they are always grouped together under a single element. When atomic mass is shown, it is usually the weighted average of naturally occurring isotopes; but if no isotopes occur naturally in significant quantities, the mass of the most stable isotope usually appears, often in parentheses. In the standard periodic table, the elements are listed in order of increasing atomic number. A new row (period) is started when a new electron shell has its first electron. Columns (groups) are determined by the electron configuration of the atom; elements with the same number of electrons in a particular subshell fall into the same columns (e.g. oxygen, sulfur, and selenium are in the same column because they all have four electrons in the outermost p-subshell). Elements with similar chemical properties generally fall into the same group in the periodic table, although in the f-block, and to some extent in the d-block, the elements in the same period tend to have similar properties, as well. Thus, it is relatively easy to predict the chemical properties of an element if one knows the properties of the elements around it. Today, 118 elements are known, the first 94 of which are known to occur naturally on Earth at present. The remaining 24, americium to oganesson (95–118), occur only when synthesized in laboratories. Of the 94 naturally occurring elements, 83 are primordial and 11 occur only in decay chains of primordial elements. A few of the latter are so rare that they were not discovered in nature, but were synthesized in the laboratory before it was determined that they do exist in nature after all: technetium (element 43), promethium (element 61), astatine (element 85), neptunium (element 93), and plutonium (element 94). No element heavier than einsteinium (element 99) has ever been observed in macroscopic quantities in its pure form, nor has astatine; francium (element 87) has only been photographed in the form of light emitted from microscopic quantities (300,000 atoms). Of the 94 natural elements, eighty have a stable isotope and one more (bismuth) has an almost-stable isotope (with a half-life of 2.01×10¹⁹ years, over a billion times the age of the universe). Two more, thorium and uranium, have isotopes undergoing radioactive decay with a half-life comparable to the age of the Earth. The stable elements plus bismuth, thorium, and uranium make up the 83 primordial elements that survived from the Earth's formation. The remaining eleven natural elements decay quickly enough that their continued trace occurrence rests primarily on being constantly regenerated as intermediate products of the decay of thorium and uranium. All 24 known artificial elements are radioactive.
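As a worked illustration of the weighted averaging described above, consider chlorine, whose two stable isotopes have masses of about 34.969 u and 36.966 u and natural abundances of about 75.76% and 24.24% (values rounded):

A_\mathrm{r}(\mathrm{Cl}) \approx 0.7576 \times 34.969 + 0.2424 \times 36.966 \approx 35.45

which matches the standard atomic weight shown for chlorine in most periodic tables.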
Group names and numbers Under an international naming convention, the groups are numbered numerically from 1 to 18 from the leftmost column (the alkali metals) to the rightmost column (the noble gases). The f-block groups are ignored in this numbering. Groups can also be named by their first element, e.g. the "scandium group" for group 3. Previously, groups were known by Roman numerals. In the United States, the Roman numerals were followed by either an "A" if the group was in the s- or p-block, or a "B" if the group was in the d-block. The Roman numerals used correspond to the last digit of today's naming convention (e.g. the group 4 elements were group IVB, and the group 14 elements were group IVA). In Europe, the lettering was similar, except that "A" was used for groups 1 through 7, and "B" was used for groups 11 through 17. In addition, groups 8, 9 and 10 used to be treated as one triple-sized group, known collectively in both notations as group VIII. In 1988, the new IUPAC (International Union of Pure and Applied Chemistry) naming system (1–18) was put into use, and the old group names (I–VIII) were deprecated. Presentation forms 32 columns 18 columns For reasons of space, the periodic table is commonly presented with the f-block elements cut out and positioned as a distinct part below the main body. This reduces the number of element columns from 32 to 18. Both forms represent the same periodic table. The form with the f-block included in the main body is sometimes called the 32-column or long form; the form with the f-block cut out the 18-column or medium-long form. The 32-column form has the advantage of showing all elements in their correct sequence, but it has the disadvantage of requiring more space. The form chosen is an editorial choice, and does not imply any change of scientific claim or statement. For example, when discussing the composition of group 3, the options can be shown equally (unprejudiced) in both forms. Periodic tables usually at least show the elements' symbols; many also provide supplementary information about the elements, either via colour-coding or as data in the cells. The above table shows the names and atomic numbers of the elements, and also their blocks, natural occurrences and standard atomic weights. For the short-lived elements without standard atomic weights, the mass number of the most stable known isotope is used instead. Other tables may include properties such as state of matter, melting and boiling points, densities, as well as provide different classifications of the elements. Electron configurations The periodic table is a graphic description of the periodic law, which states that the properties and atomic structures of the chemical elements are a periodic function of their atomic number. Elements are placed in the periodic table according to their electron configurations, the periodic recurrences of which explain the trends in properties across the periodic table. An electron can be thought of as inhabiting an atomic orbital, which characterizes the probability it can be found in any particular region around the atom. Their energies are quantised, which is to say that they can only take discrete values. Furthermore, electrons obey the Pauli exclusion principle: different electrons must always be in different states. This allows classification of the possible states an electron can take in various energy levels known as shells, divided into individual subshells, which each contain one or more orbitals. 
Each orbital can contain up to two electrons: they are distinguished by a quantity known as spin, conventionally labelled "up" or "down". In a cold atom (one in its ground state), electrons arrange themselves in such a way that the total energy they have is minimized by occupying the lowest-energy orbitals available. Only the outermost electrons (so-called valence electrons) have enough energy to break free of the nucleus and participate in chemical reactions with other atoms. The others are called core electrons. Elements are known with up to the first seven shells occupied. The first shell contains only one orbital, a spherical s orbital. As it is in the first shell, this is called the 1s orbital. This can hold up to two electrons. The second shell similarly contains a 2s orbital, and it also contains three dumbbell-shaped 2p orbitals, and can thus fill up to eight electrons (2×1 + 2×3 = 8). The third shell contains one 3s orbital, three 3p orbitals, and five 3d orbitals, and thus has a capacity of 2×1 + 2×3 + 2×5 = 18. The fourth shell contains one 4s orbital, three 4p orbitals, five 4d orbitals, and seven 4f orbitals, thus leading to a capacity of 2×1 + 2×3 + 2×5 + 2×7 = 32. Higher shells contain more types of orbitals that continue the pattern, but such types of orbitals are not filled in the ground states of known elements. The subshell types are characterized by the quantum numbers. Four numbers describe an orbital in an atom completely: the principal quantum number n, the azimuthal quantum number ℓ (the orbital type), the orbital magnetic quantum number mℓ, and the spin magnetic quantum number ms. Order of subshell filling The sequence in which the subshells are filled is given in most cases by the Aufbau principle, also known as the Madelung or Klechkovsky rule (after Erwin Madelung and Vsevolod Klechkovsky respectively). This rule was first observed empirically by Madelung, and Klechkovsky and later authors gave it theoretical justification. The shells overlap in energies, and the Madelung rule specifies the sequence of filling according to: 1s ≪ 2s < 2p ≪ 3s < 3p ≪ 4s < 3d < 4p ≪ 5s < 4d < 5p ≪ 6s < 4f < 5d < 6p ≪ 7s < 5f < 6d < 7p ≪ ... Here the sign ≪ means "much less than" as opposed to < meaning just "less than". Phrased differently, electrons enter orbitals in order of increasing n + ℓ, and if two orbitals are available with the same value of n + ℓ, the one with lower n is occupied first. In general, orbitals with the same value of n + ℓ are similar in energy, but in the case of the s-orbitals (with ℓ = 0), quantum effects raise their energy to approach that of the next n + ℓ group. Hence the periodic table is usually drawn to begin each row (often called a period) with the filling of a new s-orbital, which corresponds to the beginning of a new shell. Thus, with the exception of the first row, each period length appears twice: 2, 8, 8, 18, 18, 32, 32, ... The overlaps get quite close at the point where the d-orbitals enter the picture, and the order can shift slightly with atomic number and atomic charge. Starting from the simplest atom, this lets us build up the periodic table one at a time in order of atomic number, by considering the cases of single atoms. In hydrogen, there is only one electron, which must go in the lowest-energy orbital 1s. This electron configuration is written 1s1, where the superscript indicates the number of electrons in the subshell. 
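The ordering just quoted can be generated mechanically by listing subshells and sorting them on (n + ℓ, n), as the short illustrative sketch below shows (it prints the sequence with a plain "<" throughout, without the ≪ marks that indicate the larger gaps between shells):

# Generate the Madelung (n + l, n) subshell filling order and the shell capacities.
# Illustrative sketch; subshell labels follow the usual s, p, d, f, g notation.

L_LABELS = "spdfg"

def madelung_order(max_n=7):
    """Subshells with n = 1..max_n, sorted by (n + l, n) as the Madelung rule prescribes."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, len(L_LABELS)))]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

if __name__ == "__main__":
    print(" < ".join(f"{n}{L_LABELS[l]}" for n, l in madelung_order()))
    # Each shell n holds at most 2 * n**2 electrons: 2, 8, 18, 32, ...
    for n in range(1, 5):
        print(f"shell {n}: capacity {2 * n * n}")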
Helium adds a second electron, which also goes into 1s, completely filling the first shell and giving the configuration 1s2. Starting from the third element, lithium, the first shell is full, so its third electron occupies a 2s orbital, giving a 1s2 2s1 configuration. The 2s electron is lithium's only valence electron, as the 1s subshell is now too tightly bound to the nucleus to participate in chemical bonding to other atoms: such a shell is called a "core shell". The 1s subshell is a core shell for all elements from lithium onward. The 2s subshell is completed by the next element beryllium (1s2 2s2). The following elements then proceed to fill the 2p subshell. Boron (1s2 2s2 2p1) puts its new electron in a 2p orbital; carbon (1s2 2s2 2p2) fills a second 2p orbital; and with nitrogen (1s2 2s2 2p3) all three 2p orbitals become singly occupied. This is consistent with Hund's rule, which states that atoms usually prefer to singly occupy each orbital of the same type before filling them with the second electron. Oxygen (1s2 2s2 2p4), fluorine (1s2 2s2 2p5), and neon (1s2 2s2 2p6) then complete the already singly filled 2p orbitals; the last of these fills the second shell completely. Starting from element 11, sodium, the second shell is full, making the second shell a core shell for this and all heavier elements. The eleventh electron begins the filling of the third shell by occupying a 3s orbital, giving a configuration of 1s2 2s2 2p6 3s1 for sodium. This configuration is abbreviated [Ne] 3s1, where [Ne] represents neon's configuration. Magnesium ([Ne] 3s2) finishes this 3s orbital, and the following six elements aluminium, silicon, phosphorus, sulfur, chlorine, and argon fill the three 3p orbitals ([Ne] 3s2 3p1 through [Ne] 3s2 3p6). This creates an analogous series in which the outer shell structures of sodium through argon are analogous to those of lithium through neon, and is the basis for the periodicity of chemical properties that the periodic table illustrates: at regular but changing intervals of atomic numbers, the properties of the chemical elements approximately repeat. The first 18 elements can thus be arranged as the start of a periodic table. Elements in the same column have the same number of valence electrons and have analogous valence electron configurations: these columns are called groups. The single exception is helium, which has two valence electrons like beryllium and magnesium, but is typically placed in the column of neon and argon to emphasise that its outer shell is full. (Some contemporary authors question even this single exception, preferring to consistently follow the valence configurations and place helium over beryllium.) There are eight columns in this periodic table fragment, corresponding to at most eight outer-shell electrons. A period begins when a new shell starts filling. Finally, the colouring illustrates the blocks: the elements in the s-block (coloured red) are filling s-orbitals, while those in the p-block (coloured yellow) are filling p-orbitals. Starting the next row, for potassium and calcium the 4s subshell is the lowest in energy, and therefore they fill it. Potassium adds one electron to the 4s shell ([Ar] 4s1), and calcium then completes it ([Ar] 4s2). However, starting from scandium ([Ar] 3d1 4s2) the 3d subshell becomes the next highest in energy. The 4s and 3d subshells have approximately the same energy and they compete for filling the electrons, and so the occupation is not quite consistently filling the 3d orbitals one at a time. 
The precise energy ordering of 3d and 4s changes along the row, and also changes depending on how many electrons are removed from the atom. For example, due to the repulsion between the 3d electrons and the 4s ones, at chromium the 4s energy level becomes slightly higher than 3d, and so it becomes more profitable for a chromium atom to have a [Ar] 3d5 4s1 configuration than an [Ar] 3d4 4s2 one. A similar anomaly occurs at copper, whose atom has a [Ar] 3d10 4s1 configuration rather than the expected [Ar] 3d9 4s2. These are violations of the Madelung rule. Such anomalies, however, do not have any chemical significance: most chemistry is not about isolated gaseous atoms, and the various configurations are so close in energy to each other that the presence of a nearby atom can shift the balance. Therefore, the periodic table ignores them and considers only idealized configurations. At zinc ([Ar] 3d10 4s2), the 3d orbitals are completely filled with a total of ten electrons. Next come the 4p orbitals, completing the row, which are filled progressively by gallium ([Ar] 3d10 4s2 4p1) through krypton ([Ar] 3d10 4s2 4p6), in a manner analogous to the previous p-block elements. From gallium onwards, the 3d orbitals form part of the electronic core, and no longer participate in chemistry. The s- and p-block elements, which fill their outer shells, are called main-group elements; the d-block elements (coloured blue below), which fill an inner shell, are called transition elements (or transition metals, since they are all metals). The next 18 elements fill the 5s orbitals (rubidium and strontium), then 4d (yttrium through cadmium, again with a few anomalies along the way), and then 5p (indium through xenon). Again, from indium onward the 4d orbitals are in the core. Hence the fifth row has the same structure as the fourth. The sixth row of the table likewise starts with two s-block elements: caesium and barium. After this, the first f-block elements (coloured green below) begin to appear, starting with lanthanum. These are sometimes termed inner transition elements. As there are now not only 4f but also 5d and 6s subshells at similar energies, competition occurs once again with many irregular configurations; this resulted in some dispute about where exactly the f-block is supposed to begin, but most who study the matter agree that it starts at lanthanum in accordance with the Aufbau principle. Even though lanthanum does not itself fill the 4f subshell as a single atom, because of repulsion between electrons, its 4f orbitals are low enough in energy to participate in chemistry. At ytterbium, the seven 4f orbitals are completely filled with fourteen electrons; thereafter, a series of ten transition elements (lutetium through mercury) follows, and finally six main-group elements (thallium through radon) complete the period. From lutetium onwards the 4f orbitals are in the core, and from thallium onwards so are the 5d orbitals. The seventh row is analogous to the sixth row: 7s fills (francium and radium), then 5f (actinium to nobelium), then 6d (lawrencium to copernicium), and finally 7p (nihonium to oganesson). Starting from lawrencium the 5f orbitals are in the core, and probably the 6d orbitals join the core starting from nihonium. Again there are a few anomalies along the way: for example, as single atoms neither actinium nor thorium actually fills the 5f subshell, and lawrencium does not fill the 6d shell, but all these subshells can still become filled in chemical environments. 
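Since the table is built from these idealized configurations, the Madelung filling order can be turned directly into an idealized configuration for any atomic number. The sketch below is illustrative only; by design it reproduces the Madelung prediction, so for chromium it ends in 4s2 3d4 rather than the observed 4s1 3d5.

# Build the idealized (Madelung-rule) electron configuration for a given atomic
# number Z. Illustrative sketch: real gas-phase atoms such as chromium and copper
# deviate from this idealized filling, as discussed in the text.

L_LABELS = "spdf"

def madelung_subshells(max_n=7):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, len(L_LABELS)))]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def idealized_configuration(z):
    """Return the idealized configuration string for atomic number z (1 <= z <= 118)."""
    remaining = z
    parts = []
    for n, l in madelung_subshells():
        if remaining <= 0:
            break
        capacity = 2 * (2 * l + 1)          # each of the 2l+1 orbitals holds two electrons
        electrons = min(capacity, remaining)
        parts.append(f"{n}{L_LABELS[l]}{electrons}")
        remaining -= electrons
    return " ".join(parts)

if __name__ == "__main__":
    for z, name in [(11, "sodium"), (24, "chromium"), (26, "iron")]:
        print(f"{name} (Z = {z}): {idealized_configuration(z)}")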
For a very long time, the seventh row was incomplete as most of its elements do not occur in nature. The missing elements beyond uranium started to be synthesized in the laboratory in 1940, when neptunium was made. (However, the first element to be discovered by synthesis rather than in nature was technetium in 1937.) The row was completed with the synthesis of tennessine in 2010 (the last element oganesson had already been made in 2002), and the last elements in this seventh row were given names in 2016. This completes the modern periodic table, with all seven rows completely filled to capacity. Electron configuration table The following table shows the electron configuration of a neutral gas-phase atom of each element. Different configurations can be favoured in different chemical environments. The main-group elements have entirely regular electron configurations; the transition and inner transition elements show twenty irregularities due to the aforementioned competition between subshells close in energy level. For the last ten elements (109–118), experimental data is lacking and therefore calculated configurations have been shown instead. Completely filled subshells have been greyed out. Variations Period 1 Although the modern periodic table is standard today, the placement of the period 1 elements hydrogen and helium remains an open issue under discussion, and some variation can be found. Following their respective s1 and s2 electron configurations, hydrogen would be placed in group 1, and helium would be placed in group 2. The group 1 placement of hydrogen is common, but helium is almost always placed in group 18 with the other noble gases. The debate has to do with conflicting understandings of the extent to which chemical or electronic properties should decide periodic table placement. Like the group 1 metals, hydrogen has one electron in its outermost shell and typically loses its only electron in chemical reactions. Hydrogen has some metal-like chemical properties, being able to displace some metals from their salts. But it forms a diatomic nonmetallic gas at standard conditions, unlike the alkali metals which are reactive solid metals. This and hydrogen's formation of hydrides, in which it gains an electron, brings it close to the properties of the halogens which do the same (though it is rarer for hydrogen to form H− than H+). Moreover, the lightest two halogens (fluorine and chlorine) are gaseous like hydrogen at standard conditions. Some properties of hydrogen are not a good fit for either group: hydrogen is neither highly oxidizing nor highly reducing and is not reactive with water. Hydrogen thus has properties corresponding to both those of the alkali metals and the halogens, but matches neither group perfectly, and is thus difficult to place by its chemistry. Therefore, while the electronic placement of hydrogen in group 1 predominates, some rarer arrangements show either hydrogen in group 17, duplicate hydrogen in both groups 1 and 17, or float it separately from all groups. This last option has nonetheless been criticized by the chemist and philosopher of science Eric Scerri on the grounds that it appears to imply that hydrogen is above the periodic law altogether, unlike all the other elements. Helium is the only element that routinely occupies a position in the periodic table that is not consistent with its electronic structure. 
It has two electrons in its outermost shell, whereas the other noble gases have eight; and it is an s-block element, whereas all other noble gases are p-block elements. However it is unreactive at standard conditions, and has a full outer shell: these properties are like the noble gases in group 18, but not at all like the reactive alkaline earth metals of group 2. For these reasons helium is nearly universally placed in group 18 which its properties best match; a proposal to move helium to group 2 was rejected by IUPAC in 1988 for these reasons. Nonetheless, helium is still occasionally placed in group 2 today, and some of its physical and chemical properties are closer to the group 2 elements and support the electronic placement. Solid helium crystallises in a hexagonal close-packed structure, which matches beryllium and magnesium in group 2, but not the other noble gases in group 18. Recent theoretical developments in noble gas chemistry, in which helium is expected to show slightly less inertness than neon and to form (HeO)(LiF)2 with a structure similar to the analogous beryllium compound (but with no expected neon analogue), have resulted in more chemists advocating a placement of helium in group 2. This relates to the electronic argument, as the reason for neon's greater inertness is repulsion from its filled p-shell that helium lacks, though realistically it is unlikely that helium-containing molecules will be stable outside extreme low-temperature conditions (around 10 K). The first-row anomaly in the periodic table has additionally been cited to support moving helium to group 2. It arises because the first orbital of any type is unusually small, since unlike its higher analogues, it does not experience interelectronic repulsion from a smaller orbital of the same type. This makes the first row of elements in each block unusually small, and such elements tend to exhibit characteristic kinds of anomalies for their group. Some chemists arguing for the repositioning of helium have pointed out that helium exhibits these anomalies if it is placed in group 2, but not if it is placed in group 18: on the other hand, neon, which would be the first group 18 element if helium was removed from that spot, does exhibit those anomalies. The relationship between helium and beryllium is then argued to resemble that between hydrogen and lithium, a placement which is much more commonly accepted. For example, because of this trend in the sizes of orbitals, a large difference in atomic radii between the first and second members of each main group is seen in groups 1 and 13–17: it exists between neon and argon, and between helium and beryllium, but not between helium and neon. This similarly affects the noble gases' boiling points and solubilities in water, where helium is too close to neon, and the large difference characteristic between the first two elements of a group appears only between neon and argon. Moving helium to group 2 makes this trend consistent in groups 2 and 18 as well, by making helium the first group 2 element and neon the first group 18 element: both exhibit the characteristic properties of a kainosymmetric first element of a group. The group 18 placement of helium nonetheless remains near-universal due to its extreme inertness. Additionally, tables that float both hydrogen and helium outside all groups may rarely be encountered. 
Group 3 In many periodic tables, the f-block is shifted one element to the right, so that lanthanum and actinium become d-block elements in group 3, and Ce–Lu and Th–Lr form the f-block. Thus the d-block is split into two very uneven portions. This is a holdover from early mistaken measurements of electron configurations; modern measurements are more consistent with the form with lutetium and lawrencium in group 3, and with La–Yb and Ac–No as the f-block. The 4f shell is completely filled at ytterbium, and for that reason Lev Landau and Evgeny Lifshitz in 1948 considered it incorrect to group lutetium as an f-block element. They did not yet take the step of removing lanthanum from the d-block as well, but Jun Kondō realized in 1963 that lanthanum's low-temperature superconductivity implied the activity of its 4f shell. In 1965, David C. Hamilton linked this observation to its position in the periodic table, and argued that the f-block should be composed of the elements La–Yb and Ac–No. Since then, physical, chemical, and electronic evidence has supported this assignment. The issue was brought to wide attention by William B. Jensen in 1982, and the reassignment of lutetium and lawrencium to group 3 was supported by IUPAC reports dating from 1988 (when the 1–18 group numbers were recommended) and 2021. The variation nonetheless still exists because most textbook writers are not aware of the issue. A third form can sometimes be encountered in which the spaces below yttrium in group 3 are left empty, such as the table appearing on the IUPAC web site, but this creates an inconsistency with quantum mechanics by making the f-block 15 elements wide (La–Lu and Ac–Lr) even though only 14 electrons can fit in an f-subshell. There is moreover some confusion in the literature on which elements are then implied to be in group 3. While the 2021 IUPAC report noted that 15-element-wide f-blocks are supported by some practitioners of a specialized branch of relativistic quantum mechanics focusing on the properties of superheavy elements, the project's opinion was that such interest-dependent concerns should not have any bearing on how the periodic table is presented to "the general chemical and scientific community". Other authors focusing on superheavy elements since clarified that the "15th entry of the f-block represents the first slot of the d-block which is left vacant to indicate the place of the f-block inserts", which would imply that this form still has lutetium and lawrencium (the 15th entries in question) as d-block elements in group 3. Indeed, when IUPAC publications expand the table to 32 columns, they make this clear and place lutetium and lawrencium under yttrium in group 3. Several arguments in favour of Sc-Y-La-Ac can be encountered in the literature, but they have been challenged as being logically inconsistent. For example, it has been argued that lanthanum and actinium cannot be f-block elements because as individual gas-phase atoms, they have not begun to fill the f-subshells. But the same is true of thorium which is never disputed as an f-block element, and this argument overlooks the problem on the other end: that the f-shells complete filling at ytterbium and nobelium, matching the Sc-Y-Lu-Lr form, and not at lutetium and lawrencium as the Sc-Y-La-Ac form would have it. 
Not only are such exceptional configurations in the minority, but they have also in any case never been considered as relevant for positioning any other elements on the periodic table: in gaseous atoms, the d-shells complete their filling at copper, palladium, and gold, but it is universally accepted by chemists that these configurations are exceptional and that the d-block really ends in accordance with the Madelung rule at zinc, cadmium, and mercury. The relevant fact for placement is that lanthanum and actinium (like thorium) have valence f-orbitals that can become occupied in chemical environments, whereas lutetium and lawrencium do not: their f-shells are in the core, and cannot be used for chemical reactions. Thus the relationship between yttrium and lanthanum is only a secondary relationship between elements with the same number of valence electrons but different kinds of valence orbitals, such as that between chromium and uranium; whereas the relationship between yttrium and lutetium is primary, sharing both valence electron count and valence orbital type. Periodic trends As chemical reactions involve the valence electrons, elements with similar outer electron configurations may be expected to react similarly and form compounds with similar proportions of elements in them. Such elements are placed in the same group, and thus there tend to be clear similarities and trends in chemical behaviour as one proceeds down a group. As analogous configurations occur at regular intervals, the properties of the elements thus exhibit periodic recurrences, hence the name of the periodic table and the periodic law. These periodic recurrences were noticed well before the underlying theory that explains them was developed. Atomic radius Historically, the physical size of atoms was unknown until the early 20th century. The first calculated estimate of the atomic radius of hydrogen was published by physicist Arthur Haas in 1910 to within an order of magnitude (a factor of 10) of the accepted value, the Bohr radius (~0.529 Å). In his model, Haas used a single-electron configuration based on the classical atomic model proposed by J. J. Thomson in 1904, often called the plum-pudding model. Atomic radii (the size of atoms) are dependent on the sizes of their outermost orbitals. They generally decrease going left to right along the main-group elements, because the nuclear charge increases but the outer electrons are still in the same shell. However, going down a column, the radii generally increase, because the outermost electrons are in higher shells that are thus further away from the nucleus. The first row of each block is abnormally small, due to an effect called kainosymmetry or primogenic repulsion: the 1s, 2p, 3d, and 4f subshells have no inner analogues. For example, the 2p orbitals do not experience strong repulsion from the 1s and 2s orbitals, which have quite different angular charge distributions, and hence are not very large; but the 3p orbitals experience strong repulsion from the 2p orbitals, which have similar angular charge distributions. Thus higher s-, p-, d-, and f-subshells experience strong repulsion from their inner analogues, which have approximately the same angular distribution of charge, and must expand to avoid this. This makes significant differences arise between the small 2p elements, which prefer multiple bonding, and the larger 3p and higher p-elements, which do not. 
Similar anomalies arise for the 1s, 2p, 3d, 4f, and the hypothetical 5g elements: the degree of this first-row anomaly is highest for the s-block, is moderate for the p-block, and is less pronounced for the d- and f-blocks. In the transition elements, an inner shell is filling, but the size of the atom is still determined by the outer electrons. The increasing nuclear charge across the series and the increased number of inner electrons for shielding somewhat compensate each other, so the decrease in radius is smaller. The 4p and 5d atoms, coming immediately after new types of transition series are first introduced, are smaller than would have been expected, because the added core 3d and 4f subshells provide only incomplete shielding of the nuclear charge for the outer electrons. Hence for example gallium atoms are slightly smaller than aluminium atoms. Together with kainosymmetry, this results in an even-odd difference between the periods (except in the s-block) that is sometimes known as secondary periodicity: elements in even periods have smaller atomic radii and prefer to lose fewer electrons, while elements in odd periods (except the first) differ in the opposite direction. Thus for example many properties in the p-block show a zigzag rather than a smooth trend along the group. For example, phosphorus and antimony in odd periods of group 15 readily reach the +5 oxidation state, whereas nitrogen, arsenic, and bismuth in even periods prefer to stay at +3. A similar situation holds for the d-block, with lutetium through tungsten atoms being slightly smaller than yttrium through molybdenum atoms respectively. Thallium and lead atoms are about the same size as indium and tin atoms respectively, but from bismuth to radon the 6p atoms are larger than the analogous 5p atoms. This happens because when atomic nuclei become highly charged, special relativity is needed to describe the effect of the nucleus on the electron cloud. These relativistic effects result in heavy elements increasingly having differing properties compared to their lighter homologues in the periodic table. Spin–orbit interaction splits the p-subshell: one p-orbital is relativistically stabilized and shrunken (it fills in thallium and lead), but the other two (filling in bismuth through radon) are relativistically destabilized and expanded. Relativistic effects also explain why gold is golden and mercury is a liquid at room temperature. They are expected to become very strong in the late seventh period, potentially leading to a collapse of periodicity. Electron configurations are only clearly known up to element 108 (hassium), and experimental chemistry beyond 108 has only been done for elements 112 (copernicium) through 115 (moscovium), so the chemical characterization of the heaviest elements remains a topic of current research. The trend that atomic radii decrease from left to right is also present in ionic radii, though it is more difficult to examine because the most common ions of consecutive elements normally differ in charge. Ions with the same electron configuration decrease in size as their atomic number rises, due to increased attraction from the more positively charged nucleus: thus for example ionic radii decrease in the series Se2−, Br−, Rb+, Sr2+, Y3+, Zr4+, Nb5+, Mo6+, Tc7+.
Ions of the same element get smaller as more electrons are removed, because the attraction from the nucleus begins to outweigh the repulsion between electrons that causes electron clouds to expand: thus for example ionic radii decrease in the series V2+, V3+, V4+, V5+. Ionisation energy The first ionisation energy of an atom is the energy required to remove an electron from it. This varies with the atomic radius: ionisation energy increases left to right and down to up, because electrons that are closer to the nucleus are held more tightly and are more difficult to remove. Ionisation energy thus is minimized at the first element of each period – hydrogen and the alkali metals – and then generally rises until it reaches the noble gas at the right edge of the period. There are some exceptions to this trend, such as oxygen, where the electron being removed is paired and thus interelectronic repulsion makes it easier to remove than expected. In the transition series, the outer electrons are preferentially lost even though the inner orbitals are filling. For example, in the 3d series, the 4s electrons are lost first even though the 3d orbitals are being filled. The shielding effect of adding an extra 3d electron approximately compensates the rise in nuclear charge, and therefore the ionisation energies stay mostly constant, though there is a small increase especially at the end of each transition series. As metal atoms tend to lose electrons in chemical reactions, ionisation energy is generally correlated with chemical reactivity, although there are other factors involved as well. Electron affinity The opposite property to ionisation energy is the electron affinity, which is the energy released when adding an electron to the atom. A passing electron will be more readily attracted to an atom if it feels the pull of the nucleus more strongly, and especially if there is an available partially filled outer orbital that can accommodate it. Therefore, electron affinity tends to increase down to up and left to right. The exception is the last column, the noble gases, which have a full shell and have no room for another electron. This gives the halogens in the next-to-last column the highest electron affinities. Some atoms, like the noble gases, have no electron affinity: they cannot form stable gas-phase anions. (They can form metastable resonances if the incoming electron arrives with enough kinetic energy, but these inevitably and rapidly autodetach: for example, the lifetime of the most long-lived He− level is about 359 microseconds.) The noble gases, having high ionisation energies and no electron affinity, have little inclination towards gaining or losing electrons and are generally unreactive. Some exceptions to the trends occur: oxygen and fluorine have lower electron affinities than their heavier homologues sulfur and chlorine, because they are small atoms and hence the newly added electron would experience significant repulsion from the already present ones. For the nonmetallic elements, electron affinity likewise somewhat correlates with reactivity, but not perfectly since other factors are involved. For example, fluorine has a lower electron affinity than chlorine (because of extreme interelectronic repulsion for the very small fluorine atom), but is more reactive. 
Valence and oxidation states The valence of an element can be defined either as the number of hydrogen atoms that can combine with it to form a simple binary hydride, or as twice the number of oxygen atoms that can combine with it to form a simple binary oxide (that is, not a peroxide or a superoxide). The valences of the main-group elements are directly related to the group number: the hydrides in the main groups 1–2 and 13–17 follow the formulae MH, MH2, MH3, MH4, MH3, MH2, and finally MH. The highest oxides instead increase in valence, following the formulae M2O, MO, M2O3, MO2, M2O5, MO3, M2O7. Today the notion of valence has been extended by that of the oxidation state, which is the formal charge left on an element when all other elements in a compound have been removed as their ions. The electron configuration suggests a ready explanation from the number of electrons available for bonding; indeed, the number of valence electrons starts at 1 in group 1, and then increases towards the right side of the periodic table, only resetting at 3 whenever each new block starts. Thus in period 6, Cs–Ba have 1–2 valence electrons; La–Yb have 3–16; Lu–Hg have 3–12; and Tl–Rn have 3–8. However, towards the right side of the d- and f-blocks, the theoretical maximum corresponding to using all valence electrons is not achievable at all; the same situation affects oxygen, fluorine, and the light noble gases up to krypton. A full explanation requires considering the energy that would be released in forming compounds with different valences rather than simply considering electron configurations alone. For example, magnesium forms Mg2+ rather than Mg+ cations when dissolved in water, because the latter would spontaneously disproportionate into Mg0 and Mg2+ cations. This is because the enthalpy of hydration (surrounding the cation with water molecules) increases in magnitude with the charge and radius of the ion. In Mg+, the outermost orbital (which determines ionic radius) is still 3s, so the hydration enthalpy is small and insufficient to compensate the energy required to remove the electron; but ionizing again to Mg2+ uncovers the core 2p subshell, making the hydration enthalpy large enough to allow magnesium(II) compounds to form. For similar reasons, the common oxidation states of the heavier p-block elements (where the ns electrons become lower in energy than the np) tend to vary by steps of 2, because that is necessary to uncover an inner subshell and decrease the ionic radius (e.g. Tl+ uncovers 6s, and Tl3+ uncovers 5d, so once thallium loses two electrons it tends to lose the third one as well). Analogous arguments based on orbital hybridization can be used for the less electronegative p-block elements. For transition metals, common oxidation states are nearly always at least +2 for similar reasons (uncovering the next subshell); this holds even for the metals with anomalous dx+1s1 or dx+2s0 configurations (except for silver), because repulsion between d-electrons means that the movement of the second electron from the s- to the d-subshell does not appreciably change its ionisation energy. Because ionizing the transition metals further does not uncover any new inner subshells, their oxidation states tend to vary by steps of 1 instead. The lanthanides and late actinides generally show a stable +3 oxidation state, removing the outer s-electrons and then (usually) one electron from the (n−2)f-orbitals, that are similar in energy to ns. 
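The simple relationship between main-group number and the hydride and highest-oxide formulae quoted above can be written compactly. The sketch below is illustrative only: the min() encoding is just a compact way to reproduce the listed pattern, and it ignores the exceptions discussed in the text, such as oxygen and fluorine not reaching their nominal highest oxides.

# Compact restatement of the main-group valence pattern quoted in the text:
# hydrides MH, MH2, MH3, MH4, MH3, MH2, MH and highest oxides
# M2O, MO, M2O3, MO2, M2O5, MO3, M2O7 for groups 1-2 and 13-17.
# Illustrative only; the exceptions noted in the text are not modelled.

def hydride_valence(group):
    if group in (1, 2):
        return group
    if 13 <= group <= 17:
        return min(group - 10, 18 - group)
    raise ValueError("main groups 1-2 and 13-17 only")

def highest_oxide_valence(group):
    if group in (1, 2):
        return group
    if 13 <= group <= 17:
        return group - 10
    raise ValueError("main groups 1-2 and 13-17 only")

def formulas(group):
    h = hydride_valence(group)
    o = highest_oxide_valence(group)
    hydride = "MH" if h == 1 else f"MH{h}"
    oxide = f"MO{o // 2}" if o % 2 == 0 else f"M2O{o}"
    # Tidy the conventional spellings: MO1 -> MO, M2O1 -> M2O.
    return hydride, oxide.replace("O1", "O")

if __name__ == "__main__":
    for g in (1, 2, 13, 14, 15, 16, 17):
        hyd, oxi = formulas(g)
        print(f"group {g}: hydride {hyd}, highest oxide {oxi}")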
The common and maximum oxidation states of the d- and f-block elements tend to depend on the ionisation energies. As the energy difference between the (n−1)d and ns orbitals rises along each transition series, it becomes less energetically favourable to ionize further electrons. Thus, the early transition metal groups tend to prefer higher oxidation states, but the +2 oxidation state becomes more stable for the late transition metal groups. The highest formal oxidation state thus increases from +3 at the beginning of each d-block row, to +7 or +8 in the middle (e.g. OsO4), and then decrease to +2 at the end. The lanthanides and late actinides usually have high fourth ionisation energies and hence rarely surpass the +3 oxidation state, whereas early actinides have low fourth ionisation energies and so for example neptunium and plutonium can reach +7. The very last actinides go further than the lanthanides towards low oxidation states: mendelevium is more easily reduced to the +2 state than thulium or even europium (the lanthanide with the most stable +2 state, on account of its half-filled f-shell), and nobelium outright favours +2 over +3, in contrast to ytterbium. As elements in the same group share the same valence configurations, they usually exhibit similar chemical behaviour. For example, the alkali metals in the first group all have one valence electron, and form a very homogeneous class of elements: they are all soft and reactive metals. However, there are many factors involved, and groups can often be rather heterogeneous. For instance, hydrogen also has one valence electron and is in the same group as the alkali metals, but its chemical behaviour is quite different. The stable elements of group 14 comprise a nonmetal (carbon), two semiconductors (silicon and germanium), and two metals (tin and lead); they are nonetheless united by having four valence electrons. This often leads to similarities in maximum and minimum oxidation states (e.g. sulfur and selenium in group 16 both have maximum oxidation state +6, as in SO3 and SeO3, and minimum oxidation state −2, as in sulfides and selenides); but not always (e.g. oxygen is not known to form oxidation state +6, despite being in the same group as sulfur and selenium). Electronegativity Another important property of elements is their electronegativity. Atoms can form covalent bonds to each other by sharing electrons in pairs, creating an overlap of valence orbitals. The degree to which each atom attracts the shared electron pair depends on the atom's electronegativity – the tendency of an atom towards gaining or losing electrons. The more electronegative atom will tend to attract the electron pair more, and the less electronegative (or more electropositive) one will attract it less. In extreme cases, the electron can be thought of as having been passed completely from the more electropositive atom to the more electronegative one, though this is a simplification. The bond then binds two ions, one positive (having given up the electron) and one negative (having accepted it), and is termed an ionic bond. Electronegativity depends on how strongly the nucleus can attract an electron pair, and so it exhibits a similar variation to the other properties already discussed: electronegativity tends to fall going up to down, and rise going left to right. The alkali and alkaline earth metals are among the most electropositive elements, while the chalcogens, halogens, and noble gases are among the most electronegative ones. 
Electronegativity is generally measured on the Pauling scale, on which the most electronegative reactive atom (fluorine) is given electronegativity 4.0, and the least electronegative atom (caesium) is given electronegativity 0.79. In fact neon is the most electronegative element, but the Pauling scale cannot measure its electronegativity because it does not form covalent bonds with most elements. An element's electronegativity varies with the identity and number of the atoms it is bonded to, as well as how many electrons it has already lost: an atom becomes more electronegative when it has lost more electrons. This sometimes makes a large difference: lead in the +2 oxidation state has electronegativity 1.87 on the Pauling scale, while lead in the +4 oxidation state has electronegativity 2.33. Metallicity A simple substance is a substance formed from atoms of one chemical element. The simple substances of the more electronegative atoms tend to share electrons (form covalent bonds) with each other. They form either small molecules (like hydrogen or oxygen, whose atoms bond in pairs) or giant structures stretching indefinitely (like carbon or silicon). The noble gases simply stay as single atoms, as they already have a full shell. Substances composed of discrete molecules or single atoms are held together by weaker attractive forces between the molecules, such as the London dispersion force: as electrons move within the molecules, they create momentary imbalances of electrical charge, which induce similar imbalances on nearby molecules and create synchronized movements of electrons across many neighbouring molecules. The more electropositive atoms, however, tend to instead lose electrons, creating a "sea" of electrons engulfing cations. The outer orbitals of one atom overlap to share electrons with all its neighbours, creating a giant structure of molecular orbitals extending over all the atoms. This negatively charged "sea" pulls on all the ions and keeps them together in a metallic bond. Elements forming such bonds are often called metals; those which do not are often called nonmetals. Some elements can form multiple simple substances with different structures: these are called allotropes. For example, diamond and graphite are two allotropes of carbon. The metallicity of an element can be predicted from electronic properties. When atomic orbitals overlap during metallic or covalent bonding, they create both bonding and antibonding molecular orbitals of equal capacity, with the antibonding orbitals of higher energy. Net bonding character occurs when there are more electrons in the bonding orbitals than there are in the antibonding orbitals. Metallic bonding is thus possible when the number of electrons delocalized by each atom is less than twice the number of orbitals contributing to the overlap. This is the situation for elements in groups 1 through 13; they also have too few valence electrons to form giant covalent structures where all atoms take equivalent positions, and so almost all of them metallise. The exceptions are hydrogen and boron, which have too high an ionisation energy. Hydrogen thus forms a covalent H2 molecule, and boron forms a giant covalent structure based on icosahedral B12 clusters. In a metal, the bonding and antibonding orbitals have overlapping energies, creating a single band that electrons can freely flow through, allowing for electrical conduction. In group 14, both metallic and covalent bonding become possible. 
In a diamond crystal, covalent bonds between carbon atoms are strong, because they have a small atomic radius and thus the nucleus has more of a hold on the electrons. Therefore, the bonding orbitals that result are much lower in energy than the antibonding orbitals, and there is no overlap, so electrical conduction becomes impossible: carbon is a nonmetal. However, covalent bonding becomes weaker for larger atoms and the energy gap between the bonding and antibonding orbitals decreases. Therefore, silicon and germanium have smaller band gaps and are semiconductors at ambient conditions: electrons can cross the gap when thermally excited. (Boron is also a semiconductor at ambient conditions.) The band gap disappears in tin, so that tin and lead become metals. As the temperature rises, all nonmetals develop some semiconducting properties, to a greater or lesser extent depending on the size of the band gap. Thus metals and nonmetals may be distinguished by the temperature dependence of their electrical conductivity: a metal's conductivity lowers as temperature rises (because thermal motion makes it more difficult for the electrons to flow freely), whereas a nonmetal's conductivity rises (as more electrons may be excited to cross the gap). Elements in groups 15 through 17 have too many electrons to form giant covalent molecules that stretch in all three dimensions. For the lighter elements, the bonds in small diatomic molecules are so strong that a condensed phase is disfavoured: thus nitrogen (N2), oxygen (O2), white phosphorus and yellow arsenic (P4 and As4), sulfur and red selenium (S8 and Se8), and the stable halogens (F2, Cl2, Br2, and I2) readily form covalent molecules with few atoms. The heavier ones tend to form long chains (e.g. red phosphorus, grey selenium, tellurium) or layered structures (e.g. carbon as graphite, black phosphorus, grey arsenic, antimony, bismuth) that only extend in one or two rather than three dimensions. Both kinds of structures can be found as allotropes of phosphorus, arsenic, and selenium, although the long-chained allotropes are more stable in all three. As these structures do not use all their orbitals for bonding, they end up with bonding, nonbonding, and antibonding bands in order of increasing energy. Similarly to group 14, the band gaps shrink for the heavier elements and free movement of electrons between the chains or layers becomes possible. Thus for example black phosphorus, black arsenic, grey selenium, tellurium, and iodine are semiconductors; grey arsenic, antimony, and bismuth are semimetals (exhibiting quasi-metallic conduction, with a very small band overlap); and polonium and probably astatine are true metals. Finally, the natural group 18 elements all stay as individual atoms. The dividing line between metals and nonmetals is roughly diagonal from top left to bottom right, with the transition series appearing to the left of this diagonal (as they have many available orbitals for overlap). This is expected, as metallicity tends to be correlated with electropositivity and the willingness to lose electrons, which increases right to left and up to down. Thus the metals greatly outnumber the nonmetals. Elements near the borderline are difficult to classify: they tend to have properties that are intermediate between those of metals and nonmetals, and may have some properties characteristic of both. They are often termed semimetals or metalloids. 
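The temperature dependence of conduction in nonmetals described above is commonly summarised by the standard intrinsic-semiconductor relation (a textbook expression quoted here as an aside, not a formula from this article): up to a slowly varying prefactor, the number of carriers thermally excited across a band gap E_g at temperature T scales as

n_i \propto \exp\left(-\frac{E_g}{2 k_B T}\right),

so conductivity rises steeply with temperature when the gap is small and remains negligible for wide-gap insulators, while in metals, which have no gap, conductivity instead falls as thermal motion increases scattering.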
The term "semimetal" used in this sense should not be confused with its strict physical sense having to do with band structure: bismuth is physically a semimetal, but is generally considered a metal by chemists. The following table considers the most stable allotropes at standard conditions. The elements coloured yellow form simple substances that are well-characterised by metallic bonding. Elements coloured light blue form giant network covalent structures, whereas those coloured dark blue form small covalently bonded molecules that are held together by weaker van der Waals forces. The noble gases are coloured in violet: their molecules are single atoms and no covalent bonding occurs. Greyed-out cells are for elements which have not been prepared in sufficient quantities for their most stable allotropes to have been characterized in this way. Theoretical considerations and current experimental evidence suggest that all of those elements would metallise if they could form condensed phases, except perhaps for oganesson. Generally, metals are shiny and dense. They usually have high melting and boiling points due to the strength of the metallic bond, and are often malleable and ductile (easily stretched and shaped) because the atoms can move relative to each other without breaking the metallic bond. They conduct electricity because their electrons are free to move in all three dimensions. Similarly, they conduct heat, which is transferred by the electrons as extra kinetic energy: they move faster. These properties persist in the liquid state, as although the crystal structure is destroyed on melting, the atoms still touch and the metallic bond persists, though it is weakened. Metals tend to be reactive towards nonmetals. Some exceptions can be found to these generalizations: for example, beryllium, chromium, manganese, antimony, bismuth, and uranium are brittle (not an exhaustive list); chromium is extremely hard; gallium, rubidium, caesium, and mercury are liquid at or close to room temperature; and noble metals such as gold are chemically very inert. Nonmetals exhibit different properties. Those forming giant covalent crystals exhibit high melting and boiling points, as it takes considerable energy to overcome the strong covalent bonds. Those forming discrete molecules are held together mostly by dispersion forces, which are more easily overcome; thus they tend to have lower melting and boiling points, and many are liquids or gases at room temperature. Nonmetals are often dull-looking. They tend to be reactive towards metals, except for the noble gases, which are inert towards most substances. They are brittle when solid as their atoms are held tightly in place. They are less dense and conduct electricity poorly, because there are no mobile electrons. Near the borderline, band gaps are small and thus many elements in that region are semiconductors, such as silicon, germanium, and tellurium. Selenium has both a semiconducting grey allotrope and an insulating red allotrope; arsenic has a metallic grey allotrope, a semiconducting black allotrope, and an insulating yellow allotrope (though the last is unstable at ambient conditions). Again there are exceptions; for example, diamond has the highest thermal conductivity of all known materials, greater than any metal. It is common to designate a class of metalloids straddling the boundary between metals and nonmetals, as elements in that region are intermediate in both physical and chemical properties. 
However, no consensus exists in the literature for precisely which elements should be so designated. When such a category is used, silicon, germanium, arsenic, and tellurium are almost always included, and boron and antimony usually are; but most sources include other elements as well, without agreement on which extra elements should be added, and some others subtract from this list instead. For example, unlike all the other elements generally considered metalloids or nonmetals, antimony's only stable form has metallic conductivity. Moreover, the element resembles bismuth and, more generally, the other p-block metals in its physical and chemical behaviour. On this basis some authors have argued that it is better classified as a metal than as a metalloid. On the other hand, selenium has some semiconducting properties in its most stable form (though it also has insulating allotropes) and it has been argued that it should be considered a metalloid – though this situation also holds for phosphorus, which is a much rarer inclusion among the metalloids. Further manifestations of periodicity There are some other relationships throughout the periodic table between elements that are not in the same group, such as the diagonal relationships between elements that are diagonally adjacent (e.g. lithium and magnesium). Some similarities can also be found between the main groups and the transition metal groups, or between the early actinides and early transition metals, when the elements have the same number of valence electrons. Thus uranium somewhat resembles chromium and tungsten in group 6, as all three have six valence electrons. Relationships between elements with the same number of valence electrons but different types of valence orbital have been called secondary or isodonor relationships: they usually have the same maximum oxidation states, but not the same minimum oxidation states. For example, chlorine and manganese both have +7 as their maximum oxidation state (e.g. Cl2O7 and Mn2O7), but their respective minimum oxidation states are −1 (e.g. HCl) and −3 (K2[Mn(CO)4]). Elements with the same number of valence vacancies but different numbers of valence electrons are related by a tertiary or isoacceptor relationship: they usually have similar minimum but not maximum oxidation states. For example, hydrogen and chlorine both have −1 as their minimum oxidation state (in hydrides and chlorides), but hydrogen's maximum oxidation state is +1 (e.g. H2O) while chlorine's is +7. Many other physical properties of the elements exhibit periodic variation in accordance with the periodic law, such as melting points, boiling points, heats of fusion, heats of vaporization, atomisation energy, and so on. Similar periodic variations appear for the compounds of the elements, which can be observed by comparing hydrides, oxides, sulfides, halides, and so on. Chemical properties are more difficult to describe quantitatively, but likewise exhibit their own periodicities. Examples include the variation in the acidic and basic properties of the elements and their compounds, the stabilities of compounds, and methods of isolating the elements. Periodicity is and has been used very widely to predict the properties of unknown new elements and new compounds, and is central to modern chemistry. Classification of elements Many terms have been used in the literature to describe sets of elements that behave similarly. 
The group names alkali metal, alkaline earth metal, triel, tetrel, pnictogen, chalcogen, halogen, and noble gas are acknowledged by IUPAC; the other groups can be referred to by their number, or by their first element (e.g., group 6 is the chromium group). Some divide the p-block elements from groups 13 to 16 by metallicity, although there is neither an IUPAC definition nor a precise consensus on exactly which elements should be considered metals, nonmetals, or semi-metals (sometimes called metalloids). Neither is there a consensus on what the metals succeeding the transition metals ought to be called, with post-transition metal and poor metal being among the possibilities having been used. Some advanced monographs exclude the elements of group 12 from the transition metals on the grounds of their sometimes quite different chemical properties, but this is not a universal practice and IUPAC does not presently mention it as allowable in its Principles of Chemical Nomenclature. The lanthanides are considered to be the elements La–Lu, which are all very similar to each other: historically they included only Ce–Lu, but lanthanum became included by common usage. The rare earth elements (or rare earth metals) add scandium and yttrium to the lanthanides. Analogously, the actinides are considered to be the elements Ac–Lr (historically Th–Lr), although variation of properties in this set is much greater than within the lanthanides. IUPAC recommends the names lanthanoids and actinoids to avoid ambiguity, as the -ide suffix typically denotes a negative ion; however lanthanides and actinides remain common. With the increasing recognition of lutetium and lawrencium as d-block elements, some authors began to define the lanthanides as La–Yb and the actinides as Ac–No, matching the f-block. The transactinides or superheavy elements are the short-lived elements beyond the actinides, starting at lawrencium or rutherfordium (depending on where the actinides are taken to end). Many more categorizations exist and are used according to certain disciplines. In astrophysics, a metal is defined as any element with atomic number greater than 2, i.e. anything except hydrogen and helium. The term "semimetal" has a different definition in physics than it does in chemistry: bismuth is a semimetal by physical definitions, but chemists generally consider it a metal. A few terms are widely used, but without any very formal definition, such as "heavy metal", which has been given such a wide range of definitions that it has been criticized as "effectively meaningless". The scope of terms varies significantly between authors. For example, according to IUPAC, the noble gases extend to include the whole group, including the very radioactive superheavy element oganesson. However, among those who specialize in the superheavy elements, this is not often done: in this case "noble gas" is typically taken to imply the unreactive behaviour of the lighter elements of the group. Since calculations generally predict that oganesson should not be particularly inert due to relativistic effects, and may not even be a gas at room temperature if it could be produced in bulk, its status as a noble gas is often questioned in this context. Furthermore, national variations are sometimes encountered: in Japan, alkaline earth metals often do not include beryllium and magnesium as their behaviour is different from the heavier group 2 metals. 
History Early history In 1817, German physicist Johann Wolfgang Döbereiner began to formulate one of the earliest attempts to classify the elements. In 1829, he found that he could form some of the elements into groups of three, with the members of each group having related properties. He termed these groups triads. Chlorine, bromine, and iodine formed a triad; as did calcium, strontium, and barium; lithium, sodium, and potassium; and sulfur, selenium, and tellurium. Today, all these triads form part of modern-day groups: the halogens, alkaline earth metals, alkali metals, and chalcogens. Various chemists continued his work and were able to identify more and more relationships between small groups of elements. However, they could not build one scheme that encompassed them all. John Newlands published a letter in the Chemical News in February 1863 on the periodicity among the chemical elements. In 1864 Newlands published an article in the Chemical News showing that if the elements are arranged in the order of their atomic weights, those having consecutive numbers frequently either belong to the same group or occupy similar positions in different groups, and he pointed out that each eighth element starting from a given one is in this arrangement a kind of repetition of the first, like the eighth note of an octave in music (The Law of Octaves). However, Newlands's formulation only worked well for the main-group elements, and encountered serious problems with the others. German chemist Lothar Meyer noted the sequences of similar chemical and physical properties repeated at periodic intervals. According to him, if the atomic weights were plotted as ordinates (i.e. vertically) and the atomic volumes as abscissas (i.e. horizontally)—the curve obtained a series of maximums and minimums—the most electropositive elements would appear at the peaks of the curve in the order of their atomic weights. In 1864, a book of his was published; it contained an early version of the periodic table containing 28 elements, and classified elements into six families by their valence—for the first time, elements had been grouped according to their valence. Works on organizing the elements by atomic weight had until then been stymied by inaccurate measurements of the atomic weights. In 1868, he revised his table, but this revision was published as a draft only after his death. Mendeleev The definitive breakthrough came from the Russian chemist Dmitri Mendeleev. Although other chemists (including Meyer) had found some other versions of the periodic system at about the same time, Mendeleev was the most dedicated to developing and defending his system, and it was his system that most affected the scientific community. On 17 February 1869 (1 March 1869 in the Gregorian calendar), Mendeleev began arranging the elements and comparing them by their atomic weights. He began with a few elements, and over the course of the day his system grew until it encompassed most of the known elements. After he found a consistent arrangement, his printed table appeared in May 1869 in the journal of the Russian Chemical Society. When elements did not appear to fit in the system, he boldly predicted that either valencies or atomic weights had been measured incorrectly, or that there was a missing element yet to be discovered. In 1871, Mendeleev published a long article, including an updated form of his table, that made his predictions for unknown elements explicit. 
Mendeleev predicted the properties of three of these unknown elements in detail: as they would be missing heavier homologues of boron, aluminium, and silicon, he named them eka-boron, eka-aluminium, and eka-silicon ("eka" being Sanskrit for "one"). In 1875, the French chemist Paul-Émile Lecoq de Boisbaudran, working without knowledge of Mendeleev's prediction, discovered a new element in a sample of the mineral sphalerite, and named it gallium. He isolated the element and began determining its properties. Mendeleev, reading de Boisbaudran's publication, sent a letter claiming that gallium was his predicted eka-aluminium. Although Lecoq de Boisbaudran was initially sceptical, and suspected that Mendeleev was trying to take credit for his discovery, he later admitted that Mendeleev was correct. In 1879, the Swedish chemist Lars Fredrik Nilson discovered a new element, which he named scandium: it turned out to be eka-boron. Eka-silicon was found in 1886 by German chemist Clemens Winkler, who named it germanium. The properties of gallium, scandium, and germanium matched what Mendeleev had predicted. In 1889, Mendeleev noted at the Faraday Lecture to the Royal Institution in London that he had not expected to live long enough "to mention their discovery to the Chemical Society of Great Britain as a confirmation of the exactitude and generality of the periodic law". Even the discovery of the noble gases at the close of the 19th century, which Mendeleev had not predicted, fitted neatly into his scheme as an eighth main group. Mendeleev nevertheless had some trouble fitting the known lanthanides into his scheme, as they did not exhibit the periodic change in valencies that the other elements did. After much investigation, the Czech chemist Bohuslav Brauner suggested in 1902 that the lanthanides could all be placed together in one group on the periodic table. He named this the "asteroid hypothesis" as an astronomical analogy: just as there is an asteroid belt instead of a single planet between Mars and Jupiter, so the place below yttrium was thought to be occupied by all the lanthanides instead of just one element. Atomic number After the internal structure of the atom was probed, amateur Dutch physicist Antonius van den Broek proposed in 1913 that the nuclear charge determined the placement of elements in the periodic table. The New Zealand physicist Ernest Rutherford coined the word "atomic number" for this nuclear charge. In van den Broek's published article he illustrated the first electronic periodic table showing the elements arranged according to the number of their electrons. Rutherford confirmed in his 1914 paper that Bohr had accepted the view of van den Broek. The same year, English physicist Henry Moseley using X-ray spectroscopy confirmed van den Broek's proposal experimentally. Moseley determined the value of the nuclear charge of each element from aluminium to gold and showed that Mendeleev's ordering actually places the elements in sequential order by nuclear charge. Nuclear charge is identical to proton count and determines the value of the atomic number (Z) of each element. Using atomic number gives a definitive, integer-based sequence for the elements. Moseley's research immediately resolved discrepancies between atomic weight and chemical properties; these were cases such as tellurium and iodine, where atomic number increases but atomic weight decreases. 
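The quantitative relationship behind Moseley's method can be stated compactly. In its standard empirical form (Moseley's law), the frequency ν of a characteristic X-ray line such as Kα obeys

\sqrt{\nu} = k(Z - \sigma)

where k is a constant for that line and σ is a small screening constant (roughly 1 for Kα). Because √ν increases strictly with Z, plotting it for successive elements immediately exposes misordered pairs such as tellurium and iodine, as well as gaps left by elements not yet discovered.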
Although Moseley was soon killed in World War I, the Swedish physicist Manne Siegbahn continued his work up to uranium, and established that it was the element with the highest atomic number then known (92). Based on Moseley and Siegbahn's research, it was also known which atomic numbers corresponded to missing elements yet to be found: 43, 61, 72, 75, 85, and 87. (Element 75 had in fact already been found by Japanese chemist Masataka Ogawa in 1908 and named nipponium, but he mistakenly assigned it as element 43 instead of 75 and so his discovery was not generally recognized until later. The contemporarily accepted discovery of element 75 came in 1925, when Walter Noddack, Ida Tacke, and Otto Berg independently rediscovered it and gave it its present name, rhenium.) The dawn of atomic physics also clarified the situation of isotopes. In the decay chains of the primordial radioactive elements thorium and uranium, it soon became evident that there were many apparent new elements that had different atomic weights but exactly the same chemical properties. In 1913, Frederick Soddy coined the term "isotope" to describe this situation, and considered isotopes to merely be different forms of the same chemical element. This furthermore clarified discrepancies such as tellurium and iodine: tellurium's natural isotopic composition is weighted towards heavier isotopes than iodine's, but tellurium has a lower atomic number. Electron shells The Danish physicist Niels Bohr applied Max Planck's idea of quantization to the atom. He concluded that the energy levels of electrons were quantised: only a discrete set of stable energy states were allowed. Bohr then attempted to understand periodicity through electron configurations, surmising in 1913 that the inner electrons should be responsible for the chemical properties of the element. In 1913, he produced the first electronic periodic table based on a quantum atom. Bohr called his electron shells "rings" in 1913: atomic orbitals within shells did not exist at the time of his planetary model. Bohr explains in Part 3 of his famous 1913 paper that the maximum electrons in a shell is eight, writing, "We see, further, that a ring of electrons cannot rotate in a single ring round a nucleus of charge ne unless < 8." For smaller atoms, the electron shells would be filled as follows: "rings of electrons will only join if they contain equal numbers of electrons; and that accordingly the numbers of electrons on inner rings will only be 2, 4, 8." However, in larger atoms the innermost shell would contain eight electrons: "on the other hand, the periodic system of the elements strongly suggests that already in neon = 10 an inner ring of eight electrons will occur." His proposed electron configurations for the atoms (shown to the right) mostly do not accord with those now known. They were improved further after the work of Arnold Sommerfeld and Edmund Stoner discovered more quantum numbers. The first one to systematically expand and correct the chemical potentials of Bohr's atomic theory was Walther Kossel in 1914 and in 1916. Kossel explained that in the periodic table new elements would be created as electrons were added to the outer shell. In Kossel's paper, he writes: This leads to the conclusion that the electrons, which are added further, should be put into concentric rings or shells, on each of which ... only a certain number of electrons—namely, eight in our case—should be arranged. 
As soon as one ring or shell is completed, a new one has to be started for the next element; the number of electrons, which are most easily accessible, and lie at the outermost periphery, increases again from element to element and, therefore, in the formation of each new shell the chemical periodicity is repeated.Translated in Helge Kragh, Aarhus, Lars Vegard, Atomic Structure, and the Periodic System, Bull. Hist. Chem., VOLUME 37, Number 1 (2012), p.43. In a 1919 paper, Irving Langmuir postulated the existence of "cells" which we now call orbitals, which could each only contain two electrons each, and these were arranged in "equidistant layers" which we now call shells. He made an exception for the first shell to only contain two electrons. The chemist Charles Rugeley Bury suggested in 1921 that eight and eighteen electrons in a shell form stable configurations. Bury proposed that the electron configurations in transitional elements depended upon the valence electrons in their outer shell. He introduced the word transition to describe the elements now known as transition metals or transition elements. Bohr's theory was vindicated by the discovery of element 72: Georges Urbain claimed to have discovered it as the rare earth element celtium, but Bury and Bohr had predicted that element 72 could not be a rare earth element and had to be a homologue of zirconium. Dirk Coster and Georg von Hevesy searched for the element in zirconium ores and found element 72, which they named hafnium after Bohr's hometown of Copenhagen (Hafnia in Latin). Urbain's celtium proved to be simply purified lutetium (element 71). Hafnium and rhenium thus became the last stable elements to be discovered. Prompted by Bohr, Wolfgang Pauli took up the problem of electron configurations in 1923. Pauli extended Bohr's scheme to use four quantum numbers, and formulated his exclusion principle which stated that no two electrons could have the same four quantum numbers. This explained the lengths of the periods in the periodic table (2, 8, 18, and 32), which corresponded to the number of electrons that each shell could occupy. In 1925, Friedrich Hund arrived at configurations close to the modern ones. As a result of these advances, periodicity became based on the number of chemically active or valence electrons rather than by the valences of the elements. The Aufbau principle that describes the electron configurations of the elements was first empirically observed by Erwin Madelung in 1926, though the first to publish it was Vladimir Karapetoff in 1930. In 1961, Vsevolod Klechkovsky derived the first part of the Madelung rule (that orbitals fill in order of increasing n + ℓ) from the Thomas–Fermi model; the complete rule was derived from a similar potential in 1971 by Yury N. Demkov and Valentin N. Ostrovsky. The quantum theory clarified the transition metals and lanthanides as forming their own separate groups, transitional between the main groups, although some chemists had already proposed tables showing them this way before then: the English chemist Henry Bassett did so in 1892, the Danish chemist Julius Thomsen in 1895, and the Swiss chemist Alfred Werner in 1905. Bohr used Thomsen's form in his 1922 Nobel Lecture; Werner's form is very similar to the modern 32-column form. In particular, this supplanted Brauner's asteroidal hypothesis. 
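The Madelung rule described above is simple enough to state as a short sketch: order the subshells by n + ℓ, breaking ties by smaller n. The snippet below is only an illustration of the rule, using the conventional subshell letters and the usual capacity of 2(2ℓ + 1) electrons per subshell:

# Sketch of the Madelung (n + l) rule: subshells fill in order of increasing
# n + l, with ties broken by smaller n; each holds 2*(2l + 1) electrons.
LETTERS = "spdfghik"

def madelung_order(max_n):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

print(" ".join(f"{n}{LETTERS[l]}" for n, l in madelung_order(8)))
# 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p 8s 5g 6f 7d 8p ...

Reading the output in blocks that end with each p subshell (after the initial 1s) reproduces the familiar period lengths 2, 8, 8, 18, 18, 32, 32.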
The exact position of the lanthanides, and thus the composition of group 3, remained under dispute for decades longer because their electron configurations were initially measured incorrectly. On chemical grounds Bassett, Werner, and Bury grouped scandium and yttrium with lutetium rather than lanthanum (the former two left an empty space below yttrium as lutetium had not yet been discovered). Hund assumed in 1927 that all the lanthanide atoms had configuration [Xe]4f0−145d16s2, on account of their prevailing trivalency. It is now known that the relationship between chemistry and electron configuration is more complicated than that. Early spectroscopic evidence seemed to confirm these configurations, and thus the periodic table was structured to have group 3 as scandium, yttrium, lanthanum, and actinium, with fourteen f-elements breaking up the d-block between lanthanum and hafnium. But it was later discovered that this is only true for four of the fifteen lanthanides (lanthanum, cerium, gadolinium, and lutetium), and that the other lanthanide atoms do not have a d-electron. In particular, ytterbium completes the 4f shell and thus Soviet physicists Lev Landau and Evgeny Lifshitz noted in 1948 that lutetium is correctly regarded as a d-block rather than an f-block element; that bulk lanthanum is an f-metal was first suggested by Jun Kondō in 1963, on the grounds of its low-temperature superconductivity. This clarified the importance of looking at low-lying excited states of atoms that can play a role in chemical environments when classifying elements by block and positioning them on the table. Many authors subsequently rediscovered this correction based on physical, chemical, and electronic concerns and applied it to all the relevant elements, thus making group 3 contain scandium, yttrium, lutetium, and lawrencium and having lanthanum through ytterbium and actinium through nobelium as the f-block rows: this corrected version achieves consistency with the Madelung rule and vindicates Bassett, Werner, and Bury's initial chemical placement. In 1988, IUPAC released a report supporting this composition of group 3, a decision that was reaffirmed in 2021. Variation can still be found in textbooks on the composition of group 3, and some argumentation against this format is still published today, but chemists and physicists who have considered the matter largely agree on group 3 containing scandium, yttrium, lutetium, and lawrencium and challenge the counterarguments as being inconsistent. Synthetic elements By 1936, the pool of missing elements from hydrogen to uranium had shrunk to four: elements 43, 61, 85, and 87 remained missing. Element 43 eventually became the first element to be synthesized artificially via nuclear reactions rather than discovered in nature. It was discovered in 1937 by Italian chemists Emilio Segrè and Carlo Perrier, who named their discovery technetium, after the Greek word for "artificial". Elements 61 (promethium) and 85 (astatine) were likewise produced artificially in 1945 and 1940 respectively; element 87 (francium) became the last element to be discovered in nature, by French chemist Marguerite Perey in 1939. The elements beyond uranium were likewise discovered artificially, starting with Edwin McMillan and Philip Abelson's 1940 discovery of neptunium (via bombardment of uranium with neutrons). Glenn T. 
Seaborg and his team at the Lawrence Berkeley National Laboratory (LBNL) continued discovering transuranium elements, starting with plutonium in 1941, and discovered that contrary to previous thinking, the elements from actinium onwards were congeners of the lanthanides rather than transition metals. Bassett (1892), Werner (1905), and the French engineer Charles Janet (1928) had previously suggested this, but their ideas did not then receive general acceptance. Seaborg thus called them the actinides. Elements up to 101 (named mendelevium in honour of Mendeleev) were synthesized up to 1955, either through neutron or alpha-particle irradiation, or in nuclear explosions in the cases of 99 (einsteinium) and 100 (fermium). A significant controversy arose with elements 102 through 106 in the 1960s and 1970s, as competition arose between the LBNL team (now led by Albert Ghiorso) and a team of Soviet scientists at the Joint Institute for Nuclear Research (JINR) led by Georgy Flyorov. Each team claimed discovery, and in some cases each proposed their own name for the element, creating an element naming controversy that lasted decades. These elements were made by bombardment of actinides with light ions. IUPAC at first adopted a hands-off approach, preferring to wait and see if a consensus would be forthcoming. But as it was also the height of the Cold War, it became clear that this would not happen. As such, IUPAC and the International Union of Pure and Applied Physics (IUPAP) created a Transfermium Working Group (TWG, fermium being element 100) in 1985 to set out criteria for discovery, which were published in 1991. After some further controversy, these elements received their final names in 1997, including seaborgium (106) in honour of Seaborg. The TWG's criteria were used to arbitrate later element discovery claims from LBNL and JINR, as well as from research institutes in Germany (GSI) and Japan (Riken). Currently, consideration of discovery claims is performed by a IUPAC/IUPAP Joint Working Party. After priority was assigned, the elements were officially added to the periodic table, and the discoverers were invited to propose their names. By 2016, this had occurred for all elements up to 118, therefore completing the periodic table's first seven rows. The discoveries of elements beyond 106 were made possible by techniques devised by Yuri Oganessian at the JINR: cold fusion (bombardment of lead and bismuth by heavy ions) made possible the 1981–2004 discoveries of elements 107 through 112 at GSI and 113 at Riken, and he led the JINR team (in collaboration with American scientists) to discover elements 114 through 118 using hot fusion (bombardment of actinides by calcium ions) in 1998–2010. The heaviest known element, oganesson (118), is named in Oganessian's honour. Element 114 is named flerovium in honour of his predecessor and mentor Flyorov. In celebration of the periodic table's 150th anniversary, the United Nations declared the year 2019 as the International Year of the Periodic Table, celebrating "one of the most significant achievements in science". The discovery criteria set down by the TWG were updated in 2020 in response to experimental and theoretical progress that had not been foreseen in 1991. Today, the periodic table is among the most recognisable icons of chemistry. IUPAC is involved today with many processes relating to the periodic table: the recognition and naming of new elements, recommending group numbers and collective names, and the updating of atomic weights. 
Future extension beyond the seventh period The most recently named elements – nihonium (113), moscovium (115), tennessine (117), and oganesson (118) – completed the seventh row of the periodic table. Future elements would have to begin an eighth row. These elements may be referred to either by their atomic numbers (e.g. "element 164"), or by the IUPAC systematic element names adopted in 1978, which directly relate to the atomic numbers (e.g. "unhexquadium" for element 164, derived from Latin unus "one", Greek hexa "six", Latin quadra "four", and the traditional -ium suffix for metallic elements). All attempts to synthesize such elements have failed so far. An attempt to make element 119 has been ongoing since 2018 at the Riken research institute in Japan. The LBNL in the United States, the JINR in Russia, and the Heavy Ion Research Facility in Lanzhou (HIRFL) in China also plan to make their own attempts at synthesizing the first few period 8 elements. If the eighth period followed the pattern set by the earlier periods, then it would contain fifty elements, filling the 8s, , 6f, 7d, and finally 8p subshells in that order. But by this point, relativistic effects should result in significant deviations from the Madelung rule. Various different models have been suggested for the configurations of eighth-period elements, as well as how to show the results in a periodic table. All agree that the eighth period should begin like the previous ones with two 8s elements, 119 and 120. However, after that the massive energetic overlaps between the , 6f, 7d, and 8p subshells means that they all begin to fill together, and it is not clear how to separate out specific and 6f series. Elements 121 through 156 thus do not fit well as chemical analogues of any previous group in the earlier parts of the table, although they have sometimes been placed as , 6f, and other series to formally reflect their electron configurations. Eric Scerri has raised the question of whether an extended periodic table should take into account the failure of the Madelung rule in this region, or if such exceptions should be ignored. The shell structure may also be fairly formal at this point: already the electron distribution in an oganesson atom is expected to be rather uniform, with no discernible shell structure. The situation from elements 157 to 172 should return to normalcy and be more reminiscent of the earlier rows. The heavy p-shells are split by the spin–orbit interaction: one p-orbital (p1/2) is more stabilized, and the other two (p3/2) are destabilized. (Such shifts in the quantum numbers happen for all types of shells, but it makes the biggest difference to the order for the p-shells.) It is likely that by element 157, the filled 8s and 8p1/2 shells with four electrons in total have sunk into the core. Beyond the core, the next orbitals are 7d and 9s at similar energies, followed by 9p1/2 and 8p3/2 at similar energies, and then a large gap. Thus, the 9s and 9p1/2 orbitals in essence replace the 8s and 8p1/2 ones, making elements 157–172 probably chemically analogous to groups 3–18: for example, element 164 would appear two places below lead in group 14 under the usual pattern, but is calculated to be very analogous to palladium in group 10 instead. Thus, it takes fifty-four elements rather than fifty to reach the next noble element after 118. 
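The 1978 systematic names mentioned above are generated by a fixed recipe, so they can be produced mechanically. A minimal sketch using the published digit roots (nil, un, bi, tri, quad, pent, hex, sept, oct, enn), the -ium ending, and the standard elision of a doubled "i" or "n"; treat it as an illustration, not an official tool:

# Sketch of IUPAC systematic placeholder names: spell each digit of the atomic
# number with its root, append "-ium", and elide "ii" -> "i" and "nnn" -> "nn".
ROOTS = ["nil", "un", "bi", "tri", "quad", "pent", "hex", "sept", "oct", "enn"]

def systematic_name(z):
    name = "".join(ROOTS[int(d)] for d in str(z)) + "ium"
    return name.replace("ii", "i").replace("nnn", "nn").capitalize()

def systematic_symbol(z):
    return "".join(ROOTS[int(d)][0] for d in str(z)).capitalize()

print(systematic_name(164), systematic_symbol(164))  # Unhexquadium Uhq
print(systematic_name(119), systematic_symbol(119))  # Ununennium Uue

These names and three-letter symbols are placeholders only; once a claimed discovery is accepted, the element receives a permanent name, as happened for elements 113 through 118.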
However, while these conclusions about elements 157 through 172's chemistry are generally agreed by models, there is disagreement on whether the periodic table should be drawn to reflect chemical analogies, or if it should reflect likely formal electron configurations, which should be quite different from earlier periods and are not agreed between sources. Discussion about the format of the eighth row thus continues. Beyond element 172, calculation is complicated by the 1s electron energy level becoming imaginary. Such a situation does have a physical interpretation and does not in itself pose an electronic limit to the periodic table, but the correct way to incorporate such states into multi-electron calculations is still an open question needing to be solved to calculate the periodic table's structure beyond this point. Nuclear stability will likely prove a decisive factor constraining the number of possible elements. It depends on the balance between the electric repulsion between protons and the strong force binding protons and neutrons together. Protons and neutrons are arranged in shells, just like electrons, and so a closed shell can significantly increase stability: the known superheavy nuclei exist because of such a shell closure, probably at around 114–126 protons and 184 neutrons. They are probably close to a predicted island of stability, where superheavy nuclides should be more long-lived than expected: predictions for the longest-lived nuclides on the island range from microseconds to millions of years. It should nonetheless be noted that these are essentially extrapolations into an unknown part of the chart of nuclides, and systematic model uncertainties need to be taken into account. As the closed shells are passed, the stabilizing effect should vanish. Thus, superheavy nuclides with more than 184 neutrons are expected to have much shorter lifetimes, spontaneously fissioning within 10−15 seconds. If this is so, then it would not make sense to consider them chemical elements: [IUPAC/IUPAP theorizes and recommends] an element to exist only if the nucleus lives longer than 10−14 seconds, the time needed for it to gather an electron cloud. Nonetheless, theoretical estimates of half-lives are very model-dependent, ranging over many orders of magnitude. The extreme repulsion between protons is predicted to result in exotic nuclear topologies, with bubbles, rings, and tori expected: this further complicates extrapolation. It is not clear if any further-out shell closures exist, due to an expected smearing out of distinct nuclear shells (as is already expected for the electron shells at oganesson). Furthermore, even if later shell closures exist, it is not clear if they would allow such heavy elements to exist. As such, it may be that the periodic table practically ends around element 120, as elements become too short-lived to observe, and then too short-lived to have chemistry; the era of discovering new elements would thus be close to its end. If another proton shell closure beyond 126 does exist, then it probably occurs around 164; thus the region where periodicity fails more or less matches the region of instability between the shell closures. Alternatively, quark matter may become stable at high mass numbers, in which the nucleus is composed of freely flowing up and down quarks instead of binding them into protons and neutrons; this would create a continent of stability instead of an island. 
Other effects may come into play: for example, in very heavy elements the 1s electrons are likely to spend a significant amount of time so close to the nucleus that they are actually inside it, which would make them vulnerable to electron capture. Even if eighth-row elements can exist, producing them is likely to be difficult, and it should become even more difficult as atomic number rises. Although the 8s elements 119 and 120 are expected to be reachable with present means, the elements beyond that are expected to require new technology, if they can be produced at all. Experimentally characterizing these elements chemically would also pose a great challenge. Alternative periodic tables The periodic law may be represented in multiple ways, of which the standard periodic table is only one. Within 100 years of the appearance of Mendeleev's table in 1869, Edward G. Mazurs had collected an estimated 700 different published versions of the periodic table. Many forms retain the rectangular structure, including Charles Janet's left-step periodic table (pictured below), and the modernised form of Mendeleev's original 8-column layout that is still common in Russia. Other periodic table formats have been shaped much more exotically, such as spirals (Otto Theodor Benfey's pictured to the right), circles and triangles. Alternative periodic tables are often developed to highlight or emphasize chemical or physical properties of the elements that are not as apparent in traditional periodic tables, with different ones skewed more towards emphasizing chemistry or physics at either end. The standard form, which remains by far the most common, is somewhere in the middle. The many different forms of the periodic table have prompted the questions of whether there is an optimal or definitive form of the periodic table, and if so, what it might be. There are no current consensus answers to either question. Janet's left-step table is being increasingly discussed as a candidate for being the optimal or most fundamental form; Scerri has written in support of it, as it clarifies helium's nature as an s-block element, increases regularity by having all period lengths repeated, faithfully follows Madelung's rule by making each period correspond to one value of + , and regularises atomic number triads and the first-row anomaly trend. While he notes that its placement of helium atop the alkaline earth metals can be seen a disadvantage from a chemical perspective, he counters this by appealing to the first-row anomaly, pointing out that the periodic table "fundamentally reduces to quantum mechanics", and that it is concerned with "abstract elements" and hence atomic properties rather than macroscopic properties.
Potassium
Potassium is a chemical element; it has symbol K (from Neo-Latin kalium) and atomic number 19. It is a silvery white metal that is soft enough to easily cut with a knife. Potassium metal reacts rapidly with atmospheric oxygen to form flaky white potassium peroxide in only seconds of exposure. It was first isolated from potash, the ashes of plants, from which its name derives. In the periodic table, potassium is one of the alkali metals, all of which have a single valence electron in the outer electron shell, which is easily removed to create an ion with a positive charge (which combines with anions to form salts). In nature, potassium occurs only in ionic salts. Elemental potassium reacts vigorously with water, generating sufficient heat to ignite hydrogen emitted in the reaction, and burning with a lilac-colored flame. It is found dissolved in seawater (which is 0.04% potassium by weight), and occurs in many minerals such as orthoclase, a common constituent of granites and other igneous rocks. Potassium is chemically very similar to sodium, the previous element in group 1 of the periodic table. They have a similar first ionization energy, which allows each atom to give up its sole outer electron. It was first suggested in 1702 that they were distinct elements that combine with the same anions to make similar salts, which was demonstrated in 1807 when elemental potassium was first isolated via electrolysis. Naturally occurring potassium is composed of three isotopes, of which potassium-40 (40K) is radioactive. Traces of 40K are found in all potassium, and it is the most common radioisotope in the human body. Potassium ions are vital for the functioning of all living cells. The transfer of potassium ions across nerve cell membranes is necessary for normal nerve transmission; potassium deficiency and excess can each result in numerous signs and symptoms, including an abnormal heart rhythm and various electrocardiographic abnormalities. Fresh fruits and vegetables are good dietary sources of potassium. The body responds to the influx of dietary potassium, which raises serum potassium levels, by shifting potassium from outside to inside cells and increasing potassium excretion by the kidneys. Most industrial applications of potassium exploit the high solubility of its compounds in water, such as saltwater soap. Heavy crop production rapidly depletes the soil of potassium, and this can be remedied with agricultural fertilizers containing potassium; such fertilizers account for 95% of global potassium chemical production.
Etymology
The English name for the element potassium comes from the word potash, which refers to an early method of extracting various potassium salts: placing in a pot the ash of burnt wood or tree leaves, adding water, heating, and evaporating the solution. When Humphry Davy first isolated the pure element using electrolysis in 1807, he named it potassium, which he derived from the word potash. The symbol K stems from kali, itself from the root word alkali, which in turn comes from al-qalyah 'plant ashes'. In 1797, the German chemist Martin Klaproth discovered "potash" in the minerals leucite and lepidolite, and realized that "potash" was not a product of plant growth but actually contained a new element, which he proposed calling kali. In 1807, Humphry Davy produced the element via electrolysis; in 1809, Ludwig Wilhelm Gilbert proposed the name Kalium for Davy's "potassium". In 1814, the Swedish chemist Berzelius advocated the name kalium for potassium, with the chemical symbol K.
The English- and French-speaking countries adopted the name potassium, which was favored by Davy and the French chemists Joseph Louis Gay-Lussac and Louis Jacques Thénard, whereas the other Germanic countries adopted Gilbert and Klaproth's name Kalium. The "Gold Book" of the International Union of Pure and Applied Chemistry has designated the official chemical symbol as K.
Properties
Physical
Potassium is the second least dense metal after lithium. It is a soft solid with a low melting point, and can be easily cut with a knife. Potassium is silvery in appearance, but it begins to tarnish toward gray immediately on exposure to air. In a flame test, potassium and its compounds emit a lilac color with a peak emission wavelength of 766.5 nanometers. Neutral potassium atoms have 19 electrons, one more than the configuration of the noble gas argon. Because of its low first ionization energy of 418.8 kJ/mol, the potassium atom is much more likely to lose the last electron and acquire a positive charge, although negatively charged alkalide ions are not impossible. In contrast, the second ionization energy is very high (3052 kJ/mol).
Chemical
Potassium reacts with the oxygen, water, and carbon dioxide components of air. With oxygen it forms potassium peroxide. With water potassium forms potassium hydroxide (KOH). The reaction of potassium with water can be violently exothermic, especially since the coproduced hydrogen gas can ignite. Because of this, potassium and the liquid sodium-potassium (NaK) alloy are potent desiccants, although they are no longer used as such.
Compounds
Four oxides of potassium are well studied: potassium oxide (K2O), potassium peroxide (K2O2), potassium superoxide (KO2) and potassium ozonide (KO3). The binary potassium-oxygen compounds react with water forming KOH. KOH is a strong base. Illustrating its hydrophilic character, as much as 1.21 kg of KOH can dissolve in a single liter of water. Anhydrous KOH is rarely encountered. KOH reacts readily with carbon dioxide (CO2) to produce potassium carbonate (K2CO3), and in principle could be used to remove traces of the gas from air. Like the closely related sodium hydroxide, KOH reacts with fats to produce soaps. In general, potassium compounds are ionic and, owing to the high hydration energy of the K+ ion, have excellent water solubility. The main species in water solution are the aquo complexes [K(H2O)n]+ where n = 6 and 7. Potassium heptafluorotantalate (K2TaF7) is an intermediate in the purification of tantalum from the otherwise persistent contaminant niobium. Organopotassium compounds illustrate nonionic compounds of potassium. They feature highly polar covalent K–C bonds. Examples include benzyl potassium (KCH2C6H5). Potassium intercalates into graphite to give a variety of graphite intercalation compounds, including KC8.
Isotopes
There are 25 known isotopes of potassium, three of which occur naturally: 39K (93.3%), 40K (0.0117%), and 41K (6.7%) (by mole fraction). Naturally occurring 40K has a half-life of about 1.25 billion years. It decays to stable 40Ar by electron capture or positron emission (11.2%) or to stable 40Ca by beta decay (88.8%). The decay of 40K to 40Ar is the basis of a common method for dating rocks. The conventional K-Ar dating method depends on the assumption that the rocks contained no argon at the time of formation and that all the subsequent radiogenic argon (40Ar) was quantitatively retained. Minerals are dated by measurement of the concentration of potassium and the amount of radiogenic 40Ar that has accumulated.
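The K–Ar method just outlined reduces to a single age equation once a closed system is assumed (no initial argon, no later argon loss). A minimal Python sketch, taking the half-life of about 1.25 billion years and the 11.2% branch to argon-40 from the isotope data above; the measured ratio in the example is invented for illustration:

import math

# Conventional K-Ar age from the measured ratio of radiogenic 40Ar to the 40K
# remaining in the sample, assuming no initial or lost argon.
HALF_LIFE_YR = 1.25e9   # half-life of potassium-40 in years
BRANCH_TO_AR = 0.112    # fraction of 40K decays yielding 40Ar

lam = math.log(2) / HALF_LIFE_YR  # total decay constant (per year)

def k_ar_age(ar40_per_k40):
    return math.log(1.0 + ar40_per_k40 / BRANCH_TO_AR) / lam

# Example: radiogenic 40Ar equal to 5% of the remaining 40K.
print(f"{k_ar_age(0.05) / 1e6:.0f} million years")  # about 666 million years

The older the sample, the larger the accumulated argon-to-potassium ratio, which is why the method works best on potassium-rich minerals that retain argon well.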
The minerals best suited for dating include biotite, muscovite, metamorphic hornblende, and volcanic feldspar; whole rock samples from volcanic flows and shallow intrusives can also be dated if they are unaltered. Apart from dating, potassium isotopes have been used as tracers in studies of weathering and for nutrient cycling studies, because potassium is a macronutrient required for life on Earth. 40K occurs in natural potassium (and thus in some commercial salt substitutes) in sufficient quantity that large bags of those substitutes can be used as a radioactive source for classroom demonstrations. 40K is the radioisotope with the largest abundance in the human body. In healthy animals and people, 40K represents the largest source of radioactivity, greater even than carbon-14. In a human body of 70 kg, about 4,400 nuclei of 40K decay per second. The activity of natural potassium is 31 Bq/g.
History
Potash
Potash is primarily a mixture of potassium salts because plants have little or no sodium content, and the rest of a plant's major mineral content consists of calcium salts of relatively low solubility in water. While potash has been used since ancient times, its composition was not understood. Georg Ernst Stahl obtained experimental evidence that led him to suggest the fundamental difference of sodium and potassium salts in 1702, and Henri Louis Duhamel du Monceau was able to prove this difference in 1736. The exact chemical composition of potassium and sodium compounds, and the status of potassium and sodium as chemical elements, was not known then, and thus Antoine Lavoisier did not include either alkali in his list of chemical elements in 1789. For a long time the only significant applications for potash were the production of glass, bleach, soap, and gunpowder as potassium nitrate. Potassium soaps from animal fats and vegetable oils were especially prized because they tend to be more water-soluble and of softer texture, and are therefore known as soft soaps. The discovery by Justus Liebig in 1840 that potassium is a necessary element for plants and that most types of soil lack potassium caused a steep rise in demand for potassium salts. Wood-ash from fir trees was initially used as a potassium salt source for fertilizer, but, with the discovery in 1868 of mineral deposits containing potassium chloride near Staßfurt, Germany, the production of potassium-containing fertilizers began at an industrial scale. Other potash deposits were discovered, and by the 1960s Canada had become the dominant producer.
Metal
Potassium metal was first isolated in 1807 by Humphry Davy, who derived it by electrolysis of molten caustic potash (KOH) with the newly discovered voltaic pile. Potassium was the first metal that was isolated by electrolysis. Later in the same year, Davy reported extraction of the metal sodium from a mineral derivative (caustic soda, NaOH, or lye) rather than a plant salt, by a similar technique, demonstrating that the elements, and thus the salts, are different. Although the production of potassium and sodium metal should have shown that both are elements, it took some time before this view was universally accepted. Because of the sensitivity of potassium to water and air, air-free techniques are normally employed for handling the element. It is unreactive toward nitrogen and saturated hydrocarbons such as mineral oil or kerosene. It readily dissolves in liquid ammonia, up to 480 g per 1000 g of ammonia at 0°C.
Depending on the concentration, the ammonia solutions are blue to yellow, and their electrical conductivity is similar to that of liquid metals. Potassium slowly reacts with ammonia to form , but this reaction is accelerated by minute amounts of transition metal salts. Because it can reduce the salts to the metal, potassium is often used as the reductant in the preparation of finely divided metals from their salts by the Rieke method. Illustrative is the preparation of magnesium: Occurrence Potassium is formed in supernovae by nucleosynthesis from lighter atoms. Potassium is principally created in Type II supernovae via an explosive oxygen-burning process. These are nuclear fusion reactions, not to be confused with chemical burning of potassium in oxygen. is also formed in nucleosynthesis and the neon burning process. Potassium is the 20th most abundant element in the solar system and the 17th most abundant element by weight in the Earth. It makes up about 2.6% of the weight of the Earth's crust and is the seventh most abundant element in the crust. The potassium concentration in seawater is 0.39g/L (0.039 wt/v%), about one twenty-seventh the concentration of sodium. Geology Elemental potassium does not occur in nature because of its high reactivity. It reacts violently with water and also reacts with oxygen. Orthoclase (potassium feldspar) is a common rock-forming mineral. Granite for example contains 5% potassium, which is well above the average in the Earth's crust. Sylvite (KCl), carnallite (), kainite () and langbeinite () are the minerals found in large evaporite deposits worldwide. The deposits often show layers starting with the least soluble at the bottom and the most soluble on top. Deposits of niter (potassium nitrate) are formed by decomposition of organic material in contact with atmosphere, mostly in caves; because of the good water solubility of niter the formation of larger deposits requires special environmental conditions. Commercial production Mining Potassium salts such as carnallite, langbeinite, polyhalite, and sylvite form extensive evaporite deposits in ancient lake bottoms and seabeds, making extraction of potassium salts in these environments commercially viable. The principal source of potassium – potash – is mined in Canada, Russia, Belarus, Kazakhstan, Germany, Israel, the U.S., Jordan, and other places around the world. The first mined deposits were located near Staßfurt, Germany, but the deposits span from Great Britain over Germany into Poland. They are located in the Zechstein and were deposited in the Middle to Late Permian. The largest deposits ever found lie below the surface of the Canadian province of Saskatchewan. The deposits are located in the Elk Point Group produced in the Middle Devonian. Saskatchewan, where several large mines have operated since the 1960s pioneered the technique of freezing of wet sands (the Blairmore formation) to drive mine shafts through them. The main potash mining company in Saskatchewan until its merge was the Potash Corporation of Saskatchewan, now Nutrien. The water of the Dead Sea is used by Israel and Jordan as a source of potash, while the concentration in normal oceans is too low for commercial production at current prices. Chemical extraction Several methods are used to separate potassium salts from sodium and magnesium compounds. The most-used method is fractional precipitation using the solubility differences of the salts. Electrostatic separation of the ground salt mixture is also used in some mines. 
The resulting sodium and magnesium waste is either stored underground or piled up in slag heaps. Most of the mined potassium mineral ends up as potassium chloride after processing. The mineral industry refers to potassium chloride either as potash, muriate of potash, or simply MOP. Pure potassium metal can be isolated by electrolysis of its hydroxide in a process that has changed little since it was first used by Humphry Davy in 1807. Although the electrolysis process was developed and used in industrial scale in the 1920s, the thermal method by reacting sodium with potassium chloride in a chemical equilibrium reaction became the dominant method in the 1950s. Na + KCl → NaCl + K The production of sodium potassium alloys is accomplished by changing the reaction time and the amount of sodium used in the reaction. The Griesheimer process employing the reaction of potassium fluoride with calcium carbide was also used to produce potassium. Reagent-grade potassium metal costs about $10.00/pound ($22/kg) in 2010 when purchased by the tonne. Lower purity metal is considerably cheaper. The market is volatile because long-term storage of the metal is difficult. It must be stored in a dry inert gas atmosphere or anhydrous mineral oil to prevent the formation of a surface layer of potassium superoxide, a pressure-sensitive explosive that detonates when scratched. The resulting explosion often starts a fire difficult to extinguish. Cation identification Potassium is now quantified by ionization techniques, but at one time it was quantitated by gravimetric analysis. Reagents used to precipitate potassium salts include sodium tetraphenylborate, hexachloroplatinic acid, and sodium cobaltinitrite into respectively potassium tetraphenylborate, potassium hexachloroplatinate, and potassium cobaltinitrite. The reaction with sodium cobaltinitrite is illustrative: The potassium cobaltinitrite is obtained as a yellow solid. Commercial uses Fertilizer Potassium ions are an essential component of plant nutrition and are found in most soil types. They are used as a fertilizer in agriculture, horticulture, and hydroponic culture in the form of chloride (KCl), sulfate (), or nitrate (), representing the 'K' in 'NPK'. Agricultural fertilizers consume 95% of global potassium chemical production, and about 90% of this potassium is supplied as KCl. The potassium content of most plants ranges from 0.5% to 2% of the harvested weight of crops, conventionally expressed as amount of . Modern high-yield agriculture depends upon fertilizers to replace the potassium lost at harvest. Most agricultural fertilizers contain potassium chloride, while potassium sulfate is used for chloride-sensitive crops or crops needing higher sulfur content. The sulfate is produced mostly by decomposition of the complex minerals kainite () and langbeinite (). Only a very few fertilizers contain potassium nitrate. In 2005, about 93% of world potassium production was consumed by the fertilizer industry. Furthermore, potassium can play a key role in nutrient cycling by controlling litter composition. Medical use Potassium citrate Potassium citrate is used to treat a kidney stone condition called renal tubular acidosis. Potassium chloride Potassium, in the form of potassium chloride is used as a medication to treat and prevent low blood potassium. Low blood potassium may occur due to vomiting, diarrhea, or certain medications. It is given by slow injection into a vein or by mouth. 
Food additives Potassium sodium tartrate (, Rochelle salt) is a main constituent of some varieties of baking powder; it is also used in the silvering of mirrors. Potassium bromate () is a strong oxidizer (E924), used to improve dough strength and rise height. Potassium bisulfite () is used as a food preservative, for example in wine and beer-making (but not in meats). It is also used to bleach textiles and straw, and in the tanning of leathers. Industrial Major potassium chemicals are potassium hydroxide, potassium carbonate, potassium sulfate, and potassium chloride. Megatons of these compounds are produced annually. KOH is a strong base, which is used in industry to neutralize strong and weak acids, to control pH and to manufacture potassium salts. It is also used to saponify fats and oils, in industrial cleaners, and in hydrolysis reactions, for example of esters. Potassium nitrate () or saltpeter is obtained from natural sources such as guano and evaporites or manufactured via the Haber process; it is the oxidant in gunpowder (black powder) and an important agricultural fertilizer. Potassium cyanide (KCN) is used industrially to dissolve copper and precious metals, in particular silver and gold, by forming complexes. Its applications include gold mining, electroplating, and electroforming of these metals; it is also used in organic synthesis to make nitriles. Potassium carbonate ( or potash) is used in the manufacture of glass, soap, color TV tubes, fluorescent lamps, textile dyes and pigments. Potassium permanganate () is an oxidizing, bleaching and purification substance and is used for production of saccharin. Potassium chlorate () is added to matches and explosives. Potassium bromide (KBr) was formerly used as a sedative and in photography. While potassium chromate () is used in the manufacture of a host of different commercial products such as inks, dyes, wood stains (by reacting with the tannic acid in wood), explosives, fireworks, fly paper, and safety matches, as well as in the tanning of leather, all of these uses are due to the chemistry of the chromate ion rather than to that of the potassium ion. Niche uses There are thousands of uses of various potassium compounds. One example is potassium superoxide, , an orange solid that acts as a portable source of oxygen and a carbon dioxide absorber. It is widely used in respiration systems in mines, submarines and spacecraft as it takes less volume than the gaseous oxygen. Another example is potassium cobaltinitrite, , which is used as artist's pigment under the name of Aureolin or Cobalt Yellow. The stable isotopes of potassium can be laser cooled and used to probe fundamental and technological problems in quantum physics. The two bosonic isotopes possess convenient Feshbach resonances to enable studies requiring tunable interactions, while is one of only two stable fermions amongst the alkali metals. Laboratory uses An alloy of sodium and potassium, NaK is a liquid used as a heat-transfer medium and a desiccant for producing dry and air-free solvents. It can also be used in reactive distillation. The ternary alloy of 12% Na, 47% K and 41% Cs has the lowest melting point of −78°C of any metallic compound. Metallic potassium is used in several types of magnetometers. Biological role Potassium is the eighth or ninth most common element by mass (0.2%) in the human body, so that a 60kg adult contains a total of about 120g of potassium. 
The body has about as much potassium as sulfur and chlorine, and only calcium and phosphorus are more abundant (with the exception of the ubiquitous CHON elements). Potassium ions are present in a wide variety of proteins and enzymes. Biochemical function Potassium levels influence multiple physiological processes, including: resting cellular-membrane potential and the propagation of action potentials in neuronal, muscular, and cardiac tissue; hormone secretion and action; vascular tone; systemic blood pressure control; gastrointestinal motility; acid–base homeostasis; glucose and insulin metabolism; mineralocorticoid action; renal concentrating ability; fluid and electrolyte balance; and local cortical monoaminergic (norepinephrine, serotonin, and dopamine) levels, and through them, sleep/wake balance and spontaneous activity. Potassium ions are larger than sodium ions, and because of this difference in size and in their electrostatic and chemical properties, ion channels and pumps in cell membranes can differentiate between the two ions, actively pumping or passively passing one of the two ions while blocking the other. Homeostasis Potassium homeostasis denotes the maintenance of the total body potassium content, plasma potassium level, and the ratio of the intracellular to extracellular potassium concentrations within narrow limits, in the face of pulsatile intake (meals), obligatory renal excretion, and shifts between intracellular and extracellular compartments. Plasma levels Plasma potassium is normally kept at 3.5 to 5.5 millimoles (mmol) [or milliequivalents (mEq)] per liter by multiple mechanisms. Levels outside this range are associated with an increasing rate of death from multiple causes, and some cardiac, kidney, and lung diseases progress more rapidly if serum potassium levels are not maintained within the normal range. An average meal of 40–50mmol presents the body with more potassium than is present in all plasma (20–25mmol). This surge causes the plasma potassium to rise by up to 10% before clearance by renal and extrarenal mechanisms. Hypokalemia, a deficiency of potassium in the plasma, can be fatal if severe. Common causes are increased gastrointestinal loss (vomiting, diarrhea) and increased renal loss (diuresis). Deficiency symptoms include muscle weakness, paralytic ileus, ECG abnormalities, decreased reflex response; and in severe cases, respiratory paralysis, alkalosis, and cardiac arrhythmia. Control mechanisms Potassium content in the plasma is tightly controlled by four basic mechanisms, which have various names and classifications. These are: a reactive negative-feedback system, a reactive feed-forward system, a predictive or circadian system, and an internal or cell membrane transport system. Collectively, the first three are sometimes termed the "external potassium homeostasis system"; and the first two, the "reactive potassium homeostasis system". The reactive negative-feedback system refers to the system that induces renal secretion of potassium in response to a rise in the plasma potassium (potassium ingestion, shift out of cells, or intravenous infusion). The reactive feed-forward system refers to an incompletely understood system that induces renal potassium secretion in response to potassium ingestion prior to any rise in the plasma potassium. This is probably initiated by gut cell potassium receptors that detect ingested potassium and trigger vagal afferent signals to the pituitary gland. The predictive or circadian system increases renal secretion of potassium during mealtime hours (e.g. 
daytime for humans, nighttime for rodents) independent of the presence, amount, or absence of potassium ingestion. It is mediated by a circadian oscillator in the suprachiasmatic nucleus of the brain (central clock), which causes the kidney (peripheral clock) to secrete potassium in this rhythmic circadian fashion. The ion transport system moves potassium across the cell membrane using two mechanisms. One is active and pumps sodium out of, and potassium into, the cell. The other is passive and allows potassium to leak out of the cell. Potassium and sodium cations influence fluid distribution between intracellular and extracellular compartments by osmotic forces. The movement of potassium and sodium through the cell membrane is mediated by the Na⁺/K⁺-ATPase pump. This ion pump uses ATP to pump three sodium ions out of the cell and two potassium ions into the cell, creating an electrochemical gradient and electromotive force across the cell membrane. The highly selective potassium ion channels (which are tetramers) are crucial for hyperpolarization inside neurons after an action potential is triggered, to cite one example. The most recently discovered potassium ion channel is KirBac3.1, which makes a total of five potassium ion channels (KcsA, KirBac1.1, KirBac3.1, KvAP, and MthK) with a determined structure. All five are from prokaryotic species. Renal filtration, reabsorption, and excretion Renal handling of potassium is closely connected to sodium handling. Potassium is the major cation (positive ion) inside animal cells (150mmol/L, 4.8g/L), while sodium is the major cation of extracellular fluid (150mmol/L, 3.345g/L). In the kidneys, about 180liters of plasma is filtered through the glomeruli and into the renal tubules per day. This filtering involves about 600g of sodium and 33g of potassium (roughly 180 liters per day multiplied by plasma concentrations of a few grams of sodium and a few tenths of a gram of potassium per liter). Since only 1–10g of sodium and 1–4g of potassium are likely to be replaced by diet, renal filtering must efficiently reabsorb the remainder from the plasma. Sodium is reabsorbed to maintain extracellular volume, osmotic pressure, and serum sodium concentration within narrow limits. Potassium is reabsorbed to maintain serum potassium concentration within narrow limits. Sodium pumps in the renal tubules operate to reabsorb sodium. Potassium must be conserved, but because the amount of potassium in the blood plasma is very small and the pool of potassium in the cells is about 30 times as large, the situation is not so critical for potassium. Since potassium is moved passively in counterflow to sodium in response to an apparent (but not actual) Donnan equilibrium, the urine can never sink below the concentration of potassium in serum except sometimes by actively excreting water at the end of the processing. Potassium is excreted twice and reabsorbed three times before the urine reaches the collecting tubules. At that point, urine usually has about the same potassium concentration as plasma. At the end of the processing, potassium is secreted one more time if the serum levels are too high. With no potassium intake, it is excreted at about 200mg per day until, in about a week, potassium in the serum declines to a mildly deficient level of 3.0–3.5mmol/L. If potassium is still withheld, the concentration continues to fall until a severe deficiency causes eventual death. The potassium moves passively through pores in the cell membrane. When ions move through ion transporters (pumps), there is a gate in the pumps on both sides of the cell membrane, and only one gate can be open at once. 
As a result, approximately 100 ions are forced through per second. Ion channels have only one gate, and each channel lets only one kind of ion stream through, at 10 million to 100 million ions per second. Calcium is required to open the pores, although calcium may work in reverse by blocking at least one of the pores. Carbonyl groups on the amino acids lining the pore mimic the water hydration that takes place in solution, by the nature of the electrostatic charges on four carbonyl groups inside the pore. Nutrition Dietary recommendations North America The U.S. National Academy of Medicine (NAM), on behalf of both the U.S. and Canada, sets Dietary Reference Intakes, including Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs), or Adequate Intakes (AIs) for when there is not sufficient information to set EARs and RDAs. For both males and females under 9 years of age, the AIs for potassium are: 400mg of potassium for 0 to 6-month-old infants, 860mg of potassium for 7 to 12-month-old infants, 2,000mg of potassium for 1 to 3-year-old children, and 2,300mg of potassium for 4 to 8-year-old children. For males 9 years of age and older, the AIs for potassium are: 2,500mg of potassium for 9 to 13-year-old males, 3,000mg of potassium for 14 to 18-year-old males, and 3,400mg for males that are 19 years of age and older. For females 9 years of age and older, the AIs for potassium are: 2,300mg of potassium for 9 to 18-year-old females, and 2,600mg of potassium for females that are 19 years of age and older. For pregnant and lactating females, the AIs for potassium are: 2,600mg of potassium for 14 to 18-year-old pregnant females, 2,900mg for pregnant females that are 19 years of age and older; furthermore, 2,500mg of potassium for 14 to 18-year-old lactating females, and 2,800mg for lactating females that are 19 years of age and older. As for safety, the NAM also sets tolerable upper intake levels (ULs) for vitamins and minerals, but for potassium the evidence was insufficient, so no UL was established. As of 2004, most American adults consume less than 3,000mg. Europe Likewise, in the European Union, in particular in Germany and Italy, insufficient potassium intake is somewhat common. The National Health Service in the United Kingdom recommends a similar intake, saying that "adults (19 to 64 years) need per day" and that excess amounts may cause health problems such as stomach pain and diarrhea. Food sources Potassium is present in all fruits, vegetables, meat and fish. Foods with high potassium concentrations include yam, parsley, dried apricots, milk, chocolate, all nuts (especially almonds and pistachios), potatoes, bamboo shoots, bananas, avocados, coconut water, soybeans, and bran. The United States Department of Agriculture also lists tomato paste, orange juice, beet greens, white beans, plantains, and many other dietary sources of potassium, ranked in descending order according to potassium content. A day's worth of potassium is in 5 plantains or 11 bananas. Deficient intake Mild hypokalemia does not cause distinct symptoms, acting instead as a risk factor for hypertension and cardiac arrhythmia. Severe hypokalemia usually presents with hypertension, arrhythmia, muscle cramps, fatigue, weakness and constipation. Causes of hypokalemia include vomiting, diarrhea, medications like furosemide and steroids, dialysis, diabetes insipidus, hyperaldosteronism, and hypomagnesemia. 
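The adequate-intake values listed under Dietary recommendations above amount to a small lookup table. The sketch below encodes just the adult figures quoted there and checks an example day's intake against them. It is a minimal illustration: the dictionary and function names are invented here, the age banding is simplified to non-pregnant, non-lactating adults, and the numbers are copied from the text above rather than from any primary source.

# Adequate Intake (AI) for potassium, mg/day, for non-pregnant, non-lactating
# adults aged 19 and over, as quoted in the text above.
ADULT_AI_MG = {
    "male": 3400,
    "female": 2600,
}

def meets_adult_ai(sex: str, intake_mg: float) -> bool:
    """Return True if a day's potassium intake reaches the adult AI."""
    return intake_mg >= ADULT_AI_MG[sex]

if __name__ == "__main__":
    # Example: about 2,800 mg of potassium from food over one day.
    print(meets_adult_ai("male", 2800))    # False: below the 3,400 mg AI
    print(meets_adult_ai("female", 2800))  # True: above the 2,600 mg AI

The same pattern extends straightforwardly to the age-, pregnancy-, and lactation-specific values listed above.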
Supplementation Supplements of potassium are most widely used in conjunction with diuretics that block reabsorption of sodium and water upstream from the distal tubule (thiazides and loop diuretics), because this promotes increased distal tubular potassium secretion, with resultant increased potassium excretion. A variety of prescription and over-the-counter supplements are available. Potassium chloride may be dissolved in water, but the salty/bitter taste makes liquid supplements unpalatable. Potassium is also available in tablets or capsules, which are formulated to allow potassium to leach slowly out of a matrix, since very high concentrations of potassium ion that occur adjacent to a solid tablet can injure the gastric or intestinal mucosa. For this reason, non-prescription potassium pills are limited by law in the US to a maximum of 99mg of potassium. Potassium supplements can also be supplied as different salts, such as the citrate or the chloride, to achieve specific clinical effects. Potassium supplements may be employed to mitigate the impact of hypertension, thereby reducing cardiovascular risk. Potassium chloride and potassium bicarbonate may be useful to control mild hypertension. In 2020, potassium was the 33rd most commonly prescribed medication in the U.S., with more than 17million prescriptions. Potassium supplementation has been shown to reduce both systolic and diastolic blood pressure in individuals with essential hypertension. Additionally, potassium supplements may be employed with the aim of preventing the formation of kidney stones, a condition that can lead to renal complications if left untreated. Low potassium levels can lead to decreased calcium reabsorption in the kidneys, increasing the risk of elevated urine calcium and the formation of kidney stones. By maintaining adequate potassium levels, this risk can be reduced. The mechanism of action of potassium involves various types of transporters and channels that facilitate its movement across cell membranes. This process can lead to an increase in the pumping of hydrogen ions, which in turn can escalate the production of gastric acid, potentially contributing to the development of gastric ulcers. Potassium has a role in bone health. It contributes to the acid-base equilibrium in the body and helps protect bone tissue. Potassium salts produce an alkaline component that can aid in maintaining bone health. For individuals with diabetes, potassium supplementation may be necessary, particularly for those with type 2 diabetes. Potassium is essential for the secretion of insulin by pancreatic beta cells, which helps regulate glucose levels. Without sufficient potassium, insulin secretion is compromised, leading to hyperglycemia and worsening diabetes. Excessive potassium intake can have adverse effects, such as gastrointestinal discomfort and disturbances in heart rhythm. Potassium supplementation can have side effects related to ulceration, particularly in relation to peptic ulcer disease. Potassium channels have the potential to increase gastric acid secretion, which can lead to an increased risk of ulcerations. Medications used for peptic ulcer disease, known as "proton pump inhibitors", work by inhibiting the H+/K+-ATPase proton pump of the parietal cell. This inhibition reduces the secretion of hydrochloric acid from the parietal cell into the stomach, thereby decreasing acid production and lowering the risk of ulcers. 
Nicorandil, a drug used for the treatment of ischemic heart disease, acts both as a nitrate and as an opener of ATP-sensitive potassium channels, and as a result, it has been associated with side effects such as GI, oral, and anal ulcers. Prolonged and chronic use of potassium supplements has been linked to more severe side effects, including ulcers outside of the gastrointestinal (GI) tract. Close monitoring is necessary for patients who are also taking angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, or potassium-sparing diuretics. Detection by taste buds Potassium can be detected by taste because it triggers three of the five types of taste sensations, according to concentration. Dilute solutions of potassium ions taste sweet, allowing moderate concentrations in milk and juices, while higher concentrations become increasingly bitter/alkaline, and finally also salty to the taste. The combined bitterness and saltiness of high-potassium solutions makes high-dose potassium supplementation by liquid drinks a palatability challenge. Precautions Potassium metal can react violently with water, producing KOH and hydrogen gas. This reaction is exothermic and releases sufficient heat to ignite the resulting hydrogen in the presence of oxygen. Finely powdered potassium ignites in air at room temperature. The bulk metal ignites in air if heated. Because its density is 0.89g/cm3, burning potassium floats on water, which exposes it to atmospheric oxygen. Many common fire extinguishing agents, including water, either are ineffective or make a potassium fire worse. Nitrogen, argon, sodium chloride (table salt), sodium carbonate (soda ash), and silicon dioxide (sand) are effective if they are dry. Some Class D dry powder extinguishers designed for metal fires are also effective. These agents deprive the fire of oxygen and cool the potassium metal. During storage, potassium forms peroxides and superoxides. These peroxides may react violently with organic compounds such as oils. Both peroxides and superoxides may react explosively with metallic potassium. Because potassium reacts with water vapor in the air, it is usually stored under anhydrous mineral oil or kerosene. Unlike lithium and sodium, potassium should not be stored under oil for longer than six months, unless in an inert (oxygen-free) atmosphere or under vacuum. After prolonged storage in air, dangerous shock-sensitive peroxides can form on the metal and under the lid of the container, and can detonate upon opening. Ingestion of large amounts of potassium compounds can lead to hyperkalemia, strongly influencing the cardiovascular system. Potassium chloride is used in the U.S. for lethal injection executions.
Physical sciences
Chemical elements_2
null
23084
https://en.wikipedia.org/wiki/Paleontology
Paleontology
Paleontology ( ), also spelled palaeontology or palæontology, is the scientific study of life that existed prior to the start of the Holocene epoch (roughly 11,700 years before present). It includes the study of fossils to classify organisms and study their interactions with each other and their environments (their paleoecology). Paleontological observations have been documented as far back as the 5th century BC. The science became established in the 18th century as a result of Georges Cuvier's work on comparative anatomy, and developed rapidly in the 19th century. The term has been used since 1822 formed from Greek (, "old, ancient"), (, (gen. ), "being, creature"), and (, "speech, thought, study"). Paleontology lies on the border between biology and geology, but it differs from archaeology in that it excludes the study of anatomically modern humans. It now uses techniques drawn from a wide range of sciences, including biochemistry, mathematics, and engineering. Use of all these techniques has enabled paleontologists to discover much of the evolutionary history of life, almost back to when Earth became capable of supporting life, nearly 4 billion years ago. As knowledge has increased, paleontology has developed specialised sub-divisions, some of which focus on different types of fossil organisms while others study ecology and environmental history, such as ancient climates. Body fossils and trace fossils are the principal types of evidence about ancient life, and geochemical evidence has helped to decipher the evolution of life before there were organisms large enough to leave body fossils. Estimating the dates of these remains is essential but difficult: sometimes adjacent rock layers allow radiometric dating, which provides absolute dates that are accurate to within 0.5%, but more often paleontologists have to rely on relative dating by solving the "jigsaw puzzles" of biostratigraphy (arrangement of rock layers from youngest to oldest). Classifying ancient organisms is also difficult, as many do not fit well into the Linnaean taxonomy classifying living organisms, and paleontologists more often use cladistics to draw up evolutionary "family trees". The final quarter of the 20th century saw the development of molecular phylogenetics, which investigates how closely organisms are related by measuring the similarity of the DNA in their genomes. Molecular phylogenetics has also been used to estimate the dates when species diverged, but there is controversy about the reliability of the molecular clock on which such estimates depend. Overview The simplest definition of "paleontology" is "the study of ancient life". The field seeks information about several aspects of past organisms: "their identity and origin, their environment and evolution, and what they can tell us about the Earth's organic and inorganic past". Historical science William Whewell (1794–1866) classified paleontology as one of the historical sciences, along with archaeology, geology, astronomy, cosmology, philology and history itself: paleontology aims to describe phenomena of the past and to reconstruct their causes. Hence it has three main elements: description of past phenomena; developing a general theory about the causes of various types of change; and applying those theories to specific facts. 
When trying to explain the past, paleontologists and other historical scientists often construct a set of one or more hypotheses about the causes and then look for a "smoking gun", a piece of evidence that strongly accords with one hypothesis over any others. Sometimes researchers discover a "smoking gun" by a fortunate accident during other research. For example, the 1980 discovery by Luis and Walter Alvarez of iridium, a mainly extraterrestrial metal, in the Cretaceous–Paleogene boundary layer made asteroid impact the most favored explanation for the Cretaceous–Paleogene extinction event – although debate continues about the contribution of volcanism. A complementary approach to developing scientific knowledge, experimental science, is often said to work by conducting experiments to disprove hypotheses about the workings and causes of natural phenomena. This approach cannot prove a hypothesis, since some later experiment may disprove it, but the accumulation of failures to disprove is often compelling evidence in favor. However, when confronted with totally unexpected phenomena, such as the first evidence for invisible radiation, experimental scientists often use the same approach as historical scientists: construct a set of hypotheses about the causes and then look for a "smoking gun". Related sciences Paleontology lies between biology and geology since it focuses on the record of past life, but its main source of evidence is fossils in rocks. For historical reasons, paleontology is part of the geology department at many universities: in the 19th and early 20th centuries, geology departments found fossil evidence important for dating rocks, while biology departments showed little interest. Paleontology also has some overlap with archaeology, which primarily works with objects made by humans and with human remains, while paleontologists are interested in the characteristics and evolution of humans as a species. When dealing with evidence about humans, archaeologists and paleontologists may work together – for example paleontologists might identify animal or plant fossils around an archaeological site, to discover the people who lived there, and what they ate; or they might analyze the climate at the time of habitation. In addition, paleontology often borrows techniques from other sciences, including biology, osteology, ecology, chemistry, physics and mathematics. For example, geochemical signatures from rocks may help to discover when life first arose on Earth, and analyses of carbon isotope ratios may help to identify climate changes and even to explain major transitions such as the Permian–Triassic extinction event. A relatively recent discipline, molecular phylogenetics, compares the DNA and RNA of modern organisms to re-construct the "family trees" of their evolutionary ancestors. It has also been used to estimate the dates of important evolutionary developments, although this approach is controversial because of doubts about the reliability of the "molecular clock". Techniques from engineering have been used to analyse how the bodies of ancient organisms might have worked, for example the running speed and bite strength of Tyrannosaurus, or the flight mechanics of Microraptor. It is relatively commonplace to study the internal details of fossils using X-ray microtomography. Paleontology, biology, archaeology, and paleoneurobiology combine to study endocranial casts (endocasts) of species related to humans to clarify the evolution of the human brain. 
Paleontology even contributes to astrobiology, the investigation of possible life on other planets, by developing models of how life may have arisen and by providing techniques for detecting evidence of life. Subdivisions As knowledge has increased, paleontology has developed specialised subdivisions. Vertebrate paleontology concentrates on fossils from the earliest fish to the immediate ancestors of modern mammals. Invertebrate paleontology deals with fossils such as molluscs, arthropods, annelid worms and echinoderms. Paleobotany studies fossil plants, algae, and fungi. Palynology, the study of pollen and spores produced by land plants and protists, straddles paleontology and botany, as it deals with both living and fossil organisms. Micropaleontology deals with microscopic fossil organisms of all kinds. Instead of focusing on individual organisms, paleoecology examines the interactions between different ancient organisms, such as their food chains, and the two-way interactions with their environments. For example, the development of oxygenic photosynthesis by bacteria caused the oxygenation of the atmosphere and hugely increased the productivity and diversity of ecosystems. Together, these led to the evolution of complex eukaryotic cells, from which all multicellular organisms are built. Paleoclimatology, although sometimes treated as part of paleoecology, focuses more on the history of Earth's climate and the mechanisms that have changed it – which have sometimes included evolutionary developments, for example the rapid expansion of land plants in the Devonian period removed more carbon dioxide from the atmosphere, reducing the greenhouse effect and thus helping to cause an ice age in the Carboniferous period. Biostratigraphy, the use of fossils to work out the chronological order in which rocks were formed, is useful to both paleontologists and geologists. Biogeography studies the spatial distribution of organisms, and is also linked to geology, which explains how Earth's geography has changed over time. History Although paleontology became established around 1800, earlier thinkers had noticed aspects of the fossil record. The ancient Greek philosopher Xenophanes (570–480 BCE) concluded from fossil sea shells that some areas of land were once under water. During the Middle Ages the Persian naturalist Ibn Sina, known as Avicenna in Europe, discussed fossils and proposed a theory of petrifying fluids on which Albert of Saxony elaborated in the 14th century. The Chinese naturalist Shen Kuo (1031–1095) proposed a theory of climate change based on the presence of petrified bamboo in regions that in his time were too dry for bamboo. In early modern Europe, the systematic study of fossils emerged as an integral part of the changes in natural philosophy that occurred during the Age of Reason. In the Italian Renaissance, Leonardo da Vinci made various significant contributions to the field as well as depicted numerous fossils. Leonardo's contributions are central to the history of paleontology because he established a line of continuity between the two main branches of paleontology: ichnology and body fossil paleontology. He identified the following: The biogenic nature of ichnofossils, i.e. 
ichnofossils were structures left by living organisms; The utility of ichnofossils as paleoenvironmental tools: certain ichnofossils show the marine origin of rock strata; The importance of the neoichnological approach: recent traces are a key to understanding ichnofossils; The independence and complementary evidence of ichnofossils and body fossils: ichnofossils are distinct from body fossils, but can be integrated with body fossils to provide paleontological information. At the end of the 18th century Georges Cuvier's work established comparative anatomy as a scientific discipline and, by proving that some fossil animals resembled no living ones, demonstrated that animals could become extinct, leading to the emergence of paleontology. The expanding knowledge of the fossil record also played an increasing role in the development of geology, particularly stratigraphy. In the early 19th century, Cuvier proved that the different levels of deposits represented different time periods. The surface-level deposits in the Americas contained later mammals like the megatheriid ground sloth Megatherium and the mammutid proboscidean Mammut (later known informally as a "mastodon"), which were some of the earliest-named fossil mammal genera with official taxonomic authorities. Today they are known to date to the Neogene-Quaternary. In deeper-level deposits in western Europe are earlier mammals such as the palaeothere perissodactyl Palaeotherium and the anoplotheriid artiodactyl Anoplotherium, both of which were described soon after the former two genera and which today are known to date to the Paleogene period. Cuvier figured out that even older than the two levels of deposits with extinct large mammals is one that contained an extinct "crocodile-like" marine reptile, which eventually came to be known as the mosasaurid Mosasaurus of the Cretaceous period. The first half of the 19th century saw geological and paleontological activity become increasingly well organised with the growth of geologic societies and museums and an increasing number of professional geologists and fossil specialists. Interest increased for reasons that were not purely scientific, as geology and paleontology helped industrialists to find and exploit natural resources such as coal. This contributed to a rapid increase in knowledge about the history of life on Earth and to progress in the definition of the geologic time scale, largely based on fossil evidence. Although she was rarely recognised by the scientific community, Mary Anning was a significant contributor to the field of palaeontology during this period; she uncovered multiple novel Mesozoic reptile fossils and deduced that what were then known as bezoar stones are in fact fossilised faeces. In 1822 Henri Marie Ducrotay de Blainville, editor of Journal de Physique, coined the word "palaeontology" to refer to the study of ancient living organisms through fossils. As knowledge of life's history continued to improve, it became increasingly obvious that there had been some kind of successive order to the development of life. This encouraged early evolutionary theories on the transmutation of species. After Charles Darwin published Origin of Species in 1859, much of the focus of paleontology shifted to understanding evolutionary paths, including human evolution, and evolutionary theory. The last half of the 19th century saw a tremendous expansion in paleontological activity, especially in North America. 
The trend continued in the 20th century with additional regions of the Earth being opened to systematic fossil collection. Fossils found in China near the end of the 20th century have been particularly important as they have provided new information about the earliest evolution of animals, early fish, dinosaurs and the evolution of birds. The last few decades of the 20th century saw a renewed interest in mass extinctions and their role in the evolution of life on Earth. There was also a renewed interest in the Cambrian explosion that apparently saw the development of the body plans of most animal phyla. The discovery of fossils of the Ediacaran biota and developments in paleobiology extended knowledge about the history of life back far before the Cambrian. Increasing awareness of Gregor Mendel's pioneering work in genetics led first to the development of population genetics and then in the mid-20th century to the modern evolutionary synthesis, which explains evolution as the outcome of events such as mutations and horizontal gene transfer, which provide genetic variation, with genetic drift and natural selection driving changes in this variation over time. Within the next few years the role and operation of DNA in genetic inheritance were discovered, leading to what is now known as the "Central Dogma" of molecular biology. In the 1960s molecular phylogenetics, the investigation of evolutionary "family trees" by techniques derived from biochemistry, began to make an impact, particularly when it was proposed that the human lineage had diverged from apes much more recently than was generally thought at the time. Although this early study compared proteins from apes and humans, most molecular phylogenetics research is now based on comparisons of RNA and DNA. Sources of evidence Body fossils Fossils of organisms' bodies are usually the most informative type of evidence. The most common types are wood, bones, and shells. Fossilisation is a rare event, and most fossils are destroyed by erosion or metamorphism before they can be observed. Hence the fossil record is very incomplete, increasingly so further back in time. Despite this, it is often adequate to illustrate the broader patterns of life's history. There are also biases in the fossil record: different environments are more favorable to the preservation of different types of organism or parts of organisms. Further, only the parts of organisms that were already mineralised are usually preserved, such as the shells of molluscs. Since most animal species are soft-bodied, they decay before they can become fossilised. As a result, although there are 30-plus phyla of living animals, two-thirds have never been found as fossils. Occasionally, unusual environments may preserve soft tissues. These lagerstätten allow paleontologists to examine the internal anatomy of animals that in other sediments are represented only by shells, spines, claws, etc. – if they are preserved at all. However, even lagerstätten present an incomplete picture of life at the time. The majority of organisms living at the time are probably not represented because lagerstätten are restricted to a narrow range of environments, e.g. where soft-bodied organisms can be preserved very quickly by events such as mudslides; and the exceptional events that cause quick burial make it difficult to study the normal environments of the animals. 
The sparseness of the fossil record means that organisms are expected to exist long before and after they are found in the fossil record – this is known as the Signor–Lipps effect. Trace fossils Trace fossils consist mainly of tracks and burrows, but also include coprolites (fossil feces) and marks left by feeding. Trace fossils are particularly significant because they represent a data source that is not limited to animals with easily fossilised hard parts, and they reflect organisms' behaviours. Also many traces date from significantly earlier than the body fossils of animals that are thought to have been capable of making them. Whilst exact assignment of trace fossils to their makers is generally impossible, traces may for example provide the earliest physical evidence of the appearance of moderately complex animals (comparable to earthworms). Geochemical observations Geochemical observations may help to deduce the global level of biological activity at a certain period, or the affinity of certain fossils. For example, geochemical features of rocks may reveal when life first arose on Earth, and may provide evidence of the presence of eukaryotic cells, the type from which all multicellular organisms are built. Analyses of carbon isotope ratios may help to explain major transitions such as the Permian–Triassic extinction event. Classifying ancient organisms (Caption of a simple example cladogram: warm-bloodedness evolved somewhere in the synapsid–mammal transition, and must also have evolved independently at another point in the tree – an example of convergent evolution.) Naming groups of organisms in a way that is clear and widely agreed is important, as some disputes in paleontology have been based just on misunderstandings over names. Linnaean taxonomy is commonly used for classifying living organisms, but runs into difficulties when dealing with newly discovered organisms that are significantly different from known ones. For example: it is hard to decide at what level to place a new higher-level grouping, e.g. genus or family or order; this is important since the Linnaean rules for naming groups are tied to their levels, and hence if a group is moved to a different level it must be renamed. Paleontologists generally use approaches based on cladistics, a technique for working out the evolutionary "family tree" of a set of organisms. It works by the logic that, if groups B and C have more similarities to each other than either has to group A, then B and C are more closely related to each other than either is to A. Characters that are compared may be anatomical, such as the presence of a notochord, or molecular, by comparing sequences of DNA or proteins. The result of a successful analysis is a hierarchy of clades – groups that share a common ancestor. Ideally the "family tree" has only two branches leading from each node ("junction"), but sometimes there is too little information to achieve this, and paleontologists have to make do with junctions that have several branches. The cladistic technique is sometimes fallible, as some features, such as wings or camera eyes, evolved more than once, convergently – this must be taken into account in analyses. Evolutionary developmental biology, commonly abbreviated to "Evo Devo", also helps paleontologists to produce "family trees", and understand fossils. For example, the embryological development of some modern brachiopods suggests that brachiopods may be descendants of the halkieriids, which became extinct in the Cambrian period. 
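As a toy illustration of the grouping logic described under cladistics above (if B and C share more character states with each other than either shares with A, then B and C are grouped together), the sketch below counts matching character states for three hypothetical taxa. The character matrix is invented purely for illustration; real cladistic analyses weigh shared derived characters and use parsimony or likelihood methods rather than raw similarity counts.

# Toy character matrix: 1 = character present, 0 = absent.
# Taxa and characters are hypothetical, chosen only to illustrate the logic.
characters = {
    "A": [1, 0, 0, 0, 1],
    "B": [1, 1, 1, 0, 0],
    "C": [1, 1, 1, 1, 0],
}

def shared_states(x: str, y: str) -> int:
    """Count characters that have the same state in both taxa."""
    return sum(1 for a, b in zip(characters[x], characters[y]) if a == b)

pairs = [("A", "B"), ("A", "C"), ("B", "C")]
scores = {pair: shared_states(*pair) for pair in pairs}
closest_pair = max(scores, key=scores.get)
print(scores)        # {('A', 'B'): 2, ('A', 'C'): 1, ('B', 'C'): 4}
print(closest_pair)  # ('B', 'C'): B and C are grouped as closest relatives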
Estimating the dates of organisms Paleontology seeks to map out how living things have changed through time. A substantial hurdle to this aim is the difficulty of working out how old fossils are. Beds that preserve fossils typically lack the radioactive elements needed for radiometric dating. This technique is our only means of giving rocks greater than about 50 million years old an absolute age, and can be accurate to within 0.5% or better. Although radiometric dating requires very careful laboratory work, its basic principle is simple: the rates at which various radioactive elements decay are known, and so the ratio of the radioactive element to the element into which it decays shows how long ago the radioactive element was incorporated into the rock. Radioactive elements are common only in rocks with a volcanic origin, and so the only fossil-bearing rocks that can be dated radiometrically are a few volcanic ash layers. Consequently, paleontologists must usually rely on stratigraphy to date fossils. Stratigraphy is the science of deciphering the "layer-cake" that is the sedimentary record, and has been compared to a jigsaw puzzle. Rocks normally form relatively horizontal layers, with each layer younger than the one underneath it. If a fossil is found between two layers whose ages are known, the fossil's age must lie between the two known ages. Because rock sequences are not continuous, but may be broken up by faults or periods of erosion, it is very difficult to match up rock beds that are not directly next to one another. However, fossils of species that survived for a relatively short time can be used to link up isolated rocks: this technique is called biostratigraphy. For instance, the conodont Eoplacognathus pseudoplanus has a short range in the Middle Ordovician period. If rocks of unknown age are found to have traces of E. pseudoplanus, they must have a mid-Ordovician age. Such index fossils must be distinctive, be globally distributed and have a short time range to be useful. However, misleading results are produced if the index fossils turn out to have longer fossil ranges than first thought. Stratigraphy and biostratigraphy can in general provide only relative dating (A was before B), which is often sufficient for studying evolution. However, this is difficult for some time periods, because of the problems involved in matching up rocks of the same age across different continents. Family-tree relationships may also help to narrow down the date when lineages first appeared. For instance, if fossils of B or C date to X million years ago and the calculated "family tree" says A was an ancestor of B and C, then A must have evolved more than X million years ago. It is also possible to estimate how long ago two living clades diverged – i.e. approximately how long ago their last common ancestor must have lived – by assuming that DNA mutations accumulate at a constant rate. These "molecular clocks", however, are fallible, and provide only a very approximate timing: for example, they are not sufficiently precise and reliable for estimating when the groups that feature in the Cambrian explosion first evolved, and estimates produced by different techniques may vary by a factor of two. History of life Earth formed about and, after a collision that formed the Moon about 40 million years later, may have cooled quickly enough to have oceans and an atmosphere about . There is evidence on the Moon of a Late Heavy Bombardment by asteroids from . 
If, as seems likely, such a bombardment struck Earth at the same time, the first atmosphere and oceans may have been stripped away. Paleontology traces the evolutionary history of life back to over , possibly as far as . The oldest clear evidence of life on Earth dates to , although there have been reports, often disputed, of fossil bacteria from and of geochemical evidence for the presence of life . Some scientists have proposed that life on Earth was "seeded" from elsewhere, but most research concentrates on various explanations of how life could have arisen independently on Earth. For about 2,000 million years microbial mats, multi-layered colonies of different bacteria, were the dominant life on Earth. The evolution of oxygenic photosynthesis enabled them to play the major role in the oxygenation of the atmosphere from about . This change in the atmosphere increased their effectiveness as nurseries of evolution. While eukaryotes, cells with complex internal structures, may have been present earlier, their evolution speeded up when they acquired the ability to transform oxygen from a poison to a powerful source of metabolic energy. This innovation may have come from primitive eukaryotes capturing oxygen-powered bacteria as endosymbionts and transforming them into organelles called mitochondria. The earliest evidence of complex eukaryotes with organelles (such as mitochondria) dates from . Multicellular life is composed only of eukaryotic cells, and the earliest evidence for it is the Francevillian Group Fossils from , although specialisation of cells for different functions first appears between (a possible fungus) and (a probable red alga). Sexual reproduction may be a prerequisite for specialisation of cells, as an asexual multicellular organism might be at risk of being taken over by rogue cells that retain the ability to reproduce. The earliest known animals are cnidarians from about , but these are so modern-looking that they must be descendants of earlier animals. Early fossils of animals are rare because they had not developed mineralised, easily fossilized hard parts until about . The earliest modern-looking bilaterian animals appear in the Early Cambrian, along with several "weird wonders" that bear little obvious resemblance to any modern animals. There is a long-running debate about whether this Cambrian explosion was truly a very rapid period of evolutionary experimentation; alternative views are that modern-looking animals began evolving earlier but fossils of their precursors have not yet been found, or that the "weird wonders" are evolutionary "aunts" and "cousins" of modern groups. Vertebrates remained a minor group until the first jawed fish appeared in the Late Ordovician. The spread of animals and plants from water to land required organisms to solve several problems, including protection against drying out and supporting themselves against gravity. The earliest evidence of land plants and land invertebrates date back to about and respectively. Those invertebrates, as indicated by their trace and body fossils, were shown to be arthropods known as euthycarcinoids. The lineage that produced land vertebrates evolved later but very rapidly between and ; recent discoveries have overturned earlier ideas about the history and driving forces behind their evolution. Land plants were so successful that their detritus caused an ecological crisis in the Late Devonian, until the evolution of fungi that could digest dead wood. 
During the Permian period, synapsids, including the ancestors of mammals, may have dominated land environments, but this ended with the Permian–Triassic extinction event , which came very close to wiping out all complex life. The extinctions were apparently fairly sudden, at least among vertebrates. During the slow recovery from this catastrophe a previously obscure group, archosaurs, became the most abundant and diverse terrestrial vertebrates. One archosaur group, the dinosaurs, were the dominant land vertebrates for the rest of the Mesozoic, and birds evolved from one group of dinosaurs. During this time mammals' ancestors survived only as small, mainly nocturnal insectivores, which may have accelerated the development of mammalian traits such as endothermy and hair. After the Cretaceous–Paleogene extinction event killed off all the dinosaurs except the birds, mammals increased rapidly in size and diversity, and some took to the air and the sea. Fossil evidence indicates that flowering plants appeared and rapidly diversified in the Early Cretaceous between and . Their rapid rise to dominance of terrestrial ecosystems is thought to have been propelled by coevolution with pollinating insects. Social insects appeared around the same time and, although they account for only small parts of the insect "family tree", now form over 50% of the total mass of all insects. Humans evolved from a lineage of upright-walking apes whose earliest fossils date from over . Although early members of this lineage had chimp-sized brains, about 25% as big as modern humans', there are signs of a steady increase in brain size after about . There is a long-running debate about whether modern humans are descendants of a single small population in Africa, which then migrated all over the world less than 200,000 years ago and replaced previous hominine species, or arose worldwide at the same time as a result of interbreeding. Mass extinctions Life on earth has suffered occasional mass extinctions at least since . Despite their disastrous effects, mass extinctions have sometimes accelerated the evolution of life on earth. When dominance of an ecological niche passes from one group of organisms to another, this is rarely because the new dominant group outcompetes the old, but usually because an extinction event allows a new group, which may possess an advantageous trait, to outlive the old and move into its niche. The fossil record appears to show that the rate of extinction is slowing down, with both the gaps between mass extinctions becoming longer and the average and background rates of extinction decreasing. However, it is not certain whether the actual rate of extinction has altered, since both of these observations could be explained in several ways: The oceans may have become more hospitable to life over the last 500 million years and less vulnerable to mass extinctions: dissolved oxygen became more widespread and penetrated to greater depths; the development of life on land reduced the run-off of nutrients and hence the risk of eutrophication and anoxic events; marine ecosystems became more diversified so that food chains were less likely to be disrupted. Reasonably complete fossils are very rare: most extinct organisms are represented only by partial fossils, and complete fossils are rarest in the oldest rocks. So paleontologists have mistakenly assigned parts of the same organism to different genera, which were often defined solely to accommodate these finds – the story of Anomalocaris is an example of this. 
The risk of this mistake is higher for older fossils because these are often unlike parts of any living organism. Many "superfluous" genera are represented by fragments that are not found again, and these "superfluous" genera are interpreted as becoming extinct very quickly. Biodiversity in the fossil record, which is "the number of distinct genera alive at any given time; that is, those whose first occurrence predates and whose last occurrence postdates that time", shows a different trend: a fairly swift rise from , a slight decline from , in which the devastating Permian–Triassic extinction event is an important factor, and a swift rise from to the present. Paleontology in the popular press Books catering to the general public on paleontology include: The Last Days of the Dinosaurs: An Asteroid Extinction, and the Beginning of our World, written by Riley Black; The Rise and Reign of the Mammals: A New History, from the Shadow of the Dinosaurs to Us, written by Steve Brusatte; and Otherlands: A Journey Through Earth's Extinct Worlds, written by Thomas Halliday
Biology and health sciences
Biology
null
4414414
https://en.wikipedia.org/wiki/Electride
Electride
An electride is an ionic compound in which an electron serves the role of the anion. Solutions of alkali metals in ammonia are electride salts. In the case of sodium, these blue solutions consist of [Na(NH3)6]+ and solvated electrons: Na + 6 NH3 → [Na(NH3)6]+ + e− The cation [Na(NH3)6]+ is an octahedral coordination complex. Despite the name, the electron does not leave the sodium-ammonia complex, but it is transferred from Na to the vacant orbitals of the coordinated ammonia molecules. Solid salts Addition of a complexant like crown ether or [2.2.2]-cryptand to a solution of [Na(NH3)6]+e− affords [Na (crown ether)]+e− or [Na(2,2,2-crypt)]+e−. Evaporation of these solutions yields a blue-black paramagnetic solid with the formula [Na(2,2,2-crypt)]+e−. Most solid electride salts decompose above 240 K, although [Ca24Al28O64]4+(e−)4 is stable at room temperature. In these salts, the electron is delocalized between the cations. Properties of these salts have been analyzed. ThI2 and ThI3 have also been proposed to be electride compounds. Similarly, , , , and are all electride salts with a tricationic metal ion. Organometallic electrides Magnesium-reduced nickel(II)-bipyridyl (bipy) complexes have been labeled organic electrides. An example is [(THF)4Mg4(μ2-bipy)4]–, in which the electride is the singly occupied molecular orbital (SOMO) formed by the Mg-square cluster within the larger complex. "Inorganic electrides" have also been described. Reactions Electride salts are powerful reducing agents, as demonstrated by their use in the Birch reduction. Evaporation of these blue solutions affords a mirror of Na metal. If not evaporated, such solutions slowly lose their colour as the electrons reduce ammonia: 2[Na(NH3)6]+e− → 2NaNH2 + 10NH3 + H2 This conversion is catalyzed by various metals. An electride, [Na(NH3)6]+e−, is formed as a reaction intermediate. High-pressure elements In quantum chemistry, an electride is identified by a maximum of the electron density, characterized by a non-nuclear attractor, a large and negative Laplacian at the critical point, and an Electron Localization Function isosurface close to 1. Electride phases are typically semiconducting or have very low conductivity, usually with a complex optical response. A sodium compound called disodium helide has been created under of pressure. It has been proven that the localized electron density in high-pressure electrides does not correspond to isolated electrons, but that it is generated by the formation of (multicenter) chemical bonds. The intrinsic polarization between the atomic nucleus and the electron anion in these high-pressure electrides can lead to unique properties, such as the splitting of the longitudinal and transverse acoustic modes (i.e., LA-TA splitting, an analogue to the LO-TO splitting in ionic compounds), a universal but robust gapless surface state in insulating electrides that forms a de facto real-space topological distribution of charge carriers, and the colossal charge state of some impurities in them. Layered electrides (Electrenes) Layered electrides or electrenes are single-layer materials consisting of alternating atomically thin two-dimensional layers of electrons and ionized atoms. The first example was Ca2N, in which the charge (+4) of two calcium ions is balanced by the charge of a nitride ion (-3) in the ion layer plus a charge (-1) in the electron layer.
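The charge bookkeeping used above for Ca2N and for the [Ca24Al28O64]4+(e−)4 electride can be checked mechanically. The sketch below sums formal charges, counting each delocalized electron as a −1 anion; it is a minimal illustration, and the function name and the way the formulas are encoded are invented for this example.

# Each species is a list of (formal charge, count) pairs.
# Delocalized "anionic" electrons are counted as charge -1 each.
def is_charge_neutral(species) -> bool:
    """Return True if the formal charges sum to zero."""
    return sum(charge * count for charge, count in species) == 0

# Ca2N electride: two Ca2+ and one N3- in the ion layer, plus one electron
# per formula unit in the two-dimensional electron layer.
ca2n = [(+2, 2), (-3, 1), (-1, 1)]

# Mayenite-derived electride: a [Ca24Al28O64]4+ framework balanced by four electrons.
mayenite_electride = [(+4, 1), (-1, 4)]

print(is_charge_neutral(ca2n))               # True
print(is_charge_neutral(mayenite_electride)) # True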
Physical sciences
Noble gas compounds
Chemistry
18978668
https://en.wikipedia.org/wiki/Pickled%20cucumber
Pickled cucumber
A pickled cucumber – commonly known as a pickle in the United States and Canada and a gherkin ( ) in Britain, Ireland, South Africa, Australia, and New Zealand – is a usually small or miniature cucumber that has been pickled in a brine, vinegar, or other solution and left to ferment. The fermentation process is executed either by immersing the cucumbers in an acidic solution or through souring by lacto-fermentation. Pickled cucumbers are often part of mixed pickles. Historical origins It is often claimed that pickled cucumbers were first developed for workers building the Great Wall of China, though another hypothesis is that they were first made as early as 2030 BC in the Tigris Valley of Mesopotamia, using cucumbers brought originally from India. According to the New York Food Museum, archaeologists believe ancient Mesopotamians pickled food as far back as 2400 B.C. while, centuries later, cucumbers native to India were being pickled in the Tigris Valley. Ancient sources and historians have documented awareness around the nutritional benefits of pickles thousands of years ago as well as the perceived beauty benefits of pickles— Queen Cleopatra of Egypt credited the pickles in her diet with contributing to her health and legendary beauty. During World War II, the U.S. government recognized the importance of pickles in soldiers' diets and allocated 40% of the nation's pickle production to the armed forces. Types Pickled cucumbers are highly popular in the United States and are a delicacy in northern and eastern Europe. Pickled cucumbers are flavored differently in different regions of the world. Brined pickles Brined pickles are prepared using the traditional process of natural fermentation in brine, making them grow sour. The salt concentration in the brine can vary between . Vinegar is not needed in the brine of naturally fermented pickled cucumbers. The fermentation process depends on the Lactobacillus bacteria that naturally occur on the skin of a growing cucumber. These may be removed during commercial harvesting and packing processes. Bacteria cultures can be reintroduced to the vegetables by adding already fermented foods such as yogurt or other fermented milk products, pieces of sourdough bread, or pickled vegetables such as sauerkraut. Typically, small cucumbers are placed in a glass or ceramic vessel or a wooden barrel, together with various spices. Among those traditionally used in many recipes are garlic, horseradish, the whole dill stems with umbels and green seeds, white mustard seeds, grape, oak, cherry, blackcurrant and bay laurel leaves, dried allspice fruits, and—most importantly—salt. The container is then filled with cooled, boiled water and kept under a non-airtight cover (often cloth tied on with string or a rubber band) for several weeks, depending on taste and external temperature. Traditionally, stones (also sterilized by boiling) are placed on top of the cucumbers to keep them under the water. The cucumber's sourness depends on the amount of salt added (saltier cucumbers tend to be sourer). Since brined pickles are produced without vinegar, a film of bacteria forms on top of the brine. This does not indicate that the pickles have spoiled, and the film may be removed. They do not keep as long as cucumbers that are pickled with vinegar and usually must be refrigerated. Some commercial manufacturers add vinegar as a preservative. Bread-and-butter Bread-and-butter pickles are a marinated variety of pickled cucumber in a solution of vinegar, sugar, and spices. 
They may be chilled as refrigerator pickles or canned. Their name and broad popularity in the United States are attributed to Omar and Cora Fanning, Illinois cucumber farmers who started selling sweet and sour pickles in the 1920s. They filed for the trademark "Fanning's Bread and Butter Pickles" in 1923 (though the recipe and similar recipes are probably much older). The story behind the name is that the Fannings survived rough years by making the pickles with their surplus of undersized cucumbers and bartering them with their grocer for staples such as bread and butter. Their taste is often much sweeter than other types of pickle, due to the sweeter brine they are marinated in, but they differ from sweet pickles in that they are spiced with cilantro and other spices. Gherkin Gherkins are small cucumbers, typically those in length, often with bumpy skin, which are typically used for pickling. The word gherkin comes from early modern Dutch gurken or augurken, 'small pickled cucumber'. Cornichons, or baby pickles, are tart French pickles made from gherkins pickled in vinegar and tarragon. They traditionally accompany pâtés and cold cuts. Sweet gherkins, which contain sugar in the pickling brine, are also a popular variety. The term gherkin is also used in the name West Indian gherkin for Cucumis anguria, a closely related species. West Indian gherkins are also sometimes used as pickles. Kosher dill A "kosher" dill pickle is not necessarily kosher in the sense that it has been prepared in accordance with Jewish dietary law. Instead, it is a pickle made in the traditional manner of Jewish New York City pickle makers, with a generous addition of garlic and dill to natural salt brine. In New York terminology, a "full-sour" kosher dill is fully fermented, while a "half-sour", given a shorter stay in the brine, is still crisp and bright green. Dill pickles, whether or not described as "kosher", have been served in New York City since at least 1899. Hungarian In Hungary, while regular vinegar-pickled cucumbers ( ) are made during most of the year, during the summer kovászos uborka ("leavened pickles") are made without the use of vinegar. Cucumbers are placed in a glass vessel along with spices (usually dill and garlic), water, and salt. Additionally, a slice or two of bread are placed at the top and bottom of the solution, and the container is left to sit in the sun for a few days so the yeast in the bread can help cause a fermentation process. Polish and German The Polish- or German-style pickled cucumber ( or ; ) was developed in the northern parts of central and eastern Europe. It has been exported worldwide and is found in the cuisines of many countries, including the United States, where immigrants introduced it. It is sour, similar to the kosher dill, but tends to be seasoned differently. Traditionally, pickles were preserved in wooden barrels but are now sold in glass jars. A cucumber only pickled for a few days is different in taste (less sour) than one pickled for a longer time and is called ogórek małosolny, which means "low-salt cucumber". This distinction is similar to the one between half- and full-sour types of kosher dills (see above). Another kind of pickled cucumber popular in Poland is ogórek konserwowy/korniszon ("preserved cucumber"), which is rather sweet and vinegary in taste due to the different composition of the preserving solution. Lime Lime pickles are soaked in pickling lime (not to be confused with the citrus fruit) rather than in a salt brine. 
This is done more to enhance texture (by making them crisper) than as a preservative. The lime is then rinsed off the pickles. Vinegar and sugar are often added after the 24-hour soak in lime, along with pickling spices. If the rinse is incomplete, the acids will end up too weak to preserve the vegetable, compromising food safety. The crisping effect of lime is caused by its calcium content. A safer and more convenient alternative is calcium chloride, which is neutral and requires no rinsing. Kool-Aid pickles Kool-Aid pickles, or "koolickles", enjoyed by children in parts of the Southern United States, are created by soaking dill pickles in a mixture of powdered Kool-Aid and pickle brine. Southern Living reported that fruit punch and cherry Kool-Aid were the most popular flavors for pickling. The flesh of Kool-Aid pickles typically takes on a pink color. Nutrition Similar to pickled vegetables such as sauerkraut, sour pickled cucumbers (technically a fruit) are low in calories. They also contain a moderate amount of vitamin K, specifically in the form of K1. A sour pickled cucumber offers 12–16 μg, or approximately 15–20% of the Recommended Daily Allowance, of vitamin K. It also offers of food energy, most of which comes from carbohydrate. However, most sour pickled cucumbers are also high in sodium; one pickled cucumber can contain 350–500 mg, or 15–20% of the American recommended daily limit of 2400 mg. Sweet pickled cucumbers, including bread-and-butter pickles, are higher in calories due to their sugar content; a similar portion may contain . Sweet pickled cucumbers also tend to contain significantly less sodium than sour pickles. Pickles are being researched for their ability to act as vegetables with high probiotic content. Probiotics are typically associated with dairy products, but lactobacilli species such as L. plantarum and L. brevis have been shown to add to the nutritional value of pickles. Serving During the Victorian era, pickles were considered a luxury food, meaning households that served pickles were wealthy enough to have servants or staff who could prepare them. Middle- and upper-class households often served pickles in pickle castors, glass containers in embellished silver holders. The pickles were served with coordinated silver tongs. In the United States, pickles are often served as a side dish accompanying meals. This usually takes the form of a "pickle spear", a pickled cucumber cut lengthwise into quarters or sixths. Pickles may be used as a condiment on a hamburger or other sandwich (usually in slice form) or a sausage or hot dog in chopped form as pickle relish. Soured cucumbers are commonly used in various dishes—for example, pickle-stuffed meatloaf, potato salad, or chicken salad—or consumed alone as an appetizer. Pickles are sometimes served alone as festival foods, often on a stick. This is also done in Japan, where it is referred to as . Dill pickles can be fried, typically deep-fried with a breading or batter surrounding the spear or slice. This is a popular dish in the southern US and a rising trend elsewhere in the US. In Russia and Ukraine, pickles are used in rassolnik: a traditional soup made from pickled cucumbers, pearl barley, pork or beef kidneys, and various herbs. The dish is known to have existed as far back as the 15th century when it was called kalya. 
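The sodium figures in the Nutrition paragraph above are a simple percent-of-daily-limit calculation. A minimal check in Python, using only the 350–500 mg range and the 2,400 mg limit quoted in the text (the function name is illustrative, not from the source):

```python
# Percent of a daily sodium limit covered by one pickled cucumber.
# The 2400 mg limit and the 350-500 mg range come from the text above.
def percent_of_daily_limit(amount_mg: float, daily_limit_mg: float = 2400.0) -> float:
    return 100.0 * amount_mg / daily_limit_mg

print(round(percent_of_daily_limit(350), 1))  # 14.6 -> roughly 15%
print(round(percent_of_daily_limit(500), 1))  # 20.8 -> roughly 20%
```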
In southern England, large gherkins pickled in vinegar are served as an accompaniment to fish and chips and are sold from big jars on the counter at a fish and chip shop, along with pickled onions. In the Cockney dialect of London, this type of gherkin is called a "wally". Etymology The term pickle is derived from the Dutch word pekel, meaning brine. In the United States and Canada, the word pickle alone used as a noun refers to a pickled cucumber (other types of pickled vegetables will be described using the adjective "pickled", such as "pickled onion", "pickled beets", etc.). In the UK, pickle generally refers to a style of sweet, vinegary chutney, such as Branston pickle, commonly served with a ploughman's lunch. The term traditionally used in British English to refer to a pickled cucumber, gherkin, is also of Dutch origin, derived from the word gurken or augurken, meaning cucumber.
Biology and health sciences
Botanical fruits used as culinary vegetables
Plants
18978754
https://en.wikipedia.org/wiki/Apple
Apple
An apple is a round, edible fruit produced by an apple tree (Malus spp., among them the domestic or orchard apple; Malus domestica). Apple trees are cultivated worldwide and are the most widely grown species in the genus Malus. The tree originated in Central Asia, where its wild ancestor, Malus sieversii, is still found. Apples have been grown for thousands of years in Eurasia and were introduced to North America by European colonists. Apples have religious and mythological significance in many cultures, including Norse, Greek, and European Christian tradition. Apples grown from seed tend to be very different from those of their parents, and the resultant fruit frequently lacks desired characteristics. For commercial purposes, including botanical evaluation, apple cultivars are propagated by clonal grafting onto rootstocks. Apple trees grown without rootstocks tend to be larger and much slower to fruit after planting. Rootstocks are used to control the speed of growth and the size of the resulting tree, allowing for easier harvesting. There are more than 7,500 cultivars of apples. Different cultivars are bred for various tastes and uses, including cooking, eating raw, and cider or apple juice production. Trees and fruit are prone to fungal, bacterial, and pest problems, which can be controlled by a number of organic and non-organic means. In 2010, the fruit's genome was sequenced as part of research on disease control and selective breeding in apple production. Etymology The word apple, whose Old English ancestor is , is descended from the Proto-Germanic noun , descended in turn from Proto-Indo-European . As late as the 17th century, the word also functioned as a generic term for all fruit, including nuts. This can be compared to the 14th-century Middle English expression , meaning a banana. Description The apple is a deciduous tree, generally standing tall in cultivation and up to in the wild, though more typically . When cultivated, the size, shape and branch density are determined by rootstock selection and trimming method. Apple trees may naturally have a rounded to erect crown with a dense canopy of leaves. The bark of the trunk is dark gray or gray-brown, but young branches are reddish or dark-brown with a smooth texture. Young twigs are covered in fine downy hairs; they become hairless when older. The buds are egg-shaped and dark red or purple in color; they range in size from 3 to 5 millimeters, but are usually less than 4 mm. The bud scales have very hairy edges. When emerging from the buds, the leaves are , meaning that their edges overlap each other. Leaves can be simple ovals (elliptic), medium or wide in width, somewhat egg-shaped with the wider portion toward their base (ovate), or even with sides that are more parallel to each other instead of curved (oblong) with a narrow pointed end. The edges have broadly-angled teeth, but do not have lobes. The top surface of the leaves are , almost hairless, while the undersides are densely covered in fine hairs. The leaves are attached alternately by short leaf stems long. Blossoms are produced in spring simultaneously with the budding of the leaves and are produced on spurs and some long shoots. When the flower buds first begin to open the petals are rose-pink and fade to white or light pink when fully open with each flower in diameter. The five-petaled flowers are grouped in an inflorescence consisting of a cyme with 3–7 flowers. 
The central flower of the inflorescence is called the "king bloom"; it opens first and can develop a larger fruit. Open apple blossoms are damaged by even brief exposures to temperatures or less, although the overwintering wood and buds are hardy down to . Fruit The fruit is a pome that matures in late summer or autumn. The true fruits or carpels are the harder interior chambers inside the apple's core. There are usually five carpels inside an apple, but there may be as few as three. Each of the chambers contains one or two seeds. The edible flesh is formed from the receptacle at the base of the flower. The seeds are egg- to pear-shaped and may be colored from light brown or tan to a very dark brown, often with red shades or even purplish-black. They may have a blunt or sharp point. The five sepals remain attached and stand out from the surface of the apple. The size of the fruit varies widely between cultivars, but generally has a diameter between . The shape is quite variable and may be nearly round, elongated, conical, or short and wide. The groundcolor of ripe apples is yellow, green, yellow-green or whitish yellow. The overcolor of ripe apples can be orange-red, pink-red, red, purple-red or brown-red. The overcolor amount can be 0–100%. The skin may be wholly or partly russeted, making it rough and brown. The skin is covered in a protective layer of epicuticular wax. The skin may also be marked with scattered dots. The flesh is generally pale yellowish-white, though it can be pink, yellow or green. Chemistry Important volatile compounds in apples that contribute to their scent and flavour include acetaldehyde, ethyl acetate, 1-butanal, ethanol, 2-methylbutanal, 3-methylbutanal, ethyl propionate, ethyl 2-methylpropionate, ethyl butyrate, ethyl 2-methyl butyrate, hexanal, 1-butanol, 3-methylbutyl acetate, 2-methylbutyl acetate, 1-propyl butyrate, ethyl pentanoate, amyl acetate, 2-methyl-1-butanol, trans-2-hexenal, ethyl hexanoate, hexanol. Taxonomy The apple as a species has more than 100 alternative scientific names, or synonyms. In modern times, Malus pumila and Malus domestica are the two main names in use. M. pumila is the older name, but M. domestica has become much more commonly used starting in the 21st century, especially in the western world. Two proposals were made to make M. domestica a conserved name: the earlier proposal was voted down by the Committee for Vascular Plants of the IAPT in 2014, but in April 2017 the Committee decided, with a narrow majority, that the newly popular name should be conserved. The General Committee of the IAPT decided in June 2017 to approve this change, officially conserving M. domestica. Nevertheless, some works published after 2017 still use M. pumila as the correct name, under an alternate taxonomy. When first classified by Linnaeus in 1753, the pears, apples, and quinces were combined into one genus that he named Pyrus and he named the apple as Pyrus malus. This was widely accepted, however the botanist Philip Miller published an alternate classification in The Gardeners Dictionary with the apple species separated from Pyrus in 1754. He did not clearly indicate that by Malus pumila he meant the domesticated apple. Nonetheless, it was used as such by many botanists. When Moritz Balthasar Borkhausen published his scientific description of the apple in 1803 it may have been a new combination of P. malus var. domestica, but this was not directly referenced by Borkhausen. The earliest use of var. 
domestica for the apple was by Georg Adolf Suckow in 1786. Genome Apples are diploid, with two sets of chromosomes per cell (though triploid cultivars, with three sets, are not uncommon), have 17 chromosomes and an estimated genome size of approximately 650 Mb. Several whole genome sequences have been completed and made available. The first one in 2010 was based on the diploid cultivar 'Golden Delicious'. However, this first whole genome sequence contained several errors, in part owing to the high degree of heterozygosity in diploid apples which, in combination with an ancient genome duplication, complicated the assembly. Recently, double- and trihaploid individuals have been sequenced, yielding whole genome sequences of higher quality. The first whole genome assembly was estimated to contain around 57,000 genes, though the more recent genome sequences support estimates between 42,000 and 44,700 protein-coding genes. The availability of whole genome sequences has provided evidence that the wild ancestor of the cultivated apple most likely is Malus sieversii. Re-sequencing of multiple accessions has supported this, while also suggesting extensive introgression from Malus sylvestris following domestication. Cultivation History Central Asia is generally considered the center of origin for apples due to the genetic variability in specimens there. The wild ancestor of Malus domestica was Malus sieversii, found growing wild in the mountains of Central Asia in southern Kazakhstan, Kyrgyzstan, Tajikistan, and northwestern China. Cultivation of the species, most likely beginning on the forested flanks of the Tian Shan mountains, progressed over a long period of time and permitted secondary introgression of genes from other species into the open-pollinated seeds. Significant exchange with Malus sylvestris, the crabapple, resulted in populations of apples being more related to crabapples than to the more morphologically similar progenitor Malus sieversii. In strains without recent admixture the contribution of the latter predominates. The apple is thought to have been domesticated 4,000–10,000 years ago in the Tian Shan mountains, and then to have travelled along the Silk Road to Europe, with hybridization and introgression of wild crabapples from Siberia (M. baccata), the Caucasus (M. orientalis), and Europe (M. sylvestris). Only the M. sieversii trees growing on the western side of the Tian Shan mountains contributed genetically to the domesticated apple, not the isolated population on the eastern side. Chinese soft apples, such as M. asiatica and M. prunifolia, have been cultivated as dessert apples for more than 2,000 years in China. These are thought to be hybrids between M. baccata and M. sieversii in Kazakhstan. Among the traits selected for by human growers are size, fruit acidity, color, firmness, and soluble sugar. Unusually for domesticated fruits, the wild M. sieversii origin is only slightly smaller than the modern domesticated apple. At the Sammardenchia-Cueis site near Udine in Northeastern Italy, seeds from some form of apples have been found in material carbon dated to between 6570 and 5684 BCE. Genetic analysis has not yet been successfully used to determine whether such ancient apples were wild Malus sylvestris or Malus domesticus containing Malus sieversii ancestry. It is hard to distinguish in the archeological record between foraged wild apples and apple plantations. There is indirect evidence of apple cultivation in the third millennium BCE in the Middle East. 
There is direct evidence in the form of apple cores dated to the 10th century BCE from a Judean site between the Sinai and Negev. There was substantial apple production in European classical antiquity, and grafting was certainly known then. Grafting is an essential part of modern domesticated apple production, to be able to propagate the best cultivars; it is unclear when apple tree grafting was invented. The Roman writer Pliny the Elder describes a method of storage for apples from his time in the 1st century. He says they should be placed in a room with good air circulation from a north-facing window on a bed of straw, chaff, or mats with windfalls kept separately. Though methods like this will extend the availability of reasonably fresh apples, without refrigeration their lifespan is limited. Even sturdy winter apple varieties will only keep well until December in cool climates. For longer storage, medieval Europeans strung up cored and peeled apples to dry, either whole or sliced into rings. Of the many Old World plants that the Spanish introduced to Chiloé Archipelago in the 16th century, apple trees became particularly well adapted. Apples were introduced to North America by colonists in the 17th century, and the first named apple cultivar was introduced in Boston by Reverend William Blaxton in 1640. The only apples native to North America are crab apples. Apple cultivars brought as seed from Europe were spread along Native American trade routes, as well as being cultivated on colonial farms. An 1845 United States apple nursery catalogue sold 350 of the "best" cultivars, showing the proliferation of new North American cultivars by the early 19th century. In the 20th century, irrigation projects in Eastern Washington began and allowed the development of the multibillion-dollar fruit industry, of which the apple is the leading product. Until the 20th century, farmers stored apples in frostproof cellars during the winter for their own use or for sale. Improved transportation of fresh apples by train and road replaced the necessity for storage. Controlled atmosphere facilities are used to keep apples fresh year-round. Controlled atmosphere facilities use high humidity, low oxygen, and controlled carbon dioxide levels to maintain fruit freshness. They were first researched at Cambridge University in the 1920s and first used in the United States in the 1950s. Breeding Many apples grow readily from seeds. However, apples must be propagated asexually to obtain cuttings with the characteristics of the parent. This is because seedling apples are "extreme heterozygotes". Rather than resembling their parents, seedlings are all different from each other and from their parents. Triploid cultivars have an additional reproductive barrier in that three sets of chromosomes cannot be divided evenly during meiosis, yielding unequal segregation of the chromosomes (aneuploids). Even in the case when a triploid plant can produce a seed (apples are an example), it occurs infrequently, and seedlings rarely survive. Because apples do not breed true when planted as seeds, propagation usually involves grafting of cuttings. The rootstock used for the bottom of the graft can be selected to produce trees of a large variety of sizes, as well as changing the winter hardiness, insect and disease resistance, and soil preference of the resulting tree. 
Dwarf rootstocks can be used to produce very small trees (less than high at maturity), which bear fruit many years earlier in their life cycle than full size trees, and are easier to harvest. Dwarf rootstocks for apple trees can be traced as far back as 300 BCE, to the area of Persia and Asia Minor. Alexander the Great sent samples of dwarf apple trees to Aristotle's Lyceum. Dwarf rootstocks became common by the 15th century and later went through several cycles of popularity and decline throughout the world. The majority of the rootstocks used to control size in apples were developed in England in the early 1900s. The East Malling Research Station conducted extensive research into rootstocks, and their rootstocks are given an "M" prefix to designate their origin. Rootstocks marked with an "MM" prefix are Malling-series cultivars later crossed with trees of 'Northern Spy' in Merton, England. Most new apple cultivars originate as seedlings, which either arise by chance or are bred by deliberately crossing cultivars with promising characteristics. The words "seedling", "pippin", and "kernel" in the name of an apple cultivar suggest that it originated as a seedling. Apples can also form bud sports (mutations on a single branch). Some bud sports turn out to be improved strains of the parent cultivar. Some differ sufficiently from the parent tree to be considered new cultivars. Apples have been acclimatized in Ecuador at very high altitudes, where they can often, with the needed factors, provide crops twice per year because of constant temperate conditions year-round. Pollination Apples are self-incompatible; they must cross-pollinate to develop fruit. During the flowering each season, apple growers often utilize pollinators to carry pollen. Honey bees are most commonly used. Orchard mason bees are also used as supplemental pollinators in commercial orchards. Bumblebee queens are sometimes present in orchards, but not usually in sufficient number to be significant pollinators. Cultivars are sometimes classified by the day of peak bloom in the average 30-day blossom period, with pollinizers selected from cultivars within a 6-day overlap period. There are four to seven pollination groups in apples, depending on climate: Group A – Early flowering, 1 to 3 May in England ('Gravenstein', 'Red Astrachan') Group B – 4 to 7 May ('Idared', 'McIntosh') Group C – Mid-season flowering, 8 to 11 May ('Granny Smith', 'Cox's Orange Pippin') Group D – Mid/late season flowering, 12 to 15 May ('Golden Delicious', 'Calville blanc d'hiver') Group E – Late flowering, 16 to 18 May ('Braeburn', 'Reinette d'Orléans') Group F – 19 to 23 May ('Suntan') Group H – 24 to 28 May ('Court-Pendu Gris' – also called Court-Pendu plat) One cultivar can be pollinated by a compatible cultivar from the same group or close (A with A, or A with B, but not A with C or D). Maturation and harvest Cultivars vary in their yield and the ultimate size of the tree, even when grown on the same rootstock. Some cultivars, if left unpruned, grow very large—letting them bear more fruit, but making harvesting more difficult. Depending on tree density (number of trees planted per unit surface area), mature trees typically bear of apples each year, though productivity can be close to zero in poor years. Apples are harvested using three-point ladders that are designed to fit amongst the branches. Trees grafted on dwarfing rootstocks bear about of fruit per year. 
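The pollination-group rule described above (a cultivar can be pollinated by one from the same flowering group or an immediately adjacent one, e.g. A with A or B but not A with C or D) amounts to a small compatibility check. A minimal sketch, assuming the groups can be ordered as listed (the source skips the letter G, and the helper name is illustrative, not from the source):

```python
# Flowering groups in bloom order as listed in the text; note there is no group "G".
GROUPS = "ABCDEFH"

def can_pollinate(group_1: str, group_2: str) -> bool:
    """True if two cultivars' flowering groups are the same or adjacent."""
    return abs(GROUPS.index(group_1) - GROUPS.index(group_2)) <= 1

print(can_pollinate("A", "B"))  # True: bloom periods overlap enough
print(can_pollinate("A", "C"))  # False: bloom periods are too far apart
```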
Some farms with apple orchards open them to the public so consumers can pick their own apples. Crops ripen at different times of the year according to the cultivar. Cultivars that yield their crop in the summer include 'Sweet Bough' and 'Duchess'; fall producers include 'Blenheim'; winter producers include 'King', 'Swayzie', and 'Tolman Sweet'. Storage Commercially, apples can be stored for months in controlled atmosphere chambers. Apples are commonly stored in chambers with lowered concentrations of oxygen to reduce respiration and slow softening and other changes if the fruit is already fully ripe. The gas ethylene is used by plants as a hormone which promotes ripening, decreasing the time an apple can be stored. For storage longer than about six months the apples are picked earlier, before full ripeness, when ethylene production by the fruit is low. However, in many varieties this increases their sensitivity to carbon dioxide, which also must be controlled. For home storage, most cultivars of apple can be stored for three weeks in a pantry and four to six weeks from the date of purchase in a refrigerator that maintains . Some varieties of apples (e.g. 'Granny Smith' and 'Fuji') have more than three times the storage life of others. Non-organic apples may be sprayed with the substance 1-methylcyclopropene, which blocks the apples' ethylene receptors, temporarily preventing them from ripening. Pests and diseases Apple trees are susceptible to fungal and bacterial diseases, and to damage by insect pests. Many commercial orchards pursue a program of chemical sprays to maintain high fruit quality, tree health, and high yields. Organic orchards, by contrast, prohibit the use of synthetic pesticides, though some older pesticides are allowed. Organic methods include, for instance, introducing its natural predator to reduce the population of a particular pest. A wide range of pests and diseases can affect the plant. Three of the more common diseases or pests are mildew, aphids, and apple scab. Mildew is characterized by light grey powdery patches appearing on the leaves, shoots and flowers, normally in spring. The flowers turn a creamy yellow color and do not develop correctly. This can be treated similarly to Botrytis—eliminating the conditions that caused the disease and burning the infected plants are among recommended actions. Aphids are small insects with sucking mouthparts. Five species of aphids commonly attack apples: apple grain aphid, rosy apple aphid, apple aphid, spirea aphid, and the woolly apple aphid. The aphid species can be identified by color, time of year, and by differences in the cornicles (small paired projections from their rear). Aphids feed on foliage using needle-like mouth parts to suck out plant juices. When present in high numbers, certain species reduce tree growth and vigor. Apple scab: Apple scab causes leaves to develop olive-brown spots with a velvety texture that later turn brown and become cork-like in texture. The disease also affects the fruit, which also develops similar brown spots with velvety or cork-like textures. Apple scab is spread through fungus growing in old apple leaves on the ground and spreads during warm spring weather to infect the new year's growth. Among the most serious disease problems is a bacterial disease called fireblight, and three fungal diseases: Gymnosporangium rust, black spot, and bitter rot. Codling moths and the apple maggots of fruit flies cause serious damage to apple fruits, making them unsaleable. 
Young apple trees are also prone to mammal pests like mice and deer, which feed on the soft bark of the trees, especially in winter. The larvae of the apple clearwing moth (red-belted clearwing) burrow through the bark and into the phloem of apple trees, potentially causing significant damage. Cultivars There are more than 7,500 known cultivars (cultivated varieties) of apples. Cultivars vary in their yield and the ultimate size of the tree, even when grown on the same rootstock. Different cultivars are available for temperate and subtropical climates. The UK's National Fruit Collection, which is the responsibility of the Department of Environment, Food, and Rural Affairs, includes a collection of over 2,000 cultivars of apple tree in Kent. The University of Reading, which is responsible for developing the UK national collection database, provides access to search the national collection. The University of Reading's work is part of the European Cooperative Programme for Plant Genetic Resources of which there are 38 countries participating in the Malus/Pyrus work group. The UK's national fruit collection database contains much information on the characteristics and origin of many apples, including alternative names for what is essentially the same "genetic" apple cultivar. Most of these cultivars are bred for eating fresh (dessert apples), though some are cultivated specifically for cooking (cooking apples) or producing cider. Cider apples are typically too tart and astringent to eat fresh, but they give the beverage a rich flavor that dessert apples cannot. In the United States there are many apple breeding programs associated with universities. Cornell University has had a program operating since 1880 in Geneva, New York. Among their recent well known apples is the 'SnapDragon' cultivar released in 2013. In the west Washington State University started a program to support their apple industry in 1994 and released the 'Cosmic Crisp' cultivar in 2017. The third most grown apple cultivar in the United States is the 'Honeycrisp', released by the University of Minnesota program in 1991. Unusually for a popular cultivar, the 'Honeycrisp' is not directly related to another popular apple cultivar but instead to two unsuccessful cultivars. In Europe there are also many breeding programs such as the Julius Kühn-Institut, the German federal research center for cultivated plants. Commercially popular apple cultivars are soft but crisp. Other desirable qualities in modern commercial apple breeding are a colorful skin, absence of russeting, ease of shipping, lengthy storage ability, high yields, disease resistance, common apple shape, and developed flavor. Modern apples are generally sweeter than older cultivars, as popular tastes in apples have varied over time. Most North Americans and Europeans favor sweet, subacid apples, but tart apples have a strong minority following. Extremely sweet apples with barely any acid flavor are popular in Asia, especially the Indian subcontinent. Old cultivars are often oddly shaped, russeted, and grow in a variety of textures and colors. Some find them to have better flavor than modern cultivars, but they may have other problems that make them commercially unviable—low yield, disease susceptibility, poor tolerance for storage or transport, or just being the "wrong" size. A few old cultivars are still produced on a large scale, but many have been preserved by home gardeners and farmers that sell directly to local markets. 
Many unusual and locally important cultivars with their own unique taste and appearance exist; apple conservation campaigns have sprung up around the world to preserve such local cultivars from extinction. In the United Kingdom, old cultivars such as 'Cox's Orange Pippin' and 'Egremont Russet' are still commercially important even though by modern standards they are low yielding and susceptible to disease. Production World production of apples in 2022 was 96 million tonnes, with China producing 50% of the total (table). Secondary producers were the United States, Turkey, and Poland. Toxicity Amygdalin Apple seeds contain small amounts of amygdalin, a sugar and cyanide compound known as a cyanogenic glycoside. Ingesting small amounts of apple seeds causes no ill effects, but consumption of extremely large doses can cause adverse reactions. It may take several hours before the poison takes effect, as cyanogenic glycosides must be hydrolyzed before the cyanide ion is released. The U.S. National Library of Medicine's Hazardous Substances Data Bank records no cases of amygdalin poisoning from consuming apple seeds. Allergy One form of apple allergy, often found in northern Europe, is called birch-apple syndrome and is found in people who are also allergic to birch pollen. Allergic reactions are triggered by a protein in apples that is similar to birch pollen, and people affected by this protein can also develop allergies to other fruits, nuts, and vegetables. Reactions, which entail oral allergy syndrome (OAS), generally involve itching and inflammation of the mouth and throat, but in rare cases can also include life-threatening anaphylaxis. This reaction only occurs when raw fruit is consumed—the allergen is neutralized in the cooking process. The variety of apple, maturity and storage conditions can change the amount of allergen present in individual fruits. Long storage times can increase the amount of proteins that cause birch-apple syndrome. In other areas, such as the Mediterranean, some individuals have adverse reactions to apples because of their similarity to peaches. This form of apple allergy also includes OAS, but often has more severe symptoms, such as vomiting, abdominal pain and urticaria, and can be life-threatening. Individuals with this form of allergy can also develop reactions to other fruits and nuts. Cooking does not break down the protein causing this particular reaction, so affected individuals cannot eat raw or cooked apples. Freshly harvested, over-ripe fruits tend to have the highest levels of the protein that causes this reaction. Breeding efforts have yet to produce a hypoallergenic fruit suitable for either of the two forms of apple allergy. Uses Nutrition A raw apple is 86% water and 14% carbohydrates, with negligible content of fat and protein (table). A reference serving of a raw apple with skin weighing provides 52 calories and a moderate content of dietary fiber (table). Otherwise, there is low content of micronutrients, with the Daily Values of all falling below 10% (table). Culinary Apple varieties can be grouped as cooking apples, eating apples, and cider apples, the last so astringent as to be "almost inedible". Apples are consumed as juice, raw in salads, baked in pies, cooked into sauces and apple butter, or baked. They are sometimes used as an ingredient in savory foods, such as sausage and stuffing. Several techniques are used to preserve apples and apple products. Traditional methods include drying and making apple butter. 
Juice and cider are produced commercially; cider is a significant industry in regions such as the West of England and Normandy. A toffee apple (UK) or caramel apple (US) is a confection made by coating an apple in hot toffee or caramel candy respectively and allowing it to cool. Apples and honey are a ritual food pairing eaten during the Jewish New Year of Rosh Hashanah. Apples are an important ingredient in many desserts, such as pies, crumbles, and cakes. When cooked, some apple cultivars easily form a puree known as apple sauce, which can be cooked down to form a preserve, apple butter. They are often baked or stewed, and are cooked in some meat dishes. Apples are milled or pressed to produce apple juice, which may be drunk unfiltered (called apple cider in North America), or filtered. Filtered juice is often concentrated and frozen, then reconstituted later and consumed. Apple juice can be fermented to make cider (called hard cider in North America), ciderkin, and vinegar. Through distillation, various alcoholic beverages can be produced, such as applejack, Calvados, and apple brandy. Organic production Organic apples are commonly produced in the United States. Due to infestations by key insects and diseases, organic production is difficult in Europe. The use of pesticides containing chemicals, such as sulfur, copper, microorganisms, viruses, clay powders, or plant extracts (pyrethrum, neem) has been approved by the EU Organic Standing Committee to improve organic yield and quality. A light coating of kaolin, which forms a physical barrier to some pests, also may help prevent apple sun scalding. Non-browning apples Apple skins and seeds contain polyphenols. These are oxidised by the enzyme polyphenol oxidase, which causes browning in sliced or bruised apples, by catalyzing the oxidation of phenolic compounds to o-quinones, a browning factor. Browning reduces apple taste, color, and food value. Arctic apples, a non-browning group of apples introduced to the United States market in 2019, have been genetically modified to silence the expression of polyphenol oxidase, thereby delaying a browning effect and improving apple eating quality. The US Food and Drug Administration in 2015, and Canadian Food Inspection Agency in 2017, determined that Arctic apples are as safe and nutritious as conventional apples. Other products Apple seed oil is obtained by pressing apple seeds for manufacturing cosmetics. In culture Germanic paganism In Norse mythology, the goddess Iðunn is portrayed in the Prose Edda (written in the 13th century by Snorri Sturluson) as providing apples to the gods that give them eternal youthfulness. The English scholar H. R. Ellis Davidson links apples to religious practices in Germanic paganism, from which Norse paganism developed. She points out that buckets of apples were found in the Oseberg ship burial site in Norway, that fruit and nuts (Iðunn having been described as being transformed into a nut in Skáldskaparmál) have been found in the early graves of the Germanic peoples in England and elsewhere on the continent of Europe, which may have had a symbolic meaning, and that nuts are still a recognized symbol of fertility in southwest England. Davidson notes a connection between apples and the Vanir, a tribe of gods associated with fertility in Norse mythology, citing an instance of eleven "golden apples" being given to woo the beautiful Gerðr by Skírnir, who was acting as messenger for the major Vanir god Freyr in stanzas 19 and 20 of Skírnismál. 
Davidson also notes a further connection between fertility and apples in Norse mythology in chapter 2 of the Völsunga saga: when the major goddess Frigg sends King Rerir an apple after he prays to Odin for a child, Frigg's messenger (in the guise of a crow) drops the apple in his lap as he sits atop a mound. Rerir's wife's consumption of the apple results in a six-year pregnancy and the birth (by Caesarean section) of their son—the hero Völsung. Further, Davidson points out the "strange" phrase "Apples of Hel" used in an 11th-century poem by the skald Thorbiorn Brúnarson. She states this may imply that the apple was thought of by Brúnarson as the food of the dead. Further, Davidson notes that the potentially Germanic goddess Nehalennia is sometimes depicted with apples and that parallels exist in early Irish stories. Davidson asserts that while cultivation of the apple in Northern Europe extends back to at least the time of the Roman Empire and came to Europe from the Near East, the native varieties of apple trees growing in Northern Europe are small and bitter. Davidson concludes that in the figure of Iðunn "we must have a dim reflection of an old symbol: that of the guardian goddess of the life-giving fruit of the other world." Greek mythology Apples appear in many religious traditions, including Greek and Roman mythology where it has an ambiguous symbolism of discord, fertility, or courtship. In Greek mythology, the Greek hero Heracles, as a part of his Twelve Labours, was required to travel to the Garden of the Hesperides and pick the golden apples off the Tree of Life growing at its center. The Greek goddess of discord, Eris, became disgruntled after she was excluded from the wedding of Peleus and Thetis. In retaliation, she tossed a golden apple inscribed Καλλίστη (Kallistē, "For the most beautiful one"), into the wedding party. Three goddesses claimed the apple: Hera, Athena, and Aphrodite. Paris of Troy was appointed to select the recipient. After being bribed by both Hera and Athena, Aphrodite tempted him with the most beautiful woman in the world, Helen of Sparta. He awarded the apple to Aphrodite, thus indirectly causing the Trojan War. The apple was thus considered, in ancient Greece, sacred to Aphrodite. To throw an apple at someone was to symbolically declare one's love; and similarly, to catch it was to symbolically show one's acceptance of that love. An epigram claiming authorship by Plato states: Atalanta, also of Greek mythology, raced all her suitors in an attempt to avoid marriage. She outran all but Hippomenes (also known as Melanion, a name possibly derived from melon, the Greek word for both "apple" and fruit in general), who defeated her by cunning, not speed. Hippomenes knew that he could not win in a fair race, so he used three golden apples (gifts of Aphrodite, the goddess of love) to distract Atalanta. It took all three apples and all of his speed, but Hippomenes was finally successful, winning the race and Atalanta's hand. Celtic mythology In Celtic mythology, the otherworld has many names, including Emain Ablach, "Emain of the Apple-trees". A version of this is Avalon in Arthurian legend, or in Welsh Ynys Afallon, "Island of Apples". China In China, apples symbolise peace, since the sounds of the first element ("píng") in the words "apple" (苹果, Píngguǒ) and "peace" (平安, Píng'ān) are homophonous in Mandarin and Cantonese. When these two words are combined, the word Píngānguǒ (平安果, "Peace apples") is formed. 
This association developed further as the name for Christmas Eve in Mandarin is Píngānyè (平安夜, "Peaceful/Quiet Evening"), which made the gifting of apples at this season to friends and associates popular, as a way to wish them peace and safety. Christian art Though the forbidden fruit of Eden in the Book of Genesis is not identified, popular Christian tradition has held that it was an apple that Eve coaxed Adam to share with her. The origin of the popular identification with a fruit unknown in the Middle East in biblical times is found in wordplay with the Latin words mālum (an apple) and mălum (an evil), each of which is normally written malum. The tree of the forbidden fruit is called "the tree of the knowledge of good and evil" in Genesis 2:17, and the Latin for "good and evil" is bonum et malum. Renaissance painters may also have been influenced by the story of the golden apples in the Garden of Hesperides. As a result, in the story of Adam and Eve, the apple became a symbol for knowledge, immortality, temptation, the fall of man into sin, and sin itself. The larynx in the human throat has been called the "Adam's apple" because of a notion that it was caused by the forbidden fruit remaining in the throat of Adam. The apple as symbol of sexual seduction has been used to imply human sexuality, possibly in an ironic vein. Proverb The proverb, "An apple a day keeps the doctor away", addressing the supposed health benefits of the fruit, has been traced to 19th-century Wales, where the original phrase was "Eat an apple on going to bed, and you'll keep the doctor from earning his bread". In the 19th century and early 20th, the phrase evolved to "an apple a day, no doctor to pay" and "an apple a day sends the doctor away"; the phrasing now commonly used was first recorded in 1922.
Biology and health sciences
Rosales
null
18985040
https://en.wikipedia.org/wiki/Data
Data
Data ( , ) are a collection of discrete or continuous values that convey information, describing the quantity, quality, fact, statistics, other basic units of meaning, or simply sequences of symbols that may be further interpreted formally. A datum is an individual value in a collection of data. Data are usually organized into structures such as tables that provide additional context and meaning, and may themselves be used as data in larger structures. Data may be used as variables in a computational process. Data may represent abstract ideas or concrete measurements. Data are commonly used in scientific research, economics, and virtually every other form of human organizational activity. Examples of data sets include price indices (such as the consumer price index), unemployment rates, literacy rates, and census data. In this context, data represent the raw facts and figures from which useful information can be extracted. Data are collected using techniques such as measurement, observation, query, or analysis, and are typically represented as numbers or characters that may be further processed. Field data are data that are collected in an uncontrolled, in-situ environment. Experimental data are data that are generated in the course of a controlled scientific experiment. Data are analyzed using techniques such as calculation, reasoning, discussion, presentation, visualization, or other forms of post-analysis. Prior to analysis, raw data (or unprocessed data) is typically cleaned: Outliers are removed, and obvious instrument or data entry errors are corrected. Data can be seen as the smallest units of factual information that can be used as a basis for calculation, reasoning, or discussion. Data can range from abstract ideas to concrete measurements, including, but not limited to, statistics. Thematically connected data presented in some relevant context can be viewed as information. Contextually connected pieces of information can then be described as data insights or intelligence. The stock of insights and intelligence that accumulate over time resulting from the synthesis of data into information, can then be described as knowledge. Data has been described as "the new oil of the digital economy". Data, as a general concept, refers to the fact that some existing information or knowledge is represented or coded in some form suitable for better usage or processing. Advances in computing technologies have led to the advent of big data, which usually refers to very large quantities of data, usually at the petabyte scale. Using traditional data analysis methods and computing, working with such large (and growing) datasets is difficult, even impossible. (Theoretically speaking, infinite data would yield infinite information, which would render extracting insights or intelligence impossible.) In response, the relatively new field of data science uses machine learning (and other artificial intelligence) methods that allow for efficient applications of analytic methods to big data. Etymology and terminology The Latin word is the plural of , "(thing) given," and the neuter past participle of , "to give". The first English use of the word "data" is from the 1640s. The word "data" was first used to mean "transmissible and storable computer information" in 1946. The expression "data processing" was first used in 1954. When "data" is used more generally as a synonym for "information", it is treated as a mass noun in singular form. 
This usage is common in everyday language and in technical and scientific fields such as software development and computer science. One example of this usage is the term "big data". When used more specifically to refer to the processing and analysis of sets of data, the term retains its plural form. This usage is common in the natural sciences, life sciences, social sciences, software development and computer science, and grew in popularity in the 20th and 21st centuries. Some style guides do not recognize the different meanings of the term and simply recommend the form that best suits the target audience of the guide. For example, APA style as of the 7th edition requires "data" to be treated as a plural form. Meaning Data, information, knowledge, and wisdom are closely related concepts, but each has its role concerning the other, and each term has its meaning. According to a common view, data is collected and analyzed; data only becomes information suitable for making decisions once it has been analyzed in some fashion. One can say that the extent to which a set of data is informative to someone depends on the extent to which it is unexpected by that person. The amount of information contained in a data stream may be characterized by its Shannon entropy. Knowledge is the awareness of its environment that some entity possesses, whereas data merely communicates that knowledge. For example, the entry in a database specifying the height of Mount Everest is a datum that communicates a precisely-measured value. This measurement may be included in a book along with other data on Mount Everest to describe the mountain in a manner useful for those who wish to decide on the best method to climb it. Awareness of the characteristics represented by this data is knowledge. Data are often assumed to be the least abstract concept, information the next least, and knowledge the most abstract. In this view, data becomes information by interpretation; e.g., the height of Mount Everest is generally considered "data", a book on Mount Everest geological characteristics may be considered "information", and a climber's guidebook containing practical information on the best way to reach Mount Everest's peak may be considered "knowledge". "Information" bears a diversity of meanings that range from everyday usage to technical use. This view, however, has also been argued to reverse how data emerges from information, and information from knowledge. Generally speaking, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, and representation. Beynon-Davies uses the concept of a sign to differentiate between data and information; data is a series of symbols, while information occurs when the symbols are used to refer to something. Before the development of computing devices and machines, people had to manually collect data and impose patterns on it. With the development of computing devices and machines, these devices can also collect data. In the 2010s, computers were widely used in many fields to collect data and sort or process it, in disciplines ranging from marketing, analysis of social service usage by citizens to scientific research. These patterns in the data are seen as information that can be used to enhance knowledge. 
These patterns may be interpreted as "truth" (though "truth" can be a subjective concept) and may be authorized as aesthetic and ethical criteria in some disciplines or cultures. Events that leave behind perceivable physical or virtual remains can be traced back through data. Marks are no longer considered data once the link between the mark and observation is broken. Mechanical computing devices are classified according to how they represent data. An analog computer represents a datum as a voltage, distance, position, or other physical quantity. A digital computer represents a piece of data as a sequence of symbols drawn from a fixed alphabet. The most common digital computers use a binary alphabet, that is, an alphabet of two characters typically denoted "0" and "1". More familiar representations, such as numbers or letters, are then constructed from the binary alphabet. Some special forms of data are distinguished. A computer program is a collection of data, that can be interpreted as instructions. Most computer languages make a distinction between programs and the other data on which programs operate, but in some languages, notably Lisp and similar languages, programs are essentially indistinguishable from other data. It is also useful to distinguish metadata, that is, a description of other data. A similar yet earlier term for metadata is "ancillary data." The prototypical example of metadata is the library catalog, which is a description of the contents of books. Data sources With respect to ownership of data collected in the course of marketing or other corporate collection, data has been characterized according to "party" depending on how close the data is to the source or if it has been generated through additional processing. "Zero-party data" refers to data that customers "intentionally and proactively shares". This kind of data can come from a variety of sources, including: subscriptions, preference centers, quizzes, surveys, pop-up forms, and interactive digital experiences. "First-party data" may be collected by a company directly from its customers. The secure exchange of first-party data among companies can be done using data clean rooms. "Second-party data" refers to data obtained from other organizations or partners, through purchase or other means and has been described as "another organization's first-party data". "Third-party data" is data collected by other organizations and subsequently aggregated from different sources, websites, and platforms. "No-party" data can sometimes refer to synthetic data that is generated based on patterns from original data. Data documents Whenever data needs to be registered, data exists in the form of a data document. Kinds of data documents include: data repository data study data set software data paper database data handbook data journal Some of these data documents (data repositories, data studies, data sets, and software) are indexed in Data Citation Indexes, while data papers are indexed in traditional bibliographic databases, e.g., Science Citation Index. Data collection Gathering data can be accomplished through a primary source (the researcher is the first person to obtain the data) or a secondary source (the researcher obtains the data that has already been collected by other sources, such as data disseminated in a scientific journal). Data analysis methodologies vary and include data triangulation and data percolation. 
The latter offers an articulate method of collecting, classifying, and analyzing data using five possible angles of analysis (at least three) to maximize the research's objectivity and permit an understanding of the phenomena under investigation as complete as possible: qualitative and quantitative methods, literature reviews (including scholarly articles), interviews with experts, and computer simulation. The data is thereafter "percolated" using a series of pre-determined steps so as to extract the most relevant information. Data longevity and accessibility An important field in computer science, technology, and library science is the longevity of data. Scientific research generates huge amounts of data, especially in genomics and astronomy, but also in the medical sciences, e.g. in medical imaging. In the past, scientific data has been published in papers and books, stored in libraries, but more recently practically all data is stored on hard drives or optical discs. However, in contrast to paper, these storage devices may become unreadable after a few decades. Scientific publishers and libraries have been struggling with this problem for a few decades, and there is still no satisfactory solution for the long-term storage of data over centuries or even for eternity. Data accessibility. Another problem is that much scientific data is never published or deposited in data repositories such as databases. In a recent survey, data was requested from 516 studies that were published between 2 and 22 years earlier, but less than one out of five of these studies were able or willing to provide the requested data. Overall, the likelihood of retrieving data dropped by 17% each year after publication. Similarly, a survey of 100 datasets in Dryad found that more than half lacked the details to reproduce the research results from these studies. This shows the dire situation of access to scientific data that is not published or does not have enough details to be reproduced. A solution to the problem of reproducibility is the attempt to require FAIR data, that is, data that is Findable, Accessible, Interoperable, and Reusable. Data that fulfills these requirements can be used in subsequent research and thus advances science and technology. In other fields Although data is also increasingly used in other fields, it has been suggested that the highly interpretive nature of them might be at odds with the ethos of data as "given". Peter Checkland introduced the term capta (from the Latin capere, "to take") to distinguish between an immense number of possible data and a sub-set of them, to which attention is oriented. Johanna Drucker has argued that since the humanities affirm knowledge production as "situated, partial, and constitutive," using data may introduce assumptions that are counterproductive, for example that phenomena are discrete or are observer-independent. The term capta, which emphasizes the act of observation as constitutive, is offered as an alternative to data for visual representations in the humanities. The term data-driven is a neologism applied to an activity which is primarily compelled by data over all other factors. Data-driven applications include data-driven programming and data-driven journalism.
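The Meaning section above notes that the amount of information in a data stream can be characterized by its Shannon entropy, and that digital computers ultimately represent data as symbols drawn from a small fixed alphabet. A minimal sketch of that calculation (the function name and example strings are illustrative, not from the source):

```python
import math
from collections import Counter

def shannon_entropy(stream: str) -> float:
    """H = -sum(p * log2(p)) over the relative frequencies of the symbols in a stream."""
    counts = Counter(stream)
    total = len(stream)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(shannon_entropy("01010101"))  # 1.0: two equally frequent symbols carry one bit each
print(shannon_entropy("aaaaaaab"))  # ~0.54: a skewed, more predictable stream carries less
```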
Physical sciences
Science basics
Basics and measurement
12310379
https://en.wikipedia.org/wiki/Saldidae
Saldidae
Saldidae, also known as shore bugs, are a family of insects in the order Hemiptera (true bugs). They are oval-shaped and measure when mature. Typically they are found near shorelines or the marginal growths near freshwater bodies, estuaries, and sea coasts. They can flee by leaping or taking flight. There are about 350 recognized species with the majority from the Nearctic and Palearctic. Many species are found in the intertidal zone and both adults and nymphs of some species like Saldula pallipes can tolerate submergence at high-tide. Saldidae are predators and scavengers. They pass the winter through egg or adult diapause. Genera These 39 genera belong to the family Saldidae: Aoteasalda Larivière & Larochelle, 2016 Calacanthia Reuter, 1891 Capitonisalda J.Polhemus, 1981 Capitonisaldoida J.Polhemus & D.Polhemus, 1991 Chartosaldoida Cobben, 1987 Chartoscirta Stal, 1868 Chiloxanthus Reuter, 1891 Enalosalda Polhemus & Evans, 1969 Halosalda Reuter, 1912 Ioscytus Reuter, 1912 Kiwisaldula Larivière & Larochelle, 2016 Lampracanthia Reuter, 1912 Macrosaldula Leston & Southwood, 1964 Mascarenisalda J.Polhemus & D.Polhemus, 1991 Micracanthia Reuter, 1912 Oiosalda Drake & Hoberlandt, 1952 Orthophrys Horvath, 1911 Orthosaldula Gapud, 1986 Pentacora Reuter, 1912 Propentacora J.Polhemus, 1985 Pseudosaldula Cobben, 1961 Rupisalda J.Polhemus, 1985 Salda Fabricius, 1803 Saldoida Osborn, 1901 Saldula Van Duzee, 1914 Salduncula Brown, 1954 Sinosalda Vinokurov, 2004 Teloleuca Reuter, 1912 Zemacrosaldula Larivière & Larochelle, 2015 † Baissotea Ryzhkova, 2015 † Brevrimatus Zhang, Yao & Ren, 2011 † Helenasaldula Cobben, 1976 † Luculentsalda Zhang, Yao & Ren, 2013 † Mongolocoris Ryzhkova, 2012 † Oligosaldina Statz, 1950 † Paralosalda Polhemus & Evans, 1969 † Saldonia Popov, 1973 † Ulanocoris Ryzhkova, 2012 † Venustsalda Zhang, Song, Yao & Ren, 2012
Biology and health sciences
Hemiptera (true bugs)
Animals
12310667
https://en.wikipedia.org/wiki/Point%20bar
Point bar
A point bar is a depositional feature made of alluvium that accumulates on the inside bend of streams and rivers below the slip-off slope. Point bars are found in abundance in mature or meandering streams. They are crescent-shaped and located on the inside of a stream bend, being very similar to, though often smaller than, towheads, or river islands. Point bars are composed of sediment that is well sorted and typically reflects the overall capacity of the stream. They also have a very gentle slope and an elevation very close to water level. Since they are low-lying, they are often overtaken by floods and can accumulate driftwood and other debris during times of high water levels. Due to their near flat topography and the fact that the water speed is slow in the shallows of the point bar they are popular rest stops for boaters and rafters. However, camping on a point bar can be dangerous as a flash flood that raises the stream level by as little as a few inches (centimetres) can overwhelm a campsite in moments. A point bar is an area of deposition where as a cut bank is an area of erosion. Point bars are formed as the secondary flow of the stream sweeps and rolls sand, gravel and small stones laterally across the floor of the stream and up the shallow sloping floor of the point bar. Formation Any fluid, including water in a stream, can only flow around a bend in vortex flow. In vortex flow the speed of the fluid is fastest where the radius of the flow is smallest, and slowest where the radius is greatest. (Tropical cyclones, tornadoes, and the spinning motion of water as it escapes down a drain are all visible examples of vortex flow.) In the case of water flowing around a bend in a stream the secondary flow in the boundary layer along the floor of the stream does not flow parallel to the banks of the stream but flows partly across the floor of the stream toward the inside of the stream (where the radius of curvature is smallest). This movement of the boundary layer is capable of sweeping and rolling loose particles including sand, gravel, small stones and other submerged objects along the floor of the stream toward the point bar. This can be demonstrated at home. Partly fill a circular bowl or cup with water and sprinkle a little sand, rice or sugar into the water. Set the water in circular motion with a hand or spoon. The secondary flow will quickly sweep the solid particles into a neat pile in the center of the bowl or cup. The primary flow (the vortex) might be expected to sweep the solid particles to the perimeter of the bowl or cup, but instead the secondary flow along the floor of the bowl or cup sweeps the particles toward the center. Where a stream is following a straight course the slower boundary layer along the floor of the stream is also following the same straight course. It sweeps and rolls sand, gravel and polished stones downstream, along the floor of the stream. However, as the stream enters a bend and vortex flow commences as the primary flow, a secondary flow also commences and flows partly across the floor of the stream toward the convex bank (the bank with the smaller radius). Sand, gravel and polished stones that have travelled with the stream for a great distance where the stream was following a straight course may finally come to rest in the point bar of the first stream bend. Due to the circular path of a stream around a bend the surface of the water is slightly higher near the concave bank (the bank with the larger radius) than near the convex bank. 
This slight slope on the water surface of the stream causes a slightly greater water pressure on the floor of the stream near the concave bank than near the convex bank. This pressure gradient drives the slower boundary layer across the floor of the stream toward the convex bank. The pressure gradient is capable of driving the boundary layer up the shallow sloping floor of the point bar, causing sand, gravel and polished stones to be swept and rolled up-hill. The concave bank is often a cut bank and an area of erosion. The eroded material is swept and rolled across the floor of the stream by the secondary flow and may be deposited on the point bar only a small distance downstream from its original location in the concave bank. The point bar typically has a gently sloping floor with shallow water. The shallow water is mostly the accumulated boundary layer and does not have a fast speed. However, in the deepest parts of the stream where the stream is flowing freely, vortex flow prevails and the stream flows fastest where the radius of the bend is smallest, and slowest where the radius is greatest. The shallows around the point bar can become treacherous when the stream is rising. As the water depth increases over the shallows of the point bar, the vortex flow can extend closer toward the convex bank and the water speed at any point can increase dramatically in response to only a small increase in water depth. Fallacy regarding formation of point bars An old fallacy regarding the formation of point bars and oxbow lakes suggests that they are formed by the deposition (dropping) of a watercourse's suspended load, on the claim that the velocity and energy of the stream decrease toward the inside of a bend. This fallacy relies on the erroneous notion that the momentum of the water is "always" slowest on the inside of the bend (where the radius is smallest) and fastest on the outside of the bend (where the radius is greatest), which ignores its increased angular momentum. Mass deposition of suspended solids rarely occurs on one bank save in tidal estuaries; instead, the vortex flow, being faster on the inner bank, compensates for the greater height and therefore mass of water flowing downstream along the concave bank, and the rough, shallow bed usually provides more agitation, per liter of water above it, to maintain any suspended particles. Any relatively steady-gradient open flow that does not meet complex interactions with contrary flows, such as tides, or major obstacles flows around a bend in a simple model of vortex flow, with relatively few variables and coefficients. Point bars typically have a gently sloping floor with shallow water. In very shallow water, a higher proportion of the flow does much more work to overcome friction above and below (especially in a countervailing breeze), which lowers its speed; see Bernoulli's principle. It is probably this close-quarters observation which led early geographers to believe in deposition by sedimentation of suspended matter rather than close-to-bed secondary currents. In a steady-gradient section of a watercourse, sedimentation may occur where the water is saturated and the shallow bank has high flow resistance but does not agitate the suspension.
Similarly, the fallacy has scant explanation as to why deposition occurs at a stream bend, and little or none occurs where the stream is following a straight course, with exception of a steep slope (river gradient) where the river has formed a natural cut or waterfall and may then deposit some of its load at the point of meeting a less steep section e.g. great meander. In the settled low-gradient parts of a meandering watercourse the water speed is slow, turbulence is low, and the water is not capable of holding coarse sand and gravel in suspension. In contrast, point bars comprise coarse sand, gravel, polished stones and other submerged objects. These materials have not been carried in suspension and then dropped on the point bar – they have been swept and rolled into place by the secondary flow that exists across the floor/bed in the vicinity of a stream bend, which will be intensified if there is reflection particularly from an irregular, scoured opposing bank.
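The free-vortex picture used above can be made concrete with a short numerical sketch. The following Python snippet is only an illustration under assumed values (an idealised free vortex with speed v = C/r and invented bend radii and constant); it is not taken from the article's sources, but it shows why the flow is faster over the inner, point-bar side of a bend than along the concave bank.

# Illustrative sketch of the free-vortex model discussed above.
# All numbers are hypothetical: a bend whose inner (convex) bank lies at
# r = 20 m, whose outer (concave) bank lies at r = 50 m, and a constant C
# chosen only to give plausible speeds.

C = 50.0          # m^2/s, hypothetical constant of the free vortex v = C / r
r_inner = 20.0    # m, radius near the convex (point-bar) bank
r_outer = 50.0    # m, radius near the concave (cut-bank) side

v_inner = C / r_inner   # fastest flow, smallest radius
v_outer = C / r_outer   # slowest flow, largest radius

print(f"speed near convex bank:  {v_inner:.2f} m/s")
print(f"speed near concave bank: {v_outer:.2f} m/s")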
Physical sciences
Fluvial landforms
Earth science
12312178
https://en.wikipedia.org/wiki/Cut%20bank
Cut bank
A cut bank, also known as a river cliff or river-cut cliff, is the outside bank of a curve (meander) in a water channel (stream), which is continually undergoing erosion. Cut banks are found in abundance along mature or meandering streams; they are located opposite the slip-off slope on the inside of the stream meander. They are shaped much like a small cliff, and are formed as the stream collides with the river bank. A cut bank is the opposite of a point bar, which is an area of deposition of material eroded upstream from a cut bank. Typically, cut banks are steep and may be nearly vertical. Often, particularly during periods of high rainfall and higher-than-average water levels, trees and poorly placed buildings can fall into the stream due to mass wasting events. Given enough time, the combination of erosion along cut banks and deposition along point bars can lead to the formation of an oxbow lake. Not only are cut banks steep and unstable, they are also the part of a stream where the water flows fastest and is often deepest. In geology, this is known as a "high-energy" environment.
Physical sciences
Fluvial landforms
Earth science
12313191
https://en.wikipedia.org/wiki/Limits%20of%20integration
Limits of integration
In calculus and mathematical analysis the limits of integration (or bounds of integration) of the integral of a Riemann integrable function defined on a closed and bounded interval are the real numbers and , in which is called the lower limit and the upper limit. The region that is bounded can be seen as the area inside and . For example, the function is defined on the interval with the limits of integration being and . Integration by Substitution (U-Substitution) In Integration by substitution, the limits of integration will change due to the new function being integrated. With the function that is being derived, and are solved for . In general, where and . Thus, and will be solved in terms of ; the lower bound is and the upper bound is . For example, where and . Thus, and . Hence, the new limits of integration are and . The same applies for other substitutions. Improper integrals Limits of integration can also be defined for improper integrals, with the limits of integration of both and again being a and b. For an improper integral or the limits of integration are a and ∞, or −∞ and b, respectively. Definite Integrals If , then
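The change of limits under u-substitution can be checked symbolically. The short Python (SymPy) sketch below uses an integrand chosen here purely for illustration (it is not necessarily the example intended in the text above): the integral of 2x·cos(x²) from 0 to 2, with the substitution u = x², whose limits become u = 0 and u = 4.

from sympy import symbols, cos, integrate

x, u = symbols('x u')

# Original integral: integral of 2*x*cos(x**2) dx from x = 0 to x = 2
direct = integrate(2*x*cos(x**2), (x, 0, 2))

# Substitution u = x**2, du = 2*x dx; the limits x = 0..2 become u = 0..4
substituted = integrate(cos(u), (u, 0, 4))

assert direct == substituted   # both evaluate to sin(4)
print(direct)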
Mathematics
Integral calculus
null
1697945
https://en.wikipedia.org/wiki/Host%E2%80%93guest%20chemistry
Host–guest chemistry
In supramolecular chemistry, host–guest chemistry describes complexes that are composed of two or more molecules or ions held together in unique structural relationships by forces other than those of full covalent bonds. Host–guest chemistry encompasses the idea of molecular recognition and interactions through non-covalent bonding. Non-covalent bonding is critical in maintaining the 3D structure of large molecules, such as proteins, and is involved in many biological processes in which large molecules bind specifically but transiently to one another. Although non-covalent interactions can be roughly divided into those with more electrostatic or more dispersive character, the commonly cited types of non-covalent interaction are ionic bonding, hydrogen bonding, van der Waals forces and hydrophobic interactions. Host–guest interactions have attracted considerable attention since their discovery, both because many biological processes depend on them and because they are useful in materials design. Typical host molecules include cyclodextrins and crown ethers. Host molecules usually have a pore-like structure that is able to capture a guest molecule. Although called molecules, hosts and guests are often ions. The driving forces of the interaction vary and include the hydrophobic effect and van der Waals forces. Binding between host and guest can be highly selective, in which case the interaction is called molecular recognition. Often, a dynamic equilibrium exists between the unbound and the bound states: H ="host", G ="guest", HG ="host–guest complex" The "host" component is often the larger molecule, and it encloses the smaller, "guest", molecule. In biological systems, the analogous terms for host and guest are enzyme and substrate, respectively. Inclusion and clathrate compounds Closely related to host–guest chemistry are inclusion compounds (also known as inclusion complexes), in which one chemical compound (the "host") has a cavity into which a "guest" compound can be accommodated. The interaction between the host and guest involves purely van der Waals bonding. The definition of inclusion compounds is very broad, extending to channels formed between molecules in a crystal lattice in which guest molecules can fit. Yet another related class of compounds are clathrates, which often consist of a lattice that traps or contains molecules. The word clathrate is derived from the Latin (), meaning 'with bars, latticed'. Molecular encapsulation Molecular encapsulation concerns the confinement of a guest within a larger host. In some cases true host–guest reversibility is observed; in other cases the encapsulated guest cannot escape. An important implication of encapsulation (and of host–guest chemistry in general) is that the guest behaves differently from the way it would in solution. Guest molecules that would react by bimolecular pathways are often stabilized because they cannot combine with other reactants. The spectroscopic signatures of trapped guests are of fundamental interest. Compounds normally highly unstable in solution have been isolated at room temperature when molecularly encapsulated. Examples include cyclobutadiene, arynes and cycloheptatetraene. Large metalla-assemblies, known as metallaprisms, contain a conformationally flexible cavity that allows them to host a variety of guest molecules.
These assemblies have shown promise as agents of drug delivery to cancer cells. Encapsulation can control reactivity. For instance, excited state reactivity of free 1-phenyl-3-tolyl-2-proponanone (abbreviated A-CO-B) yields products A-A, B-B, and AB, which result from decarbonylation followed by random recombination of radicals A• and B•. Whereas, the same substrate upon encapsulation reacts to yield the controlled recombination product A-B, and rearranged products (isomers of A-CO-B). Macrocyclic hosts Organic hosts are occasionally called cavitands. The original definition proposed by Cram includes many classes of molecules: cyclodextrins, calixarenes, pillararenes and cucurbiturils. Calixarenes Calixarenes and related formaldehyde-arene condensates (resorcinarenes and pyrogallolarenes) form a class of hosts that form inclusion compounds. A related family of formaldehyde-derived oligomeric rings are pillararenes (pillered arenes). One famous illustration of the stabilizing effect of host-guest complexation is the stabilization of cyclobutadiene by such an organic host. Cyclodextrins and cucurbiturils Cyclodextrin (CD) are tubular molecules composed of several glucose units connected by ether bonds. The three kinds of CDs, α-CD (6 units), β-CD (7 units), and γ-CD (8 units) differ in their cavity sizes: 5, 6, and 8 Å, respectively. α-CD can thread onto one PEG chain, while γ-CD can thread onto 2 PEG chains. β-CD can bind with thiophene-based molecule. Cyclodextrins are well established hosts for the formation of inclusion compounds. Illustrative is the case of ferrocene which is inserted into the cyclodextrin at 100 °C under hydrothermal conditions. Cucurbiturils are macrocyclic molecules made of glycoluril () monomers linked by methylene bridges (). The oxygen atoms are located along the edges of the band and are tilted inwards, forming a partly enclosed cavity (cavitand). . Cucurbit[n]urils have similar size of γ-CD, which also behave similarly (e.g., 1 cucurbit[n]uril can thread onto 2 PEG chains). Cryptophanes The structure of cryptophanes contain 6 phenyl rings, mainly connected in 4 ways . Due to the phenyl groups and aliphatic chains, the cages inside cryptophanes are highly hydrophobic, suggesting the capability of capturing non-polar molecules. Based on this, cryptophanes can be employed to capture xenon in aqueous solution, which could be helpful in biological studies. Crown ethers and cryptands Crown ethers bind cations. Small crown ethers, e.g. 12-crown-4 bind well to small ions such as Li+ and large crowns, such as 24-crown-8 bind better to larger ions. Beyond binding ionic guests, crown ethers also bind to some neutral molecules, e.g., 1, 2, 3- triazole. Crown ethers can also be threaded with slender linear molecules and/or polymers, giving rise to supramolecular structures called rotaxanes. Given that the crown ethers are not bound to the chains, they can move up and down the threading molecule. Crown ether complexes of metal cations (and the corresponding complexes of Cryptands) are not considered to be inclusion complexes since the guest is bound by forces stronger than van der Waals bonding. Polymeric hosts Zeolites have open framework structures with cavities in which guest species can reside. Aluminosilicates being their composition, zeolites are rigid. Many structures are known, some of which are considerably useful as catalysts and for separations. 
Silica clathrasils are compounds structurally similar to clathrate hydrates but with a SiO2 framework, and can be found in a range of marine sediments. Clathrate compounds with formula A8B16X30, where A is an alkaline earth metal, B is a group III element, and X is an element from group IV, have been explored for thermoelectric devices. Thermoelectric materials follow a design strategy called the phonon glass electron crystal concept. Low thermal conductivity and high electrical conductivity are desired to produce the Seebeck effect. When the guest and host framework are appropriately tuned, clathrates can exhibit low thermal conductivity, i.e., phonon-glass behavior, while electrical conductivity through the host framework is undisturbed, allowing clathrates to exhibit electron-crystal behavior. Hofmann clathrates are coordination polymers with the formula Ni(CN)4·Ni(NH3)2(arene). These materials crystallize with small aromatic guests (benzene, certain xylenes), and this selectivity has been exploited commercially for the separation of these hydrocarbons. Metal organic frameworks (MOFs) also form clathrates. Urea, a small molecule with the formula , has the peculiar property of crystallizing in open but rigid networks. The cost of forgoing efficient molecular packing is compensated by hydrogen bonding. Ribbons of hydrogen-bonded urea molecules form tunnel-like hosts into which many organic guests bind. Urea clathrates have been well investigated for separations. Beyond urea, several other organic molecules form clathrates: thiourea, hydroquinone, and Dianin's compound. Thermodynamics of host-guest interactions When the host and guest molecules combine to form a single complex, the equilibrium is represented as and the equilibrium constant, K, is defined as where [X] denotes the concentration of a chemical species X (all activity coefficients are assumed to have a numerical value of 1). The mass-balance equations at any data point, where and represent the total concentrations of host and guest, can be reduced to a single quadratic equation in, say, [G] and so can be solved analytically for any given value of K. The concentrations [H] and [HG] can then be derived. The next step in the calculation is to calculate the value, , of a quantity corresponding to the quantity observed . Then, a sum of squares, U, over all data points, np, can be defined as and this can be minimized with respect to the stability constant value, K, and a parameter such as the chemical shift of the species HG (NMR data) or its molar absorbance (UV/vis data). The procedure is applicable to 1:1 adducts. Experimental techniques With nuclear magnetic resonance (NMR) spectra, the observed chemical shift value, , arising from a given atom contained in a reagent molecule and one or more complexes of that reagent will be the concentration-weighted average of the shifts of those chemical species. Chemical exchange is assumed to be rapid on the NMR time-scale. Using UV-vis spectroscopy, the absorbance of each species is proportional to the concentration of that species, according to the Beer–Lambert law, where λ is a wavelength, is the optical path length of the cuvette which contains the solution of the N compounds (chromophores), is the molar absorbance (also known as the extinction coefficient) of the ith chemical species at the wavelength λ, and ci is its concentration.
When the concentrations have been calculated as above and absorbance has been measured for samples with various concentrations of host and guest, the Beer–Lambert law provides a set of equations, at a given wavelength, that which can be solved by a linear least-squares process for the unknown extinction coefficient values at that wavelength. Host-guest structures can be probed by their luminescence. A rigid matrix protects emitters from being quenched, extending the lifetime of phosphoresce. In this circumstance, α-CD and CB could be used, in which the phosphor is served as a guest to interact with the host. For example, 4-phenylpyridium derivatives interacted with CB, and copolymerize with acrylamide. The resulting polymer yielded ~2 s of phosphorescence lifetime. Additionally, Zhu et al used crown ether and potassium ion to modify the polymer, and enhance the emission of phosphorescence. Another technique for evaluating host-guest interactions is calorimetry. Aspiration applications Host guest complexation is pervasive in biochemistry. Many protein hosts recognize and hence selectively bind other biomolecules. When the protein host is an enzyme, the guests are called substrates. While these concepts are well established in biological systems, the applications of synthetic host-guest chemistry remains mostly in the realm of aspiration. One major exception, being zeolites where host-guest chemistry is their raison d'etre. Self-healing A self-healing hydrogel constructed from modified cyclodextrin and adamantane . Another strategy is to use the interaction between the polymer backbone and host molecule (host molecule threading onto the polymer). If the threading process is fast enough, self-healing can also be achieved. Encapsulation and release: fragrances and drugs Cyclodextrin forms inclusion compounds with fragrances which are more stable towards exposure to light and air. When incorporated into textiles the fragrance lasts much longer due to the slow-release action. Photolytically-sensitive caged compounds have been examined as containers for releasing a drug or reagent. Encryption An encryption system constructed by pillar[5]arene, spiropyran and pentanenitrile (free state and grafted to polymer) was constructed by Wang et al. After UV irradiation, spiropyran would transform into merocyanine. When the visible light was shined on the material, the merocyanine close to the pillar[5]arene-free pentanenitrile complex had faster transformation to spiropyran; on the contrary, the one close to pillar[5]arene-grafted pentanenitrile complex has much slower transformation rate. This spiropyran-merocyanine transformation can be used for message encryption. Another strategy is based on the metallacages and polycyclic aromatic hydrocarbons. Because of the fluorescnece emission differences between the complex and the cages, the information could be encrypted. Mechanical property Although some host-guest interactions are not strong, increasing the amount of the host-guest interaction can improve the mechanical properties of the materials. As an example, threading the host molecules onto the polymer is one of the commonly used strategies for increasing the mechanical properties of the polymer. It takes time for the host molecules to de-thread from the polymer, which can be a way of energy dissipation. Another method is to use the slow exchange host-guest interaction. Though the slow exchange improves the mechanical properties, simultaneously, self-healing properties will be sacrificed. 
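As a hedged illustration of the 1:1 binding model described in the thermodynamics section above, the following Python sketch solves the mass-balance quadratic for the complex concentration [HG] given totals and an association constant. The numerical values (a 1 mM host, 2 mM guest, and K = 5000 M⁻¹) are invented for the example and are not taken from any particular study.

import math

def complex_concentration(h_total, g_total, K):
    """Solve the 1:1 mass balance H + G <=> HG for x = [HG].

    K * (h_total - x) * (g_total - x) = x is a quadratic in x;
    the physically meaningful root is the smaller one.
    """
    a = K
    b = -(K * h_total + K * g_total + 1.0)
    c = K * h_total * g_total
    x = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return x

# Hypothetical example: 1 mM host, 2 mM guest, K = 5000 M^-1
hg = complex_concentration(1e-3, 2e-3, 5e3)
print(f"[HG] = {hg:.2e} M, [H] = {1e-3 - hg:.2e} M, [G] = {2e-3 - hg:.2e} M")

In a fitting procedure of the kind sketched in the text, this calculation would be repeated for each data point and each trial value of K while the sum of squared residuals is minimized.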
Sensing Silicon surfaces functionalized with tetraphosphonate cavitands have been used to singularly detect sarcosine in water and urine solutions. Traditionally, chemical sensing has been approached with a system that contains an indicator covalently bound to a receptor through a linker. Once the analyte binds, the indicator changes color or fluoresces. This technique is called the indicator-spacer-receptor approach (ISR). In contrast to ISR, an indicator-displacement assay (IDA) utilizes a non-covalent interaction between a receptor (the host), an indicator, and an analyte (the guest). Like ISR, IDA utilizes colorimetric (C-IDA) and fluorescence (F-IDA) indicators. In an IDA assay, a receptor is incubated with the indicator. When the analyte is added to the mixture, the indicator is released to the environment. Once the indicator is released it either changes color (C-IDA) or fluoresces (F-IDA). IDA offers several advantages over the traditional ISR chemical sensing approach. First, it does not require the indicator to be covalently bound to the receptor. Second, since there is no covalent bond, various indicators can be used with the same receptor. Lastly, the media in which the assay may be used are diverse. Chemical sensing techniques such as C-IDA have biological implications. For example, protamine is a coagulant that is routinely administered after cardiopulmonary surgery and that counteracts the anticoagulant activity of heparin. In order to quantify the protamine in plasma samples, a colorimetric displacement assay is used. Azure A dye is blue when it is unbound, but when it is bound to heparin, it shows a purple color. The binding between Azure A and heparin is weak and reversible, which allows protamine to displace Azure A. Once the dye is liberated it reverts to its blue color. The degree to which the dye is displaced is proportional to the amount of protamine in the plasma. F-IDA has been used by Kowalczykowski and co-workers to monitor the activities of helicase in E. coli. In this study they used thiazole orange as the indicator. The helicase unwinds dsDNA to make ssDNA. Thiazole orange has a greater affinity for dsDNA than for ssDNA, and its fluorescence intensity increases when it is bound to dsDNA compared with when it is unbound. Conformational switching A crystalline solid has traditionally been viewed as a static entity in which the movements of its atomic components are limited to vibrations about equilibrium. As seen in the transformation of graphite to diamond, solid-to-solid transformation can occur under physical or chemical pressure. It has been proposed that the transformation from one crystal arrangement to another occurs in a cooperative manner. Most of these studies have focused on organic or metal-organic frameworks. In addition to studies of macromolecular crystalline transformation, there are also studies of single-crystal molecules that can change their conformation in the presence of organic solvents. An organometallic complex has been shown to morph into various orientations depending on whether or not it is exposed to solvent vapors. Environmental applications Host–guest systems have been proposed to remove hazardous materials. Certain calix[4]arenes bind cesium-137 ions, which could in principle be applied to clean up radioactive wastes. Some receptors bind carcinogens.
Alcohol According to food chemist Udo Pollmer of the European Institute of Food and Nutrition Sciences in Munich, alcohol can be molecularly encapsulated in cyclodextrins, a sugar derivative. Encapsulated in small capsules in this way, the fluid can be handled as a powder. The cyclodextrins can absorb an estimated 60 percent of their own weight in alcohol. A US patent was registered for the process as early as 1974.
Physical sciences
Supramolecular chemistry
Chemistry
1698977
https://en.wikipedia.org/wiki/Homogeneous%20polynomial
Homogeneous polynomial
In mathematics, a homogeneous polynomial, sometimes called quantic in older texts, is a polynomial whose nonzero terms all have the same degree. For example, is a homogeneous polynomial of degree 5, in two variables; the sum of the exponents in each term is always 5. The polynomial is not homogeneous, because the sum of exponents does not match from term to term. The function defined by a homogeneous polynomial is always a homogeneous function. An algebraic form, or simply form, is a function defined by a homogeneous polynomial. A binary form is a form in two variables. A form is also a function defined on a vector space, which may be expressed as a homogeneous function of the coordinates over any basis. A polynomial of degree 0 is always homogeneous; it is simply an element of the field or ring of the coefficients, usually called a constant or a scalar. A form of degree 1 is a linear form. A form of degree 2 is a quadratic form. In geometry, the Euclidean distance is the square root of a quadratic form. Homogeneous polynomials are ubiquitous in mathematics and physics. They play a fundamental role in algebraic geometry, as a projective algebraic variety is defined as the set of the common zeros of a set of homogeneous polynomials. Properties A homogeneous polynomial defines a homogeneous function. This means that, if a multivariate polynomial P is homogeneous of degree d, then for every in any field containing the coefficients of P. Conversely, if the above relation is true for infinitely many then the polynomial is homogeneous of degree d. In particular, if P is homogeneous then for every This property is fundamental in the definition of a projective variety. Any nonzero polynomial may be decomposed, in a unique way, as a sum of homogeneous polynomials of different degrees, which are called the homogeneous components of the polynomial. Given a polynomial ring over a field (or, more generally, a ring) K, the homogeneous polynomials of degree d form a vector space (or a module), commonly denoted The above unique decomposition means that is the direct sum of the (sum over all nonnegative integers). The dimension of the vector space (or free module) is the number of different monomials of degree d in n variables (that is the maximal number of nonzero terms in a homogeneous polynomial of degree d in n variables). It is equal to the binomial coefficient Homogeneous polynomial satisfy Euler's identity for homogeneous functions. That is, if is a homogeneous polynomial of degree in the indeterminates one has, whichever is the commutative ring of the coefficients, where denotes the formal partial derivative of with respect to Homogenization A non-homogeneous polynomial P(x1,...,xn) can be homogenized by introducing an additional variable x0 and defining the homogeneous polynomial sometimes denoted hP: where d is the degree of P. For example, if then A homogenized polynomial can be dehomogenized by setting the additional variable x0 = 1. That is
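The homogeneity property P(λx, λy) = λᵈ·P(x, y), Euler's identity for homogeneous functions, and homogenization can all be checked symbolically. The Python (SymPy) sketch below uses an illustrative degree-3 polynomial chosen here, not one taken from the text above.

from sympy import symbols, expand, diff

x, y, x0, lam = symbols('x y x0 lambda')

P = x**3 + 2*x*y**2          # homogeneous of degree d = 3
d = 3

# Homogeneity: P(lam*x, lam*y) == lam**d * P(x, y)
assert expand(P.subs({x: lam*x, y: lam*y}) - lam**d * P) == 0

# Euler's identity: x*dP/dx + y*dP/dy == d*P
assert expand(x*diff(P, x) + y*diff(P, y) - d*P) == 0

# Homogenization of a non-homogeneous polynomial Q using an extra variable x0
Q = x**3 + y + 1             # not homogeneous, degree 3
hQ = x0**3 * Q.subs({x: x/x0, y: y/x0})
print(expand(hQ))            # x**3 + x0**2*y + x0**3, homogeneous of degree 3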
Mathematics
Abstract algebra
null
1699025
https://en.wikipedia.org/wiki/Aerial%20application
Aerial application
Aerial application, or what is informally referred to as crop dusting, involves spraying crops with crop protection products from an agricultural aircraft. Planting certain types of seed are also included in aerial application. The specific spreading of fertilizer is also known as aerial topdressing in some countries. Many countries have severely limited aerial application of pesticides and other products because of environmental and public health hazards like spray drift; most notably, the European Union banned it outright with a few highly restricted exceptions in 2009, effectively ending the practice in all member states. Agricultural aircraft are highly specialized, purpose-built aircraft. Today's agricultural aircraft are often powered by turbine engines of up to and can carry as much as of crop protection product. Helicopters are sometimes used, and some aircraft serve double duty as water bombers in areas prone to wildfires. These aircraft are referred to as SEAT, or "single engine air tankers." History Aerial seed sowing The first known aerial application of agricultural materials was by John Chaytor, who in 1906 spread seed over a swamped valley floor in Wairoa, New Zealand, using a hot air balloon with mobile tethers. Aerial sowing of seed still continues to this day with cover crop applications and rice planting. Crop dusting The first known use of a heavier-than-air machine to disperse products occurred on August 3, 1921. Crop dusting was developed under the joint efforts of the U.S. Department of Agriculture and the U.S. Army Signal Corps' research station at McCook Field in Dayton, Ohio. Under the direction of McCook engineer Etienne Dormoy, a United States Army Air Service Curtiss JN4 Jenny piloted by John A. Macready was modified at McCook Field to spread lead arsenate to kill catalpa sphinx caterpillars at a catalpa farm near Troy, Ohio in the United States. The first test was considered highly successful. The first commercial cropdusting operations began in 1924 in Macon, Georgia by Huff-Daland Crop Dusting, which was co-founded by McCook Field test pilot Lt. Harold R. Harris. Use of insecticide and fungicide for crop dusting slowly spread in the Americas and, to a lesser extent, other nations in the 1930s. The name 'crop dusting' originated here, as actual dust was spread across the crops. Today, aerial applicators use liquid crop protection products in very small doses. Top dressing Aerial topdressing is the aerial application of fertilisers over farmland using agricultural aircraft. It was developed in New Zealand in the 1940s and rapidly adopted elsewhere in the 1950s. Purpose-built aircraft In 1951, Leland Snow designed the first aircraft specifically built for aerial application, the S-1. In 1957, The Grumman G-164 Ag-Cat was the first aircraft designed by a major company for agricultural aviation. Currently, the most common agricultural aircraft are the Air Tractor, Cessna Ag-wagon, Gippsland GA200, Grumman Ag Cat, PZL-106 KRUK, M-18 Dromader, PAC Fletcher, Piper PA-36 Pawnee Brave, Embraer EMB 202 Ipanema, and Rockwell Thrush Commander, but multi-purpose helicopters are also used. Unmanned aerial application Since the late 1990s, unmanned aerial vehicles have also been used for agricultural spraying. This phenomenon started in Japan and South Korea, where mountainous terrain and relatively small family-owned farms required lower-cost and higher-precision spraying. 
The use of UAV crop dusters, such as the Yamaha R-MAX, is being expanded to the United States for use in spraying at vineyards. Concerns The National Institute of Environmental Health Sciences keeps track of relevant research. Historically, there have been concerns about the effects of aerial applications of pesticides and about the chemicals' effects as they spread in the air. For example, the aerial application of mancozeb is likely a source of concern for pregnant women. Bans Since the 1970s, multiple countries have limited or banned the aerial application of pesticides, fertilizers, and other products out of environmental and public health concerns, in particular over spray drift. Most notably, in 2009, the European Union prohibited aerial spraying of pesticides, with a few highly restricted exceptions, in article 9 of Directive 2009/128/EC of the European Parliament and of the Council establishing a framework for Community action to achieve the sustainable use of pesticides, which effectively ended most aerial application in all member states and overseas territories. Guidelines The United States Environmental Protection Agency (EPA) provides guideline documents and hosts webinars about best practices for aerial application. In 2010, the United States Forest Service collected public comments to use within a Draft Environmental Impact Statement (DEIS), which was developed because the Montana Federal District Court ruled that aerial application of fire retardants during wildfires violated the Endangered Species Act.
Technology
Pest and disease control
null
1699214
https://en.wikipedia.org/wiki/Discrete%20uniform%20distribution
Discrete uniform distribution
In probability theory and statistics, the discrete uniform distribution is a symmetric probability distribution wherein each of some finite whole number n of outcome values are equally likely to be observed. Thus every one of the n outcome values has equal probability 1/n. Intuitively, a discrete uniform distribution is "a known, finite number of outcomes all equally likely to happen." A simple example of the discrete uniform distribution comes from throwing a fair six-sided die. The possible values are 1, 2, 3, 4, 5, 6, and each time the die is thrown the probability of each given value is 1/6. If two dice were thrown and their values added, the possible sums would not have equal probability and so the distribution of sums of two dice rolls is not uniform. Although it is common to consider discrete uniform distributions over a contiguous range of integers, such as in this six-sided die example, one can define discrete uniform distributions over any finite set. For instance, the six-sided die could have abstract symbols rather than numbers on each of its faces. Less simply, a random permutation is a permutation generated uniformly randomly from the permutations of a given set and a uniform spanning tree of a graph is a spanning tree selected with uniform probabilities from the full set of spanning trees of the graph. The discrete uniform distribution itself is non-parametric. However, in the common case that its possible outcome values are the integers in an interval , then a and b are parameters of the distribution and In these cases the cumulative distribution function (CDF) of the discrete uniform distribution can be expressed, for any k, as or simply on the distribution's support Estimation of maximum The problem of estimating the maximum of a discrete uniform distribution on the integer interval from a sample of k observations is commonly known as the German tank problem, following the practical application of this maximum estimation problem, during World War II, by Allied forces seeking to estimate German tank production. A uniformly minimum variance unbiased (UMVU) estimator for the distribution's maximum in terms of m, the sample maximum, and k, the sample size, is . This can be seen as a very simple case of maximum spacing estimation. This has a variance of so a standard deviation of approximately , the population-average gap size between samples. The sample maximum itself is the maximum likelihood estimator for the population maximum, but it is biased. If samples from a discrete uniform distribution are not numbered in order but are recognizable or markable, one can instead estimate population size via a mark and recapture method. Random permutation See rencontres numbers for an account of the probability distribution of the number of fixed points of a uniformly distributed random permutation. Properties The family of uniform discrete distributions over ranges of integers with one or both bounds unknown has a finite-dimensional sufficient statistic, namely the triple of the sample maximum, sample minimum, and sample size. Uniform discrete distributions over bounded integer ranges do not constitute an exponential family of distributions because their support varies with their parameters. For families of distributions in which their supports do not depend on their parameters, the Pitman–Koopman–Darmois theorem states that only exponential families have sufficient statistics of dimensions that are bounded as sample size increases. 
The uniform distribution is thus a simple example showing the necessity of the conditions for this theorem.
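A small simulation illustrates the estimation of the maximum discussed above. The Python sketch below assumes the standard German tank problem result, in which m + m/k − 1 (with m the sample maximum and k the sample size, sampling without replacement) is the UMVU estimator of the population maximum, and checks empirically that it is close to unbiased; the specific values of N, k and the number of trials are arbitrary.

import random

def umvu_max_estimate(sample):
    """German tank estimator: m + m/k - 1, with m = max(sample), k = len(sample)."""
    m, k = max(sample), len(sample)
    return m + m / k - 1

N, k, trials = 1000, 10, 20_000   # hypothetical population maximum, sample size, repetitions
random.seed(0)

estimates = []
for _ in range(trials):
    sample = random.sample(range(1, N + 1), k)   # sampling without replacement
    estimates.append(umvu_max_estimate(sample))

print(f"mean estimate over {trials} trials ≈ {sum(estimates) / trials:.1f} (true maximum N = {N})")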
Mathematics
Statistics and probability
null
1699223
https://en.wikipedia.org/wiki/Continuous%20uniform%20distribution
Continuous uniform distribution
In probability theory and statistics, the continuous uniform distributions or rectangular distributions are a family of symmetric probability distributions. Such a distribution describes an experiment where there is an arbitrary outcome that lies between certain bounds. The bounds are defined by the parameters, and which are the minimum and maximum values. The interval can either be closed (i.e. ) or open (i.e. ). Therefore, the distribution is often abbreviated where stands for uniform distribution. The difference between the bounds defines the interval length; all intervals of the same length on the distribution's support are equally probable. It is the maximum entropy probability distribution for a random variable under no constraint other than that it is contained in the distribution's support. Definitions Probability density function The probability density function of the continuous uniform distribution is The values of at the two boundaries and are usually unimportant, because they do not alter the value of over any interval nor of nor of any higher moment. Sometimes they are chosen to be zero, and sometimes chosen to be The latter is appropriate in the context of estimation by the method of maximum likelihood. In the context of Fourier analysis, one may take the value of or to be because then the inverse transform of many integral transforms of this uniform function will yield back the function itself, rather than a function which is equal "almost everywhere", i.e. except on a set of points with zero measure. Also, it is consistent with the sign function, which has no such ambiguity. Any probability density function integrates to so the probability density function of the continuous uniform distribution is graphically portrayed as a rectangle where is the base length and is the height. As the base length increases, the height (the density at any particular value within the distribution boundaries) decreases. In terms of mean and variance the probability density function of the continuous uniform distribution is Cumulative distribution function The cumulative distribution function of the continuous uniform distribution is: Its inverse is: In terms of mean and variance the cumulative distribution function of the continuous uniform distribution is: its inverse is: Example 1. Using the continuous uniform distribution function For a random variable find In a graphical representation of the continuous uniform distribution function the area under the curve within the specified bounds, displaying the probability, is a rectangle. For the specific example above, the base would be and the height would be Example 2. 
Using the continuous uniform distribution function (conditional) For a random variable find The example above is a conditional probability case for the continuous uniform distribution: given that is true, what is the probability that Conditional probability changes the sample space, so a new interval length has to be calculated, where and The graphical representation would still follow Example 1, where the area under the curve within the specified bounds displays the probability; the base of the rectangle would be and the height would be Generating functions Moment-generating function The moment-generating function of the continuous uniform distribution is: from which we may calculate the raw moments For a random variable following the continuous uniform distribution, the expected value is and the variance is For the special case the probability density function of the continuous uniform distribution is: the moment-generating function reduces to the simple form: Cumulant-generating function For the -th cumulant of the continuous uniform distribution on the interval is where is the -th Bernoulli number. Standard uniform distribution The continuous uniform distribution with parameters and i.e. is called the standard uniform distribution. One interesting property of the standard uniform distribution is that if has a standard uniform distribution, then so does This property can be used for generating antithetic variates, among other things. In other words, this property is known as the inversion method where the continuous standard uniform distribution can be used to generate random numbers for any other continuous distribution. If is a uniform random number with standard uniform distribution, i.e. with then generates a random number from any continuous distribution with the specified cumulative distribution function Relationship to other functions As long as the same conventions are followed at the transition points, the probability density function of the continuous uniform distribution may also be expressed in terms of the Heaviside step function as: or in terms of the rectangle function as: There is no ambiguity at the transition point of the sign function. Using the half-maximum convention at the transition points, the continuous uniform distribution may be expressed in terms of the sign function as: Properties Moments The mean (first raw moment) of the continuous uniform distribution is: The second raw moment of this distribution is: In general, the -th raw moment of this distribution is: The variance (second central moment) of this distribution is: Order statistics Let be an i.i.d. sample from and let be the -th order statistic from this sample. has a beta distribution, with parameters and The expected value is: This fact is useful when making Q–Q plots. The variance is: Uniformity The probability that a continuously uniformly distributed random variable falls within any interval of fixed length is independent of the location of the interval itself (but it is dependent on the interval size ), so long as the interval is contained in the distribution's support. Indeed, if and if is a subinterval of with fixed then: which is independent of This fact motivates the distribution's name. Uniform distribution on more general sets The uniform distribution can be generalized to sets more general than intervals. Formally, let be a Borel set of positive, finite Lebesgue measure i.e. 
The uniform distribution on can be specified by defining the probability density function to be zero outside and constantly equal to on An interesting special case is when the set S is a simplex. It is possible to obtain a uniform distribution on the standard n-vertex simplex in the following way.take n independent random variables with the same exponential distribution; denote them by X1,...,Xn; and let Yi := Xi / (sumi Xi). Then, the vector Y1,...,Yn is uniformly distributed on the simplex. Related distributions If X has a standard uniform distribution, then by the inverse transform sampling method, Y = − λ−1 ln(X) has an exponential distribution with (rate) parameter λ. If X has a standard uniform distribution, then Y = Xn has a beta distribution with parameters (1/n,1). As such, The Irwin–Hall distribution is the sum of n i.i.d. U(0,1) distributions. The Bates distribution is the average of n i.i.d. U(0,1) distributions. The standard uniform distribution is a special case of the beta distribution, with parameters (1,1). The sum of two independent uniform distributions U1(a,b)+U2(c,d) yields a trapezoidal distribution, symmetric about its mean, on the support [a+c,b+d]. The plateau has width equals to the absolute different of the width of U1 and U2. The width of the sloped parts corresponds to the width of the narrowest uniform distribution. If the uniform distributions have the same width w, the result is a triangular distribution, symmetric about its mean, on the support [a+c,a+c+2w]. The sum of two independent, equally distributed, uniform distributions U1(a,b)+U2(a,b) yields a symmetric triangular distribution on the support [2a,2b]. The distance between two i.i.d. uniform random variables |U1(a,b)-U2(a,b)| also has a triangular distribution, although not symmetric, on the support [0,b-a]. Statistical inference Estimation of parameters Estimation of maximum Minimum-variance unbiased estimator Given a uniform distribution on with unknown the minimum-variance unbiased estimator (UMVUE) for the maximum is: where is the sample maximum and is the sample size, sampling without replacement (though this distinction almost surely makes no difference for a continuous distribution). This follows for the same reasons as estimation for the discrete distribution, and can be seen as a very simple case of maximum spacing estimation. This problem is commonly known as the German tank problem, due to application of maximum estimation to estimates of German tank production during World War II. Method of moment estimator The method of moments estimator is: where is the sample mean. Maximum likelihood estimator The maximum likelihood estimator is: where is the sample maximum, also denoted as the maximum order statistic of the sample. Estimation of minimum Given a uniform distribution on with unknown a, the maximum likelihood estimator for a is: , the sample minimum. Estimation of midpoint The midpoint of the distribution, is both the mean and the median of the uniform distribution. Although both the sample mean and the sample median are unbiased estimators of the midpoint, neither is as efficient as the sample mid-range, i.e. the arithmetic mean of the sample maximum and the sample minimum, which is the UMVU estimator of the midpoint (and also the maximum likelihood estimate). Confidence interval For the maximum Let be a sample from where is the maximum value in the population. 
Then has the Lebesgue-Borel-density where is the indicator function of The confidence interval given before is mathematically incorrect, as cannot be solved for without knowledge of . However, one can solve for for any unknown but valid one then chooses the smallest possible satisfying the condition above. Note that the interval length depends upon the random variable Occurrence and applications The probabilities for uniform distribution function are simple to calculate due to the simplicity of the function form. Therefore, there are various applications that this distribution can be used for as shown below: hypothesis testing situations, random sampling cases, finance, etc. Furthermore, generally, experiments of physical origin follow a uniform distribution (e.g. emission of radioactive particles). However, it is important to note that in any application, there is the unchanging assumption that the probability of falling in an interval of fixed length is constant. Economics example for uniform distribution In the field of economics, usually demand and replenishment may not follow the expected normal distribution. As a result, other distribution models are used to better predict probabilities and trends such as Bernoulli process. But according to Wanke (2008), in the particular case of investigating lead-time for inventory management at the beginning of the life cycle when a completely new product is being analyzed, the uniform distribution proves to be more useful. In this situation, other distribution may not be viable since there is no existing data on the new product or that the demand history is unavailable so there isn't really an appropriate or known distribution. The uniform distribution would be ideal in this situation since the random variable of lead-time (related to demand) is unknown for the new product but the results are likely to range between a plausible range of two values. The lead-time would thus represent the random variable. From the uniform distribution model, other factors related to lead-time were able to be calculated such as cycle service level and shortage per cycle. It was also noted that the uniform distribution was also used due to the simplicity of the calculations. Sampling from an arbitrary distribution The uniform distribution is useful for sampling from arbitrary distributions. A general method is the inverse transform sampling method, which uses the cumulative distribution function (CDF) of the target random variable. This method is very useful in theoretical work. Since simulations using this method require inverting the CDF of the target variable, alternative methods have been devised for the cases where the CDF is not known in closed form. One such method is rejection sampling. The normal distribution is an important example where the inverse transform method is not efficient. However, there is an exact method, the Box–Muller transformation, which uses the inverse transform to convert two independent uniform random variables into two independent normally distributed random variables. Quantization error In analog-to-digital conversion, a quantization error occurs. This error is either due to rounding or truncation. When the original signal is much larger than one least significant bit (LSB), the quantization error is not significantly correlated with the signal, and has an approximately uniform distribution. The RMS error therefore follows from the variance of this distribution. 
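The uniform model of quantization error mentioned above can be checked with a short simulation. This Python sketch, with an arbitrary step size and test signal chosen only for illustration, compares the measured RMS rounding error with the value predicted from the variance of a uniform distribution of width Δ, namely Δ/√12.

import math
import random

delta = 0.01                      # hypothetical quantization step (1 LSB)
random.seed(1)

# A test signal much larger than one LSB, so the uniform error model applies
samples = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
errors = [x - delta * round(x / delta) for x in samples]   # rounding error per sample

rms_measured = math.sqrt(sum(e * e for e in errors) / len(errors))
rms_predicted = delta / math.sqrt(12)                      # std. dev. of U(-delta/2, delta/2)

print(f"measured RMS error  ≈ {rms_measured:.6f}")
print(f"predicted delta/√12 = {rms_predicted:.6f}")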
Random variate generation There are many applications in which it is useful to run simulation experiments. Many programming languages come with implementations to generate pseudo-random numbers which are effectively distributed according to the standard uniform distribution. On the other hand, the uniformly distributed numbers are often used as the basis for non-uniform random variate generation. If is a value sampled from the standard uniform distribution, then the value follows the uniform distribution parameterized by and as described above. History While the historical origins in the conception of uniform distribution are inconclusive, it is speculated that the term "uniform" arose from the concept of equiprobability in dice games (note that the dice games would have discrete and not continuous uniform sample space). Equiprobability was mentioned in Gerolamo Cardano's Liber de Ludo Aleae, a manual written in 16th century and detailed on advanced probability calculus in relation to dice.
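The variate-generation recipe described above, taking a standard uniform sample u and mapping it through a + (b − a)u, or through an inverse CDF for another target distribution, can be sketched in a few lines of Python; the parameters below are arbitrary and only illustrative.

import math
import random

random.seed(42)

def uniform_ab(a, b):
    """Sample U(a, b) from a standard uniform variate."""
    u = random.random()          # standard uniform on [0, 1)
    return a + (b - a) * u

def exponential(rate):
    """Inverse transform sampling: F^-1(u) = -ln(1 - u) / rate."""
    u = random.random()
    return -math.log(1.0 - u) / rate

draws_uniform = [uniform_ab(2.0, 5.0) for _ in range(100_000)]
draws_exp = [exponential(0.5) for _ in range(100_000)]

print(f"sample mean of U(2, 5)       ≈ {sum(draws_uniform) / len(draws_uniform):.3f}  (theory: 3.5)")
print(f"sample mean of Exp(rate=0.5) ≈ {sum(draws_exp) / len(draws_exp):.3f}  (theory: 2.0)")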
Mathematics
Probability
null
1701075
https://en.wikipedia.org/wiki/Sequence%20space
Sequence space
In functional analysis and related areas of mathematics, a sequence space is a vector space whose elements are infinite sequences of real or complex numbers. Equivalently, it is a function space whose elements are functions from the natural numbers to the field K of real or complex numbers. The set of all such functions is naturally identified with the set of all possible infinite sequences with elements in K, and can be turned into a vector space under the operations of pointwise addition of functions and pointwise scalar multiplication. All sequence spaces are linear subspaces of this space. Sequence spaces are typically equipped with a norm, or at least the structure of a topological vector space. The most important sequence spaces in analysis are the spaces, consisting of the -power summable sequences, with the p-norm. These are special cases of Lp spaces for the counting measure on the set of natural numbers. Other important classes of sequences like convergent sequences or null sequences form sequence spaces, respectively denoted c and c0, with the sup norm. Any sequence space can also be equipped with the topology of pointwise convergence, under which it becomes a special kind of Fréchet space called FK-space. Definition A sequence in a set is just an -valued map whose value at is denoted by instead of the usual parentheses notation Space of all sequences Let denote the field either of real or complex numbers. The set of all sequences of elements of is a vector space for componentwise addition and componentwise scalar multiplication A sequence space is any linear subspace of As a topological space, is naturally endowed with the product topology. Under this topology, is Fréchet, meaning that it is a complete, metrizable, locally convex topological vector space (TVS). However, this topology is rather pathological: there are no continuous norms on (and thus the product topology cannot be defined by any norm). Among Fréchet spaces, is minimal in having no continuous norms: But the product topology is also unavoidable: does not admit a strictly coarser Hausdorff, locally convex topology. For that reason, the study of sequences begins by finding a strict linear subspace of interest, and endowing it with a topology different from the subspace topology. spaces For is the subspace of consisting of all sequences satisfying If then the real-valued function on defined by defines a norm on In fact, is a complete metric space with respect to this norm, and therefore is a Banach space. If then is also a Hilbert space when endowed with its canonical inner product, called the , defined for all by The canonical norm induced by this inner product is the usual -norm, meaning that for all If then is defined to be the space of all bounded sequences endowed with the norm is also a Banach space. If then does not carry a norm, but rather a metric defined by c, c0 and c00 A is any sequence such that exists. The set of all convergent sequences is a vector subspace of called the . Since every convergent sequence is bounded, is a linear subspace of Moreover, this sequence space is a closed subspace of with respect to the supremum norm, and so it is a Banach space with respect to this norm. A sequence that converges to is called a and is said to . The set of all sequences that converge to is a closed vector subspace of that when endowed with the supremum norm becomes a Banach space that is denoted by and is called the or the . 
The space c00 is the subspace of c0 consisting of all sequences which have only finitely many nonzero elements. This is not a closed subspace and therefore is not a Banach space with respect to the infinity norm. For example, the sequence (x^(k))k, where x^(k)n = 1/n for the first k entries (for n = 1, ..., k) and is zero everywhere else (that is, x^(k) = (1, 1/2, ..., 1/(k−1), 1/k, 0, 0, ...)), is a Cauchy sequence but it does not converge to a sequence in c00. Space of all finite sequences Let K^∞ denote the space of finite sequences over K. As a vector space, K^∞ is equal to c00, but K^∞ has a different topology. For every natural number n, let K^n denote the usual Euclidean space endowed with the Euclidean topology and let In : K^n → K^∞ denote the canonical inclusion In(x1, ..., xn) = (x1, ..., xn, 0, 0, ...). The image of each inclusion is Im(In) = {(x1, ..., xn, 0, 0, ...) : x1, ..., xn ∈ K}, and consequently K^∞ = ⋃n∈ℕ Im(In). This family of inclusions gives K^∞ a final topology τ∞, defined to be the finest topology on K^∞ such that all the inclusions are continuous (an example of a coherent topology). With this topology, K^∞ becomes a complete, Hausdorff, locally convex, sequential, topological vector space that is Fréchet–Urysohn. The topology τ∞ is also strictly finer than the subspace topology induced on K^∞ by K^ℕ. Convergence in τ∞ has a natural description: if v ∈ K^∞ and (v^(k)) is a sequence in K^∞, then v^(k) → v in τ∞ if and only if the sequence is eventually contained in a single image Im(In) and converges to v under the natural topology of that image. Often, each image Im(In) is identified with the corresponding K^n; explicitly, the elements (x1, ..., xn) ∈ K^n and (x1, ..., xn, 0, 0, ...) are identified. This is facilitated by the fact that the subspace topology on Im(In), the quotient topology from the map In, and the Euclidean topology on K^n all coincide. With this identification, (K^∞, τ∞) is the direct limit of the directed system of spaces K^1 → K^2 → K^3 → ⋯, where every inclusion adds trailing zeros. This shows (K^∞, τ∞) is an LB-space. Other sequence spaces The space of bounded series, denoted by bs, is the space of sequences x for which supn |x1 + x2 + ⋯ + xn| < ∞. This space, when equipped with the norm ||x||bs = supn |x1 + x2 + ⋯ + xn|, is a Banach space isometrically isomorphic to ℓ∞ via the linear mapping that sends a sequence to its sequence of partial sums. The subspace cs consisting of all convergent series is a subspace that goes over to the space c under this isomorphism. The space Φ or c00 is defined to be the space of all infinite sequences with only a finite number of non-zero terms (sequences with finite support). This set is dense in many sequence spaces. Properties of ℓp spaces and the space c0 The space ℓ2 is the only ℓp space that is a Hilbert space, since any norm that is induced by an inner product should satisfy the parallelogram law ||x + y||² + ||x − y||² = 2||x||² + 2||y||². Substituting two distinct unit vectors for x and y directly shows that the identity is not true unless p = 2. Each ℓp is distinct, in that ℓp is a strict subset of ℓs whenever p < s; furthermore, ℓp is not linearly isomorphic to ℓs when p ≠ s. In fact, by Pitt's theorem, every bounded linear operator from ℓs to ℓp is compact when p < s. No such operator can be an isomorphism; and further, it cannot be an isomorphism on any infinite-dimensional subspace of ℓs, and is thus said to be strictly singular. If 1 < p < ∞, then the (continuous) dual space of ℓp is isometrically isomorphic to ℓq, where q is the Hölder conjugate of p: 1/p + 1/q = 1. The specific isomorphism associates to an element x of ℓq the functional Lx(y) = ∑n xn yn for y in ℓp. Hölder's inequality implies that Lx is a bounded linear functional on ℓp, and in fact |Lx(y)| ≤ ||x||q ||y||p, so that the operator norm satisfies ||Lx|| ≤ ||x||q. In fact, taking y to be the element of ℓp with yn = sgn(xn)|xn|^(q−1)/||x||q^(q−1) (for x ≠ 0) gives Lx(y) = ||x||q with ||y||p = 1, so that in fact ||Lx|| = ||x||q. Conversely, given a bounded linear functional L on ℓp, the sequence x defined by xn = L(en) lies in ℓq. Thus the mapping κp : x ↦ Lx gives an isometry from ℓq onto the dual of ℓp. The map obtained by composing κp with the inverse of its transpose coincides with the canonical injection of ℓq into its double dual. As a consequence ℓq is a reflexive space.
By abuse of notation, it is typical to identify ℓq with the dual of ℓp: (ℓp)* = ℓq. Then reflexivity is understood by the sequence of identifications (ℓp)** = (ℓq)* = ℓp. The space c0 is defined as the space of all sequences converging to zero, with norm identical to ||x||∞. It is a closed subspace of ℓ∞, hence a Banach space. The dual of c0 is ℓ1; the dual of ℓ1 is ℓ∞. For the case of the natural numbers as index set, the spaces ℓp and c0 are separable, with the sole exception of ℓ∞. The dual of ℓ∞ is the ba space. The spaces c0 and ℓp (for 1 ≤ p < ∞) have a canonical unconditional Schauder basis {ei | i = 1, 2, ...}, where ei is the sequence which is zero but for a 1 in the i-th entry. The space ℓ1 has the Schur property: in ℓ1, any sequence that is weakly convergent is also strongly convergent. However, since the weak topology on infinite-dimensional spaces is strictly weaker than the strong topology, there are nets in ℓ1 that are weakly convergent but not strongly convergent. The ℓp spaces can be embedded into many Banach spaces. The question of whether every infinite-dimensional Banach space contains an isomorph of some ℓp or of c0 was answered negatively by B. S. Tsirelson's construction of Tsirelson space in 1974. The dual statement, that every separable Banach space is linearly isometric to a quotient space of ℓ1, was answered in the affirmative. That is, for every separable Banach space X, there exists a quotient map Q : ℓ1 → X, so that X is isomorphic to ℓ1 / ker Q. In general, ker Q is not complemented in ℓ1, that is, there does not exist a subspace Y of ℓ1 such that ℓ1 = Y ⊕ ker Q. In fact, ℓ1 has uncountably many uncomplemented subspaces that are not isomorphic to one another (for example, take X = ℓp for the uncountably many values of p; since there are uncountably many such X's, and since no ℓp is isomorphic to any other, there are thus uncountably many ker Q's). Except for the trivial finite-dimensional case, an unusual feature of ℓp is that it is not polynomially reflexive. ℓp spaces are increasing in p For p ∈ [1, ∞], the spaces ℓp are increasing in p, with the inclusion operator being continuous: for p < q, one has ||x||q ≤ ||x||p. Indeed, the inequality is homogeneous in the xn, so it is sufficient to prove it under the assumption that ||x||p = 1. In this case, we need only show that ∑n |xn|^q ≤ 1 for q > p. But if ||x||p = 1, then |xn| ≤ 1 for all n, and then ∑n |xn|^q ≤ ∑n |xn|^p = 1. ℓ2 is isomorphic to all separable, infinite dimensional Hilbert spaces Let H be a separable Hilbert space. Every orthogonal set in H is at most countable (i.e. H has finite dimension or dimension ℵ0). The following two items are related: if H is infinite dimensional, then it is isomorphic to ℓ2; if dim(H) = n < ∞, then H is isomorphic to K^n. Properties of ℓ1 spaces A sequence of elements in ℓ1 converges in the space of complex sequences ℓ1 if and only if it converges weakly in this space. If K is a subset of this space, then the following are equivalent: K is compact; K is weakly compact; K is bounded, closed, and equismall at infinity. Here K being equismall at infinity means that for every ε > 0, there exists a natural number n0 such that ∑n≥n0 |sn| < ε for all s = (sn) ∈ K.
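The inclusion ℓp ⊂ ℓs for p < s, and the accompanying inequality ||x||s ≤ ||x||p proved above, can be checked numerically. The following Python sketch is an illustration added here, not part of the article; the helper name lp_norm and the truncation length are arbitrary choices. It evaluates p-norms of a truncation of the sequence (1/n) for increasing p.

# Minimal numerical illustration of the inclusion l^p ⊂ l^s for p < s:
# for a fixed sequence, the p-norm is non-increasing as p grows.
# The sequence x_n = 1/n (truncated to N terms) lies in l^p for every p > 1.

def lp_norm(x, p):
    """Return the l^p norm of a finite sequence x, for 1 <= p < infinity."""
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

x = [1.0 / n for n in range(1, 10001)]  # truncation of the sequence (1/n)

for p in (1.0, 1.5, 2.0, 3.0, 10.0):
    print(f"p = {p:4.1f}   ||x||_p = {lp_norm(x, p):.6f}")

# The printed norms decrease as p grows, consistent with ||x||_s <= ||x||_p
# for p < s; the sup norm ||x||_inf = max |x_n| = 1 is the limiting value.

The decreasing printout mirrors the argument above: once the sequence is scaled to have p-norm one, every entry has modulus at most one, so raising the exponent can only shrink the sum.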
Mathematics
Mathematical analysis
null
7657737
https://en.wikipedia.org/wiki/Wet%20lab
Wet lab
A wet lab, or experimental lab, is a type of laboratory where it is necessary to handle various types of chemicals and potential "wet" hazards, so the room has to be carefully designed, constructed, and controlled to avoid spillage and contamination. A dry lab might have large experimental equipment but minimal chemicals, or instruments for analyzing data produced elsewhere. Overview A wet lab is a type of laboratory in which a wide range of experiments are performed, for example, characterizing enzymes in biology, titration in chemistry, diffraction of light in physics, etc. - all of which may sometimes involve dealing with hazardous substances. Due to the nature of these experiments, the proper arrangement of safety equipment is of great importance. The researchers (the occupants) are required to know basic laboratory techniques, including safety procedures and techniques related to the experiments that they perform. Laboratory design At present, lab design tends to focus on increasing interaction between researchers through the use of open plans, allowing the space and opportunity for researchers to exchange ideas, share equipment, and share storage space, which increases the productivity and efficiency of experiments. This style of design has been proposed to support team-based work, though more compartmentalised or individual spaces are still important for some types of processes which require separate or isolated space, such as electron microscopes, tissue cultures, and work or workers that may be disturbed by noise levels. Flexibility of laboratory design should also be promoted: for example, walls and ceilings should be removable in case of expansion or contraction, and pipes, tubes and fume hoods should also be removable for future expansion, reallocation and change of use. A well thought-through design will ensure that a lab can be adjusted for any future use. The sustainability of resources is also a concern, so the amount of resources and energy used in the lab should be reduced where possible to limit environmental impact while still yielding the same results. As a laboratory consists of many areas such as wet lab, dry lab and office areas, wet labs should be separated from other spaces using controlling devices or dividers to prevent cross-contamination or spillage. Due to the nature of processes used in wet labs, the environmental conditions may need to be carefully considered and controlled using a cleanroom system.
Physical sciences
Basics: General
Chemistry
7661439
https://en.wikipedia.org/wiki/Argon%20fluorohydride
Argon fluorohydride
Argon fluorohydride (systematically named fluoridohydridoargon) or argon hydrofluoride is an inorganic compound with the chemical formula HArF (also written ArHF). It is a compound of the chemical element argon. Discovery The discovery of this argon compound is credited to a group of Finnish scientists, led by Markku Räsänen. On 24 August 2000, in the journal Nature, they announced their discovery of argon fluorohydride. This discovery led to the recognition that argon could form weakly bound compounds, even though it was not the first compound made with a noble gas. Synthesis This chemical was synthesized by mixing argon and hydrogen fluoride on a caesium iodide surface at 8 K (−265 °C) and exposing the mixture to ultraviolet radiation, which caused the gases to combine. The infrared spectrum of the resulting gas mixture shows that it definitely contains chemical bonds, albeit very weak ones; thus, it is argon fluorohydride, and not a supermolecule or a mixture of argon and hydrogen fluoride. Its chemical bonds are stable only if the substance is kept at temperatures below 27 K (−246 °C); upon warming, it decomposes into argon and hydrogen fluoride.
Physical sciences
Noble gas compounds
Chemistry
670376
https://en.wikipedia.org/wiki/Power-flow%20study
Power-flow study
In power engineering, the power-flow study, or load-flow study, is a numerical analysis of the flow of electric power in an interconnected system. A power-flow study usually uses simplified notations such as a one-line diagram and per-unit system, and focuses on various aspects of AC power parameters, such as voltage magnitudes, voltage angles, real power and reactive power. It analyzes the power system in normal steady-state operation. Power-flow or load-flow studies are important for planning future expansion of power systems as well as in determining the best operation of existing systems. The principal information obtained from the power-flow study is the magnitude and phase angle of the voltage at each bus, and the real and reactive power flowing in each line. Commercial power systems are usually too complex to allow for hand solution of the power flow. Special-purpose network analyzers were built between 1929 and the early 1960s to provide laboratory-scale physical models of power systems. Large-scale digital computers replaced the analog methods with numerical solutions. In addition to a power-flow study, computer programs perform related calculations such as short-circuit fault analysis, stability studies (transient and steady-state), unit commitment and economic dispatch. In particular, some programs use linear programming to find the optimal power flow, the conditions which give the lowest cost per kilowatt hour delivered. A load-flow study is especially valuable for a system with multiple load centers, such as a refinery complex. The power-flow study is an analysis of the system's capability to adequately supply the connected load. The total system losses, as well as individual line losses, are also tabulated. Transformer tap positions are selected to ensure the correct voltage at critical locations such as motor control centers. Performing a load-flow study on an existing system provides insight and recommendations as to the system operation and optimization of control settings to obtain maximum capacity while minimizing operating costs. The results of such an analysis are in terms of active power, reactive power, voltage magnitude and phase angle. Furthermore, power-flow computations are crucial for optimal operation of groups of generating units. In terms of its approach to uncertainties, load-flow studies can be divided into deterministic load flow and uncertainty-concerned load flow. A deterministic load-flow study does not take into account the uncertainties arising from both power generation and load behavior. To take the uncertainties into consideration, several approaches have been used, such as probabilistic, possibilistic, information gap decision theory, robust optimization, and interval analysis. Model An alternating current power-flow model is a model used in electrical engineering to analyze power grids. It provides a nonlinear system of equations which describes the energy flow through each transmission line. The problem is non-linear because the power flow into load impedances is a function of the square of the applied voltages. Due to nonlinearity, in many cases the analysis of a large network via the AC power-flow model is not feasible, and a linear (but less accurate) DC power-flow model is used instead. Usually analysis of a three-phase power system is simplified by assuming balanced loading of all three phases.
Sinusoidal steady-state operation is assumed, with no transient changes in power flow or voltage due to load or generation changes, meaning all current and voltage waveforms are sinusoidal with no DC offset and have the same constant frequency. The previous assumption is the same as assuming the power system is linear time-invariant (even though the system of equations is nonlinear), driven by sinusoidal sources of the same frequency, and operating in steady state, which allows the use of phasor analysis, another simplification. A further simplification is to use the per-unit system to represent all voltages, power flows, and impedances, scaling the actual target system values to some convenient base. A system one-line diagram is the basis to build a mathematical model of the generators, loads, buses, and transmission lines of the system, and their electrical impedances and ratings. Power-flow problem formulation The goal of a power-flow study is to obtain complete voltage angle and magnitude information for each bus in a power system for specified load and generator real power and voltage conditions. Once this information is known, real and reactive power flow on each branch as well as generator reactive power output can be analytically determined. Due to the nonlinear nature of this problem, numerical methods are employed to obtain a solution that is within an acceptable tolerance. The solution to the power-flow problem begins with identifying the known and unknown variables in the system. The known and unknown variables are dependent on the type of bus. A bus without any generators connected to it is called a Load Bus. With one exception, a bus with at least one generator connected to it is called a Generator Bus. The exception is one arbitrarily-selected bus that has a generator; this bus is referred to as the slack bus. In the power-flow problem, it is assumed that the real power and reactive power at each Load Bus are known. For this reason, Load Buses are also known as PQ Buses. For Generator Buses, it is assumed that the real power generated and the voltage magnitude are known. For the Slack Bus, it is assumed that the voltage magnitude and voltage phase are known. Therefore, for each Load Bus, both the voltage magnitude and angle are unknown and must be solved for; for each Generator Bus, the voltage angle must be solved for; for the Slack Bus, there are no variables that must be solved for. In a system with N buses and R generators, there are then 2(N − 1) − (R − 1) unknowns. In order to solve for the unknowns, there must be 2(N − 1) − (R − 1) equations that do not introduce any new unknown variables. The possible equations to use are power balance equations, which can be written for real and reactive power for each bus. The real power balance equation is 0 = −Pi + Σ_{k=1..N} |Vi| |Vk| (Gik cos θik + Bik sin θik), where Pi is the net active power injected at bus i, Gik is the real part of the element in the bus admittance matrix YBUS corresponding to the i-th row and k-th column, Bik is the imaginary part of the corresponding element of YBUS, and θik is the difference in voltage angle between the i-th and k-th buses (θik = δi − δk). The reactive power balance equation is 0 = −Qi + Σ_{k=1..N} |Vi| |Vk| (Gik sin θik − Bik cos θik), where Qi is the net reactive power injected at bus i. Equations included are the real and reactive power balance equations for each Load Bus and the real power balance equation for each Generator Bus. Only the real power balance equation is written for a Generator Bus because the net reactive power injected is assumed to be unknown and therefore including the reactive power balance equation would result in an additional unknown variable.
For similar reasons, there are no equations written for the Slack Bus. In many transmission systems, the impedance of the power network lines is primarily inductive, i.e. the phase angles of the line impedances are usually relatively large and very close to 90 degrees. There is thus a strong coupling between real power and voltage angle, and between reactive power and voltage magnitude, while the coupling between real power and voltage magnitude, as well as between reactive power and voltage angle, is weak. As a result, real power is usually transmitted from the bus with the higher voltage angle to the bus with the lower voltage angle, and reactive power is usually transmitted from the bus with the higher voltage magnitude to the bus with the lower voltage magnitude. However, this approximation does not hold when the phase angle of the power line impedance is relatively small. Newton–Raphson solution method There are several different methods of solving the resulting nonlinear system of equations. The most popular is a variation of the Newton–Raphson method. The Newton–Raphson method is an iterative method which begins with initial guesses of all unknown variables (voltage magnitude and angles at Load Buses and voltage angles at Generator Buses). Next, a Taylor series is written, with the higher order terms ignored, for each of the power balance equations included in the system of equations. The result is a linear system of equations that can be expressed as [Δθ, Δ|V|]ᵀ = −J⁻¹ [ΔP, ΔQ]ᵀ, where ΔP and ΔQ are called the mismatch equations, ΔPi = −Pi + Σ_{k=1..N} |Vi| |Vk| (Gik cos θik + Bik sin θik) and ΔQi = −Qi + Σ_{k=1..N} |Vi| |Vk| (Gik sin θik − Bik cos θik), and J is a matrix of partial derivatives known as a Jacobian, J = [∂ΔP/∂θ, ∂ΔP/∂|V|; ∂ΔQ/∂θ, ∂ΔQ/∂|V|]. The linearized system of equations is solved to determine the next guess (m + 1) of voltage magnitude and angles based on θ^(m+1) = θ^(m) + Δθ and |V|^(m+1) = |V|^(m) + Δ|V|. The process continues until a stopping condition is met. A common stopping condition is to terminate if the norm of the mismatch equations is below a specified tolerance. A rough outline of solution of the power-flow problem is: Make an initial guess of all unknown voltage magnitudes and angles. It is common to use a "flat start" in which all voltage angles are set to zero and all voltage magnitudes are set to 1.0 p.u. Solve the power balance equations using the most recent voltage angle and magnitude values. Linearize the system around the most recent voltage angle and magnitude values. Solve for the change in voltage angle and magnitude. Update the voltage magnitude and angles. Check the stopping conditions; if met, then terminate, else go to step 2. Other power-flow methods Gauss–Seidel method: This is the earliest devised method. It shows slower rates of convergence compared to other iterative methods, but it uses very little memory and does not need to solve a matrix system. Fast-decoupled-load-flow method: a variation on Newton–Raphson that exploits the approximate decoupling of active and reactive flows in well-behaved power networks, and additionally fixes the value of the Jacobian during the iteration in order to avoid costly matrix decompositions. Also referred to as "fixed-slope, decoupled NR". Within the algorithm, the Jacobian matrix gets inverted only once, and there are three assumptions: firstly, the conductance between the buses is zero; secondly, the magnitude of the bus voltage is one per unit; thirdly, the sine of the phase difference between buses is zero. Fast decoupled load flow can return the answer within seconds, whereas the Newton–Raphson method takes much longer. This is useful for real-time management of power grids.
Holomorphic embedding load flow method: A recently developed method based on advanced techniques of complex analysis. It is direct and guarantees the calculation of the correct (operative) branch, out of the multiple solutions present in the power-flow equations. Backward-Forward Sweep (BFS) method: A method developed to take advantage of the radial structure of most modern distribution grids. It involves choosing an initial voltage profile and separating the original system of equations of grid components into two separate systems, then solving one using the latest results of the other, until convergence is achieved. Solving for the currents with the voltages given is called the backward sweep (BS), and solving for the voltages with the currents given is called the forward sweep (FS). Laurent Power Flow (LPF) method: A power-flow formulation that guarantees uniqueness of the solution and independence from initial conditions for electrical distribution systems. The LPF is based on the current injection method (CIM) and applies the Laurent series expansion. The main characteristics of this formulation are its proven numerical convergence and stability, and its computational advantages, having been shown to be at least ten times faster than the BFS method in both balanced and unbalanced networks. Since it is based on the system's admittance matrix, the formulation is able to consider radial and meshed network topologies without additional modifications (contrary to the compensation-based BFS). The simplicity and computational efficiency of the LPF method make it an attractive option for recursive power-flow problems, such as those encountered in time-series analyses, metaheuristics, probabilistic analysis, reinforcement learning applied to power systems, and other related applications. DC power-flow Direct current load flow gives estimates of line power flows on AC power systems. Direct current load flow looks only at active power flows and neglects reactive power flows. This method is non-iterative and absolutely convergent but less accurate than AC load flow solutions. Direct current load flow is used wherever repetitive and fast load flow estimations are required.
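As a rough illustration of the Newton–Raphson procedure outlined above, the following Python sketch solves a hypothetical two-bus case: bus 1 is the slack bus and bus 2 is a PQ load bus. The line impedance, load values, flat start and tolerance are made-up assumptions, and a finite-difference Jacobian is used instead of analytic partial derivatives to keep the sketch short; it is not a production power-flow implementation.

# Illustrative sketch (assumed data, not from the article): Newton-Raphson
# power flow for a toy 2-bus system. Bus 1 is the slack bus (|V1| = 1.0 pu,
# angle 0); bus 2 is a PQ (load) bus. All quantities are in per-unit.
import numpy as np

z = 0.01 + 0.05j                       # assumed series impedance of the line, pu
y = 1 / z
Y = np.array([[y, -y], [-y, y]])       # bus admittance matrix Ybus
G, B = Y.real, Y.imag

P2_spec, Q2_spec = -0.8, -0.3          # assumed net injections at bus 2 (a load)
V = np.array([1.0, 1.0])               # voltage magnitudes, flat start
th = np.array([0.0, 0.0])              # voltage angles (rad), flat start

def calc_PQ(V, th, i):
    """Net injected P and Q at bus i from the power balance equations."""
    dth = th[i] - th
    P = V[i] * np.sum(V * (G[i] * np.cos(dth) + B[i] * np.sin(dth)))
    Q = V[i] * np.sum(V * (G[i] * np.sin(dth) - B[i] * np.cos(dth)))
    return P, Q

for it in range(20):
    P2, Q2 = calc_PQ(V, th, 1)
    mism = np.array([P2_spec - P2, Q2_spec - Q2])   # mismatch vector
    if np.max(np.abs(mism)) < 1e-8:                 # stopping condition
        break
    # Numerical Jacobian d[P2, Q2]/d[th2, |V2|] by finite differences
    J = np.zeros((2, 2))
    eps = 1e-6
    for col, (dth2, dV2) in enumerate([(eps, 0.0), (0.0, eps)]):
        P2p, Q2p = calc_PQ(V + np.array([0.0, dV2]), th + np.array([0.0, dth2]), 1)
        J[:, col] = [(P2p - P2) / eps, (Q2p - Q2) / eps]
    dx = np.linalg.solve(J, mism)       # Newton step for [th2, |V2|]
    th[1] += dx[0]
    V[1] += dx[1]

print(f"stopped after {it} iterations: |V2| = {V[1]:.4f} pu, th2 = {np.degrees(th[1]):.3f} deg")

From a flat start the mismatch norm shrinks quadratically, so a lightly loaded line like this one typically converges in three or four iterations; the same loop structure generalizes to larger systems once the state vector and Jacobian are extended to all unknown angles and magnitudes.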
Technology
Concepts
null
670765
https://en.wikipedia.org/wiki/Biostratigraphy
Biostratigraphy
Biostratigraphy is the branch of stratigraphy which focuses on correlating and assigning relative ages of rock strata by using the fossil assemblages contained within them. The primary objective of biostratigraphy is correlation, demonstrating that a particular horizon in one geological section represents the same period of time as another horizon at a different section. Fossils within these strata are useful because sediments of the same age can look completely different, due to local variations in the sedimentary environment. For example, one section might have been made up of clays and marls, while another has more chalky limestones. However, if the fossil species recorded are similar, the two sediments are likely to have been laid down around the same time. Ideally these fossils are used to help identify biozones, as they make up the basic biostratigraphic units, and define geological time periods based upon the fossil species found within each section. Basic concepts of biostratigraphic principles were introduced centuries ago. The Danish scientist and bishop Nicolas Steno was, in the 17th century, one of the first geologists to recognize that rock layers obey what is now called the law of superposition. With advancements in science and technology, by the 18th century it began to be accepted that fossils were remains left by species that had become extinct but were preserved within the rock record. The method was well-established before Charles Darwin explained the mechanism behind it—evolution. The scientists William Smith, Georges Cuvier, and Alexandre Brongniart came to the conclusion that fossils indicated a series of chronological events, establishing layers of rock strata as a type of unit, later termed a biozone. From here on, scientists began relating the changes in strata and biozones to different geological eras, establishing boundaries and time periods marked by major faunal changes. By the late 19th century the Cambrian and Carboniferous periods were internationally recognized as a result of these findings. During the early 20th century, advancements in technology gave scientists the ability to study radioactive decay. Using this methodology, scientists were able to establish the geological time scale, the boundaries of the different eras (Paleozoic, Mesozoic, Cenozoic), as well as periods (Cambrian, Ordovician, Silurian), through the radioactive decay of isotopes found within rocks and fossils. Current, 21st-century uses of biostratigraphy involve interpreting the age of rock layers, primarily for oil and gas drilling workflows and resource allocation. Fossils as a basis for stratigraphic subdivision Fossil assemblages were traditionally used to designate the duration of periods. Since a large change in fauna was required to make early stratigraphers create a new period, most of the periods we recognize today are terminated by a major extinction event or faunal turnover. Concept of stage A stage is a major subdivision of strata, each systematically following the other and each bearing a unique assemblage of fossils. Therefore, stages can be defined as a group of strata containing the same major fossil assemblages. The French palaeontologist Alcide d'Orbigny is credited with the invention of this concept. He named stages after geographic localities with particularly good sections of rock strata that bear the characteristic fossils on which the stages are based.
Concept of zone In 1856 the German palaeontologist Albert Oppel introduced the concept of the zone (also known as a biozone or Oppel zone). A zone includes strata characterized by the overlapping ranges of fossils. Zones represent the time between the appearance of species chosen at the base of the zone and the appearance of other species chosen at the base of the next succeeding zone. Oppel's zones are named after a particular distinctive fossil species, called an index fossil. Index fossils are one of the species from the assemblage of species that characterize the zone. Biostratigraphy uses zones as its most fundamental unit of measurement. The thickness of these zones can range from a few meters up to hundreds of meters, and their extent can range from local to worldwide, since the horizontal distance over which they can be traced depends on tectonic plates and tectonic activity. Two of the tectonic processes that run the risk of changing these zones' ranges are metamorphic folding and subduction. Furthermore, biostratigraphic units are divided into six principal kinds of biozones: Taxon range biozone, Concurrent range biozone, Interval biozone, Lineage biozone, Assemblage biozone, and Abundance biozone. The Taxon range biozone represents the known stratigraphic and geographic range of occurrence of a single taxon. The Concurrent range biozone includes the concurrent, coincident, or overlapping part of the ranges of two specified taxa. Interval biozones include the strata between two specific biostratigraphic surfaces and can be based on lowest or highest occurrences. Lineage biozones are strata containing species representing a specific segment of an evolutionary lineage. Assemblage biozones are strata that contain a unique association of three or more taxa. Abundance biozones are strata in which the abundance of a particular taxon or group of taxa is significantly greater than in the adjacent parts of the section. Index fossils Index fossils (also known as guide fossils, indicator fossils, or dating fossils) are the fossilized remains or traces of particular plants or animals that are characteristic of a particular span of geologic time or environment, and can be used to identify and date the containing rocks. To be practical, index fossils must have a limited vertical time range, wide geographic distribution, and rapid evolutionary trends. Rock formations separated by great distances but containing the same index fossil species are thereby known to have both formed during the limited time that the species lived. Index fossils were originally used to define and identify geologic units, then became a basis for defining geologic periods, and then for faunal stages and zones. Ammonites, graptolites, archeocyathids, inoceramids, and trilobites are groups of animals from which many species have been identified as index fossils that are widely used in biostratigraphy. Species of microfossils such as acritarchs, chitinozoans, conodonts, dinoflagellate cysts, ostracods, pollen, spores and foraminiferans are also frequently used. Different fossils work well for sediments of different ages; trilobites, for example, are particularly useful for sediments of Cambrian age. A long series of ammonite and inoceramid species are particularly useful for correlating environmental events around the world during the super-greenhouse of the Late Cretaceous. To work well, the fossils used must be widespread geographically, so that they can be found in many different places.
They must also be short-lived as a species, so that the period of time during which they could be incorporated in the sediment is relatively narrow. The longer-lived the species, the poorer the stratigraphic precision, so fossils that evolve rapidly, such as ammonites, are favored over forms that evolve much more slowly, like nautiloids. Often biostratigraphic correlations are based on a faunal assemblage rather than an individual species — this allows greater precision, as the time span in which all of the species in the assemblage existed together is narrower than the time spans of any of its members. Furthermore, if only one species is present in a sample, it can mean either that (1) the strata were formed in the known fossil range of that organism, or (2) that the fossil range of the organism was incompletely known, and the strata extend the known fossil range. For instance, the presence of the trace fossil Treptichnus pedum was used to define the base of the Cambrian period, but it has since been found in older strata. If the fossil is easy to preserve and easy to identify, more precise time estimation of the stratigraphic layers is possible. Faunal succession The concept of faunal succession was theorized at the beginning of the 19th century by William Smith. When Smith was studying rock strata, he began to recognize that rock outcrops contained unique collections of fossils. The observation that these distant rock outcrops contained similar fossils allowed Smith to order rock formations throughout England. Through his work on these rock outcrops and his mapping around England, Smith began to notice that some beds of rock contained mostly similar species; however, there were also subtle differences within or between these fossil groups. This difference between assemblages that appeared identical at first led to the principle of faunal succession, whereby fossil organisms succeed one another in a definite and determinable order, and therefore any time period can be recognized by its fossil content.
Physical sciences
Stratigraphy
Earth science
670854
https://en.wikipedia.org/wiki/Jasmonate
Jasmonate
Jasmonate (JA) and its derivatives are lipid-based plant hormones that regulate a wide range of processes in plants, ranging from growth and photosynthesis to reproductive development. In particular, JAs are critical for plant defense against herbivory and plant responses to poor environmental conditions and other kinds of abiotic and biotic challenges. Some JAs can also be released as volatile organic compounds (VOCs) to permit communication between plants in anticipation of mutual dangers. History The isolation of methyl jasmonate (MeJA) from jasmine oil derived from Jasminum grandiflorum led to the discovery of the molecular structure of jasmonates, and their name, in 1962, while jasmonic acid itself was isolated from Lasiodiplodia theobromae by Aldridge et al. in 1971. Biosynthesis Biosynthesis is reviewed by Acosta and Farmer 2010, Wasternack and Hause 2013, and Wasternack and Song 2017. Jasmonates (JAs) are oxylipins, i.e. derivatives of oxygenated fatty acids. They are biosynthesized from linolenic acid in chloroplast membranes. Synthesis is initiated with the conversion of linolenic acid to 12-oxo-phytodienoic acid (OPDA), which then undergoes a reduction and three rounds of oxidation to form (+)-7-iso-JA, jasmonic acid. Only the conversion of linolenic acid to OPDA occurs in the chloroplast; all subsequent reactions occur in the peroxisome. JA itself can be further metabolized into active or inactive derivatives. Methyl JA (MeJA) is a volatile compound that is potentially responsible for interplant communication. JA conjugated with the amino acid isoleucine (Ile) results in JA-Ile ((+)-7-iso-jasmonoyl-L-isoleucine), which Fonseca et al. 2009 find is involved in most JA signaling; see also the review by Katsir et al. 2008. However, Van Poecke & Dicke 2003 find that Arabidopsis's emission of volatiles does not require JA-Ile, and VanDoorn et al. 2011 find the same for Solanum nigrum's herbivore resistance. JA undergoes decarboxylation to give cis-jasmone. Function Although jasmonate (JA) regulates many different processes in the plant, its role in the wound response is best understood. Following mechanical wounding or herbivory, JA biosynthesis is rapidly activated, leading to expression of the appropriate response genes. For example, in the tomato, wounding produces defense molecules that inhibit leaf digestion in the guts of insects. Another indirect result of JA signaling is the volatile emission of JA-derived compounds. MeJA on leaves can travel airborne to nearby plants and elevate levels of transcripts related to the wound response. In general, this emission can further upregulate JA biosynthesis and cell signaling, thereby inducing nearby plants to prime their defenses in case of herbivory. JAs have also been implicated in cell death and leaf senescence. JA can interact with many kinases and transcription factors associated with senescence. JA can also induce mitochondrial death by inducing the accumulation of reactive oxygen species (ROS). These compounds disrupt mitochondrial membranes and compromise the cell by causing apoptosis, or programmed cell death. JAs' roles in these processes are suggestive of methods by which the plant defends itself against biotic challenges and limits the spread of infections. JA and its derivatives have also been implicated in plant development, symbiosis, and a host of other processes included in the list below. By studying mutants overexpressing JA, one of the earliest discoveries made was that JA inhibits root growth.
The mechanism behind this event is still not understood, but mutants in the COI1-dependent signaling pathway tend to show reduced inhibition, demonstrating that the COI1 pathway is somehow necessary for inhibiting root growth. JA plays many roles in flower development. Mutants in JA synthesis or in JA signaling in Arabidopsis present with male sterility, typically due to delayed development. The same genes promoting male fertility in Arabidopsis promote female fertility in tomatoes. Overexpression of 12-OH-JA can also delay flowering. JA and MeJA inhibit the germination of nondormant seeds and stimulate the germination of dormant seeds. High levels of JA encourage the accumulation of storage proteins; genes encoding vegetative storage proteins are JA responsive. Specifically, tuberonic acid, a JA derivative, induces the formation of tubers. JAs also play a role in symbiosis between plants and microorganisms; however, their precise role is still unclear. JA currently appears to regulate signal exchange and nodulation regulation between legumes and rhizobia. On the other hand, elevated JA levels appear to regulate carbohydrate partitioning and stress tolerance in mycorrhizal plants. JAs have been implicated in the development of carnivorous plants such as the Venus flytrap. Research suggests that evolutionary repurposing of the jasmonate signaling pathway, which mediates defense against herbivores in noncarnivorous plants, has supported the evolution of plant carnivory. Jasmonates can be used to signal the closing of traps and to control the release of enzymes and nutrient transporters which are used in plant digestion. However, not all carnivorous plants rely on the jasmonate pathway in the same way. Butterworts differ significantly from Venus flytraps and sundews, and may have developed methods of regulating digestive enzymes that are JA-independent. Role in pathogenesis Pseudomonas syringae causes bacterial speck disease in tomatoes by hijacking the plant's jasmonate (JA) signaling pathway. This bacterium utilizes a type III secretion system to inject a cocktail of virulence effector proteins into host cells. One of the molecules included in this mixture is the phytotoxin coronatine (COR). JA-insensitive plants are highly resistant to P. syringae and unresponsive to COR; additionally, applying MeJA was sufficient to rescue virulence in COR-mutant bacteria. Infected plants also expressed downstream JA and wound response genes but repressed levels of pathogenesis-related (PR) genes. All these data suggest COR acts through the JA pathway to invade host plants. Activation of a wound response is hypothesized to come at the expense of pathogen defense. By activating the JA wound response pathway, P. syringae could divert resources from its host's immune system and infect more effectively. Plants produce N-acylamides that confer resistance to necrotrophic pathogens by activating JA biosynthesis and signalling. Arachidonic acid (AA), the counterpart of the JA precursor α-LeA that occurs in metazoan species but not in plants, is perceived by plants and acts through an increase in JA levels concomitant with resistance to necrotrophic pathogens. AA is an evolutionarily conserved signalling molecule that acts in plants in response to stress in a manner similar to that in animal systems. Cross talk with other defense pathways While the jasmonate (JA) pathway is critical for wound response, it is not the only signaling pathway mediating defense in plants.
To build an optimal yet efficient defense, the different defense pathways must be capable of cross talk to fine-tune and specify responses to abiotic and biotic challenges. One of the best-studied examples of JA cross talk occurs with salicylic acid (SA). SA, a hormone, mediates defense against pathogens by inducing both the expression of pathogenesis-related genes and systemic acquired resistance (SAR), in which the whole plant gains resistance to a pathogen after localized exposure to it. Wound and pathogen responses appear to interact negatively. For example, silencing phenylalanine ammonia lyase (PAL), an enzyme synthesizing precursors to SA, reduces SAR but enhances herbivory resistance against insects. Similarly, overexpression of PAL enhances SAR but reduces the wound response after insect herbivory. Generally, it has been found that pathogens living in live plant cells are more sensitive to SA-induced defenses, while herbivorous insects and pathogens that derive benefit from cell death are more susceptible to JA defenses. Thus, this trade-off in pathways optimizes defense and saves plant resources. Cross talk also occurs between JA and other plant hormone pathways, such as those of abscisic acid (ABA) and ethylene (ET). These interactions similarly optimize defense against pathogens and herbivores of different lifestyles. For example, MYC2 activity can be stimulated by both JA and ABA pathways, allowing it to integrate signals from both pathways. Other transcription factors such as ERF1 arise as a result of JA and ET signaling. All these molecules can act in combination to activate specific wound response genes. Finally, cross talk is not restricted to defense: JA and ET interactions are critical in development as well, and a balance between the two compounds is necessary for proper apical hook development in Arabidopsis seedlings. Still, further research is needed to elucidate the molecules regulating such cross talk. Mechanism of signaling In general, the steps in jasmonate (JA) signaling mirror those of auxin signaling: the first step comprises E3 ubiquitin ligase complexes, which tag substrates with ubiquitin to mark them for degradation by proteasomes. The second step utilizes transcription factors to effect physiological changes. One of the key molecules in this pathway is JAZ, which serves as the on-off switch for JA signaling. In the absence of JA, JAZ proteins bind to downstream transcription factors and limit their activity. However, in the presence of JA or its bioactive derivatives, JAZ proteins are degraded, freeing transcription factors for expression of genes needed in stress responses. Because JAZ did not disappear in null coi1 mutant plant backgrounds, the protein COI1 was shown to mediate JAZ degradation. COI1 belongs to the family of highly conserved F-box proteins, and it recruits substrates for the E3 ubiquitin ligase SCFCOI1. The complexes that ultimately form are known as SCF complexes. These complexes bind JAZ and target it for proteasomal degradation. However, given the large spectrum of JA molecules, not all JA derivatives activate this pathway for signaling, and the range of those participating in this pathway is unknown. Thus far, only JA-Ile has been shown to be necessary for COI1-mediated degradation of JAZ11. JA-Ile and structurally related derivatives can bind to COI1-JAZ complexes and promote ubiquitination and thus degradation of the latter. This mechanistic model raises the possibility that COI1 serves as an intracellular receptor for JA signals.
Recent research has confirmed this hypothesis by demonstrating that the COI1-JAZ complex acts as a co-receptor for JA perception. Specifically, JA-Ile binds both to a ligand-binding pocket in COI1 and to a 20-amino-acid stretch of the conserved Jas motif in JAZ. This stretch of JAZ acts as a plug for the pocket in COI1, keeping JA-Ile bound in the pocket. Additionally, Sheard et al. 2010 co-purified and subsequently removed inositol pentakisphosphate (InsP5) from COI1, demonstrating InsP5 to be a necessary component of the co-receptor and to play a role in potentiating the co-receptor complex. Sheard's results may indicate varying binding specificity for various SCF-InsP-JAZ complexes. Once freed from JAZ, transcription factors can activate genes needed for a specific JA response. The best-studied transcription factors acting in this pathway belong to the MYC family of transcription factors, which are characterized by a basic helix-loop-helix (bHLH) DNA binding motif. These factors (of which there are three, MYC2, 3, and 4) tend to act additively. For example, a plant that has lost only one myc becomes more susceptible to insect herbivory than a normal plant. A plant that has lost all three will be as susceptible to damage as coi1 mutants, which are completely unresponsive to JA and cannot mount a defense against herbivory. However, while all these MYC proteins share functions, they vary greatly in expression patterns and transcription functions. For instance, MYC2 has a greater effect on root growth compared to MYC3 or MYC4. Additionally, MYC2 will loop back and regulate JAZ expression levels, leading to a negative feedback loop. These transcription factors all have different impacts on JAZ levels after JA signaling. JAZ levels in turn affect transcription factor and gene expression levels. In other words, on top of activating different response genes, the transcription factors can vary JAZ levels to achieve specificity in response to JA signals.
Biology and health sciences
Plant hormone
Biology
670855
https://en.wikipedia.org/wiki/Brassinosteroid
Brassinosteroid
Brassinosteroids (BRs or, less commonly, BS) are a class of polyhydroxysteroids that have been recognized as a sixth class of plant hormones and may have utility as anticancer drugs for treating endocrine-responsive cancers, by inducing apoptosis of cancer cells and inhibiting cancerous growth. Brassinosteroids were first explored during the 1970s, when Mitchell et al. reported promotion of stem elongation and cell division following treatment with organic extracts of rapeseed (Brassica napus) pollen. Brassinolide was the first brassinosteroid to be isolated, in 1979, when pollen from Brassica napus was shown to promote stem elongation and cell division, and the biologically active molecule was isolated. The yield of brassinosteroids from 230 kg of Brassica napus pollen was only 10 mg. Since their discovery, over 70 BR compounds have been isolated from plants. Biosynthesis BRs are biosynthesized from campesterol. The biosynthetic pathway was elucidated by Japanese researchers and later shown to be correct through the analysis of BR biosynthesis mutants in Arabidopsis thaliana, tomatoes, and peas. The sites of BR synthesis in plants have not been experimentally demonstrated. One well-supported hypothesis is that all tissues produce BRs, since BR biosynthetic and signal transduction genes are expressed in a wide range of plant organs; the short-distance activity of the hormones also supports this. Experiments have shown that long-distance transport is possible and that the flow is from the base to the tips (acropetal), but it is not known if this movement is biologically relevant. Functions BRs have been shown to be involved in numerous plant processes: Promotion of cell expansion and cell elongation, working with auxins to do so; their role in cell division and cell wall regeneration is unclear. Promotion of vascular differentiation; BR signal transduction has been studied during vascular differentiation. Necessary for pollen elongation in pollen tube formation. Acceleration of senescence in dying tissue-cultured cells; delayed senescence in BR mutants supports the idea that this action may be biologically relevant. Can provide some protection to plants during chilling and drought stress. Extract from the plant Lychnis viscaria contains a relatively high amount of brassinosteroids, and Lychnis viscaria increases the disease resistance of surrounding plants. 24-Epibrassinolide (EBL), a brassinosteroid isolated from Aegle marmelos Correa (Rutaceae), was further evaluated for antigenotoxicity against maleic hydrazide (MH)-induced genotoxicity in the Allium cepa chromosomal aberration assay. It was shown that the percentage of chromosomal aberrations induced by maleic hydrazide (0.01%) declined significantly with 24-epibrassinolide treatment. BRs have been reported to counteract both abiotic and biotic stress in plants. Application of brassinosteroids to cucumbers was demonstrated to increase the metabolism and removal of pesticides, which could be beneficial for reducing the human ingestion of residual pesticides from non-organically grown vegetables. BRs have also been reported to have a variety of effects when applied to rice seeds (Oryza sativa L.). Treatment of seeds with BRs was shown to reduce the growth-inhibitory effect of salt stress. When the developed plants' fresh weight was analyzed, the treated seeds outperformed plants grown on both saline and non-saline medium; however, when the dry weight was analyzed, BR-treated seeds only outperformed untreated plants that were grown on saline medium.
When dealing with tomatoes (Lycopersicon esculentum) under salt stress, the concentrations of chlorophyll a and chlorophyll b were decreased, and thus pigmentation was decreased as well. BR-treated rice seeds considerably restored the pigment level in plants that were grown on saline medium when compared to non-treated plants under the same conditions. Signalling mechanism BRs are perceived at the cell membrane by a co-receptor complex, comprising brassinosteroid insensitive-1 (BRI1) and BRI1-associated receptor kinase 1 (BAK1). BRI1 acts as a kinase, but in the absence of BR its action is inhibited by another protein, BRI1 kinase inhibitor 1 (BKI1). When BR binds to the BRI1:BAK1 complex, BKI1 is released, and a phosphorylation cascade is triggered which results in the deactivation of another kinase, brassinosteroid insensitive 2 (BIN2). BIN2 and its close homologues inhibit several transcription factors. The inhibition of BIN2 by BR releases these transcription factors to bind to DNA and to enact certain developmental pathways. Agricultural uses BR may prove to be of prominent interest for horticultural crops. Based on extensive research, BR has the ability to improve the quantity and quality of horticultural crops and protect plants against many stresses that can be present in the local environment. With the many advances in technology dealing with the synthesis of more stable synthetic analogues and the genetic manipulation of cellular BR activity, using BR in the production of horticultural crops has become a more practical and promising strategy for improving crop yields. The application of BR successfully alleviates drought stress and improves wheat growth under a deficit irrigation system. It had further positive impacts on plant growth parameters via its integral role in decreasing oxidative stress indicators. BR application has demonstrated efficacy against Phytophthora infestans, mildew on cucumber, viral diseases, and various others. BR could also help bridge the gap between consumers' health concerns and producers' need for growth. A major benefit of using BR is that it does not interfere with the environment, because it acts in a natural way. Since it is a "plant strengthening substance" and is natural, BR application would be more favorable than pesticides and does not contribute to the co-evolution of pests. In Germany, extract from the plant is allowed for use as a "plant strengthening substance." Detection and chemical analysis BRs can be detected by gas chromatography–mass spectrometry and bioassays. There are some bioassays that can detect BRs in the plant, such as the bean second-internode elongation assay and the rice leaf lamina inclination test.
Biology and health sciences
Plant hormone
Biology
670856
https://en.wikipedia.org/wiki/Abscisic%20acid
Abscisic acid
Abscisic acid (ABA or abscisin II) is a plant hormone. ABA functions in many plant developmental processes, including seed and bud dormancy, the control of organ size and stomatal closure. It is especially important for plants in the response to environmental stresses, including drought, soil salinity, cold tolerance, freezing tolerance, heat stress and heavy metal ion tolerance. Discovery In the 1940s, Torsten Hemberg, while working at the University of Stockholm, found evidence that a positive correlation exists between the rest period and the occurrence of an acidic ether soluble growth inhibitor in potato tubers. In 1963, abscisic acid was first identified and characterized as a plant hormone by Frederick T. Addicott and Larry A. Davis. They were studying compounds that cause abscission (shedding) of cotton fruits (bolls). Two compounds were isolated and called abscisin I and abscisin II. Abscisin II is presently called abscisic acid (ABA). In plants Function ABA was originally believed to be involved in abscission, which is how it received its name. This is now known to be the case only in a small number of plants. ABA-mediated signaling also plays an important part in plant responses to environmental stress and plant pathogens. The plant genes for ABA biosynthesis and sequence of the pathway have been elucidated. ABA is also produced by some plant pathogenic fungi via a biosynthetic route different from ABA biosynthesis in plants. In preparation for winter, ABA is produced in terminal buds. This slows plant growth and directs leaf primordia to develop scales to protect the dormant buds during the cold season. ABA also inhibits the division of cells in the vascular cambium, adjusting to cold conditions in the winter by suspending primary and secondary growth. Abscisic acid is also produced in the roots in response to decreased soil water potential (which is associated with dry soil) and other situations in which the plant may be under stress. ABA then translocates to the leaves, where it rapidly alters the osmotic potential of stomatal guard cells, causing them to shrink and stomata to close. The ABA-induced stomatal closure reduces transpiration (evaporation of water out of the stomata), thus preventing further water loss from the leaves in times of low water availability. A close linear correlation was found between the ABA content of the leaves and their conductance (stomatal resistance) on a leaf area basis. Seed germination is inhibited by ABA in antagonism with gibberellin. ABA also prevents loss of seed dormancy. Several ABA-mutant Arabidopsis thaliana plants have been identified and are available from the Nottingham Arabidopsis Stock Centre - both those deficient in ABA production and those with altered sensitivity to its action. Plants that are hypersensitive or insensitive to ABA show phenotypes in seed dormancy, germination, stomatal regulation, and some mutants show stunted growth and brown/yellow leaves. These mutants reflect the importance of ABA in seed germination and early embryo development. Pyrabactin (a pyridyl containing ABA activator) is a naphthalene sulfonamide hypocotyl cell expansion inhibitor, which is an agonist of the seed ABA signaling pathway. It is the first agonist of the ABA pathway that is not structurally related to ABA. 
Homeostasis Biosynthesis Abscisic acid (ABA) is an isoprenoid plant hormone, which is synthesized in the plastidal 2-C-methyl-D-erythritol-4-phosphate (MEP) pathway; unlike the structurally related sesquiterpenes, which are formed from the mevalonic acid-derived precursor farnesyl diphosphate (FDP), the C15 backbone of ABA is formed after cleavage of C40 carotenoids in the MEP pathway. Zeaxanthin is the first committed ABA precursor; a series of enzyme-catalyzed epoxidations and isomerizations via violaxanthin, and final cleavage of the C40 carotenoid by a dioxygenation reaction, yields the proximal ABA precursor, xanthoxin, which is then further oxidized to ABA via abscisic aldehyde. Abamine has been designed, synthesized, developed and then patented as the first specific ABA biosynthesis inhibitor, which makes it possible to regulate endogenous levels of ABA. Locations and timing of ABA biosynthesis Synthesized in nearly all plant tissues, e.g., roots, flowers, leaves and stems. Stored in mesophyll (chlorenchyma) cells, where it is conjugated to glucose via uridine diphosphate-glucosyltransferase, resulting in the inactivated form, ABA-glucose ester. Activated and released from the chlorenchyma in response to environmental stress, such as heat stress, water stress and salt stress. Released during desiccation of the vegetative tissues and when roots encounter soil compaction. Synthesized in green fruits at the beginning of the winter period. Synthesized in maturing seeds, establishing dormancy. Mobile within the leaf and can be rapidly translocated from the leaves to the roots (opposite of previous belief) in the phloem. Accumulation in the roots modifies lateral root development, improving the stress response. ABA is synthesized in almost all cells that contain chloroplasts or amyloplasts. Inactivation ABA can be catabolized to phaseic acid via CYP707A (a group of P450 enzymes) or inactivated by glucose conjugation (ABA-glucose ester) via the enzyme uridine diphosphate-glucosyltransferase (UDP-glucosyltransferase). Catabolism via the CYP707As is very important for ABA homeostasis, and mutants in those genes generally accumulate higher levels of ABA than lines overexpressing ABA biosynthetic genes. In soil bacteria, an alternative catabolic pathway leading to dehydrovomifoliol via the enzyme vomifoliol dehydrogenase has been reported. Effects Antitranspirant – induces stomatal closure, decreasing transpiration to prevent water loss. Promotes root growth during periods of low humidity. Inhibits fruit ripening. Responsible for seed dormancy by inhibiting cell growth – inhibits seed germination. Inhibits the synthesis of kinetin nucleotide. Downregulates enzymes needed for photosynthesis. Acts on the endodermis to prevent growth of roots when exposed to salty conditions. Promotes plant antiviral immunity. Signal cascade In the absence of ABA, the phosphatase ABA-INSENSITIVE1 (ABI1) inhibits the action of SNF1-related protein kinases (subfamily 2) (SnRK2s). ABA is perceived by the PYRABACTIN RESISTANCE 1 (PYR1) and PYR1-like membrane proteins. On ABA binding, PYR1 binds to and inhibits ABI1. When SnRK2s are released from inhibition, they activate several transcription factors from the ABA RESPONSIVE ELEMENT-BINDING FACTOR (ABF) family. ABFs then go on to cause changes in the expression of a large number of genes. Around 10% of plant genes are thought to be regulated by ABA.
In fungi Like plants, some fungal species (for example Cercospora rosicola, Botrytis cinerea and Magnaporthe oryzae) have an endogenous biosynthesis pathway for ABA. In fungi, it seems to be the MVA biosynthetic pathway that is predominant (rather than the MEP pathway that is responsible for ABA biosynthesis in plants). One role of ABA produced by these pathogens seems to be to suppress the plant immune responses. In animals ABA has also been found to be present in metazoans, from sponges up to mammals including humans. Currently, its biosynthesis and biological role in animals is poorly known. ABA elicits potent anti-inflammatory and anti-diabetic effects in mouse models of diabetes/obesity, inflammatory bowel disease, atherosclerosis and influenza infection. Many biological effects in animals have been studied using ABA as a nutraceutical or pharmacognostic drug, but ABA is also generated endogenously by some cells (like macrophages) when stimulated. There are also conflicting conclusions from different studies, where some claim that ABA is essential for pro-inflammatory responses whereas other show anti-inflammatory effects. Like with many natural substances with medical properties, ABA has become popular also in naturopathy. While ABA clearly has beneficial biological activities and many naturopathic remedies will contain high levels of ABA (such as wheatgrass juice, fruits and vegetables), some of the health claims made may be exaggerated or overly optimistic. In mammalian cells ABA targets a protein known as lanthionine synthetase C-like 2 (LANCL2), triggering an alternative mechanism of activation of peroxisome proliferator-activated receptor gamma (PPAR gamma). LANCL2 is conserved in plants and was originally suggested to be an ABA receptor also in plants, which was later challenged. Measurement of ABA concentration Several methods can help to quantify the concentration of abscisic acid in a variety of plant tissue. The quantitative methods used are based on HPLC and ELISA. Two independent FRET probes can measure intracellular ABA concentrations in real time in vivo.
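As a rough illustration of how such quantification works in practice, the sketch below fits a linear HPLC-style calibration curve from standards and inverts it for an unknown sample. The standard amounts and peak areas are invented placeholders, not values from any published assay, and a real ELISA would typically need a nonlinear (competitive) calibration instead.

```python
# Minimal sketch of quantifying ABA against an HPLC calibration curve.
# All numbers below are invented placeholders for illustration.
import numpy as np

standards_ng = np.array([5.0, 10.0, 25.0, 50.0, 100.0])       # ABA standards (ng injected)
peak_areas   = np.array([1.1e4, 2.3e4, 5.4e4, 1.1e5, 2.2e5])   # detector response (arbitrary units)

slope, intercept = np.polyfit(standards_ng, peak_areas, 1)      # linear calibration fit

def aba_amount(peak_area: float) -> float:
    """Invert the calibration to estimate ng of ABA in a sample injection."""
    return (peak_area - intercept) / slope

sample_area = 7.5e4
print(f"Estimated ABA in sample: {aba_amount(sample_area):.1f} ng")
```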
Biology and health sciences
Plant hormone
Biology
671090
https://en.wikipedia.org/wiki/Cowan%E2%80%93Reines%20neutrino%20experiment
Cowan–Reines neutrino experiment
The Cowan–Reines neutrino experiment was conducted by physicists Clyde Cowan and Frederick Reines in 1956. The experiment confirmed the existence of neutrinos. Neutrinos, subatomic particles with no electric charge and very small mass, had been conjectured to be an essential particle in beta decay processes in the 1930s. With almost no mass and no charge, such particles appeared to be impossible to detect. The experiment exploited a huge flux of (then hypothetical) electron antineutrinos emanating from a nearby nuclear reactor and a detector consisting of large tanks of water. Neutrino interactions with the protons of the water were observed, verifying the existence and basic properties of this particle for the first time. Background During the 1910s and 1920s, observations of electrons from nuclear beta decay showed that their energy had a continuous distribution. If the process involved only the atomic nucleus and the electron, the electron's energy would have a single, narrow peak, rather than a continuous energy spectrum. Only the resulting electron was observed, so its varying energy suggested that energy may not be conserved. This quandary and other factors led Wolfgang Pauli to attempt to resolve the issue by postulating the existence of the neutrino in 1930. If the fundamental principle of energy conservation was to be preserved, beta decay had to be a three-body, rather than a two-body, decay. Therefore, in addition to an electron, Pauli suggested that another particle was emitted from the atomic nucleus in beta decay. This particle, the neutrino, had very small mass and no electric charge; it was not observed, but it carried the missing energy. Pauli's suggestion was developed into a proposed theory for beta decay by Enrico Fermi in 1933. The theory posits that the beta decay process consists of four fermions directly interacting with one another. By this interaction, the neutron decays directly to an electron, the conjectured neutrino (later determined to be an antineutrino) and a proton. The theory, which proved to be remarkably successful, relied on the existence of the hypothetical neutrino. Fermi first submitted his "tentative" theory of beta decay to the journal Nature, which rejected it "because it contained speculations too remote from reality to be of interest to the reader." One problem with the neutrino conjecture and Fermi's theory was that the neutrino appeared to have such weak interactions with other matter that it would never be observed. In a 1934 paper, Rudolf Peierls and Hans Bethe calculated that neutrinos could easily pass through the Earth without interactions with any matter. Potential for experiment By inverse beta decay, the predicted neutrino, more correctly an electron antineutrino (ν̄e), should interact with a proton (p) to produce a neutron (n) and a positron (e+): ν̄e + p → n + e+. The chance of this reaction occurring was small. The probability for any given reaction to occur is in proportion to its cross section. Cowan and Reines predicted a cross section for the reaction to be about 6×10⁻⁴⁴ cm². The usual unit for a cross section in nuclear physics is a barn, which is 10⁻²⁴ cm², about 20 orders of magnitude larger. Despite the low probability of the neutrino interaction, the signatures of the interaction are unique, making detection of the rare interactions possible. The positron, the antimatter counterpart of the electron, quickly interacts with any nearby electron, and they annihilate each other. The two resulting coincident gamma rays (γ) are detectable.
The neutron can be detected by its capture by an appropriate nucleus, releasing a third gamma ray. The coincidence of the positron annihilation and neutron capture events gives a unique signature of an antineutrino interaction. A water molecule is composed of an oxygen and two hydrogen atoms, and most of the hydrogen atoms of water have a single proton for a nucleus. Those protons can serve as targets for antineutrinos, so that simple water can serve as a primary detecting material. The hydrogen atoms are so weakly bound in water that they can be viewed as free protons for the neutrino interaction. The interaction mechanism of neutrinos with heavier nuclei, those with several protons and neutrons, is more complicated, since the constituent protons are strongly bound within the nuclei. Setup Given the small chance of interaction of a single neutrino with a proton, neutrinos could only be observed using a huge neutrino flux. Beginning in 1951, Cowan and Reines, both then scientists at Los Alamos, New Mexico, initially thought that neutrino bursts from the atomic weapons tests that were then occurring could provide the required flux. For a neutrino source, they proposed using an atomic bomb. Permission for this was obtained from the laboratory director, Norris Bradbury. The plan was to detonate a "20-kiloton nuclear bomb, comparable to that dropped on Hiroshima, Japan". The detector was proposed to be dropped at the moment of explosion into a hole 40 meters from the detonation site "to catch the flux at its maximum"; it was named "El Monstro". They eventually used a nuclear reactor as a source of neutrinos, as advised by Los Alamos physics division leader J.M.B. Kellogg. The reactor had a neutrino flux of about 5×10¹³ neutrinos per second per square centimeter, far higher than any flux attainable from other radioactive sources. A detector consisting of two tanks of water was employed, offering a huge number of potential targets in the protons of the water. At those rare instances when neutrinos interacted with protons in the water, neutrons and positrons were created. The two gamma rays created by positron annihilation were detected by sandwiching the water tanks between tanks filled with liquid scintillator. The scintillator material gives off flashes of light in response to the gamma rays, and these light flashes are detected by photomultiplier tubes. The additional detection of the neutron from the neutrino interaction provided a second layer of certainty. Cowan and Reines detected the neutrons by dissolving cadmium chloride, CdCl2, in the tank. Cadmium is a highly effective neutron absorber and gives off a gamma ray when it absorbs a neutron: n + Cd → Cd* → Cd + γ. The arrangement was such that after a neutrino interaction event, the two gamma rays from the positron annihilation would be detected, followed by the gamma ray from the neutron absorption by cadmium several microseconds later. The experiment that Cowan and Reines devised used two tanks with a total of about 200 liters of water with about 40 kg of dissolved CdCl2. The water tanks were sandwiched between three scintillator layers which contained 110 five-inch (127 mm) photomultiplier tubes. Results In 1953, Cowan and Reines built a detector they dubbed "Herr Auge", German for "Mr. Eye". They called the neutrino-searching experiment "Project Poltergeist", because of "the neutrino's ghostly nature".
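A back-of-envelope estimate shows how these setup numbers combine into a countable signal. The sketch below multiplies the reactor flux and the predicted cross section quoted above by the number of target protons in roughly 200 liters of water; the detection efficiency is an invented placeholder, so the result should be read only as an order of magnitude.

```python
# Order-of-magnitude event-rate estimate built from the setup described above.
# Only the detection efficiency is an invented placeholder; the other values
# are the kind of numbers quoted in the text (flux, cross section, ~200 L water).
AVOGADRO = 6.022e23

water_liters = 200.0
water_grams = water_liters * 1000.0                    # ~1 g per mL
free_protons = 2 * (water_grams / 18.0) * AVOGADRO     # two hydrogen nuclei per H2O molecule

flux = 5e13        # antineutrinos per cm^2 per second at the detector
sigma = 6e-44      # cm^2, predicted inverse-beta-decay cross section per proton
barn = 1e-24       # cm^2, the usual nuclear cross-section unit
efficiency = 0.02  # assumed overall detection efficiency (placeholder)

interactions_per_s = flux * sigma * free_protons
print(f"interactions in the water: {interactions_per_s * 3600:.0f} per hour")
print(f"detected (assumed efficiency): {interactions_per_s * efficiency * 3600:.1f} per hour")
print(f"barn / sigma = {barn / sigma:.1e}   # about 20 orders of magnitude")
```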
A preliminary experiment was performed in 1953 at the Hanford Site in Washington state, but in late 1955 the experiment moved to the Savannah River Plant near Aiken, South Carolina. The Savannah River site had better shielding against cosmic rays. This shielded location was 11 m from the reactor and 12 m underground. After months of data collection, the accumulated data showed about three neutrino interactions per hour in the detector. To be absolutely sure that they were seeing neutrino events from the detection scheme described above, Cowan and Reines shut down the reactor to show that there was a difference in the rate of detected events. They had predicted a cross-section for the reaction of about 6×10⁻⁴⁴ cm², and their measured cross-section was 6.3×10⁻⁴⁴ cm². The results were published in the July 20, 1956 issue of Science. Legacy Clyde Cowan died in 1974 at the age of 54. In 1995, Frederick Reines was honored with the Nobel Prize for his work on neutrino physics. The basic strategy of employing massive detectors, often water based, for neutrino research was exploited by several subsequent experiments, including the Irvine–Michigan–Brookhaven detector, Kamiokande, the Sudbury Neutrino Observatory and the Homestake Experiment. The Homestake Experiment is a contemporary experiment which detected neutrinos from nuclear fusion in the solar core. Observatories such as these detected neutrino bursts from supernova SN 1987A in 1987, the birth of neutrino astronomy. Through observations of solar neutrinos, the Sudbury Neutrino Observatory was able to demonstrate the process of neutrino oscillation. Neutrino oscillation shows that neutrinos are not massless, a profound development in particle physics.
Physical sciences
Fermions
Physics
672092
https://en.wikipedia.org/wiki/Drawbridge
Drawbridge
A drawbridge or draw-bridge is a type of moveable bridge typically at the entrance to a castle or tower surrounded by a moat. In some forms of English, including American English, the word drawbridge commonly refers to all types of moveable bridges, such as bascule bridges, vertical-lift bridges and swing bridges, but this article concerns the narrower historical definition where the bridge is used in a defensive structure. As used in castles or defensive structures, drawbridges provide access across defensive structures when lowered, but can quickly be raised from within to deny entry to an enemy force. Castle drawbridges Medieval castles were usually defended by a ditch or moat, crossed by a wooden bridge. In early castles, the bridge might be designed to be destroyed or removed in the event of an attack, but drawbridges became very common. A typical arrangement would have the drawbridge immediately outside a gatehouse, consisting of a wooden deck with one edge hinged or pivoting at the gatehouse threshold, so that in the raised position the bridge would be flush against the gate, forming an additional barrier to entry. It would be backed by one or more portcullises and gates. Access to the bridge could be resisted with missiles from machicolations above or arrow slits in flanking towers. The bridge would be raised or lowered using ropes or chains attached to a windlass in a chamber in the gatehouse above the gate-passage. Only a very light bridge could be raised in this way without any form of counterweight, so some form of bascule arrangement is normally found. The bridge may extend into the gate-passage beyond the pivot point, either over a pit into which the internal portion can swing (providing a further obstacle to attack), or in the form of counterweighted beams that drop into slots in the floor. The raising chains could themselves be attached to counterweights. In some cases, a portcullis provides the weight, as at Alnwick. By the 14th century, a bascule arrangement was provided by lifting arms (called "gaffs") above and parallel to the bridge deck whose ends were linked by chains to the lifting part of the bridge. In the raised position, the gaffs would fit into slots in the gatehouse wall ("rainures") which can often still be seen in places like Herstmonceux Castle. Inside the castle, the gaffs were extended to bear counterweights, or might form the side-timbers of a stout gate which would be against the roof of the gate-passage when the drawbridge was down, but would close against the gate-arch as the bridge was raised. In France, working drawbridges survive at a number of châteaux, including the Château du Plessis-Bourré. In England, two working drawbridges remain in regular use at Helmingham Hall, which dates from the early sixteenth century. Turning bridge A bridge pivoted on central trunnions is called a turning bridge, and may or may not have the raising chains characteristic of a drawbridge. The inner end carried counterweights enabling it to sink into a pit in the gate-passage, and when horizontal the bridge would often be supported by stout pegs inserted through the side walls. This was a clumsy arrangement, and many turning bridges were replaced with more advanced drawbridges. Forts Drawbridges were also used on forts with Palmerston Forts using them in the form of Guthrie rolling bridges. In art Drawbridges have appeared in films as part of castle sets. 
When the drawbridge needs to be functional this may present engineering challenges since the set may not be able to support the weight of the bridge in the conventional manner. One solution is to build the drawbridge from steel and concrete before hiding the structural materials behind wood and plaster.
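To see why a counterweight matters for the castle drawbridges described earlier, the sketch below does a simple torque balance about the hinge. The deck mass, length, counterweight and lever arm are invented for illustration, and the chains are assumed to pull perpendicular to the deck at its outer edge; the point is only how a counterweight shrinks the force the windlass must supply.

```python
# Back-of-envelope torque balance for a counterweighted drawbridge.
# All dimensions and masses are assumed values for illustration only.
import math

g = 9.81
deck_mass = 2000.0      # kg, assumed timber deck
deck_len = 4.0          # m, hinged at the gatehouse threshold
counterweight = 1800.0  # kg, hung from the inner ends of the lifting beams
counter_arm = 2.0       # m, lever arm of the counterweight behind the pivot

def chain_force(theta_deg: float, with_counterweight: bool) -> float:
    """Chain force (pulling at the deck's outer edge, perpendicular to it)
    needed to hold the deck at angle theta above horizontal."""
    theta = math.radians(theta_deg)
    deck_torque = deck_mass * g * (deck_len / 2) * math.cos(theta)
    balance = counterweight * g * counter_arm * math.cos(theta) if with_counterweight else 0.0
    return max(deck_torque - balance, 0.0) / deck_len

for theta in (0, 45):
    print(f"{theta:>2} deg: bare {chain_force(theta, False):7.0f} N, "
          f"counterweighted {chain_force(theta, True):7.0f} N")
```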
Technology
Transport infrastructure
null
672202
https://en.wikipedia.org/wiki/Yang%E2%80%93Mills%20theory
Yang–Mills theory
Yang–Mills theory is a quantum field theory for nuclear binding devised by Chen Ning Yang and Robert Mills in 1953, as well as a generic term for the class of similar theories. The Yang–Mills theory is a gauge theory based on a special unitary group SU(n), or more generally any compact Lie group. A Yang–Mills theory seeks to describe the behavior of elementary particles using these non-abelian Lie groups and is at the core of the unification of the electromagnetic and weak forces (i.e. U(1) × SU(2)) as well as quantum chromodynamics, the theory of the strong force (based on SU(3)). Thus it forms the basis of the understanding of the Standard Model of particle physics. History and qualitative description Gauge theory in electrodynamics All known fundamental interactions can be described in terms of gauge theories, but working this out took decades. Hermann Weyl's pioneering work on this project started in 1915 when his colleague Emmy Noether proved that every conserved physical quantity has a matching symmetry, and culminated in 1928 when he published his book applying the geometrical theory of symmetry (group theory) to quantum mechanics. Weyl named the relevant symmetry in Noether's theorem the "gauge symmetry", by analogy to distance standardization in railroad gauges. Erwin Schrödinger in 1922, three years before working on his equation, connected Weyl's group concept to electron charge. Schrödinger showed that the group U(1) produced a phase shift in electromagnetic fields that matched the conservation of electric charge. As the theory of quantum electrodynamics developed in the 1930s and 1940s, the U(1) group transformations played a central role. Many physicists thought there must be an analog for the dynamics of nucleons. Chen Ning Yang in particular was obsessed with this possibility. Yang and Mills find the nuclear force gauge theory Yang's core idea was to look for a conserved quantity in nuclear physics comparable to electric charge and use it to develop a corresponding gauge theory comparable to electrodynamics. He settled on conservation of isospin, a quantum number that distinguishes a neutron from a proton, but he made no progress on a theory. Taking a break from Princeton in the summer of 1953, Yang met a collaborator who could help: Robert Mills. As Mills himself describes: "During the academic year 1953–1954, Yang was a visitor to Brookhaven National Laboratory ... I was at Brookhaven also ... and was assigned to the same office as Yang. Yang, who has demonstrated on a number of occasions his generosity to physicists beginning their careers, told me about his idea of generalizing gauge invariance and we discussed it at some length ... I was able to contribute something to the discussions, especially with regard to the quantization procedures, and to a small degree in working out the formalism; however, the key ideas were Yang's." In the summer of 1953, Yang and Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to non-abelian groups, selecting the group SU(2) to provide an explanation for isospin conservation in collisions involving the strong interactions. Yang's presentation of the work at Princeton in February 1954 was challenged by Pauli, who asked about the mass of the field developed with the gauge invariance idea. Pauli knew that this might be an issue as he had worked on applying gauge invariance but chose not to publish it, viewing the massless excitations of the theory to be "unphysical 'shadow particles'".
Yang and Mills published in October 1954; near the end of the paper, they admit that they had no satisfactory answer for the mass of the quanta of the new gauge field. This problem of unphysical massless excitation blocked further progress. The idea was set aside until 1960, when the concept of particles acquiring mass through symmetry breaking in massless theories was put forward, initially by Jeffrey Goldstone, Yoichiro Nambu, and Giovanni Jona-Lasinio. This prompted a significant restart of Yang–Mills theory studies that proved successful in the formulation of both electroweak unification and quantum chromodynamics (QCD). The electroweak interaction is described by the gauge group SU(2) × U(1), while QCD is an SU(3) Yang–Mills theory. The massless gauge bosons of the electroweak SU(2) × U(1) mix after spontaneous symmetry breaking to produce the three massive bosons of the weak interaction (W+, W−, and Z) as well as the still-massless photon field. The dynamics of the photon field and its interactions with matter are, in turn, governed by the gauge theory of quantum electrodynamics. The Standard Model combines the strong interaction with the unified electroweak interaction (unifying the weak and electromagnetic interaction) through the symmetry group SU(3) × SU(2) × U(1). In the current epoch the strong interaction is not unified with the electroweak interaction, but from the observed running of the coupling constants it is believed they all converge to a single value at very high energies. Phenomenology at lower energies in quantum chromodynamics is not completely understood due to the difficulties of managing such a theory with a strong coupling. This may be the reason why confinement has not been theoretically proven, though it is a consistent experimental observation. This shows why QCD confinement at low energy is a mathematical problem of great relevance, and why the Yang–Mills existence and mass gap problem is a Millennium Prize Problem. Parallel work on non-Abelian gauge theories In 1953, in a private correspondence, Wolfgang Pauli formulated a six-dimensional theory of Einstein's field equations of general relativity, extending the five-dimensional theory of Kaluza, Klein, Fock, and others to a higher-dimensional internal space. However, there is no evidence that Pauli developed the Lagrangian of a gauge field or the quantization of it. Because Pauli found that his theory "leads to some rather unphysical shadow particles", he refrained from publishing his results formally. Although Pauli did not publish his six-dimensional theory, he gave two seminar lectures about it in Zürich in November 1953. In January 1954, Ronald Shaw, a graduate student at the University of Cambridge, also developed a non-Abelian gauge theory for nuclear forces. However, the theory needed massless particles in order to maintain gauge invariance. Since no such massless particles were known at the time, Shaw and his supervisor Abdus Salam chose not to publish their work. Shortly after Yang and Mills published their paper in October 1954, Salam encouraged Shaw to publish his work to mark his contribution. Shaw declined, and instead it only forms a chapter of his PhD thesis published in 1956.
Mathematical overview Yang–Mills theories are special examples of gauge theories with a non-abelian symmetry group given by the Lagrangian
\mathcal{L}_{\mathrm{gf}} = -\tfrac{1}{2}\operatorname{Tr}(F^2) = -\tfrac{1}{4}F^{a\,\mu\nu}F^a_{\mu\nu}
with the generators T^a of the Lie algebra, indexed by a, corresponding to the F-quantities (the curvature or field-strength form) satisfying
\operatorname{Tr}(T^a T^b) = \tfrac{1}{2}\delta^{ab}, \qquad [T^a, T^b] = i f^{abc} T^c.
Here, the f^{abc} are structure constants of the Lie algebra (totally antisymmetric if the generators of the Lie algebra are normalised such that \operatorname{Tr}(T^a T^b) is proportional to \delta^{ab}), the covariant derivative is defined as
D_\mu = I\partial_\mu - i g T^a A^a_\mu,
I is the identity matrix (matching the size of the generators), A^a_\mu is the vector potential, and g is the coupling constant. In four dimensions, the coupling constant g is a pure number and for a SU(n) group one has a, b, c = 1, \ldots, n^2 - 1. The relation
F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g f^{abc} A^b_\mu A^c_\nu
can be derived by the commutator
[D_\mu, D_\nu] = -i g T^a F^a_{\mu\nu}.
The field has the property of being self-interacting and the equations of motion that one obtains are said to be semilinear, as nonlinearities are both with and without derivatives. This means that one can manage this theory only by perturbation theory with small nonlinearities. Note that the transition between "upper" ("contravariant") and "lower" ("covariant") vector or tensor components is trivial for the group indices a, b, c (e.g. f^{abc} = f_{abc}), whereas for μ and ν it is nontrivial, corresponding e.g. to the usual Lorentz signature \eta_{\mu\nu} = \operatorname{diag}(+,-,-,-). From the given Lagrangian one can derive the equations of motion given by
\partial^\mu F^a_{\mu\nu} + g f^{abc} A^{b\,\mu} F^c_{\mu\nu} = 0.
Putting F_{\mu\nu} = T^a F^a_{\mu\nu}, these can be rewritten as
(D^\mu F_{\mu\nu})^a = 0.
A Bianchi identity holds
(D_\mu F_{\nu\kappa})^a + (D_\kappa F_{\mu\nu})^a + (D_\nu F_{\kappa\mu})^a = 0,
which is equivalent to the Jacobi identity
[D_\mu, [D_\nu, D_\kappa]] + [D_\kappa, [D_\mu, D_\nu]] + [D_\nu, [D_\kappa, D_\mu]] = 0,
since [D_\mu, F^a_{\nu\kappa}] = D_\mu F^a_{\nu\kappa}. Define the dual strength tensor
\tilde{F}^{\mu\nu} = \tfrac{1}{2}\varepsilon^{\mu\nu\rho\sigma}F_{\rho\sigma};
then the Bianchi identity can be rewritten as
D_\mu \tilde{F}^{\mu\nu} = 0.
A source current J^a_\nu enters into the equations of motion as
\partial^\mu F^a_{\mu\nu} + g f^{abc} A^{b\,\mu} F^c_{\mu\nu} = -J^a_\nu.
Note that the currents must properly change under gauge group transformations. We give here some comments about the physical dimensions of the coupling. In D dimensions, the field scales as [A] = [L^{(2-D)/2}] and so the coupling must scale as [g^2] = [L^{D-4}]. This implies that Yang–Mills theory is not renormalizable for dimensions greater than four. Furthermore, for D = 4 the coupling is dimensionless and both the field and the square of the coupling have the same dimensions as the field and the coupling of a massless quartic scalar field theory. So, these theories share the scale invariance at the classical level. Quantization A method of quantizing the Yang–Mills theory is by functional methods, i.e. path integrals. One introduces a generating functional for n-point functions as
Z[j] = \int [dA] \exp\left\{ i \int d^4x \left[ -\tfrac{1}{4}F^a_{\mu\nu}F^{a\,\mu\nu} + j^a_\mu A^{a\,\mu} \right] \right\},
but this integral has no meaning as it is because the potential vector can be arbitrarily chosen due to the gauge freedom. This problem was already known for quantum electrodynamics but here becomes more severe due to non-abelian properties of the gauge group. A way out has been given by Ludvig Faddeev and Victor Popov with the introduction of a ghost field (see Faddeev–Popov ghost) that has the property of being unphysical since, although it agrees with Fermi–Dirac statistics, it is a complex scalar field, which violates the spin–statistics theorem. So, we can write the generating functional as
Z[j, \bar\varepsilon, \varepsilon] = \int [dA][d\bar c][dc] \exp\left\{ i \int d^4x \left[ -\tfrac{1}{4}F^a_{\mu\nu}F^{a\,\mu\nu} - \tfrac{1}{2\xi}(\partial^\mu A^a_\mu)^2 - \bar c^a \partial^\mu (D_\mu c)^a + j^a_\mu A^{a\,\mu} + \bar\varepsilon^a c^a + \bar c^a \varepsilon^a \right] \right\},
being j^a_\mu the source for the gauge field, \xi the gauge-fixing parameter, and \bar\varepsilon^a, \varepsilon^a the sources for the ghost. This is the expression commonly used to derive Feynman's rules (see Feynman diagram). Here we have c^a for the ghost field while \xi fixes the gauge's choice for the quantization. Feynman's rules obtained from this functional are the propagators of the gauge and ghost fields together with the three-gluon, four-gluon and ghost–gluon vertices. These rules for Feynman's diagrams can be obtained when the generating functional given above is rewritten as
Z[j, \bar\varepsilon, \varepsilon] = \exp\left\{ i S_{\mathrm{int}}\left[ \tfrac{1}{i}\tfrac{\delta}{\delta j}, \tfrac{1}{i}\tfrac{\delta}{\delta \bar\varepsilon}, \tfrac{1}{i}\tfrac{\delta}{\delta \varepsilon} \right] \right\} Z_0[j, \bar\varepsilon, \varepsilon],
with Z_0 being the generating functional of the free theory. Expanding in g and computing the functional derivatives, we are able to obtain all the n-point functions with perturbation theory.
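The Lie-algebra relations underlying the Lagrangian above can be checked numerically for the simplest non-abelian case, SU(2), where the generators may be taken as the Pauli matrices divided by two and the structure constants reduce to the Levi-Civita symbol. The snippet below is only a sanity check of those classical identities under that normalisation; it is not part of the quantization machinery just described.

```python
# Numerical check of [T^a, T^b] = i f^abc T^c for SU(2) with T^a = sigma^a / 2,
# where f^abc is the Levi-Civita symbol and Tr(T^a T^b) = delta^ab / 2.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]

def f(a: int, b: int, c: int) -> float:
    """Extract f^abc as 2 Tr([T^a, T^b] T^c) / i, using the normalisation above."""
    comm = T[a] @ T[b] - T[b] @ T[a]
    return (2 * np.trace(comm @ T[c]) / 1j).real

for a in range(3):
    for b in range(3):
        for c in range(3):
            levi_civita = float(np.sign((b - a) * (c - b) * (c - a)))
            assert abs(f(a, b, c) - levi_civita) < 1e-12
print("[T^a, T^b] = i eps_abc T^c verified for SU(2)")
```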
Using the LSZ reduction formula we get from the n-point functions the corresponding process amplitudes, cross sections and decay rates. The theory is renormalizable and corrections are finite at any order of perturbation theory. For quantum electrodynamics the ghost field decouples because the gauge group is abelian. This can be seen from the coupling between the gauge field and the ghost field, which is of the form g f^{abc} (\partial^\mu \bar c^a) A^b_\mu c^c: for the abelian case, all the structure constants f^{abc} are zero and so there is no coupling. In the non-abelian case, the ghost field appears as a useful way to rewrite the quantum field theory without physical consequences on the observables of the theory such as cross sections or decay rates. One of the most important results obtained for Yang–Mills theory is asymptotic freedom. This result can be obtained by assuming that the coupling constant is small (so small nonlinearities), as for high energies, and applying perturbation theory. The relevance of this result is due to the fact that a Yang–Mills theory that describes strong interaction and asymptotic freedom permits proper treatment of experimental results coming from deep inelastic scattering. To obtain the behavior of the Yang–Mills theory at high energies, and so to prove asymptotic freedom, one applies perturbation theory assuming a small coupling. This is verified a posteriori in the ultraviolet limit. In the opposite limit, the infrared limit, the situation is the opposite, as the coupling is too large for perturbation theory to be reliable. Most of the difficulties that research meets lie in managing the theory at low energies. That is the interesting case, being inherent to the description of hadronic matter and, more generally, to all the observed bound states of gluons and quarks and their confinement (see hadrons). The most used method to study the theory in this limit is to try to solve it on computers (see lattice gauge theory). In this case, large computational resources are needed to be sure the correct limit of infinite volume (smaller lattice spacing) is obtained. This is the limit the results must be compared with. Smaller spacing and larger coupling are not independent of each other, and larger computational resources are needed for each. As of today, the situation appears somewhat satisfactory for the hadronic spectrum and the computation of the gluon and ghost propagators, but the glueball and hybrids spectra are yet a questioned matter in view of the experimental observation of such exotic states. Indeed, the resonance is not seen in any of such lattice computations and contrasting interpretations have been put forward. This is a hotly debated issue. Open problems Yang–Mills theories met with general acceptance in the physics community after Gerard 't Hooft, in 1972, worked out their renormalization, relying on a formulation of the problem worked out by his advisor Martinus Veltman. Renormalizability is obtained even if the gauge bosons described by this theory are massive, as in the electroweak theory, provided the mass is only an "acquired" one, generated by the Higgs mechanism. The mathematics of the Yang–Mills theory is a very active field of research, yielding e.g. invariants of differentiable structures on four-dimensional manifolds via work of Simon Donaldson. Furthermore, the field of Yang–Mills theories was included in the Clay Mathematics Institute's list of "Millennium Prize Problems".
Here the prize-problem consists, especially, in a proof of the conjecture that the lowest excitations of a pure Yang–Mills theory (i.e. without matter fields) have a finite mass-gap with regard to the vacuum state. Another open problem, connected with this conjecture, is a proof of the confinement property in the presence of additional fermions. In physics the survey of Yang–Mills theories does not usually start from perturbation analysis or analytical methods, but more recently from systematic application of numerical methods to lattice gauge theories.
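The asymptotic freedom and strong-coupling infrared behaviour discussed above can be illustrated with the standard one-loop running of the coupling for an SU(N) gauge theory with n_f fermion flavours. The reference coupling and scale in the sketch below are illustrative choices rather than fitted values, and the one-loop formula itself breaks down exactly in the low-energy regime where lattice methods take over.

```python
# One-loop running coupling of an SU(N) gauge theory with n_f fermion flavours,
# illustrating asymptotic freedom: alpha shrinks as the scale Q grows.
# Reference values (alpha_ref, mu) are illustrative, not a fit to data.
import math

def alpha_one_loop(Q_GeV: float, alpha_ref: float = 0.30, mu_GeV: float = 2.0,
                   N: int = 3, n_f: int = 5) -> float:
    b0 = (11 * N - 2 * n_f) / (12 * math.pi)   # positive for n_f < 11N/2, hence asymptotic freedom
    return alpha_ref / (1 + alpha_ref * b0 * math.log(Q_GeV**2 / mu_GeV**2))

for Q in (2.0, 10.0, 100.0, 1000.0):
    print(f"Q = {Q:6.0f} GeV  ->  alpha ~ {alpha_one_loop(Q):.3f}")
```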
Physical sciences
Particle physics: General
Physics
672218
https://en.wikipedia.org/wiki/Isospin
Isospin
In nuclear physics and particle physics, isospin (I) is a quantum number related to the up- and down quark content of the particle. Isospin is also known as isobaric spin or isotopic spin. Isospin symmetry is a subset of the flavour symmetry seen more broadly in the interactions of baryons and mesons. The name of the concept contains the term spin because its quantum mechanical description is mathematically similar to that of angular momentum (in particular, in the way it couples; for example, a proton–neutron pair can be coupled either in a state of total isospin 1 or in one of 0). But unlike angular momentum, it is a dimensionless quantity and is not actually any type of spin. Before the concept of quarks was introduced, particles that are affected equally by the strong force but had different charges (e.g. protons and neutrons) were considered different states of the same particle, but having isospin values related to the number of charge states. A close examination of isospin symmetry ultimately led directly to the discovery and understanding of quarks and to the development of Yang–Mills theory. Isospin symmetry remains an important concept in particle physics. Isospin invariance To a good approximation the proton and neutron have the same mass: they can be interpreted as two states of the same particle. These states have different values for an internal isospin coordinate. The mathematical properties of this coordinate are completely analogous to intrinsic spin angular momentum. The third component of the isospin operator, I3, for this coordinate has eigenvalues +1/2 and −1/2; it is related to the charge operator, Q = 1/2 + I3, which has eigenvalue 1 for the proton and zero for the neutron. For a system of n nucleons, the charge operator depends upon the mass number A: Q = A/2 + I3. Isobars, nuclei with the same mass number like 40K and 40Ar, only differ in the value of the I3 eigenvalue. For this reason isospin is also called "isobaric spin". The internal structure of these nucleons is governed by the strong interaction, but the Hamiltonian of the strong interaction is isospin invariant. As a consequence the nuclear forces are charge independent. Properties like the stability of deuterium can be predicted based on isospin analysis. However, this invariance is not exact and the quark model gives more precise results. Relation to hypercharge The charge operator can be expressed in terms of the projection of isospin, I3, and the hypercharge, Y: Q = I3 + Y/2. This is known as the Gell-Mann–Nishijima formula. The hypercharge is the center of splitting for the isospin multiplet: Y/2 = (Qmax + Qmin)/2, the average charge of the multiplet. This relation has an analog in the weak interaction where T is the weak isospin. Quark content and isospin In the modern formulation, isospin (I) is defined as a vector quantity in which up and down quarks have a value of I = 1/2, with the 3rd-component (I3) being +1/2 for up quarks, and −1/2 for down quarks, while all other quarks have I = 0. Therefore, for hadrons in general, I3 = 1/2 [(n_u − n_ū) − (n_d − n_d̄)], where n_u and n_d are the numbers of up and down quarks and n_ū and n_d̄ the numbers of their antiquarks. In any combination of quarks, the 3rd component of the isospin vector (I3) could either be aligned between a pair of quarks, or face the opposite direction, giving different possible values for total isospin for any combination of quark flavours. Hadrons with the same quark content but different total isospin can be distinguished experimentally, verifying that flavour is actually a vector quantity, not a scalar (up vs down simply being a projection in the quantum mechanical z axis of flavour space).
For example, a strange quark can be combined with an up and a down quark to form a baryon, but there are two different ways the isospin values can combine: either adding (due to being flavour-aligned) or cancelling out (due to being in opposite flavour directions). The isospin-1 state (the Σ0) and the isospin-0 state (the Λ0) have different experimentally detected masses and half-lives. Isospin and symmetry Isospin is regarded as a symmetry of the strong interaction under the action of the Lie group SU(2), the two states being the up flavour and down flavour. In quantum mechanics, when a Hamiltonian has a symmetry, that symmetry manifests itself through a set of states that have the same energy (the states are described as being degenerate). In simple terms, the energy operator for the strong interaction gives the same result when an up quark and an otherwise identical down quark are swapped around. Like the case for regular spin, the isospin operator I is vector-valued: it has three components Ix, Iy, Iz, which are coordinates in the same 3-dimensional vector space where the 3 representation acts. Note that this vector space has nothing to do with the physical space, except similar mathematical formalism. Isospin is described by two quantum numbers: the total isospin I, and I3, an eigenvalue of the Iz projection for which flavor states are eigenstates. In other words, each I3 state specifies a certain flavor state of a multiplet. The third coordinate (z), to which the "3" subscript refers, is chosen due to notational conventions that relate bases in 2 and 3 representation spaces. Namely, for the spin-1/2 case, components of I are equal to Pauli matrices divided by 2, and so Iz = 1/2 τ3, where
\tau_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \tau_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \tau_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
While the forms of these matrices are isomorphic to those of spin, these Pauli matrices only act within the Hilbert space of isospin, not that of spin, and therefore it is common to denote them with τ rather than σ to avoid confusion. Although isospin symmetry is actually very slightly broken, SU(3) symmetry is more badly broken, due to the much higher mass of the strange quark compared to the up and down. The discovery of charm, bottomness and topness could lead to further expansions up to SU(6) flavour symmetry, which would hold if all six quarks were identical. However, the very much larger masses of the charm, bottom, and top quarks mean that SU(6) flavour symmetry is very badly broken in nature (at least at low energies), and assuming this symmetry leads to qualitatively and quantitatively incorrect predictions. In modern applications, such as lattice QCD, isospin symmetry is often treated as exact for the three light quarks (uds), while the three heavy quarks (cbt) must be treated separately. Hadron nomenclature Hadron nomenclature is based on isospin. Particles of total isospin 3/2 are named Delta baryons and can be made by a combination of any three up or down quarks (but only up or down quarks). Particles of total isospin 1 can be made from two up quarks, two down quarks, or one of each: certain mesons further differentiated by total spin into pions (total spin 0) and rho mesons (total spin 1) with an additional quark of higher flavour Sigma baryons Particles of total isospin 1/2 can be made from: a single up or down quark together with an additional quark of higher flavour strange (kaons), charm (D meson), or bottom (B meson) a single up or down quark together with two additional quarks of higher flavour Xi baryon an up quark, a down quark, and either an up or a down quark nucleons.
Note that three identical quarks would be forbidden by the Pauli exclusion principle due to the requirement of an antisymmetric wave function. Particles of total isospin 0 can be made from a neutral quark-antiquark pair or eta mesons, one up quark and one down quark with an additional quark of higher flavour (Lambda baryons), or anything not involving any up or down quarks. History Origin of isospin In 1932, Werner Heisenberg introduced a new (unnamed) concept to explain binding of the proton and the then newly discovered neutron (symbol n). His model resembled the bonding model for the molecular hydrogen ion, H2+: a single electron was shared by two protons. Heisenberg's theory had several problems, most notably it incorrectly predicted the exceptionally strong binding energy of He2+, alpha particles. However, its equal treatment of the proton and neutron gained significance when several experimental studies showed these particles must bind almost equally. In response, Eugene Wigner used Heisenberg's concept in his 1937 paper where he introduced the term "isotopic spin" to indicate how the concept is similar to spin in behavior. The particle zoo These considerations would also prove useful in the analysis of meson-nucleon interactions after the discovery of the pions in 1947. The three pions (π+, π0, π−) could be assigned to an isospin triplet with I = 1 and I3 = +1, 0, −1. By assuming that isospin was conserved by nuclear interactions, the new mesons were more easily accommodated by nuclear theory. As further particles were discovered, they were assigned into isospin multiplets according to the number of different charge states seen: 2 doublets of K mesons (K+, K0), (K̄0, K−), a triplet of Sigma baryons (Σ+, Σ0, Σ−), a singlet Lambda baryon (Λ0), a quartet of Delta baryons (Δ++, Δ+, Δ0, Δ−), and so on. The power of isospin symmetry and related methods comes from the observation that families of particles with similar masses tend to correspond to the invariant subspaces associated with the irreducible representations of the Lie algebra SU(2). In this context, an invariant subspace is spanned by basis vectors which correspond to particles in a family. Under the action of the Lie algebra SU(2), which generates rotations in isospin space, elements corresponding to definite particle states or superpositions of states can be rotated into each other, but can never leave the space (since the subspace is in fact invariant). This is reflective of the symmetry present. The fact that unitary matrices will commute with the Hamiltonian means that the physical quantities calculated do not change even under unitary transformation. In the case of isospin, this machinery is used to reflect the fact that the mathematics of the strong force behaves the same if a proton and neutron are swapped around (in the modern formulation, the up and down quark). An example: Delta baryons For example, the particles known as the Delta baryons, baryons of spin 3/2, were grouped together because they all have nearly the same mass (approximately 1232 MeV/c2) and interact in nearly the same way. They could be treated as the same particle, with the difference in charge being due to the particle being in different states. Isospin was introduced in order to be the variable that defined this difference of state. In an analogue to spin, an isospin projection (denoted I3) is associated to each charged state; since there were four Deltas, four projections were needed. Like spin, isospin projections were made to vary in increments of 1.
Hence, in order to have four increments of 1, an isospin value of 3/2 is required (giving the projections I3 = +3/2, +1/2, −1/2, −3/2). Thus, all the Deltas were said to have isospin 3/2, and each individual charge had a different I3 (e.g. the Δ++ was associated with I3 = +3/2). In the isospin picture, the four Deltas and the two nucleons were thought to simply be the different states of two particles. The Delta baryons are now understood to be made of a mix of three up and down quarks uuu (Δ++), uud (Δ+), udd (Δ0), and ddd (Δ−); the difference in charge being difference in the charges of up and down quarks (+2/3 e and −1/3 e respectively); yet, they can also be thought of as the excited states of the nucleons. Gauged isospin symmetry Attempts have been made to promote isospin from a global to a local symmetry. In 1954, Chen Ning Yang and Robert Mills suggested that the notion of protons and neutrons, which are continuously rotated into each other by isospin, should be allowed to vary from point to point. To describe this, the proton and neutron direction in isospin space must be defined at every point, giving local basis for isospin. A gauge connection would then describe how to transform isospin along a path between two points. This Yang–Mills theory describes interacting vector bosons, like the photon of electromagnetism. Unlike the photon, the SU(2) gauge theory would contain self-interacting gauge bosons. The condition of gauge invariance suggests that they have zero mass, just as in electromagnetism. Ignoring the massless problem, as Yang and Mills did, the theory makes a firm prediction: the vector particle should couple to all particles of a given isospin universally. The coupling to the nucleon would be the same as the coupling to the kaons. The coupling to the pions would be the same as the self-coupling of the vector bosons to themselves. When Yang and Mills proposed the theory, there was no candidate vector boson. J. J. Sakurai in 1960 predicted that there should be a massive vector boson which is coupled to isospin, and predicted that it would show universal couplings. The rho mesons were discovered a short time later, and were quickly identified as Sakurai's vector bosons. The couplings of the rho to the nucleons and to each other were verified to be universal, as best as experiment could measure. The fact that the diagonal isospin current contains part of the electromagnetic current led to the prediction of rho-photon mixing and the concept of vector meson dominance, ideas which led to successful theoretical pictures of GeV-scale photon-nucleus scattering. The introduction of quarks The discovery and subsequent analysis of additional particles, both mesons and baryons, made it clear that the concept of isospin symmetry could be broadened to an even larger symmetry group, now called flavor symmetry. Once the kaons and their property of strangeness became better understood, it started to become clear that these, too, seemed to be a part of an enlarged symmetry that contained isospin as a subgroup. The larger symmetry was named the Eightfold Way by Murray Gell-Mann, and was promptly recognized to correspond to the adjoint representation of SU(3). To better understand the origin of this symmetry, Gell-Mann proposed the existence of up, down and strange quarks which would belong to the fundamental representation of the SU(3) flavor symmetry. In the quark model, the isospin projection (I3) followed from the up and down quark content of particles; uud for the proton and udd for the neutron.
Technically, the nucleon doublet states are seen to be linear combinations of products of 3-particle isospin doublet states and spin doublet states. That is, the (spin-up) proton wave function, in terms of quark-flavour eigenstates, is described by and the (spin-up) neutron by Here, is the up quark flavour eigenstate, and is the down quark flavour eigenstate, while and are the eigenstates of . Although these superpositions are the technically correct way of denoting a proton and neutron in terms of quark flavour and spin eigenstates, for brevity, they are often simply referred to as "uud" and "udd". The derivation above assumes exact isospin symmetry and is modified by SU(2)-breaking terms. Similarly, the isospin symmetry of the pions are given by: Although the discovery of the quarks led to reinterpretation of mesons as a vector bound state of a quark and an antiquark, it is sometimes still useful to think of them as being the gauge bosons of a hidden local symmetry. Weak isospin In 1961 Sheldon Glashow proposed that a relation similar to the Gell-Mann–Nishijima formula for charge to isospin would also apply to the weak interaction: Here the charge is related to the projection of weak isospin and the weak hypercharge . Isospin and weak isospin are related to the same symmetry but for different forces. Weak isospin is the gauge symmetry of the weak interaction which connects quark and lepton doublets of left-handed particles in all generations; for example, up and down quarks, top and bottom quarks, electrons and electron neutrinos. By contrast (strong) isospin connects only up and down quarks, acts on both chiralities (left and right) and is a global (not a gauge) symmetry.
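A small worked example ties together the quark-content rule for I3 and the Gell-Mann–Nishijima relation quoted earlier. The sketch below covers only baryons built from u, d and s quarks (no antiquarks), and takes the hypercharge as baryon number plus strangeness, which suffices for these light states; it is an illustration of the bookkeeping, not a general flavour calculator.

```python
# Worked example: I3 from up/down quark counts, and charge from the
# Gell-Mann-Nishijima formula Q = I3 + Y/2, with Y = B + S for light baryons.
QUARKS = {  # (electric charge in units of e, baryon number, strangeness, I3)
    "u": (+2/3, 1/3, 0, +1/2),
    "d": (-1/3, 1/3, 0, -1/2),
    "s": (-1/3, 1/3, -1, 0),
}

def hadron(content: str) -> None:
    charge_direct = sum(QUARKS[q][0] for q in content)   # naive sum of quark charges
    baryon = sum(QUARKS[q][1] for q in content)
    strangeness = sum(QUARKS[q][2] for q in content)
    i3 = sum(QUARKS[q][3] for q in content)
    hypercharge = baryon + strangeness
    q_gmn = i3 + hypercharge / 2                          # Gell-Mann-Nishijima charge
    print(f"{content}: I3 = {i3:+.1f}, Y = {hypercharge:+.1f}, "
          f"Q = {q_gmn:+.1f} (direct quark-charge sum: {charge_direct:+.1f})")

for content in ("uud", "udd", "uuu", "uds"):   # proton, neutron, Delta++, Lambda/Sigma0 content
    hadron(content)
```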
Physical sciences
Quantum numbers
Physics
673275
https://en.wikipedia.org/wiki/Scale%20insect
Scale insect
Scale insects are small insects of the order Hemiptera, suborder Sternorrhyncha. Of dramatically variable appearance and extreme sexual dimorphism, they comprise the infraorder Coccomorpha which is considered a more convenient grouping than the superfamily Coccoidea due to taxonomic uncertainties. Adult females typically have soft bodies and no limbs, and are concealed underneath domed scales, extruding quantities of wax for protection. Some species are hermaphroditic, with a combined ovotestis instead of separate ovaries and testes. Males, in the species where they occur, have legs and sometimes wings, and resemble small flies. Scale insects are herbivores, piercing plant tissues with their mouthparts and remaining in one place, feeding on sap. The excess fluid they imbibe is secreted as honeydew on which sooty mold tends to grow. The insects often have a mutualistic relationship with ants, which feed on the honeydew and protect them from predators. There are about 8,000 described species. The oldest fossils of the group date to the Late Jurassic, preserved in amber. They were already substantially diversified by the Early Cretaceous suggesting an earlier origin during the Triassic or Jurassic. Their closest relatives are the jumping plant lice, whiteflies, phylloxera bugs and aphids. The majority of female scale insects remain in one place as adults, with newly hatched nymphs, known as "crawlers", being the only mobile life stage, apart from the short-lived males. The reproductive strategies of many species include at least some amount of asexual reproduction by parthenogenesis. Some scale insects are serious commercial pests, notably the cottony cushion scale (Icerya purchasi) on Citrus fruit trees; they are difficult to control as the scale and waxy covering protect them effectively from contact insecticides. Some species are used for biological control of pest plants such as the prickly pear, Opuntia. Others produce commercially valuable substances including carmine and kermes dyes, and shellac lacquer. The two red colour-names crimson and scarlet both derive from the names of Kermes products in other languages. Description Scale insects vary dramatically in appearance, from very small organisms (1–2 mm) that grow beneath wax covers (some shaped like oysters, others like mussel shells), to shiny pearl-like objects (about 5 mm), to animals covered with mealy wax. Adult females are almost always immobile (apart from mealybugs) and permanently attached to the plant on which they are feeding. They secrete a waxy coating for defence, making them resemble reptilian or fish scales, and giving them their common name. The key character that sets apart the Coccomorpha from all other Hemiptera is the single segmented tarsus on the legs with only one claw at the tip. The group is extremely sexually dimorphic; female scale insects, unusual for Hemiptera, retain the immature external morphology even when sexually mature, a condition known as neoteny. Adult females are pear-shaped, elliptical or circular, with no wings, and usually no constriction separating the head from the body. Segmentation of the body is indistinct, but may be indicated by the presence of marginal bristles. Legs are absent in the females of some families, and when present vary from single segment stubs to five-segmented limbs. Female scale insects have no compound eyes, but ocelli (simple eyes) are sometimes present in Margarodidae, Ortheziidae and Phenacoleachiidae. 
The family Beesoniidae lacks antennae, but other families possess antennae with from one to thirteen segments. The mouthparts are adapted for piercing and sucking. Adult males in contrast have the typical head, thorax and abdomen of other insect groups, and are so different from females that pairing them as a species is challenging. They are usually slender insects resembling aphids or small flies. They have antennae with nine or ten segments, compound eyes (Margarodidae and Ortheziidae) or simple eyes (most other families), and legs with five segments. Most species have wings, and in some, generations may alternate between being winged and wingless. Adult males do not feed, and die within two or three days of emergence. In species with winged males, generally only the forewings are fully functional. This is unusual among insects; it most closely resembles the situation in the true flies, the Diptera. However, the Diptera and Hemiptera are not closely related, and do not closely resemble each other in morphology; for example, the tail filaments of the Coccomorpha do not resemble anything in the morphology of flies. The hind (metathoracic) wings are reduced, commonly to the point that they can easily be overlooked. In some species the hind wings have hamuli, hooklets, that couple the hind wings to the main wings, as in the Hymenoptera. The vestigial wings are often reduced to pseudo-halteres, club-like appendages, but these are not homologous with the control organs of Diptera, and it is not clear whether they have any substantial control function. Hermaphroditism is very rare in insects, but several species of Icerya exhibit an unusual form. The adult possesses an ovotestis, consisting of both female and male reproductive tissue, and sperm is transmitted to the young for their future use. The fact that a new population can be founded by a single individual may have contributed to the success of the cottony cushion scale which has spread around the world. Life cycle Female scale insects in more advanced families develop from the egg through a first instar (crawler) stage and a second instar stage before becoming adult. In more primitive families there is an additional instar stage. Males pass through a first and second instar stage, a pre-pupal and a pupal stage before adulthood (actually a pseudopupa, as only holometabolous insects have a true pupa). The first instars of most species of scale insects emerge from the egg with functional legs, and are informally called "crawlers". They immediately crawl around in search of a suitable spot to settle down and feed. In some species they delay settling down either until they are starving, or until they have been blown away by wind onto what presumably is another plant, where they may establish a new colony. There are many variations on such themes, such as scale insects that are associated with species of ants that act as herders and carry the young ones to protected sites to feed. In either case, many such species of crawlers, when they moult, lose the use of their legs if they are female, and stay put for life. Only the males retain legs, and in some species wings, and use them in seeking females. To do this they usually walk, as their ability to fly is limited, but they may get carried to new locations by the wind. Adult females of the families Margarodidae, Ortheziidae and Pseudococcidae are mobile and can move to other parts of the host plant or even adjoining plants, but the mobile period is limited to a short period between moults. 
Some of these overwinter in crevices in the bark or among plant litter, moving in spring to tender young growth. However, the majority of female scale insects are sedentary as adults. Their dispersal ability depends on how far a crawler can crawl before it needs to shed its skin and start feeding. There are various strategies for dealing with deciduous trees. On these, males often feed on the leaves, usually beside the veins, while females select the twigs. Where there are several generations in the year, there may be a general retreat onto the twigs as fall approaches. On branches, the underside is usually preferred as giving protection against predation and adverse weather. The solenopsis mealybug feeds on the foliage of its host in summer and the roots in winter, and large numbers of scale species feed invisibly, year-round on roots. Reproduction and the genetics of sex determination Scale insects show a very wide range of variations in the genetics of sex determination and the modes of reproduction. Besides sexual reproduction, a number of different forms of reproductive systems are employed, including asexual reproduction by parthenogenesis. In some species, sexual and asexual populations are found in different locations, and in general, species with a wide geographic range and a diversity of plant hosts are more likely to be asexual. Large population size is hypothesized to protect an asexual population from becoming extinct, but nevertheless, parthenogenesis is uncommon among scale insects, with the most widespread generalist feeders reproducing sexually, the majority of these being pest species. Many species have the XX-XO system where the female is diploid and homogametic while the male is heterogametic and missing a sex chromosome. In some Diaspididae and Pseudococcidae, both sexes are produced from fertilized eggs but during development males eliminate the paternal genome and this system called paternal genome elimination (PGE) is found in nearly 14 scale insect families. This elimination is achieved with several variations. The commonest (known as the lecanoid system) involved deactivation of the paternal genome and elimination at the time of sperm production in males, this is seen in Pseudococcidae, Kerriidae and some Eriococcidae. In the other variant or Comstockiella system, the somatic cells have the paternal genome untouched. A third variant found in Diaspididae involves the paternal genome being completely removed at an early stage making males haploid both in somatic and germ cells even though they are formed from diploids, i.e., from fertilized eggs. In addition to this there is also true haplodiploidy with females born from fertilized eggs and males from unfertilized eggs. This is seen in the genus Icerya. In Parthenolecanium, males are born from unfertilized eggs but diploidy is briefly restored by fusion of haploid cleave nuclei and then one sex chromosome is lost through heterochromatinization. Females can reproduce parthenogenetically with six different variants based on whether males are entirely absent or not (obligate v. facultative parthenogenesis); the sex of fertilized v. unfertilized eggs; and based on how diploidy is restored in unfertilized eggs. The evolution of these systems are thought to be the result of intra-genomic conflict as well as possibly inter-genomic conflict with endosymbionts under varied selection pressures. The diversity of systems has made scale insects ideal models for research. 
Ecology Scale insects are an ancient group, having originated in the Cretaceous, the period in which angiosperms came to dominance among plants, with only a few groups species found on gymnosperms. They feed on a wide variety of plants but are unable to survive long away from their hosts. While some specialise on a single plant species (monophagous), and some on a single genus or plant family (oligophagous), others are less specialised and feed on several plant groups (polyphagous). The parasite biologist Robert Poulin notes that the feeding behaviour of scale insects closely resembles that of ectoparasites, living on the outside of their host and feeding only on them, even if they have not traditionally been so described; in his view, those species that remain immobile on a single host and feed only on it behave as obligate ectoparasites. For example, cochineal species are restricted to cactus hosts, and the gall-inducing Apiomorpha are restricted to Eucalyptus. Some species have certain habitat requirements; some Ortheziidae occur in damp meadows, among mosses and in woodland soil, and the boreal ensign scale (Newsteadia floccosa) inhabits plant litter. A Hawaiian mealybug Clavicoccus erinaceus that fed solely on the now critically endangered Abutilon sandwicense has gone extinct as has another species Phyllococcus oahuensis. Several other monophagous scale insects, especially those on islands, are threatened by coextinction due to threats faced by their host plants. Most scale insects are herbivores, feeding on phloem sap drawn directly from the plant's vascular system, but a few species feed on fungal mats and fungi, such as some species in the genus Newsteadia in the family Ortheziidae. Plant sap provides a liquid diet which is rich in sugar and non-essential amino acids. In order to make up for the shortage of essential amino acids, they depend on endosymbiotic proteobacteria. Scale insects secrete a large quantity of sticky viscid fluid known as "honeydew". This includes sugars, amino acids and minerals, and is attractive to ants as well as acting as a substrate on which sooty mould can grow. The mould can reduce photosynthesis by the leaves and detracts from the appearance of ornamental plants. The scale's activities can result in stress for the plant, causing reduced growth and giving it a greater susceptibility to plant diseases. Scale insects in the genus Cryptostigma live inside the nests of neotropical ant species. Many tropical plants need ants to survive which in turn cultivate scale insects thus forming a tripartite symbiosis. Some ants and scale insects have a mutualistic relationship; the ants feed on the honeydew and in return protect the scales. On a tulip tree, ants have been observed building a papery tent over the scales. In other instances, scale insects are carried inside the ant's nest; the ant Acropyga exsanguis takes this to an extreme by transporting a fertilised female mealybug with it on its nuptial flight, so that the nest it founds can be provisioned. This provides a means for the mealybug to be dispersed widely. Species of Hippeococcus have long clinging legs with claws to grip the Dolichoderus ants which tend them; they allow themselves to be carried into the ant colony. Here the mealybugs are safe from predation and environmental hazards, while the ants have a source of nourishment. 
Another species of ant maintains a herd of scale insects inside the hollow stems of a Barteria tree; the scale insects feed on the sap and the ants, while benefiting from the honeydew, drive away other herbivorous insects from the tree as well as preventing vines from smothering it. Scale insects have various natural enemies, and research in this field is largely directed at the species that are crop pests. Entomopathogenic fungi can attack suitable scales and completely overgrow them. The identity of the host is not always apparent as many fungi are host-specific, and may destroy all the scales of one species present on a leaf while not affecting another species. Fungi in the genus Septobasidium have a more complex, mutualistic relationship with scale insects. The fungus lives on trees where it forms a mat which overgrows the scales, reducing the growth of the individual parasitised scales and sometimes rendering them infertile, but protecting the scale colony from environmental conditions and predators. The fungus benefits by metabolising the sap extracted from the tree by the insects. Natural enemies include parasitoid wasps, mostly in the families Encyrtidae and Eulophidae, and predatory beetles such as fungus weevils, ladybirds and sap beetles. Ladybirds feed on aphids and scale insects, laying their eggs near their prey to ensure their larvae have immediate access to food. The ladybird Cryptolaemus montrouzieri is known as the "mealybug destroyer" because both adults and larvae feed on mealybugs and some soft scales. Ants looking after their providers of honeydew tend to drive off predators, but the mealybug destroyer has outwitted the ants by developing cryptic camouflage, with their larvae mimicking scale larvae. Significance As pests Many scale species are serious crop pests and are particularly problematic for their ability to evade quarantine measures. In 1990, they caused around $5 billion of damage to crops in the United States. The waxy covering of many species of scale protects their adults effectively from contact insecticides, which are only effective against the first-instar nymph stage known as the crawler. However, scales can often be controlled using horticultural oils that suffocate them, systemic pesticides that poison the sap of the host plants, or by biological control agents such as tiny parasitoid wasps and ladybirds. Insecticidal soap may also be used against scales. One species, the cottony cushion scale, is a serious commercial pest on 65 families of woody plants, including Citrus fruits. It has spread worldwide from Australia. As biological controls At the same time, some kinds of scale insects are themselves useful as biological control agents for pest plants, such as various species of cochineal insects that attack invasive species of prickly pear, which spread widely especially in Australia and Africa. Products Some types of scale insect are economically valuable for the substances they can yield under proper husbandry. Some, such as the cochineal, kermes, lac, Armenian cochineal, and Polish cochineal, have been used to produce red dyes for coloring foods and dyeing fabrics. Both the colour name "crimson" and the generic name Kermes are from Italian carmesi or cremesi for the dye used for Italian silk textiles, in turn from the Persian qirmizī (قرمز), meaning both the colour and the insect. The colour name "scarlet" is similarly derived from Arabic siklāt, denoting extremely expensive luxury silks dyed red using kermes. 
Some waxy scale species in the genera Ceroplastes and Ericerus produce materials such as Chinese wax, and several genera of lac scales produce shellac. Evolution The containing group of the scale insects was formerly treated as the superfamily Coccoidea, but taxonomic uncertainties have led workers to prefer the infraorder Coccomorpha as the name for the group. Scale insects are members of the Sternorrhyncha. The phylogeny of the extant groups, inferred from analysis of small subunit (18S) ribosomal RNA, is shown in the first cladogram. The timing of phylogenetic diversification within the Coccomorpha was estimated in a 2016 study based on molecular clock divergence time estimates, with fossils used for calibration. The authors suggested that the main scale insect lineages diverged before their angiosperm hosts, and that the insects switched from feeding on gymnosperms once the angiosperms became common and widespread in the Cretaceous. They estimated that the Coccomorpha appeared at the start of the Triassic period, around 245 million years ago, and that the neococcoids appeared during the Early Jurassic, some 185 million years ago. Scale insects are very well represented in the fossil record, with the oldest known member of the group reported from the Late Jurassic amber from Lebanon. They are abundantly preserved in amber from the Early Cretaceous, 130 mya, onwards; they were already highly diversified by Cretaceous times. All the families were monophyletic except for the Eriococcidae. The Coccomorpha are divided into two clades, the "Archaeococcoids" and the "Neococcoids". The archaeococcoid families have adult males with either compound eyes or a row of unicorneal eyes, and have abdominal spiracles in the females. In neococcoids, the females have no abdominal spiracles. In the cladogram below, the genus Pityococcus is moved to the "Neococcoids". A cladogram showing the major families using this methodology is shown below. Recognition of scale insect families has fluctuated over time, and the validity of many remains in flux; several recognized families not included in the phylogeny presented above, including extinct groups, are listed below: Archecoccoidea Apticoccidae Arnoldidae Burmacoccidae Callipappidae Coelostomidiidae Electrococcidae Grimaldiellidae Grohnidae Hammanococcidae Jankotejacoccidae Jersicoccidae Kozariidae Kukaspididae Kuwaniidae Labiococcidae Lebanococcidae Lithuanicoccidae Macrodrilidae Marchalinidae Margarodidae Matsucoccidae Monophlebidae Ortheziidae Pennygullaniidae Phenacoleachiidae Pityococcidae Putoidae Serafinidae Steingeliidae Stigmacoccidae Termitococcidae Weitschatidae Xylococcidae Neococcoidea Aclerdidae Albicoccidae Asterolecaniidae Beesoniidae Calycicoccidae Carayonemidae Cerococcidae Cissococcidae Coccidae Conchaspididae Cryptococcidae Dactylopiidae Diaspididae Eriococcoidae Halimococcidae Hodgsonicoccidae Inkaidae Kermesidae Kerriidae Lecanodiaspididae Micrococcidae Phoenicococcidae Porphyrophoridae Pseudococcidae Rhizoecidae Stictococcidae Tachardiidae
Biology and health sciences
Hemiptera (true bugs)
null
673530
https://en.wikipedia.org/wiki/Balto
Balto
Balto ( – March 14, 1933) was an Alaskan husky and sled dog belonging to musher and breeder Leonhard Seppala. He achieved fame when he led a team of sled dogs driven by Gunnar Kaasen on the final leg of the 1925 serum run to Nome, in which diphtheria antitoxin was transported from Anchorage, Alaska, to Nenana, Alaska, by train and then to Nome by dog sled to combat an outbreak of the disease. Balto's celebrity status, and that of Kaasen's, resulted in a two-reel motion picture, a statue in Central Park, and a nationwide tour on the vaudeville circuit. A falling out between Seppala and Kaasen resulted in Balto and his teammates being sold under disputed circumstances to a traveling circus operator and ultimately housed in squalor at a dime museum in Los Angeles. When news stories emerged in February 1927 about his poor living conditions, a two-week fundraising effort in Cleveland, Ohio, led to the successful purchase of Balto and his team by the citizenry of Cleveland. Balto lived in ease at the Brookside Zoo until his death on March 14, 1933, at the age of 14; his body was subsequently mounted and displayed in the Cleveland Museum of Natural History, where it remains to this day. While the subject of numerous cultural depictions and homages, including a 1995 animated film, Balto's role in the serum run remains controversial as contemporary media coverage focused almost entirely on him over the efforts of the other mushers and dogs—most notably, Seppala and his lead dog Togo—and has more recently undergone historical reappraisals. Life Early years Little is known about Balto's early years. Balto's birth year is commonly recognized as 1919, in Nome, Alaska, at the kennels of Leonhard Seppala, a native Norwegian, sled dog breeder, musher and competitive racer. He was named after Samuel Balto, a Sámi who was part of Fridtjof Nansen's exploration of Greenland in 1888, and whom Seppala admired. No birth records were kept for Balto or his litter as his body type did not align with other racing huskies that Seppala was breeding. The only evidence of Balto's birth year came from later interviews with Seppala. With a largely black fur coat, Balto had a small, stocky build, unique for a Siberian husky. Believing Balto to be "second rate" and not holding much potential, Seppala neutered him at six months of age. He considered him a "scrub dog", unable to run as fast as his other dogs, who were derisively called "Siberian rats" by mushers against whom Seppala competed. Seppala claimed in his memoir to have "given [Balto] every chance" to ride with his primary sled dog team "but could not qualify"; thus, Balto was relegated to haul freight and large cargo for short runs and was part of a team that pulled railcars with miners over a disused railroad. Gunnar Kaasen, another native Norwegian and a close family friend of Seppala with 21 years' dog sledding experience, came to know Balto through his work at Seppala's mining company. Kaasen believed Seppala misjudged Balto's potential and that the dog's short stature could allow him to be more strong and steady. The serum run In January 1925, doctors realized that a potentially deadly diphtheria epidemic was poised to sweep through the young people of Nome, Alaska, placing the city under quarantine. Dr. Curtis Welch, the primary physician in Nome, transmitted via Morse code that the town's existing serum, which was over six years old, was being depleted. 
Additional serum was made available in Anchorage, but the territory's only two usable aircraft had open cockpits and were thus grounded for the winter. After considering all the alternatives, officials decided to have the serum ferried via multiple dog sled teams over the "Seward-to-Nome Trail". The serum was transported by train from Anchorage to Nenana, where the first musher embarked as part of a relay. More than 20 mushers took part, facing a blizzard with temperatures and strong winds. Originally projected to arrive in Nome by February 6, the date was moved up several times as the teams repeatedly broke land speed records. News coverage of the event, in particular the hazards posed to the dogs and the leaders, was relayed worldwide; newspaper headlines read; "Relief Nears Nome!", "Dog Teams in Race with Death in Far North" and "Seppalla ... May Save Diphtheria Victims". As the serum run progressed, additional teams were recruited as Alaskan governor Scott Cordelle Bone worried about Seppala's team experiencing fatigue. Kaasen was appointed to drive a team of Seppala's dogs originally set aside for company business during the run, with Fox chosen by Seppala as the leader. Kaasen, however, chose Balto to co-lead alongside Fox, a move Seppala later disagreed with as he felt Balto was not worthy to be a lead dog. Balto had been largely untried as a sled dog prior to the run, but Kaasen expressed confidence in Balto's abilities and likely identified with him. The serum package was handed to Kaasen by Charlie Olson in Bluff at 10:00 p.m. on February 1. The blizzard quickly began to bear down on the team, causing them to become lost and confused. This prompted Kaasen to move Balto to the lead, yelling at him, "Go home, Balto." Balto's ability to pull heavy freight allowed him to steadily navigate the team through the storm; at one point, Balto stopped in front of a patch of ice on the Topkok River that broke underneath him, saving Kaasen's life along with the entire team. Kaasen suffered frostbite after his sled flipped and the serum package fell into the snow, forcing him to search bare-handed for it. Kaasen and his team arrived in Point Safety ahead of schedule, but found the last team of the run was not ready and the roadhouse they lodged in was dark. Ed Rohn, the leader of this final team, was asleep at the time under the impression Kaasen had been halted in nearby Solomon, a settlement Kaasen rode past without visibly recognizing due to the poor weather. Kaasen decided not to wake him up and continue on, knowing it would take time for Rohn to prepare and risk putting additional dogs in harm's way. Despite suffering from exposure and exhaustion, Kaasen and Balto traveled the remaining to Nome, and arrived at Front Street on February 2, 1925, at 5:30 a.m. While frozen solid, all 300,000 units of the antitoxin were intact, and Kaasen handed them over to be thawed for use by midday. Four of Kaasen's dogs were partially frozen when they arrived; one newspaper dispatch erroneously stated Balto and the majority of the team died several days later from frozen lungs, and was immediately retracted shortly after publication. Seppala reached Nome two days later and praised Kaasen for having continued on through blizzard conditions. Kaasen gave all credit to Balto, telling a United Press reporter, "I gave Balto, my lead dog, his head and trusted to him. He never once faltered ... [i]t was Balto who led the way, the credit is his." After reaching Dr. 
Welch's office to deliver the serum, Kaasen tended to Balto, hugging him and purportedly repeating, "Damn fine dog ... damn fine dog." On the U.S. Senate chamber floor several days later, Washington Senator Clarence Dill recognized the efforts of everyone who helped with the serum run but cited Balto in particular, saying, "[t]his black Siberian dog, through the darkness and storm, crossed this icy desert and kept the trail when no human being could possibly have found the way." The H. K. Mulford Company, one of the manufacturers of the serum units, awarded Kaasen a $1,000 prize alongside inscribed medals, which were given to all the mushers. Post-race fame: movies, statues, vaudeville and sale to a sideshow Newspapers attributed the feats achieved during the serum run almost exclusively to Balto, eclipsing the efforts of the 18 other mushers and 150 sled dogs who participated. The death toll in Nome was seven people—not counting Alaskan Natives who were not recorded—adding further to the media sensation as the diphtheria epidemic was seemingly averted. When the New York Daily News published exclusive photos of Kaasen's arrival in Nome, Balto was pictured directly in the foreground of the entire team; these photos were later revealed to be staged recreations made hours after Kaasen arrived. The recent adoption of radio in the contiguous United States also meant dispatches from Nome had been relayed to radio stations throughout the country. As 1925 ended, Balto was credited in news coverage as having accomplished the entire serum run by himself, a misconception that persisted long after his death. Film producer Sol Lesser promptly signed Kaasen, Balto and the team of "thirteen half wolves" to a contract with Educational Pictures for a movie based on the serum run. Film production began in April 1925 in Los Angeles. Upon arriving in the city, Balto was presented with the "bone of the city" by the mayor of Los Angeles, along with other dignitaries including actress Mary Pickford. The two-reel movie, Balto's Race to Nome, debuted the following month to positive reviews; it is now considered a lost film. Shortly after the film's release, Kaasen sued Lesser for unpaid wages; Lesser then sold the existing contract to the vaudeville circuit. Kaasen and Balto soon traveled across the country, making public appearances and receiving gifts from the cities they visited. In one instance, while visiting Cleveland, Ohio, Kaasen was awarded a subscription to The Plain Dealer as a gift from an existing subscriber, to be delivered to his home in Nome. A statue of Balto, sculpted by Frederick Roth, was erected in New York City's Central Park on December 17, 1925, ten months after Balto's arrival in Nome. Balto modeled in front of Roth and was present for the monument's unveiling. The statue is located on the main path leading north from the Tisch Children's Zoo. In front of the statue, a low-relief slate plaque depicts Balto's sled team and bears the inscription, "[d]edicated to the indomitable spirit of the sled dogs that relayed antitoxin six hundred miles over rough ice, across treacherous waters, through Arctic blizzards from Nenana to the relief of stricken Nome in the winter of 1925: endurance, fidelity, intelligence". Seppala had been "amazed and vastly amused" at Balto and Kaasen's celebrity statuses, but was displeased that the attention overlooked his lead dog Togo, who went through the run's longest and most dangerous part. 
Seppala made a similar cross-country tour with Togo and his teammates in 1926, including a gala ice-rink appearance at Madison Square Garden, believing that Togo had been deprived of fame and acclaim. Before relocating to Poland Spring, Maine, in March 1927, Seppala claimed Fox was the actual leader of Kaasen's team and had failed to get proper credit because Fox's name was more common and would not stand out in newspaper headlines the way Balto's did. A February 1932 interview Seppala had with Henry McLemore furthered this, claiming a newspaper reporter simply chose Balto as "the lead dog ... that brought the serum in" after multiple names were offered by Seppala; as he was still riding to Nome with Togo at the time, this is likely anachronistic. The "Vaccine Research Association" unsuccessfully called for the Central Park statue's removal in 1931, citing a 1929 interview where Seppala claimed all the dramatic events surrounding the run were fabricated to sell newspapers. Unwilling to show disrespect to a sled dog, Seppala partly backtracked from these claims in his memoir. After the dispute with Lesser was resolved, Balto and his teammates were sold to Sam Houston, owner of a traveling circus. The exact circumstances of the sale are unclear: some accounts, including that of Houston himself, claimed Kaasen sold the dogs after tiring of the constant traveling and moved back to Alaska. Other accounts claimed Seppala made the deal with Houston and ordered Kaasen—who was still in his employ at the Pioneer Mining Company—back to Alaska. Seppala claimed in his memoir that he sold the dogs to Lesser, with Balto selling for much more "on account of the publicity given to his 'glorious achievements'". Kaasen and Seppala never spoke to each other again. Kaasen's departure occurred after the Central Park statue unveiling; upon returning to Nome one year after the run, he found himself alienated by its residents over his fame, with some expressing resentment over the bypassing of Ed Rohn. By May 1947, Seppala dismissed the serum run as little more than "just an ordinary hard run" and Balto's fame as "a product of modern publicity rather than of outstanding merit ... Balto was just a good average dog". Cleveland fundraising effort and purchase Balto and his team continued on tour throughout much of 1926 under the ownership of Sam Houston in both his circus and theatre circuits. By February 1927, stories emerged of Balto and six teammates living in the back room of a "for men only" dime museum in Los Angeles, also described as a freak show. After leaving the vaudeville circuit, Balto and his team briefly resided at a farm, only to be taken back to the city after misbehaving and entering a chicken coop. Balto and his teammates were displayed chained to a sled, with their only exercise consisting of brief trips in the museum's back alley. They were malnourished, with their ribs showing. Jack Wooldridge of the Oakland Tribune wrote about the mistreatment, "[t]here probably was never a more dejected, sorrowful looking lot of malamutes than these as they now appear. Balto will never see the snow again. He's simply an exhibit in a museum." Cleveland businessman George Kimble visited the dime museum while in Los Angeles after noticing a sign outside advertising "Balto the wonder dog". Outraged at seeing Balto and his teammates in poor health, Kimble offered to buy the dogs from Sam Houston, who was willing to sell but demanded $2,000, more than Kimble could personally afford. 
Kimble reached out to area businessmen and elected officials, along with The Plain Dealer, and assembled the Cleveland Balto Committee, led by Common Pleas Judge James B. Ruhl, which negotiated with Houston. After Houston agreed to sell the dogs for $1,500, a fund-raising campaign was formally announced in the March 1, 1927, Plain Dealer, and the Brookside Zoo promised to create lodging for the dogs. The campaign raised $200 on its first day; a ten-day option was obtained, and the dogs were temporarily relocated to a ranch as a foster home. The Plain Dealer carried daily tallies of donations to the campaign. Donations came from all over the city, with Cleveland schoolchildren dropping loose change in buckets and offering their milk money; besides children, bank employees, offices and nonprofit institutions all made donations. Within four days, the committee grew from seven members to seventeen. Area kennel clubs, shops and hotels also made contributions. Appeals to donate were broadcast over radio stations WDBK, WHK and WTAM, along with stations in Detroit and elsewhere; one response came from Japan after a listener there heard an appeal over WJZ in New York by long-distance reception. Three models for the William Taylor & Son department store were driven around downtown Cleveland promoting the campaign. The Los Angeles Alaskan Society subsequently offered to buy the team if the $2,000 could not be raised in time by the Cleveland effort, as the ten-day option had been publicized in the Los Angeles Daily Times. By the evening of March 8, $1,517 had been raised, prompting one last-minute appeal by the Plain Dealer; the following morning, the fund surpassed the $2,000 goal, totaling $2,245.88 and securing the purchase of the entire seven-dog team. The effort won the praise of Roald Amundsen, who compared it to the city of Oslo adopting the lone surviving dog from his expedition to the South Pole. A Plain Dealer editorial on the campaign's success read, "[t]he city which honors a worthy dumb animal honors itself. Cleveland looks forward to welcoming its Alaskan guests a few days hence and hopes their life here may be long and pleasant." Balto and his six teammates—Alaska Slim, Fox, Tillie, Billie, Old Moctoc and Sye—were transported by train from Los Angeles to Cleveland along with identification papers; arriving March 16, the dogs were escorted to temporary quarters at Brookside Zoo. A grand parade took place at the Public Square on March 19, 1927, which the city designated as "Balto Day". Despite rainy conditions, thousands of people were present as the team pulled a sled modified with iron wheels, making it navigable on cobblestone streets and streetcar tracks; two local Boy Scout troops carried signs announcing Balto's arrival and a map of the serum run, while five local people served as "sourdough" escorts. 
Owing to the Brookside Zoo's location in a valley, the team would pull sleds during winter weather conditions; one snowfall in early January 1928 turned the zoo's boulevard into an icy trail, with Balto and Fox alternating lead. The Plain Dealer occasionally anthropomorphized its depictions of Balto at the zoo, including an encounter with a visiting husky and his owner from Manitoba. Another 1929 story centered on his "daydreaming" of Richard E. Byrd's expedition to Antarctica; this resulted in multiple letters to the editor that criticized the enclosure and expressed concern about the welfare of the dogs. One letter expressed regret for contributing to the "Balto to Cleveland" fund as "one of the most inhuman acts we could have performed". One letter written decades later recalled visiting the zoo on a hot day, with Balto tied to a tree in front of a water pan "with a few drops of water in it". Even with these criticisms, the conditions at the zoo were generally seen as "excellent". Zoo staff frequently sprayed the dogs to discourage fleas, their steam-heated kennel had a purpose-built shower for nightly cleaning, and the dogs had a respectable diet of meat in the morning, nightly dog biscuits and plentiful access to water. Zoo superintendent John Kramer defended the zoo's treatment of the dogs, particularly of Fox, saying "people don't understand why we do certain things here... you can't please them all." Another enclosure, meant for the summer months, was built for the dogs in 1930. This enclosure included a bronze tablet on top of a granite monument located in front of Balto's cage. Bearing the names of the entire seven-dog team, the monument was intended as a shrine for all animal lovers of Greater Cleveland. The dogs lived out the remainder of their lives at the zoo: Billie was the first to die, followed by Fox. Between 1930 and 1933, Alaska Slim, Tillie and Old Moctoc all died, leaving Balto and Sye as the only members of the team remaining. Death, mounting and display Balto died on March 14, 1933, at the age of 14. News of Balto's declining health had been published four days earlier; he had lost his sight and was suffering from decreased mobility and paralysis. Because of his advanced age, the city's veterinarian and zoo personnel estimated he would not be able to survive the week. Balto's death was attributed to both an enlarged heart and bladder, the former as a result of stress incurred from the serum run. The following day, the Cleveland Museum of Natural History (CMNH) agreed to display Balto in taxidermy form. Balto's mounting cost $50, a sum again raised through a fund-raising campaign; the process, which involved placing Balto's skin and fur over a lifelike form as an effigy, was finished by that May. Balto's thyroid and adrenal glands were preserved at the Cleveland Clinic in George Washington Crile's organ collection. Sye, the last of the seven dogs, was reportedly crestfallen over Balto's death, moaning, howling, and refusing to eat. Sye died on March 25, 1934, one year after Balto, and was the only dog of the group to sire offspring. As was the case with Balto, Sye's remains were mounted for display by the zoo, initially over the zoo's tiger enclosure. By 1965, neither the zoo nor CMNH could locate the remains of Sye, which are now presumed lost. Sye, Balto and Togo were the only three dogs that participated in the serum run to have had their remains mounted. 
The monument that was erected at the zoo for the dogs, retroactively regarded as a gravestone, was taken out of public display after Balto died. As other zoo buildings were subsequently erected on the site of the former enclosure for Balto's team, the exact sites of the graves of Billie, Fox, Moctoc, Slim and Tillie are now unknown. Initially displayed, then placed in storage for several years, Balto was again put on public display in March 1940, coinciding with a dog show taking place at the Public Auditorium. Displays of Balto were intermittent in the years that followed, with his remains placed in cold storage at all other times. CMNH had so many animals in its collection that it became difficult to display Balto with greater frequency; in 1975, the Plain Dealer noted Balto's absence as the 50th anniversary of the serum run approached, prompting CMNH to arrange an exhibition. By 2000, CMNH had made Balto the centerpiece of exhibits about the serum run and Inuit people, putting him on permanent display; wildlife resources director Harvey Webster said, "he's an icon ... [the serum run is] a story about the remarkable confluences of men and dogs who did the seemingly impossible in short order." As part of a larger $150 million renovation project, Balto's remains were refurbished and reinstalled in CMNH's new Visitor Hall, which opened on October 15, 2023. Balto is among the eight most iconic museum specimens represented in the Hall. Return visits to Alaska In early 1998, 22 second- and third-grade students at Butte Elementary School in Palmer, Alaska, began a letter and petition drive to return Balto to Alaska after student Cody McGinn did a book report and discovered his remains were in Cleveland. Teacher Dwight Homstad viewed Balto's custody as a two-sided issue and said that the students wanted to show the emotional attachment Alaskans still had toward Balto. Alaskan governor Tony Knowles endorsed the effort, writing to Homstad's class, "During a time of great need in Alaska's history, Balto persevered through treacherous and perilous conditions to save the lives of many Alaskans." Homstad also contracted for a shipping crate, containing the petitions and a video of the students writing them, to be transported to CMNH. By July 1998, the Alaska State Legislature had passed a formal proclamation supporting Balto's return to Alaska. Homstad also offered the idea of a trade or barter with CMNH for Balto. CMNH declined both the request for a permanent return and the request for shared custody (the latter advocated by McGinn), citing Balto's purchase by the people of Cleveland, the fact that Balto spent 60 percent of his life in Cleveland, and the fragile condition of his remains; one taxidermist estimated that, if properly cared for, a mounted specimen like Balto's can last for up to two lifetimes. Despite the initial refusal, the effort was soon publicized internationally, with coverage in both People and CNN; one museum trustee learned of the dispute while on vacation in Indonesia. CMNH announced in August 1998 that Balto would be loaned for six months to the Anchorage Museum of History and Art, which paid a substantial sum to insure his mount. The Anchorage Museum had previously sought to have Balto displayed in an exhibit tied to the 1988 Iditarod Trail Sled Dog Race and had been in negotiations with CMNH earlier in the year about a loan. Balto was placed in a special crate for the trip to Anchorage with the label "Contents: One Hero Dog", and a CMNH curator was present at the museum for the exhibition's duration. 
A second exhibition of Balto took place at the Anchorage Museum between March and May 2017; again, a CMNH registrar accompanied Balto, who was placed in a climate-controlled crate on the flight to Alaska. Balto and Togo were displayed side-by-side as part of the 2017 exhibit. Legacy Controversy, rivalries and reevaluation Controversy continues to surround Balto's celebrity status. Mushers have cast doubt on claims that Balto truly led Kaasen's team, based primarily on his prior track record. No records exist of Seppala ever having used him as a leader in runs or races prior to 1925, and Seppala himself stated Balto "was never in a winning team" and was a "scrub dog". The pictures and film of Kaasen and Balto in Nome were recreated hours after their arrival, once the sun had risen. Speculation still exists as to whether Balto's position as lead was genuine or staged for media purposes, Balto being a more newsworthy and appealing name than Fox. One 1927 newspaper editorial, published after Seppala's claim that Fox actually led Kaasen's team, read, "[w]hether 'Balto' or 'Fox' matters little. The performance stands and the nation-wide emotion which it aroused is recorded in history. Somebody else wrote the works of Shakespeare, Homer was not the author of the Iliad. The lead dog on that historic trip to Nome was not Balto. What does it matter?" Balto remains more famous to the general public due to the long-held misconceptions about his role. While some historians note it is possible Balto led Kaasen's team, at most he likely ran co-lead with Fox rather than single-lead by himself. The aftermath of the serum run and the fame awarded to Balto and Kaasen initiated a feud between Seppala, Kaasen and Ed Rohn that lasted for the rest of their lives and has continued to the present day. Decorated mushers and others in the surrounding area—including Rohn, based on conversations the two men had before leaving Nome—believed that Kaasen's decision to not wake Rohn at Point Safety was motivated by a desire to grab the glory for himself. Conversely, supporters of Kaasen argue Rohn was inexperienced with mushing in severe weather. Despite being the subject of widespread fame alongside Balto, Kaasen rarely spoke about the run in later years and was reluctant to make public appearances. Other mushers and residents also died without providing a full account of their respective roles, making it unlikely the facts will ever be known. The contribution of Alaska Natives, whose teams traveled the majority of the run, is also heavily obscured: although they contributed to the area economy and there were no language barriers, reporters and filmmakers were uninterested in their feats. Many of them died prior to the 1970s, when efforts were made to better preserve Alaskan history, and surviving mushers were given honorary "number one" designations in the early years of the Iditarod Trail Sled Dog Race. Balto vs. Togo The overlooking of Togo in popular culture has drawn the displeasure of mushers, some of whom have reared dogs with bloodlines traced directly to Seppala's dogs and Togo specifically, something Balto could not contribute to, having been neutered. Historian Jeff Dinsdale viewed the narrative around Balto as "heavily dependent on fantasy [that] evolved" to usurp Togo's feats and called Togo "the greatest sled dog of all time, sort of the Gordie Howe of dog sledding". 
Writer Kenneth Ungermann argued Balto's outsized fame was more a symbol of the feats achieved during the run, writing, "[t]o the American public, the glorified husky was representative of Jack, Dixie, Togo, and every other leader and dog that helped carry the antitoxin and hope to the people of Nome." In a 2020 op-ed for the Anchorage Daily News, historian David Reamer criticized Balto Seppala Park in Anchorage for fostering "the misconception of Balto as the singular hero dog of Nome" and "[a]ny opportunity is a good opportunity to spread the worthy truth of Togo"; Reamer praised the movie Togo for remedying "a historical misjustice". In ranking the top 10 heroic animals for Time, Katy Steinmetz placed Togo at number one, writing, "the dog that often gets credit ... is Balto, but he just happened to run the last ... leg in the race. The sled dog who did the lion's share of the work was Togo." The National Park Service credits Togo for having "led his team across the most dangerous leg of the journey... though Balto received the credit for saving the town, to those who know more than , Balto is considered the backup dog". CMNH has recognized Togo as "a superb leader... courageous and strong, smart and possessing an exceptional ability to find the trail and sense danger". The Cleveland Metroparks Zoo (successor to the Brookside Zoo) unveiled companion statues of Balto and Togo in 1997. In 2001, a statue of Togo was unveiled at Seward Park in New York City's Lower East Side, and later moved to a prominent position in the park by 2019; a change.org petition was also launched in late 2019 calling for the removal of Balto's Central Park statue in favor of a statue for Togo. Cultural depictions Books Alistair MacLean's 1959 novel Night Without End includes a sled dog named Balto, a fictional descendant and namesake of the original Balto. The 1966 Uncle Scrooge comic book North of the Yukon centered around the dog "Barko", created by Carl Barks as a direct homage to Balto. In January 1977, Margaret Davidson wrote Balto: The Dog Who Saved Nome, a children's book containing a telling of Balto's deeds. Film The 1995 animated feature film Balto is loosely based on Balto and the serum run, but is notable for multiple inaccuracies, including depicting Balto as a wolfdog and ending the movie with the Central Park statue unveiling. After Steven Spielberg announced the film was in development through his Amblimation studio, CMNH extended an invitation to him to meet Balto's mount, but this request was declined. Two direct-to-video sequels, Balto II: Wolf Quest and Balto III: Wings of Change, were later released but contained no historical references. The 2019 movie Togo centers around Leonhard Seppala and Togo, with Balto getting a brief appearance. Unlike past depictions, Togo's contributions, including completing the serum run's longest and most perilous stretch, are highlighted, and it is made clear Balto got most of the credit. The Great Alaskan Race, also released in 2019 and written/directed by Brian Presley, focuses primarily on Seppala's role but also depicts the heroics of both teams, along with Balto and Togo individually. During production, the film's cast and crew visited Balto's Central Park statue. Television The episode "Welcome Home, Balto" of the PBS Kids series Molly of Denali centers around protagonist Molly Mabray learning about Balto's story and being inspired to create a statue of him. 
Genome sequencing Balto's DNA was analyzed and sequenced as part of the Zoonomia Project, an international collaboration that has mapped the genomes of over 240 mammals. After being approached by Cornell University associate professor Heather Huson about including Balto in Zoonomia, CMNH agreed and sent a skin sample to University of California, Santa Cruz professor Katherine Moon. CMNH chief science officer Gavin Svenson was enthusiastic about the project and noted that advancements in technology have made it easier to map out genomes from 100-year-old DNA. Compared with the genomes of 682 modern-day dogs and wolves, in addition to the 240 mammals in Zoonomia, Balto's genome was found to be more diverse, with fewer unhealthy variants, than those of modern purebred dogs, and more similar to today's Alaskan huskies, which are often outcrossed to promote better fitness and health. Balto shared part of his ancestry with modern Siberian huskies (39 percent) as well as Greenland dogs (18 percent), Chinese village dogs (17 percent), Samoyeds (6 percent) and Alaskan malamutes (4 percent). Balto had several DNA adaptations that promoted Arctic survival, including a thick double coat, the ability to digest starch, and bone and tissue development. Researchers were also able to accurately predict how Balto would have looked—fur coat color, eyes and fur thickness—from his surviving DNA, which was cross-referenced with historical photos and his remains. Testing also disproved the urban legend that Balto had grey wolf genetics; Moon said, "[h]e was not a wolf, he was just a good boy."
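The ancestry percentages reported above account for only part of the genome; the short Python sketch below simply tallies the figures quoted in this section (the breed labels and values are copied from the text, and the "unassigned" remainder is an arithmetic consequence rather than a published figure):

# Ancestry fractions for Balto as quoted above (percent of genome).
ancestry = {
    "Siberian husky": 39,
    "Greenland dog": 18,
    "Chinese village dog": 17,
    "Samoyed": 6,
    "Alaskan malamute": 4,
}
assigned = sum(ancestry.values())   # 84 percent attributed to named breeds
unassigned = 100 - assigned         # roughly 16 percent not attributed above
print(f"assigned: {assigned}%  unassigned: {unassigned}%")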
Biology and health sciences
Individual animals
Animals
673743
https://en.wikipedia.org/wiki/Craton
Craton
A craton ( , , or ; from "strength") is an old and stable part of the continental lithosphere, which consists of Earth's two topmost layers, the crust and the uppermost mantle. Having often survived cycles of merging and rifting of continents, cratons are generally found in the interiors of tectonic plates; the exceptions occur where geologically recent rifting events have separated cratons and created passive margins along their edges. Cratons are characteristically composed of ancient crystalline basement rock, which may be covered by younger sedimentary rock. They have a thick crust and deep lithospheric roots extending several hundred kilometres into Earth's mantle. Terminology The term craton is used to distinguish the stable portion of the continental crust from regions that are more geologically active and unstable. Cratons are composed of two layers: a continental shield, in which the basement rock crops out at the surface, and a platform which overlays the shield in some areas with sedimentary rock. The word craton was first proposed by the Austrian geologist Leopold Kober in 1921 as , referring to stable continental platforms, and orogen as a term for mountain or orogenic belts. Later Hans Stille shortened the former term to , from which craton derives. Examples Examples of cratons are the Dharwar Craton in India, North China Craton, the East European Craton, the Amazonian Craton in South America, the Kaapvaal Craton in South Africa, the North American Craton (also called the Laurentia Craton), and the Gawler Craton in South Australia. Structure Cratons have thick lithospheric roots. Mantle tomography shows that cratons are underlain by anomalously cold mantle corresponding to lithosphere more than twice the typical thickness of mature oceanic or non-cratonic, continental lithosphere. At that depth, craton roots extend into the asthenosphere, and the low-velocity zone seen elsewhere at these depths is weak or absent beneath stable cratons. Craton lithosphere is distinctly different from oceanic lithosphere because cratons have a neutral or positive buoyancy and a low intrinsic density. This low-density offsets density increases from geothermal contraction and prevents the craton from sinking into the deep mantle. The cratonic lithosphere is much older than the oceanic lithosphere—up to 4 billion years versus 180 million years. Rock fragments (xenoliths) carried up from the mantle by magmas containing peridotite have been delivered to the surface as inclusions in subvolcanic pipes called kimberlites. These inclusions have densities consistent with craton composition and are composed of mantle material residual from high degrees of partial melt. Peridotite is strongly influenced by the inclusion of moisture. Craton peridotite moisture content is unusually low, which leads to much greater strength. It also contains high percentages of low-weight magnesium instead of higher-weight calcium and iron. Peridotites are important for understanding the deep composition and origin of cratons because peridotite nodules are pieces of mantle rock modified by partial melting. Harzburgite peridotites represent the crystalline residues after extraction of melts of compositions like basalt and komatiite. Formation The process by which cratons were formed is called cratonization. Much about this process remains uncertain, with very little consensus in the scientific community. However, the first cratonic landmasses likely formed during the Archean eon. 
This is indicated by the age of diamonds, which originate in the roots of cratons and are almost always over 2 billion years and often over 3 billion years in age. Rock of the Archean age makes up only 7% of the world's current cratons; even allowing for erosion and destruction of past formations, this suggests that only 5 to 40 per cent of the present continental crust formed during the Archean. Cratonization likely was completed during the Proterozoic. Subsequent growth of continents was by accretion at continental margins. Root origin The origin of the roots of cratons is still debated. However, the present understanding of cratonization began with the publication in 1978 of a paper by Thomas H. Jordan in Nature. Jordan proposes that cratons formed from a high degree of partial melting of the upper mantle, with 30 to 40 per cent of the source rock entering the melt. Such a high degree of melting was possible because of the high mantle temperatures of the Archean. The extraction of so much magma left behind a solid peridotite residue that was enriched in lightweight magnesium and thus lower in chemical density than the undepleted mantle. This lower chemical density compensated for the effects of thermal contraction as the craton and its roots cooled so that the physical density of the cratonic roots matched that of the surrounding hotter but more chemically dense mantle. In addition to cooling the craton roots and lowering their chemical density, the extraction of magma also increased the viscosity and melting temperature of the craton roots and prevented mixing with the surrounding undepleted mantle. The resulting mantle roots have remained stable for billions of years. Jordan suggests that depletion occurred primarily in subduction zones and secondarily as flood basalts. This model of melt extraction from the upper mantle has held up well with subsequent observations. The properties of mantle xenoliths confirm that the geothermal gradient is much lower beneath continents than oceans. The olivine of craton root xenoliths is extremely dry, which would give the roots a very high viscosity. Rhenium–osmium dating of xenoliths indicates that the oldest melting events took place in the early to middle Archean. Significant cratonization continued into the late Archean, accompanied by voluminous mafic magmatism. However, melt extraction alone cannot explain all the properties of craton roots. Jordan notes in his paper that this mechanism could be effective for constructing craton roots only down to a depth of . The great depths of craton roots required further explanation. The 30 to 40 per cent partial melting of mantle rock at 4 to 10 GPa pressure produces komatiite magma and a solid residue very close in composition to Archean lithospheric mantle. Still, continental shields do not contain enough komatiite to match the expected depletion. Either much of the komatiite never reached the surface, or other processes aided craton root formation. There are many competing hypotheses of how cratons have been formed. Repeated continental collision model Jordan's model suggests that further cratonization resulted from repeated continental collisions. The thickening of the crust associated with these collisions may have been balanced by craton root thickening according to the principle of isostacy. Jordan likens this model to "kneading" of the cratons, allowing low-density material to move up and higher density to move down, creating stable cratonic roots as deep as . 
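Jordan's compensation argument described above can be summarized as a single density balance; the following sketch uses representative textbook values (the mantle density, thermal expansivity and temperature deficit of the root are illustrative assumptions, not figures given in this article). Neutral buoyancy requires the compositional density deficit of the depleted root, \Delta\rho_{\text{chem}}, to offset the density increase caused by the root being colder than ambient mantle by \Delta T:

\Delta\rho_{\text{chem}} \approx \rho_m \,\alpha\, \Delta T \approx (3300\ \mathrm{kg\,m^{-3}})(3\times10^{-5}\ \mathrm{K^{-1}})(300\ \mathrm{K}) \approx 30\ \mathrm{kg\,m^{-3}}

On this rough estimate, a compositional density deficit of only about one per cent is enough to keep a root a few hundred kelvin colder than the surrounding mantle neutrally buoyant, which is the essence of why depleted cratonic roots can remain stable for billions of years.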
Molten plume model A second model suggests that the surface crust was thickened by a rising plume of molten material from the deep mantle. This would have built up a thick layer of depleted mantle underneath the cratons. Subducting ocean slab model A third model suggests that successive slabs of subducting oceanic lithosphere became lodged beneath a proto-craton, underplating the craton with chemically depleted rock. Impact origin model A fourth theory presented in a 2015 publication suggests that the origin of the cratons is similar to crustal plateaus observed on Venus, which may have been created by large asteroid impacts. In this model, large impacts on the Earth's early lithosphere penetrated deep into the mantle and created enormous lava ponds. The paper suggests these lava ponds cooled to form the craton's root. Evidence for each model The chemistry of xenoliths and seismic tomography both favor the two accretional models over the plume model. However, other geochemical evidence favors mantle plumes. Tomography shows two layers in the craton roots beneath North America. One is found at depths shallower than and may be Archean, while the second is found at depths from and may be younger. The second layer may be a less depleted thermal boundary layer that stagnated against the depleted "lid" formed by the first layer. The impact origin model does not require plumes or accretion; this model is, however, not incompatible with either. All these proposed mechanisms rely on buoyant, viscous material separating from a denser residue due to mantle flow, and it is possible that more than one mechanism contributed to craton root formation. Erosion The long-term erosion of cratons has been labelled the "cratonic regime". It involves processes of pediplanation and etchplanation that lead to the formation of flattish surfaces known as peneplains. While the process of etchplanation is associated to humid climate and pediplanation with arid and semi-arid climate, shifting climate over geological time leads to the formation of so-called polygenetic peneplains of mixed origin. Another result of the longevity of cratons is that they may alternate between periods of high and low relative sea levels. High relative sea level leads to increased oceanicity, while the opposite leads to increased inland conditions. Many cratons have had subdued topographies since Precambrian times. For example, the Yilgarn Craton of Western Australia was flattish already by Middle Proterozoic times and the Baltic Shield had been eroded into a subdued terrain already during the Late Mesoproterozoic when the rapakivi granites intruded.
Physical sciences
Tectonics
Earth science
673768
https://en.wikipedia.org/wiki/Continental%20crust
Continental crust
Continental crust is the layer of igneous, metamorphic, and sedimentary rocks that forms the geological continents and the areas of shallow seabed close to their shores, known as continental shelves. This layer is sometimes called sial because its bulk composition is richer in aluminium silicates (Al-Si) and has a lower density compared to the oceanic crust, called sima which is richer in magnesium silicate (Mg-Si) minerals. Changes in seismic wave velocities have shown that at a certain depth (the Conrad discontinuity), there is a reasonably sharp contrast between the more felsic upper continental crust and the lower continental crust, which is more mafic in character. Most continental crust is dry land above sea level. However, 94% of the Zealandia continental crust region is submerged beneath the Pacific Ocean, with New Zealand constituting 93% of the above-water portion. Thickness and density The continental crust consists of various layers, with a bulk composition that is intermediate (SiO2 wt% = 60.6). The average density of the continental crust is about, , less dense than the ultramafic material that makes up the mantle, which has a density of around . Continental crust is also less dense than oceanic crust, whose density is about . At in thickness, continental crust is considerably thicker than oceanic crust, which has an average thickness of around . Approximately 41% of Earth's surface area and about 70% of the volume of Earth's crust are continental crust. Importance Because the surface of continental crust mainly lies above sea level, its existence allowed land life to evolve from marine life. Its existence also provides broad expanses of shallow water known as epeiric seas and continental shelves where complex metazoan life could become established during early Paleozoic time, in what is now called the Cambrian explosion. Origin All continental crust is ultimately derived from mantle-derived melts (mainly basalt) through fractional differentiation of basaltic melt and the assimilation (remelting) of pre-existing continental crust. The relative contributions of these two processes in creating continental crust are debated, but fractional differentiation is thought to play the dominant role. These processes occur primarily at magmatic arcs associated with subduction. There is little evidence of continental crust prior to 3.5 Ga. About 20% of the continental crust's current volume was formed by 3.0 Ga. There was relatively rapid development on shield areas consisting of continental crust between 3.0 and 2.5 Ga. During this time interval, about 60% of the continental crust's current volume was formed. The remaining 20% has formed during the last 2.5 Ga. Proponents of a steady-state hypothesis argue that the total volume of continental crust has remained more or less the same after early rapid planetary differentiation of Earth and that presently found age distribution is just the result of the processes leading to the formation of cratons (the parts of the crust clustered in cratons being less likely to be reworked by plate tectonics). However, this is not generally accepted. Forces at work In contrast to the persistence of continental crust, the size, shape, and number of continents are constantly changing through geologic time. Different tracts rift apart, collide and recoalesce as part of a grand supercontinent cycle. There are currently about of continental crust, but this quantity varies because of the nature of the forces involved. 
The relative permanence of continental crust contrasts with the short life of oceanic crust. Because continental crust is less dense than oceanic crust, when active margins of the two meet in subduction zones, the oceanic crust is typically subducted back into the mantle. Continental crust is rarely subducted (this may occur where continental crustal blocks collide and overthicken, causing deep melting under mountain belts such as the Himalayas or the Alps). For this reason the oldest rocks on Earth are within the cratons or cores of the continents, rather than in repeatedly recycled oceanic crust; the oldest intact crustal fragment is the Acasta Gneiss at 4.01 Ga, whereas the oldest large-scale oceanic crust (located on the Pacific plate offshore of the Kamchatka Peninsula) is from the Jurassic (≈180 Ma), although there might be small older remnants in the Mediterranean Sea at about 340 Ma. Continental crust and the rock layers that lie on and within it are thus the best archive of Earth's history. The height of mountain ranges is usually related to the thickness of crust. This results from the isostasy associated with orogeny (mountain formation). The crust is thickened by the compressive forces related to subduction or continental collision. The buoyancy of the crust forces it upwards, the forces of the collisional stress balanced by gravity and erosion. This forms a keel or mountain root beneath the mountain range, which is where the thickest crust is found. The thinnest continental crust is found in rift zones, where the crust is thinned by detachment faulting and eventually severed, replaced by oceanic crust. The edges of continental fragments formed this way (both sides of the Atlantic Ocean, for example) are termed passive margins. The high temperatures and pressures at depth, often combined with a long history of complex distortion, cause much of the lower continental crust to be metamorphic – the main exception to this being recent igneous intrusions. Igneous rock may also be "underplated" to the underside of the crust, i.e. adding to the crust by forming a layer immediately beneath it. Continental crust is produced and (far less often) destroyed mostly by plate tectonic processes, especially at convergent plate boundaries. Additionally, continental crustal material is transferred to oceanic crust by sedimentation. New material can be added to the continents by the partial melting of oceanic crust at subduction zones, causing the lighter material to rise as magma, forming volcanoes. Also, material can be accreted horizontally when volcanic island arcs, seamounts or similar structures collide with the side of the continent as a result of plate tectonic movements. Continental crust is also lost through erosion and sediment subduction, tectonic erosion of forearcs, delamination, and deep subduction of continental crust in collision zones. Many theories of crustal growth are controversial, including rates of crustal growth and recycling, whether the lower crust is recycled differently from the upper crust, and over how much of Earth history plate tectonics has operated and so could be the dominant mode of continental crust formation and destruction. It is a matter of debate whether the amount of continental crust has been increasing, decreasing, or remaining constant over geological time. One model indicates that at prior to 3.7 Ga ago continental crust constituted less than 10% of the present amount. 
By 3.0 Ga ago the amount was about 25%, and following a period of rapid crustal evolution it was about 60% of the current amount by 2.6 Ga ago. The growth of continental crust appears to have occurred in spurts of increased activity corresponding to five episodes of increased production through geologic time.
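The isostatic argument made above, relating the height of a mountain range to the thickness of the crustal root beneath it, can be made concrete with the standard Airy model; the densities below are illustrative round numbers rather than values given in this article. Equal pressure at the depth of compensation gives

b = \frac{\rho_c}{\rho_m - \rho_c}\, h \approx \frac{2800}{3300 - 2800}\times 3\ \mathrm{km} \approx 17\ \mathrm{km}

so a range standing 3 km above its surroundings would, on this simple model, be supported by a crustal root roughly 17 km deep, which is why the thickest crust is found beneath mountain belts.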
Physical sciences
Tectonics
Earth science
673880
https://en.wikipedia.org/wiki/S%C3%A3o%20Paulo%20Metro
São Paulo Metro
The São Paulo Metro, commonly called the Metrô, is a rapid transit system that forms part of the urban railways that serve the city of São Paulo, alongside the São Paulo Metropolitan Trains Company (CPTM), both forming the largest metropolitan rail transport network of Latin America. The six lines in the metro system operate on of route, serving 89 stations. The metro system carries about 4,200,000 passengers a day. Metro itself is far from covering the entire urban area in the city of São Paulo and only runs within the city limits. However, it is complemented by a network of metropolitan trains operated by CPTM and ViaMobilidade, which serve the city of São Paulo and the São Paulo Metropolitan Region. The two systems combined form a long network. The metropolitan trains differ from the Metro in that they also serve other municipalities around São Paulo, with a larger average distance between stations and freight trains operating on some lines. Considered the most modern in Latin America, the system was the first to install platform screen doors at a station and to use communications-based train control, with Lines 4 and 15 being fully automated. Line 15 is a monorail line that partially opened for service in 2014 and is the first high-capacity monorail line of Latin America. The São Paulo Metro and CPTM both operate as state-owned companies and have received awards in the recent past as one of the cleanest systems in the world under ISO 9001. The São Paulo Metro was voted Best Metro Americas at the MetroRail 2010 industry conference and has been frequently chosen as one of the best metro systems in the world by specialist media outlets such as CNN and Business Insider, being the only system in Latin America to make the list. History The Companhia do Metropolitano de São Paulo (Metrô) was founded on April 24, 1968. Eight months later, work on the initial North–South line (now Line 1 - Blue) was initiated. In 1972, the first test train trip occurred between Jabaquara and Saúde stations. On September 14, 1974, the segment between Jabaquara and Vila Mariana entered into commercial operation. The first line, Norte/Sul (North/South), later renamed "Blue Line" or Line 1 - Blue, was opened on September 18, 1972, with an experimental operation between Saúde and Jabaquara stations. Commercial operations started on September 14, 1974, after an eight-year "gestation" period that began in 1966, under Mayor Faria Lima's administration. Expansion of the metro system includes new lines. In late 2004, construction began on a US$1 billion, all-underground line (Line 4 - Yellow), with eleven stations, aimed at transporting almost one million people per day. By 2004, Line 2 was also being expanded, with two new stations opened in 2006 and another one in 2007. An expansion of Line 5 was completed in 2018. Tickets cost R$5.00. In 2006, the São Paulo Metro system started to use a smart card, called "Bilhete Único" (or "Single Ticket" in English). Current operational data The metro system consists of six color-coded lines: Line 1 (Blue), Line 2 (Green), Line 3 (Red), Line 4 (Yellow), Line 5 (Lilac) and Line 15 (Silver), operating from Sunday to Saturday, from 4:40 AM to midnight. Line 15 (Silver) is a high-capacity monorail, the rest being standard metro lines. The six lines achieved an average weekday ridership of 5.3 million in 2019. On 14 September 2019, Metrô recorded the highest ever ridership figure of 5.5 million on a single business day, a record attributed to the recent expansion of some lines.
The Metro provided 1.49 billion rides over the course of 2019. Bus terminals In May 1977, Metro assumed the administration and commercial utilization of the Inter-City Jabaquara Intermunicipal Terminal, and inaugurated, in May 1982, the modern Inter-city Tietê Bus Terminal, replacing the former Júlio Prestes Terminal. This agreement established that Metro would be in charge of the studies for the planning, implementation, and operation of passenger transportation in the municipal district of São Paulo, either directly or through third parties. Later, the other inter-city bus terminals were integrated into the system, such as Bresser, in January 1988, and Palmeiras-Barra Funda, in December 1989. In January 1990 the inter-city bus terminals were outsourced by Metrô, which, through public bidding, contracted Consortium Prima for the administration and commercial utilization of the 4 inter-city bus terminals of the city of São Paulo. This contract included the responsibility for maintenance and conservation of the existing installations, as well as for the expansion and modernisation of the terminals. Rolling stock The first cars started operating in 1974, the same year the company's commercial activities were initiated. This model was named A Stock, and its cars received the numbers 1001 to 1306 (51 trains of 6 cars each). They were designed in the United States by the Budd Company, and the national rolling stock manufacturer Mafersa did the final assembly. The model was based on the Class A trains from the Bay Area Rapid Transit system, even using the same Westinghouse 1460 series chopper traction controls, and was to be used along the north–south line, now known as Line 1 - Blue. Initially they operated as two-car trains, with cars added as demand increased, up to a maximum of six cars. All of the cars have a pair of electric motors and a cab. Today, this stock is known as "A stock". The entire "A stock" was planned to be phased out by the beginning of 2015, as the recent modernization processes saw them being converted into two different stocks: I and J. The last A stock train was withdrawn from service in February 2018. To reduce manufacturing costs, the Cobrasma company provided a new stock for the East-West Line, now Line 3. These trains had cabs only, and made use of more advanced ventilation and maintenance systems. This stock was known by the name of "C". The batch of trains designed for this line was produced by two different national companies, Cobrasma and Mafersa (whose trains were named "D"). The trains entered service between 1984 and 1986 on Line 3 and remained there for their entire service lives, although in their final years, some of the D stock trains were transferred to Line 1 where they ran with the older A stock trains. The only differences between the two are the front mask and some structural framework. Their original technical nomenclature was 300. According to it, the C stock was numbered from 301 (C01) to 325 (C25), and the D stock had trains numbered from 326 (D26) to 347 (D47). The C stock trains have since been refurbished as the K stock, and the D stock was refurbished to create the L stock. The refurbishment program for the entire stock of A, C and D trains was completed in 2018. Today the rolling stock of the São Paulo Metro consists of 11 stocks, 232 trains and 1,419 cars and it is divided as follows: E stock: Built by Alstom and entered service between 1998 and 1999. They currently operate on Line 1 - Blue.
F stock: Alstom trains specially built for Line 5 - Lilac between 2001 and 2002. G stock: Also built by Alstom and entered service in 2008. They currently run on Lines 1 - Blue and 3 - Red. H stock: Streamlined CAF-built trains built in 2010 which operate exclusively on Line 3 - Red since 2014. I and J stock: Refurbished A stock trains which operate on Lines 1 - Blue and 2 - Green from 2011. They differ cosmetically as well as mechanically. I stock was rebuilt by Alstom and Siemens while J stock was rebuilt by Bombardier, Temoinsa, BTT and Tejofran. K stock: Refurbished C stock trains rebuilt by a consortium consisting of T’trans, MTTrens, MPE and Temoinsa. They operate on Line 3 - Red just like the original trains. L stock: D stock refurbished by Alstom and IESA and operates on Line 1 - Blue M stock: The Monorail stock built by Bombardier between 2013 and 2016 and operates on Line 15 - Silver. P stock: CAF-built trains from 2013 which run on Line 5 - Lilac alongside the former F stock. 400 series: Driverless trains built in 2009-2010 and 2016-2017 by Hyundai Rotem for Line 4 - Yellow Security Metro's security agents have police powers and in case of need they will provide assistance. All police matters that occur within the system are directed to the police station of the subway system, Delegacia de Polícia do Metropolitano de São Paulo (DELPOM), located at Palmeiras-Barra Funda station. System lines Future developments Several conventional metro and monorail lines are currently under construction or under project. Network Map
Technology
Brazil
null
673887
https://en.wikipedia.org/wiki/Reticle
Reticle
A reticle, or reticule also known as a graticule, is a pattern of fine lines or markings built into the eyepiece of an optical device such as a telescopic sight, spotting scope, theodolite, optical microscope or the screen of an oscilloscope, to provide measurement references during visual inspections. Today, engraved lines or embedded fibers may be replaced by a digital image superimposed on a screen or eyepiece. Both terms may be used to describe any set of patterns used for aiding visual measurements and calibrations, but in modern use reticle is most commonly used for weapon sights, while graticule is more widely used for non-weapon measuring instruments such as oscilloscope display, astronomic telescopes, microscopes and slides, surveying instruments and other similar devices. There are many variations of reticle pattern; this article concerns itself mainly with the most rudimentary reticle: the crosshair. Crosshairs are typically represented as a pair of perpendicularly intersecting lines in the shape of a cross, "+", though many variations of additional features exist including dots, posts, concentric circles/horseshoes, chevrons, graduated markings, or a combination of above. Most commonly associated with telescopic sights for aiming firearms, crosshairs are also common in optical instruments used for astronomy and surveying, and are also popular in graphical user interfaces as a precision pointer. The reticle is said to have been invented by Robert Hooke, and dates to the 17th century. Another candidate as inventor is the amateur astronomer William Gascoigne, who predated Hooke. The term reticle comes from the Latin reticulum, meaning small net. Uses Firearms Telescopic sights for firearms, generally just called scopes, are probably the device most often associated with crosshairs. Motion pictures and the media often use a view through crosshairs as a dramatic device, which has given crosshairs wide cultural exposure. Reticle shape While the traditional thin crossing lines are the original and still the most familiar cross-hair shape, they are really best suited for precision aiming at high contrast targets, as the thin lines are easily lost in complex backgrounds, such as those encountered while hunting. Thicker bars are much easier to discern against a complex background, but lack the precision of thin bars. The most popular types of cross-hair in modern scopes are variants on the duplex cross-hair, with bars that are thick on the perimeter and thin out in the middle. The thick bars allow the eye to quickly locate the center of the reticle, and the thin lines in the center allow for precision aiming. The thin bars in a duplex reticle may also be designed to be used as a measure. Called a 30/30 reticle, the thin bars on such a reticle span 30 minutes of arc (0.5º), which is approximately equal to 30 inches at 100 yards or 90 centimeters at 100 meters. This enables an experienced shooter to deduce, on the basis of the known size of an object in view, (as opposed to guess or estimate) the range within an acceptable error limit. Wire crosshairs Originally crosshairs were constructed out of hair or spiderweb, these materials being sufficiently thin and strong. Many modern scopes use wire crosshairs, which can be flattened to various degrees to change the width. These wires are usually silver in color, but appear black when backlit by the image passing through the scope's optics. 
Wire reticles are by nature fairly simple, as they require lines that pass all the way across the reticle, and the shapes are limited to the variations in thickness allowed by flattening the wire; duplex crosshairs, and crosshairs with dots are possible, and multiple horizontal or vertical lines may be used. The advantage of wire crosshairs is that they are fairly tough and durable, and provide no obstruction to light passing through the scope. Etched reticles The first suggestion for etched glass reticles was made by Philippe de La Hire in 1700. His method was based on engraving the lines on a glass plate with a diamond point. Many modern crosshairs are actually etched onto a thin plate of glass, which allows a far greater latitude in shapes. Etched glass reticles can have floating elements, which do not cross the reticle; circles and dots are common, and some types of glass reticles have complex sections designed for use in range estimation and bullet drop and drift compensation (see external ballistics). A potential disadvantage of glass reticles is that the surface of the glass reflects some light (about 4% per surface on uncoated glass) lessening transmission through the scope, although this light loss is near zero if the glass is multicoated (coating being the norm for all modern high quality optical products). Illuminated reticles Reticles may be illuminated, either by a plastic or fiber optic light pipe collecting ambient light or, in low light conditions, by a battery powered LED. Some sights also use the radioactive decay of tritium for illumination that can work for 11 years without using a battery, used in the British SUSAT sight for the SA80 (L85) assault rifle and in the American ACOG (Advanced Combat Optical Gunsight). Red is the most common color used, as it is the least destructive to the shooter's night vision, but some products use green or yellow illumination, either as a single colour or changeable via user selection. Graticule Another term for reticle is graticule, which is frequently encountered in British and British military technical manuals. It came into common use during World War I. Reticle focal plane The reticle may be located at the front or rear focal plane (First Focal Plane (FFP) or Second Focal Plane (SFP)) of the telescopic sight. On fixed power telescopic sights there is no significant difference, but on variable power telescopic sights the front plane reticle remains at a constant size compared to the target, while rear plane reticles remain a constant size to the user as the target image grows and shrinks. Front focal plane reticles are slightly more durable, but most American users prefer that the reticle remains constant as the image changes size, so nearly all modern American variable power telescopic sights are rear focal plane designs. American and European high end optics manufacturers often leave the customer the choice between a FFP or SFP mounted reticle. Collimated reticles Collimated reticles are produced by non-magnifying optical devices such as reflector sights (often called reflex sights) that give the viewer an image of the reticle superimposed over the field of view, and blind collimator sights that are used with both eyes. Collimated reticles are created using refractive or reflective optical collimators to generate a collimated image of an illuminated or reflective reticle. These types of sights are used on surveying/triangulating equipment, to aid celestial telescope aiming, and as sights on firearms. 
Historically they were used on larger military weapon systems that could supply an electrical source to illuminate them and where the operator needed a wide field of view to track and range a moving target visually (i.e. weapons from the pre laser/radar/computer era). More recently sights using low power consumption durable light emitting diodes as the reticle (called red dot sights) have become common on small arms with versions like the Aimpoint CompM2 being widely fielded by the U.S. Military. Holographic reticles Holographic weapon sights use a holographic image of a reticle at finite set range built into the viewing window and a collimated laser diode to illuminate it. An advantage to holographic sights is that they eliminate a type of parallax problem found in some optical collimator based sights (such as the red dot sight) where the spherical mirror used induces spherical aberration that can cause the reticle to skew off the sight's optical axis. The use of a hologram also eliminates the need for image dimming narrow band reflective coatings and allows for reticles of almost any shape or mil size. A downside to the holographic weapon sight can be the weight and shorter battery life. As with red dot sights, holographic weapon sights have also become common on small arms with versions like the Eotech 512.A65 and similar models fielded by the U.S. Military and various law enforcement agencies. Surveying and astronomy In older instruments, reticle crosshairs and stadia marks were made using threads taken from the cocoon of the brown recluse spider. This very fine, strong spider silk makes for an excellent crosshair. Surveying In surveying, reticles are designed for specific uses. Levels and theodolites would have slightly different reticles. However, both may have features such as stadia marks to allow distance measurements. Astronomy For astronomical uses, reticles could be simple crosshair designs or more elaborate designs for special purposes. Telescopes used for polar alignment could have a reticle that indicates the position of Polaris relative to the north celestial pole. Telescopes that are used for very precise measurements would have a filar micrometer as a reticle; this could be adjusted by the operator to measure angular distances between stars. For aiming telescopes, reflex sights are popular, often in conjunction with a small telescope with a crosshair reticle. They make aiming the telescope at an astronomical object easier. The constellation Reticulum was designated to recognize the reticle and its contributions to astronomy.
Technology
Surveying tools
null
674050
https://en.wikipedia.org/wiki/Picard%20theorem
Picard theorem
In complex analysis, Picard's great theorem and Picard's little theorem are related theorems about the range of an analytic function. They are named after Émile Picard. The theorems Little Picard Theorem: If a function f is entire and non-constant, then the set of values that f assumes is either the whole complex plane or the plane minus a single point. Sketch of Proof: Picard's original proof was based on properties of the modular lambda function, usually denoted by λ, which performs, using modern terminology, the holomorphic universal covering of the twice punctured plane by the unit disc. This function is explicitly constructed in the theory of elliptic functions. If f omits two values, then the composition of f with the inverse of the modular function maps the plane into the unit disc, which implies that f is constant by Liouville's theorem. This theorem is a significant strengthening of Liouville's theorem, which states that the image of an entire non-constant function must be unbounded. Many different proofs of Picard's theorem were later found, and Schottky's theorem is a quantitative version of it. In the case where the values of f are missing a single point, this point is called a lacunary value of the function. Great Picard's Theorem: If an analytic function f has an essential singularity at a point w, then on any punctured neighborhood of w, f takes on all possible complex values, with at most a single exception, infinitely often. This is a substantial strengthening of the Casorati–Weierstrass theorem, which only guarantees that the range of f is dense in the complex plane. A result of the Great Picard Theorem is that any entire, non-polynomial function attains all possible complex values infinitely often, with at most one exception. The "single exception" is needed in both theorems, as demonstrated here: e^z is an entire non-constant function that is never 0, and e^(1/z) has an essential singularity at 0 but still never attains 0 as a value. Proof Little Picard Theorem Suppose f is an entire function that omits two values z0 and z1. By considering (f(z) − z0)/(z1 − z0) we may assume without loss of generality that z0 = 0 and z1 = 1. Because the complex plane is simply connected and the range of f omits 0, f has a holomorphic logarithm. Let g be an entire function such that f(z) = e^(2πi g(z)). Then the range of g omits all integers. By a similar argument using the quadratic formula, there is an entire function h such that g(z) = cos(h(z)). Then the range of h omits all complex numbers of the form 2πn ± i ln(m + √(m² − 1)), where n is an integer and m is a nonnegative integer. By Landau's theorem, if h′(w) ≠ 0, then for all R > 0 the range of h restricted to the disk of radius R about w contains a disk of radius c|h′(w)|R, for some absolute constant c > 0. But from above, any sufficiently large disk contains at least one number that the range of h omits. Therefore h′(w) = 0 for all w. By the fundamental theorem of calculus, h is constant, so f is constant. Great Picard Theorem Suppose f is an analytic function on the punctured disk of radius r around the point w, and that f omits two values z0 and z1. By considering (f(w + rz) − z0)/(z1 − z0) we may assume without loss of generality that z0 = 0, z1 = 1, w = 0, and r = 1. The function F(z) = f(e^(−z)) is analytic in the right half-plane Re(z) > 0. Because the right half-plane is simply connected, similar to the proof of the Little Picard Theorem, there are analytic functions G and H defined on the right half-plane such that F(z) = e^(2πi G(z)) and G(z) = cos(H(z)). For any w in the right half-plane, the open disk with radius Re(w) around w is contained in the domain of H.
By Landau's theorem and the observation about the range of H in the proof of the Little Picard Theorem, there is a constant C > 0 such that |H′(w)| ≤ C / Re(w). Thus, for all real numbers x ≥ 2 and 0 ≤ y ≤ 2π, the bound |H(x + iy)| ≤ A ln x holds, where A > 0 is a constant. So |G(x + iy)| ≤ x^A. Next, we observe that F(z + 2πi) = F(z) in the right half-plane, which implies that G(z + 2πi) − G(z) is always an integer. Because G is continuous and its domain is connected, the difference G(z + 2πi) − G(z) = k is a constant. In other words, the function G(z) − kz / (2πi) has period 2πi. Thus, there is an analytic function g defined in the punctured disk with radius e^(−2) around 0 such that G(z) − kz / (2πi) = g(e^(−z)). Using the bound on G above, for all real numbers x ≥ 2 and 0 ≤ y ≤ 2π, the bound |g(e^(−x − iy))| ≤ C′x^(A′) holds, where A′ > A and C′ > 0 are constants. Because of the periodicity, this bound actually holds for all y. Thus, we have a bound |g(z)| ≤ C′(−log|z|)^(A′) for 0 < |z| < e^(−2). By Riemann's theorem on removable singularities, g extends to an analytic function in the open disk of radius e^(−2) around 0. Hence, G(z) − kz / (2πi) is bounded on the half-plane Re(z) ≥ 3. So F(z)e^(−kz) is bounded on the half-plane Re(z) ≥ 3, and f(z)z^k is bounded in the punctured disk of radius e^(−3) around 0. By Riemann's theorem on removable singularities, f(z)z^k extends to an analytic function in the open disk of radius e^(−3) around 0. Therefore, f does not have an essential singularity at 0. Therefore, if the function f has an essential singularity at 0, the range of f in any open disk around 0 omits at most one value. If f takes a value only finitely often, then in a sufficiently small open disk around 0, f omits that value. So f(z) takes all possible complex values, except at most one, infinitely often. Generalization and current research Great Picard's theorem is true in a slightly more general form that also applies to meromorphic functions: Great Picard's Theorem (meromorphic version): If M is a Riemann surface, w a point on M, P1(C) = C ∪ {∞} denotes the Riemann sphere and f : M\{w} → P1(C) is a holomorphic function with essential singularity at w, then on any open subset of M containing w, the function f(z) attains all but at most two points of P1(C) infinitely often. Example: The function f(z) = 1/(1 − e^(1/z)) is meromorphic on C* = C − {0}, the complex plane with the origin deleted. It has an essential singularity at z = 0 and attains the value ∞ infinitely often in any neighborhood of 0; however it does not attain the values 0 or 1. With this generalization, the Little Picard Theorem follows from the Great Picard Theorem because an entire function is either a polynomial or it has an essential singularity at infinity. As with the little theorem, the (at most two) points that are not attained are lacunary values of the function. The following conjecture is related to "Great Picard's Theorem": Conjecture: Let {U1, ..., Un} be a collection of open connected subsets of C that cover the punctured unit disk D \ {0}. Suppose that on each Uj there is an injective holomorphic function fj, such that dfj = dfk on each intersection Uj ∩ Uk. Then the differentials glue together to a meromorphic 1-form on D. It is clear that the differentials glue together to a holomorphic 1-form g dz on D \ {0}. In the special case where the residue of g at 0 is zero the conjecture follows from the "Great Picard's Theorem".
Mathematics
Complex analysis
null
674207
https://en.wikipedia.org/wiki/Torso
Torso
The torso or trunk is an anatomical term for the central part, or the core, of the body of many animals (including human beings), from which the head, neck, limbs, tail and other appendages extend. The tetrapod torso — including that of a human — is usually divided into the thoracic segment (also known as the upper torso, where the forelimbs extend), the abdominal segment (also known as the "mid-section" or "midriff"), and the pelvic and perineal segments (sometimes known together with the abdomen as the lower torso, where the hindlimbs extend). Anatomy Major organs In humans, most critical organs, with the notable exception of the brain, are housed within the torso. In the upper chest, the heart and lungs are protected by the rib cage, and the abdomen contains most of the organs responsible for digestion: the stomach, which breaks down partially digested food via gastric acid; the liver, which produces the bile necessary for digestion; the large and small intestines, which extract nutrients from food; the anus, from which fecal wastes are egested; the rectum, which stores feces; the gallbladder, which stores and concentrates bile; the kidneys, which produce urine; the ureters, which pass it to the bladder for storage; and the urethra, which excretes urine and, in males, also conveys semen. Finally, the pelvic region houses both the male and female reproductive organs. Major muscle groups The torso also harbours many of the main groups of muscles in the tetrapod body, including the pectoral, abdominal, lateral and epaxial muscles. Nerve supply The organs, muscles, and other contents of the torso are supplied by nerves, which mainly originate as nerve roots from the thoracic and lumbar parts of the spinal cord. Some organs also receive a nerve supply from the vagus nerve. Sensation to the skin is provided by the lateral and dorsal cutaneous branches.
Biology and health sciences
External anatomy and regions of the body
Biology
13947297
https://en.wikipedia.org/wiki/Buttocks
Buttocks
The buttocks (singular: buttock) are two rounded portions of the exterior anatomy of most mammals, located on the posterior of the pelvic region. In humans, the buttocks are located between the lower back and the perineum. They are composed of a layer of exterior skin and underlying subcutaneous fat superimposed on the left and right gluteus maximus and gluteus medius muscles. The two gluteus maximus muscles are the largest muscles in the human body. They are responsible for movements such as straightening the body into the upright (standing) posture when it is bent at the waist; maintaining the body in the upright posture by keeping the hip joints extended; and propelling the body forward via further leg (hip) extension when walking or running. In many cultures, the buttocks play a role in sexual attraction. Many cultures have also used the buttocks as a primary target for corporal punishment, as the buttocks' layer of subcutaneous fat offers protection against injury while still allowing for the infliction of pain. Structure The buttocks are formed by the masses of the gluteal muscles or "glutes" (the gluteus maximus muscle and the gluteus medius muscle) superimposed by a layer of fat. The superior aspect of the buttock ends at the iliac crest, and the lower aspect is outlined by the horizontal gluteal crease. The gluteus maximus has two insertion points: the superior portion of the linea aspera of the femur, and the superior portion of the iliotibial tract. The masses of the gluteus maximus muscle are separated by an intermediate intergluteal cleft or "crack" in which the anus is situated. The buttocks allow primates to sit upright without resting their weight on their feet as four-legged animals do. Females of certain species of baboon have red buttocks that blush to attract males. In the case of humans, females tend to have proportionally wider and thicker buttocks due to higher subcutaneous fat and proportionally wider hips. In humans the buttocks also have a role in propelling the body forward and in aiding bowel movement. Some baboons and all gibbons, though otherwise fur-covered, have characteristic naked callosities on their buttocks. While human children generally have smooth buttocks, mature males and females have varying degrees of hair growth, as on other parts of their body. Females may have hair growth in the gluteal cleft (including around the anus), sometimes extending laterally onto the lower aspect of the cheeks. Males may have hair growth over some or all of the buttocks. Names The Latin name for the buttocks is nates (classical pronunciation nătes), which is plural; the singular, natis (buttock), is rarely used. There are many colloquial terms for them.
Biology and health sciences
External anatomy and regions of the body
Biology
2367035
https://en.wikipedia.org/wiki/Cumulus%20congestus%20cloud
Cumulus congestus cloud
Cumulus congestus or towering cumulus clouds are a species of cumulus that can be based in the low- to middle-height ranges. They achieve considerable vertical development in areas of deep, moist convection. They are an intermediate stage between cumulus mediocris and cumulonimbus, sometimes producing rainshowers, snow, or ice pellets. Precipitation that evaporates before reaching the surface is virga. Description Cumulus congestus clouds are characteristic of unstable regions of the atmosphere that are undergoing convection. They are often characterized by sharp outlines and great vertical development. Since strong updrafts produce (and primarily compose) them, the clouds are typically taller than they are wide; cloud tops can reach , or higher in the tropics. Cumulus congestus clouds are generally formed by the development of cumulus mediocris, though they can also be formed from altocumulus castellanus or stratocumulus castellanus, which are forms of cumulus castellanus. The congestus species of cloud can only be found in the genus cumulus and is designated as towering cumulus (TCu) by the International Civil Aviation Organization (ICAO). Congestus clouds are capable of producing severe turbulence and showers of moderate to heavy intensity. This species is classified as vertical or multi-étage and is coded CL2 in the synop report. These clouds are usually too large and opaque to have any opacity or pattern-based varieties. Congestus and especially cumulonimbus are hazardous to aviation. An approaching weather front often brings mid-level clouds (e.g. altostratus or altocumulus), which, when expansive and dense, reduce insolation and keep cumulus from reaching the congestus stage. Occasionally however, particularly if the air below the mid-level cloud is very warm or unstable, some of the cumuli may become congestus and their tops may rise above the mid-level cloud layer, sometimes resulting in showers ahead of the main rainband. This is often a sign that the approaching front contains at least a few cumulonimbi amongst the nimbostratus rain clouds, and therefore any rain may be accompanied by thunderstorms. Cumulus congestus will develop into cumulonimbus calvus under conditions of sufficient instability. This transformation can be seen by the presence of smooth, fibrous, or striated aspects assumed by the cloud's upper part. While all congestus produce showers, this development could produce heavy precipitation. A flammagenitus cloud, or pyrocumulus (FgCu or FgCu con), is a rapidly growing convective cloud associated with volcanic eruptions and large-scale fires (typically wildfires). Pyrocumulus congestus may thus form under those special circumstances that can also cause severe turbulence. Cumulus congestus can also be associated with fair-weather waterspouts, which form when rotation at the open water surface is stretched and tightened under the cloud's updraft. Landspouts most often form under congestus as well. Both of these non-mesocyclone associated tornadoes typically dissipate when a more pronounced precipitation shaft forms and the downdraft cuts off this process. In highly sheared environments or within the flanking line of a supercell, congestus can rotate and, on rare occasions, produce mesocyclonic-type tornadoes, with waterspouts and landspouts emanating from misocyclones (a related but distinct process). Turkey tower Turkey tower is a slang term for a narrow, tall, individual towering cloud that develops from a small cumulus cloud and suddenly falls apart.
Sudden development of turkey towers could signify the breaking or weakening of a capping inversion, and an area where these consistently form is an "agitated area", a term that applies to congestus generally.
Physical sciences
Clouds
Earth science
2367828
https://en.wikipedia.org/wiki/Tin%28IV%29%20oxide
Tin(IV) oxide
Tin(IV) oxide, also known as stannic oxide, is the inorganic compound with the formula SnO2. The mineral form of SnO2 is called cassiterite, and this is the main ore of tin. With many other names, this oxide of tin is an important material in tin chemistry. It is a colourless, diamagnetic, amphoteric solid. Structure Tin(IV) oxide crystallises with the rutile structure. As such the tin atoms are six coordinate and the oxygen atoms three coordinate. SnO2 is usually regarded as an oxygen-deficient n-type semiconductor. Hydrous forms of SnO2 have been described as stannic acid. Such materials appear to be hydrated particles of SnO2 where the composition reflects the particle size. Preparation Tin(IV) oxide occurs naturally. Synthetic tin(IV) oxide is produced by burning tin metal in air. Annual production is in the range of 10 kilotons. SnO2 is reduced industrially to the metal with carbon in a reverberatory furnace at 1200–1300 °C. Amphoterism Although SnO2 is insoluble in water, it is amphoteric, dissolving in base and acid. "Stannic acid" refers to hydrated tin (IV) oxide, SnO2, which is also called "stannic oxide." Tin oxides dissolve in acids. Halogen acids attack SnO2 to give hexahalostannates, such as [SnI6]2−. One report describes reacting a sample in refluxing HI for many hours. SnO2 + 6 HI → H2SnI6 + 2 H2O Similarly, SnO2 dissolves in sulfuric acid to give the sulfate: SnO2 + 2 H2SO4 → Sn(SO4)2 + 2 H2O The latter compound can add additional hydrogen sulfate ligands to give hexahydrogensulfatostannic acid. SnO2 dissolves in strong bases to give "stannates," with the nominal formula Na2SnO3. Dissolving the solidified SnO2/NaOH melt in water gives Na2[Sn(OH)6], "preparing salt," which is used in the dye industry. Uses In conjunction with vanadium oxide, it is used as a catalyst for the oxidation of aromatic compounds in the synthesis of carboxylic acids and acid anhydrides. Ceramic glazes Tin(IV) oxide has long been used as an opacifier and as a white colorant in ceramic glazes.'The Glazer's Book' – 2nd edition. A.B.Searle.The Technical Press Limited. London. 1935. This has probably led to the discovery of the pigment lead-tin-yellow, which was produced using tin(IV) oxide as a compound. The use of tin(IV) oxide has been particularly common in glazes for earthenware, sanitaryware and wall tiles; see the articles tin-glazing and Tin-glazed pottery. Tin oxide remains in suspension in vitreous matrix of the fired glazes, and, with its high refractive index being sufficiently different from the matrix, light is scattered, and hence increases the opacity of the glaze. The degree of dissolution increases with the firing temperature, and hence the extent of opacity diminishes. Although dependent on the other constituents the solubility of tin oxide in glaze melts is generally low. Its solubility is increased by Na2O, K2O and B2O3, and reduced by CaO, BaO, ZnO, Al2O3, and to a limited extent PbO. SnO2 has been used as pigment in the manufacture of glasses, enamels and ceramic glazes. Pure SnO2 gives a milky white colour; other colours are achieved when mixed with other metallic oxides e.g. V2O5 yellow; Cr2O3 pink; and Sb2O5 grey blue. Dyes This oxide of tin has been utilized as a mordant in the dyeing process since ancient Egypt. A German by the name of Kuster first introduced its use to London in 1533 and by means of it alone, the color scarlet was produced there. 
Polishing Tin(IV) oxide can be used as a polishing powder, sometimes in mixtures with lead oxide, for polishing glass, jewelry, marble and silver. Tin(IV) oxide for this use is sometimes called "putty powder" or "jeweler's putty". Glass coatings SnO2 coatings can be applied using chemical vapour deposition, employing SnCl4 or organotin trihalides (e.g. butyltin trichloride) as the volatile agent. This technique is used to coat glass bottles with a thin (<0.1 μm) layer of SnO2, which helps to adhere a subsequent, protective polymer coating such as polyethylene to the glass. Thicker layers doped with Sb or F ions are electrically conducting and are used in electroluminescent devices and photovoltaics. Gas sensing SnO2 is used in sensors of combustible gases, including carbon monoxide detectors. In these, the sensor area is heated to a constant temperature (a few hundred °C) and, in the presence of a combustible gas, the electrical resistivity drops. Room-temperature gas sensors are also being developed using reduced graphene oxide-SnO2 composites (e.g. for ethanol detection). Doping with various compounds has been investigated (e.g. with CuO). Doping with cobalt and manganese gives a material that can be used in e.g. high-voltage varistors. Tin(IV) oxide can be doped with the oxides of iron or manganese.
Physical sciences
Oxide salts
Chemistry
3254510
https://en.wikipedia.org/wiki/Scala%20%28programming%20language%29
Scala (programming language)
Scala ( ) is a strong statically typed high-level general-purpose programming language that supports both object-oriented programming and functional programming. Designed to be concise, many of Scala's design decisions are intended to address criticisms of Java. Scala source code can be compiled to Java bytecode and run on a Java virtual machine (JVM). Scala can also be transpiled to JavaScript to run in a browser, or compiled directly to a native executable. When running on the JVM, Scala provides language interoperability with Java so that libraries written in either language may be referenced directly in Scala or Java code. Like Java, Scala is object-oriented, and uses a syntax termed curly-brace which is similar to the language C. Since Scala 3, there is also an option to use the off-side rule (indenting) to structure blocks, and its use is advised. Martin Odersky has said that this turned out to be the most productive change introduced in Scala 3. Unlike Java, Scala has many features of functional programming languages (like Scheme, Standard ML, and Haskell), including currying, immutability, lazy evaluation, and pattern matching. It also has an advanced type system supporting algebraic data types, covariance and contravariance, higher-order types (but not higher-rank types), anonymous types, operator overloading, optional parameters, named parameters, raw strings, and an experimental exception-only version of algebraic effects that can be seen as a more powerful version of Java's checked exceptions. The name Scala is a portmanteau of scalable and language, signifying that it is designed to grow with the demands of its users. History The design of Scala started in 2001 at the École Polytechnique Fédérale de Lausanne (EPFL) (in Lausanne, Switzerland) by Martin Odersky. It followed on from work on Funnel, a programming language combining ideas from functional programming and Petri nets. Odersky formerly worked on Generic Java, and javac, Sun's Java compiler. After an internal release in late 2003, Scala was released publicly in early 2004 on the Java platform, A second version (v2.0) followed in March 2006. On 17 January 2011, the Scala team won a five-year research grant of over €2.3 million from the European Research Council. On 12 May 2011, Odersky and collaborators launched Typesafe Inc. (later renamed Lightbend Inc.), a company to provide commercial support, training, and services for Scala. Typesafe received a $3 million investment in 2011 from Greylock Partners. Platforms and license Scala runs on the Java platform (Java virtual machine) and is compatible with existing Java programs. As Android applications are typically written in Java and translated from Java bytecode into Dalvik bytecode (which may be further translated to native machine code during installation) when packaged, Scala's Java compatibility makes it well-suited to Android development, the more so when a functional approach is preferred. The reference Scala software distribution, including compiler and libraries, is released under the Apache license. Other compilers and targets Scala.js is a Scala compiler that compiles to JavaScript, making it possible to write Scala programs that can run in web browsers or Node.js. The compiler, in development since 2013, was announced as no longer experimental in 2015 (v0.6). Version v1.0.0-M1 was released in June 2018 and version 1.1.1 in September 2020. 
Scala Native is a Scala compiler that targets the LLVM compiler infrastructure to create executable code that uses a lightweight managed runtime, which uses the Boehm garbage collector. The project is led by Denys Shabalin and had its first release, 0.1, on 14 March 2017. Development of Scala Native began in 2015 with a goal of being faster than just-in-time compilation for the JVM by eliminating the initial runtime compilation of code and also providing the ability to call native routines directly. A reference Scala compiler targeting the .NET Framework and its Common Language Runtime was released in June 2004, but was officially dropped in 2012. Examples "Hello World" example The Hello World program written in Scala 3 has this form: @main def main() = println("Hello, World!") Unlike the stand-alone Hello World application for Java, there is no class declaration and nothing is declared to be static. When the program is stored in file HelloWorld.scala, the user compiles it with the command: $ scalac HelloWorld.scala and runs it with $ scala HelloWorld This is analogous to the process for compiling and running Java code. Indeed, Scala's compiling and executing model is identical to that of Java, making it compatible with Java build tools such as Apache Ant. A shorter version of the "Hello World" Scala program is: println("Hello, World!") Scala includes an interactive shell and scripting support. Saved in a file named HelloWorld2.scala, this can be run as a script using the command: $ scala HelloWorld2.scala Commands can also be entered directly into the Scala interpreter, using the -e option: $ scala -e 'println("Hello, World!")' Expressions can be entered interactively in the REPL: $ scala Welcome to Scala 2.12.2 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_131). Type in expressions for evaluation. Or try :help. scala> List(1, 2, 3).map(x => x * x) res0: List[Int] = List(1, 4, 9) scala> Basic example The following example shows the differences between Java and Scala syntax. The function mathFunction takes an integer, squares it, and then adds the cube root of that number to the natural log of that number, returning the result (i.e., ∛(x²) + ln(x²) for an input x): Some syntactic differences in this code are: Scala does not require semicolons (;) to end statements. Value types are capitalized (sentence case): Int, Double, Boolean instead of int, double, boolean. Parameter and return types follow the name, as in Pascal, rather than precede it as in C. Methods must be preceded by def. Local or class variables must be preceded by val (indicates an immutable variable) or var (indicates a mutable variable). The return operator is unnecessary in a function (although allowed); the value of the last executed statement or expression is normally the function's value. Instead of the Java cast operator (Type) foo, Scala uses foo.asInstanceOf[Type], or a specialized function such as toDouble or toInt. Function or method foo() can also be called as just foo; method thread.send(signo) can also be called as just thread send signo; and method foo.toString() can also be called as just foo toString. These syntactic relaxations are designed to allow support for domain-specific languages. Some other basic syntactic differences: Array references are written like function calls, e.g. array(i) rather than array[i]. (Internally in Scala, the former expands into array.apply(i) which returns the reference) Generic types are written as e.g. List[String] rather than Java's List<String>.
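The mathFunction listing referred to in the Basic example above is not reproduced in this text. A minimal Scala 3 sketch of what such a function might look like, written here only to illustrate the syntactic points just listed (def, inferred types on vals, type annotations after names, and no return keyword); the names mathFunction and demo are illustrative, not taken from the original listing:

def mathFunction(num: Int): Double =
  val numSquare = num * num                     // type Int is inferred for numSquare
  math.cbrt(numSquare) + math.log(numSquare)    // last expression is the function's value

@main def demo(): Unit =
  println(mathFunction(5))                      // prints roughly 6.14

Calling mathFunction(5) squares 5 to 25 and returns the cube root of 25 plus the natural log of 25, matching the verbal description above.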
Instead of the pseudo-type void, Scala has the actual singleton class Unit (see below). Example with classes The following example contrasts the definition of classes in Java and Scala. The code above shows some of the conceptual differences between Java and Scala's handling of classes: Scala has no static variables or methods. Instead, it has singleton objects, which are essentially classes with only one instance. Singleton objects are declared using object instead of class. It is common to place static variables and methods in a singleton object with the same name as the class name, which is then known as a companion object. (The underlying class for the singleton object has a $ appended. Hence, for class Foo with companion object object Foo, under the hood there's a class Foo$ containing the companion object's code, and one object of this class is created, using the singleton pattern.) In place of constructor parameters, Scala has class parameters, which are placed on the class, similar to parameters to a function. When declared with a val or var modifier, fields are also defined with the same name, and automatically initialized from the class parameters. (Under the hood, external access to public fields always goes through accessor (getter) and mutator (setter) methods, which are automatically created. The accessor function has the same name as the field, which is why it's unnecessary in the above example to explicitly declare accessor methods.) Note that alternative constructors can also be declared, as in Java. Code that would go into the default constructor (other than initializing the member variables) goes directly at class level. In Scala it is possible to define operators by using symbols as method names. In place of addPoint, the Scala example defines +=, which is then invoked with infix notation as grid += this. Default visibility in Scala is public. Features (with reference to Java) Scala has the same compiling model as Java and C#, namely separate compiling and dynamic class loading, so that Scala code can call Java libraries. Scala's operational characteristics are the same as Java's. The Scala compiler generates byte code that is nearly identical to that generated by the Java compiler. In fact, Scala code can be decompiled to readable Java code, with the exception of certain constructor operations. To the Java virtual machine (JVM), Scala code and Java code are indistinguishable. The only difference is one extra runtime library, scala-library.jar. Scala adds a large number of features compared with Java, and has some fundamental differences in its underlying model of expressions and types, which make the language theoretically cleaner and eliminate several corner cases in Java. From the Scala perspective, this is practically important because several added features in Scala are also available in C#. Syntactic flexibility As mentioned above, Scala has a good deal of syntactic flexibility, compared with Java. The following are some examples: Semicolons are unnecessary; lines are automatically joined if they begin or end with a token that cannot normally come in this position, or if there are unclosed parentheses or brackets. Any method can be used as an infix operator, e.g. "%d apples".format(num) and "%d apples" format num are equivalent. 
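The class-definition listing contrasting Java and Scala is likewise not reproduced above. As a rough sketch only (not the article's original example), the following Scala 3 code illustrates the points made in that section and the infix-operator point just mentioned: class parameters declared with val, a companion object standing in for Java's statics, a symbolic method name invoked in infix position, and public-by-default visibility. The names Point, origin and pointDemo are invented for this illustration:

class Point(val x: Double, val y: Double):
  def +(other: Point): Point = Point(x + other.x, y + other.y)  // operator defined as an ordinary method
  override def toString = s"($x, $y)"

object Point:                                   // companion object: holds what Java would declare static
  val origin: Point = Point(0.0, 0.0)

@main def pointDemo(): Unit =
  val p = Point(1.0, 2.0) + Point(3.0, 4.0)     // infix call of the + method
  println(p)                                    // prints (4.0, 6.0)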
In fact, arithmetic operators like + and << are treated just like any other methods, since function names are allowed to consist of sequences of arbitrary symbols (with a few exceptions made for things like parens, brackets and braces that must be handled specially); the only special treatment that such symbol-named methods undergo concerns the handling of precedence. Methods apply and update have syntactic short forms. foo()—where foo is a value (singleton object or class instance)—is short for foo.apply(), and foo() = 42 is short for foo.update(42). Similarly, foo(42) is short for foo.apply(42), and foo(4) = 2 is short for foo.update(4, 2). This is used for collection classes and extends to many other cases, such as STM cells. Scala distinguishes between no-parens (def foo = 42) and empty-parens (def foo() = 42) methods. When calling an empty-parens method, the parentheses may be omitted, which is useful when calling into Java libraries that do not know this distinction, e.g., using foo.toString instead of foo.toString(). By convention, a method should be defined with empty-parens when it performs side effects. Method names ending in colon (:) expect the argument on the left-hand-side and the receiver on the right-hand-side. For example, the 4 :: 2 :: Nil is the same as Nil.::(2).::(4), the first form corresponding visually to the result (a list with first element 4 and second element 2). Class body variables can be transparently implemented as separate getter and setter methods. For trait FooLike { var bar: Int }, an implementation may be . The call site will still be able to use a concise foo.bar = 42. The use of curly braces instead of parentheses is allowed in method calls. This allows pure library implementations of new control structures. For example, breakable { ... if (...) break() ... } looks as if breakable was a language defined keyword, but really is just a method taking a thunk argument. Methods that take thunks or functions often place these in a second parameter list, allowing to mix parentheses and curly braces syntax: Vector.fill(4) { math.random } is the same as Vector.fill(4)(math.random). The curly braces variant allows the expression to span multiple lines. For-expressions (explained further down) can accommodate any type that defines monadic methods such as map, flatMap and filter. By themselves, these may seem like questionable choices, but collectively they serve the purpose of allowing domain-specific languages to be defined in Scala without needing to extend the compiler. For example, Erlang's special syntax for sending a message to an actor, i.e. actor ! message can be (and is) implemented in a Scala library without needing language extensions. Unified type system Java makes a sharp distinction between primitive types (e.g. int and boolean) and reference types (any class). Only reference types are part of the inheritance scheme, deriving from java.lang.Object. In Scala, all types inherit from a top-level class Any, whose immediate children are AnyVal (value types, such as Int and Boolean) and AnyRef (reference types, as in Java). This means that the Java distinction between primitive types and boxed types (e.g. int vs. Integer) is not present in Scala; boxing and unboxing is completely transparent to the user. Scala 2.10 allows for new value types to be defined by the user. 
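As a small illustration of two of the points above, the sketch below defines apply and update on a user-written class, so instances can be read and assigned with function-call syntax, and a method taking a by-name (thunk) argument in a second parameter list, so a call with curly braces reads like a built-in control structure. Registers, repeat and sugarDemo are invented names for this sketch, not part of any library:

class Registers:
  private val data = scala.collection.mutable.Map[Int, Int]().withDefaultValue(0)
  def apply(i: Int): Int = data(i)                 // enables regs(i)
  def update(i: Int, v: Int): Unit = data(i) = v   // enables regs(i) = v

def repeat(times: Int)(body: => Unit): Unit =      // second parameter list takes a thunk
  var i = 0
  while i < times do { body; i += 1 }

@main def sugarDemo(): Unit =
  val regs = Registers()
  regs(3) = 42                   // desugars to regs.update(3, 42)
  println(regs(3))               // desugars to regs.apply(3); prints 42
  repeat(2) { println("hello") } // reads like a language construct, but is just a method call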
For-expressions Instead of the Java "foreach" loops for looping through an iterator, Scala has for-expressions, which are similar to list comprehensions in languages such as Haskell, or a combination of list comprehensions and generator expressions in Python. For-expressions using the yield keyword allow a new collection to be generated by iterating over an existing one, returning a new collection of the same type. They are translated by the compiler into a series of map, flatMap and filter calls. Where yield is not used, the code approximates to an imperative-style loop, by translating to foreach. A simple example is: val s = for (x <- 1 to 25 if x*x > 50) yield 2*x The result of running it is the following vector: Vector(16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50) (Note that the expression 1 to 25 is not special syntax. The method to is rather defined in the standard Scala library as an extension method on integers, using a technique known as implicit conversions that allows new methods to be added to existing types.) A more complex example of iterating over a map is: // Given a map specifying Twitter users mentioned in a set of tweets, // and number of times each user was mentioned, look up the users // in a map of known politicians, and return a new map giving only the // Democratic politicians (as objects, rather than strings). val dem_mentions = for (mention, times) <- mentions account <- accounts.get(mention) if account.party == "Democratic" yield (account, times) Expression (mention, times) <- mentions is an example of pattern matching (see below). Iterating over a map returns a set of key-value tuples, and pattern-matching allows the tuples to easily be destructured into separate variables for the key and value. Similarly, the result of the comprehension also returns key-value tuples, which are automatically built back up into a map because the source object (from the variable mentions) is a map. Note that if mentions instead held a list, set, array or other collection of tuples, exactly the same code above would yield a new collection of the same type. Functional tendencies While supporting all of the object-oriented features available in Java (and in fact, augmenting them in various ways), Scala also provides a large number of capabilities that are normally found only in functional programming languages. Together, these features allow Scala programs to be written in an almost completely functional style and also allow functional and object-oriented styles to be mixed. Examples are: No distinction between statements and expressions Type inference Anonymous functions with capturing semantics (i.e., closures) Immutable variables and objects Lazy evaluation Delimited continuations (since 2.8) Higher-order functions Nested functions Currying Pattern matching Algebraic data types (through case classes) Tuples Everything is an expression Unlike C or Java, but similar to languages such as Lisp, Scala makes no distinction between statements and expressions. All statements are in fact expressions that evaluate to some value. Functions that would be declared as returning void in C or Java, and statements like while that logically do not return a value, are in Scala considered to return the type Unit, which is a singleton type, with only one object of that type. Functions and operators that never return at all (e.g. 
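To make the translation described above concrete, the following sketch (reusing the 1 to 25 range from the earlier example) compares a for-expression with a hand-written chain of withFilter and map calls; with a single generator and a guard, the two forms produce the same vector:

@main def forDesugar(): Unit =
  val viaFor   = for (x <- 1 to 25 if x * x > 50) yield 2 * x
  val viaCalls = (1 to 25).withFilter(x => x * x > 50).map(x => 2 * x)
  println(viaFor == viaCalls)   // true: both are Vector(16, 18, ..., 50)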
the throw operator or a function that always exits non-locally using an exception) logically have return type Nothing, a special type containing no objects; that is, a bottom type, i.e. a subclass of every possible type. (This in turn makes type Nothing compatible with every type, allowing type inference to function correctly.) Similarly, an if-then-else "statement" is actually an expression, which produces a value, i.e. the result of evaluating one of the two branches. This means that such a block of code can be inserted wherever an expression is desired, obviating the need for a ternary operator in Scala: For similar reasons, return statements are unnecessary in Scala, and in fact are discouraged. As in Lisp, the last expression in a block of code is the value of that block of code, and if the block of code is the body of a function, it will be returned by the function. To make it clear that all functions are expressions, even methods that return Unit are written with an equals sign def printValue(x: String): Unit = println("I ate a %s".format(x)) or equivalently (with type inference, and omitting the unnecessary newline): def printValue(x: String) = println("I ate a %s" format x) Type inference Due to type inference, the type of variables, function return values, and many other expressions can typically be omitted, as the compiler can deduce it. Examples are val x = "foo" (for an immutable constant or immutable object) or var x = 1.5 (for a variable whose value can later be changed). Type inference in Scala is essentially local, in contrast to the more global Hindley-Milner algorithm used in Haskell, ML and other more purely functional languages. This is done to facilitate object-oriented programming. The result is that certain types still need to be declared (most notably, function parameters, and the return types of recursive functions), e.g. def formatApples(x: Int) = "I ate %d apples".format(x) or (with a return type declared for a recursive function) def factorial(x: Int): Int = if x == 0 then 1 else x * factorial(x - 1) Anonymous functions In Scala, functions are objects, and a convenient syntax exists for specifying anonymous functions. An example is the expression x => x < 2, which specifies a function with one parameter, that compares its argument to see if it is less than 2. It is equivalent to the Lisp form (lambda (x) (< x 2)). Note that neither the type of x nor the return type need be explicitly specified, and can generally be inferred by type inference; but they can be explicitly specified, e.g. as (x: Int) => x < 2 or even (x: Int) => (x < 2): Boolean. Anonymous functions behave as true closures in that they automatically capture any variables that are lexically available in the environment of the enclosing function. Those variables will be available even after the enclosing function returns, and unlike in the case of Java's anonymous inner classes do not need to be declared as final. (It is even possible to modify such variables if they are mutable, and the modified value will be available the next time the anonymous function is called.) An even shorter form of anonymous function uses placeholder variables: For example, the following: list map { x => sqrt(x) } can be written more concisely as list map { sqrt(_) } or even list map sqrt Immutability Scala enforces a distinction between immutable and mutable variables. Mutable variables are declared using the var keyword and immutable values are declared using the val keyword. 
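A short sketch of the two points just made, namely that if-then-else is itself an expression (removing the need for a ternary operator) and that val and var mark immutable and mutable bindings respectively; the names are purely illustrative:

@main def exprDemo(): Unit =
  val limit = 10                                         // immutable binding
  var count = 12                                         // mutable binding
  val label = if count > limit then "over" else "under"  // the if yields a value
  println(label)                                         // prints over
  count += 1                                             // allowed: count is a var
  // limit += 1                                          // would not compile: limit is a val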
A variable declared using the val keyword cannot be reassigned in the same way that a variable declared using the final keyword can't be reassigned in Java. vals are only shallowly immutable, that is, an object referenced by a val is not guaranteed to itself be immutable. Immutable classes are encouraged by convention however, and the Scala standard library provides a rich set of immutable collection classes. Scala provides mutable and immutable variants of most collection classes, and the immutable version is always used unless the mutable version is explicitly imported. The immutable variants are persistent data structures that always return an updated copy of an old object instead of updating the old object destructively in place. An example of this is immutable linked lists where prepending an element to a list is done by returning a new list node consisting of the element and a reference to the list tail. Appending an element to a list can only be done by prepending all elements in the old list to a new list with only the new element. In the same way, inserting an element in the middle of a list will copy the first half of the list, but keep a reference to the second half of the list. This is called structural sharing. This allows for very easy concurrency — no locks are needed as no shared objects are ever modified. Lazy (non-strict) evaluation Evaluation is strict ("eager") by default. In other words, Scala evaluates expressions as soon as they are available, rather than as needed. However, it is possible to declare a variable non-strict ("lazy") with the lazy keyword, meaning that the code to produce the variable's value will not be evaluated until the first time the variable is referenced. Non-strict collections of various types also exist (such as the type Stream, a non-strict linked list), and any collection can be made non-strict with the view method. Non-strict collections provide a good semantic fit to things like server-produced data, where the evaluation of the code to generate later elements of a list (that in turn triggers a request to a server, possibly located somewhere else on the web) only happens when the elements are actually needed. Tail recursion Functional programming languages commonly provide tail call optimization to allow for extensive use of recursion without stack overflow problems. Limitations in Java bytecode complicate tail call optimization on the JVM. In general, a function that calls itself with a tail call can be optimized, but mutually recursive functions cannot. Trampolines have been suggested as a workaround. Trampoline support has been provided by the Scala library with the object scala.util.control.TailCalls since Scala 2.8.0 (released 14 July 2010). A function may optionally be annotated with @tailrec, in which case it will not compile unless it is tail recursive. An example of this optimization could be implemented using the factorial definition. 
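As a brief sketch of the trampoline approach mentioned above, scala.util.control.TailCalls can be used along the following lines (the mutually recursive isEven/isOdd functions are purely illustrative):

import scala.util.control.TailCalls.*

def isEven(xs: List[Int]): TailRec[Boolean] =
  if xs.isEmpty then done(true) else tailcall(isOdd(xs.tail))

def isOdd(xs: List[Int]): TailRec[Boolean] =
  if xs.isEmpty then done(false) else tailcall(isEven(xs.tail))

isEven((1 to 100000).toList).result // evaluates to true without deepening the call stack

Each tailcall suspends the next step as an object on the heap, so the mutual recursion runs in constant stack space.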
Returning to the factorial definition mentioned above, the plain recursive version:

def factorial(n: Int): Int =
  if n == 0 then 1
  else n * factorial(n - 1)

could be rewritten in the tail-recursive form:

@tailrec
def factorial(n: Int, accum: Int): Int =
  if n == 0 then accum
  else factorial(n - 1, n * accum)

However, this could compromise composability with other functions because of the extra accumulator argument in its signature, so it is common to hide the accumulator in a nested helper function, preserving the original signature:

def factorial(n: Int): Int =
  @tailrec
  def loop(current: Int, accum: Int): Int =
    if current == 0 then accum
    else loop(current - 1, current * accum)
  loop(n, 1) // call the nested helper with the initial accumulator
end factorial

This ensures tail call optimization and thus prevents a stack overflow error.

Case classes and pattern matching
Scala has built-in support for pattern matching, which can be thought of as a more sophisticated, extensible version of a switch statement, where arbitrary data types can be matched (rather than just simple types like integers, Booleans and strings), including arbitrary nesting. A special type of class known as a case class is provided, which includes automatic support for pattern matching and can be used to model the algebraic data types used in many functional programming languages. (From the perspective of Scala, a case class is simply a normal class for which the compiler automatically adds certain behaviors that could also be provided manually, e.g., definitions of methods providing for deep comparisons and hashing, and destructuring a case class on its constructor parameters during pattern matching.) An example of a definition of the quicksort algorithm using pattern matching is this:

def qsort(list: List[Int]): List[Int] = list match
  case Nil => Nil
  case pivot :: tail =>
    val (smaller, rest) = tail.partition(_ < pivot)
    qsort(smaller) ::: pivot :: qsort(rest)

The idea here is that we partition a list into the elements less than a pivot and the elements not less, recursively sort each part, and paste the results together with the pivot in between. This uses the same divide-and-conquer strategy as mergesort and other fast sorting algorithms. The match operator is used to do pattern matching on the object stored in list. Each case expression is tried in turn to see if it will match, and the first match determines the result. In this case, Nil only matches the literal object Nil, but pivot :: tail matches a non-empty list, and simultaneously destructures the list according to the pattern given. In this case, the associated code will have access to a local variable named pivot holding the head of the list, and another variable tail holding the tail of the list. Note that these variables are read-only, and are semantically very similar to variable bindings established using the let operator in Lisp and Scheme. Pattern matching also happens in local variable declarations. Here, the return value of the call to tail.partition is a tuple, in this case two lists. (Tuples differ from other types of containers, e.g. lists, in that they are always of fixed size and the elements can be of differing types, although here they are both the same.) Pattern matching is the easiest way of fetching the two parts of the tuple. The form _ < pivot is a declaration of an anonymous function with a placeholder variable; see the section above on anonymous functions.
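Complementing the quicksort example, a minimal sketch of a case-class hierarchy modelling an algebraic data type (the Shape types below are illustrative only):

sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rectangle(width: Double, height: Double) extends Shape

def area(shape: Shape): Double = shape match
  case Circle(r)       => math.Pi * r * r // constructor pattern destructures the case class
  case Rectangle(w, h) => w * h

area(Circle(1.0)) // => 3.141592653589793

Because the trait is sealed, the compiler can also warn when a match does not cover every case.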
In the qsort example above, the list operators :: (which adds an element onto the beginning of a list, similar to cons in Lisp and Scheme) and ::: (which appends two lists together, similar to append in Lisp and Scheme) both appear. Despite appearances, there is nothing "built-in" about either of these operators. As specified above, any string of symbols can serve as a function name, and a method applied to an object can be written "infix"-style without the period or parentheses. The line above as written:

qsort(smaller) ::: pivot :: qsort(rest)

could also be written thus:

qsort(rest).::(pivot).:::(qsort(smaller))

in more standard method-call notation. (Methods that end with a colon are right-associative and bind to the object to the right.)

Partial functions
In the pattern-matching example above, the body of the match operator is a partial function, which consists of a series of case expressions, with the first matching expression prevailing, similar to the body of a switch statement. Partial functions are also used in the exception-handling portion of a try statement:

try
  ...
catch
  case nfe: NumberFormatException => { println(nfe); List(0) }
  case _ => Nil

Finally, a partial function can be used alone, and the result of calling it is equivalent to doing a match over it. For example, the prior code for quicksort can be written thus:

val qsort: List[Int] => List[Int] =
  case Nil => Nil
  case pivot :: tail =>
    val (smaller, rest) = tail.partition(_ < pivot)
    qsort(smaller) ::: pivot :: qsort(rest)

Here a read-only variable is declared whose type is a function from lists of integers to lists of integers, and it is bound to a partial function. (Note that the single parameter of the partial function is never explicitly declared or named.) However, we can still call this variable exactly as if it were a normal function:

scala> qsort(List(6,2,5,9))
res32: List[Int] = List(2, 5, 6, 9)

Object-oriented extensions
Scala is a pure object-oriented language in the sense that every value is an object. Data types and behaviors of objects are described by classes and traits. Class abstractions are extended by subclassing and by a flexible mixin-based composition mechanism to avoid the problems of multiple inheritance. Traits are Scala's replacement for Java's interfaces. Interfaces in Java versions prior to 8 are highly restricted, able only to contain abstract function declarations. This has led to criticism that providing convenience methods in interfaces is awkward (the same methods must be reimplemented in every implementation), and extending a published interface in a backwards-compatible way is impossible. Traits are similar to mixin classes in that they have nearly all the power of a regular abstract class, lacking only class parameters (Scala's equivalent to Java's constructor parameters), since traits are always mixed in with a class. The super operator behaves specially in traits, allowing traits to be chained using composition in addition to inheritance. The following example is a simple window system:

abstract class Window:
  // abstract
  def draw(): Unit

class SimpleWindow extends Window:
  def draw(): Unit =
    println("in SimpleWindow")
    // draw a basic window

trait WindowDecoration extends Window

trait HorizontalScrollbarDecoration extends WindowDecoration:
  // "abstract override" is needed here for "super()" to work because the parent
  // function is abstract. If it were concrete, regular "override" would be enough.
  abstract override def draw(): Unit =
    println("in HorizontalScrollbarDecoration")
    super.draw() // now draw a horizontal scrollbar

trait VerticalScrollbarDecoration extends WindowDecoration:
  abstract override def draw(): Unit =
    println("in VerticalScrollbarDecoration")
    super.draw() // now draw a vertical scrollbar

trait TitleDecoration extends WindowDecoration:
  abstract override def draw(): Unit =
    println("in TitleDecoration")
    super.draw() // now draw the title bar

A variable may be declared thus:

val mywin = new SimpleWindow with VerticalScrollbarDecoration with HorizontalScrollbarDecoration with TitleDecoration

The result of calling mywin.draw() is:

in TitleDecoration
in HorizontalScrollbarDecoration
in VerticalScrollbarDecoration
in SimpleWindow

In other words, the call to draw first executed the code in TitleDecoration (the last trait mixed in), then (through the super() calls) threaded back through the other mixed-in traits and eventually to the code in SimpleWindow, even though none of the traits inherited from one another. This is similar to the decorator pattern, but is more concise and less error-prone, as it doesn't require explicitly encapsulating the parent window, explicitly forwarding functions whose implementation isn't changed, or relying on run-time initialization of entity relationships. In other languages, a similar effect could be achieved at compile-time with a long linear chain of implementation inheritance, but with the disadvantage compared to Scala that one linear inheritance chain would have to be declared for each possible combination of the mix-ins.

Expressive type system
Scala is equipped with an expressive static type system that mostly enforces the safe and coherent use of abstractions. The type system is, however, not sound. In particular, the type system supports:
Classes and abstract types as object members
Structural types
Path-dependent types
Compound types
Explicitly typed self references
Generic classes
Polymorphic methods
Upper and lower type bounds
Variance
Annotation
Views

Scala is able to infer types by use. This makes most static type declarations optional. Static types need not be explicitly declared unless a compiler error indicates the need. In practice, some static type declarations are included for the sake of code clarity.

Type enrichment
A common technique in Scala, known as "enrich my library" (originally termed "pimp my library" by Martin Odersky in 2006; concerns were raised about this phrasing due to its negative connotations and immaturity), allows new methods to be used as if they were added to existing types. This is similar to the C# concept of extension methods but more powerful, because the technique is not limited to adding methods and can, for instance, be used to implement new interfaces. In Scala, this technique involves declaring an implicit conversion from the type "receiving" the method to a new type (typically, a class) that wraps the original type and provides the additional method. If a method cannot be found for a given type, the compiler automatically searches for any applicable implicit conversions to types that provide the method in question. This technique allows new methods to be added to an existing class using an add-on library such that only code that imports the add-on library gets the new functionality, and all other code is unaffected.
The following example shows the enrichment of type Int with methods isEven and isOdd:

object MyExtensions:
  extension (i: Int)
    def isEven = i % 2 == 0
    def isOdd  = !i.isEven

import MyExtensions.* // bring the extension methods into scope
4.isEven // -> true

Importing the members of MyExtensions brings the extension methods isEven and isOdd into scope.

Concurrency
Scala's standard library includes support for futures and promises, in addition to the standard Java concurrency APIs. Originally, it also included support for the actor model, which is now available as Akka, a separate source-available platform licensed by Lightbend Inc. Akka actors may be distributed or combined with software transactional memory (transactors). Alternative communicating sequential processes (CSP) implementations for channel-based message passing include Communicating Scala Objects and JCSP. An actor is like a thread instance with a mailbox. It can be created by system.actorOf, overriding the receive method to receive messages and using the ! (exclamation point) method to send a message. The following example shows an EchoServer that can receive messages and then print them.

val echoServer = actor(new Act:
  become:
    case msg => println("echo " + msg)
)
echoServer ! "hi"

Scala also comes with built-in support for data-parallel programming in the form of Parallel Collections integrated into its Standard Library since version 2.9.0. The following example shows how to use Parallel Collections to improve performance.

val urls = List("https://scala-lang.org", "https://github.com/scala/scala")

def fromURL(url: String) = scala.io.Source.fromURL(url)
  .getLines().mkString("\n")

val t = System.currentTimeMillis()
urls.par.map(fromURL(_)) // par returns a parallel implementation of the collection
println("time: " + (System.currentTimeMillis - t) + "ms")

Besides futures and promises, actor support, and data parallelism, Scala also supports asynchronous programming with software transactional memory and event streams.

Cluster computing
The most well-known open-source cluster-computing solution written in Scala is Apache Spark. Additionally, Apache Kafka, the publish–subscribe message queue popular with Spark and other stream processing technologies, is written in Scala.

Testing
There are several ways to test code in Scala. ScalaTest supports multiple testing styles and can integrate with Java-based testing frameworks. ScalaCheck is a library similar to Haskell's QuickCheck. specs2 is a library for writing executable software specifications. ScalaMock provides support for testing higher-order and curried functions. JUnit and TestNG are popular testing frameworks written in Java.

Versions

Comparison with other JVM languages
Scala is often compared with Groovy and Clojure, two other programming languages also using the JVM. Substantial differences between these languages exist in the type system, in the extent to which each language supports object-oriented and functional programming, and in the similarity of their syntax to that of Java. Scala is statically typed, while both Groovy and Clojure are dynamically typed. This makes the type system more complex and difficult to understand but allows almost all type errors to be caught at compile-time and can result in significantly faster execution. By contrast, dynamic typing requires more testing to ensure program correctness and generally runs more slowly, in exchange for greater programming flexibility and simplicity.
Regarding speed differences, current versions of Groovy and Clojure allow optional type annotations to help programs avoid the overhead of dynamic typing in cases where types are practically static. This overhead is further reduced when using recent versions of the JVM, which has been enhanced with an invoke dynamic instruction for methods that are defined with dynamically typed arguments. These advances reduce the speed gap between static and dynamic typing, although a statically typed language, like Scala, is still the preferred choice when execution efficiency is very important. Regarding programming paradigms, Scala inherits the object-oriented model of Java and extends it in various ways. Groovy, while also strongly object-oriented, is more focused in reducing verbosity. In Clojure, object-oriented programming is deemphasised with functional programming being the main strength of the language. Scala also has many functional programming facilities, including features found in advanced functional languages like Haskell, and tries to be agnostic between the two paradigms, letting the developer choose between the two paradigms or, more frequently, some combination thereof. Regarding syntax similarity with Java, Scala inherits much of Java's syntax, as is the case with Groovy. Clojure on the other hand follows the Lisp syntax, which is different in both appearance and philosophy. Adoption Language rankings Back in 2013, when Scala was in version 2.10, the ThoughtWorks Technology Radar, which is an opinion based biannual report of a group of senior technologists, recommended Scala adoption in its languages and frameworks category. In July 2014, this assessment was made more specific and now refers to a “Scala, the good parts”, which is described as “To successfully use Scala, you need to research the language and have a very strong opinion on which parts are right for you, creating your own definition of Scala, the good parts.”. In the 2018 edition of the State of Java survey, which collected data from 5160 developers on various Java-related topics, Scala places third in terms of use of alternative languages on the JVM. Relative to the prior year's edition of the survey, Scala's use among alternative JVM languages fell from 28.4% to 21.5%, overtaken by Kotlin, which rose from 11.4% in 2017 to 28.8% in 2018. The Popularity of Programming Language Index, which tracks searches for language tutorials, ranked Scala 15th in April 2018 with a small downward trend, and 17th in Jan 2021. This makes Scala the 3rd most popular JVM-based language after Java and Kotlin, ranked 12th. The RedMonk Programming Language Rankings, which establishes rankings based on the number of GitHub projects and questions asked on Stack Overflow, in January 2021 ranked Scala 14th. Here, Scala was placed inside a second-tier group of languages–ahead of Go, PowerShell, and Haskell, and behind Swift, Objective-C, Typescript, and R. The TIOBE index of programming language popularity employs internet search engine rankings and similar publication counting to determine language popularity. In September 2021, it showed Scala in 31st place. In this ranking, Scala was ahead of Haskell (38th) and Erlang, but below Go (14th), Swift (15th), and Perl (19th). , JVM-based languages such as Clojure, Groovy, and Scala are highly ranked, but still significantly less popular than the original Java language, which is usually ranked in the top three places. 
Companies In April 2009, Twitter announced that it had switched large portions of its backend from Ruby to Scala and intended to convert the rest. Tesla, Inc. uses Akka with Scala in the backend of the Tesla Virtual Power Plant. Thereby, the Actor model is used for representing and operating devices that together with other components make up an instance of the virtual power plant, and Reactive Streams are used for data collection and data processing. Apache Kafka is implemented in Scala with regards to most of its core and other critical parts. It is maintained and extended through the open source project and by the company Confluent. Gilt uses Scala and Play Framework. Foursquare uses Scala and Lift. Coursera uses Scala and Play Framework. Apple Inc. uses Scala in certain teams, along with Java and the Play framework. The Guardian newspaper's high-traffic website guardian.co.uk announced in April 2011 that it was switching from Java to Scala. The New York Times revealed in 2014 that its internal content management system Blackbeard is built using Scala, Akka, and Play. The Huffington Post newspaper started to employ Scala as part of its content delivery system Athena in 2013. Swiss bank UBS approved Scala for general production use. LinkedIn uses the Scalatra microframework to power its Signal API. Meetup uses Unfiltered toolkit for real-time APIs. Remember the Milk uses Unfiltered toolkit, Scala and Akka for public API and real-time updates. Verizon seeking to make "a next-generation framework" using Scala. Airbnb develops open-source machine-learning software "Aerosolve", written in Java and Scala. Zalando moved its technology stack from Java to Scala and Play. SoundCloud uses Scala for its back-end, employing technologies such as Finagle (micro services), Scalding and Spark (data processing). Databricks uses Scala for the Apache Spark Big Data platform. Morgan Stanley uses Scala extensively in their finance and asset-related projects. There are teams within Google and Alphabet Inc. that use Scala, mostly due to acquisitions such as Firebase and Nest. Walmart Canada uses Scala for their back-end platform. Duolingo uses Scala for their back-end module that generates lessons. HMRC uses Scala for many UK Government tax applications. M1 Finance uses Scala for their back-end platform. Criticism In November 2011, Yammer moved away from Scala for reasons that included the learning curve for new team members and incompatibility from one version of the Scala compiler to the next. In March 2015, former VP of the Platform Engineering group at Twitter Raffi Krikorian, stated that he would not have chosen Scala in 2011 due to its learning curve. The same month, LinkedIn SVP Kevin Scott stated their decision to "minimize [their] dependence on Scala".
Technology
Programming languages
null
3255917
https://en.wikipedia.org/wiki/Transmission%20system%20operator
Transmission system operator
A transmission system operator (TSO) is an entity entrusted with transporting energy in the form of natural gas or electrical power on a national or regional level, using fixed infrastructure. The term is defined by the European Commission. The certification procedure for transmission system operators is listed in Article 10 of the Electricity and Gas Directives of 2009. Due to the cost of establishing a transmission infrastructure, such as main power lines or gas main lines and associated connection points, a TSO is usually a natural monopoly, and as such is often subjected to regulations. In electrical power business, a TSO is an operator that transmits electrical power from generation plants over the electrical grid to regional or local electricity distribution operators. In natural gas business, a TSO receives gas from producers, transports it via pipeline through an area and delivers to gas distribution companies. The United States has similar organizational categories: independent system operator (ISO) and regional transmission organization (RTO). Role in electrical power transmission Safety and reliability are a critical issue for transmission system operators, since any failure on their grid or their electrical generation sources might propagate to a very large number of customers, causing personal and property damages. Natural hazards and generation/consumption imbalances are a major cause of concern. To minimize the probability of grid instability and failure, regional or national transmission system operators are interconnected to each other. Electricity market operations The role of the system operator in a wholesale electricity market is to manage the security of the power system in real time and co-ordinate the supply of and demand for electricity, in a manner that avoids fluctuations in frequency or interruptions of supply. The system operator service is normally specified in rules or codes established as part of the electricity market. The system operator function may be owned by the transmission grid company, or may be fully independent. They are often wholly or partly owned by state or national governments. In many cases they are independent of electricity generation companies (upstream) and electricity distribution companies (downstream). They are financed either by the states or countries or by charging a toll proportional to the energy they carry. The system operator is required to maintain a continuous (second-by-second) balance between electricity supply from power stations and demand from consumers, and also ensure the provision of reserves that will allow for sudden contingencies. The system operator achieves this by determining the optimal combination of generating stations and reserve providers for each market trading period, instructing generators when and how much electricity to generate, and managing any contingent events that cause the balance between supply and demand to be disrupted. System operations staff undertake this work using sophisticated energy modelling and communications systems. In addition to its roles of real-time dispatch of generation and managing security, the system operator also carries out investigations and planning to ensure that supply can meet demand and system security can be maintained during future trading periods. Examples of planning work may include coordinating generator and transmission outages, facilitating commissioning of new generating plant and procuring ancillary services to support power system operation. 
Role in gas transmission A gas TSO works for the functioning of the internal market and cross-border trade for gas and to ensure the optimal management, coordinated operation and sound technical evolution of the natural gas transmission network. Some gas TSOs also provide the marketplace for gas trading.
Technology
Electricity transmission and distribution
null
201605
https://en.wikipedia.org/wiki/Sampling%20%28signal%20processing%29
Sampling (signal processing)
In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of "samples". A sample is a value of the signal at a point in time and/or space; this definition differs from the term's usage in statistics, which refers to a set of such values. A sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points. The original signal can be reconstructed from a sequence of samples, up to the Nyquist limit, by passing the sequence of samples through a reconstruction filter.

Theory
Functions of space, time, or any other dimension can be sampled, and similarly in two or more dimensions. For functions that vary with time, let s(t) be a continuous function (or "signal") to be sampled, and let sampling be performed by measuring the value of the continuous function every T seconds, which is called the sampling interval or sampling period. Then the sampled function is given by the sequence s(nT), for integer values of n. The sampling frequency or sampling rate, fs, is the average number of samples obtained in one second, thus fs = 1/T, with the unit samples per second, sometimes referred to as hertz, for example 48 kHz is 48,000 samples per second. Reconstructing a continuous function from samples is done by interpolation algorithms. The Whittaker–Shannon interpolation formula is mathematically equivalent to an ideal low-pass filter whose input is a sequence of Dirac delta functions that are modulated (multiplied) by the sample values. When the time interval between adjacent samples is a constant T, the sequence of delta functions is called a Dirac comb. Mathematically, the modulated Dirac comb is equivalent to the product of the comb function with s(t). That mathematical abstraction is sometimes referred to as impulse sampling. Most sampled signals are not simply stored and reconstructed. The fidelity of a theoretical reconstruction is a common measure of the effectiveness of sampling. That fidelity is reduced when s(t) contains frequency components whose cycle length (period) is less than two sample intervals (see Aliasing). The corresponding frequency limit, in cycles per second (hertz), is 1/2 cycle/sample × fs samples/second = fs/2, known as the Nyquist frequency of the sampler. Therefore, s(t) is usually the output of a low-pass filter, functionally known as an anti-aliasing filter. Without an anti-aliasing filter, frequencies higher than the Nyquist frequency will influence the samples in a way that is misinterpreted by the interpolation process.

Practical considerations
In practice, the continuous signal is sampled using an analog-to-digital converter (ADC), a device with various physical limitations. This results in deviations from the theoretically perfect reconstruction, collectively referred to as distortion. Various types of distortion can occur, including:
Aliasing. Some amount of aliasing is inevitable because only theoretical, infinitely long, functions can have no frequency content above the Nyquist frequency. Aliasing can be made arbitrarily small by using a sufficiently large order of the anti-aliasing filter.
Aperture error results from the fact that the sample is obtained as a time average within a sampling region, rather than just being equal to the signal value at the sampling instant.
In a capacitor-based sample and hold circuit, aperture errors are introduced by multiple mechanisms. For example, the capacitor cannot instantly track the input signal and the capacitor can not instantly be isolated from the input signal. Jitter or deviation from the precise sample timing intervals. Noise, including thermal sensor noise, analog circuit noise, etc.. Slew rate limit error, caused by the inability of the ADC input value to change sufficiently rapidly. Quantization as a consequence of the finite precision of words that represent the converted values. Error due to other non-linear effects of the mapping of input voltage to converted output value (in addition to the effects of quantization). Although the use of oversampling can completely eliminate aperture error and aliasing by shifting them out of the passband, this technique cannot be practically used above a few GHz, and may be prohibitively expensive at much lower frequencies. Furthermore, while oversampling can reduce quantization error and non-linearity, it cannot eliminate these entirely. Consequently, practical ADCs at audio frequencies typically do not exhibit aliasing, aperture error, and are not limited by quantization error. Instead, analog noise dominates. At RF and microwave frequencies where oversampling is impractical and filters are expensive, aperture error, quantization error and aliasing can be significant limitations. Jitter, noise, and quantization are often analyzed by modeling them as random errors added to the sample values. Integration and zero-order hold effects can be analyzed as a form of low-pass filtering. The non-linearities of either ADC or DAC are analyzed by replacing the ideal linear function mapping with a proposed nonlinear function. Applications Audio sampling Digital audio uses pulse-code modulation (PCM) and digital signals for sound reproduction. This includes analog-to-digital conversion (ADC), digital-to-analog conversion (DAC), storage, and transmission. In effect, the system commonly referred to as digital is in fact a discrete-time, discrete-level analog of a previous electrical analog. While modern systems can be quite subtle in their methods, the primary usefulness of a digital system is the ability to store, retrieve and transmit signals without any loss of quality. When it is necessary to capture audio covering the entire 20–20,000 Hz range of human hearing such as when recording music or many types of acoustic events, audio waveforms are typically sampled at 44.1 kHz (CD), 48 kHz, 88.2 kHz, or 96 kHz. The approximately double-rate requirement is a consequence of the Nyquist theorem. Sampling rates higher than about 50 kHz to 60 kHz cannot supply more usable information for human listeners. Early professional audio equipment manufacturers chose sampling rates in the region of 40 to 50 kHz for this reason. There has been an industry trend towards sampling rates well beyond the basic requirements: such as 96 kHz and even 192 kHz Even though ultrasonic frequencies are inaudible to humans, recording and mixing at higher sampling rates is effective in eliminating the distortion that can be caused by foldback aliasing. Conversely, ultrasonic sounds may interact with and modulate the audible part of the frequency spectrum (intermodulation distortion), degrading the fidelity. One advantage of higher sampling rates is that they can relax the low-pass filter design requirements for ADCs and DACs, but with modern oversampling delta-sigma-converters this advantage is less important. 
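To make the foldback-aliasing point concrete, a small illustrative sketch (the tone frequencies are arbitrary): at a 48 kHz sampling rate, a 49 kHz tone produces essentially the same sample values as a 1 kHz tone, so once sampled the two cannot be told apart.

val fs = 48000.0 // sampling rate in Hz
def samples(f: Double, n: Int): Seq[Double] =
  (0 until n).map(k => math.sin(2 * math.Pi * f * k / fs))

// sin(2π*49000*k/fs) = sin(2πk + 2π*1000*k/fs) = sin(2π*1000*k/fs) for fs = 48000
samples(1000.0, 8).zip(samples(49000.0, 8))
  .forall { case (a, b) => math.abs(a - b) < 1e-9 } // => true: 49 kHz has folded back onto 1 kHz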
The Audio Engineering Society recommends 48 kHz sampling rate for most applications but gives recognition to 44.1 kHz for CD and other consumer uses, 32 kHz for transmission-related applications, and 96 kHz for higher bandwidth or relaxed anti-aliasing filtering. Both Lavry Engineering and J. Robert Stuart state that the ideal sampling rate would be about 60 kHz, but since this is not a standard frequency, recommend 88.2 or 96 kHz for recording purposes. A more complete list of common audio sample rates is: Bit depth Audio is typically recorded at 8-, 16-, and 24-bit depth, which yield a theoretical maximum signal-to-quantization-noise ratio (SQNR) for a pure sine wave of, approximately, 49.93 dB, 98.09 dB and 122.17 dB. CD quality audio uses 16-bit samples. Thermal noise limits the true number of bits that can be used in quantization. Few analog systems have signal to noise ratios (SNR) exceeding 120 dB. However, digital signal processing operations can have very high dynamic range, consequently it is common to perform mixing and mastering operations at 32-bit precision and then convert to 16- or 24-bit for distribution. Speech sampling Speech signals, i.e., signals intended to carry only human speech, can usually be sampled at a much lower rate. For most phonemes, almost all of the energy is contained in the 100 Hz – 4 kHz range, allowing a sampling rate of 8 kHz. This is the sampling rate used by nearly all telephony systems, which use the G.711 sampling and quantization specifications. Video sampling Standard-definition television (SDTV) uses either 720 by 480 pixels (US NTSC 525-line) or 720 by 576 pixels (UK PAL 625-line) for the visible picture area. High-definition television (HDTV) uses 720p (progressive), 1080i (interlaced), and 1080p (progressive, also known as Full-HD). In digital video, the temporal sampling rate is defined as the frame rateor rather the field raterather than the notional pixel clock. The image sampling frequency is the repetition rate of the sensor integration period. Since the integration period may be significantly shorter than the time between repetitions, the sampling frequency can be different from the inverse of the sample time: 50 Hz – PAL video 60 / 1.001 Hz ~= 59.94 Hz – NTSC video Video digital-to-analog converters operate in the megahertz range (from ~3 MHz for low quality composite video scalers in early games consoles, to 250 MHz or more for the highest-resolution VGA output). When analog video is converted to digital video, a different sampling process occurs, this time at the pixel frequency, corresponding to a spatial sampling rate along scan lines. A common pixel sampling rate is: 13.5 MHz – CCIR 601, D1 video Spatial sampling in the other direction is determined by the spacing of scan lines in the raster. The sampling rates and resolutions in both spatial directions can be measured in units of lines per picture height. Spatial aliasing of high-frequency luma or chroma video components shows up as a moiré pattern. 3D sampling The process of volume rendering samples a 3D grid of voxels to produce 3D renderings of sliced (tomographic) data. The 3D grid is assumed to represent a continuous region of 3D space. Volume rendering is common in medical imaging, X-ray computed tomography (CT/CAT), magnetic resonance imaging (MRI), positron emission tomography (PET) are some examples. It is also used for seismic tomography and other applications. 
Undersampling When a bandpass signal is sampled slower than its Nyquist rate, the samples are indistinguishable from samples of a low-frequency alias of the high-frequency signal. That is often done purposefully in such a way that the lowest-frequency alias satisfies the Nyquist criterion, because the bandpass signal is still uniquely represented and recoverable. Such undersampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF to digital conversion. Oversampling Oversampling is used in most modern analog-to-digital converters to reduce the distortion introduced by practical digital-to-analog converters, such as a zero-order hold instead of idealizations like the Whittaker–Shannon interpolation formula. Complex sampling Complex sampling (or I/Q sampling) is the simultaneous sampling of two different, but related, waveforms, resulting in pairs of samples that are subsequently treated as complex numbers. When one waveform, , is the Hilbert transform of the other waveform, , the complex-valued function, , is called an analytic signal, whose Fourier transform is zero for all negative values of frequency. In that case, the Nyquist rate for a waveform with no frequencies ≥ B can be reduced to just B (complex samples/sec), instead of (real samples/sec). More apparently, the equivalent baseband waveform, , also has a Nyquist rate of , because all of its non-zero frequency content is shifted into the interval . Although complex-valued samples can be obtained as described above, they are also created by manipulating samples of a real-valued waveform. For instance, the equivalent baseband waveform can be created without explicitly computing , by processing the product sequence, , through a digital low-pass filter whose cutoff frequency is . Computing only every other sample of the output sequence reduces the sample rate commensurate with the reduced Nyquist rate. The result is half as many complex-valued samples as the original number of real samples. No information is lost, and the original waveform can be recovered, if necessary.
Technology
Signal processing
null
201629
https://en.wikipedia.org/wiki/Gradian
Gradian
In trigonometry, the gradian, also known as the gon, grad, or grade, is a unit of measurement of an angle, defined as one-hundredth of the right angle; in other words, 100 gradians is equal to 90 degrees. It is equivalent to 1/400 of a turn, 9/10 of a degree, or π/200 of a radian. Measuring angles in gradians (gons) is said to employ the centesimal system of angular measurement, initiated as part of metrication and decimalisation efforts. In continental Europe, the French word centigrade, also known as centesimal minute of arc, was in use for one hundredth of a grade; similarly, the centesimal second of arc was defined as one hundredth of a centesimal arc-minute, analogous to decimal time and the sexagesimal minutes and seconds of arc. The chance of confusion was one reason for the adoption of the term Celsius to replace centigrade as the name of the temperature scale. Gradians (gons) are principally used in surveying (especially in Europe), and to a lesser extent in mining and geology. The gon (gradian) is a legally recognised unit of measurement in the European Union and in Switzerland. However, this unit is not part of the International System of Units (SI).

History and name
The unit originated in France in connection with the French Revolution as the grade, along with the metric system, hence it is occasionally referred to as a metric degree. Due to confusion with the existing term grad(e) in some northern European countries (meaning a standard degree, 1/360 of a turn), the name gon was later adopted, first in those regions, and later as the international standard. In France, it was also called . In German, the unit was formerly also called Neugrad (new degree), whereas the standard degree was referred to as Altgrad (old degree); likewise in Danish, Swedish and Norwegian (also gradian), and in Icelandic. Although attempts at a general introduction were made, the unit was only adopted in some countries, and for specialised areas such as surveying, mining and geology. Today, the degree, 1/360 of a turn, or the mathematically more convenient radian, 1/(2π) of a turn (used in the SI system of units), is generally used instead. In the 1990s, most scientific calculators offered the gon (gradian), as well as radians and degrees, for their trigonometric functions. In the 2010s, some scientific calculators lacked support for gradians.

Symbol
The international standard symbol for this unit is "gon" (see ISO 31-1, Annex B). Other symbols used in the past include "gr", "grd", and "g", the last sometimes written as a superscript, similarly to a degree sign: 50g = 45°. A metric prefix is sometimes used, as in "dgon", "cgon", "mgon", denoting respectively 0.1 gon, 0.01 gon, 0.001 gon. Centesimal arc-minutes and centesimal arc-seconds were also denoted with superscripts c and cc, respectively.

Advantages and disadvantages
Each quadrant is assigned a range of 100 gon, which eases recognition of the four quadrants, as well as arithmetic involving perpendicular or opposite angles.
0° = 0 gradians
90° = 100 gradians
180° = 200 gradians
270° = 300 gradians
360° = 400 gradians
One advantage of this unit is that right angles to a given angle are easily determined.
If one is sighting down a compass course of 117 gon, the direction to one's left is 17 gon, to one's right 217 gon, and behind one 317 gon. A disadvantage is that the common angles of 30° and 60° in geometry must be expressed in fractions (as 33 1/3 gon and 66 2/3 gon respectively).

Conversion

Relation to the metre
In the 18th century, the metre was defined as the 10-millionth part of a quarter meridian. Thus, 1 gon corresponds to an arc length along the Earth's surface of approximately 100 kilometres; 1 centigon to 1 kilometre; 10 microgons to 1 metre. (The metre has been redefined with increasing precision since then.)

Relation to the SI system of units
The gradian is not part of the International System of Units (SI). The EU directive on the units of measurement notes that the gradian "does not appear in the lists drawn up by the CGPM, CIPM or BIPM." The most recent, 9th edition of the SI Brochure does not mention the gradian at all. The previous edition mentioned it only in a footnote.
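As a worked illustration of the degree-gon conversions above, a minimal sketch (the helper names are arbitrary):

def degToGon(deg: Double): Double = deg * 400.0 / 360.0
def gonToDeg(gon: Double): Double = gon * 360.0 / 400.0

degToGon(90.0)  // => 100.0 (a right angle)
gonToDeg(117.0) // => 105.3 (the compass course in the example above)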
Physical sciences
Angle
Basics and measurement
201635
https://en.wikipedia.org/wiki/Cinereous%20vulture
Cinereous vulture
The cinereous vulture (Aegypius monachus) is a large raptor in the family Accipitridae and distributed through much of temperate Eurasia. It is also known as the black vulture, monk vulture and Eurasian black vulture. With a body length of , across the wings and a maximum weight of , it is the largest Old World vulture and largest member of the Accipitridae family. Aegypius monachus is one of the largest birds of prey and it plays a huge role in its various ecosystems by eating carcasses, which in turn reduces the spread of diseases. The vultures are constantly exposed to many pathogens because of their eating habits. A study of the cinereous vulture's gastric and immune defense systems conducted in 2015 sequenced the bird's entire genome. The study compared cinereous vultures to bald eagles, finding positively selected genetic variations associated with respiration and the ability of the vulture's immune defense responses and gastric acid secretion to digest carcasses. Taxonomy The genus name Aegypius is a Greek word (αἰγυπιός) for 'vulture', or a bird not unlike one; Aelian describes the aegypius as "halfway between a vulture (gyps) and an eagle". Some authorities think this a good description of a lammergeier; others do not. Aegypius is the eponym of the species, whatever it was in ancient Greek. The English name 'black vulture' refers to the plumage colour, while 'monk vulture', a direct translation of its German name Mönchsgeier, refers to the bald head and ruff of neck feathers like a monk's cowl. 'Cinereous vulture' (Latin cineraceus, ash-coloured; pale, whitish grey), was a deliberate attempt to rename it with a new name distinct from the American black vulture. This bird is an Old World vulture, and as such is only distantly related to the New World vultures, which are in a separate family, Cathartidae, of the same order. It is, therefore, not closely related to the much smaller American black vulture (Coragyps atratus) despite the similar name and coloration. Description The cinereous vulture measures in total length with a wingspan. Males can weigh from , whereas females can weigh from . It is thus one of the world's heaviest flying birds. Average weights were long not known to have been published for this species but the median weight figures from two sources were and . However in a Korean study, a large survey of wild cinereous vultures was found to have weighed an average of with a mean total length of , this standing as the only attempt to attain the average sizes of free-flying mature birds of the species, as opposed to nestlings or captive specimens. Unlike most accipitrids, males can broadly overlap in size with the females, although not uncommonly the females may be slightly heavier. These are one of the two largest extant Old World vultures and accipitrids, with similar total length and perhaps wingspans recorded in the Himalayan vulture (Gyps himalayensis), as indicated by broadly similar wing and tail proportions, but the cinereous appears to be slightly heavier as well as slightly larger in tarsus and bill length. Superficially similar but unrelated New World condors can either be of similar wing area and bulk or slightly larger in these aspects. Despite limited genetic variation in the species, body size increases from west to east based on standard measurements, with the birds from southwest Europe (Spain and south France) averaging about 10% smaller than the vultures from central Asia (Manchuria, Mongolia and northern China). 
Among standard measurements, the wing chord is , the tail is and the tarsus is . The cinereous vulture is distinctly dark, with the whole body being brown excepting the pale head in adults, which is covered in fine blackish down. This down is absent in the closely related lappet-faced vulture (Torgos tracheliotos). The skin of the head and neck is bluish-gray and a paler whitish color above the eye. The adult has brown eyes, a purplish cere, a blue-gray bill and pale blue-gray legs. The primary quills are often actually black. From a distance, flying birds can easily appear all black. The immature plumage is sepia-brown above, with a much paler underside than in adults. Immature cinereous vultures have grey down on the head, a pale mauve cere and grey legs. Its massive bill is one of the largest of any living accipitrid, a feature enhanced by the relatively small skull of the species. The exposed culmen of the cinereous vulture measures . Only their cousin, the lappet-faced vulture, with a bill length of up to about , can rival or outsize the bill of the cinereous. The wings, with serrated leading edges, are held straight or slightly arched in flight and are broad, sometimes referred to as "barn door wings". Its flight is slow and buoyant, with deep, heavy flaps when necessary. The combination of huge size and dark coloration renders the cinereous vulture relatively distinct, especially against smaller raptors such as eagles or buzzards. The most similar-shaped species, the lappet-faced vulture (with which there might be limited range overlap in the southern Middle East), is distinguished by its bare, pinkish head and contrasting plumage. On the lappet-face, the thighs and belly are whitish in adult birds against black to brownish over the remainder of the plumage. All potential Gyps vultures are distinguished by having paler, often streaky plumage, with bulging wing primaries giving them a less evenly broad-winged form. Cinereous vultures are generally very silent, with a few querulous mewing, roaring or guttural cries solely between adults and their offspring at the nest site. Distribution and habitat The cinereous vulture is a Eurasian species. The western limits of its range are in Spain and inland Portugal, with a reintroduced population in south France. They are found discontinuously to Greece, Turkey and throughout the central Middle East. Their range continues through Afghanistan eastwards to northern India to its eastern limits in central Asia, where they breed in northern Manchuria, Mongolia and Korea. Their range is fragmented especially throughout their European range. It is generally a permanent resident except in those parts of its range where hard winters cause limited altitudinal movement and for juveniles when they reach breeding maturity. In the eastern limits of its range, birds from the northernmost reaches may migrate down to southern Korea and China. A limited migration has also been reported in the Middle East but is not common. This vulture is a bird of hilly, mountainous areas, especially favoring dry semi-open habitats such as meadows at high altitudes over much of the range. Nesting usually occurs near the tree line in the mountains. They are always associated with undisturbed, remote areas with limited human disturbance. They forage for carcasses over various kinds of terrain, including steppe, other grasslands, open woodlands, along riparian habitats or any kind or gradient of mountainous habitat. 
In their current European range and through the Caucasus and Middle East, cinereous vultures are found from in elevation, while in their Asian distribution, they are typically found at higher elevations. Two habitat types were found to be preferred by the species in China and Tibet. Some cinereous vultures in these areas live in mountainous forests and shrubland from , while the others preferred arid or semi-arid alpine meadows and grasslands at in elevation. This species can fly at a very high altitude. One cinereous vulture was observed at an elevation of on Mount Everest. It has a specialised haemoglobin alpha subunit of high oxygen affinity which makes it possible to take up oxygen efficiently despite the low partial pressure in the upper troposphere. Behaviour The cinereous vulture is a largely solitary bird, being found alone or in pairs much more frequently than most other Old World vultures. At large carcasses or feeding sites, small groups may congregate. Such groups can rarely include up to 12 to 20 vultures, with some older reports of up to 30 or 40. Breeding In Europe, the cinereous vulture return to the nesting ground in January or February. In Spain and Algeria, they start nesting in February in March, in Crimea in early March, in northwestern India in February or April, in northeastern India in January, and in Turkestan in January. They breed in loose colonies, with nests rarely being found in the same tree or rock formation, unlike other Old World vultures which often nest in tight-knit colonies. In Spain, nests have been found from to apart from each other. The cinereous vulture breeds in high mountains and large forests, nesting in trees or occasionally on cliff ledges. The breeding season lasts from February until September or October. The most common display consists of synchronous flight movements by pairs. However, flight play between pairs and juveniles is not unusual, with the large birds interlocking talons and spiraling down through the sky. The birds use sticks and twigs as building materials, and males and females cooperate in all matters of rearing the young. The huge nest is across and deep. The nest increases in size as a pair uses it repeatedly over the years and often comes to be decorated with dung and animal skins. The nests can range up to high in a large tree such as an oak, juniper, wild pear, almond or pine trees. Most nesting trees are found along cliffs. In a few cases, cinereous vultures have been recorded as nesting directly on cliffs. One cliff nest completely filled a ledge that was wide and in depth. The egg clutch typically only a single egg, though two may be exceptionally laid. The eggs have a white or pale buff base color are often overlaid with red, purplish or red-brown marks, being almost as spotted as the egg of a falcon. Eggs measure from in height and in width, with an average of . The incubation period ranges from 50 to 62 days, averaging 50–56 days, and hatching occurs in April or May in Europe. The young are covered in greyish-white to grey-brown colored down which becomes paler with age. The first flight feathers start growing from the same sockets as the down when the nestling is around 30 days old and completely cover the down by 60 days of age. The parents feed the young by regurgitation and an active nest reportedly becomes very foul and stinking. 
Weights of nestlings in Mongolia increased from as little as when they are around a month old in early June to being slightly more massive than their parents at up to nearly shortly before fledging in early autumn. The nesting success of cinereous vultures is relatively high, with around 90% of eggs successfully hatching and more than half of yearling birds known to survive to adulthood. They are devoted, active parents, with both members of a breeding pair protecting the nest and feeding the young in shifts via regurgitation. In Mongolia, Pallas's cat (Otocolobus manul) and the common raven (Corvus corax) are considered potential predators of eggs in potentially both tree and cliff nests. Gray wolves (Canis lupus) and foxes are also mentioned as potential nest predators. There have been witnessed accounts of bearded vultures (Gypaetus barbatus) and Spanish imperial eagles (Aquila adalberti) attempting to kill nestlings, but in both cases they were chased off by the parents. There is a single case of a Spanish imperial eagle attacking and killing a cinereous vulture in an act of defense of its own nest in Spain. Golden eagles and Eurasian eagle-owls may rarely attempt to dispatch an older nestling or even adults in an ambush, but the species is not verified prey for either and it would be a rare event in all likelihood if it does occur. This species may live for up to 39 years, though 20 years or less is probably more common, with no regular predators of adults other than man. Feeding Like all vultures, the cinereous vulture eats mostly carrion. The cinereous vulture feeds on carrion of almost any type, from the largest mammals available to fish and reptiles. In Tibet, commonly eaten carcasses can include both wild and domestic yaks (Bos mutus and Bos grunniens), Bharal, Tibetan gazelles (Pseudois nayaur), kiangs (Equus kiang), woolly hares (Lepus oiostolus), Himalayan marmots (Marmota himalayana), domestic sheep (Ovis aries), and even humans, mainly those at their celestial burial grounds. Reportedly in Mongolia, Tarbagan marmots (Marmota sibirica) comprised the largest part of the diet, although that species is now endangered as it is preferred in the diet of local people, wild prey ranging from corsac fox (Vulpes corsac) to Argali (Ovis ammon) may be eaten additionally in Mongolia. Historically, cinereous vultures in the Iberian Peninsula fed mostly on European rabbit (Oryctolagus cuniculus) carcasses, but since viral hemorrhagic pneumonia (VHP) devastated the once abundant rabbit population there, the vultures now rely on the carrion of domestic sheep, supplemented by pigs (Sus scrofa domesticus) and deer. In Turkey, the dietary preferences were argali (Ovis ammon) (92 carrion items), wild boar (Sus scrofa) (53 items), chickens (Gallus gallus domesticus) (27 items), gray wolves (13 items) and red foxes (Vulpes vulpes) (13 items). Unusually, a large amount of plant material was found in pellets from Turkey, especially pine cones. Among the vultures in its range, the cinereous is best equipped to tear open tough carcass skins thanks to its powerful bill. It can even break apart bones, such as ribs, to access the flesh of large animals. It is dominant over other scavengers in its range, even over other large vultures such as Gyps vultures, bearded vultures or fierce ground predators such as foxes. While the noisy Gyps vultures squawk and fly around, the often silent cinereous vultures will keep them well at bay until they are satisfied and have had their own fill. 
A recent series of photos shows a cinereous vulture attacking a Himalayan griffon in flight for unknown reasons, although the griffon was not seriously injured. Cinereous vultures frequently bully and dominate steppe eagles (Aquila nipalensis) when the two species are attracted to the same prey and carrion while wintering in Asia. A rare successful act of kleptoparasitism on a cinereous vulture was filmed in Korea when a Steller's sea eagle (Haliaeetus pelagicus) stole food from the vulture. Its closest living relative is probably the lappet-faced vulture, which takes live prey on occasion. Occasionally, the cinereous vulture has also been recorded preying on live animals. Live animals reportedly taken by cinereous vultures include calves of yaks and domestic cattle (Bos primigenius taurus), piglets, domestic lambs and puppies (Canis lupus familiaris), foxes, lambs of wild sheep, together with nestlings and fledglings of large birds such as geese, swans and pheasants, various rodents and rarely amphibians and reptiles. This species has hunted tortoises (which the vultures are likely to kill by carrying in flight and dropping on rocks to penetrate the shell; cf. the death of Aeschylus) and lizards. Although rarely observed in the act of killing ungulates, cinereous vultures have been recorded flying low around herds and feeding on recently dead wild ungulates that they are believed to have killed themselves. Mainly neonatal lambs or calves are hunted, especially sickly ones. Although not normally thought to be a threat to healthy domestic lambs, rare predation on apparently healthy lambs has been confirmed. Species believed to be hunted by cinereous vultures have included argali, saiga antelope (Saiga tatarica), Mongolian gazelle (Procapra gutturosa) and Tibetan antelope (Pantholops hodgsonii). Status and conservation The cinereous vulture has declined over most of its range in the last 200 years, in part due to poisoning by eating poisoned bait put out to kill dogs and other predators, and to higher hygiene standards reducing the amount of available carrion; it is currently listed as Near Threatened. Vultures of all species, although not the target of poisoning operations, may be shot on sight by locals. Trapping and hunting of cinereous vultures is particularly prevalent in China and Russia, although poaching for trophy hunting is also known in Armenia, and probably in other countries of the Caucasus. Perhaps an even greater threat to this desolation-loving species is development and habitat destruction. Nests, often fairly low in the main fork of a tree, are relatively easy to access and thus have historically been compromised by egg and firewood collectors. The decline has been greatest in the western half of the range, with extinction in many European countries (France, Italy, Austria, Poland, Slovakia, Albania, Moldova, Romania) and across its entire breeding range in northwest Africa (Morocco and Algeria). They no longer nest in Israel. Turkey holds the second largest population of this species in the Western Palearctic. Despite the recent demographic bottleneck, this population has maintained moderate levels of genetic diversity, with no significant genetic structuring, indicating that this is a single meta-population connected by frequent dispersal. More recently, protection and deliberate feeding schemes have allowed some local recoveries in numbers, particularly in Spain, where numbers increased to about 1,000 pairs by 1992 after an earlier decline to 200 pairs in 1970. 
This population has now spread its breeding grounds into Portugal. Elsewhere in Europe, very small but increasing numbers breed in Bulgaria and Greece, and a re-introduction scheme is under way in France. Trends in the small populations in Ukraine (Crimea) and European Russia, and in Asian populations, are not well recorded. In the former USSR, it is still threatened by illegal capture for zoos, and in Tibet by rodenticides. It is a regular winter visitor in small numbers around the coastal areas of Pakistan. As of the turn of the 21st century, the worldwide population of cinereous vultures was estimated at 4,500–5,000 individuals. The most recent global population estimate for the cinereous vulture, according to BirdLife International (2017), is 7,800–10,500 pairs, roughly equating to 15,600–21,000 mature individuals. This consists of 2,300–2,500 pairs in Europe (2004) and 5,500–8,000 pairs in Asia. Culture and mythology The Hebrew word for "eagle" is also used for the cinereous vulture. As such, Biblical passages alluding to eagles might actually be referring to this or other vultures.
Biology and health sciences
Accipitrimorphae
Animals
201657
https://en.wikipedia.org/wiki/Scientific%20evidence
Scientific evidence
Scientific evidence is evidence that serves to either support or counter a scientific theory or hypothesis, although scientists also use evidence in other ways, such as when applying theories to practical problems. Such evidence is expected to be empirical evidence and interpretable in accordance with the scientific method. Standards for scientific evidence vary according to the field of inquiry, but the strength of scientific evidence is generally based on the results of statistical analysis and the strength of scientific controls. Principles of inference A person's assumptions or beliefs about the relationship between observations and a hypothesis will affect whether that person takes the observations as evidence. These assumptions or beliefs will also affect how a person utilizes the observations as evidence. For example, the Earth's apparent lack of motion may be taken as evidence for a geocentric cosmology. However, after sufficient evidence is presented for heliocentric cosmology and the apparent lack of motion is explained, the initial observation is strongly discounted as evidence. When rational observers have different background beliefs, they may draw different conclusions from the same scientific evidence. For example, Priestley, working with phlogiston theory, explained his observations about the decomposition of mercuric oxide using phlogiston. In contrast, Lavoisier, developing the theory of elements, explained the same observations with reference to oxygen. A causal relationship between the observations and hypothesis does not exist to cause the observation to be taken as evidence, but rather the causal relationship is provided by the person seeking to establish observations as evidence. A more formal method to characterize the effect of background beliefs is Bayesian inference. In Bayesian inference, beliefs are expressed as percentages indicating one's confidence in them. One starts from an initial probability (a prior), and then updates that probability using Bayes' theorem after observing evidence. As a result, two independent observers of the same event will rationally arrive at different conclusions if their priors (previous observations that are also relevant to the conclusion) differ. The importance of background beliefs in the determination of what observations are evidence can be illustrated using deductive reasoning, such as syllogisms. If either of the propositions is not accepted as true, the conclusion will not be accepted either. Utility of scientific evidence Philosophers, such as Karl R. Popper, have provided influential theories of the scientific method within which scientific evidence plays a central role. In summary, Popper provides that a scientist creatively develops a theory that may be falsified by testing the theory against evidence or known facts. Popper's theory presents an asymmetry in that evidence can prove a theory wrong, by establishing facts that are inconsistent with the theory. In contrast, evidence cannot prove a theory correct because other evidence, yet to be discovered, may exist that is inconsistent with the theory. Philosophical versus scientific views In the 20th century, many philosophers investigated the logical relationship between evidence statements and hypotheses, whereas scientists tended to focus on how the data used for statistical inference are generated. 
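As a minimal illustration of the Bayesian updating described above, the following Python sketch (with numbers invented purely for the example) shows how two observers who assign different priors to the same hypothesis rationally reach different posteriors from the same evidence.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Two observers hold different priors for the same hypothesis H.
priors = {"observer A": 0.10, "observer B": 0.70}

# Both agree on how likely the shared observation E is under H and under not-H.
p_e_given_h, p_e_given_not_h = 0.9, 0.2

for name, prior in priors.items():
    posterior = bayes_update(prior, p_e_given_h, p_e_given_not_h)
    print(f"{name}: prior {prior:.2f} -> posterior {posterior:.2f}")

Running the sketch, observer A moves from 0.10 to about 0.33 and observer B from 0.70 to about 0.91: the same evidence raises both degrees of confidence, yet the differing priors still lead to different conclusions.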
But according to philosopher Deborah Mayo, by the end of the 20th century philosophers had come to understand that "there are key features of scientific practice that are overlooked or misdescribed by all such logical accounts of evidence, whether hypothetico-deductive, Bayesian, or instantiationist". There were a variety of 20th-century philosophical approaches to decide whether an observation may be considered evidence; many of these focused on the relationship between the evidence and the hypothesis. In the 1950s, Rudolf Carnap recommended distinguishing such approaches into three categories: classificatory (whether the evidence confirms the hypothesis), comparative (whether the evidence supports a first hypothesis more than an alternative hypothesis) or quantitative (the degree to which the evidence supports a hypothesis). A 1983 anthology edited by Peter Achinstein provided a concise presentation by prominent philosophers on scientific evidence, including Carl Hempel (on the logic of confirmation), R. B. Braithwaite (on the structure of a scientific system), Norwood Russell Hanson (on the logic of discovery), Nelson Goodman (of grue fame, on a theory of projection), Rudolf Carnap (on the concept of confirming evidence), Wesley C. Salmon (on confirmation and relevance), and Clark Glymour (on relevant evidence). In 1990, William Bechtel provided four factors (clarity of the data, replication by others, consistency with results arrived at by alternative methods, and consistency with plausible theories of mechanisms) that biologists used to settle controversies about procedures and reliability of evidence. In 2001, Achinstein published his own book on the subject titled The Book of Evidence, in which, among other topics, he distinguished between four concepts of evidence: epistemic-situation evidence (evidence relative to a given epistemic situation), subjective evidence (considered to be evidence by a particular person at a particular time), veridical evidence (a good reason to believe that a hypothesis is true), and potential evidence (a good reason to believe that a hypothesis is highly probable). Achinstein defined all his concepts of evidence in terms of potential evidence, since any other kind of evidence must at least be potential evidence, and he argued that scientists mainly seek veridical evidence but they also use the other concepts of evidence, which rely on a distinctive concept of probability, and Achinstein contrasted this concept of probability with previous probabilistic theories of evidence such as Bayesian, Carnapian, and frequentist. Simplicity is one common philosophical criterion for scientific theories. Based on the philosophical assumption of the strong Church-Turing thesis, a mathematical criterion for evaluation of evidence has been conjectured, with the criterion having a resemblance to the idea of Occam's razor that the simplest comprehensive description of the evidence is most likely correct. It states formally, "The ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized." However, some philosophers (including Richard Boyd, Mario Bunge, John D. Norton, and Elliott Sober) have adopted a skeptical or deflationary view of the role of simplicity in science, arguing in various ways that its importance has been overemphasized. 
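On one common reading of the quoted ideal principle (offered here as an interpretive sketch, not a quotation from the source), the criterion amounts to choosing the model M that minimizes the combined code length of the model and of the data given the model:

\min_M \left[ -\log m(M) - \log P(D \mid M) \right]

where m(M) denotes the algorithmic (universal) prior probability of the model M and P(D | M) the probability of the observed data D given M. Minimizing the sum of these negative logarithms corresponds to the Occam's-razor idea, mentioned above, that the simplest comprehensive description of the evidence is most likely correct.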
Emphasis on hypothesis testing as the essence of science is prevalent among both scientists and philosophers. However, philosophers have noted that testing hypotheses by confronting them with new evidence does not account for all the ways that scientists use evidence. For example, when Geiger and Marsden scattered alpha particles through thin gold foil, the resulting data enabled their experimental adviser, Ernest Rutherford, to very accurately calculate the mass and size of an atomic nucleus for the first time. Rutherford used the data to develop a new atomic model, not only to test an existing hypothesis; such use of evidence to produce new hypotheses is sometimes called abduction (following C. S. Peirce). Social-science methodologist Donald T. Campbell, who emphasized hypothesis testing throughout his career, later increasingly stressed that the essence of science is "not experimentation per se" but instead the iterative competition of "plausible rival hypotheses", a process that at any given phase may start from evidence or may start from hypothesis. Other scientists and philosophers have emphasized the central role of questions and problems in the use of data and hypotheses. Concept of scientific proof While the phrase "scientific proof" is often used in the popular media, many scientists and philosophers have argued that there is really no such thing as infallible proof. For example, Karl Popper once wrote that "In the empirical sciences, which alone can furnish us with information about the world we live in, proofs do not occur, if we mean by 'proof' an argument which establishes once and for ever the truth of a theory." However, in contrast to the ideal of infallible proof, in practice theories may be said to be proved according to some standard of proof used in a given inquiry. In this limited sense, proof is the high degree of acceptance of a theory following a process of inquiry and critical evaluation according to the standards of a scientific community.
Physical sciences
Science basics
Basics and measurement
201676
https://en.wikipedia.org/wiki/Stationery
Stationery
Stationery refers to writing materials, including cut paper, envelopes, continuous form paper, and other office supplies. Stationery usually specifies materials to be written on by hand (e.g., letter paper) or by equipment such as computer printers. History of stationery Originally, the term 'stationery' referred to all products sold by a stationer, whose name indicated that his book shop was on a fixed spot. This was usually somewhere near a university, and permanent, while medieval trading was mainly carried on by itinerant peddlers (including chapmen, who sold books) and others (such as farmers and craftsmen) at markets and fairs. It was a unique term used between the 13th and 15th centuries in the manuscript culture. Stationers' shops were places where books were bound, copied, and published. These shops often loaned books to nearby university students for a fee. The books were loaned out in sections, allowing students to study or copy them, and the only way to get the next part of the book was to return the previous section. In some cases, stationers' shops became the preferred choice for scholars to find books, instead of university libraries due to stationers' shops' wider collection of books. The Stationers' Company formerly held a monopoly over the publishing industry in England and was responsible for copyright regulations. Uses of stationery Printing Printing is the process of applying a colouring agent to a surface to create a body of text or illustrations. This is often achieved through printing technology, but can be done by hand using more traditional methods. The earliest form of printing is wood blocking. Letterpress Letterpress is a process of printing several identical copies that presses words and designs onto the page. The print may be inked or blind, but is typically done in a single color. Motifs or designs may be added as many letterpress machines use movable plates that must be hand-set. Letterpress printing remained the primary method of printing until the 19th century. Single documents When a single document needs to be produced, it may be handwritten or printed, typically by a computer printer. Several copies of one original paper can be produced by some printers using multipart stationery. Typing with a typewriter is largely obsolete, having been superseded for most purposes by preparing a document with a word processor and then printing it. Thermographic Thermographic printing is a process that involves several stages but can be implemented in a low-cost manufacturing process. The process involves printing the desired designs or text with an ink that remains wet, rather than drying on contact with the paper. The paper is then dusted with a powdered polymer that adheres to the ink. The paper is vacuumed or agitated, mechanically or by hand, to remove excess powder, and then heated to near combustion. The wet ink and polymer bond and dry, resulting in a raised print surface similar to the result of an engraving process. Embossing Embossing is a printing technique used to create raised surfaces in the converted paper stock. The process relies upon mated dies that press the paper into a shape that can be observed on both the front and back surfaces. Two things are required during the process of embossing: a die and a stock. The result is a three-dimensional (3D) effect that emphasizes a particular area of the design. Engraving Engraving is a process that requires a design to be cut into a plate made of relatively hard material. 
The metal plate is first polished so that the design cut can be easily visible to the person. This technology has a long history and requires a significant amount of skill, experience, and expertise. The finished plate is usually covered in ink, and then the ink is removed from all of the un-etched portions of the plate. The plate is then pressed into paper under substantial pressure. The result is a design that is slightly raised on the surface of the paper and covered in ink. Due to the cost of the process and expertise required, many consumers opt for thermographic printing, a process that results in a similarly raised print surface, but through different means at less cost. Classifications Business Stationery: Business card, letterhead, invoices, receipts Ink and toner: Dot matrix printer's ink ribbon Inkjet cartridge Laser printer toner Photocopier toner Filing and storage: Expandable file File folder Hanging file folder Index cards and files Two-pocket portfolios Mailing and shipping supplies: Envelope Paper and pad: Notebooks, wirebound notebook, writing pads, college ruled paper, wide-ruled paper, Office paper: dot matrix paper, inkjet printer paper, laser printer paper, photocopy paper. Loose leaves School supplies Many shops that sell stationery also sell other school supplies for students in primary and secondary education, including pocket calculators, display boards, compasses and protractors, set squares, lunch boxes, and related items. Major brands, manufacturers and retailers of stationery This section contains an incomplete list of famous brands, manufacturers and retailers of stationery worldwide. In US and Canada, Office Depot and Staples are two major retailers of stationery. Notable stationery brands in Europe include LAMY, MOLESKINE, Staedtler, and Faber-Castell. In Japan, major manufacturers of stationery include Kokuyo, Maruman, Lihit Lab, King Jim, MUJI and Tombow. MUJI also has about 800 retail stores worldwide. In mainland China, 晨光文具 (Chén guāng wén jù) is a major manufacturer and retailer of stationery, and MUJI is a popular retailer in larger cities.
Technology
Media and communication: Basics
null
201689
https://en.wikipedia.org/wiki/Polypropylene
Polypropylene
Polypropylene (PP), also known as polypropene, is a thermoplastic polymer used in a wide variety of applications. It is produced via chain-growth polymerization from the monomer propylene. Polypropylene belongs to the group of polyolefins and is partially crystalline and non-polar. Its properties are similar to polyethylene, but it is slightly harder and more heat-resistant. It is a white, mechanically rugged material and has a high chemical resistance. Polypropylene is the second-most widely produced commodity plastic (after polyethylene). History Phillips Petroleum chemists J. Paul Hogan and Robert Banks first demonstrated the polymerization of propylene in 1951. The stereoselective polymerization to the isotactic polymer was discovered by Giulio Natta and Karl Rehn in March 1954. This pioneering discovery led to large-scale commercial production of isotactic polypropylene by the Italian firm Montecatini from 1957 onwards. Syndiotactic polypropylene was also first synthesized by Natta. Interest in polypropylene development is ongoing to the present. For example, making polypropylene from bio-based resources is a topic of interest in the 21st century. Chemical and physical properties Polypropylene is in many aspects similar to polyethylene, especially in solution behavior and electrical properties. The methyl group improves mechanical properties and thermal resistance, although the chemical resistance decreases. The properties of polypropylene depend on the molecular weight and molecular weight distribution, crystallinity, type and proportion of comonomer (if used) and the isotacticity. In isotactic polypropylene, for example, the methyl groups are oriented on one side of the carbon backbone. This arrangement creates a greater degree of crystallinity and results in a stiffer material that is more resistant to creep than both atactic polypropylene and polyethylene. Mechanical properties The density of PP is between 0.895 and 0.93 g/cm3. Therefore, PP is the commodity plastic with the lowest density. With its lower density, molded parts with lower weight and more parts from a given mass of plastic can be produced. Unlike polyethylene, crystalline and amorphous regions differ only slightly in their density. However, the density of polypropylene can change significantly with fillers. The Young's modulus of PP is between 1300 and 1800 N/mm². Polypropylene is normally tough and flexible, especially when copolymerized with ethylene. This allows polypropylene to be used as an engineering plastic, competing with materials such as acrylonitrile butadiene styrene (ABS). Polypropylene is reasonably economical. Polypropylene has good resistance to fatigue. Thermal properties The melting point of polypropylene occurs in a range, so the melting point is determined by finding the highest temperature of a differential scanning calorimetry chart. Perfectly isotactic PP has a melting point of . Commercial isotactic PP has a melting point that ranges from , depending on atactic material and crystallinity. Syndiotactic PP with a crystallinity of 30% has a melting point of . Below 0 °C, PP becomes brittle. The thermal expansion of PP is significant, but somewhat less than that of polyethylene. Chemical properties Propylene molecules prefer to join together "head-to-tail", giving a chain with methyl groups on every other carbon, but some randomness occurs. Polypropylene at room temperature is resistant to fats and almost all organic solvents, apart from strong oxidants. 
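As a small worked example of the weight advantage implied by the density figures given above for the mechanical properties, the Python sketch below compares the mass of an identical molded part in PP and in high-density polyethylene; the part volume and the HDPE comparison density are assumptions chosen only for illustration.

# Illustrative densities in g/cm^3: the PP value lies within the 0.895-0.93
# range given above, while the HDPE value is an assumed typical figure.
density_pp = 0.905
density_hdpe = 0.95

part_volume_cm3 = 250.0  # hypothetical molded part volume

mass_pp = density_pp * part_volume_cm3
mass_hdpe = density_hdpe * part_volume_cm3

print(f"PP part:   {mass_pp:.1f} g")
print(f"HDPE part: {mass_hdpe:.1f} g")
print(f"Weight saving: {100 * (1 - mass_pp / mass_hdpe):.1f}%")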
Non-oxidizing acids and bases can be stored in containers made of PP. At elevated temperature, PP can be dissolved in nonpolar solvents such as xylene, tetralin and decalin. Due to the tertiary carbon atom, PP is chemically less resistant than PE (see Markovnikov rule). Most commercial polypropylene is isotactic and has an intermediate level of crystallinity between that of low-density polyethylene (LDPE) and high-density polyethylene (HDPE). Isotactic and atactic polypropylene are both soluble in p-xylene at 140 °C; the isotactic fraction precipitates when the solution is cooled to 25 °C, while the atactic portion remains soluble in p-xylene. The melt flow rate (MFR) or melt flow index (MFI) is a measure of the molecular weight of polypropylene. The measure helps to determine how easily the molten raw material will flow during processing. Polypropylene with higher MFR will fill the plastic mold more easily during the injection or blow-molding production process. As the melt flow increases, however, some physical properties, like impact strength, will decrease. There are three general types of polypropylene: homopolymer, random copolymer, and block copolymer. The comonomer used is typically ethylene. Ethylene-propylene rubber or EPDM added to polypropylene homopolymer increases its low temperature impact strength. Randomly polymerized ethylene monomer added to polypropylene homopolymer decreases the polymer crystallinity, lowers the melting point and makes the polymer more transparent. Molecular structure – tacticity Polypropylene can be categorized as atactic polypropylene (aPP), syndiotactic polypropylene (sPP) and isotactic polypropylene (iPP). In atactic polypropylene, the methyl group (-CH3) is randomly aligned; it alternates in syndiotactic polypropylene and is uniformly oriented in isotactic polypropylene. This has an impact on the crystallinity (amorphous or semi-crystalline) and the thermal properties (expressed as glass transition point Tg and melting point Tm). For polypropylene, the term tacticity describes how the methyl group is oriented in the polymer chain. Commercial polypropylene is usually isotactic. This article therefore always refers to isotactic polypropylene, unless stated otherwise. The tacticity is usually indicated in percent, using the isotactic index (according to DIN 16774). The index is measured by determining the fraction of the polymer insoluble in boiling heptane. Commercially available polypropylenes usually have an isotactic index between 85 and 95%. The tacticity affects the polymer's physical properties. As the methyl group in isotactic polypropylene is consistently located on the same side, it forces the macromolecule into a helical shape, as also found in starch. An isotactic structure leads to a semi-crystalline polymer. The higher the isotacticity (the isotactic fraction), the greater the crystallinity, and thus also the softening point, rigidity, e-modulus and hardness. Atactic polypropylene, on the other hand, lacks any regularity, which prevents it from crystallizing, thereby creating an amorphous material. Crystal structure of polypropylene Isotactic polypropylene has a high degree of crystallinity, in industrial products 30–60%. Syndiotactic polypropylene is slightly less crystalline, while atactic PP is amorphous (not crystalline). Isotactic polypropylene (iPP) Isotactic polypropylene can exist in various crystalline modifications which differ by the molecular arrangement of the polymer chains. 
The crystalline modifications are categorized into the α-, β- and γ-modification as well as mesomorphic (smectic) forms. The α-modification is predominant in iPP. Such crystals are built from lamellae in the form of folded chains. A characteristic anomaly is that the lamellae are arranged in the so-called "cross-hatched" structure. The melting point of α-crystalline regions is given as 185 to 220 °C, the density as 0.936 to 0.946 g·cm−3. The β-modification is in comparison somewhat less ordered, as a result of which it forms faster and has a lower melting point of 170 to 200 °C. The formation of the β-modification can be promoted by nucleating agents, suitable temperatures and shear stress. The γ-modification is hardly formed under the conditions used in industry and is poorly understood. The mesomorphic modification, however, occurs often in industrial processing, since the plastic is usually cooled quickly. The degree of order of the mesomorphic phase ranges between the crystalline and the amorphous phase; its density is comparatively low, at 0.916 g·cm−3. The mesomorphic phase is considered the cause of the transparency in rapidly cooled films (due to low order and small crystallites). Syndiotactic polypropylene (sPP) Syndiotactic polypropylene was discovered much later than isotactic PP and could only be prepared by using metallocene catalysts. Syndiotactic PP has a lower melting point of 161 to 186 °C, depending on the degree of tacticity. Atactic polypropylene (aPP) Atactic polypropylene is amorphous and therefore has no crystal structure. Due to its lack of crystallinity, it is readily soluble even at moderate temperatures, which allows it to be separated as a by-product from isotactic polypropylene by extraction. However, the aPP obtained this way is not completely amorphous but can still contain 15% crystalline parts. Atactic polypropylene can also be produced selectively using metallocene catalysts; atactic polypropylene produced this way has a considerably higher molecular weight. Atactic polypropylene has lower density, melting point and softening temperature than the crystalline types and is tacky and rubber-like at room temperature. It is a colorless, cloudy material and can be used between −15 and +120 °C. Atactic polypropylene is used as a sealant, as an insulating material for automobiles and as an additive to bitumen. Copolymers Polypropylene copolymers are in use as well. A particularly important one is polypropylene random copolymer (PPR or PP-R), a random copolymer with polyethylene used for plastic pipework. PP-RCT Polypropylene random crystallinity temperature (PP-RCT), also used for plastic pipework, is a new form of this plastic. It achieves higher strength at high temperature by β-crystallization. Degradation Polypropylene is liable to chain degradation from exposure to temperatures above 100 °C. Oxidation usually occurs at the tertiary carbon centers, leading to chain breaking via reaction with oxygen. In external applications, degradation is evidenced by cracks and crazing. It may be protected by the use of various polymer stabilizers, including UV-absorbing additives and anti-oxidants such as phosphites (e.g. tris(2,4-di-tert-butylphenyl)phosphite) and hindered phenols, which prevent polymer degradation. Microbial communities isolated from soil samples mixed with starch have been shown to be capable of degrading polypropylene. Polypropylene has been reported to degrade while in the human body as implantable mesh devices. 
The degraded material forms a tree bark-like layer at the surface of mesh fibers. Optical properties PP can be made translucent when uncolored but is not as readily made transparent as polystyrene, acrylic, or certain other plastics. It is often opaque or colored using pigments. Production Polypropylene is produced by the chain-growth polymerization of propene: The industrial production processes can be grouped into gas phase polymerization, bulk polymerization and slurry polymerization. All state-of-the-art processes use either gas-phase or bulk reactor systems. In gas-phase and slurry-reactors, the polymer is formed around heterogeneous catalyst particles. The gas-phase polymerization is carried out in a fluidized bed reactor, propene is passed over a bed containing the heterogeneous (solid) catalyst and the formed polymer is separated as a fine powder and then converted into pellets. Unreacted gas is recycled and fed back into the reactor. In bulk polymerization, liquid propene acts as a solvent to prevent the precipitation of the polymer. The polymerization proceeds at 60 to 80 °C and 30–40 atm are applied to keep the propene in the liquid state. For the bulk polymerization, typically loop reactors are applied. The bulk polymerization is limited to a maximum of 5% ethene as comonomer due to a limited solubility of the polymer in the liquid propene. In the slurry polymerization, typically C4–C6 alkanes (butane, pentane or hexane) are utilized as inert diluent to suspend the growing polymer particles. Propene is introduced into the mixture as a gas. Catalysts The properties of PP are strongly affected by its tacticity, the orientation of the methyl groups () relative to the methyl groups in neighboring monomer units. A Ziegler–Natta catalyst is able to restrict linking of monomer molecules to a specific orientation, either isotactic, when all methyl groups are positioned at the same side with respect to the backbone of the polymer chain, or syndiotactic, when the positions of the methyl groups alternate. Commercially available isotactic polypropylene is made with two types of Ziegler-Natta catalysts. The first group of the catalysts encompasses solid (mostly supported) catalysts and certain types of soluble metallocene catalysts. Such isotactic macromolecules coil into a helical shape; these helices then line up next to one another to form the crystals that give commercial isotactic polypropylene many of its desirable properties. Modern supported Ziegler-Natta catalysts developed for the polymerization of propylene and other 1-alkenes to isotactic polymers usually use as an active ingredient and as a support. The catalysts also contain organic modifiers, either aromatic acid esters and diesters or ethers. These catalysts are activated with special co-catalysts containing an organoaluminium compound such as Al(C2H5)3 and the second type of a modifier. The catalysts are differentiated depending on the procedure used for fashioning catalyst particles from MgCl2 and depending on the type of organic modifiers employed during catalyst preparation and use in polymerization reactions. Two most important technological characteristics of all the supported catalysts are high productivity and a high fraction of the crystalline isotactic polymer they produce at 70–80 °C under standard polymerization conditions. Commercial synthesis of isotactic polypropylene is usually carried out either in the medium of liquid propylene or in gas-phase reactors. 
Commercial synthesis of syndiotactic polypropylene is carried out with the use of a special class of metallocene catalysts. They employ bridged bis-metallocene complexes of the type bridge-(Cp1)(Cp2)ZrCl2 where the first Cp ligand is the cyclopentadienyl group, the second Cp ligand is the fluorenyl group, and the bridge between the two Cp ligands is -CH2-CH2-, >SiMe2, or >SiPh2. These complexes are converted to polymerization catalysts by activating them with a special organoaluminium co-catalyst, methylaluminoxane (MAO). Atactic polypropylene is an amorphous rubbery material. It can be produced commercially either with a special type of supported Ziegler-Natta catalyst or with some metallocene catalysts. Manufacturing from polypropylene Melting process of polypropylene can be achieved via extrusion and molding. Common extrusion methods include production of melt-blown and spun-bond fibers to form long rolls for future conversion into a wide range of useful products, such as face masks, filters, diapers and wipes. The most common shaping technique is injection molding, which is used for parts such as cups, cutlery, vials, caps, containers, housewares, and automotive parts such as batteries. The related techniques of blow molding and injection-stretch blow molding are also used, which involve both extrusion and molding. The large number of end-use applications for polypropylene are often possible because of the ability to tailor grades with specific molecular properties and additives during its manufacture. For example, antistatic additives can be added to help polypropylene surfaces resist dust and dirt. Many physical finishing techniques can also be used on polypropylene, such as machining. Surface treatments can be applied to polypropylene parts in order to promote adhesion of printing ink and paints. Expanded Polypropylene (EPP) has been produced through both solid and melt state processing. EPP is manufactured using melt processing with either chemical or physical blowing agents. Expansion of PP in solid state, due to its highly crystalline structure, has not been successful. In this regard, two novel strategies were developed for expansion of PP. It was observed that PP can be expanded to make EPP through controlling its crystalline structure or through blending with other polymers. Biaxially oriented polypropylene (BOPP) When polypropylene film is extruded and stretched in both the machine direction and across machine direction it is called biaxially oriented polypropylene. Two methods are widely used for producing BOPP films, namely, a bi-directional stenter process or a double-bubble blown film extrusion process. Biaxial orientation increases strength and clarity. BOPP is widely used as a packaging material for packaging products such as snack foods, fresh produce and confectionery. It is easy to coat, print and laminate to give the required appearance and properties for use as a packaging material. This process is normally called converting. It is normally produced in large rolls which are slit on slitting machines into smaller rolls for use on packaging machines. BOPP is also used for stickers and labels in addition to OPP. It is non-reactive, which makes BOPP suitable for safe use in the pharmaceutical and food industry. It is one of the most important commercial polyolefin films. BOPP films are available in different thicknesses and widths. They are transparent and flexible. 
Applications As polypropylene is resistant to fatigue, most plastic living hinges, such as those on flip-top bottles, are made from this material. However, it is important to ensure that chain molecules are oriented across the hinge to maximise strength. Polypropylene is used in the manufacturing of piping systems, both ones concerned with high purity and ones designed for strength and rigidity (e.g., those intended for use in potable plumbing, hydronic heating and cooling, and reclaimed water). This material is often chosen for its resistance to corrosion and chemical leaching, its resilience against most forms of physical damage, including impact and freezing, its environmental benefits, and its ability to be joined by heat fusion rather than gluing. Many plastic items for medical or laboratory use can be made from polypropylene because it can withstand the heat in an autoclave. Its heat resistance also enables it to be used as the manufacturing material of consumer-grade kettles. Food containers made from it will not melt in the dishwasher, and do not melt during industrial hot filling processes. For this reason, most plastic tubs for dairy products are polypropylene sealed with aluminum foil (both heat-resistant materials). After the product has cooled, the tubs are often given lids made of a less heat-resistant material, such as LDPE or polystyrene. Such containers provide a good hands-on example of the difference in modulus, since the rubbery (softer, more flexible) feeling of LDPE with respect to polypropylene of the same thickness is readily apparent. Rugged, translucent, reusable plastic containers made in a wide variety of shapes and sizes for consumers from various companies such as Rubbermaid and Sterilite are commonly made of polypropylene, although the lids are often made of somewhat more flexible LDPE so they can snap onto the container to close it. Polypropylene can also be made into disposable bottles to contain liquid, powdered, or similar consumer products, although HDPE and polyethylene terephthalate are commonly also used to make bottles. Plastic pails, car batteries, wastebaskets, pharmacy prescription bottles, cooler containers, dishes and pitchers are often made of polypropylene or HDPE, both of which commonly have rather similar appearance, feel, and properties at ambient temperature. An abundance of medical devices are made from PP. A common application for polypropylene is as biaxially oriented polypropylene (BOPP). These BOPP sheets are used to make a wide variety of materials including clear bags. When polypropylene is biaxially oriented, it becomes crystal clear and serves as an excellent packaging material for artistic and retail products. Polypropylene, highly colorfast, is widely used in manufacturing carpets, rugs and mats to be used at home. Polypropylene is widely used in ropes, distinctive because they are light enough to float in water. For equal mass and construction, polypropylene rope is similar in strength to polyester rope. Polypropylene costs less than most other synthetic fibers. Polypropylene is also used as an alternative to polyvinyl chloride (PVC) as insulation for electrical cables for LSZH cable in low-ventilation environments, primarily tunnels. This is because it emits less smoke and no toxic halogens, which may lead to production of acid in high-temperature conditions. Polypropylene is also used in particular roofing membranes as the waterproofing top layer of single-ply systems as opposed to modified-bit systems. 
Polypropylene is most commonly used for plastic moldings, wherein it is injected into a mold while molten, forming complex shapes at relatively low cost and high volume; examples include bottle tops, bottles, and fittings. It can also be produced in sheet form, widely used for the production of stationery folders, packaging, and storage boxes. The wide color range, durability, low cost, and resistance to dirt make it ideal as a protective cover for papers and other materials. It is used in Rubik's Cube stickers because of these characteristics. The availability of sheet polypropylene has provided an opportunity for the use of the material by designers. The light-weight, durable, and colorful plastic makes an ideal medium for the creation of light shades, and a number of designs have been developed using interlocking sections to create elaborate designs. Polypropylene fibres are used as a concrete additive to increase strength and reduce cracking and spalling. In some areas susceptible to earthquakes (e.g., California), PP fibers are added with soils to improve the soil's strength and damping when constructing the foundation of structures such as buildings, bridges, etc. Clothing Polypropylene is a major polymer used in nonwovens, with over 50% used for diapers or sanitary products where it is treated to absorb water (hydrophilic) rather than naturally repelling water (hydrophobic). Other non-woven uses include filters for air, gas, and liquids in which the fibers can be formed into sheets or webs that can be pleated to form cartridges or layers that filter in various efficiencies in the 0.5 to 30 micrometre range. Such applications occur in houses as water filters or in air-conditioning-type filters. The high surface-area and naturally oleophilic polypropylene nonwovens are ideal absorbers of oil spills with the familiar floating barriers near oil spills on rivers. Polypropylene, or 'polypro', has been used for the fabrication of cold-weather base layers, such as long-sleeve shirts or long underwear. Polypropylene is also used in warm-weather clothing, in which it transports sweat away from the skin. Polyester has replaced polypropylene in these applications in the U.S. military, such as in the ECWCS. Although polypropylene clothes are not easily flammable, they can melt, which may result in severe burns if the wearer is involved in an explosion or fire of any kind. Polypropylene undergarments are known for retaining body odors which are then difficult to remove. The current generation of polyester does not have this disadvantage. Medical Its most common medical use is in the synthetic, nonabsorbable suture Prolene, manufactured by Ethicon Inc. Polypropylene has been used in hernia and pelvic organ prolapse repair operations to protect the body from new hernias in the same location. A small patch of the material is placed over the spot of the hernia, below the skin, and is painless and rarely, if ever, rejected by the body. However, a polypropylene mesh will erode the tissue surrounding it over the uncertain period from days to years. A notable application was as a transvaginal mesh, used to treat vaginal prolapse and concurrent urinary incontinence. 
Due to the above-mentioned propensity for polypropylene mesh to erode the tissue surrounding it, the FDA has issued several warnings on the use of polypropylene mesh medical kits for certain applications in pelvic organ prolapse, specifically when introduced in close proximity to the vaginal wall due to a continued increase in number of mesh-driven tissue erosions reported by patients over the past few years. On 3 January 2012, the FDA ordered 35 manufacturers of these mesh products to study the side effects of these devices. Due to the outbreak of the COVID-19 pandemic in 2020, the demand for PP has increased significantly because it is a vital raw material for producing meltblown fabric, which is in turn the raw material for producing facial masks. Recycling Most polypropylene recycling uses mechanical recycling, as for polyethylene: the material is heated to soften or melt it, and mechanically formed it into new products. As of 2015, less than 1% of polypropylene generated was recycled. Heating degrades the carbon backbone more severely than for polyethylene, breaking it into smaller organic molecules, because the methyl side group of PP is susceptible to thermo-oxidative and photo-oxidative degradation. Polypropylene has the number "5" as its resin identification code: Repairing PP objects can be joined with a two-part epoxy glue or using hot-glue guns. PP can be melted using a speed tip welding technique. With speed welding, the plastic welder, similar to a soldering iron in appearance and wattage, is fitted with a feed tube for the plastic weld rod. The speed tip heats the rod and the substrate, while at the same time it presses the molten weld rod into position. A bead of softened plastic is laid into the joint and the parts and weld rod fuse. With polypropylene, the melted welding rod must be "mixed" with the semi-melted base material being fabricated or repaired. A speed tip "gun" is essentially a soldering iron with a broad, flat tip that can be used to melt the weld joint and filler material to create a bond. Health concerns The advocacy organization Environmental Working Group classifies PP as of low hazard. PP is dope-dyed; no water is used in its dyeing, in contrast with cotton. Polypropylene was the most common microplastic fiber found in the olfactory bulbs in 8 of 15 deceased individuals in a study. Combustibility Like all organic compounds, polypropylene is combustible. The flash point of a typical composition is 260 °C; autoignition temperature is 388 °C.
Physical sciences
Hydrocarbons
null
201716
https://en.wikipedia.org/wiki/Lithium%20polymer%20battery
Lithium polymer battery
A lithium polymer battery, or more correctly, lithium-ion polymer battery (abbreviated as LiPo, LIP, Li-poly, lithium-poly, and others), is a rechargeable battery of lithium-ion technology using a polymer electrolyte instead of a liquid electrolyte. Highly conductive semisolid (gel) polymers form this electrolyte. These batteries provide higher specific energy than other lithium battery types. They are used in applications where weight is critical, such as mobile devices, radio-controlled aircraft, and some electric vehicles. They are widely used in laptop computers, tablets, and smartphones. History Lithium polymer cells follow the history of lithium-ion and lithium-metal cells, which underwent extensive research during the 1980s, reaching a significant milestone with Sony's first commercial cylindrical lithium-ion cell in 1991. After that, other packaging forms evolved, including the flat pouch format. Design origin and terminology Lithium polymer cells have evolved from lithium-ion and lithium-metal batteries. The primary difference is that instead of using a liquid lithium-salt electrolyte (such as lithium hexafluorophosphate, LiPF6) held in an organic solvent (such as EC/DMC/DEC), the battery uses a solid polymer electrolyte (SPE) such as polyethylene glycol (PEG), polyacrylonitrile (PAN), poly(methyl methacrylate) (PMMA) or poly(vinylidene fluoride) (PVdF). In the 1970s, the original polymer design used a solid dry polymer electrolyte resembling a plastic-like film, replacing the traditional porous separator soaked with electrolyte. The solid electrolyte can typically be classified into three types: dry SPE, gelled SPE, and porous SPE. The dry SPE was the first used in prototype batteries, around 1978 by Michel Armand, and 1985 by ANVAR and Elf Aquitaine of France, and Hydro-Québec of Canada. Since 1990, several organisations, such as Mead and Valence in the United States and GS Yuasa in Japan, have developed batteries using gelled SPEs. In 1996, Bellcore in the United States announced a rechargeable lithium polymer cell using porous SPE. A typical cell has four main components: a positive electrode, a negative electrode, a separator, and an electrolyte. The separator itself may be a polymer, such as a microporous film of polyethylene (PE) or polypropylene (PP); thus, even when the cell has a liquid electrolyte, it will still contain a "polymer" component. In addition to this, the positive electrode can be further divided into three parts: the lithium-transition-metal-oxide (such as LiCoO2 or LiMn2O4), a conductive additive, and a polymer binder of poly(vinylidene fluoride) (PVdF). The negative electrode material may have the same three parts, only with carbon replacing the lithium-metal-oxide. The main difference between lithium-ion polymer cells and lithium-ion cells is the physical phase of the electrolyte, such that LiPo cells use dry solid, gel-like electrolytes, whereas Li-ion cells use liquid electrolytes. Working principle Like other lithium-ion cells, LiPos work on the intercalation and de-intercalation of lithium ions from a positive electrode material and a negative electrode material, with the liquid electrolyte providing a conductive medium. To prevent the electrodes from touching each other directly, a microporous separator is in between, which allows only the ions and not the electrode particles to migrate from one side to the other. 
Voltage and state of charge The voltage of a single LiPo cell depends on its chemistry and varies from about 4.2 V (fully charged) to about 2.7–3.0 V (fully discharged). The nominal voltage is 3.6 or 3.7 volts (roughly the midpoint of the highest and lowest values) for cells based on lithium-metal-oxides (such as LiCoO2). This compares to 3.6–3.8 V (charged) to 1.8–2.0 V (discharged) for those based on lithium-iron-phosphate (LiFePO4). The exact voltage ratings should be specified in product data sheets, with the understanding that the cells should be protected by an electronic circuit that won't allow them to overcharge or over-discharge under use. LiPo battery packs, with cells connected in series and parallel, have separate pin-outs for every cell. A specialized charger may monitor the charge per cell so that all cells are brought to the same state of charge (SOC). Applying pressure on lithium polymer cells Unlike lithium-ion cylindrical and prismatic cells, which have a rigid metal case, LiPo cells have a flexible, foil-type (polymer laminate) case, so they are relatively unconstrained. Moderate pressure on the stack of layers that compose the cell results in increased capacity retention, because the contact between the components is maximised and delamination and deformation, which are associated with increased cell impedance and degradation, are prevented. Applications LiPo cells provide manufacturers with compelling advantages. They can easily produce batteries of almost any desired shape. For example, the space and weight requirements of mobile devices and notebook computers can be met. They also have a low self-discharge rate of about 5% per month. Drones, radio-controlled equipment, and aircraft LiPo batteries are now almost ubiquitous when used to power commercial and hobby drones (unmanned aerial vehicles), radio-controlled aircraft, radio-controlled cars, and large-scale model trains, where the advantages of lower weight and increased capacity and power delivery justify the price. Test reports warn of the risk of fire when the batteries are not used per the instructions. The voltage for long-term storage of a LiPo battery used in R/C models should be in the range of 3.6–3.9 V per cell; storage outside this range may damage the battery. LiPo packs also see widespread use in airsoft, where their higher discharge currents and better energy density compared with traditional NiMH batteries give a very noticeable performance gain (higher rate of fire). Personal electronics LiPo batteries are pervasive in mobile devices, power banks, very thin laptop computers, portable media players, wireless controllers for video game consoles, wireless PC peripherals, electronic cigarettes, and other applications where small form factors are sought. The high energy density outweighs cost considerations. Electric vehicles Hyundai Motor Company uses this type of battery in some of its battery-electric and hybrid vehicles, and Kia Motors in its battery-electric Kia Soul. The Bolloré Bluecar, which is used in car-sharing schemes in several cities, also uses this type of battery. Uninterruptible power supply systems Lithium-ion batteries are becoming increasingly commonplace in uninterruptible power supply (UPS) systems. They offer numerous benefits over the traditional VRLA battery, and with stability and safety improvements, confidence in the technology is growing. 
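Returning to the per-cell voltage figures given above for lithium-metal-oxide chemistry, the short Python sketch below estimates pack voltages for a series/parallel configuration. The 3S2P layout and the specific per-cell values are assumptions used only to illustrate the arithmetic; real packs must still rely on the per-cell monitoring described above.

# Per-cell voltages for a lithium-metal-oxide LiPo cell (values from the text above).
CELL_FULL, CELL_NOMINAL, CELL_EMPTY = 4.2, 3.7, 3.0

def pack_voltages(series_cells, parallel_strings):
    # Cells in series add voltage; parallel strings add capacity, not voltage.
    return {
        "full_v": series_cells * CELL_FULL,
        "nominal_v": series_cells * CELL_NOMINAL,
        "empty_v": series_cells * CELL_EMPTY,
        "parallel_strings": parallel_strings,
    }

# Hypothetical 3S2P pack: 3 cells in series, 2 such strings in parallel.
print(pack_voltages(3, 2))  # -> full 12.6 V, nominal 11.1 V, empty 9.0 V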
Their power-to-size and weight ratio is seen as a major benefit in many industries requiring critical power backup, including data centers where space is often at a premium. The longer cycle life, usable energy (Depth of discharge), and thermal runaway are also seen as a benefit of using Li-po batteries over VRLA batteries. Jump starter The battery used to start a vehicle engine is typically 12 V or 24 V, so a portable jump starter or battery booster uses three or six LiPo batteries in series (3S1P/6S1P) to start the vehicle in an emergency instead of the other jump-start methods. The price of a lead-acid jump starter is less but they are bigger and heavier than comparable lithium batteries. So such products have mostly switched to LiPo batteries or sometimes lithium iron phosphate batteries. Safety All Li-ion cells expand at high levels of state of charge (SOC) or overcharge due to slight vaporisation of the electrolyte. This may result in delamination and, thus, bad contact with the internal layers of the cell, which in turn diminishes the reliability and overall cycle life. This is very noticeable for LiPos, which can visibly inflate due to the lack of a hard case to contain their expansion. Lithium polymer batteries' safety characteristics differ from those of lithium iron phosphate batteries. Polymer electrolytes Polymer electrolytes can be divided into two large categories: dry solid polymer electrolytes (SPE) and gel polymer electrolytes (GPE). In comparison to liquid electrolytes and solid organic electrolytes, polymer electrolytes offer advantages such as increased resistance to variations in the volume of the electrodes throughout the charge and discharge processes, improved safety features, excellent flexibility, and processability. Solid polymer electrolyte was initially defined as a polymer matrix swollen with lithium salts, now called dry solid polymer electrolyte. Lithium salts are dissolved in the polymer matrix to provide ionic conductivity. Due to its physical phase, there is poor ion transfer, resulting in poor conductivity at room temperature. To improve the ionic conductivity at room temperature, gelled electrolyte is added resulting in the formation of GPEs. GPEs are formed by incorporating an organic liquid electrolyte in the polymer matrix. Liquid electrolyte is entrapped by a small amount of polymer network, hence the properties of GPE is characterized by properties between those of liquid and solid electrolytes. The conduction mechanism is similar for liquid electrolytes and polymer gels, but GPEs have higher thermal stability and a low volatile nature which also further contribute to safety. Lithium cells with solid polymer electrolyte Cells with solid polymer electrolytes have not been fully commercialised and are still a topic of research. Prototype cells of this type could be considered to be between a traditional lithium-ion battery (with liquid electrolyte) and a completely plastic, solid-state lithium-ion battery. The simplest approach is to use a polymer matrix, such as polyvinylidene fluoride (PVdF) or poly(acrylonitrile) (PAN), gelled with conventional salts and solvents, such as LiPF6 in EC/DMC/DEC. Nishi mentions that Sony started research on lithium-ion cells with gelled polymer electrolytes (GPE) in 1988, before the commercialisation of the liquid-electrolyte lithium-ion cell in 1991. At that time, polymer batteries were promising, and it seemed polymer electrolytes would become indispensable. 
Eventually, this type of cell went into the market in 1998. However, Scrosati argues that, in the strictest sense, gelled membranes cannot be classified as "true" polymer electrolytes but rather as hybrid systems where the liquid phases are contained within the polymer matrix. Although these polymer electrolytes may be dry to the touch, they can still include 30% to 50% liquid solvent. In this regard, how to define a "polymer battery" remains an open question. Other terms used in the literature for this system include hybrid polymer electrolyte (HPE), where "hybrid" denotes the combination of the polymer matrix, the liquid solvent, and the salt. It was a system like this that Bellcore used to develop an early lithium-polymer cell in 1996, which was called a "plastic" lithium-ion cell (PLiON) and subsequently commercialised in 1999. A solid polymer electrolyte (SPE) is a solvent-free salt solution in a polymer medium. It may be, for example, a compound of lithium bis(fluorosulfonyl)imide (LiFSI) and high molecular weight poly(ethylene oxide) (PEO), a high molecular weight poly(trimethylene carbonate) (PTMC), polypropylene oxide (PPO), poly[bis(methoxy-ethoxy-ethoxy)phosphazene] (MEEP), etc. PEO exhibits the most promising performance as a solid solvent for lithium salts, mainly due to its flexible ethylene oxide segments and other oxygen atoms that comprise a strong donor character, readily solvating Li+ cations. PEO is also commercially available at a very reasonable cost. The performance of these proposed electrolytes is usually measured in a half-cell configuration against an electrode of metallic lithium, making the system a "lithium-metal" cell. Still, it has also been tested with a common lithium-ion cathode material such as lithium-iron-phosphate (LiFePO4). Other attempts to design a polymer electrolyte cell include the use of inorganic ionic liquids such as 1-butyl-3-methylimidazolium tetrafluoroborate ([BMIM]BF4) as a plasticizer in a microporous polymer matrix like poly(vinylidene fluoride-co-hexafluoropropylene)/poly(methyl methacrylate) (PVDF-HFP/PMMA).
Technology
Energy storage
null
201826
https://en.wikipedia.org/wiki/Trapezoid
Trapezoid
In geometry, a trapezoid () in North American English, or trapezium () in British English, is a quadrilateral that has one pair of parallel sides. The parallel sides are called the bases of the trapezoid. The other two sides are called the legs (or the lateral sides) if they are not parallel; otherwise, the trapezoid is a parallelogram, and there are two pairs of bases. A scalene trapezoid is a trapezoid with no sides of equal measure, in contrast with the special cases below. A trapezoid is usually considered to be a convex quadrilateral in Euclidean geometry, but there are also crossed cases. If ABCD is a convex trapezoid, then ABDC is a crossed trapezoid. The metric formulas in this article apply in convex trapezoids. Etymology and trapezium versus trapezoid The ancient Greek mathematician Euclid defined five types of quadrilateral, of which four had two sets of parallel sides (known in English as square, rectangle, rhombus and rhomboid) and the last did not have two sets of parallel sides – a τραπέζια (trapezia, literally 'table', itself from τετράς (tetrás) 'four' + πέζα (péza) 'foot; end, border, edge'). Two types of trapezia were introduced by Proclus (AD 412 to 485) in his commentary on the first book of Euclid's Elements: one pair of parallel sides – a trapezium (τραπέζιον), divided into isosceles (equal legs) and scalene (unequal) trapezia; no parallel sides – a trapezoid (τραπεζοειδή, trapezoeidé, literally 'trapezium-like' (εἶδος means 'resembles'), in the same way as cuboid means 'cube-like' and rhomboid means 'rhombus-like'). All European languages follow Proclus's structure, as did English until the late 18th century, when an influential mathematical dictionary published by Charles Hutton in 1795 supported, without explanation, a transposition of the terms. This was reversed in British English in about 1875, but it has been retained in American English to the present. The following table compares usages, from the most specific definitions at the top to the most general at the bottom. Inclusive versus exclusive definition There is some disagreement whether parallelograms, which have two pairs of parallel sides, should be regarded as trapezoids. Some define a trapezoid as a quadrilateral having only one pair of parallel sides (the exclusive definition), thereby excluding parallelograms. Some sources use the term proper trapezoid to describe trapezoids under the exclusive definition, analogous to uses of the word proper for some other mathematical objects. Others define a trapezoid as a quadrilateral with at least one pair of parallel sides (the inclusive definition), making the parallelogram a special type of trapezoid. The latter definition is consistent with its uses in higher mathematics such as calculus. This article uses the inclusive definition and considers parallelograms as special cases of a trapezoid. This is also advocated in the taxonomy of quadrilaterals. Under the inclusive definition, all parallelograms (including rhombuses, squares and non-square rectangles) are trapezoids. Rectangles have mirror symmetry on mid-edges; rhombuses have mirror symmetry on vertices, while squares have mirror symmetry on both mid-edges and vertices. Special cases A right trapezoid (also called right-angled trapezoid) has two adjacent right angles. Right trapezoids are used in the trapezoidal rule for estimating areas under a curve. An acute trapezoid has two adjacent acute angles on its longer base edge. An obtuse trapezoid, on the other hand, has one acute and one obtuse angle on each base.
An isosceles trapezoid is a trapezoid where the base angles have the same measure. As a consequence the two legs are also of equal length and it has reflection symmetry. This is possible for acute trapezoids or right trapezoids (as rectangles). A parallelogram is (under the inclusive definition) a trapezoid with two pairs of parallel sides. A parallelogram has central 2-fold rotational symmetry (or point reflection symmetry). It is possible for obtuse trapezoids or right trapezoids (rectangles). A tangential trapezoid is a trapezoid that has an incircle. A Saccheri quadrilateral is similar to a trapezoid in the hyperbolic plane, with two adjacent right angles, while it is a rectangle in the Euclidean plane. A Lambert quadrilateral in the hyperbolic plane has 3 right angles. Condition of existence Four lengths a, c, b, d can constitute the consecutive sides of a non-parallelogram trapezoid with a and b parallel only when The quadrilateral is a parallelogram when , but it is an ex-tangential quadrilateral (which is not a trapezoid) when . Characterizations Given a convex quadrilateral, the following properties are equivalent, and each implies that the quadrilateral is a trapezoid: It has two adjacent angles that are supplementary, that is, they add up to 180 degrees. The angle between a side and a diagonal is equal to the angle between the opposite side and the same diagonal. The diagonals cut each other in mutually the same ratio (this ratio is the same as that between the lengths of the parallel sides). The diagonals cut the quadrilateral into four triangles of which one opposite pair have equal areas. The product of the areas of the two triangles formed by one diagonal equals the product of the areas of the two triangles formed by the other diagonal. The areas S and T of some two opposite triangles of the four triangles formed by the diagonals satisfy the equation where K is the area of the quadrilateral. The midpoints of two opposite sides of the trapezoid and the intersection of the diagonals are collinear. The angles in the quadrilateral ABCD satisfy The cosines of two adjacent angles sum to 0, as do the cosines of the other two angles. The cotangents of two adjacent angles sum to 0, as do the cotangents of the other two adjacent angles. One bimedian divides the quadrilateral into two quadrilaterals of equal areas. Twice the length of the bimedian connecting the midpoints of two opposite sides equals the sum of the lengths of the other sides. Additionally, the following properties are equivalent, and each implies that opposite sides a and b are parallel: The consecutive sides a, c, b, d and the diagonals p, q satisfy the equation The distance v between the midpoints of the diagonals satisfies the equation Midsegment and height The midsegment of a trapezoid is the segment that joins the midpoints of the legs. It is parallel to the bases. Its length m is equal to the average of the lengths of the bases a and b of the trapezoid, The midsegment of a trapezoid is one of the two bimedians (the other bimedian divides the trapezoid into equal areas). The height (or altitude) is the perpendicular distance between the bases. In the case that the two bases have different lengths (a ≠ b), the height of a trapezoid h can be determined by the length of its four sides using the formula where c and d are the lengths of the legs and . 
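In symbols, with parallel sides a and b (taking b > a) and legs c and d, the existence condition, midsegment length and height described above can be summarized as follows. This is a compact restatement consistent with the definitions in the text; the height expression is obtained by applying Heron's formula to the triangle with sides b − a, c and d formed by translating one leg along the bases.

\[ |d - c| < b - a < d + c \qquad \text{(non-parallelogram trapezoid)}, \]
\[ m = \frac{a + b}{2}, \]
\[ h = \frac{1}{2(b - a)} \sqrt{(b-a+c+d)\,(a-b+c+d)\,(b-a+c-d)\,(b-a-c+d)} \qquad (a \ne b). \]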
Area The area K of a trapezoid is given by where a and b are the lengths of the parallel sides, h is the height (the perpendicular distance between these sides), and m is the arithmetic mean of the lengths of the two parallel sides. In 499 AD Aryabhata, a great mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, used this method in the Aryabhatiya (section 2.8). This yields as a special case the well-known formula for the area of a triangle, by considering a triangle as a degenerate trapezoid in which one of the parallel sides has shrunk to a point. The 7th-century Indian mathematician Bhāskara I derived the following formula for the area of a trapezoid with consecutive sides a, c, b, d: where a and b are parallel and b > a. This formula can be factored into a more symmetric version When one of the parallel sides has shrunk to a point (say a = 0), this formula reduces to Heron's formula for the area of a triangle. Another equivalent formula for the area, which more closely resembles Heron's formula, is where is the semiperimeter of the trapezoid. (This formula is similar to Brahmagupta's formula, but it differs from it, in that a trapezoid might not be cyclic (inscribed in a circle). The formula is also a special case of Bretschneider's formula for a general quadrilateral). From Bretschneider's formula, it follows that The bimedian connecting the parallel sides bisects the area. Diagonals The lengths of the diagonals are where a is the short base, b is the long base, and c and d are the trapezoid legs. If the trapezoid is divided into four triangles by its diagonals AC and BD (as shown on the right), intersecting at O, then the area of is equal to that of , and the product of the areas of and is equal to that of and . The ratio of the areas of each pair of adjacent triangles is the same as that between the lengths of the parallel sides. Let the trapezoid have vertices A, B, C, and D in sequence and have parallel sides AB and DC. Let E be the intersection of the diagonals, and let F be on side DA and G be on side BC such that FEG is parallel to AB and CD. Then FG is the harmonic mean of AB and DC: The line that goes through both the intersection point of the extended nonparallel sides and the intersection point of the diagonals, bisects each base. Other properties The center of area (center of mass for a uniform lamina) lies along the line segment joining the midpoints of the parallel sides, at a perpendicular distance x from the longer side b given by The center of area divides this segment in the ratio (when taken from the short to the long side) If the angle bisectors to angles A and B intersect at P, and the angle bisectors to angles C and D intersect at Q, then Applications Architecture In architecture the word is used to refer to symmetrical doors, windows, and buildings built wider at the base, tapering toward the top, in Egyptian style. If these have straight sides and sharp angular corners, their shapes are usually isosceles trapezoids. This was the standard style for the doors and windows of the Inca. Geometry The crossed ladders problem is the problem of finding the distance between the parallel sides of a right trapezoid, given the diagonal lengths and the distance from the perpendicular leg to the diagonal intersection. Biology In morphology, taxonomy and other descriptive disciplines in which a term for such shapes is necessary, terms such as trapezoidal or trapeziform commonly are useful in descriptions of particular organs or forms. 
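In symbols, using the same labels as in the preceding passage (parallel sides a and b, height h, midsegment m, and FG the segment through the intersection of the diagonals parallel to bases AB and DC), the basic area relation and the harmonic-mean property can be written compactly as

\[ K = \frac{a + b}{2}\, h = m\, h, \qquad FG = \frac{2\, AB \cdot DC}{AB + DC}. \]

Combining the first relation with the height formula given earlier yields, for b > a,

\[ K = \frac{a + b}{4(b - a)} \sqrt{(b-a+c+d)\,(a-b+c+d)\,(b-a+c-d)\,(b-a-c+d)}. \]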
Computer engineering In computer engineering, specifically digital logic and computer architecture, trapezoids are typically utilized to symbolize multiplexors. Multiplexors are logic elements that select between multiple inputs and produce a single output based on a select signal. Typical designs employ trapezoids without specifically stating that they are multiplexors, as the symbol is universally recognized.
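As a side illustration of the trapezoidal rule mentioned under the special cases above, a minimal Python sketch of the idea follows (the function name and example values are illustrative, not drawn from the article): the area under a curve is approximated by summing the areas of adjacent right trapezoids of equal width.

def trapezoidal_rule(f, a, b, n):
    # Approximate the integral of f over [a, b] using n trapezoids of equal width.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))  # the two end ordinates are weighted by 1/2
    for i in range(1, n):
        total += f(a + i * h)    # interior ordinates are weighted by 1
    return total * h

# Example: area under y = x**2 on [0, 1]; the exact value is 1/3.
print(trapezoidal_rule(lambda x: x * x, 0.0, 1.0, 1000))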
Mathematics
Two-dimensional space
null
201943
https://en.wikipedia.org/wiki/Bird%20migration
Bird migration
Bird migration is a seasonal movement of birds between breeding and wintering grounds that occurs twice a year. It is typically from north to south or from south to north. Migration carries high costs in predation and mortality. The Arctic tern holds the long-distance migration record for birds, travelling between Arctic breeding grounds and the Antarctic each year. Some species of tubenoses, such as albatrosses, circle the Earth, flying over the southern oceans, while others such as Manx shearwaters migrate between their northern breeding grounds and the southern ocean. Shorter migrations are common, while longer ones are less so. The shorter migrations include altitudinal migrations on mountains, including the Andes and Himalayas. The timing of migration seems to be controlled primarily by changes in day length. Migrating birds navigate using celestial cues from the Sun and stars, the Earth's magnetic field, and mental maps. Historical views Writings of ancient Greeks recognized the seasonal comings and goings of birds. Aristotle wrote that birds transmuted into other birds or other species, such as fish, which explained their disappearance and reappearance. Aristotle thought many birds disappeared during cold weather because they were torpid, lying undetected in places such as tree hollows or burrowed into the mud at the bottom of ponds, then reemerging months later. Still, Aristotle recorded that cranes traveled from the steppes of Scythia to marshes at the headwaters of the Nile, an observation repeated by Pliny the Elder in his Historia Naturalis. Two books of the Bible may address avian migration. The Book of Job notes migrations with the inquiry: "Is it by your insight that the hawk hovers, spreads its wings southward?" The Book of Jeremiah comments: "Even the stork in the heavens knows its seasons, and the turtle dove, the swift and the crane keep the time of their arrival." In the Pacific, traditional land-finding techniques used by Micronesians and Polynesians suggest that bird migration was observed and interpreted for more than 3,000 years. In Samoan tradition, for example, Tagaloa sent his daughter Sina to Earth in the form of a bird, Tuli, to find dry land, the word tuli referring specifically to land-finding waders, often to the Pacific golden plover. Swallow migration versus hibernation Aristotle, however, suggested that swallows and other birds hibernated. This belief persisted as late as 1878 when Elliott Coues listed the titles of no fewer than 182 papers dealing with the hibernation of swallows. Even the "highly observant" Gilbert White, in his 1789 The Natural History of Selborne, quoted a man's story about swallows being found in a chalk cliff collapse "while he was a schoolboy at Brighthelmstone", though the man denied being an eyewitness. However, he writes that "as to swallows being found in a torpid state during the winter in the Isle of Wight or any part of this country, I never heard any such account worth attending to", and that if early swallows "happen to find frost and snow they immediately withdraw for a time—a circumstance this much more in favour of hiding than migration", since he doubts they would "return for a week or two to warmer latitudes". Only at the end of the eighteenth century was migration accepted as an explanation for the winter disappearance of birds from northern climes.
Thomas Bewick's A History of British Birds (Volume 1, 1797) mentions a report from "a very intelligent master of a vessel" who, "between the islands of Menorca and Majorca, saw great numbers of Swallows flying northward", and states the situation in Britain as follows: Bewick then describes an experiment that succeeded in keeping swallows alive in Britain for several years, where they remained warm and dry through the winters. He concludes: Pfeilstörche In 1822, a white stork was found in the German state of Mecklenburg with an arrow made from central African hardwood, which provided some of the earliest evidence of long-distance stork migration. This bird was referred to as a Pfeilstorch, German for "Arrow stork". Since then, around 25 Pfeilstörche have been documented. General patterns Migration is the regular seasonal movement, often north and south, undertaken by many species of birds. Migration is marked by its annual seasonality and movement between breeding and non-breeding areas. Nonmigratory bird movements include those made in response to environmental changes including in food availability, habitat, or weather. Sometimes, journeys are not termed "true migration" because they are irregular (nomadism, invasions, irruptions) or in only one direction (dispersal, movement of young away from natal area). Non-migratory birds are said to be resident or sedentary. Approximately 1,800 of the world's 10,000 bird species are long-distance migrants. Many bird populations migrate long distances along a flyway. The most common pattern involves flying north in the spring to breed in the temperate or Arctic summer and returning in the autumn to wintering grounds in warmer regions to the south. Of course, in the southern hemisphere, the directions are reversed, but there is less land area in the far south to support long-distance migration. The primary motivation for migration appears to be food; for example, some hummingbirds choose not to migrate if fed through the winter. In addition, the longer days of the northern summer provide extended time for breeding birds to feed their young. This helps diurnal birds to produce larger clutches than related non-migratory species that remain in the tropics. As the days shorten in autumn, the birds return to warmer regions where the available food supply varies little with the season. These advantages offset the high stress, physical exertion costs, and other risks of migration. Predation can be heightened during migration: Eleonora's falcon Falco eleonorae, which breeds on Mediterranean islands, has a very late breeding season, coordinated with the autumn passage of southbound passerine migrants, which it feeds to its young. A similar strategy is adopted by the greater noctule bat, which preys on nocturnal passerine migrants. The higher concentrations of migrating birds at stopover sites make them prone to parasites and pathogens, which require a heightened immune response. Within a species not all populations may be migratory; this is known as "partial migration". Partial migration is very common in the southern continents; in Australia, 44% of non-passerine birds and 32% of passerine species are partially migratory. In some species, the population at higher latitudes tends to be migratory and will often winter at lower latitude. The migrating birds bypass the latitudes where other populations may be sedentary, where suitable wintering habitats may already be occupied. This is an example of leap-frog migration. 
Many fully migratory species show leap-frog migration (birds that nest at higher latitudes spend the winter at lower latitudes), and many show the alternative, chain migration, where populations 'slide' more evenly north and south without reversing the order. Within a population, it is common for different ages and/or sexes to have different patterns of timing and distance. Female chaffinches Fringilla coelebs in Eastern Fennoscandia migrate earlier in the autumn than males do, and the European tits of the genera Parus and Cyanistes migrate only in their first year. Most migrations begin with the birds starting off in a broad front. Often, this front narrows into one or more preferred routes termed flyways. These routes typically follow mountain ranges or coastlines, sometimes rivers, and may take advantage of updrafts and other wind patterns or avoid geographical barriers such as large stretches of open water. The specific routes may be genetically programmed or learned to varying degrees. The routes taken on forward and return migration are often different. A common pattern in North America is clockwise migration, where birds flying north tend to be further west, and birds flying south tend to shift eastwards. Many, if not most, birds migrate in flocks. For larger birds, flying in flocks reduces the energy cost. Geese in a V formation may conserve 12–20% of the energy they would need to fly alone. Red knots Calidris canutus and dunlins Calidris alpina were found in radar studies to fly faster in flocks than when they were flying alone. Birds fly at varying altitudes during migration. An expedition to Mt. Everest found skeletons of northern pintail Anas acuta and black-tailed godwit Limosa limosa at on the Khumbu Glacier. Bar-headed geese Anser indicus have been recorded by GPS flying at up to while crossing the Himalayas, at the same time engaging in the highest rates of climb to altitude for any bird. Anecdotal reports of them flying much higher have yet to be corroborated with any direct evidence. Seabirds fly low over water but gain altitude when crossing land, and the reverse pattern is seen in land birds. However, most bird migration is in the range of . Bird-strike aviation records from the United States show most collisions occur below and almost none above . Bird migration is not limited to birds that can fly. Most species of penguin (Spheniscidae) migrate by swimming. These routes can cover over . Dusky grouse Dendragapus obscurus perform altitudinal migration mostly by walking. Emus Dromaius novaehollandiae in Australia have been observed to undertake long-distance movements on foot during droughts. Nocturnal migratory behaviour During nocturnal migration ("nocmig"), many birds give nocturnal flight calls, which are short, contact-type calls. These likely serve to maintain the composition of a migrating flock and to avoid collisions in the air, and can sometimes encode the sex of a migrating individual. Nocturnal migration can be monitored using weather radar data, allowing ornithologists to estimate the number of birds migrating on a given night, and the direction of the migration. Future research includes the automatic detection and identification of nocturnally calling migrant birds. Nocturnal migrants land in the morning and may feed for a few days before resuming their migration. These birds are referred to as passage migrants in the regions where they occur for a short period between the origin and destination.
Nocturnal migrants minimize depredation, avoid overheating and can feed during the day. One cost of nocturnal migration is the loss of sleep. Migrants may be able to alter their quality of sleep to compensate for the loss. Long-distance migration The typical image of migration is of northern land birds, such as swallows (Hirundinidae) and birds of prey, making long flights to the tropics. However, many Holarctic wildfowl and finch (Fringillidae) species winter in the North Temperate Zone, in regions with milder winters than their summer breeding grounds. For example, the pink-footed goose migrates from Iceland to Britain and neighbouring countries, whilst the dark-eyed junco migrates from subarctic and arctic climates to the contiguous United States and the American goldfinch from taiga to wintering grounds extending from the American South northwestward to western Oregon. Some ducks, such as the garganey Anas querquedula, move completely or partially into the tropics. The European pied flycatcher Ficedula hypoleuca follows this migratory trend, breeding in Asia and Europe and wintering in Africa. Migration routes and wintering grounds are both genetically and traditionally determined depending on the social system of the species. In long-lived, social species such as white storks (Ciconia ciconia), flocks are often led by the oldest members and young storks learn the route on their first journey. In short-lived species that migrate alone, such as the Eurasian blackcap Sylvia atricapilla or the yellow-billed cuckoo Coccyzus americanus, first-year migrants follow a genetically determined route that is alterable with selective breeding. Many migration routes of long-distance migratory birds are circuitous due to evolutionary history: the breeding range of northern wheatears Oenanthe oenanthe has expanded to cover the entire Northern Hemisphere, but the species still migrates up to 14,500 km to reach ancestral wintering grounds in sub-Saharan Africa rather than establish new wintering grounds closer to breeding areas. A migration route often does not follow the most direct line between breeding and wintering grounds. Rather, it could follow a hooked or arched line, with detours around geographical barriers or towards suitable stopover habitat. For most land birds, such barriers could consist of large water bodies or high mountain ranges, a lack of stopover or feeding sites, or a lack of thermal columns (important for broad-winged birds). Conversely, in water-birds, large areas of land without wetlands offering suitable feeding sites may present a barrier, and detours avoiding such barriers are observed. For example, brent geese Branta bernicla bernicla migrating between the Taymyr Peninsula and the Wadden Sea travel via low-lying coastal feeding-areas on the White Sea and the Baltic Sea rather than directly across the Arctic Ocean and the Scandinavian mainland. Great snipes make non-stop flights of 4,000–7,000 km, lasting 60–90 h, during which they change their average cruising heights from 2,000 m (above sea level) at night to around 4,000 m during daytime. In waders A similar situation occurs with waders (called shorebirds in North America). Many species, such as dunlin Calidris alpina and western sandpiper Calidris mauri, undertake long movements from their Arctic breeding grounds to warmer locations in the same hemisphere, but others such as semipalmated sandpiper C. pusilla travel longer distances to the tropics in the Southern Hemisphere.
For some species of waders, migration success depends on the availability of certain key food resources at stopover points along the migration route. This gives the migrants an opportunity to refuel for the next leg of the voyage. Some examples of important stopover locations are the Bay of Fundy and Delaware Bay. Some bar-tailed godwits Limosa lapponica baueri have the longest known non-stop flight of any migrant, flying 11,000 km from Alaska to their New Zealand non-breeding areas. Prior to migration, 55 percent of their bodyweight is stored as fat to fuel this uninterrupted journey. In seabirds Seabird migration is similar in pattern to those of the waders and waterfowl. Some, such as the black guillemot Cepphus grylle and some gulls, are quite sedentary; others, such as most terns and auks breeding in the temperate northern hemisphere, move varying distances south in the northern winter. The Arctic tern Sterna paradisaea has the longest-distance migration of any bird, and sees more daylight than any other, moving from its Arctic breeding grounds to the Antarctic non-breeding areas. One Arctic tern, ringed (banded) as a chick on the Farne Islands in Northumberland off the British east coast, reached Melbourne, Australia in just three months from fledging, a sea journey of over , while another also from the Farne Islands with a light level geolocator tag 'G82' covered a staggering in just 10 months from the end of one breeding season to the start of the next, travelling not just the length of the Atlantic Ocean and the width of the Indian Ocean, but also half way across the South Pacific to the boundary between the Ross and Amundsen Seas before returning back west along the Antarctic coast and back up the Atlantic. Many tubenosed birds breed in the southern hemisphere and migrate north in the southern winter. The most pelagic species, mainly in the 'tubenose' order Procellariiformes, are great wanderers, and the albatrosses of the southern oceans may circle the globe as they ride the "Roaring Forties" outside the breeding season. The tubenoses spread widely over large areas of open ocean, but congregate when food becomes available. Many are among the longest-distance migrants; sooty shearwaters Puffinus griseus nesting on the Falkland Islands migrate between the breeding colony and the North Atlantic Ocean off Norway. Some Manx shearwaters Puffinus puffinus do this same journey in reverse. As they are long-lived birds, they may cover enormous distances during their lives; one record-breaking Manx shearwater is calculated to have flown during its over-50-year lifespan. Diurnal migration in large birds using thermals Some large broad-winged birds rely on thermal columns of rising hot air to enable them to soar. These include many birds of prey such as vultures, eagles, and buzzards, but also storks. These birds migrate in the daytime. Migratory species in these groups have great difficulty crossing large bodies of water, since thermals only form over land, and these birds cannot maintain active flight for long distances. Mediterranean and other seas present a major obstacle to soaring birds, which must cross at the narrowest points. Massive numbers of large raptors and storks pass through areas such as the Strait of Messina, Gibraltar, Falsterbo, and the Bosphorus at migration times. More common species, such as the European honey buzzard Pernis apivorus, can be counted in hundreds of thousands in autumn. 
Other barriers, such as mountain ranges, can cause funnelling, particularly of large diurnal migrants, as in the Central American migratory bottleneck. The Batumi bottleneck in the Caucasus is one of the heaviest migratory funnels on earth, created when hundreds of thousands of soaring birds avoid flying over the Black Sea surface and across high mountains. Birds of prey such as honey buzzards which migrate using thermals lose only 10 to 20% of their weight during migration, which may explain why they forage less during migration than do smaller birds of prey with more active flight such as falcons, hawks and harriers. From observing the migration of eleven soaring bird species over the Strait of Gibraltar, species which did not advance their autumn migration dates were those with declining breeding populations in Europe. Short-distance and altitudinal migration Many long-distance migrants appear to be genetically programmed to respond to changing day length. Species that move short distances, however, may not need such a timing mechanism, instead moving in response to local weather conditions. Thus mountain and moorland breeders, such as wallcreeper Tichodroma muraria and white-throated dipper Cinclus cinclus, may move only altitudinally to escape the cold higher ground. Other species such as merlin Falco columbarius and Eurasian skylark Alauda arvensis move further, to the coast or towards the south. Species like the chaffinch are much less migratory in Britain than those of continental Europe, mostly not moving more than 5 km in their lives. Short-distance passerine migrants have two evolutionary origins. Those that have long-distance migrants in the same family, such as the common chiffchaff Phylloscopus collybita, are species of southern hemisphere origins that have progressively shortened their return migration to stay in the northern hemisphere. Species that have no long-distance migratory relatives, such as the waxwings Bombycilla, are effectively moving in response to winter weather and the loss of their usual winter food, rather than enhanced breeding opportunities. In the tropics there is little variation in the length of day throughout the year, and it is always warm enough for a food supply, but altitudinal migration occurs in some tropical birds. There is evidence that this enables the migrants to obtain more of their preferred foods such as fruits. Altitudinal migration is common on mountains worldwide, such as in the Himalayas and the Andes. Dusky grouse in Colorado migrate less than a kilometer away from their summer grounds to winter sites which may be higher or lower by about 400 m in altitude than the summer sites. Many bird species in arid regions across southern Australia are nomadic; they follow water and food supply around the country in an irregular pattern, unrelated to season but related to rainfall. Several years may pass between visits to an area by a particular species. Irruptions and dispersal Sometimes circumstances such as a good breeding season followed by a food source failure the following year lead to irruptions in which large numbers of a species move far beyond the normal range. Bohemian waxwings Bombycilla garrulus well show this unpredictable variation in annual numbers, with five major arrivals in Britain during the nineteenth century, but 18 between the years 1937 and 2000. Red crossbills Loxia curvirostra too are irruptive, with widespread invasions across England noted in 1251, 1593, 1757, and 1791. 
Bird migration is primarily, but not entirely, a Northern Hemisphere phenomenon. This is because continental landmasses of the Northern Hemisphere are almost entirely temperate and subject to winter food shortages, driving bird populations south (in some cases as far as the Southern Hemisphere) to overwinter. In contrast, among (pelagic) seabirds, species of the Southern Hemisphere are more likely to migrate. This is because there is a large area of ocean in the Southern Hemisphere, and more islands suitable for seabirds to nest. Physiology and control Migration, including its timing and response, is genetically controlled and appears to be a primitive trait that is present even in non-migratory species of birds. The ability to navigate and orient themselves during migration is a much more complex phenomenon that may include both endogenous programs as well as learning. Timing The primary physiological cue for migration is the change in day length. These changes are related to hormonal changes in the birds. In the period before migration, many birds display higher activity or Zugunruhe (migratory restlessness), first described by Johann Friedrich Naumann in 1795, as well as physiological changes such as increased fat deposition. The occurrence of Zugunruhe even in cage-raised birds with no environmental cues (e.g. shortening of day and falling temperature) has pointed to the role of circannual endogenous programs in controlling bird migrations. Caged birds display a preferential flight direction that corresponds with the migratory direction they would take in nature, changing their preferential direction at roughly the same time their wild conspecifics change course. Satellite tracking of 48 individual Asian houbaras (Chlamydotis macqueenii) across multiple migrations showed that this species uses local temperature to time its spring migration departure. Notably, departure responses to temperature varied between individuals but were individually repeatable (when tracked over multiple years). This suggests that the individual use of temperature as a cue allows for population-level adaptation to climate change. In other words, in a warming world, many migratory birds are predicted to depart earlier in the year for their summer or winter destination. In polygynous species with considerable sexual dimorphism, males tend to return earlier to the breeding sites than females do. This is termed protandry. Orientation and navigation Navigation is based on a variety of senses. Many birds have been shown to use a sun compass. Using the Sun for direction requires compensating for the time of day. Navigation has been shown to be based on a combination of other abilities, including the ability to detect magnetic fields (magnetoreception) and to use visual landmarks as well as olfactory cues. Long-distance migrants are believed to disperse as young birds and form attachments to potential breeding sites and to favourite wintering sites. Once the site attachment is made they show high site-fidelity, visiting the same wintering sites year after year. The ability of birds to navigate during migrations cannot be fully explained by endogenous programming, even with the help of responses to environmental cues. The ability to successfully perform long-distance migrations can probably only be fully explained by accounting for the cognitive ability of the birds to recognize habitats and form mental maps.
Satellite tracking of day migrating raptors such as ospreys and honey buzzards has shown that older individuals are better at making corrections for wind drift. Birds rely for navigation on a combination of innate biological senses and experience, as with the two electromagnetic tools that they use. A young bird on its first migration flies in the correct direction according to the Earth's magnetic field, but does not know how far the journey will be. It does this through a radical pair mechanism whereby chemical reactions in special photo pigments sensitive to short wavelengths are affected by the field. Although this only works during daylight hours, it does not use the position of the Sun in any way. With experience, it learns various landmarks and this "mapping" is done by magnetites in the trigeminal system, which tell the bird how strong the field is. Because birds migrate between northern and southern regions, the magnetic field strengths at different latitudes let it interpret the radical pair mechanism more accurately and let it know when it has reached its destination. There is a neural connection between the eye and "Cluster N", the part of the forebrain that is active during migrational orientation, suggesting that birds may actually be able to see the magnetic field of the Earth. Vagrancy Migrating birds can lose their way and appear outside their normal ranges. This can be due to flying past their destinations as in the "spring overshoot" in which birds returning to their breeding areas overshoot and end up further north than intended. Certain areas, because of their location, have become famous as watchpoints for such birds. Examples are the Point Pelee National Park in Canada, and Spurn in England. Reverse migration, where the genetic programming of young birds fails to work properly, can lead to rarities turning up as vagrants thousands of kilometres out of range. Drift migration of birds blown off course by the wind can result in "falls" of large numbers of migrants at coastal sites. A related phenomenon called "abmigration" involves birds from one region joining similar birds from a different breeding region in the common winter grounds and then migrating back along with the new population. This is especially common in some waterfowl, which shift from one flyway to another. Migration conditioning It has been possible to teach a migration route to a flock of birds, for example in re-introduction schemes. After a trial with Canada geese Branta canadensis, microlight aircraft were used in the US to teach safe migration routes to reintroduced whooping cranes Grus americana. Adaptations Birds need to alter their metabolism to meet the demands of migration. The storage of energy through the accumulation of fat and the control of sleep in nocturnal migrants require special physiological adaptations. In addition, the feathers of a bird suffer from wear-and-tear and must be moulted. The timing of this moult – usually once a year but sometimes twice – varies with some species moulting prior to moving to their winter grounds and others molting prior to returning to their breeding grounds. Apart from physiological adaptations, migration sometimes requires behavioural changes such as flying in flocks to reduce the energy used in migration or the risk of predation. Evolutionary and ecological factors Migration in birds is highly labile and is believed to have developed independently in many avian lineages. 
While it is agreed that the behavioural and physiological adaptations necessary for migration are under genetic control, some authors have argued that no genetic change is necessary for migratory behaviour to develop in a sedentary species because the genetic framework for migratory behaviour exists in nearly all avian lineages. This explains the rapid appearance of migratory behaviour after the most recent glacial maximum. Theoretical analyses show that detours that increase flight distance by up to 20% will often be adaptive on aerodynamic grounds – a bird that loads itself with food to cross a long barrier flies less efficiently. However some species show circuitous migratory routes that reflect historical range expansions and are far from optimal in ecological terms. An example is the migration of continental populations of Swainson's thrush Catharus ustulatus, which fly far east across North America before turning south via Florida to reach northern South America; this route is believed to be the consequence of a range expansion that occurred about 10,000 years ago. Detours may also be caused by differential wind conditions, predation risk, or other factors. Climate change Large scale climatic changes are expected to have an effect on the timing of migration. Studies have shown a variety of effects including timing changes in migration, breeding as well as population declines. Bird migration is generally synchronised to take advantage of seasonal resources. For example, there is a strong link between seasonal migration and vegetation greenness in North America. Climate-induced shifts in the phenology of seasonal resource availability can cause mismatches between the timing of increased resource availability and important life-history events such as migration and breeding (aka phenological mismatch or phenological asynchrony). These mismatches between the timing of resource availability and when organisms need additional resources may impact species’ fitness, as described by the match-mismatch hypothesis. In birds, individuals may use local temperature as a cue for migration. Changing temperature patterns due to climate change can result in population-level shifts in migration phenology. Such shifts in the timing of migration of hundreds of species are already detectable at the continental scale. While phenological mismatches appear to be more pronounced in long-distance migrants, certain species traits such as a generalist diet may help some species avoid more severe consequences of mismatches. Ecological effects The migration of birds also aids the movement of other species, including those of ectoparasites such as ticks and lice, which in turn may carry micro-organisms including those of concern to human health. Due to the global spread of avian influenza, bird migration has been studied as a possible mechanism of disease transmission, but it has been found not to present a special risk; import of pet and domestic birds is a greater threat. Some viruses that are maintained in birds without lethal effects, such as the West Nile virus may however be spread by migrating birds. Birds may also have a role in the dispersal of propagules of plants and plankton. Some predators take advantage of the concentration of birds during migration. Greater noctule bats feed on nocturnal migrating passerines. Some birds of prey specialize on migrating waders. 
Study techniques Early studies on the timing of migration began in 1749 in Finland, with Johannes Leche of Turku collecting the dates of arrivals of spring migrants. Bird migration routes have been studied by a variety of techniques including the oldest, marking. Swans have been marked with a nick on the beak since about 1560 in England. Scientific ringing was pioneered by Hans Christian Cornelius Mortensen in 1899. Other techniques include radar and satellite tracking. The rate of bird migration over the Alps (up to a height of 150 m) was found to be highly comparable between fixed-beam radar measurements and visual bird counts, highlighting the potential use of this technique as an objective way of quantifying bird migration. Stable isotopes of hydrogen, oxygen, carbon, nitrogen, and sulphur can establish avian migratory connectivity between wintering sites and breeding grounds. Stable isotopic methods to establish migratory linkage rely on spatial isotopic differences in bird diet that are incorporated into inert tissues like feathers, or into growing tissues such as claws and muscle or blood. An approach to identifying migration intensity makes use of upward-pointing microphones to record the nocturnal contact calls of flocks flying overhead. These are then analyzed in a laboratory to measure time, frequency and species. An older technique developed by George Lowery and others to quantify migration involves observing the face of the full moon with a telescope and counting the silhouettes of flocks of birds as they fly at night. Orientation behaviour studies have been traditionally carried out using variants of a setup known as the Emlen funnel, which consists of a circular cage with the top covered by glass or wire-screen so that either the sky is visible or the setup is placed in a planetarium or with other controls on environmental cues. The orientation behaviour of the bird inside the cage is studied quantitatively using the distribution of marks that the bird leaves on the walls of the cage. Other approaches used in pigeon homing studies make use of the direction in which the bird vanishes on the horizon. Threats and conservation Human activities have threatened many migratory bird species. The distances involved in bird migration mean that birds often cross the political boundaries of countries, and conservation measures require international cooperation. Several international treaties have been signed to protect migratory species, including the Migratory Bird Treaty Act of 1918 of the US and the African-Eurasian Migratory Waterbird Agreement. The concentration of birds during migration can put species at risk. Some spectacular migrants have already gone extinct; during the passenger pigeon's (Ectopistes migratorius) migration the enormous flocks were wide, darkening the sky, and long, taking several days to pass. Hunting along migration routes threatens some bird species. The populations of Siberian cranes (Leucogeranus leucogeranus) that wintered in India declined due to hunting along the route, particularly in Afghanistan and Central Asia. Birds were last seen in their favourite wintering grounds in Keoladeo National Park in 2002. Structures such as power lines, wind farms and offshore oil-rigs have also been known to affect migratory birds. Other migration hazards include pollution, storms, wildfires, and habitat destruction along migration routes, denying migrants food at stopover points.
For example, in the East Asian–Australasian Flyway, up to 65% of key intertidal habitat at the Yellow Sea migration bottleneck has been destroyed since the 1950s. Other significant areas include stop-over sites between the wintering and breeding territories. A capture-recapture study of passerine migrants with high fidelity for breeding and wintering sites did not show similar strict association with stop-over sites. Unfortunately, many historic stopover sites have been destroyed or drastically reduced due to human agricultural development, leading to an increased risk of bird extinction, especially in the face of climate change. Conversely, so-called "ship-assisted migration" may be a modern benefit to migrating birds by giving them a mid-ocean rest stop on ships. Stopover site conservation efforts California's Central Valley was once a massive stopover site for birds traveling along the Pacific Flyway, before being converted into agricultural land. Some 90% of North America's shorebirds use this migration path, and the destruction of rest stops has had detrimental impacts on bird populations, as birds that cannot get adequate rest and food may be unable to complete their migration. As a solution, conservationists and farmers in the United States are now working together to help provide stopover habitats for migrating birds. In the winter, when many of these birds are migrating, farmers are now flooding their fields in order to provide temporary wetlands for birds to rest and feed before continuing their journey. Rice is a major crop produced along this flyway, and flooded rice paddies have been shown to be important areas for at least 169 different bird species. For example, in California, legislation changes have made it illegal for farmers to burn excess rice straw, so instead they have begun flooding their fields during the winter. Similar practices are now taking place across the nation, with the Mississippi Alluvial Valley being a primary area of interest due to its agricultural use and its importance for migration. Plant debris provides food sources for the birds while the newly formed wetland serves as a habitat for bird prey species such as insects and other invertebrates. In turn, bird foraging assists in breaking down plant matter. Droppings then help to fertilize the field, benefiting the farmers and decreasing their need for artificial fertilizers by at least 13 percent. Recent studies have shown that the implementation of these temporary wetlands has had significant positive impacts on the populations of birds such as the white-fronted goose, as well as various species of wading birds. The artificial nature of these temporary wetlands also greatly reduces the threat of predation from other wild animals. This practice requires extremely low investment on the part of the farmers, and researchers believe that mutually beneficial approaches such as this are key to wildlife conservation moving forward. Economic incentives are key to getting more farmers to participate in this practice. However, issues can arise if bird populations are too high, with their large amounts of droppings decreasing water quality and potentially leading to eutrophication. Increasing participation in this practice would allow migratory birds to spread out and rest at a wider variety of locations, decreasing the negative impacts of having too many birds congregated in a small area. Using this practice in areas with close proximity to natural wetlands could also greatly increase its positive impact.
Biology and health sciences
Ethology
Biology
201951
https://en.wikipedia.org/wiki/European%20honey%20buzzard
European honey buzzard
The European honey buzzard (Pernis apivorus), also known as the pern or common pern, is a bird of prey in the family Accipitridae. Taxonomy The European honey buzzard was formally described in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. He placed it with the falcons and eagles in the genus Falco and coined the binomial name Falco apivorus. Linnaeus cited earlier works including the 1678 description by the English naturalist Francis Willughby and the 1713 description by John Ray. The European honey buzzard is now one of four species placed in the genus Pernis that was introduced by Georges Cuvier in 1816. The species is monotypic: no subspecies are recognised. The binomen is derived from Ancient Greek pernes πέρνης, a term used by Aristotle for a bird of prey, and Latin apivorus "bee-eating", from apis, "bee" and -vorus, "-eating". In fact, bees are much less important than wasps in the birds' diet. It is accordingly called Wespenbussard ("wasp buzzard") in German, with similar names in some other Germanic languages and in Hungarian ("darázsölyv"). Despite its English name, this species is more closely related to kites of the genera Leptodon and Chondrohierax than to true buzzards in Buteo. Description The -long honey buzzard is larger and longer-winged, with a wingspan, when compared to the smaller common buzzard (Buteo buteo). It appears longer necked with a small head, and soars on flat wings. It has a longer tail, which has fewer bars than the Buteo buzzard, usually with two narrow dark bars and a broad dark subterminal bar. The sexes can be distinguished by plumage, which is unusual for a large bird of prey. The male has a blue-grey head, while the female's head is brown. The female is slightly larger and darker than the male. The soaring jizz is quite diagnostic; the wings are held straight with the wing tips horizontal or sometimes slightly pointed down. The head protrudes forwards with a slight kink downwards and sometimes a very angular chest can be seen, similar to a sparrowhawk, although this may not be diagnostic. The angular chest is most pronounced when seen in direct flight with tail narrowed. The call is a clear peee-lu. Distribution and habitat The European honey buzzard is a summer migrant to a relatively small area in the western Palearctic from most of Europe to as far east as southwestern Siberia. The eastern boundary of its range is not yet known exactly; it is thought to lie in the Tomsk–Novosibirsk–Barnaul area. It is seen in a wide range of habitats, but generally prefers woodland and exotic plantations. It migrates to tropical Africa for the European winter. Movements Being a long-distance migrant, the honey buzzard relies on magnetic orientation to find its way south, as well as on a visual memory of notable geographical features, such as mountain ranges and rivers, along the way. It avoids large expanses of water over which it cannot soar. Accordingly, great numbers of honey buzzards can be seen crossing the Mediterranean Sea at its narrowest stretches, such as the Strait of Gibraltar, the Strait of Messina and the Bosphorus, or passing through Lebanon or Israel. Status in Britain The bird is an uncommon breeder in, and a scarce though increasing migrant to, Britain.
Its most well-known summer population is in the New Forest (Hampshire) but it is also found in the Tyne Valley (Northumberland), Wareham Forest (Dorset), Swanton Novers Great Wood (Norfolk), the Neath Valleys (South Wales), the Clumber Park area (Nottinghamshire), near Wykeham Forest (North Yorkshire), Haldon Forest Park (Devon) and elsewhere. Mimicry The similarity in plumage between juvenile European honey buzzard and common buzzard may have arisen as a partial protection against predation by Eurasian goshawks. Although that formidable predator is capable of killing both species, it is likely to be more cautious about attacking the better protected Buteo species, with its stronger bill and talons. Similar Batesian mimicry is shown by the Asian Pernis species, which resemble the Spizaetus hawk-eagles. Behaviour It is sometimes seen soaring in thermals. When flying in wooded vegetation, honey buzzards usually fly quite low and perch in midcanopy, holding the body relatively horizontal with its tail drooping. The bird also hops from branch to branch, each time flapping its wings once, and so emitting a loud clap. The bird often appears restless with much ruffling of the wings and shifting around on its perch. The honey buzzard often inspects possible locations of food from its perch, cocking its head this way and that to get a good look at possible food locations. This behaviour is reminiscent of an inquisitive parrot. Breeding The honey buzzard breeds in woodland, and is inconspicuous except in the spring, when the mating display includes wing-clapping. Breeding males are fiercely territorial. The clutch typically consists of two eggs, less often one or three. Siblicide is rarely observed. Feeding It is a specialist feeder, living mainly on the larvae and nests of wasps and hornets, although it will take small mammals, reptiles, and birds. It is the only known predator of the Asian hornet. It spends large amounts of time on the forest floor excavating wasp nests. It is equipped with long toes and claws adapted to raking and digging, and scale-like feathering on its head, thought to be a defence against the stings of its prey. Honey buzzards are thought to have a chemical deterrent in their feathers that protects them from wasp attacks. In culture The honey buzzard was historically considered a winter delicacy in Europe, with 19th century texts stating it was frequently caught in winter and described as "fat and delicious eating".
Biology and health sciences
Accipitrimorphae
Animals
201968
https://en.wikipedia.org/wiki/Synapsida
Synapsida
Synapsida is a diverse group of tetrapod vertebrates that includes all mammals and their extinct relatives. It is one of the two major clades of the group Amniota, the other being the more diverse group Sauropsida (which includes all extant reptiles and birds). Unlike other amniotes, synapsids have a single temporal fenestra, an opening low in the skull roof behind each eye socket, leaving a bony arch beneath each; this accounts for the name "synapsid". The distinctive temporal fenestra developed about 318 million years ago during the Late Carboniferous period, when synapsids and sauropsids diverged, but was subsequently merged with the orbit in early mammals. The basal amniotes (reptiliomorphs) from which synapsids evolved were historically simply called "reptiles". Therefore, stem group synapsids were then described as mammal-like reptiles in classical systematics, and non-therapsid synapsids were also referred to as pelycosaurs or pelycosaur-grade synapsids. These paraphyletic terms have now fallen out of favor and are only used informally (if at all) in modern literature, as it is now known that all extant reptiles are more closely related to each other and birds than to synapsids, so the word "reptile" has been re-defined to mean only members of Sauropsida or even just an under-clade thereof. In a cladistic sense, synapsids are in fact a monophyletic sister taxon of sauropsids, rather than a part of the sauropsid lineage. Therefore, calling synapsids "mammal-like reptiles" is incorrect under the new definition of "reptile", so they are now referred to as stem mammals, proto-mammals, paramammals or pan-mammals. Most lineages of pelycosaur-grade synapsids were replaced by the more advanced therapsids, which evolved from sphenacodontoid pelycosaurs, at the end of the Early Permian during the so-called Olson's Extinction. Synapsids were the largest terrestrial vertebrates in the Permian period (299 to 251 mya), rivalled only by some large pareiasaurian parareptiles such as Scutosaurus. They were the dominant land predators of the late Paleozoic and early Mesozoic, with eupelycosaurs such as Dimetrodon, Titanophoneus and Inostrancevia being the apex predators during the Permian, and theriodonts such as Moschorhinus during the Early Triassic. Synapsid population and diversity were severely reduced by the Capitanian mass extinction event and the Permian–Triassic extinction event, and only two groups of therapsids, the dicynodonts and eutheriodonts (consisting of therocephalians and cynodonts) are known to have survived into the Triassic. These therapsids rebounded as disaster taxa during the early Mesozoic, with the dicynodont Lystrosaurus making up as much as 95% of all land species at one time, but declined again after the Smithian–Spathian boundary event with their dominant niches largely taken over by the rise of archosaurian sauropsids, first by the pseudosuchians and then by the pterosaurs and dinosaurs. The cynodont group Probainognathia, which includes the group Mammaliaformes, were the only synapsids to survive beyond the Triassic, and mammals are the only synapsid lineage that have survived past the Jurassic, having lived mostly nocturnally to avoid competition with dinosaurs. After the Cretaceous-Paleogene extinction wiped out all non-avian dinosaurs and pterosaurs, synapsids (as mammals) rose to dominance once again during the Cenozoic. Linnaean and cladistic classifications At the turn of the 20th century, synapsids were thought to be one of the four main subclasses of reptiles. 
However, this notion was disproved upon closer inspection of skeletal remains, as synapsids are differentiated from reptiles by their distinctive temporal openings. These openings in the skull bones allowed the attachment of larger jaw muscles, hence a more efficient bite. Synapsids were subsequently considered to be a later reptilian lineage that became mammals by gradually evolving increasingly mammalian features, hence the name "mammal-like reptiles" (also known as pelycosaurs). These became the traditional terms for all Paleozoic (early) synapsids. More recent studies have debunked this notion as well, and reptiles are now classified within Sauropsida (sauropsids), the sister group to synapsids, thus making synapsids their own taxonomic group. As a result, the paraphyletic terms "mammal-like reptile" and "pelycosaur" are seen as outdated and disfavored in technical literature, and the term stem mammal (or sometimes protomammal or paramammal) is used instead. Phylogenetically, it is now understood that synapsids comprise an independent branch of the tree of life. The monophyly of Synapsida is not in doubt, and expressions such as "Synapsida contains the mammals" and "synapsids gave rise to the mammals" both express the same phylogenetic hypothesis. This terminology reflects the modern cladistic approach to animal relationships, according to which the only valid groups are those that include all of the descendants of a common ancestor: these are known as monophyletic groups, or clades. Additionally, Reptilia (reptiles) has been revised into a monophyletic group and is considered entirely distinct from Synapsida, falling within Sauropsida, the sister group of Synapsida within Amniota. Primitive and advanced synapsids The synapsids are traditionally divided, for convenience, into therapsids, an advanced group of synapsids and the branch within which mammals evolved, and stem mammals (previously known as pelycosaurs), comprising the other six more primitive families of synapsids. Stem mammals were all rather lizard-like, with a sprawling gait and possibly horny scutes, while therapsids tended to have a more erect pose and possibly hair, at least in some forms. In traditional taxonomy, the Synapsida encompasses two distinct grades: the low-slung stem mammals gave rise to the more erect therapsids, which in turn gave rise to the mammals. In traditional vertebrate classification, the stem mammals and therapsids were both considered orders of the subclass Synapsida. Practical versus phylogenetic usage of "synapsid" and "therapsid" In phylogenetic nomenclature, the terms are used somewhat differently, as the daughter clades are included. Most papers published during the 21st century have treated "pelycosaur" as an informal grouping of the primitive members of the clade. Therapsida has remained in use as a clade containing both the traditional therapsid families and mammals. Although Synapsida and Therapsida include modern mammals, in practical usage, those two terms are used almost exclusively when referring to the more basal members that lie outside of Mammaliaformes. Characteristics Temporal openings Synapsids evolved a temporal fenestra behind each eye orbit on the lateral surface of the skull. This opening may have provided new attachment sites for jaw muscles. A similar development took place in the diapsids, which evolved two rather than one opening behind each eye.
Originally, the openings in the skull left the inner cranium covered only by the jaw muscles, but in higher therapsids and mammals, the sphenoid bone has expanded to close the opening. This has left the lower margin of the opening as an arch extending from the lower edges of the braincase. Teeth Synapsids are characterized by having differentiated teeth. These include the canines, molars, and incisors. The trend towards differentiation is found in some labyrinthodonts and early anapsid reptilians in the form of enlargement of the first teeth on the maxilla, forming a sort of protocanines. This trait was subsequently lost in the diapsid line, but developed further in the synapsids. Early synapsids could have two or even three enlarged "canines", but in the therapsids, the pattern had settled to one canine in each upper jaw half. The lower canines developed later. Jaw The jaw transition is a good classification tool, as most other fossilized features that make a chronological progression from a reptile-like to a mammalian condition follow the progression of the jaw transition. The mandible, or lower jaw, consists of a single, tooth-bearing bone in mammals (the dentary), whereas the lower jaw of modern and prehistoric reptiles consists of a conglomeration of smaller bones (including the dentary, articular, and others). As they evolved in synapsids, these jaw bones were reduced in size and either lost or, in the case of the articular, gradually moved into the ear, forming one of the middle ear bones: while modern mammals possess the malleus, incus and stapes, basal synapsids (like all other tetrapods) possess only a stapes. The malleus is derived from the articular (a lower jaw bone), while the incus is derived from the quadrate (a cranial bone). Mammalian jaw structures are also set apart by the dentary-squamosal jaw joint. In this form of jaw joint, the dentary forms a connection with a depression in the squamosal known as the glenoid cavity. In contrast, all other jawed vertebrates, including reptiles and nonmammalian synapsids, possess a jaw joint in which one of the smaller bones of the lower jaw, the articular, makes a connection with a bone of the cranium called the quadrate bone to form the articular-quadrate jaw joint. In forms transitional to mammals, the jaw joint is composed of a large, lower jaw bone (similar to the dentary found in mammals) that does not connect to the squamosal, but connects to the quadrate with a receding articular bone. Palate Over time, as synapsids became more mammalian and less 'reptilian', they began to develop a secondary palate, separating the mouth and nasal cavity. In early synapsids, a secondary palate began to form on the sides of the maxilla, still leaving the mouth and nostril connected. Eventually, the two sides of the palate began to curve together, forming a U shape instead of a C shape. The palate also began to extend back toward the throat, securing the entire mouth and creating a full palatine bone. The maxilla is also closed completely. In fossils of one of the first eutheriodonts, the beginnings of a palate are clearly visible. The later Thrinaxodon has a full and completely closed palate, forming a clear progression. 
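The jaw-to-ear transition described above lends itself to a compact summary. The following minimal Python sketch is purely illustrative (the dictionary names and the script itself are invented for this example); it records which bones form the jaw joint in mammals versus other jawed vertebrates, and the middle-ear fate of the ancestral joint bones named in the text.

```python
# Purely illustrative sketch of the jaw-joint and middle-ear homologies
# described in the passage above; names are invented for this example.

# Bones forming the jaw joint, as (lower-jaw bone, cranial bone).
JAW_JOINTS = {
    "mammals": ("dentary", "squamosal (glenoid cavity)"),
    "other jawed vertebrates, incl. non-mammalian synapsids": ("articular", "quadrate"),
}

# Fate of the ancestral jaw-joint bones in the mammalian middle ear.
MIDDLE_EAR_HOMOLOGIES = {
    "articular": "malleus",  # former lower-jaw bone
    "quadrate": "incus",     # former cranial bone
    "stapes": "stapes",      # already present in basal synapsids, retained
}

if __name__ == "__main__":
    for group, (lower, upper) in JAW_JOINTS.items():
        print(f"{group}: {lower}-{upper} joint")
    for ancestral, mammalian in MIDDLE_EAR_HOMOLOGIES.items():
        print(f"{ancestral} -> {mammalian}")
```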
Skin and fur In addition to the glandular skin covered in fur found in most modern mammals, modern and extinct synapsids possess a variety of modified skin coverings, including osteoderms (bony armor embedded in the skin), scutes (protective structures of the dermis often with a horny covering), hair or fur, and scale-like structures (often formed from modified hair, as in pangolins and some rodents). While the skin of reptiles is rather thin, that of mammals has a thick dermal layer. The ancestral skin type of synapsids has been subject to discussion. The type specimen of the oldest known synapsid, Asaphestera, preserved scales. Among the early synapsids, only two species of small varanopids have been found to possess osteoderms; fossilized rows of osteoderms indicate bony armor on the neck and back. However, some recent studies have cast doubt on the placement of Varanopidae in Synapsida, while others have countered and lean towards this traditional placement. Skin impressions indicate that some basal early synapsids possessed rectangular scutes on their undersides and tails. The pelycosaur scutes were probably nonoverlapping dermal structures with a horny overlay, like those found in modern crocodiles and turtles. These differed in structure from the scales of lizards and snakes, which are an epidermal feature (like mammalian hair or avian feathers). Recently, skin impressions from the genus Ascendonanus suggest that at least some varanopids developed scales similar to those of squamates. It is currently unknown exactly when mammalian characteristics such as body hair and mammary glands first appeared, as fossils only rarely provide direct evidence for soft tissues. An exceptionally well-preserved skull of Estemmenosuchus, a therapsid from the Upper Permian noted as being semi-aquatic, preserves smooth skin with what appear to be glandular depressions. The oldest known fossils showing unambiguous imprints of hair are the Callovian (late Middle Jurassic) Castorocauda and several contemporary haramiyidans, both non-mammalian mammaliaforms (see below, however). More primitive members of the Cynodontia are also hypothesized to have had fur or a fur-like covering based on their inferred warm-blooded metabolism. While more direct evidence of fur in early cynodonts has been proposed in the form of small pits on the snout possibly associated with whiskers, such pits are also found in some reptiles that lack whiskers. There is evidence that some other non-mammalian cynodonts more basal than Castorocauda, such as Morganucodon, had Harderian glands, which are associated with the grooming and maintenance of fur. The apparent absence of these glands in non-mammaliaforms may suggest that fur did not originate until that point in synapsid evolution. It is possible that fur and associated features of true warm-bloodedness did not appear until some synapsids became extremely small and nocturnal, necessitating a higher metabolism. The oldest examples of nocturnality in synapsids are believed to occur in species that lived more than 300 million years ago. However, Late Permian coprolites from Russia and possibly South Africa show that at least some synapsids already had pre-mammalian hair in this epoch; these are the oldest impressions of hair-like structures on synapsids.
Mammary glands Early synapsids, as far back as their known evolutionary debut in the Late Carboniferous period, may have laid parchment-shelled (leathery) eggs, which lacked a calcified layer, as most modern reptiles and monotremes do. This may also explain why there is no fossil evidence for synapsid eggs to date. Because they were vulnerable to desiccation, secretions from apocrine-like glands may have helped keep the eggs moist. According to Oftedal, early synapsids may have buried the eggs in moisture-laden soil, hydrating them through contact with moist skin, or may have carried them in a moist pouch, similar to that of monotremes (echidnas carry their eggs and offspring via a temporary pouch), though this would limit the mobility of the parent. The latter, rather than simply burying the eggs, may have been the primitive form of egg care in synapsids; the constraint on the parent's mobility would have been eased by "parking" the eggs in nests during foraging or other activities and hydrating them periodically. This would have allowed larger clutches than could fit inside a pouch (or pouches) at once, and large eggs, which would be cumbersome to carry in a pouch, would have been easier to care for. Oftedal's speculation is based on the observations that many species of anurans can carry eggs or tadpoles attached to the skin or embedded within cutaneous "pouches", and that most salamanders curl around their eggs to keep them moist, both groups also having glandular skin. The glands involved in this mechanism would later evolve into true mammary glands with multiple modes of secretion in association with hair follicles. Comparative analyses of the evolutionary origin of milk constituents support a scenario in which the secretions from these glands evolved into a complex, nutrient-rich milk long before true mammals arose (with some of the constituents possibly predating the split between the synapsid and sauropsid lines). Cynodonts were almost certainly able to produce such milk, which allowed a progressive decline of yolk mass and thus egg size, resulting in increasingly altricial hatchlings as milk became the primary source of nutrition; this is evidenced by the small body size, the presence of epipubic bones, and limited tooth replacement in advanced cynodonts, as well as in mammaliaforms. Patagia Aerial locomotion first began in non-mammalian haramiyidan cynodonts, with Arboroharamiya, Xianshou, Maiopatagium and Vilevolodon bearing exquisitely preserved, fur-covered wing membranes that stretch across the limbs and tail. Their fingers are elongated, similar to those of bats and colugos, and likely served similar roles both as wing supports and for hanging on tree branches. Within true mammals, aerial locomotion first occurs in volaticotherian eutriconodonts. A fossil of Volaticotherium has an exquisitely preserved and very extensive furry patagium with delicate wrinkles, "sandwiching" the poorly preserved hands and feet and extending to the base of the tail. Argentoconodon, a close relative, shares a similar femur adapted for flight stresses, indicating a similar lifestyle. Therian mammals achieved powered flight and gliding only long after these early aeronauts became extinct, with the earliest-known gliding metatherians appearing in the Paleocene and the earliest bats in the Eocene. Metabolism Endothermy has recently been found to have developed as early as Ophiacodon in the Late Carboniferous.
The presence of fibrolamellar bone, a specialised type of bone tissue that can grow quickly while maintaining a stable structure, shows that Ophiacodon would have used its high internal body temperature to fuel fast growth comparable to that of modern endotherms. Evolutionary history Over the course of synapsid evolution, progenitor taxa at the start of adaptive radiations have tended to be derived carnivores. Synapsid adaptive radiations have generally occurred after extinction events that depleted the biosphere and left vacant niches open to be filled by newly evolved taxa. In non-mammaliaform synapsids, those taxa that gave rise to rapidly diversifying lineages have been both small and large in body size, although after the Late Triassic, progenitors of new synapsid lineages have generally been small, unspecialised generalists. The earliest known synapsid, Asaphestera, coexisted with the earliest known sauropsid, Hylonomus, which lived during the Bashkirian age of the Late Carboniferous. It was one of many types of primitive synapsids that are now informally grouped together as stem mammals or sometimes as protomammals (previously known as pelycosaurs). The early synapsids spread and diversified, becoming the largest terrestrial animals in the latest Carboniferous and Early Permian periods, ranging up to in length. They were sprawling, bulky, possibly cold-blooded, and had small brains. Some, such as Dimetrodon, had large sails that might have helped raise their body temperature. A few relict groups lasted into the later Permian but, by the middle of the Late Permian, all had either died off or evolved into their successors, the therapsids. The therapsids, a more advanced group of synapsids, appeared during the Middle Permian and included the largest terrestrial animals in the Middle and Late Permian. They included herbivores and carnivores, ranging from small animals the size of a rat (e.g. Robertia) to large, bulky herbivores a ton or more in weight (e.g. Moschops). After flourishing for many millions of years, these successful animals were all but wiped out by the Permian–Triassic mass extinction about 250 mya, the largest known extinction in Earth's history, possibly related to the Siberian Traps volcanic event. Only a few therapsids went on to be successful in the new Early Triassic landscape; they include Lystrosaurus and Cynognathus, the latter of which appeared later in the Early Triassic. However, they were accompanied by the early archosaurs (soon to give rise to the dinosaurs). Some of these archosaurs, such as Euparkeria, were small and lightly built, while others, such as Erythrosuchus, were as big as or bigger than the largest therapsids. After the Permian extinction, no more than three synapsid clades survived. The first comprised the therocephalians, which only lasted the first 20 million years of the Triassic period. The second comprised specialised, beaked herbivores known as dicynodonts (such as the Kannemeyeriidae), which contained some members that reached large size (up to a tonne or more). And finally there were the increasingly mammal-like carnivorous, herbivorous, and insectivorous cynodonts, including the eucynodonts from the Olenekian age, an early representative of which was Cynognathus. Unlike the dicynodonts, which were large, the cynodonts became progressively smaller and more mammal-like as the Triassic progressed, though some forms like Trucidocynodon remained large.
The first mammaliaforms evolved from the cynodonts during the early Norian age of the Late Triassic, about 225 mya. During the evolutionary succession from early therapsid to cynodont to eucynodont to mammal, the main lower jaw bone, the dentary, replaced the adjacent bones. Thus, the lower jaw gradually became just one large bone, with several of the smaller jaw bones migrating into the middle ear and allowing sophisticated hearing. Whether through climate change, vegetation change, ecological competition, or a combination of factors, most of the remaining large cynodonts (belonging to the Traversodontidae) and dicynodonts (of the family Kannemeyeriidae) had disappeared by the Rhaetian age, even before the Triassic–Jurassic extinction event that killed off most of the large non-dinosaurian archosaurs. The remaining Mesozoic synapsids were small, ranging from the size of a shrew to the badger-like mammal Repenomamus. During the Jurassic and Cretaceous, the remaining non-mammalian cynodonts were small, such as Tritylodon. No cynodont grew larger than a cat. Most Jurassic and Cretaceous cynodonts were herbivorous, though some were carnivorous. The family Tritheledontidae, which first appeared near the end of the Triassic, was carnivorous and persisted well into the Middle Jurassic. The other surviving family, Tritylodontidae, first appeared at the same time as the tritheledonts, but was herbivorous. This group became extinct at the end of the Early Cretaceous epoch. Dicynodonts are generally thought to have become extinct near the end of the Triassic period, but there was evidence that this group survived, in the form of six fragments of fossil bone found in Cretaceous rocks of Queensland, Australia. If true, this would mean there is a significant ghost lineage of dicynodonts in Gondwana. However, these fossils were re-described in 2019 as being Pleistocene in age, and possibly belonging to a diprotodontid marsupial. Today, the roughly 5,500 species of living synapsids, known as the mammals, include both aquatic (cetaceans) and flying (bats) species, and the largest animal ever known to have existed (the blue whale). Humans are synapsids, as well. Most mammals are viviparous and give birth to live young rather than laying eggs, the exception being the monotremes. Triassic and Jurassic ancestors of living mammals, along with their close relatives, had high metabolic rates. This meant consuming food (generally thought to be insects) in much greater quantity. To facilitate rapid digestion, these synapsids evolved mastication (chewing) and specialized teeth that aided chewing. Limbs also evolved to move under the body instead of to the side, allowing these animals to breathe more efficiently during locomotion. This helped make it possible to support their higher metabolic demands. Relationships The most commonly accepted phylogeny of synapsids shows a long stem lineage leading to Mammalia, with successively more basal clades such as Theriodontia, Therapsida and Sphenacodontia branching off along it (a simplified sketch is given below). Most uncertainty in the phylogeny of synapsids lies among the earliest members of the group, including forms traditionally placed within Pelycosauria. In one of the earliest phylogenetic analyses, Brinkman & Eberth (1983) placed the family Varanopidae with Caseasauria as the most basal offshoot of the synapsid lineage. Reisz (1986) removed Varanopidae from Caseasauria, placing it in a more derived position on the stem.
While most analyses find Caseasauria to be the most basal synapsid clade, Benson's analysis (2012) placed a clade containing Ophiacodontidae and Varanopidae as the most basal synapsids, with Caseasauria occupying a more derived position. Benson attributed this revised phylogeny to the inclusion of postcranial characteristics, or features of the skeleton other than the skull, in his analysis. When only cranial or skull features were included, Caseasauria remained the most basal synapsid clade. However, more recent examination of the phylogeny of basal synapsids, incorporating newly described basal caseids and eothyridids, returned Caseasauria to its position as the sister group to all other synapsids. Brocklehurst et al. (2016) demonstrated that many of the postcranial characters used by Benson (2012) to unite Caseasauria with Sphenacodontidae and Edaphosauridae were absent in the newly discovered postcranial material of eothyridids, and were therefore acquired convergently.
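Since the cladograms referenced in this section are not reproduced here, the following Python sketch encodes the conventional topology (Caseasauria as the sister group to all other synapsids) as nested tuples. It is a simplified illustration using only clades named in this article; the data structure, the unnamed internal nodes, and the helper function are choices made for this example, not part of any published analysis.

```python
# Simplified, illustrative topology only: many clades are omitted, and
# internal nodes without a name in the text are labelled "unnamed clade".

Clade = tuple  # (name, list of child clades); a leaf has an empty child list

SYNAPSIDA: Clade = (
    "Synapsida", [
        ("Caseasauria", []),
        ("unnamed clade", [
            ("Varanopidae", []),
            ("unnamed clade", [
                ("Ophiacodontidae", []),
                ("unnamed clade", [
                    ("Edaphosauridae", []),
                    ("Sphenacodontia", [
                        ("Sphenacodontidae", []),
                        ("Therapsida", [
                            ("Theriodontia", [
                                ("Cynodontia", [
                                    ("Mammaliaformes", [
                                        ("Mammalia", []),
                                    ]),
                                ]),
                            ]),
                        ]),
                    ]),
                ]),
            ]),
        ]),
    ],
)

def print_cladogram(node: Clade, depth: int = 0) -> None:
    """Print the nested topology, indenting each level of the tree."""
    name, children = node
    print("  " * depth + name)
    for child in children:
        print_cladogram(child, depth + 1)

if __name__ == "__main__":
    print_cladogram(SYNAPSIDA)
```

Under Benson's (2012) alternative discussed above, the Ophiacodontidae + Varanopidae clade would instead sit at the base of the tree, with Caseasauria in a more derived position.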
Biology and health sciences
Dinosaurs and prehistoric reptiles
null
201983
https://en.wikipedia.org/wiki/Coronavirus
Coronavirus
Coronaviruses are a group of related RNA viruses that cause diseases in mammals and birds. In humans and birds, they cause respiratory tract infections that can range from mild to lethal. Mild illnesses in humans include some cases of the common cold (which is also caused by other viruses, predominantly rhinoviruses), while more lethal varieties can cause SARS, MERS and COVID-19. In cows and pigs they cause diarrhea, while in mice they cause hepatitis and encephalomyelitis. Coronaviruses constitute the subfamily Orthocoronavirinae, in the family Coronaviridae, order Nidovirales and realm Riboviria. They are enveloped viruses with a positive-sense single-stranded RNA genome and a nucleocapsid of helical symmetry. The genome size of coronaviruses ranges from approximately 26 to 32 kilobases, one of the largest among RNA viruses. They have characteristic club-shaped spikes that project from their surface, which in electron micrographs create an image reminiscent of the stellar corona, from which their name derives. Etymology The name "coronavirus" is derived from Latin corona, meaning "crown" or "wreath", itself a borrowing from Greek korṓnē, "garland, wreath". The name was coined by June Almeida and David Tyrrell who first observed and studied human coronaviruses. The word was first used in print in 1968 by an informal group of virologists in the journal Nature to designate the new family of viruses. The name refers to the characteristic appearance of virions (the infective form of the virus) by electron microscopy, which have a fringe of large, bulbous surface projections creating an image reminiscent of the solar corona or halo. This morphology is created by the viral spike peplomers, which are proteins on the surface of the virus. The scientific name Coronavirus was accepted as a genus name by the International Committee for the Nomenclature of Viruses (later renamed International Committee on Taxonomy of Viruses) in 1971. As the number of new species increased, the genus was split into four genera, namely Alphacoronavirus, Betacoronavirus, Deltacoronavirus, and Gammacoronavirus in 2009. The common name coronavirus is used to refer to any member of the subfamily Orthocoronavirinae. As of 2020, 45 species are officially recognised. History The earliest reports of a coronavirus infection in animals occurred in the late 1920s, when an acute respiratory infection of domesticated chickens emerged in North America. Arthur Schalk and M.C. Hawn in 1931 made the first detailed report which described a new respiratory infection of chickens in North Dakota. The infection of new-born chicks was characterized by gasping and listlessness with high mortality rates of 40–90%. Leland David Bushnell and Carl Alfred Brandly isolated the virus that caused the infection in 1933. The virus was then known as infectious bronchitis virus (IBV). Charles D. Hudson and Fred Robert Beaudette cultivated the virus for the first time in 1937. The specimen came to be known as the Beaudette strain. In the late 1940s, two more animal coronaviruses, JHM that causes brain disease (murine encephalitis) and mouse hepatitis virus (MHV) that causes hepatitis in mice were discovered. It was not realized at the time that these three different viruses were related. Human coronaviruses were discovered in the 1960s using two different methods in the United Kingdom and the United States. E.C. 
Kendall, Malcolm Bynoe, and David Tyrrell, working at the Common Cold Unit of the British Medical Research Council, collected a unique common cold virus designated B814 in 1961. The virus could not be cultivated using standard techniques which had successfully cultivated rhinoviruses, adenoviruses and other known common cold viruses. In 1965, Tyrrell and Bynoe successfully cultivated the novel virus by serially passing it through organ culture of human embryonic trachea. The new cultivating method was introduced to the lab by Bertil Hoorn. The isolated virus, when intranasally inoculated into volunteers, caused a cold and was inactivated by ether, which indicated it had a lipid envelope. Dorothy Hamre and John Procknow at the University of Chicago isolated a novel cold virus from medical students in 1962. They isolated and grew the virus in kidney tissue culture, designating it 229E. The novel virus caused a cold in volunteers and, like B814, was inactivated by ether. Scottish virologist June Almeida at St Thomas' Hospital in London, collaborating with Tyrrell, compared the structures of IBV, B814 and 229E in 1967. Using electron microscopy, the three viruses were shown to be morphologically related by their general shape and distinctive club-like spikes. A research group at the National Institutes of Health the same year was able to isolate another member of this new group of viruses using organ culture and named one of the samples OC43 (OC for organ culture). Like B814, 229E, and IBV, the novel cold virus OC43 had distinctive club-like spikes when observed with the electron microscope. The IBV-like novel cold viruses were soon shown to be also morphologically related to the mouse hepatitis virus. This new group of viruses was named the coronaviruses after their distinctive morphological appearance. Human coronavirus 229E and human coronavirus OC43 continued to be studied in subsequent decades. The coronavirus strain B814 was lost. It is not known which present human coronavirus it was. Other human coronaviruses have since been identified, including SARS-CoV in 2003, HCoV NL63 in 2003, HCoV HKU1 in 2004, MERS-CoV in 2012, and SARS-CoV-2 in 2019. There have also been a large number of animal coronaviruses identified since the 1960s. Microbiology Structure Coronaviruses are large, roughly spherical particles with unique surface projections. Their size is highly variable, with average diameters of 80 to 120 nm. Extreme sizes are known from 50 to 200 nm in diameter. The total molecular mass is on average 40,000 kDa. They are enclosed in an envelope embedded with a number of protein molecules. The lipid bilayer envelope, membrane proteins, and nucleocapsid protect the virus when it is outside the host cell. The viral envelope is made up of a lipid bilayer in which the membrane (M), envelope (E) and spike (S) structural proteins are anchored. The molar ratio of E:S:M in the lipid bilayer is approximately 1:20:300. The E and M proteins are the structural proteins that, combined with the lipid bilayer, shape the viral envelope and maintain its size. S proteins are needed for interaction with the host cells. However, human coronavirus NL63 is peculiar in that its M protein, not its S protein, has the binding site for the host cell. The diameter of the envelope is 85 nm. The envelope of the virus in electron micrographs appears as a distinct pair of electron-dense shells (shells that are relatively opaque to the electron beam used to scan the virus particle).
The M protein is the main structural protein of the envelope that provides the overall shape and is a type III membrane protein. It consists of 218 to 263 amino acid residues and forms a layer 7.8 nm thick. It has three domains, a short N-terminal ectodomain, a triple-spanning transmembrane domain, and a C-terminal endodomain. The C-terminal domain forms a matrix-like lattice that adds to the extra-thickness of the envelope. Different species can have either N- or O-linked glycans in their protein amino-terminal domain. The M protein is crucial during the assembly, budding, envelope formation, and pathogenesis stages of the virus lifecycle. The E proteins are minor structural proteins and highly variable in different species. There are only about 20 copies of the E protein molecule in a coronavirus particle. They are 8.4 to 12 kDa in size and are composed of 76 to 109 amino acids. They are integral proteins (i.e. embedded in the lipid layer) and have two domains, namely a transmembrane domain and an extramembrane C-terminal domain. They are almost fully α-helical, with a single α-helical transmembrane domain, and form pentameric (five-molecular) ion channels in the lipid bilayer. They are responsible for virion assembly, intracellular trafficking and morphogenesis (budding). The spikes are the most distinguishing feature of coronaviruses and are responsible for the corona- or halo-like surface. On average a coronavirus particle has 74 surface spikes. Each spike is about 20 nm long and is composed of a trimer of the S protein. The S protein is in turn composed of an S1 and S2 subunit. The homotrimeric S protein is a class I fusion protein which mediates the receptor binding and membrane fusion between the virus and host cell. The S1 subunit forms the head of the spike and has the receptor-binding domain (RBD). The S2 subunit forms the stem which anchors the spike in the viral envelope and on protease activation enables fusion. The two subunits remain noncovalently linked as they are exposed on the viral surface until they attach to the host cell membrane. In a functionally active state, three S1 subunits are attached to two S2 subunits. The subunit complex is split into individual subunits when the virus binds and fuses with the host cell under the action of proteases such as the cathepsin family and transmembrane protease serine 2 (TMPRSS2) of the host cell. S1 proteins are the most critical components in terms of infection. They are also the most variable components as they are responsible for host cell specificity. They possess two major domains named the N-terminal domain (S1-NTD) and C-terminal domain (S1-CTD), both of which serve as receptor-binding domains. The NTDs recognize and bind sugars on the surface of the host cell. An exception is the MHV NTD, which binds to the protein receptor carcinoembryonic antigen-related cell adhesion molecule 1 (CEACAM1). S1-CTDs are responsible for recognizing different protein receptors such as angiotensin-converting enzyme 2 (ACE2), aminopeptidase N (APN), and dipeptidyl peptidase 4 (DPP4). A subset of coronaviruses (specifically the members of betacoronavirus subgroup A) also has a shorter spike-like surface protein called hemagglutinin esterase (HE). The HE proteins occur as homodimers composed of about 400 amino acid residues and are 40 to 50 kDa in size. They appear as tiny surface projections 5 to 7 nm long embedded in between the spikes. They help in the attachment to and detachment from the host cell.
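As a recap of the receptor usage mentioned in this article, the sketch below maps each virus named here to the host-cell receptor the text associates with it (the SARS-CoV and TGEV pairings appear later, under Transmission). It is illustrative only: the dictionary and function names are invented for this example, it ignores which S1 subdomain is involved (the MHV pairing goes through the N-terminal domain), and HCoV-NL63 is deliberately absent because, as noted above, its host-binding site sits on the M protein rather than the spike.

```python
# Illustrative lookup of virus-to-receptor pairings as stated in this article;
# names are invented for the sketch and do not come from any software library.

SPIKE_RECEPTORS = {
    "SARS-CoV": "ACE2 (angiotensin-converting enzyme 2)",
    "TGEV": "APN (aminopeptidase N)",
    "MHV": "CEACAM1 (carcinoembryonic antigen-related cell adhesion molecule 1)",
}

def receptor_for(virus: str) -> str:
    """Return the receptor listed for a virus, or note that it is not listed."""
    return SPIKE_RECEPTORS.get(virus, "not listed in this summary")

if __name__ == "__main__":
    for v in ("SARS-CoV", "TGEV", "MHV", "HCoV-NL63"):
        print(f"{v}: {receptor_for(v)}")
```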
Inside the envelope, there is the nucleocapsid, which is formed from multiple copies of the nucleocapsid (N) protein bound to the positive-sense single-stranded RNA genome in a continuous beads-on-a-string type conformation. The N protein is a phosphoprotein of 43 to 50 kDa in size, and is divided into three conserved domains. The majority of the protein is made up of domains 1 and 2, which are typically rich in arginines and lysines. Domain 3 has a short carboxy terminal end and has a net negative charge due to an excess of acidic over basic amino acid residues. Genome Coronaviruses contain a positive-sense, single-stranded RNA genome. The genome size for coronaviruses ranges from 26.4 to 31.7 kilobases, one of the largest among RNA viruses. The genome has a 5′ methylated cap and a 3′ polyadenylated tail. The genome organization for a coronavirus is 5′-leader-UTR-replicase (ORF1ab)-spike (S)-envelope (E)-membrane (M)-nucleocapsid (N)-3′UTR-poly (A) tail. The open reading frames 1a and 1b, which occupy the first two-thirds of the genome, encode the replicase polyprotein (pp1ab). The replicase polyprotein self-cleaves to form 16 nonstructural proteins (nsp1–nsp16). The later reading frames encode the four major structural proteins: spike, envelope, membrane, and nucleocapsid. Interspersed between these reading frames are the reading frames for the accessory proteins. The number of accessory proteins and their functions vary depending on the specific coronavirus. Replication cycle Cell entry Infection begins when the viral spike protein attaches to its complementary host cell receptor. After attachment, a protease of the host cell cleaves and activates the receptor-attached spike protein. Depending on the host cell protease available, cleavage and activation allows the virus to enter the host cell by endocytosis or direct fusion of the viral envelope with the host membrane. Coronaviruses can enter cells either by fusing their lipid envelope with the cell membrane at the cell surface or by internalization via endocytosis. Genome translation On entry into the host cell, the virus particle is uncoated, and its genome enters the cell cytoplasm. The coronavirus RNA genome has a 5′ methylated cap and a 3′ polyadenylated tail, which allows it to act like a messenger RNA and be directly translated by the host cell's ribosomes. The host ribosomes translate the initial overlapping open reading frames ORF1a and ORF1b of the virus genome into two large overlapping polyproteins, pp1a and pp1ab. The larger polyprotein pp1ab is a result of a -1 ribosomal frameshift caused by a slippery sequence (UUUAAAC) and a downstream RNA pseudoknot at the end of open reading frame ORF1a. The ribosomal frameshift allows for the continuous translation of ORF1a followed by ORF1b (a schematic sketch of the genome layout and this frameshift signal is given below). The polyproteins have their own proteases, PLpro (nsp3) and 3CLpro (nsp5), which cleave the polyproteins at different specific sites. The cleavage of polyprotein pp1ab yields 16 nonstructural proteins (nsp1 to nsp16). Product proteins include various replication proteins such as RNA-dependent RNA polymerase (nsp12), RNA helicase (nsp13), and exoribonuclease (nsp14). Replicase-transcriptase A number of the nonstructural proteins coalesce to form a multi-protein replicase-transcriptase complex (RTC). The main replicase-transcriptase protein is the RNA-dependent RNA polymerase (RdRp). It is directly involved in the replication and transcription of RNA from an RNA strand.
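The genome layout and the −1 frameshift signal described in the Genome and Genome translation passages above can be sketched as follows. This is a toy illustration, not a bioinformatics tool: the gene order is copied from the text, and the helper simply scans a caller-supplied RNA string for the slippery heptamer (the fragment at the bottom is made up purely to exercise the scan).

```python
# Toy sketch of the coronavirus genome organization and the slippery sequence
# involved in the -1 ribosomal frameshift, as described above. No real genome
# data are included; the test fragment is invented.

# Canonical gene order, 5' to 3', following the text.
GENOME_ORGANIZATION = [
    "5' leader", "UTR", "ORF1a", "ORF1b",          # replicase polyproteins pp1a / pp1ab
    "S (spike)", "E (envelope)", "M (membrane)", "N (nucleocapsid)",
    "3' UTR", "poly(A) tail",
]

SLIPPERY_SEQUENCE = "UUUAAAC"  # heptamer at the ORF1a/ORF1b junction

def slippery_sites(rna: str) -> list:
    """Return 0-based positions where the slippery heptamer occurs in an RNA string."""
    rna = rna.upper().replace("T", "U")  # tolerate DNA-style input
    width = len(SLIPPERY_SEQUENCE)
    return [i for i in range(len(rna) - width + 1)
            if rna[i:i + width] == SLIPPERY_SEQUENCE]

if __name__ == "__main__":
    print(" - ".join(GENOME_ORGANIZATION))
    print(slippery_sites("ggcuuuaaacggg"))  # invented fragment; prints [3]
```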
The other nonstructural proteins in the complex assist in the replication and transcription process. The exoribonuclease nonstructural protein, for instance, provides extra fidelity to replication by providing a proofreading function which the RNA-dependent RNA polymerase lacks. Replication – One of the main functions of the complex is to replicate the viral genome. RdRp directly mediates the synthesis of negative-sense genomic RNA from the positive-sense genomic RNA. This is followed by the replication of positive-sense genomic RNA from the negative-sense genomic RNA. Transcription – The other important function of the complex is to transcribe the viral genome. RdRp directly mediates the synthesis of negative-sense subgenomic RNA molecules from the positive-sense genomic RNA. This process is followed by the transcription of these negative-sense subgenomic RNA molecules to their corresponding positive-sense mRNAs. The subgenomic mRNAs form a "nested set" with a common 5'-head and partially duplicated 3'-ends. Recombination – The replicase-transcriptase complex is also capable of genetic recombination when at least two viral genomes are present in the same infected cell. RNA recombination appears to be a major driving force in determining genetic variability within a coronavirus species, the capability of a coronavirus species to jump from one host to another and, infrequently, in determining the emergence of novel coronaviruses. The exact mechanism of recombination in coronaviruses is unclear, but likely involves template switching during genome replication. Assembly and release The replicated positive-sense genomic RNA becomes the genome of the progeny viruses. The mRNAs are gene transcripts of the last third of the virus genome after the initial overlapping reading frame. These mRNAs are translated by the host's ribosomes into the structural proteins and many accessory proteins. RNA translation occurs inside the endoplasmic reticulum. The viral structural proteins S, E, and M move along the secretory pathway into the Golgi intermediate compartment. There, the M proteins direct most protein-protein interactions required for the assembly of viruses following their binding to the nucleocapsid. Progeny viruses are then released from the host cell by exocytosis through secretory vesicles. Once released, the viruses can infect other host cells. Transmission Infected carriers are able to shed viruses into the environment. The interaction of the coronavirus spike protein with its complementary cell receptor is central in determining the tissue tropism, infectivity, and species range of the released virus. Coronaviruses mainly target epithelial cells. They are transmitted from one host to another host, depending on the coronavirus species, by either an aerosol, fomite, or fecal-oral route. Human coronaviruses infect the epithelial cells of the respiratory tract, while animal coronaviruses generally infect the epithelial cells of the digestive tract. SARS coronavirus, for example, infects the human epithelial cells of the lungs via an aerosol route by binding to the angiotensin-converting enzyme 2 (ACE2) receptor. Transmissible gastroenteritis coronavirus (TGEV) infects the pig epithelial cells of the digestive tract via a fecal–oral route by binding to the alanine aminopeptidase (APN) receptor. Classification Coronaviruses form the subfamily Orthocoronavirinae, which is one of two subfamilies in the family Coronaviridae, order Nidovirales, and realm Riboviria.
They are divided into four genera: Alphacoronavirus, Betacoronavirus, Gammacoronavirus and Deltacoronavirus. Alphacoronaviruses and betacoronaviruses infect mammals, while gammacoronaviruses and deltacoronaviruses primarily infect birds.
Genus Alphacoronavirus; species: Alphacoronavirus 1 (TGEV, Feline coronavirus, Canine coronavirus), Human coronavirus 229E, Human coronavirus NL63, Miniopterus bat coronavirus 1, Miniopterus bat coronavirus HKU8, Porcine epidemic diarrhea virus, Rhinolophus bat coronavirus HKU2, Scotophilus bat coronavirus 512.
Genus Betacoronavirus; species: Betacoronavirus 1 (Bovine coronavirus, Human coronavirus OC43), Hedgehog coronavirus 1, Human coronavirus HKU1, Middle East respiratory syndrome-related coronavirus, Murine coronavirus, Pipistrellus bat coronavirus HKU5, Rousettus bat coronavirus HKU9, Severe acute respiratory syndrome–related coronavirus (SARS-CoV-1, SARS-CoV-2), Tylonycteris bat coronavirus HKU4.
Genus Gammacoronavirus; species: Avian coronavirus, Beluga whale coronavirus SW1.
Genus Deltacoronavirus; species: Bulbul coronavirus HKU11, Porcine coronavirus HKU15.
Origin The most recent common ancestor (MRCA) of all coronaviruses is estimated to have existed as recently as 8000 BCE, although some models place the common ancestor as far back as 55 million years or more, implying long-term coevolution with bat and avian species. The most recent common ancestor of the alphacoronavirus line has been placed at about 2400 BCE, of the betacoronavirus line at 3300 BCE, of the gammacoronavirus line at 2800 BCE, and of the deltacoronavirus line at about 3000 BCE. Bats and birds, as warm-blooded flying vertebrates, are an ideal natural reservoir for the coronavirus gene pool (with bats the reservoir for alphacoronaviruses and betacoronaviruses, and birds the reservoir for gammacoronaviruses and deltacoronaviruses). The large number and global range of bat and avian species that host viruses have enabled extensive evolution and dissemination of coronaviruses. Many human coronaviruses have their origin in bats. The human coronavirus NL63 shared a common ancestor with a bat coronavirus (ARCoV.2) between 1190 and 1449 CE. The human coronavirus 229E shared a common ancestor with a bat coronavirus (GhanaGrp1 Bt CoV) between 1686 and 1800 CE. More recently, alpaca coronavirus and human coronavirus 229E diverged sometime before 1960. MERS-CoV emerged in humans from bats through the intermediate host of camels. MERS-CoV, although related to several bat coronavirus species, appears to have diverged from these several centuries ago. The most closely related bat coronavirus and SARS-CoV diverged in 1986. The ancestors of SARS-CoV first infected leaf-nosed bats of the family Hipposideridae; subsequently, they spread to horseshoe bats of the family Rhinolophidae, then to Asian palm civets, and finally to humans. Unlike other betacoronaviruses, bovine coronavirus of the species Betacoronavirus 1 and subgenus Embecovirus is thought to have originated in rodents and not in bats. In the 1790s, equine coronavirus diverged from the bovine coronavirus after a cross-species jump. Later in the 1890s, human coronavirus OC43 diverged from bovine coronavirus after another cross-species spillover event. It is speculated that the flu pandemic of 1890 may have been caused by this spillover event, and not by the influenza virus, because of the related timing, neurological symptoms, and unknown causative agent of the pandemic.
Besides causing respiratory infections, human coronavirus OC43 is also suspected of playing a role in neurological diseases. In the 1950s, the human coronavirus OC43 began to diverge into its present genotypes. Phylogenetically, mouse hepatitis virus (Murine coronavirus), which infects the mouse's liver and central nervous system, is related to human coronavirus OC43 and bovine coronavirus. Human coronavirus HKU1, like the aforementioned viruses, also has its origins in rodents. Infection in humans Coronaviruses vary significantly in the risk they pose. Some can kill more than 30% of those infected, such as MERS-CoV, and some are relatively harmless, such as the common cold. Coronaviruses can cause colds with major symptoms, such as fever, and a sore throat from swollen adenoids. Coronaviruses can cause pneumonia (either direct viral pneumonia or secondary bacterial pneumonia) and bronchitis (either direct viral bronchitis or secondary bacterial bronchitis). The human coronavirus discovered in 2003, SARS-CoV, which causes severe acute respiratory syndrome (SARS), has a unique pathogenesis because it causes both upper and lower respiratory tract infections. Six species of human coronaviruses are known, with one species subdivided into two different strains, making seven strains of human coronaviruses altogether. Four human coronaviruses produce symptoms that are generally mild, even though it is contended they might have been more aggressive in the past:
Human coronavirus OC43 (HCoV-OC43), β-CoV
Human coronavirus HKU1 (HCoV-HKU1), β-CoV
Human coronavirus 229E (HCoV-229E), α-CoV
Human coronavirus NL63 (HCoV-NL63), α-CoV
Three human coronaviruses produce potentially severe symptoms:
Severe acute respiratory syndrome coronavirus (SARS-CoV), β-CoV (identified in 2003)
Middle East respiratory syndrome-related coronavirus (MERS-CoV), β-CoV (identified in 2012)
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), β-CoV (identified in 2019)
These cause the diseases commonly called SARS, MERS, and COVID-19 respectively. Common cold Although the common cold is usually caused by rhinoviruses, in about 15% of cases the cause is a coronavirus. The human coronaviruses HCoV-OC43, HCoV-HKU1, HCoV-229E, and HCoV-NL63 continually circulate in the human population in adults and children worldwide and produce the generally mild symptoms of the common cold. The four mild coronaviruses have a seasonal incidence occurring in the winter months in temperate climates. There is no preponderance in any season in tropical climates. Severe acute respiratory syndrome (SARS) In 2003, following the outbreak of severe acute respiratory syndrome (SARS) which had begun the prior year in Asia, and secondary cases elsewhere in the world, the World Health Organization (WHO) issued a press release stating that a novel coronavirus identified by several laboratories was the causative agent for SARS. The virus was officially named the SARS coronavirus (SARS-CoV). More than 8,000 people from 29 countries and territories were infected, and at least 774 died. Middle East respiratory syndrome (MERS) In September 2012, a new type of coronavirus was identified, initially called Novel Coronavirus 2012, and now officially named Middle East respiratory syndrome coronavirus (MERS-CoV). The World Health Organization issued a global alert soon after. The WHO update on 28 September 2012 said the virus did not seem to pass easily from person to person.
However, on 12 May 2013, a case of human-to-human transmission in France was confirmed by the French Ministry of Social Affairs and Health. In addition, cases of human-to-human transmission were reported by the Ministry of Health in Tunisia. Two confirmed cases involved people who seemed to have caught the disease from their late father, who became ill after a visit to Qatar and Saudi Arabia. Despite this, it appears the virus had trouble spreading from human to human, as most individuals who are infected do not transmit the virus. By 30 October 2013, there were 124 cases and 52 deaths in Saudi Arabia. After the Dutch Erasmus Medical Centre sequenced the virus, the virus was given a new name, Human Coronavirus–Erasmus Medical Centre (HCoV-EMC). The final name for the virus is Middle East respiratory syndrome coronavirus (MERS-CoV). The only U.S. cases (both survived) were recorded in May 2014. In May 2015, an outbreak of MERS-CoV occurred in the Republic of Korea, when a man who had traveled to the Middle East, visited four hospitals in the Seoul area to treat his illness. This caused one of the largest outbreaks of MERS-CoV outside the Middle East. As of December 2019, 2,468 cases of MERS-CoV infection had been confirmed by laboratory tests, 851 of which were fatal, a mortality rate of approximately 34.5%. Coronavirus disease 2019 (COVID-19) In December 2019, a pneumonia outbreak was reported in Wuhan, China. On 31 December 2019, the outbreak was traced to a novel strain of coronavirus, which was given the interim name 2019-nCoV by the World Health Organization, later renamed SARS-CoV-2 by the International Committee on Taxonomy of Viruses. As of , there were at least confirmed deaths and more than confirmed cases in the COVID-19 pandemic. The Wuhan strain has been identified as a new strain of Betacoronavirus from group 2B with approximately 70% genetic similarity to the SARS-CoV. The virus has a 96% similarity to a bat coronavirus, so it is widely suspected to originate from bats as well. Coronavirus HuPn-2018 During a surveillance study of archived samples of Malaysian viral pneumonia patients, virologists identified a strain of canine coronavirus which has infected humans in 2018. Infection in animals Coronaviruses have been recognized as causing pathological conditions in veterinary medicine since the 1930s. They infect a range of animals including swine, cattle, horses, camels, cats, dogs, rodents, birds and bats. The majority of animal related coronaviruses infect the intestinal tract and are transmitted by a fecal-oral route. Significant research efforts have been focused on elucidating the viral pathogenesis of these animal coronaviruses, especially by virologists interested in veterinary and zoonotic diseases. Farm animals Coronaviruses infect domesticated birds. Infectious bronchitis virus (IBV), a type of coronavirus, causes avian infectious bronchitis. The virus is of concern to the poultry industry because of the high mortality from infection, its rapid spread, and its effect on production. The virus affects both meat production and egg production and causes substantial economic loss. In chickens, infectious bronchitis virus targets not only the respiratory tract but also the urogenital tract. The virus can spread to different organs throughout the chicken. The virus is transmitted by aerosol and food contaminated by feces. Different vaccines against IBV exist and have helped to limit the spread of the virus and its variants. 
Infectious bronchitis virus is one of a number of strains of the species Avian coronavirus. Another strain of avian coronavirus is turkey coronavirus (TCV), which causes enteritis in turkeys. Coronaviruses also affect other branches of animal husbandry such as pig farming and cattle raising. Swine acute diarrhea syndrome coronavirus (SADS-CoV), which is related to bat coronavirus HKU2, causes diarrhea in pigs. Porcine epidemic diarrhea virus (PEDV) is a coronavirus that has recently emerged and similarly causes diarrhea in pigs. Transmissible gastroenteritis virus (TGEV), which is a member of the species Alphacoronavirus 1, is another coronavirus that causes diarrhea in young pigs. In the cattle industry, bovine coronavirus (BCV), which is a member of the species Betacoronavirus 1 and related to HCoV-OC43, is responsible for severe profuse enteritis in young calves. Domestic pets Coronaviruses infect domestic pets such as cats, dogs, and ferrets. There are two forms of feline coronavirus, which are both members of the species Alphacoronavirus 1. Feline enteric coronavirus is a pathogen of minor clinical significance, but spontaneous mutation of this virus can result in feline infectious peritonitis (FIP), a disease with high mortality. There are two different coronaviruses that infect dogs. Canine coronavirus (CCoV), which is a member of the species Alphacoronavirus 1, causes mild gastrointestinal disease. Canine respiratory coronavirus (CRCoV), which is a member of the species Betacoronavirus 1 and related to HCoV-OC43, causes respiratory disease. Similarly, there are two types of coronavirus that infect ferrets. Ferret enteric coronavirus causes a gastrointestinal syndrome known as epizootic catarrhal enteritis (ECE), while a more lethal systemic version of the virus (like FIP in cats) is known as ferret systemic coronavirus (FSC). Laboratory animals Coronaviruses infect laboratory animals. Mouse hepatitis virus (MHV), which is a member of the species Murine coronavirus, causes an epidemic murine illness with high mortality, especially among colonies of laboratory mice. Prior to the discovery of SARS-CoV, MHV was the best-studied coronavirus both in vivo and in vitro as well as at the molecular level. Some strains of MHV cause a progressive demyelinating encephalitis in mice which has been used as a murine model for multiple sclerosis. Sialodacryoadenitis virus (SDAV), which is a strain of the species Murine coronavirus, is a highly infectious coronavirus of laboratory rats that can be transmitted between individuals by direct contact and indirectly by aerosol. Rabbit enteric coronavirus causes acute gastrointestinal disease and diarrhea in young European rabbits. Mortality rates are high. Prevention and treatment A number of vaccines using different methods have been developed against human coronavirus SARS-CoV-2. Antiviral targets against human coronaviruses have also been identified, such as viral proteases, polymerases, and entry proteins. Drugs are in development which target these proteins and the different steps of viral replication. Vaccines are available for animal coronaviruses IBV, TGEV, and Canine CoV, although their effectiveness is limited. In the case of outbreaks of highly contagious animal coronaviruses, such as PEDV, measures such as destruction of entire herds of pigs may be used to prevent transmission to other herds.
Biology and health sciences
Infectious disease
null
202240
https://en.wikipedia.org/wiki/Bivalvia
Bivalvia
Bivalvia, or bivalves, in previous centuries referred to as the Lamellibranchiata and Pelecypoda, is a class of aquatic molluscs (marine and freshwater) that have laterally compressed soft bodies enclosed by a calcified exoskeleton consisting of a hinged pair of half-shells known as valves. As a group, bivalves have no head and lack some typical molluscan organs such as the radula and the odontophore. Their gills have evolved into ctenidia, specialised organs for feeding and breathing. Common bivalves include clams, oysters, cockles, mussels, scallops, and numerous other families that live in saltwater, as well as a number of families that live in freshwater. The majority of the class are benthic filter feeders that bury themselves in sediment, where they are relatively safe from predation. Others lie on the sea floor or attach themselves to rocks or other hard surfaces. Some bivalves, such as scallops and file shells, can swim. Shipworms bore into wood, clay, or stone and live inside these substances. The shell of a bivalve is composed of calcium carbonate, and consists of two, usually similar, parts called valves, which enclose and protect the animal's soft body. These are joined together along one edge (the hinge line) by a flexible ligament that, usually in conjunction with interlocking "teeth" on each of the valves, forms the hinge. This arrangement allows the shell to be opened and closed without the two halves detaching. The shell is typically bilaterally symmetrical, with the hinge lying in the sagittal plane. Adult shell sizes of bivalves vary from fractions of a millimetre to over a metre in length, but the majority of species do not exceed 10 cm (4 in). Bivalves have long been a part of the diet of coastal and riparian human populations. Oysters were cultured in ponds by the Romans, and mariculture has more recently become an important source of bivalves for food. Modern knowledge of molluscan reproductive cycles has led to the development of hatcheries and new culture techniques. A better understanding of the potential hazards of eating raw or undercooked shellfish has led to improved storage and processing. Pearl oysters (the common name of two very different families in salt water and fresh water) are the most common source of natural pearls. The shells of bivalves are used in craftwork, and the manufacture of jewellery and buttons. Bivalves have also been used in the biocontrol of pollution. Bivalves first appear in the fossil record in the early Cambrian, more than 500 million years ago. The total number of known living species is about 9,200. These species are placed within 1,260 genera and 106 families. Marine bivalves (including brackish water and estuarine species) represent about 8,000 species, combined in four subclasses and 99 families with 1,100 genera. The largest recent marine families are the Veneridae, with more than 680 species, and the Tellinidae and Lucinidae, each with over 500 species. The freshwater bivalves include seven families, the largest of which are the Unionidae, with about 700 species. Etymology The taxonomic term Bivalvia was first used by Linnaeus in the 10th edition of his Systema Naturae in 1758 to refer to animals having shells composed of two valves. More recently, the class was known as Pelecypoda, meaning "axe-foot" (based on the shape of the foot of the animal when extended). The name "bivalve" is derived from Latin words meaning 'two' and 'leaves of a door'. ("Leaf" is an older word for the main, movable part of a door.
We normally consider this the door itself.) Paired shells have evolved independently several times among animals that are not bivalves; other animals with paired valves include certain gastropods (small sea snails in the family Juliidae), members of the phylum Brachiopoda and the minute crustaceans known as ostracods and conchostracans. Anatomy Bivalves have bilaterally symmetrical and laterally flattened bodies, with a blade-shaped foot, vestigial head and no radula. At the dorsal or back region of the shell is the hinge point or line, which contain the umbo and beak and the lower, curved margin is the ventral or underside region. The anterior or front of the shell is where the byssus (when present) and foot are located, and the posterior of the shell is where the siphons are located. With the hinge uppermost and with the anterior edge of the animal towards the viewer's left, the valve facing the viewer is the left valve and the opposing valve the right. Many bivalves such as clams, which appear upright, are evolutionarily lying on their side. Mantle and shell The shell is composed of two calcareous valves held together by a ligament. The valves are made of either calcite, as is the case in oysters, or both calcite and aragonite. Sometimes, the aragonite forms an inner, nacreous layer, as is the case in the order Pteriida. In other taxa, alternate layers of calcite and aragonite are laid down. The ligament and byssus, if calcified, are composed of aragonite. The outermost layer of the shell is the periostracum, a thin layer composed of horny conchiolin. The periostracum is secreted by the outer mantle and is easily abraded. The outer surface of the valves is often sculpted, with clams often having concentric striations, scallops having radial ribs and oysters a latticework of irregular markings. In all molluscs, the mantle forms a thin membrane that covers the animal's body and extends out from it in flaps or lobes. In bivalves, the mantle lobes secrete the valves, and the mantle crest secretes the whole hinge mechanism consisting of ligament, byssus threads (where present), and teeth. The posterior mantle edge may have two elongated extensions known as siphons, through one of which water is inhaled, and the other expelled. The siphons retract into a cavity, known as the pallial sinus. The shell grows larger when more material is secreted by the mantle edge, and the valves themselves thicken as more material is secreted from the general mantle surface. Calcareous matter comes from both its diet and the surrounding seawater. Concentric rings on the exterior of a valve are commonly used to age bivalves. For some groups, a more precise method for determining the age of a shell is by cutting a cross section through it and examining the incremental growth bands. The shipworms, in the family Teredinidae have greatly elongated bodies, but their shell valves are much reduced and restricted to the anterior end of the body, where they function as scraping organs that permit the animal to dig tunnels through wood. Muscles and ligaments The main muscular system in bivalves is the posterior and anterior adductor muscles. These muscles connect the two valves and contract to close the shell. The valves are also joined dorsally by the hinge ligament, which is an extension of the periostracum. The ligament is responsible for opening the shell, and works against the adductor muscles when the animal opens and closes. 
Retractor muscles connect the mantle to the edge of the shell, along a line known as the pallial line. These muscles pull the mantle margin back within the shelter of the valves. In sedentary or recumbent bivalves that lie on one valve, such as the oysters and scallops, the anterior adductor muscle has been lost and the posterior muscle is positioned centrally. In species that can swim by flapping their valves, a single, central adductor muscle occurs. These muscles are composed of two types of muscle fibres, striated muscle bundles for fast actions and smooth muscle bundles for maintaining a steady pull. Paired pedal protractor and retractor muscles operate the animal's foot. Nervous system The sedentary habits of the bivalves have meant that in general the nervous system is less complex than in most other molluscs. The animals have no brain; the nervous system consists of a nerve network and a series of paired ganglia. In all but the most primitive bivalves, two cerebropleural ganglia are on either side of the oesophagus. The cerebral ganglia control the sensory organs, while the pleural ganglia supply nerves to the mantle cavity. The pedal ganglia, which control the foot, are at its base, and the visceral ganglia, which can be quite large in swimming bivalves, are under the posterior adductor muscle. These ganglia are both connected to the cerebropleural ganglia by nerve fibres. Bivalves with long siphons may also have siphonal ganglia to control them. Senses The sensory organs of bivalves are largely located on the posterior mantle margins. The organs are usually mechanoreceptors or chemoreceptors, in some cases located on short tentacles. The osphradium is a patch of sensory cells located below the posterior adductor muscle that may serve to taste the water or measure its turbidity. Statocysts within the organism help the bivalve to sense and correct its orientation. In the order Anomalodesmata, the inhalant siphon is surrounded by vibration-sensitive tentacles for detecting prey. Many bivalves have no eyes, but a few members of the Arcoidea, Limopsoidea, Mytiloidea, Anomioidea, Ostreoidea, and Limoidea have simple eyes on the margin of the mantle. These consist of a pit of photosensory cells and a lens. Scallops have more complex eyes with a lens, a two-layered retina, and a concave mirror. All bivalves have light-sensitive cells that can detect a shadow falling over the animal. Circulation and respiration Bivalves have an open circulatory system that bathes the organs in blood (hemolymph). The heart has three chambers: two auricles receiving blood from the gills, and a single ventricle. The ventricle is muscular and pumps hemolymph into the aorta, and then to the rest of the body. Some bivalves have a single aorta, but most also have a second, usually smaller, aorta serving the hind parts of the animal. The hemolymph usually lacks any respiratory pigment. In the carnivorous genus Poromya, the hemolymph has red amoebocytes containing a haemoglobin pigment. The paired gills are located posteriorly and consist of hollow tube-like filaments with thin walls for gas exchange. The respiratory demands of bivalves are low, due to their relative inactivity. Some freshwater species, when exposed to the air, can gape the shell slightly, and gas exchange can take place. Oysters, including the Pacific oyster (Magallana gigas), are recognized as having varying metabolic responses to environmental stress, with changes in respiration rate being frequently observed. 
Digestive system Modes of feeding Most bivalves are filter feeders, using their gills to capture particulate food such as phytoplankton from the water. Protobranchs feed in a different way, scraping detritus from the seabed, and this may be the original mode of feeding used by all bivalves before the gills became adapted for filter feeding. These primitive bivalves hold on to the bottom with a pair of tentacles at the edge of the mouth, each of which has a single palp, or flap. The tentacles are covered in mucus, which traps the food, and cilia, which transport the particles back to the palps. These then sort the particles, rejecting those that are unsuitable or too large to digest, and conveying others to the mouth. In more advanced bivalves, water is drawn into the shell from the posterior ventral surface of the animal, passes upwards through the gills, and doubles back to be expelled just above the intake. There may be two elongated, retractable siphons reaching up to the seabed, one each for the inhalant and exhalant streams of water. The gills of filter-feeding bivalves are known as ctenidia and have become highly modified to increase their ability to capture food. For example, the cilia on the gills, which originally served to remove unwanted sediment, have become adapted to capture food particles, and transport them in a steady stream of mucus to the mouth. The filaments of the gills are also much longer than those in more primitive bivalves, and are folded over to create a groove through which food can be transported. The structure of the gills varies considerably, and can serve as a useful means for classifying bivalves into groups. A few bivalves, such as the granular poromya (Poromya granulata), are carnivorous, eating much larger prey than the tiny microalgae consumed by other bivalves. Muscles draw water in through the inhalant siphon, which is modified into a cowl-shaped organ, sucking in prey. The siphon can be retracted quickly and inverted, bringing the prey within reach of the mouth. The gut is modified so that large food particles can be digested. The unusual genus Entovalva is endosymbiotic, being found only in the oesophagus of sea cucumbers. It has mantle folds that completely surround its small valves. When the sea cucumber sucks in sediment, the bivalve allows the water to pass over its gills and extracts fine organic particles. To prevent itself from being swept away, it attaches itself with byssal threads to the host's throat. The sea cucumber is unharmed. Digestive tract The digestive tract of typical bivalves consists of an oesophagus, stomach, and intestine. Protobranch stomachs have a simple sac attached to them, while filter-feeding bivalves have an elongated rod of solidified mucus, referred to as the "crystalline style", which projects into the stomach from an associated sac. Cilia in the sac cause the style to rotate, winding in a stream of food-containing mucus from the mouth, and churning the stomach contents. This constant motion propels food particles into a sorting region at the rear of the stomach, which distributes smaller particles into the digestive glands, and heavier particles into the intestine. Waste material is consolidated in the rectum and voided as pellets into the exhalant water stream through an anal pore. Feeding and digestion are synchronized with diurnal and tidal cycles. 
Carnivorous bivalves generally have a reduced crystalline style, and the stomach has thick, muscular walls, extensive cuticular linings, and diminished sorting areas and gastric chamber sections. Excretory system The excretory organs of bivalves are a pair of nephridia. Each of these consists of a long, looped, glandular tube, which opens into the pericardium, and a bladder to store urine. They also have pericardial glands that either line the auricles of the heart or attach to the pericardium and serve as extra filtration organs. Metabolic waste is voided from the bladders through a nephridiopore near the front of the upper part of the mantle cavity and excreted. Reproduction and development The sexes are usually separate in bivalves, but some hermaphroditism is known. The gonads either open into the nephridia or through a separate pore into a chamber over the gills. The ripe gonads of males and females release sperm and eggs into the water column. Spawning may take place continually or be triggered by environmental factors such as day length, water temperature, or the presence of sperm in the water. Some species are "dribble spawners", releasing gametes over a protracted period that can extend for weeks. Others are mass spawners and release their gametes in batches or all at once. Fertilization is usually external. Typically, a short stage lasts a few hours or days before the eggs hatch into trochophore larvae. These later develop into veliger larvae, which settle on the seabed and undergo metamorphosis into adults. In some species, such as those in the genus Lasaea, females draw water containing sperm in through their inhalant siphons and fertilization takes place inside the female. These species then brood the young inside their mantle cavity, eventually releasing them into the water column as veliger larvae or as crawl-away juveniles. Most of the bivalve larvae that hatch from eggs in the water column feed on diatoms or other phytoplankton. In temperate regions, about 25% of species are lecithotrophic, depending on nutrients stored in the yolk of the egg, where the main energy source is lipids. The longer the period is before the larva first feeds, the larger the egg and yolk need to be. The reproductive cost of producing these energy-rich eggs is high and they are usually smaller in number. For example, the Baltic tellin (Macoma balthica) produces few, high-energy eggs. The larvae hatching out of these rely on the energy reserves and do not feed. After about four days, they become D-stage larvae, when they first develop hinged, D-shaped valves. These larvae have a relatively small dispersal potential before settling out. The common mussel (Mytilus edulis) produces 10 times as many eggs; these hatch into larvae that soon need to feed to survive and grow. They can disperse more widely as they remain planktonic for a much longer time. Freshwater bivalves have a different lifecycle. Sperm is drawn into a female's gills with the inhalant water and internal fertilization takes place. The eggs hatch into glochidia larvae that develop within the female's shell. Later they are released and attach themselves parasitically to the gills or fins of a fish host. After several weeks they drop off their host, undergo metamorphosis and develop into adults on the substrate. Some of the species in the freshwater mussel family Unionidae, commonly known as pocketbook mussels, have evolved an unusual reproductive strategy. 
The female's mantle protrudes from the shell and develops into an imitation small fish, complete with fish-like markings and false eyes. This decoy moves in the current and attracts the attention of real fish. Some fish see the decoy as prey, while others see a conspecific. They approach for a closer look and the mussel releases huge numbers of larvae from its gills, dousing the inquisitive fish with its tiny, parasitic young. These glochidia larvae are drawn into the fish's gills, where they attach and trigger a tissue response that forms a small cyst around each larva. The larvae then feed by breaking down and digesting the tissue of the fish within the cysts. After a few weeks they release themselves from the cysts and fall to the stream bed as juvenile molluscs. Comparison with brachiopods Brachiopods are shelled marine organisms that superficially resemble bivalves in that they are of similar size and have a hinged shell in two parts. However, brachiopods evolved from a very different ancestral line, and the resemblance to bivalves only arose because they occupy similar ecological niches. The differences between the two groups are due to their separate ancestral origins. Different initial structures have been adapted to solve the same problems, a case of convergent evolution. In modern times, brachiopods are not as common as bivalves. Both groups have a shell consisting of two valves, but the organization of the shell is quite different in the two groups. In brachiopods, the two valves are positioned on the dorsal and ventral surfaces of the body, while in bivalves, the valves are on the left and right sides of the body, and are, in most cases, mirror images of one other. Brachiopods have a lophophore, a coiled, rigid cartilaginous internal apparatus adapted for filter feeding, a feature shared with two other major groups of marine invertebrates, the bryozoans and the phoronids. Some brachiopod shells are made of calcium phosphate but most are calcium carbonate in the form of the biomineral calcite, whereas bivalve shells are always composed entirely of calcium carbonate, often in the form of the biomineral aragonite. Evolutionary history The Cambrian explosion took place around 540 to 520 million years ago (Mya). In this geologically brief period, most major animal phyla diverged including some of the first creatures with mineralized skeletons. Brachiopods and bivalves made their appearance at this time, and left their fossilized remains behind in the rocks. Possible early bivalves include Pojetaia and Fordilla; these probably lie in the stem rather than crown group. Watsonella and Anabarella are perceived to be (earlier) close relatives of these taxa. Only five genera of supposed Cambrian "bivalves" exist, the others being Tuarangia, Camya and Arhouriella and potentially Buluniella. Bivalve fossils can be formed when the sediment in which the shells are buried hardens into rock. Often, the impression made by the valves remains as the fossil rather than the valves. During the Early Ordovician, a great increase in the diversity of bivalve species occurred, and the dysodont, heterodont, and taxodont dentitions evolved. By the Early Silurian, the gills were becoming adapted for filter feeding, and during the Devonian and Carboniferous periods, siphons first appeared, which, with the newly developed muscular foot, allowed the animals to bury themselves deep in the sediment. 
By the middle of the Paleozoic, around 400 Mya, the brachiopods were among the most abundant filter feeders in the ocean, and over 12,000 fossil species are recognized. By the Permian–Triassic extinction event 250 Mya, bivalves were undergoing a huge radiation of diversity. The bivalves were hard hit by this event, but re-established themselves and thrived during the Triassic period that followed. In contrast, the brachiopods lost 95% of their species diversity. The ability of some bivalves to burrow and thus avoid predators may have been a major factor in their success. Other new adaptations within various families allowed species to occupy previously unused evolutionary niches. These included increasing relative buoyancy in soft sediments by developing spines on the shell, gaining the ability to swim, and in a few cases, adopting predatory habits. For a long time, bivalves were thought to be better adapted to aquatic life than brachiopods were, outcompeting and relegating them to minor niches in later ages. These two taxa appeared in textbooks as an example of replacement by competition. Evidence given for this included the fact that bivalves needed less food to subsist because of their energetically efficient ligament-muscle system for opening and closing valves. All this has been broadly disproven, though; rather, the prominence of modern bivalves over brachiopods seems due to chance disparities in their response to extinction events. Diversity of extant bivalves Adult sizes of living bivalve species range from the minute nut clam Condylonucula maya to the elongated, burrowing shipworm Kuphus polythalamia. However, the species generally regarded as the largest living bivalve is the giant clam Tridacna gigas, which can reach a weight of more than 200 kg (441 lb). The largest known extinct bivalve is a species of Platyceramus, whose fossils measure well over a metre in length. In his 2010 treatise, Compendium of Bivalves, Markus Huber gives the total number of living bivalve species as about 9,200, combined in 106 families. Huber states that the number of 20,000 living species, often encountered in literature, could not be verified, and gives a family-by-family breakdown to illustrate the known diversity. Distribution The bivalves are a highly successful class of invertebrates found in aquatic habitats throughout the world. Most are infaunal and live buried in sediment on the seabed, or in the sediment in freshwater habitats. A large number of bivalve species are found in the intertidal and sublittoral zones of the oceans. A sandy sea beach may superficially appear to be devoid of life, but often a very large number of bivalves and other invertebrates are living beneath the surface of the sand. On a large beach in South Wales, careful sampling produced an estimate of 1.44 million cockles (Cerastoderma edule) per acre of beach. Bivalves inhabit the tropics, as well as temperate and boreal waters. A number of species can survive and even flourish in extreme conditions. They are abundant in the Arctic, about 140 species being known from that zone. The Antarctic scallop, Adamussium colbecki, lives under the sea ice at the other end of the globe, where the subzero temperatures mean that growth rates are very slow. The giant mussel, Bathymodiolus thermophilus, and the giant white clam, Calyptogena magnifica, both live clustered around hydrothermal vents at abyssal depths in the Pacific Ocean. 
They have chemosymbiotic bacteria in their gills that oxidise hydrogen sulphide, and the molluscs absorb nutrients synthesized by these bacteria. Some species are found in the hadal zone, like Vesicomya sergeevi, which occurs at depths of 7600–9530 meters. The saddle oyster, Enigmonia aenigmatica, is a marine species that could be considered amphibious. It lives above the high tide mark in the tropical Indo-Pacific on the underside of mangrove leaves, on mangrove branches, and on sea walls in the splash zone. Some freshwater bivalves have very restricted ranges. For example, the Ouachita creekshell mussel, Villosa arkansasensis, is known only from the streams of the Ouachita Mountains in Arkansas and Oklahoma, and like several other freshwater mussel species from the southeastern US, it is in danger of extinction. In contrast, a few species of freshwater bivalves, including the golden mussel (Limnoperna fortunei), are dramatically increasing their ranges. The golden mussel has spread from Southeast Asia to Argentina, where it has become an invasive species. Another well-travelled freshwater bivalve, the zebra mussel (Dreissena polymorpha) originated in southeastern Russia, and has been accidentally introduced to inland waterways in North America and Europe, where the species damages water installations and disrupts local ecosystems. Behaviour Most bivalves adopt a sedentary or even sessile lifestyle, often spending their whole lives in the area in which they first settled as juveniles. The majority of bivalves are infaunal, living under the seabed, buried in soft substrates such as sand, silt, mud, gravel, or coral fragments. Many of these live in the intertidal zone where the sediment remains damp even when the tide is out. When buried in the sediment, burrowing bivalves are protected from the pounding of waves, desiccation, and overheating during low tide, and variations in salinity caused by rainwater. They are also out of the reach of many predators. Their general strategy is to extend their siphons to the surface for feeding and respiration during high tide, but to descend to greater depths or keep their shell tightly shut when the tide goes out. They use their muscular foot to dig into the substrate. To do this, the animal relaxes its adductor muscles and opens its shell wide to anchor itself in position while it extends its foot downwards into the substrate. Then it dilates the tip of its foot, retracts the adductor muscles to close the shell, shortens its foot and draws itself downwards. This series of actions is repeated to dig deeper. Other bivalves, such as mussels, attach themselves to hard surfaces using tough byssus threads made of collagen and elastin proteins. Some species, including the true oysters, the jewel boxes, the jingle shells, the thorny oysters and the kitten's paws, cement themselves to stones, rock or larger dead shells. In oysters, the lower valve may be almost flat while the upper valve develops layer upon layer of thin horny material reinforced with calcium carbonate. Oysters sometimes occur in dense beds in the neritic zone and, like most bivalves, are filter feeders. Bivalves filter large amounts of water to feed and breathe but they are not permanently open. They regularly shut their valves to enter a resting state, even when they are permanently submerged. In oysters, for example, their behaviour follows very strict circatidal and circadian rhythms according to the relative positions of the moon and sun. 
During neap tides, they exhibit much longer closing periods than during spring tides. Although many non-sessile bivalves use their muscular foot to move around, or to dig, members of the freshwater family Sphaeriidae are exceptional in that these small clams climb about quite nimbly on weeds using their long and flexible foot. The European fingernail clam (Sphaerium corneum), for example, climbs around on water weeds at the edges of lakes and ponds; this enables the clam to find the best position for filter feeding. Predators and defence The thick shell and rounded shape of bivalves make them awkward for potential predators to tackle. Nevertheless, a number of different creatures include them in their diet. Many species of demersal fish feed on them, including the common carp (Cyprinus carpio), which is being used in the upper Mississippi River to try to control the invasive zebra mussel (Dreissena polymorpha). Birds such as the Eurasian oystercatcher (Haematopus ostralegus) have specially adapted beaks which can pry open their shells. The herring gull (Larus argentatus) sometimes drops heavy shells onto rocks in order to crack them open. Sea otters feed on a variety of bivalve species and have been observed to use stones balanced on their chests as anvils on which to crack open the shells. The Pacific walrus (Odobenus rosmarus divergens) is one of the main predators feeding on bivalves in Arctic waters. Shellfish have formed part of the human diet since prehistoric times, a fact evidenced by the remains of mollusc shells found in ancient middens. Examinations of these deposits in Peru have provided a means of dating long-past El Niño events because of the disruption these caused to bivalve shell growth. Changes in shell development due to environmental stress have also been suggested to cause increased mortality in oysters due to reduced shell strength. Invertebrate predators include crustaceans, starfish and octopuses. Crustaceans crack the shells with their pincers, and starfish use their water vascular system to force the valves apart and then insert part of their stomach between the valves to digest the bivalve's body. It has been found experimentally that both crabs and starfish prefer molluscs that are attached by byssus threads over ones that are cemented to the substrate. This is probably because they can manipulate the shells and open them more easily when they can tackle them from different angles. Octopuses either pull bivalves apart by force, or they bore a hole into the shell and insert a digestive fluid before sucking out the liquified contents. Certain carnivorous gastropod snails such as whelks (Buccinidae) and murex snails (Muricidae) feed on bivalves by boring into their shells. A dog whelk (Nucella) drills a hole with its radula, assisted by a shell-dissolving secretion. The dog whelk then inserts its extendible proboscis and sucks out the body contents of the victim, which is typically a blue mussel. Razor shells can dig themselves into the sand with great speed to escape predation. When a Pacific razor clam (Siliqua patula) is laid on the surface of the beach, it can bury itself completely in seven seconds, and the Atlantic jackknife clam, Ensis directus, can do the same within fifteen seconds. Scallops and file clams can swim by opening and closing their valves rapidly; water is ejected on either side of the hinge area and they move with the flapping valves in front. 
Scallops have simple eyes around the margin of the mantle and can clap their valves shut to move sharply, hinge first, to escape from danger. Cockles can use their foot to move across the seabed or leap away from threats. The foot is first extended before being contracted suddenly, when it acts like a spring, projecting the animal forwards. In many bivalves that have siphons, these can be retracted back into the safety of the shell. If a siphon is attacked by a predator, it may in some cases snap off. The animal can regenerate it later, a process that starts when the cells close to the damaged site become activated and remodel the tissue back to its pre-existing form and size. In other cases the siphon does not detach, and a predatory fish that seizes an exposed siphon may use it to extract the entire body; this tactic has been observed used against bivalves with an infaunal lifestyle. The file shell Limaria fragilis can produce a noxious secretion when stressed. It has numerous tentacles which fringe its mantle and protrude some distance from the shell when it is feeding. If attacked, it sheds tentacles in a process known as autotomy. The toxin released by this is distasteful, and the detached tentacles continue to writhe, which may also serve to distract potential predators. Mariculture Oysters, mussels, clams, scallops and other bivalve species are grown with food materials that occur naturally in their culture environment in the sea and lagoons. One-third of the world's farmed food fish harvested in 2010 was achieved without the use of feed, through the production of bivalves and filter-feeding carps. European flat oysters (Ostrea edulis) were first farmed by the Romans in shallow ponds, and similar techniques are still in use. Seed oysters are either raised in a hatchery or harvested from the wild. Hatchery production provides some control of the broodstock, but remains problematic because disease-resistant strains of this oyster have not yet been developed. Wild spat is harvested either by broadcasting empty mussel shells on the seabed or by the use of long, small-mesh nets filled with mussel shells supported on steel frames. The oyster larvae preferentially settle out on the mussel shells. Juvenile oysters are then grown on in nursery trays and are transferred to open waters when they are large enough. Many juveniles are further reared off the seabed in suspended rafts, on floating trays or cemented to ropes. Here they are largely free from bottom-dwelling predators such as starfish and crabs, but more labour is required to tend them. They can be harvested by hand when they reach a suitable size. Other juveniles are laid directly on the seabed at a set density per hectare. They grow on for about two years before being harvested by dredging. Survival rates are low, at about 5%. The Pacific oyster (Crassostrea gigas) is cultivated by similar methods but in larger volumes and in many more regions of the world. This oyster originated in Japan, where it has been cultivated for many centuries. It is an estuarine species and prefers salinities of 20 to 25 parts per thousand. Breeding programmes have produced improved stock that is available from hatcheries. A single female oyster can produce 50–80 million eggs in a batch, so the selection of broodstock is of great importance. The larvae are grown on in tanks of static or moving water. They are fed high-quality microalgae and diatoms and grow fast. 
At metamorphosis the juveniles may be allowed to settle on PVC sheets or pipes, or crushed shell. In some cases, they continue their development in "upwelling culture" in large tanks of moving water rather than being allowed to settle on the bottom. They then may be transferred to transitional, nursery beds before being moved to their final rearing quarters. Culture there takes place on the bottom, in plastic trays, in mesh bags, on rafts or on long lines, either in shallow water or in the intertidal zone. The oysters are ready for harvesting in 18 to 30 months depending on the size required. Similar techniques are used in different parts of the world to cultivate other species including the Sydney rock oyster (Saccostrea commercialis), the northern quahog (Mercenaria mercenaria), the blue mussel (Mytilus edulis), the Mediterranean mussel (Mytilus galloprovincialis), the New Zealand green-lipped mussel (Perna canaliculus), the grooved carpet shell (Ruditapes decussatus), the Japanese carpet shell (Venerupis philippinarum), the pullet carpet shell (Venerupis pullastra) and the Yesso scallop (Patinopecten yessoensis). Production of bivalve molluscs by mariculture in 2010 was 12,913,199 tons, up from 8,320,724 tons in 2000. Culture of clams, cockles and ark shells more than doubled over this time period from 2,354,730 to 4,885,179 tons. Culture of mussels over the same period grew from 1,307,243 to 1,812,371 tons, of oysters from 3,610,867 to 4,488,544 tons and of scallops from 1,047,884 to 1,727,105 tons. Use as food Bivalves have been an important source of food for humans at least since Roman times and empty shells found in middens at archaeological sites are evidence of earlier consumption. Oysters, scallops, clams, ark clams, mussels and cockles are the most commonly consumed kinds of bivalve, and are eaten cooked or raw. In 1950, the year in which the Food and Agriculture Organization (FAO) started making such information available, world trade in bivalve molluscs was 1,007,419 tons. By 2010, world trade in bivalves had risen to 14,616,172 tons, up from 10,293,607 tons a decade earlier. The figures included 5,554,348 (3,152,826) tons of clams, cockles and ark shells, 1,901,314 (1,568,417) tons of mussels, 4,592,529 (3,858,911) tons of oysters and 2,567,981 (1,713,453) tons of scallops. China increased its consumption 400-fold during the period 1970 to 1997. It has been known for more than a century that consumption of raw or insufficiently cooked shellfish can be associated with infectious diseases. These are caused either by bacteria naturally present in the sea such as Vibrio spp. or by viruses and bacteria from sewage effluent that sometimes contaminates coastal waters. As filter feeders, bivalves pass large quantities of water through their gills, filtering out the organic particles, including the microbial pathogens. These are retained in the animals' tissues and become concentrated in their liver-like digestive glands. Another possible source of contamination occurs when bivalves contain marine biotoxins as a result of ingesting numerous dinoflagellates. These microalgae are not associated with sewage but occur unpredictably as algal blooms. Large areas of a sea or lake may change colour as a result of the proliferation of millions of single-cell algae, and this condition is known as a red tide. Viral and bacterial infections In 1816 in France, a physician, J. P. A. Pasquier, described an outbreak of typhoid linked to the consumption of raw oysters. 
The first report of this kind in the United States was in Connecticut in 1894. As sewage treatment programmes became more prevalent in the late 19th century, more outbreaks took place. This may have been because sewage was released through outlets into the sea, providing more food for bivalves in estuaries and coastal habitats. A causal link between the bivalves and the illness was not easy to demonstrate because the illness might come on days or even weeks after the ingestion of the contaminated shellfish. One viral pathogen is the Norwalk virus. This is resistant to treatment with chlorine-containing chemicals and may be present in the marine environment even when coliform bacteria have been killed by the treatment of sewage. Since the 1970s, outbreaks of oyster-vectored diseases have occurred throughout the world. The mortality rate from one disease-causing bacterium, Vibrio vulnificus, was high, at 50%. In 1978, an oyster-associated gastrointestinal infection affecting more than 2,000 people occurred in Australia. The causative agent was found to be the Norwalk virus, and the epidemic caused major economic difficulties to the oyster farming industry in the country. In 1988, an outbreak of hepatitis A associated with the consumption of inadequately cooked clams (Anadara subcrenata) took place in the Shanghai area of China. An estimated 290,000 people were infected and there were 47 deaths. In the United States and the European Union, regulations have been in place since the early 1990s that are designed to prevent shellfish from contaminated waters from entering restaurants. Paralytic shellfish poisoning Paralytic shellfish poisoning (PSP) is primarily caused by the consumption of bivalves that have accumulated toxins by feeding on toxic dinoflagellates, single-celled protists found naturally in the sea and inland waters. Saxitoxin is the most virulent of these. In mild cases, PSP causes tingling, numbness, sickness and diarrhoea. In more severe cases, the muscles of the chest wall may be affected, leading to paralysis and even death. In 1937, researchers in California established the connection between blooms of these phytoplankton and PSP. The biotoxin remains potent even when the shellfish are well-cooked. In the United States, there is a regulatory limit of 80 μg/g of saxitoxin equivalent in shellfish meat. Amnesic shellfish poisoning Amnesic shellfish poisoning (ASP) was first reported in eastern Canada in 1987. It is caused by the substance domoic acid, found in certain diatoms of the genus Pseudo-nitzschia. Bivalves can become toxic when they filter these microalgae out of the water. Domoic acid is a low-molecular-weight amino acid that can destroy brain cells, causing memory loss, gastroenteritis, long-term neurological problems or death. In an outbreak in the western United States in 1993, finfish were also implicated as vectors, and seabirds and mammals suffered neurological symptoms. In the United States and Canada, a regulatory limit of 20 μg/g of domoic acid in shellfish meat is set. Ecosystem services Ecosystem services provided by marine bivalves in relation to nutrient extraction from the coastal environment have gained increased attention as a way to mitigate the adverse effects of excess nutrient loading from human activities, such as agriculture and sewage discharge. These activities damage coastal ecosystems and require action from local, regional, and national environmental management. 
Marine bivalves filter particles like phytoplankton, thereby transforming particulate organic matter into bivalve tissue or larger faecal pellets that are transferred to the benthos. Nutrient extraction from the coastal environment takes place through two different pathways: (i) harvest/removal of the bivalves – thereby returning nutrients back to land; or (ii) through increased denitrification in proximity to dense bivalve aggregations, leading to loss of nitrogen to the atmosphere. Active use of marine bivalves for nutrient extraction may include a number of secondary effects on the ecosystem, such as filtration of particulate material. This leads to partial transformation of particulate-bound nutrients into dissolved nutrients via bivalve excretion or enhanced mineralization of faecal material. When they live in polluted waters, bivalve molluscs have a tendency to accumulate substances such as heavy metals and persistent organic pollutants in their tissues. This is because they ingest the chemicals as they feed but their enzyme systems are not capable of metabolising them and as a result, the levels build up. This may be a health hazard for the molluscs themselves, and is one for humans who eat them. It also has certain advantages in that bivalves can be used in monitoring the presence and quantity of pollutants in their environment. There are limitations to the use of bivalves as bioindicators. The level of pollutants found in the tissues varies with species, age, size, time of year and other factors. The quantities of pollutants in the water may vary and the molluscs may reflect past rather than present values. In a study near Vladivostok it was found that the level of pollutants in the bivalve tissues did not always reflect the high levels in the surrounding sediment in such places as harbours. The reason for this was thought to be that the bivalves in these locations did not need to filter so much water as elsewhere because of the water's high nutritional content. A study of nine different bivalves with widespread distributions in tropical marine waters concluded that the mussel, Trichomya hirsuta, most nearly reflected in its tissues the level of heavy metals (Pb, Cd, Cu, Zn, Co, Ni, and Ag) in its environment. In this species there was a linear relationship between the sedimentary levels and the tissue concentration of all the metals except zinc. In the Persian Gulf, the Atlantic pearl-oyster (Pinctada radiata) is considered to be a useful bioindicator of heavy metals. Crushed shells, available as a by-product of the seafood canning industry, can be used to remove pollutants from water. It has been found that, as long as the water is maintained at an alkaline pH, crushed shells will remove cadmium, lead and other heavy metals from contaminated waters by swapping the calcium in their constituent aragonite for the heavy metal, and retaining these pollutants in a solid form. The rock oyster (Saccostrea cucullata) has been shown to reduce the levels of copper and cadmium in contaminated waters in the Persian Gulf. The live animals acted as biofilters, selectively removing these metals, and the dead shells also had the ability to reduce their concentration. Other uses Conchology is the scientific study of mollusc shells, but the term conchologist is also sometimes used to describe a collector of shells. Many people pick up shells on the beach or purchase them and display them in their homes. 
There are many private and public collections of mollusc shells, but the largest one in the world is at the Smithsonian Institution, which houses in excess of 20 million specimens. Shells are used decoratively in many ways. They can be pressed into concrete or plaster to make decorative paths, steps or walls and can be used to embellish picture frames, mirrors or other craft items. They can be stacked up and glued together to make ornaments. They can be pierced and threaded onto necklaces or made into other forms of jewellery. Shells have had various uses in the past as body decorations, utensils, scrapers and cutting implements. Carefully cut and shaped shell tools dating back 32,000 years have been found in a cave in Indonesia. In this region, shell technology may have been developed in preference to the use of stone or bone implements, perhaps because of the scarcity of suitable rock materials. The indigenous peoples of the Americas living near the east coast used pieces of shell as wampum. The channeled whelk (Busycotypus canaliculatus) and the quahog (Mercenaria mercenaria) were used to make white and purple traditional patterns. The shells were cut, rolled, polished and drilled before being strung together and woven into belts. These were used for personal, social and ceremonial purposes and also, at a later date, for currency. The Winnebago Tribe from Wisconsin had numerous uses for freshwater mussels, including using them as spoons, cups, ladles and utensils. They notched them to provide knives, graters and saws. They carved them into fish hooks and lures. They incorporated powdered shell into clay to temper their pottery vessels. They used them as scrapers for removing flesh from hides and for separating the scalps of their victims. They used shells as scoops for gouging out fired logs when building canoes, and they drilled holes in them and fitted wooden handles for tilling the ground. Buttons have traditionally been made from a variety of freshwater and marine shells. At first they were used decoratively rather than as fasteners, and the earliest known example dates back five thousand years and was found at Mohenjo-daro in the Indus Valley. Sea silk is a fine fabric woven from the byssus threads of bivalves, particularly the pen shell (Pinna nobilis). It used to be produced in the Mediterranean region, where these shells are endemic. It was an expensive fabric, and overfishing has much reduced populations of the pen shell. Crushed shells are added as a calcareous supplement to the diet of laying poultry. Oyster shell and cockle shell are often used for this purpose and are obtained as a by-product from other industries. Pearls and mother-of-pearl Mother-of-pearl or nacre is the naturally occurring lustrous layer that lines some mollusc shells. It is used to make pearl buttons and in artisan craftwork to make organic jewellery. It has traditionally been inlaid into furniture and boxes, particularly in China. It has been used to decorate musical instruments, watches, pistols, fans and other products. The import and export of goods made with nacre are controlled in many countries under the Convention on International Trade in Endangered Species of Wild Fauna and Flora. A pearl is created in the mantle of a mollusc when an irritant particle is surrounded by layers of nacre. 
Although most bivalves can create pearls, oysters in the family Pteriidae and freshwater mussels in the families Unionidae and Margaritiferidae are the main source of commercially available pearls because the calcareous concretions produced by most other species have no lustre. Finding pearls inside oysters is a very chancy business as hundreds of shells may need to be pried open before a single pearl can be found. Most pearls are now obtained from cultured shells where an irritant substance has been purposefully introduced to induce the formation of a pearl. A "mabe" (irregular) pearl can be grown by the insertion of an implant, usually made of plastic, under a flap of the mantle and next to the mother-of-pearl interior of the shell. A more difficult procedure is the grafting of a piece of oyster mantle into the gonad of an adult specimen together with the insertion of a shell bead nucleus. This produces a superior, spherical pearl. The animal can be opened to extract the pearl after about two years and reseeded so that it produces another pearl. Pearl oyster farming and pearl culture is an important industry in Japan and many other countries bordering the Indian and Pacific Oceans. Symbolism The scallop is the symbol of St James and is called Coquille Saint-Jacques in French. It is an emblem carried by pilgrims on their way to the shrine of Santiago de Compostela in Galicia. The shell became associated with the pilgrimage and came to be used as a symbol showing hostelries along the route and later as a sign of hospitality, food and lodging elsewhere. Roman myth has it that Venus, the goddess of love, was born in the sea and emerged accompanied by fish and dolphins, with Botticelli depicting her as arriving in a scallop shell. The Romans revered her and erected shrines in her honour in their gardens, praying to her to provide water and verdant growth. From this, the scallop and other bivalve shells came to be used as a symbol for fertility. Its depiction is used in architecture, furniture and fabric design and it is the logo of Royal Dutch Shell, the global oil and gas company. Bivalvian taxonomies For the past two centuries no consensus has existed on bivalve phylogeny from the many classifications developed. In earlier taxonomic systems, experts used a single characteristic feature for their classifications, choosing among shell morphology, hinge type or gill type. Conflicting naming schemes proliferated due to these taxonomies based on single organ systems. One of the most widely accepted systems was that put forward by Norman D. Newell in Part N of the Treatise on Invertebrate Paleontology, which employed a classification system based on general shell shape, microstructures and hinge configuration. Because features such as hinge morphology, dentition, mineralogy, shell morphology and shell composition change slowly over time, these characteristics can be used to define major taxonomic groups. Since the year 2000, taxonomic studies using cladistical analyses of multiple organ systems, shell morphology (including fossil species) and modern molecular phylogenetics have resulted in the drawing up of what experts believe is a more accurate phylogeny of the Bivalvia. Based upon these studies, a new proposed classification system for the Bivalvia was published in 2010 by Bieler, Carter & Coan. In 2012, this new system was adopted by the World Register of Marine Species (WoRMS) for the classification of the Bivalvia. 
Some experts still maintain that Anomalodesmacea should be considered a separate subclass, whereas the new system treats it as the order Anomalodesmata, within the subclass Heterodonta. Molecular phylogenetic work continues, further clarifying which Bivalvia are most closely related and thus refining the classification. Practical taxonomy of R.C. Moore R.C. Moore, in Moore, Lalicker, and Fischer, 1952, Invertebrate Fossils, gives a practical and useful classification of pelecypods (Bivalvia), even if somewhat antiquated, based on shell structure, gill type, and hinge teeth configuration. The subclasses and orders given are:
Subclass: Prionodesmacea
 Order Paleoconcha
 Taxodonta: Many teeth (e.g. order Nuculida)
 Schizodonta: Big bifurcating teeth (e.g. Trigonia spp.)
 Isodonta: Equal teeth (e.g. Spondylus spp.)
 Dysodonta: Teeth absent; a ligament joins the valves
Subclass: Teleodesmacea
 Order Heterodonta: Different teeth (e.g. family Cardiidae) [Lower Ordovician – Recent]
 Pachydonta: Large, different, deformed teeth (e.g. rudist spp.) [Late Jurassic – Upper Cretaceous]
 Desmodonta: Hinge teeth absent or irregular, with ligaments (e.g. family Anatinidae)
Prionodesmacea have a prismatic and nacreous shell structure, separated mantle lobes, poorly developed siphons, and hinge teeth that are lacking or unspecialized. Gills range from protobranch to eulamellibranch. Teleodesmacea, on the other hand, have a porcelanous and partly nacreous shell structure; mantle lobes that are generally connected; well-developed siphons; and specialized hinge teeth. In most, gills are eulamellibranch. 1935 taxonomy In his 1935 work Handbuch der systematischen Weichtierkunde (Handbook of Systematic Malacology), Johannes Thiele introduced a mollusc taxonomy based upon the 1909 work by Cossmann and Peyrot. Thiele's system divided the bivalves into three orders. Taxodonta consisted of forms that had taxodont dentition, with a series of small parallel teeth perpendicular to the hinge line. Anisomyaria consisted of forms that had either a single adductor muscle or one adductor muscle much smaller than the other. Eulamellibranchiata consisted of forms with ctenidial gills. The Eulamellibranchiata was further divided into four suborders: Schizodonta, Heterodonta, Adapedonta and Anomalodesmata. Taxonomy based upon hinge tooth morphology The systematic layout presented here follows Newell's 1965 classification based on hinge tooth morphology (all taxa marked † are extinct). The monophyly of the subclass Anomalodesmata is disputed. The standard view now is that it resides within the subclass Heterodonta. Taxonomy based upon gill morphology An alternative systematic scheme exists using gill morphology. This distinguishes between Protobranchia, Filibranchia and Eulamellibranchia. The first corresponds to Newell's Palaeotaxodonta and Cryptodonta, the second to his Pteriomorphia, with the last corresponding to all other groups. In addition, Franc separated the Septibranchia from his eulamellibranchs because of the morphological differences between them. The septibranchs belong to the superfamily Poromyoidea and are carnivorous, having a muscular septum instead of filamentous gills. 2010 taxonomy In May 2010, a new taxonomy of the Bivalvia was published in the journal Malacologia. In compiling this, the authors used a variety of phylogenetic information including molecular analysis, anatomical analysis, shell morphology and shell microstructure, as well as biogeographic, paleobiogeographic and stratigraphic information. 
In this classification 324 families are recognized as valid, 214 of which are known exclusively from fossils and 110 of which occur in the recent past, with or without a fossil record. This classification has since been adopted by WoRMS. Proposed classification of Class Bivalvia (under the redaction of Rüdiger Bieler, Joseph G. Carter and Eugene V. Coan) (all taxa marked † are extinct):
Grade Euprotobranchia
 Order Fordillida, 2 families (2†)
 Order Tuarangiida, 1 family (1†)
Subclass Heterodonta
 Infraclass Archiheterodonta
  Order Carditida, 4 families
 Infraclass Euheterodonta
  Unassigned Euheterodonta, 4 families
  Order Pholadomyida (=Anomalodesmata), 16 families
  Order Myida, 4 families
  Order Lucinida, 2 families
  Order Venerida, 30 families
Subclass Palaeoheterodonta
 Order Trigoniida, 16 families (15†)
 Order Unionida, 15 families (8†)
Subclass Protobranchia
 Order Nuculanida, 8 families
 Order Nuculida, 3 families (1†)
 Order Solemyida, 2 families
Subclass Pteriomorphia
 Order Arcida, 7 families
 Infraclass Eupteriomorphia
  Order Ostreida, 2 families
  Suborder Pectinida, 7 families
  Suborder Limida, 1 family
  Suborder Mytilida, 1 family
  Suborder Pteriida, 4 families
Biology and health sciences
Bivalvia
Animals
202388
https://en.wikipedia.org/wiki/Spear-thrower
Spear-thrower
A spear-thrower, spear-throwing lever, or atlatl (Nahuatl ahtlatl) is a tool that uses leverage to achieve greater velocity in dart or javelin-throwing, and includes a bearing surface that allows the user to store energy during the throw. It may consist of a shaft with a cup or a spur at the end that supports and propels the butt of the spear. It is usually about as long as the user's arm or forearm. The user holds the spear-thrower in one hand, gripping near the end farthest from the cup. The spear, or dart, is much longer than the thrower; its butt end rests in the cup, or engages the spur, while the shaft rests along the thrower. The user can steady the spear with the index finger and thumb of the same hand that grips the thrower with the remaining fingers. The user reaches back with the spear pointed at the target, then makes an overhand throwing motion with the thrower while letting go of the spear. The dart is thrown by the action of the upper arm and wrist. The throwing arm together with the atlatl acts as a lever. The spear-thrower is a low-mass, fast-moving extension of the throwing arm, increasing the length of the lever. This extra length allows the thrower to impart force to the dart over a longer distance, thus imparting more energy and higher speeds. Common modern ball throwers (such as molded plastic arms used for throwing tennis balls for dogs to fetch) use the same principle. A spear-thrower is a long-range weapon and can readily impart high speeds to a projectile. Spear-throwers appear early in human history in several parts of the world, and have survived in use in traditional societies until the present day, as well as being revived in recent years for sporting purposes. In the United States, the Nahuatl word atlatl is often used for revived uses of spear-throwers (or its Mayan equivalent); in Australia, the Dharug word woomera is used instead. The ancient Greeks and Romans used a leather thong or loop, known as an ankule or amentum, as a spear-throwing device. The Swiss arrow is a weapon that works similarly to the amentum. Design Spear-thrower designs may include improvements such as thong loops to fit the fingers, the use of flexible shafts, or stone balance weights. Dart shafts can be made thinner and highly flexible for added power and range, and the fletching can be spiralized to add spin to the dart, making it more stable and accurate. Darts resemble large arrows or small spears and are typically 9 to 16 mm (3/8 to 5/8 in) in diameter. Another important improvement to the spear-thrower's design was the introduction of a small weight (between 60 and 80 grams) strapped to its midsection. Some atlatlists maintain that stone weights add mass to the shaft of the device, causing resistance to acceleration when swung and resulting in a more forceful and accurate launch of the dart. Others claim that spear-thrower weights add only stability to a cast, resulting in greater accuracy. 
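The velocity gain from the extra lever length can be illustrated with a rough kinematic sketch. The arm length, atlatl length, and rotation rate below are assumed figures chosen only for illustration and are not taken from this article; the point is simply that, for a given rotation rate of the throwing motion, the speed of the release point grows in proportion to the effective lever length.

```python
import math

def tip_speed(lever_length_m: float, angular_speed_deg_per_s: float) -> float:
    """Linear speed of the lever tip: v = omega * r (rigid-rotation approximation)."""
    omega = math.radians(angular_speed_deg_per_s)  # convert deg/s to rad/s
    return omega * lever_length_m

# Illustrative, assumed values (not taken from the article):
arm = 0.7         # metres: effective lever of an unaided overhand throw
atlatl = 0.5      # metres: extra length added by the spear-thrower
omega_deg = 1500  # degrees per second: peak rotation rate of the throwing motion

v_hand = tip_speed(arm, omega_deg)           # release speed at the hand alone
v_spur = tip_speed(arm + atlatl, omega_deg)  # release speed at the atlatl spur

print(f"hand speed ≈ {v_hand:.1f} m/s")
print(f"spur speed ≈ {v_spur:.1f} m/s")
print(f"gain       ≈ {v_spur / v_hand:.2f}x, i.e. (arm + atlatl) / arm")
```

In practice the gain is smaller than this rigid-lever ideal, because the arm cannot swing the added length and mass quite as fast, but the proportional relationship is the essence of the design.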
Based on previous work done by William S. Webb, William R. Perkins claims that spear-thrower weights, commonly called "bannerstones" and characterized by a centered, drilled hole in a symmetrically shaped, carved or ground stone, wide and flat and thus a little like a large wingnut, are an improvement to the design that created a silencing effect when swung. The use of the device would reduce the telltale "zip" of a swung atlatl to a more subtle "woof" sound that did not travel as far and was less likely to alert prey. Robert Berg's theory is that the bannerstone was carried by hunters as a spindle weight to produce string from natural fibers gathered while hunting, for the purpose of tying on fletching and hafting stone or bone points. Woomera The woomera, or miru, allows hunters to apply more force and achieve greater speed and distance when launching their spears. A woomera is usually made from mulga wood and serves many other purposes, such as a receptacle for mixing ochre for traditional ceremonial paintings, a tool for deflecting enemies' spears in battle, a fire-making saw, or a utensil for chopping game. The tool is elongated and comes in a concave, elliptical shape. Artistic designs Several Stone Age spear-throwers (usually now incomplete) are decorated with carvings of animals: the British Museum has one decorated with a mammoth, and there is one decorated with a hyena in France. Many pieces of decorated bone may have belonged to bâtons de commandement. The Aztec atlatl was often decorated with snake designs and feathers, potentially evocative of its association with Ehecatl, the Aztec wind deity. History Wooden darts were known at least since the Middle Paleolithic (Schöningen, Torralba, Clacton-on-Sea and Kalambo Falls). While the spear-thrower is capable of casting a dart well over one hundred meters, it is most accurately used at distances of twenty meters or less. The spear-thrower is believed to have been in use by Homo sapiens since the Upper Paleolithic (around 30,000 years ago). Most stratified European finds come from the Magdalenian (late Upper Paleolithic). In this period, elaborate pieces, often in the form of animals, are common. The earliest reliable data concerning atlatls have come from several caves in France dating to the Upper Paleolithic, about 21,000 to 17,000 years ago. The earliest known example is a 17,500-year-old Solutrean atlatl made of reindeer antler, found at Combe Saunière (Dordogne), France. It is possible that the atlatl was invented earlier than this, as Mungo Man from 42,000 BP displays arthritis in his right elbow, a pathology referred to today as "atlatl elbow", resulting from many years of forceful torsion from using an atlatl. At present, there is no evidence for the use of atlatls in Africa. Peoples such as the Maasai and Khoisan throw spears without any aids, but hunting with hand-thrown spears alone is limited because the animal must be close and already immobile. During the Ice Age, the atlatl was used by humans to hunt megafauna. Ice Age megafauna offered a large food supply when other game was limited, and the atlatl gave more power to pierce their thicker skin. In this time period, atlatls were usually made of wood or bone. Improvements made to the spear's point made it more efficient as well. In Europe, the spear-thrower was supplemented by the bow and arrow in the Epi-Paleolithic. Along with improved ease of use, the bow offered the advantage that the bulk of the elastic energy is stored in the throwing device, rather than the projectile. Arrow shafts can therefore be much smaller and have looser tolerances for spring constant and weight distribution than atlatl darts. This allowed for more forgiving flint knapping: dart heads designed for a particular spear-thrower tend to differ in mass by only a few percent. 
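A back-of-the-envelope calculation, using purely illustrative numbers that are not drawn from this article, shows the kind of sensitivity involved: if a throw delivers roughly the same energy to the dart regardless of small mass differences, launch speed scales as the inverse square root of dart mass, so a few percent of extra point mass costs roughly half that percentage in speed (and also changes how the flexible shaft loads against the spur).

```python
# Sensitivity of launch speed to dart mass for a fixed energy input (illustrative only).
# v = sqrt(2 * E / m), so a small mass error dm/m changes speed by about -0.5 * dm/m.
energy_j = 80.0          # assumed energy delivered to the dart, in joules
nominal_mass_kg = 0.150  # assumed dart mass, in kilograms

def launch_speed(mass_kg: float, energy_j: float) -> float:
    return (2.0 * energy_j / mass_kg) ** 0.5

v_nominal = launch_speed(nominal_mass_kg, energy_j)
for error in (0.02, 0.05, 0.10):  # dart head 2%, 5%, 10% heavier than intended
    v = launch_speed(nominal_mass_kg * (1.0 + error), energy_j)
    print(f"{error:.0%} heavier -> {v:.1f} m/s ({(v / v_nominal - 1.0):+.1%} "
          f"vs nominal {v_nominal:.1f} m/s)")
```

This is only a first-order sketch; the matching of shaft flex to the throw, mentioned above, matters as well.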
By the Iron Age, the amentum, a strap attached to the shaft, was the standard European mechanism for throwing lighter javelins. The amentum gives not only range, but also spin to the projectile. The spear-thrower was used by early Americans as well. It may have been introduced to America during the immigration across the Bering Land Bridge, and despite the later introduction of the bow and arrow, atlatl use was widespread at the time of first European contact. Atlatls are represented in the art of multiple pre-Columbian cultures, including the Basketmaker culture in the American Southwest, the Maya in the Yucatán Peninsula, and the Moche in the Andes of South America. Atlatls were especially prominent in the iconography of the warriors of the Teotihuacan culture of Central Mexico. A ruler from Teotihuacan named Spearthrower Owl is an important figure described in Mayan stelae. Complete wooden spear-throwers have been found on dry sites in the western United States and in waterlogged environments in Florida and Washington. Several Amazonian tribes also used the atlatl for fishing and hunting. Some even preferred this weapon over the bow and arrow and used it not only in combat but also in sports competitions. Such was the case with the Tarairiú, a Tapuya tribe of migratory foragers and raiders inhabiting the forested mountains and highland savannahs of Rio Grande do Norte in mid-17th-century Brazil. Anthropologist Harald Prins offers the following description: The atlatl, as used by these Tarairiu warriors, was unique in shape. About long and wide, this spear thrower was a tapering piece of wood carved of brown hard-wood. Well-polished, it was shaped with a semi-circular outer half and had a deep groove hollowed out to receive the end of the javelin, which could be engaged by a horizontal wooden peg or spur lashed with a cotton thread to the proximal and narrower end of the throwing board, where a few scarlet parrot feathers were tied for decoration. [Their] darts or javelins ... were probably made of a two-meter long wooden cane with a stone or long and serrated hard-wood point, sometimes tipped with poison. Equipped with their uniquely grooved atlatl, they could hurl their long darts from a great distance with accuracy, speed, and such deadly force that these easily pierced through the protective armor of the Portuguese or any other enemy. The spear-thrower was an important part of life, hunting, and religion in the ancient Andes. The earliest known spear-thrower of South America had a proximal handle piece and is commonly referred to as an estólica in Spanish references to indigenous Andean culture. Estólica and atlatl are therefore synonymous terms. The estólica is best known archaeologically from the Nazca culture and the Inca civilization, but the earliest examples are known from associations with Chinchorro mummies. The estólica is also known from the Moche culture, including detailed representations on painted pottery, and in representations on textiles of the Wari culture. The Andean estólica had a wooden body with a hook that was made of stone or metal. These hooks have been found at multiple highland sites including Cerro Baúl, a site of the Wari culture. In the Andes, the tips of darts were often capped with metal. Arrow points commonly had the same appearance as these Andean tips. The length of a common estólica was about 50 cm. Estólica handles were commonly carved and modeled to represent real-world subjects such as animals and deities. 
Examples of estólicas with no handle pieces have been interpreted as children's toys. Archaeologists found decorated examples in the Moche culture burial of the Lady of Cao at El Brujo in the Chicama valley. At her feet was a group of twenty-three atlatls with handle pieces that depicted birds. These "theatrical" estólicas are different from normal weapons. They are much longer (80–100 cm) than the regular examples (50–60 cm). Archaeologists John Whittaker and Kathryn Kamp, both faculty from Grinnell College, speculate that they might have been part of a ceremony before the burial or symbolic references to indicate that the royal woman in the burial had been a warrior. Estólicas are depicted along with maces, clubs, and shields on Moche vessels that illustrate warfare. The atlatl appears in the artwork of Chavín de Huantar, such as on the Black and White Portal. Among the Tlingit of Southeast Alaska, approximately one dozen old, elaborately carved specimens they call "shee áan" (sitting on a branch) remain in museum and private collections, one having sold at auction for more than $100,000. In September 1997, an atlatl dart fragment, carbon dated to 4360 ± 50 14C yr BP (TO 6870), was found in an ice patch on mountain Thandlät, the first of the southern Yukon Ice Patches to be studied. The people of New Guinea and Aboriginal people in Australia also use spear-throwers. In the mid-Holocene, Aboriginal people in Australia developed spear-throwers, known as woomeras. As well as its practical use as a hunting weapon, it may also have had social effects. John Whittaker suggests the device was a social equalizer in that it requires skill rather than muscle power alone. Thus, women and children would have been able to participate in hunting. Whittaker said the stone-tipped projectiles from the Aztec atlatl were not powerful enough to penetrate Spanish steel plate armor, but they were strong enough to penetrate the mail, leather and cotton armor that most Spanish soldiers wore. Whittaker said the Aztecs started their battles with atlatl darts followed by melee combat using the macuahuitl. Another type of Stone Age artefact that is sometimes excavated is the bâton de commandement. These are shorter, normally less than one foot long, and made of antler, with a hole drilled through them. When first found in the nineteenth century, they were interpreted by French archaeologists to be symbols of authority, like a modern field marshal's baton, and so named ("batons of command"). Though debate over their function continues, tests with replicas have found them effective aids to spear or dart throwing when used with a cord. Another theory is that they were "arrow-straighteners".

Bian Jian ("Spear sling")

Bian Jian (, lit. 'Whip arrow') is a unique spear-thrower that was used during the Song period. It can be described as a long staff sling that throws a spear-sized dart instead of a rock-like projectile. It requires two operators, unlike other spear-throwers. It should not be confused with another Bian Jian ().

Modern times

In modern times, some people have resurrected the dart thrower for sports, often using the term atlatl, throwing for distance, for accuracy, or both. The World Atlatl Association was formed in 1987 to promote the atlatl. Throws of almost have been recorded. Colleges reported to field teams in this event include Grinnell College in Iowa, Franklin Pierce University in New Hampshire, Alfred University in New York, and the University of Vermont. Atlatls are sometimes used in modern times for hunting.
In the U.S., the Pennsylvania Game Commission has given preliminary approval for legalization of the atlatl for hunting certain animals. The animals that would be allowed to atlatl hunters have yet to be determined, but particular consideration has been given to deer. Currently, Alabama allows the atlatl for deer hunting, while a handful of other states list the device as legal for rough fish (those not sought for sport or food), some game birds and non-game mammals. Starting in 2007, Missouri allowed use of the atlatl for hunting wildlife (excluding deer and turkey), and starting in 2010, also allowed deer hunting during the firearms portion of the deer season (except the muzzleloader portion). Starting in 2012, Missouri allowed the use of atlatls during the fall archery deer and turkey hunting seasons and, starting in 2014, allowed the use of atlatls during the spring turkey hunting season as well. Missouri also allows use of the atlatl for fishing, with some restrictions (similar to the restrictions for spearfishing and bowfishing). The Nebraska Game and Parks Commission allows the use of atlatls for the taking of deer . The woomera is still used today by some Aboriginal people for hunting in Australia. Yup'ik Eskimo hunters still use the atlatl, known locally as "nuqaq" (nook-ak), in villages near the mouth of the Yukon River for seal hunting. Competitions There are numerous atlatl competitions held every year, with spears and spear-throwers built using both ancient and modern materials. Events are often held at parks, such as Letchworth State Park in New York, Cahokia Mounds State Historic Site in Illinois, or Valley of Fire State Park in Nevada. Atlatl associations around the world host a number of local atlatl competitions. Chimney Point State Historic Site in Addison, Vermont hosts the annual Northeast Open Atlatl Championship. In 2009, the Fourteenth Annual Open Atlatl Championship was held on Saturday and Sunday, September 19 and 20. On the Friday before the Championship, a workshop was held to teach modern and traditional techniques of atlatl and dart construction, flint knapping, hafting stone points, and cordage making. Competitions may be held in conjunction with other events, such as the Ohio Pawpaw Festival, or at the Bois D'Arc Primitive Skills Gathering and Knap-in, held every September in southern Missouri. Atlatl events commonly include the International Standard Accuracy Competition (ISAC), in which contestants throw ten times at a bull's-eye target. Other contests involving different distances or terrain may also be included, usually testing the atlatlist's accuracy rather than distance throwing. Popular culture In the sixth episode of the fourth season of the television competition Top Shot, the elimination round consisted of two contestants using the atlatl at ranges of 30, 45 and 60 feet. An atlatl was the weapon of choice of a serial killer in the 2020 action-thriller The Silencing, where it is erroneously described as an illegal weapon. Lydia Demarek, a character in the popular fantasy novel series Brotherband, owns and often uses an atlatl. In the light novel series Evangelion ANIMA, an enemy known as a Victor uses a form of atlatl as a weapon.
Excavata
Excavata is an extensive and diverse but paraphyletic group of unicellular Eukaryota. The group was first suggested by Simpson and Patterson in 1999 and the name latinized and assigned a rank by Thomas Cavalier-Smith in 2002. It contains a variety of free-living and symbiotic protists, and includes some important parasites of humans such as Giardia and Trichomonas. Excavates were formerly considered to be included in the now obsolete Protista kingdom. They were distinguished from other lineages based on electron-microscopic information about how the cells are arranged (they have a distinctive ultrastructural identity). They are considered to be a basal flagellate lineage. On the basis of phylogenomic analyses, the group was shown to contain three widely separated eukaryote groups, the discobids, metamonads, and malawimonads. A current view of the composition of the excavates is given below, indicating that the group is paraphyletic. Except for some Euglenozoa, all are non-photosynthetic. Characteristics Most excavates are unicellular, heterotrophic flagellates. Only some Euglenozoa are photosynthetic. In some (particularly anaerobic intestinal parasites), the mitochondria have been greatly reduced. Some excavates lack "classical" mitochondria, and are called "amitochondriate", although most retain a mitochondrial organelle in greatly modified form (e.g. a hydrogenosome or mitosome). Among those with mitochondria, the mitochondrial cristae may be tubular, discoidal, or in some cases, laminar. Most excavates have two, four, or more flagella. Many have a conspicuous ventral feeding groove with a characteristic ultrastructure, supported by microtubules—the "excavated" appearance of this groove giving the organisms their name. However, various groups that lack these traits are considered to be derived excavates based on genetic evidence (primarily phylogenetic trees of molecular sequences). The Acrasidae slime molds are the only excavates to exhibit limited multicellularity. Like other cellular slime molds, they live most of their life as single cells, but will sometimes assemble into larger clusters. Proposed group Excavate relationships were always uncertain, suggesting that they are not a monophyletic group. Phylogenetic analyses often do not place malawimonads on the same branch as the other Excavata. Excavates were thought to include multiple groups: Discoba or JEH clade Euglenozoa and Heterolobosea (Percolozoa) or Eozoa (as named by Cavalier-Smith) appear to be particularly close relatives, and are united by the presence of discoid cristae within the mitochondria (Superphylum Discicristata). A close relationship has been shown between Discicristata and Jakobida, the latter having tubular cristae like most other protists, and hence were united under the taxon name Discoba, which was proposed for this supposedly monophyletic group. Metamonads Metamonads are unusual in not having classical mitochondria—instead they have hydrogenosomes, mitosomes or uncharacterised organelles. The oxymonad Monocercomonoides is reported to have completely lost homologous organelles. There are competing explanations. Malawimonads The malawimonads have been proposed to be members of Excavata owing to their typical excavate morphology, and phylogenetic affinity to other excavate groups in some molecular phylogenies. However, their position among eukaryotes remains elusive. Ancyromonads Ancyromonads are small free-living cells with a narrow longitudinal groove down one side of the cell. 
The ancyromonad groove is not used for "suspension feeding", unlike in "typical excavates" (e.g. malawimonads, jakobids, Trimastix, Carpediemonas, Kiperferlia, etc). Ancyromonads instead capture prokaryotes attached to surfaces. The phylogenetic placement of ancyromonads is poorly understood (in 2020), however some phylogenetic analyses place them as close relatives of malawimonads. Evolution Origin of the Eukaryotes The conventional explanation for the origin of the Eukaryotes is that a heimdallarchaeian or another Archaea acquired an alphaproteobacterium as an endosymbiont, and that this became the mitochondrion, the organelle providing oxidative respiration to the eukaryotic cell. Caesar al Jewari and Sandra Baldauf argue instead that the Eukaryotes possibly started with an endosymbiosis event of a Deltaproteobacterium or Gammaproteobacterium, accounting for the otherwise unexplained presence of anaerobic bacterial enzymes in Metamonada. The sister of the Preaxostyla within Metamonada represents the rest of the Eukaryotes which acquired an Alphaproteobacterium. In their scenario, the hydrogenosome and mitosome, both conventionally considered "mitochondrion-derived organelles", would predate the mitochondrion, and instead be derived from the earlier symbiotic bacterium. Phylogeny In 2023, using molecular phylogenetic analysis of 186 taxa, Al Jewari and Baldauf proposed a phylogenetic tree with the metamonad Parabasalia as basal Eukaryotes. Discoba and the rest of the Eukaryota appear to have emerged as sister taxon to the Preaxostyla, incorporating a single alphaproteobacterium as mitochondria by endosymbiosis. Thus the Fornicata are more closely related to e.g. animals than to Parabasalia. The rest of the Eukaryotes emerged within the Excavata as sister of the Discoba; as they are within the same clade but are not cladistically considered part of the Excavata yet, the Excavata are in this analysis highly paraphyletic. The Anaeramoeba are associated with Parabasalia, but could turn out to be more basal as the root of a tree is often difficult to pinpoint.
Foot (unit)
The foot (standard symbol: ft) is a unit of length in the British imperial and United States customary systems of measurement. The prime symbol, , is commonly used to represent the foot. In both customary and imperial units, one foot comprises 12 inches, and one yard comprises three feet. Since an international agreement in 1959, the foot is defined as equal to exactly 0.3048 meters. Historically, the "foot" was a part of many local systems of units, including the Greek, Roman, Chinese, French, and English systems. It varied in length from country to country, from city to city, and sometimes from trade to trade. Its length was usually between 250 mm and 335 mm and was generally, but not always, subdivided into 12 inches or 16 digits. The United States is the only industrialized country that uses the (international) foot in preference to the meter in its commercial, engineering, and standards activities. The foot is legally recognized in the United Kingdom; road distance signs must use imperial units (however, distances on road signs are always marked in miles or yards, not feet; bridge clearances are given in meters as well as feet and inches), while its usage is widespread among the British public as a measurement of height. The foot is recognized as an alternative expression of length in Canada. Both the UK and Canada have partially metricated their units of measurement. The measurement of altitude in international aviation (the flight level unit) is one of the few areas where the foot is used outside the English-speaking world. The most common plural of foot is feet. However, the singular form may be used like a plural when it is preceded by a number, as in "he is six foot tall." Historical origin Historically, the human body has been used to provide the basis for units of length. The foot of an adult European-American male is typically about 15.3% of his height, giving a person of a foot-length of about , on average. Archaeologists believe that, in the past, the people of Egypt, India, and Mesopotamia preferred the cubit, while the people of Rome, Greece, and China preferred the foot. Under the Harappan linear measures, Indus cities during the Bronze Age used a foot of and a cubit of . The Egyptian equivalent of the foot—a measure of four palms or 16 digits—was known as the and has been reconstructed as about . The Greek foot (, ) had a length of of a stadion, one stadion being about ; therefore a foot was, at the time, about . Its exact size varied from city to city and could range between and , but lengths used for temple construction appear to have been about to ; the former was close to the size of the Roman foot. The standard Roman foot () was normally about (97% of today's measurement), but in some provinces, particularly Germania Inferior, the so-called (foot of Nero Claudius Drusus) was sometimes used, with a length of about . (In reality, this foot predated Drusus.) Originally both the Greeks and the Romans subdivided the foot into 16 digits, but in later years, the Romans also subdivided the foot into 12 (from which both the English words "inch" and "ounce" are derived). After the fall of the Roman Empire, some Roman traditions were continued but others fell into disuse. In AD 790 Charlemagne attempted to reform the units of measure in his domains. His units of length were based on the and in particular the , the distance between the fingertips of the outstretched arms of a man. The has 6 (feet) each of . 
He was unsuccessful in introducing a standard unit of length throughout his realm: an analysis of the measurements of Charlieu Abbey shows that during the 9th century the Roman foot of was used; when it was rebuilt in the 10th century, a foot of about was used. At the same time, monastic buildings used the Carolingian foot of . The procedure for verification of the foot as described in the 16th century posthumously published work by Jacob Köbel in his book is: England The Neolithic long foot, first proposed by archeologists Mike Parker Pearson and Andrew Chamberlain, is based upon calculations from surveys of Phase 1 elements at Stonehenge. They found that the underlying diameters of the stone circles had been consistently laid out using multiples of a base unit amounting to 30 long feet, which they calculated to be 1.056 of a modern international foot (thus 12.672 inches or 0.3219 m). Furthermore, this unit is identifiable in the dimensions of some stone lintels at the site and in the diameter of the "southern circle" at nearby Durrington Walls. Evidence that this unit was in widespread use across southern Britain is available from the Folkton Drums from Yorkshire (neolithic artifacts, made from chalk, with circumferences that exactly divide as integers into ten long feet) and a similar object, the Lavant drum, excavated at Lavant, Sussex, again with a circumference divisible as a whole number into ten long feet. The measures of Iron Age Britain are uncertain and proposed reconstructions such as the Megalithic Yard are controversial. Later Welsh legend credited Dyfnwal Moelmud with the establishment of their units, including a foot of 9 inches. The Belgic or North German foot of was introduced to England either by the Belgic Celts during their invasions prior to the Romans or by the Anglo-Saxons in the 5th and 6th century. Roman units were introduced following their invasion in AD 43. Following the Roman withdrawal and Saxon invasions, the Roman foot continued to be used in the construction crafts while the Belgic foot was used for land measurement. Both the Welsh and Belgic feet seem to have been based on multiples of the barleycorn, but by as early as 950 the English kings seem to have (ineffectually) ordered measures to be based upon an iron yardstick at Winchester and then London. Henry I was said to have ordered a new standard to be based upon the length of his own arm and, by the Act concerning the Composition of Yards and Perches traditionally credited to Edward I or II, the statute foot was a different measure, exactly of the old (Belgic) foot. The barleycorn, inch, ell, and yard were likewise shrunk, while rods and furlongs remained the same. The ambiguity over the state of the mile was resolved by the 1593 Act against Converting of Great Houses into Several Tenements and for Restraint of Inmates and Inclosures in and near about the City of London and Westminster, which codified the statute mile as comprising 5,280 feet. The differences among the various physical standard yards around the world, revealed by increasingly powerful microscopes, eventually led to the 1959 adoption of the international foot defined in terms of the meter. Definition International foot The international yard and pound agreement of July 1959 defined the length of the international yard in the United States and countries of the Commonwealth of Nations as exactly 0.9144 meters. Consequently, since a foot is one third of a yard, the international foot is defined to be equal to exactly 0.3048 meters. 
This was 2 ppm shorter than the previous US definition and 1.7 ppm longer than the previous British definition. The 1959 agreement concluded a series of step-by-step events, set off in particular by the British Standards Institution's adoption of a scientific standard inch of 25.4 millimeters in 1930. Symbol The IEEE standard symbol for a foot is "ft". In some cases, the foot is denoted by a prime, often approximated by an apostrophe, and the inch by a double prime; for example, 2feet 4 inches is sometimes denoted 2′4″. Imperial units In Imperial units, the foot was defined as  yard, with the yard being realized as a physical standard (separate from the standard meter). The yard standards of the different Commonwealth countries were periodically compared with one another. The value of the United Kingdom primary standard of the yard was determined in terms of the meter by the National Physical Laboratory in 1964 to be , implying a pre-1959 UK foot of . The UK adopted the international yard for all purposes through the Weights and Measures Act 1963, effective January 1, 1964. Survey foot When the international foot was defined in 1959, a great deal of survey data was already available based on the former definitions, especially in the United States and in India. The small difference between the survey foot and the international foot would not be detectable on a survey of a small parcel, but becomes significant for mapping, or when the state plane coordinate system (SPCS) is used in the US, because the origin of the system may be hundreds of thousands of feet (hundreds of miles) from the point of interest. Hence the previous definitions continued to be used for surveying in the United States and India for many years, and are denoted survey feet to distinguish them from the international foot. The United Kingdom was unaffected by this problem, as the retriangulation of Great Britain (1936–62) had been done in meters. US survey foot In the United States, the foot was defined as 12 inches, with the inch being defined by the Mendenhall Order of 1893 via 39.37 inches = 1 m (making a US foot exactly meters, approximately ). On December 31, 2022, the National Institute of Standards and Technology, the National Geodetic Survey, and the United States Department of Commerce deprecated use of the US survey foot and recommended conversion to either the meter or the international foot (0.3048 m). However, the historic relevance of the US survey foot persists, as the Federal Register notes: State legislation is also important for determining the conversion factor to be used for everyday land surveying and real estate transactions, although the difference (two parts per million) is of no practical significance given the precision of normal surveying measurements over short distances (usually much less than a mile). Out of 50 states and six other jurisdictions, 40 have legislated that surveying measures should be based on the US survey foot, six have legislated that they be made on the basis of the international foot, and ten have not specified. Indian survey foot The Indian survey foot is defined as exactly , presumably derived from a measurement of the previous Indian standard of the yard. The current National Topographic Database of the Survey of India is based on the metric WGS-84 datum, which is also used by the Global Positioning System. 
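Since both definitions are exact ratios, the tiny difference and its practical effect over state-plane distances can be checked directly. The following is a minimal sketch in Python using only the exact values quoted above (0.3048 m for the international foot, 1200/3937 m for the US survey foot); the 500,000 ft coordinate is an illustrative assumption, not a figure from the text.

```python
from fractions import Fraction

# Exact definitions quoted above
INTERNATIONAL_FOOT = Fraction(3048, 10000)  # 0.3048 m exactly (1959 agreement)
US_SURVEY_FOOT = Fraction(1200, 3937)       # metres, from 39.37 in = 1 m (Mendenhall Order)

# Relative difference is about 2 parts per million
ppm = float((US_SURVEY_FOOT - INTERNATIONAL_FOOT) / INTERNATIONAL_FOOT) * 1e6
print(f"US survey foot is larger by {ppm:.3f} ppm")

# Over a hypothetical state-plane coordinate of 500,000 ft,
# the two definitions disagree by roughly 30 cm:
coord_ft = 500_000
diff_m = coord_ft * float(US_SURVEY_FOOT - INTERNATIONAL_FOOT)
print(f"Difference over {coord_ft:,} ft: {diff_m * 1000:.1f} mm")
```

The difference is invisible on a small parcel survey but, as the text notes, becomes significant when coordinates are hundreds of thousands of feet from the system origin.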
Historical use

Metric foot

An ISO 2848 measure of 3 basic modules (30 cm) is called a "metric foot", but there were earlier distinct definitions of a metric foot during metrication in France and Germany.

France

In 1799 the meter became the official unit of length in France. This was not fully enforced, and in 1812 Napoleon introduced the system of mesures usuelles which restored the traditional French measurements in the retail trade, but redefined them in terms of metric units. The foot, or pied métrique, was defined as one third of a meter. This unit continued in use until 1837.

Germany

In southwestern Germany in 1806, the Confederation of the Rhine was founded and three different reformed feet were defined, all of which were based on the metric system:
In Hesse, the Fuß (foot) was redefined as 25 cm.
In Baden, the Fuß was redefined as 30 cm.
In the Palatinate, the Fuß was redefined as 33⅓ cm (as in France).

Other obsolete feet

Prior to the introduction of the metric system, many European cities and countries used the foot, but it varied considerably in length: the local foot in Ypres, Belgium, was 273.8 millimeters (10.78 in), while that in Venice was 347.73 millimeters (13.690 in). Lists of conversion factors between the various units of measure were given in many European reference works, including:
Traité, Paris – 1769
Palaiseau – Bordeaux: 1816
de Gelder, Amsterdam and The Hague – 1824
Horace, Brussels – 1840
Noback & Noback (2 volumes), Leipzig – 1851
Bruhns, Leipzig – 1881
Many of these standards were peculiar to a particular city, especially in Germany (which, before German unification in 1871, consisted of many kingdoms, principalities, free cities and so on). In many cases the length of the unit was not uniquely fixed: for example, the English foot was stated as 11 pouces 2.6 lignes (French inches and lines) by Picard, 11 pouces 3.11 lignes by Maskelyne, and 11 pouces 3 lignes by D'Alembert. Most of the various feet in this list ceased to be used when the countries adopted the metric system. The Netherlands and modern Belgium adopted the metric system in 1817, having used the under Napoleon, and the newly formed German Empire adopted the metric system in 1871. The palm (typically 200–280 mm, i.e. 7 to 11 inches) was used in many Mediterranean cities instead of the foot. Horace Doursther, whose reference was published in Belgium, which had the smallest foot measurements, grouped both units together, while J. F. G. Palaiseau devoted three chapters to units of length: one for linear measures (palms and feet); one for cloth measures (ells); and one for distances traveled (miles and leagues).

Obsolete feet details

In Belgium, the words pied (French) and voet (Dutch) would have been used interchangeably.
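The historical values quoted in this section can be compared directly by converting them to a common unit. The short sketch below simply restates figures from the text (the pied métrique as one third of a meter, the reformed German feet, and the Ypres and Venice feet); no other historical values are implied.

```python
# Historical "feet" mentioned above, expressed in metres.
historical_feet_m = {
    "pied metrique (France, 1812)": 1 / 3,  # defined as one third of a metre
    "Fuss (Hesse)": 0.25,
    "Fuss (Baden)": 0.30,
    "Fuss (Palatinate)": 1 / 3,             # same definition as in France
    "foot of Ypres (Belgium)": 0.2738,
    "foot of Venice": 0.34773,
}

INCH_M = 0.0254  # exact, one twelfth of the international foot of 0.3048 m

for name, metres in historical_feet_m.items():
    print(f"{name:30s} {metres * 1000:7.1f} mm = {metres / INCH_M:6.2f} in")
```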
Waxwing
The waxwings are three species of passerine birds classified in the genus Bombycilla. They are pinkish-brown and pale grey with distinctive smooth plumage in which many body feathers are not individually visible, a black and white eyestripe, a crest, a square-cut tail and pointed wings. Some of the wing feathers have red tips, the resemblance of which to sealing wax gives these birds their common name. According to most authorities, this is the only genus placed in the family Bombycillidae, although sometimes the family is extended to include related taxa that are more usually included in separate families: silky flycatchers (Ptiliogonatidae (e.g. Phainoptila)), Hypocolius (Hypocoliidae), Hylocitrea (Hylocitreidae), palmchats (Dulidae) and the Hawaiian honeyeaters (Mohoidae). There are three species: the Bohemian waxwing (B. garrulus), the Japanese waxwing (B. japonica) and the cedar waxwing (B. cedrorum). Waxwings are not long-distance migrants, but move nomadically outside the breeding season. Waxwings mostly feed on fruit, but at times of year when fruits are unavailable they feed on sap, buds, flowers and insects. They catch insects by gleaning through foliage or in mid-air. They often nest near water, the female building a loose nest at the fork of a branch, well away from the trunk of the tree. She also incubates the eggs, the male bringing her food to the nest, and both sexes help rear the young. Waxwings appear in art and have been mentioned in literature. Etymology Bombycilla, the genus name, is Vieillot's attempt at Latin for "silktail", translating the German name Seidenschwänze. Vieillot analyzed motacilla, Latin for wagtail, as mota for "move" and cilla, which he thought meant "tail"; however, Motacilla actually combines motacis, a mover, with the diminutive suffix -illa. He then combined this "cilla" with the Latin bombyx, meaning silk. Description Waxwings are characterised by soft silky plumage. They have unique red tips to some of the wing feathers where the shafts extend beyond the barbs; in the Bohemian and cedar waxwings, these tips look like sealing wax, and give the group its common name. The legs are short and strong, and the wings are pointed. The male and female have the same plumage. All three species have mainly brown plumage, a black line through the eye and black under the chin, a square-ended tail with a red or yellow tip, and a pointed crest. The bill, eyes, and feet are dark. The adults moult between August and November, but may suspend their moult and continue after migration. Calls are high-pitched, buzzing or trilling monosyllables. Behavior Diet These are arboreal birds that breed in northern forests. Their main food is fruit, which they eat from early summer (strawberries, mulberries, and serviceberries) through late summer and fall (raspberries, blackberries, cherries, and honeysuckle berries) into late fall and winter (juniper berries, grapes, crabapples, mountain-ash fruits, rose hips, cotoneaster fruits, dogwood berries, and mistletoe berries). They pluck fruit from a perch or occasionally while hovering. In spring they replace fruit with sap, buds, and flowers. In the warmer part of the year they catch many insects by gleaning or in midair, and often nest near water where flying insects are abundant. Reproduction Waxwings also choose nest sites in places with rich supplies of fruit and breed late in the year to take advantage of summer ripening. However, they may start courting as early as the winter. 
Pairing includes a ritual in which mates pass a fruit or small inedible object back and forth several times until one eats it (if it is a fruit). After this they may copulate. Many pairs may nest close together in places with good food supplies, and a pair does not defend a territory—perhaps the reason waxwings have no true song—but a bird may attack intruders, perhaps to guard its mate. Both birds gather nest materials, but the female does most of the construction, usually on a horizontal limb or in a crotch well away from the tree trunk, at any height. She makes a loose, bulky nest of twigs, grass, and lichen, which she lines with fine grass, moss, and pine needles and may camouflage with dangling pieces of grass, flowers, lichen, and moss. The female incubates, fed by the male on the nest, but once the eggs hatch, both birds feed the young. Migration They are not true long-distance migrants, but wander erratically outside the breeding season and move south from their summer range in winter. In poor berry years huge numbers can irrupt well beyond their normal range, often in flocks that on occasion number in the thousands. A flock arrived in rural Derbyshire, England, during January 2024, feeding on berries. Taxonomy Some authorities (including the Sibley-Monroe checklist) place some other genera in the family Bombycillidae along with the waxwings. Birds that are sometimes classified in this way include the silky-flycatchers, the hypocolius, and the palm chat. Recent molecular analyses have corroborated their affinity and identified them as a clade, identifying the yellow-flanked whistler as another member. Species
Ionizing radiation
Ionizing radiation (US, ionising radiation in the UK), including nuclear radiation, consists of subatomic particles or electromagnetic waves that have sufficient energy to ionize atoms or molecules by detaching electrons from them. Some particles can travel up to 99% of the speed of light, and the electromagnetic waves are on the high-energy portion of the electromagnetic spectrum. Gamma rays, X-rays, and the higher energy ultraviolet part of the electromagnetic spectrum are ionizing radiation, whereas the lower energy ultraviolet, visible light, infrared, microwaves, and radio waves are non-ionizing radiation. Nearly all types of laser light are non-ionizing radiation. The boundary between ionizing and non-ionizing radiation in the ultraviolet area cannot be sharply defined, as different molecules and atoms ionize at different energies. The energy of ionizing radiation starts between 10 electronvolts (eV) and 33 eV. Ionizing subatomic particles include alpha particles, beta particles, and neutrons. These particles are created by radioactive decay, and almost all are energetic enough to ionize. There are also secondary cosmic particles produced after cosmic rays interact with Earth's atmosphere, including muons, mesons, and positrons. Cosmic rays may also produce radioisotopes on Earth (for example, carbon-14), which in turn decay and emit ionizing radiation. Cosmic rays and the decay of radioactive isotopes are the primary sources of natural ionizing radiation on Earth, contributing to background radiation. Ionizing radiation is also generated artificially by X-ray tubes, particle accelerators, and nuclear fission. Ionizing radiation is not immediately detectable by human senses, so instruments such as Geiger counters are used to detect and measure it. However, very high energy particles can produce visible effects on both organic and inorganic matter (e.g. the glow of water due to Cherenkov radiation) or on humans (e.g. acute radiation syndrome). Ionizing radiation is used in a wide variety of fields such as medicine, nuclear power, research, and industrial manufacturing, but presents a health hazard if proper measures against excessive exposure are not taken. Exposure to ionizing radiation causes cell damage in living tissue and, at higher doses, organ damage. In high acute doses, it will result in radiation burns and radiation sickness, and lower level doses over a protracted time can cause cancer. The International Commission on Radiological Protection (ICRP) issues guidance on ionizing radiation protection, and the effects of dose uptake on human health.

Directly ionizing radiation

Ionizing radiation may be grouped as directly or indirectly ionizing. Any charged particle with mass can ionize atoms directly by fundamental interaction through the Coulomb force if it carries sufficient kinetic energy. Such particles include atomic nuclei, electrons, muons, charged pions, protons, and energetic charged nuclei stripped of their electrons. When moving at relativistic speeds (near the speed of light, c), these particles have enough kinetic energy to be ionizing, but there is considerable speed variation. For example, a typical alpha particle moves at about 5% of c, but an electron with 33 eV (just enough to ionize) moves at about 1% of c. Two of the first types of directly ionizing radiation to be discovered are alpha particles, which are helium nuclei ejected from the nucleus of an atom during radioactive decay, and energetic electrons, which are called beta particles.
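The speed comparison above (a 33 eV electron at roughly 1% of c, a typical alpha particle at roughly 5% of c) can be reproduced with the classical kinetic-energy formula, since both speeds are small enough for it to be a good approximation. A minimal sketch; the 5 MeV alpha energy used below is an assumed typical alpha-decay energy, not a figure from the text.

```python
import math

C = 2.998e8             # speed of light, m/s
EV = 1.602e-19          # joules per electronvolt
M_ELECTRON = 9.109e-31  # kg
M_ALPHA = 6.645e-27     # kg

def classical_speed(kinetic_energy_ev, mass_kg):
    """Non-relativistic speed from kinetic energy: E = 1/2 m v^2."""
    return math.sqrt(2 * kinetic_energy_ev * EV / mass_kg)

# Electron with just enough energy to ionize a water molecule (~33 eV, see above)
v_e = classical_speed(33, M_ELECTRON)
print(f"33 eV electron: {v_e:.2e} m/s = {v_e / C:.1%} of c")  # about 1% of c

# Alpha particle with an assumed typical decay energy of 5 MeV
v_a = classical_speed(5e6, M_ALPHA)
print(f"5 MeV alpha:    {v_a:.2e} m/s = {v_a / C:.1%} of c")  # about 5% of c
```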
Natural cosmic rays are made up primarily of relativistic protons but also include heavier atomic nuclei like helium ions and HZE ions. In the atmosphere such particles are often stopped by air molecules, and this produces short-lived charged pions, which soon decay to muons, a primary type of cosmic ray radiation that reaches the surface of the earth. Pions can also be produced in large amounts in particle accelerators. Alpha particles Alpha particles consist of two protons and two neutrons bound together into a particle identical to a helium nucleus. Alpha particle emissions are generally produced in the process of alpha decay. Alpha particles are a strongly ionizing form of radiation, but when emitted by radioactive decay they have low penetration power and can be absorbed by a few centimeters of air, or by the top layer of human skin. More powerful alpha particles from ternary fission are three times as energetic, and penetrate proportionately farther in air. The helium nuclei that form 10–12% of cosmic rays, are also usually of much higher energy than those produced by radioactive decay and pose shielding problems in space. However, this type of radiation is significantly absorbed by the Earth's atmosphere, which is a radiation shield equivalent to about 10 meters of water. The alpha particle was named by Ernest Rutherford after the first letter in the Greek alphabet, α, when he ranked the known radioactive emissions in descending order of ionising effect in 1899. The symbol is α or α2+. Because they are identical to helium nuclei, they are also sometimes written as or indicating a Helium ion with a +2 charge (missing its two electrons). If the ion gains electrons from its environment, the alpha particle can be written as a normal (electrically neutral) helium atom . Beta particles Beta particles are high-energy, high-speed electrons or positrons emitted by certain types of radioactive nuclei, such as potassium-40. The production of beta particles is termed beta decay. They are designated by the Greek letter beta (β). There are two forms of beta decay, β− and β+, which respectively give rise to the electron and the positron. Beta particles are much less penetrating than gamma radiation, but more penetrating than alpha particles. High-energy beta particles may produce X-rays known as bremsstrahlung ("braking radiation") or secondary electrons (delta ray) as they pass through matter. Both of these can cause an indirect ionization effect. Bremsstrahlung is of concern when shielding beta emitters, as the interaction of beta particles with some shielding materials produces Bremsstrahlung. The effect is greater with material having high atomic numbers, so material with low atomic numbers is used for beta source shielding. Positrons and other types of antimatter The positron or antielectron is the antiparticle or the antimatter counterpart of the electron. When a low-energy positron collides with a low-energy electron, annihilation occurs, resulting in their conversion into the energy of two or more gamma ray photons (see electron–positron annihilation). As positrons are positively charged particles they can directly ionize an atom through Coulomb interactions. Positrons can be generated by positron emission nuclear decay (through weak interactions), or by pair production from a sufficiently energetic photon. Positrons are common artificial sources of ionizing radiation used in medical positron emission tomography (PET) scans. 
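As a check on the annihilation process just described, the energy of each photon produced when a low-energy electron and positron annihilate follows from the electron rest mass. A minimal sketch using standard physical constants (the constants themselves are not from the text):

```python
M_ELECTRON = 9.109e-31  # kg, electron (and positron) rest mass
C = 2.998e8             # m/s
EV = 1.602e-19          # J per eV

# Each of the two gamma photons carries the rest energy of one particle.
rest_energy_j = M_ELECTRON * C ** 2
rest_energy_kev = rest_energy_j / EV / 1000
print(f"Each annihilation photon: ~{rest_energy_kev:.0f} keV")  # about 511 keV
```

Detection of these back-to-back annihilation photons is what PET scanners, mentioned above, rely on.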
Charged nuclei

Charged nuclei are characteristic of galactic cosmic rays and solar particle events and, except for alpha particles (charged helium nuclei), have no natural sources on Earth. In space, however, very high energy protons, helium nuclei, and HZE ions can be initially stopped by relatively thin layers of shielding, clothes, or skin. However, the resulting interaction will generate secondary radiation and cause cascading biological effects. If just one atom of tissue is displaced by an energetic proton, for example, the collision will cause further interactions in the body. This is called "linear energy transfer" (LET), which utilizes elastic scattering. LET can be visualized as a billiard ball hitting another in the manner of the conservation of momentum, sending both away with the energy of the first ball divided between the two unequally. When a charged nucleus strikes a relatively slow-moving nucleus of an object in space, LET occurs and neutrons, alpha particles, low-energy protons, and other nuclei will be released by the collisions and contribute to the total absorbed dose of tissue.

Indirectly ionizing radiation

Indirectly ionizing radiation is electrically neutral and does not interact strongly with matter, therefore the bulk of the ionization effects are due to secondary ionization.

Photon radiation

Even though photons are electrically neutral, they can ionize atoms indirectly through the photoelectric effect and the Compton effect. Either of those interactions will cause the ejection of an electron from an atom at relativistic speeds, turning that electron into a beta particle (secondary beta particle) that will ionize other atoms. Since most of the ionized atoms are due to the secondary beta particles, photons are indirectly ionizing radiation. Radiated photons are called gamma rays if they are produced by a nuclear reaction, subatomic particle decay, or radioactive decay within the nucleus. They are called X-rays if produced outside the nucleus. The generic term "photon" is used to describe both. X-rays normally have a lower energy than gamma rays, and an older convention was to define the boundary as a wavelength of 10⁻¹¹ m (or a photon energy of 100 keV). That threshold was driven by historic limitations of older X-ray tubes and low awareness of isomeric transitions. Modern technologies and discoveries have shown an overlap between X-ray and gamma energies. In many fields they are functionally identical, differing for terrestrial studies only in origin of the radiation. In astronomy, however, where radiation origin often cannot be reliably determined, the old energy division has been preserved, with X-rays defined as being between about 120 eV and 120 keV, and gamma rays as being of any energy above 100 to 120 keV, regardless of source. Most sources studied in gamma-ray astronomy are known not to originate in nuclear radioactive processes but, rather, to result from processes like those that produce astronomical X-rays, except driven by much more energetic electrons. Photoelectric absorption is the dominant mechanism in organic materials for photon energies below 100 keV, typical of X-rays originating from a classical X-ray tube. At energies beyond 100 keV, photons ionize matter increasingly through the Compton effect, and then indirectly through pair production at energies beyond 5 MeV. The accompanying interaction diagram shows two Compton scatterings happening sequentially.
In every scattering event, the gamma ray transfers energy to an electron and continues on its path in a different direction with reduced energy.

Definition boundary for lower-energy photons

The lowest ionization energy of any element is 3.89 eV, for caesium. However, US Federal Communications Commission material defines ionizing radiation as that with a photon energy greater than 10 eV (equivalent to a far ultraviolet wavelength of 124 nanometers). Roughly, this corresponds to both the first ionization energy of oxygen and the ionization energy of hydrogen, both about 14 eV. In some Environmental Protection Agency references, the ionization of a typical water molecule at an energy of 33 eV is referenced as the appropriate biological threshold for ionizing radiation: this value represents the so-called W-value, the colloquial name for the ICRU's mean energy expended in a gas per ion pair formed, which combines ionization energy plus the energy lost to other processes such as excitation. At a wavelength of 38 nanometers for electromagnetic radiation, 33 eV is close to the energy at the conventional 10 nm wavelength transition between extreme ultraviolet and X-ray radiation, which occurs at about 125 eV. Thus, X-ray radiation is always ionizing, but only extreme-ultraviolet radiation can be considered ionizing under all definitions.

Neutrons

Neutrons have zero electrical charge and thus often do not directly cause ionization in a single step or interaction with matter. However, fast neutrons will interact with the protons in hydrogen via linear energy transfer, energy that a particle transfers to the material it is moving through. This mechanism scatters the nuclei of the materials in the target area, causing direct ionization of the hydrogen atoms. When neutrons strike the hydrogen nuclei, proton radiation (fast protons) results. These protons are themselves ionizing because they are of high energy, are charged, and interact with the electrons in matter. Neutrons that strike other nuclei besides hydrogen will transfer less energy to the other particle if linear energy transfer does occur. But, for many nuclei struck by neutrons, inelastic scattering occurs. Whether elastic or inelastic scatter occurs is dependent on the speed of the neutron, whether fast or thermal or somewhere in between. It is also dependent on the nuclei it strikes and its neutron cross section. In inelastic scattering, neutrons are readily absorbed in a type of nuclear reaction called neutron capture, which contributes to the neutron activation of the nucleus. Neutron interactions with most types of matter in this manner usually produce radioactive nuclei. The abundant oxygen-16 nucleus, for example, undergoes neutron activation with the emission of a proton, forming nitrogen-16, which rapidly decays back to oxygen-16. The short-lived nitrogen-16 decay emits a powerful beta ray. This process can be written as:
16O (n,p) 16N (fast neutron capture, possible with >11 MeV neutrons)
16N → 16O + β− (decay, t1/2 = 7.13 s)
This high-energy β− further interacts rapidly with other nuclei, emitting high-energy γ via bremsstrahlung. While not a favorable reaction, the 16O (n,p) 16N reaction is a major source of X-rays emitted from the cooling water of a pressurized water reactor and contributes enormously to the radiation generated by a water-cooled nuclear reactor while operating. For the best shielding of neutrons, hydrocarbons that have an abundance of hydrogen are used.
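The photon-energy thresholds quoted in the definition-boundary discussion above can be related to wavelength with E = hc/λ (about 1240 eV·nm). A minimal sketch verifying the 10 eV, 33 eV and 100 keV figures; nothing beyond those quoted values is assumed.

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def wavelength_nm(energy_ev):
    """Photon wavelength in nanometres for a given energy in eV (E = hc/lambda)."""
    return HC_EV_NM / energy_ev

for energy_ev, label in [
    (10, "FCC ionizing-radiation threshold"),   # ~124 nm, far ultraviolet
    (33, "W-value for water"),                  # ~38 nm, extreme ultraviolet
    (100_000, "older X-ray/gamma boundary"),    # ~0.012 nm, i.e. about 1e-11 m
]:
    print(f"{energy_ev:>7} eV  ({label}): {wavelength_nm(energy_ev):.4g} nm")
```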
In fissile materials, secondary neutrons may produce nuclear chain reactions, causing a larger amount of ionization from the daughter products of fission. Outside the nucleus, free neutrons are unstable and have a mean lifetime of 14 minutes, 42 seconds. Free neutrons decay by emission of an electron and an electron antineutrino to become a proton, a process known as beta decay. In the adjacent diagram, a neutron collides with a proton of the target material, and then becomes a fast recoil proton that ionizes in turn. At the end of its path, the neutron is captured by a nucleus in an (n,γ)-reaction that leads to the emission of a neutron capture photon. Such photons always have enough energy to qualify as ionizing radiation.

Physical effects

Nuclear effects

Neutron radiation, alpha radiation, and extremely energetic gamma (> ~20 MeV) can cause nuclear transmutation and induced radioactivity. The relevant mechanisms are neutron activation, alpha absorption, and photodisintegration. A large enough number of transmutations can change macroscopic properties and cause targets to become radioactive themselves, even after the original source is removed.

Chemical effects

Ionization of molecules can lead to radiolysis (breaking chemical bonds) and formation of highly reactive free radicals. These free radicals may then react chemically with neighbouring materials even after the original radiation has stopped (e.g., ozone cracking of polymers by ozone formed by ionization of air). Ionizing radiation can also accelerate existing chemical reactions such as polymerization and corrosion, by contributing to the activation energy required for the reaction. Optical materials deteriorate under the effect of ionizing radiation. High-intensity ionizing radiation in air can produce a visible ionized air glow of telltale bluish-purple color. The glow can be observed, e.g., during criticality accidents, around mushroom clouds shortly after a nuclear explosion, or inside a damaged nuclear reactor, as during the Chernobyl disaster. Monatomic fluids, e.g. molten sodium, have no chemical bonds to break and no crystal lattice to disturb, so they are immune to the chemical effects of ionizing radiation. Simple diatomic compounds with very negative enthalpy of formation, such as hydrogen fluoride, will reform rapidly and spontaneously after ionization.

Electrical effects

The ionization of materials temporarily increases their conductivity, potentially permitting damaging current levels. This is a particular hazard in semiconductor microelectronics employed in electronic equipment, with subsequent currents introducing operation errors or even permanently damaging the devices. Devices intended for high radiation environments such as the nuclear industry and extra-atmospheric (space) applications may be made radiation hard to resist such effects through design, material selection, and fabrication methods. Proton radiation found in space can also cause single-event upsets in digital circuits. The electrical effects of ionizing radiation are exploited in gas-filled radiation detectors, e.g. the Geiger–Müller counter or the ion chamber.

Health effects

Most adverse health effects of exposure to ionizing radiation may be grouped in two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses, as in radiation burns; and
stochastic effects, i.e., cancer and heritable effects, involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells.
The most common impact is stochastic induction of cancer with a latent period of years or decades after exposure. For example, ionizing radiation is one cause of chronic myelogenous leukemia, although most people with CML have not been exposed to radiation. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial. The most widely accepted model, the linear no-threshold model (LNT), holds that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. If this is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second. Other stochastic effects of ionizing radiation are teratogenesis, cognitive decline, and heart disease. Although DNA is always susceptible to damage by ionizing radiation, the DNA molecule may also be damaged by radiation with enough energy to excite certain molecular bonds to form pyrimidine dimers. This energy may be less than ionizing, but near to it. A good example is the ultraviolet part of the spectrum, which begins at about 3.1 eV (400 nm), close to the energy levels that can cause sunburn to unprotected skin as a result of photoreactions in collagen and (in the UV-B range) damage to DNA (for example, pyrimidine dimers). Thus, the mid and lower ultraviolet electromagnetic spectrum is damaging to biological tissues as a result of electronic excitation in molecules which falls short of ionization, but produces similar non-thermal effects. To some extent, visible light and ultraviolet A (UVA), which is closest to visible energies, have been proven to result in the formation of reactive oxygen species in skin; these electronically excited molecules can inflict indirect reactive damage, although they do not cause sunburn (erythema). Like ionization damage, all these effects in skin go beyond those produced by simple thermal effects.

Measurement of radiation

The table below shows radiation and dose quantities in SI and non-SI units.

Uses of radiation

Ionizing radiation has many industrial, military, and medical uses. Its usefulness must be balanced with its hazards, a compromise that has shifted over time. For example, at one time, assistants in shoe shops in the US used X-rays to check a child's shoe size, but this practice was halted when the risks of ionizing radiation were better understood. Neutron radiation is essential to the working of nuclear reactors and nuclear weapons. The penetrating power of X-ray, gamma, beta, and positron radiation is used for medical imaging, nondestructive testing, and a variety of industrial gauges. Radioactive tracers are used in medical and industrial applications, as well as biological and radiation chemistry. Alpha radiation is used in static eliminators and smoke detectors. The sterilizing effects of ionizing radiation are useful for cleaning medical instruments, food irradiation, and the sterile insect technique. Measurements of carbon-14 can be used to date the remains of long-dead organisms (such as wood that is thousands of years old).
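Radiocarbon dating, mentioned at the end of the uses above, rests on simple exponential decay. A minimal sketch, assuming the commonly cited carbon-14 half-life of about 5,730 years (a value not stated in the text):

```python
import math

C14_HALF_LIFE_YEARS = 5730  # assumed half-life of carbon-14

def age_from_c14_fraction(remaining_fraction):
    """Age in years from the fraction of the original carbon-14 still present."""
    return C14_HALF_LIFE_YEARS * math.log(1 / remaining_fraction) / math.log(2)

# A wood sample retaining half of its original carbon-14 is one half-life old;
# one retaining a quarter is two half-lives old, and so on.
for fraction in (0.5, 0.25, 0.1):
    print(f"{fraction:.0%} remaining -> about {age_from_c14_fraction(fraction):,.0f} years")
```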
Sources of radiation

Ionizing radiation is generated through nuclear reactions, nuclear decay, by very high temperature, or via acceleration of charged particles in electromagnetic fields. Natural sources include the sun, lightning and supernova explosions. Artificial sources include nuclear reactors, particle accelerators, and X-ray tubes. The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) has itemized types of human exposures. The International Commission on Radiological Protection manages the International System of Radiological Protection, which sets recommended limits for dose uptake.

Background radiation

Background radiation comes from both natural and human-made sources. The global average exposure of humans to ionizing radiation is about 3 mSv (0.3 rem) per year, 80% of which comes from nature. The remaining 20% results from exposure to human-made radiation sources, primarily from medical imaging. Average human-made exposure is much higher in developed countries, mostly due to CT scans and nuclear medicine. Natural background radiation comes from five primary sources: cosmic radiation, solar radiation, external terrestrial sources, radiation in the human body, and radon. The background rate for natural radiation varies considerably with location, being as low as 1.5 mSv/a (1.5 mSv per year) in some areas and over 100 mSv/a in others. The highest level of purely natural radiation recorded on the Earth's surface is 90 μGy/h (0.8 Gy/a) on a Brazilian black beach composed of monazite. The highest background radiation in an inhabited area is found in Ramsar, primarily due to naturally radioactive limestone used as a building material. Some 2000 of the most exposed residents receive an average radiation dose of 10 mGy per year (1 rad/yr), ten times more than the ICRP recommended limit for exposure to the public from artificial sources. Record levels were found in a house where the effective radiation dose due to external radiation was 135 mSv/a (13.5 rem/yr) and the committed dose from radon was 640 mSv/a (64.0 rem/yr). This unique case is over 200 times higher than the world average background radiation. Despite the high levels of background radiation that the residents of Ramsar receive, there is no compelling evidence that they experience a greater health risk. The ICRP recommendations are conservative limits and may represent an overestimate of the actual health risk. Generally, radiation safety organizations recommend the most conservative limits, assuming it is best to err on the side of caution. This level of caution is appropriate but should not be used to create fear about background radiation danger. Radiation danger from background radiation may be a serious threat but is more likely a small overall risk compared to all other factors in the environment.

Cosmic radiation

The Earth, and all living things on it, are constantly bombarded by radiation from outside our solar system. This cosmic radiation consists of relativistic particles: positively charged nuclei (ions) ranging from protons of 1 amu (about 85% of it) to iron nuclei (atomic number 26) and even beyond. (The high-atomic-number particles are called HZE ions.) The energy of this radiation can far exceed that which humans can create, even in the largest particle accelerators (see ultra-high-energy cosmic ray). This radiation interacts in the atmosphere to create secondary radiation that rains down, including X-rays, muons, protons, antiprotons, alpha particles, pions, electrons, positrons, and neutrons.
The dose from cosmic radiation is largely from muons, neutrons, and electrons, with a dose rate that varies in different parts of the world and based largely on the geomagnetic field, altitude, and solar cycle. The cosmic-radiation dose rate on airplanes is so high that, according to the United Nations UNSCEAR 2000 Report, airline flight crew workers receive more dose on average than any other worker, including those in nuclear power plants. Airline crews receive more cosmic rays if they routinely work flight routes that take them close to the North or South Pole at high altitudes, where this type of radiation is maximal. Cosmic rays also include high-energy gamma rays, which are far beyond the energies produced by solar or human sources.

External terrestrial sources

Most materials on Earth contain some radioactive atoms, even if in small quantities. Most of the dose received from these sources is from gamma-ray emitters in building materials, or rocks and soil when outside. The major radionuclides of concern for terrestrial radiation are isotopes of potassium, uranium, and thorium. Each of these sources has been decreasing in activity since the formation of the Earth.

Internal radiation sources

All earthly materials that are the building blocks of life contain a radioactive component. As humans, plants, and animals consume food, air, and water, an inventory of radioisotopes builds up within the organism (see banana equivalent dose). Some radionuclides, like potassium-40, emit a high-energy gamma ray that can be measured by sensitive electronic radiation measurement systems. These internal radiation sources contribute to an individual's total radiation dose from natural background radiation.

Radon

An important source of natural radiation is radon gas, which seeps continuously from bedrock but can, because of its high density, accumulate in poorly ventilated houses. Radon-222 is a gas produced by the α-decay of radium-226. Both are a part of the natural uranium decay chain. Uranium is found in soil throughout the world in varying concentrations. Radon is the largest cause of lung cancer among non-smokers and the second-leading cause overall.

Radiation exposure

There are three standard ways to limit exposure:
Time: For people exposed to radiation in addition to natural background radiation, limiting or minimizing the exposure time will reduce the dose from the source of radiation.
Distance: Radiation intensity decreases sharply with distance, according to an inverse-square law (in an absolute vacuum).
Shielding: Air or skin can be sufficient to substantially attenuate alpha radiation, while sheet metal or plastic is often sufficient to stop beta radiation. Barriers of lead, concrete, or water are often used to give effective protection from more penetrating forms of ionizing radiation such as gamma rays and neutrons.
Some radioactive materials are stored or handled underwater or by remote control in rooms constructed of thick concrete or lined with lead. There are special plastic shields that stop beta particles, and air will stop most alpha particles. The effectiveness of a material in shielding radiation is determined by its half-value thickness, the thickness of material that reduces the radiation by half. This value is a function of the material itself and of the type and energy of ionizing radiation. Some generally accepted thicknesses of attenuating material are 5 mm of aluminum for most beta particles, and 3 inches of lead for gamma radiation.
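The distance and shielding rules above can both be written as one-line attenuation formulas: intensity falls with the square of the distance, and each half-value thickness of shielding cuts it in half. A minimal sketch; the starting dose rate and the ~12 mm lead half-value thickness for a hard gamma emitter are illustrative assumptions rather than figures from the text.

```python
def inverse_square(dose_rate, from_distance_m, to_distance_m):
    """Dose rate scaling with distance in free space (inverse-square law)."""
    return dose_rate * (from_distance_m / to_distance_m) ** 2

def shielded(dose_rate, thickness_mm, half_value_thickness_mm):
    """Dose rate after shielding: halved once per half-value thickness."""
    return dose_rate * 0.5 ** (thickness_mm / half_value_thickness_mm)

# Hypothetical example: 100 uSv/h at 1 m from a gamma source.
rate = 100.0
print(f"At 4 m:              {inverse_square(rate, 1, 4):.2f} uSv/h")   # 1/16 of the original

# Assumed half-value thickness of ~12 mm of lead (depends on photon energy).
print(f"Behind 3 in of lead: {shielded(rate, 76.2, 12):.2f} uSv/h")     # reduced ~80-fold
```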
These can all be applied to natural and human-made sources. For human-made sources the use of Containment is a major tool in reducing dose uptake and is effectively a combination of shielding and isolation from the open environment. Radioactive materials are confined in the smallest possible space and kept out of the environment such as in a hot cell (for radiation) or glove box (for contamination). Radioactive isotopes for medical use, for example, are dispensed in closed handling facilities, usually gloveboxes, while nuclear reactors operate within closed systems with multiple barriers that keep the radioactive materials contained. Work rooms, hot cells and gloveboxes have slightly reduced air pressures to prevent escape of airborne material to the open environment. In nuclear conflicts or civil nuclear releases civil defense measures can help reduce exposure of populations by reducing ingestion of isotopes and occupational exposure. One is the issue of potassium iodide (KI) tablets, which blocks the uptake of radioactive iodine (one of the major radioisotope products of nuclear fission) into the human thyroid gland. Occupational exposure Occupationally exposed individuals are controlled within the regulatory framework of the country they work in, and in accordance with any local nuclear licence constraints. These are usually based on the recommendations of the International Commission on Radiological Protection. The ICRP recommends limiting artificial irradiation. For occupational exposure, the limit is 50 mSv in a single year with a maximum of 100 mSv in a consecutive five-year period. The radiation exposure of these individuals is carefully monitored with the use of dosimeters and other radiological protection instruments which will measure radioactive particulate concentrations, area gamma dose readings and radioactive contamination. A legal record of dose is kept. Examples of activities where occupational exposure is a concern include: Airline crew (the most exposed population) Industrial radiography Medical radiology and nuclear medicine Uranium mining Nuclear power plant and nuclear fuel reprocessing plant workers Research laboratories (government, university and private) Some human-made radiation sources affect the body through direct radiation, known as effective dose (radiation) while others take the form of radioactive contamination and irradiate the body from within. The latter is known as committed dose. Public exposure Medical procedures, such as diagnostic X-rays, nuclear medicine, and radiation therapy are by far the most significant source of human-made radiation exposure to the general public. Some of the major radionuclides used are I-131, Tc-99m, Co-60, Ir-192, and Cs-137. The public is also exposed to radiation from consumer products, such as tobacco (polonium-210), combustible fuels (gas, coal, etc.), televisions, luminous watches and dials (tritium), airport X-ray systems, smoke detectors (americium), electron tubes, and gas lantern mantles (thorium). Of lesser magnitude, members of the public are exposed to radiation from the nuclear fuel cycle, which includes the entire sequence from processing uranium to the disposal of the spent fuel. The effects of such exposure have not been reliably measured due to the extremely low doses involved. Opponents use a cancer per dose model to assert that such activities cause several hundred cases of cancer per year, an application of the widely accepted Linear no-threshold model (LNT). 
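As a small illustration of the occupational limits quoted above (no more than 50 mSv in any single year and no more than 100 mSv in any consecutive five-year period), here is a minimal sketch of a dose-record check; it is purely illustrative and not a substitute for the applicable regulatory framework.

```python
def within_occupational_limits(annual_doses_msv: list[float]) -> bool:
    """Check a dose history against the limits quoted above: at most 50 mSv in any
    single year and at most 100 mSv in any consecutive five-year period."""
    if any(dose > 50.0 for dose in annual_doses_msv):
        return False
    return all(sum(annual_doses_msv[i:i + 5]) <= 100.0
               for i in range(len(annual_doses_msv)))

print(within_occupational_limits([18, 22, 20, 21, 15]))   # True (96 mSv over five years)
print(within_occupational_limits([30, 25, 20, 15, 15]))   # False (105 mSv over five years)
```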
The International Commission on Radiological Protection recommends limiting artificial irradiation to the public to an average of 1 mSv (0.001 Sv) of effective dose per year, not including medical and occupational exposures. In a nuclear war, gamma rays from both the initial weapon explosion and fallout would be the sources of radiation exposure. Spaceflight Massive particles are a concern for astronauts outside the Earth's magnetic field who would receive solar particles from solar proton events (SPE) and galactic cosmic rays from cosmic sources. These high-energy charged nuclei are blocked by Earth's magnetic field but pose a major health concern for astronauts traveling to the Moon and to any distant location beyond the Earth orbit. Highly charged HZE ions in particular are known to be extremely damaging, although protons make up the vast majority of galactic cosmic rays. Evidence indicates past SPE radiation levels that would have been lethal for unprotected astronauts. Air travel Air travel exposes people on aircraft to increased radiation from space as compared to sea level, including cosmic rays and from solar flare events. Software programs such as Epcard, CARI, SIEVERT, PCAIRE are attempts to simulate exposure by aircrews and passengers. An example of a measured dose (not simulated dose) is 6 μSv per hour from London Heathrow to Tokyo Narita on a high-latitude polar route. However, dosages can vary, such as during periods of high solar activity. The United States FAA requires airlines to provide flight crew with information about cosmic radiation, and an International Commission on Radiological Protection recommendation for the general public is no more than 1 mSv per year. In addition, many airlines do not allow pregnant flightcrew members, to comply with a European Directive. The FAA has a recommended limit of 1 mSv total for a pregnancy, and no more than 0.5 mSv per month. Information originally based on Fundamentals of Aerospace Medicine published in 2008. Radiation hazard warning signs Hazardous levels of ionizing radiation are signified by the trefoil sign on a yellow background. These are usually posted at the boundary of a radiation controlled area or in any place where radiation levels are significantly above background due to human intervention. The red ionizing radiation warning symbol (ISO 21482) was launched in 2007, and is intended for IAEA Category 1, 2 and 3 sources defined as dangerous sources capable of death or serious injury, including food irradiators, teletherapy machines for cancer treatment and industrial radiography units. The symbol is to be placed on the device housing the source, as a warning not to dismantle the device or to get any closer. It will not be visible under normal use, only if someone attempts to disassemble the device. The symbol will not be located on building access doors, transportation packages or containers.
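A rough back-of-the-envelope comparison of the air-travel figures above (a measured rate of about 6 μSv per hour on a high-latitude route) against the 1 mSv per year recommendation for the general public can be sketched as follows; actual dose rates vary with route, altitude and solar activity, so the numbers are illustrative only.

```python
def flight_dose_msv(hours_flown: float, dose_rate_usv_per_h: float = 6.0) -> float:
    """Cumulative dose from time spent at cruise altitude, using the measured
    ~6 uSv/h high-latitude figure quoted above."""
    return hours_flown * dose_rate_usv_per_h / 1000.0   # convert uSv to mSv

occasional_traveller = flight_dose_msv(50)    # ~0.3 mSv/yr, below the 1 mSv/yr public figure
crew_member = flight_dose_msv(600)            # ~3.6 mSv/yr, why aircrew are a monitored group
print(occasional_traveller, crew_member)
```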
Physical sciences
Basics_6
null
202661
https://en.wikipedia.org/wiki/Stellar%20parallax
Stellar parallax
Stellar parallax is the apparent shift of position (parallax) of any nearby star (or other object) against the background of distant stars. By extension, it is a method for determining the distance to the star through trigonometry, the stellar parallax method. Created by the different orbital positions of Earth, the extremely small observed shift is largest at time intervals of about six months, when Earth arrives at opposite sides of the Sun in its orbit, giving a baseline distance of about two astronomical units between observations. The parallax itself is considered to be half of this maximum, about equivalent to the observational shift that would occur due to the different positions of Earth and the Sun, a baseline of one astronomical unit (AU). Stellar parallax is so difficult to detect that its existence was the subject of much debate in astronomy for hundreds of years. Thomas Henderson, Friedrich Georg Wilhelm von Struve, and Friedrich Bessel made the first successful parallax measurements in 1832–1838, for the stars Alpha Centauri, Vega, and 61 Cygni. History of measurement Early theory and attempts Stellar parallax is so small that it was unobservable until the 19th century, and its apparent absence was used as a scientific argument against heliocentrism during the early modern age. It is clear from Euclid's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons the gigantic distances involved seemed entirely implausible: it was one of Tycho Brahe's principal objections to Copernican heliocentrism that for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn and the eighth sphere (the fixed stars). James Bradley first tried to measure stellar parallaxes in 1729. The stellar movement proved too insignificant for his telescope, but he instead discovered the aberration of light and the nutation of Earth's axis, and catalogued 3,222 stars. 19th and 20th centuries Measurement of annual parallax was the first reliable way to determine the distances to the closest stars. In the second quarter of the 19th century, technological progress reached a level that provided sufficient accuracy and precision for stellar parallax measurements. Giuseppe Calandrelli noted stellar parallax in 1805–1806 and reported a value of 4 arcseconds for the star Vega, a gross overestimate. The first successful stellar parallax measurements were made by Thomas Henderson in Cape Town, South Africa, in 1832–1833, where he measured the parallax of one of the closest stars, Alpha Centauri. Between 1835 and 1836, astronomer Friedrich Georg Wilhelm von Struve at the Dorpat University Observatory measured the distance of Vega, publishing his results in 1837. Friedrich Bessel, a friend of Struve, carried out an intense observational campaign in 1837–1838 at Koenigsberg Observatory for the star 61 Cygni using a heliometer, and published his results in 1838. Henderson published his results in 1839, after returning from South Africa. Those three results, two of which were measured with the best instruments at the time (the Fraunhofer great refractor used by Struve and the Fraunhofer heliometer used by Bessel), were the first in history to establish a reliable distance scale to the stars. A large heliometer was installed at Kuffner Observatory (in Vienna) in 1896, and was used for measuring the distance to other stars by trigonometric parallax.
By 1910 it had computed 16 parallax distances to other stars, out of only 108 total known to science at that time. Being very difficult to measure, only about 60 stellar parallaxes had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines and more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. In the 1980s, charge-coupled devices (CCDs) replaced photographic plates and reduced optical uncertainties to one milliarcsecond. Stellar parallax remains the standard for calibrating other measurement methods (see Cosmic distance ladder). Accurate calculations of distance based on stellar parallax require a measurement of the distance from Earth to the Sun, now known to exquisite accuracy based on radar reflection off the surfaces of planets. Space astrometry In 1989, the satellite Hipparcos was launched primarily for obtaining parallaxes and proper motions of nearby stars, increasing the number of stellar parallaxes measured to milliarcsecond accuracy a thousandfold. Even so, Hipparcos is only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy. The Hubble telescope WFC3 now has a precision of 20 to 40 microarcseconds, enabling reliable distance measurements up to for a small number of stars. This gives more accuracy to the cosmic distance ladder and improves the knowledge of distances in the Universe, based on the dimensions of the Earth's orbit. As distances between the two points of observation are increased, the visual effect of the parallax is likewise rendered more visible. NASA's New Horizons spacecraft performed the first interstellar parallax measurement on 22 April 2020, taking images of Proxima Centauri and Wolf 359 in conjunction with earth-based observatories. The relative proximity of the two stars combined with the 6.5 billion kilometer (about 43 AU) distance of the spacecraft from Earth yielded a discernible parallax of arcminutes, allowing the parallax to be seen visually without instrumentation. The European Space Agency's Gaia mission, launched 19 December 2013, is expected to measure parallax angles to an accuracy of 10 microarcseconds for all moderately bright stars, thus mapping nearby stars (and potentially planets) up to a distance of tens of thousands of light-years from Earth. Data Release 2 in 2018 claims mean errors for the parallaxes of 15th magnitude and brighter stars of 20–40 microarcseconds. Radio astrometry Very long baseline interferometry in the radio band can produce images with angular resolutions of about 1 milliarcsecond, and hence, for bright radio sources, the precision of parallax measurements made in the radio can easily exceed those of optical telescopes like Gaia. These measurements tend to be sensitivity limited, and need to be made one at a time, so the work is generally done only for sources like pulsars and X-ray binaries, where the radio emission is strong relative to the optical emission. Parallax method Principle Throughout the year the position of a star S is noted in relation to other stars in its apparent neighborhood: Stars that did not seem to move in relation to each other are used as reference points to determine the path of S. 
The observed path is an ellipse: the projection of Earth's orbit around the Sun through S onto the distant background of non-moving stars. The farther S is removed from Earth's orbital axis, the greater the eccentricity of the path of S. The center of the ellipse corresponds to the point where S would be seen from the Sun: The plane of Earth's orbit is at an angle to a line from the Sun through S. The vertices v and v' of the elliptical projection of the path of S are projections of positions of Earth E and E' such that a line E-E' intersects the line Sun-S at a right angle; the triangle created by points E, E' and S is an isosceles triangle with the line Sun-S as its symmetry axis. Any stars that did not move between observations are, for the purpose of the accuracy of the measurement, infinitely far away. This means that the distance of the movement of the Earth compared to the distance to these infinitely far away stars is, within the accuracy of the measurement, 0. Thus a line of sight from Earth's first position E to vertex v will be essentially the same as a line of sight from the Earth's second position E' to the same vertex v, and will therefore run parallel to it - impossible to depict convincingly in an image of limited size: Since line E'-v' is a transversal in the same (approximately Euclidean) plane as parallel lines E-v and E'-v, it follows that the corresponding angles of intersection of these parallel lines with this transversal are congruent: the angle θ between lines of sight E-v and E'-v' is equal to the angle θ between E'-v and E'-v', which is the angle θ between observed positions of S in relation to its apparently unmoving stellar surroundings. The distance d from the Sun to S now follows from simple trigonometry: tan(½θ) = E-Sun / d, so that d = E-Sun / tan(½θ), where E-Sun is 1 AU. The more distant an object is, the smaller its parallax. Stellar parallax measures are given in the tiny units of arcseconds, or even in thousandths of arcseconds (milliarcseconds). The distance unit parsec is defined as the length of the leg of a right triangle adjacent to the angle of one arcsecond at one vertex, where the other leg is 1 AU long. Because stellar parallaxes and distances all involve such skinny right triangles, a convenient trigonometric approximation can be used to convert parallaxes (in arcseconds) to distance (in parsecs). The approximate distance is simply the reciprocal of the parallax: d ≈ 1 / p. For example, Proxima Centauri (the nearest star to Earth other than the Sun), whose parallax is 0.7685 arcseconds, is 1 / 0.7685 ≈ 1.30 parsecs (about 4.24 light-years) distant. Variants Stellar parallax is most often measured using annual parallax, defined as the difference in position of a star as seen from Earth and Sun, i.e. the angle subtended at a star by the mean radius of Earth's orbit around the Sun. The parsec (3.26 light-years) is defined as the distance for which the annual parallax is 1 arcsecond. Annual parallax is normally measured by observing the position of a star at different times of the year as Earth moves through its orbit. The angles involved in these calculations are very small and thus difficult to measure. The nearest star to the Sun (and also the star with the largest parallax), Proxima Centauri, has a parallax of 0.7685 ± 0.0002 arcsec. This angle is approximately that subtended by an object 2 centimeters in diameter located 5.3 kilometers away. Derivation For a right triangle, tan p = 1 AU / d, where p is the parallax, 1 AU is approximately the average distance from the Sun to Earth, and d is the distance to the star.
Using the small-angle approximation tan p ≈ p (in radians, valid when the angle is small compared to 1 radian), and noting that one radian equals about 206,265 arcseconds, the parallax measured in arcseconds is p ≈ (1 AU / d) × 206,265. If the parallax is 1", then the distance is d = 206,265 AU ≈ 3.2616 light-years. This defines the parsec, a convenient unit for measuring distance using parallax. Therefore, the distance, measured in parsecs, is simply d = 1 / p, when the parallax p is given in arcseconds. Error Precise parallax measurements of distance have an associated error. This error in the measured parallax angle does not translate directly into an error for the distance, except for relatively small errors. The reason for this is that an error toward a smaller angle results in a greater error in distance than an error toward a larger angle. However, an approximation of the distance error can be computed by δd = δp / p² = d² δp, where d is the distance (in parsecs), p is the parallax (in arcseconds), and δp is the parallax error. The approximation is far more accurate for parallax errors that are small relative to the parallax than for relatively large errors. For meaningful results in stellar astronomy, Dutch astronomer Floor van Leeuwen recommends that the parallax error be no more than 10% of the total parallax when computing this error estimate.
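A minimal sketch of the distance and error formulas above, applied to the Proxima Centauri parallax quoted in this section; the constants and formatting are illustrative choices.

```python
PC_TO_LY = 3.2616    # light-years per parsec

def distance_pc(parallax_arcsec: float) -> float:
    """d [pc] = 1 / p [arcsec], the small-angle form of tan(p) = 1 AU / d."""
    return 1.0 / parallax_arcsec

def distance_error_pc(parallax_arcsec: float, parallax_err_arcsec: float) -> float:
    """First-order error propagation for d = 1/p: delta_d ~ delta_p / p**2."""
    return parallax_err_arcsec / parallax_arcsec ** 2

# Proxima Centauri, using the parallax quoted above (0.7685 +/- 0.0002 arcsec).
p, dp = 0.7685, 0.0002
d = distance_pc(p)
print(f"{d:.3f} pc = {d * PC_TO_LY:.2f} ly, +/- {distance_error_pc(p, dp):.4f} pc")
```

Here the relative parallax error is well under the 10% threshold mentioned above, so the linearized error estimate is a good approximation.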
Physical sciences
Basics
Astronomy
202672
https://en.wikipedia.org/wiki/Spectral%20density
Spectral density
In signal processing, the power spectrum of a continuous time signal describes the distribution of power into frequency components composing that signal. According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of any sort of signal (including noise) as analyzed in terms of its frequency content, is called its spectrum. When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density (PSD, or simply power spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The PSD then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating over the time domain, as dictated by Parseval's theorem. The spectrum of a physical process often contains essential information about the nature of . For instance, the pitch and timbre of a musical instrument are immediately determined from a spectral analysis. The color of a light source is determined by the spectrum of the electromagnetic wave's electric field as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform, and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph, or when a sound is perceived through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency. However this article concentrates on situations in which the time series is known (at least in a statistical sense) or directly measured (such as by a microphone sampled by a computer). The power spectrum is important in statistical signal processing and in the statistical study of stochastic processes, as well as in many other branches of physics and engineering. Typically the process is a function of time, but one can similarly discuss data in the spatial domain being decomposed in terms of spatial frequency. Units In physics, the signal might be a wave, such as an electromagnetic wave, an acoustic wave, or the vibration of a mechanism. The power spectral density (PSD) of the signal describes the power present in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed in SI units of watts per hertz (abbreviated as W/Hz). When a signal is defined in terms only of a voltage, for instance, there is no unique power associated with the stated amplitude. In this case "power" is simply reckoned in terms of the square of the signal, as this would always be proportional to the actual power delivered by that signal into a given impedance. So one might use units of V2 Hz−1 for the PSD. Energy spectral density (ESD) would have units of V2 s Hz−1, since energy has units of power multiplied by time (e.g., watt-hour). 
In the general case, the units of PSD will be the ratio of units of variance per unit of frequency; so, for example, a series of displacement values (in meters) over time (in seconds) will have PSD in units of meters squared per hertz, m2/Hz. In the analysis of random vibrations, units of g2 Hz−1 are frequently used for the PSD of acceleration, where g denotes the g-force. Mathematically, it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning of x(t) will remain unspecified, but the independent variable will be assumed to be that of time. One-sided vs two-sided A PSD can be either a one-sided function of only positive frequencies or a two-sided function of both positive and negative frequencies but with only half the amplitude. Noise PSDs are generally one-sided in engineering and two-sided in physics. Definition Energy spectral density In signal processing, the energy of a signal x(t) is given by E = ∫ |x(t)|² dt, integrated over all time. Assuming the total energy is finite (i.e. x(t) is a square-integrable function) allows applying Parseval's theorem (or Plancherel's theorem). That is, ∫ |x(t)|² dt = ∫ |x̂(f)|² df, where x̂(f) = ∫ x(t) e^(−i2πft) dt is the Fourier transform of x(t) at frequency f (in Hz). The theorem also holds true in the discrete-time cases. Since the integral on the left-hand side is the energy of the signal, the value of |x̂(f)|² df can be interpreted as a density function multiplied by an infinitesimally small frequency interval, describing the energy contained in the signal at frequency f in the frequency interval f to f + df. Therefore, the energy spectral density of x(t) is defined as S̄_xx(f) = |x̂(f)|². The function S̄_xx(f) and the autocorrelation of x(t) form a Fourier transform pair, a result also known as the Wiener–Khinchin theorem (see also Periodogram). As a physical example of how one might measure the energy spectral density of a signal, suppose V(t) represents the potential (in volts) of an electrical pulse propagating along a transmission line of impedance Z, and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By Ohm's law, the power delivered to the resistor at time t is equal to V(t)²/Z, so the total energy is found by integrating V(t)²/Z with respect to time over the duration of the pulse. To find the value of the energy spectral density at frequency f, one could insert between the transmission line and the resistor a bandpass filter which passes only a narrow range of frequencies (Δf, say) near the frequency of interest and then measure the total energy E(f) dissipated across the resistor. The value of the energy spectral density at f is then estimated to be E(f)/Δf. In this example, since the power V(t)²/Z has units of V2 Ω−1, the energy E(f) has units of V2 s Ω−1 = J, and hence the estimate E(f)/Δf of the energy spectral density has units of J Hz−1, as required. In many situations, it is common to forget the step of dividing by Z so that the energy spectral density instead has units of V2 Hz−1. This definition generalizes in a straightforward manner to a discrete signal with a countably infinite number of values x_n such as a signal sampled at discrete times t_n = t_0 + nΔt: S̄_xx(f) = (Δt)² |Σ_n x_n e^(−i2πfnΔt)|² = (Δt)² |x̂_d(f)|², where x̂_d(f) is the discrete-time Fourier transform of x_n. The sampling interval Δt is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit Δt → 0. But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of generality.
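The definitions above can be checked numerically for a sampled, finite-energy signal: approximating the Fourier transform by Δt times the FFT, the energy computed in the time domain should match the integral of the energy spectral density, per Parseval's theorem. The test signal below is an arbitrary assumption chosen for illustration.

```python
import numpy as np

# Sampled test signal: a decaying pulse (finite energy), dt chosen arbitrarily.
dt = 1e-3                                           # sampling interval, s
t = np.arange(0, 1.0, dt)
x = np.exp(-t / 0.05) * np.sin(2 * np.pi * 50 * t)  # volts

# Energy in the time domain: integral of |x(t)|^2 dt.
energy_time = np.sum(np.abs(x) ** 2) * dt

# Discrete approximation of the Fourier transform: x_hat(f) ~ dt * FFT(x).
x_hat = dt * np.fft.fft(x)
esd = np.abs(x_hat) ** 2                            # energy spectral density, V^2 s per Hz

# Energy in the frequency domain (Parseval): integral of the ESD over frequency.
df = 1.0 / (len(x) * dt)
energy_freq = np.sum(esd) * df

print(energy_time, energy_freq)                     # the two values agree to numerical precision
```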
(also see normalized frequency) Power spectral density The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, one must rather define the power spectral density (PSD) which exists for stationary processes; this describes how the power of a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power, or more often, for convenience with abstract signals, is simply identified with the squared value of the signal. For example, statisticians study the variance of a function over time (or over another independent variable), and using an analogy with electrical signals (among other physical processes), it is customary to refer to it as the power spectrum even when there is no physical power involved. If one were to create a physical voltage source which followed and applied it to the terminals of a one ohm resistor, then indeed the instantaneous power dissipated in that resistor would be given by watts. The average power of a signal over all time is therefore given by the following time average, where the period is centered about some arbitrary time : Whenever it is more convenient to deal with time limits in the signal itself rather than time limits in the bounds of the integral, the average power can also be written as where and is unity within the arbitrary period and zero elsewhere. When is non-zero, the integral must grow to infinity at least as fast as does. That is the reason why we cannot use the energy of the signal, which is that diverging integral. In analyzing the frequency content of the signal , one might like to compute the ordinary Fourier transform ; however, for many signals of interest the ordinary Fourier transform does not formally exist. However, under suitable conditions, certain generalizations of the Fourier transform (e.g. the Fourier-Stieltjes transform) still adhere to Parseval's theorem. As such, where the integrand defines the power spectral density: The convolution theorem then allows regarding as the Fourier transform of the time convolution of and , where * represents the complex conjugate. In order to deduce Eq.2, we will find an expression for that will be useful for the purpose. In fact, we will demonstrate that . Let's start by noting that and let , so that when and vice versa. So Where, in the last line, we have made use of the fact that and are dummy variables. So, we have q.e.d. Now, let's demonstrate eq.2 by using the demonstrated identity. In addition, we will make the subtitution . In this way, we have: where the convolution theorem has been used when passing from the 3rd to the 4th line. Now, if we divide the time convolution above by the period and take the limit as , it becomes the autocorrelation function of the non-windowed signal , which is denoted as , provided that is ergodic, which is true in most, but not all, practical cases. Assuming the ergodicity of , the power spectral density can be found once more as the Fourier transform of the autocorrelation function (Wiener–Khinchin theorem). Many authors use this equality to actually define the power spectral density. The power of the signal in a given frequency band , where , can be calculated by integrating over frequency. 
Since , an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used): More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time interval is finite rather than approaching infinity. This results in decreased spectral coverage and resolution since frequencies of less than are not sampled, and results at frequencies which are not an integer multiple of are not independent. Just using a single such time series, the estimated power spectrum will be very "noisy"; however this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to statistical ensembles of realizations of evaluated over the specified time window. Just as with the energy spectral density, the definition of the power spectral density can be generalized to discrete time variables . As before, we can consider a window of with the signal sampled at discrete times for a total measurement period . Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved when (and thus ) approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a periodogram. This periodogram converges to the true PSD as the number of estimates as well as the averaging time interval approach infinity. If two signals both possess power spectral densities, then the cross-spectral density can similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to the cross-correlation. Properties of the power spectral density Some properties of the PSD include: Cross power spectral density Given two signals and , each of which possess power spectral densities and , it is possible to define a cross power spectral density (CPSD) or cross spectral density (CSD). To begin, let us consider the average power of such a combined signal. Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtain where, again, the contributions of and are already understood. Note that , so the full contribution to the cross power is, generally, from twice the real part of either individual CPSD. Just as before, from here we recast these products as the Fourier transform of a time convolution, which when divided by the period and taken to the limit becomes the Fourier transform of a cross-correlation function. where is the cross-correlation of with and is the cross-correlation of with . In light of this, the PSD is seen to be a special case of the CSD for . If and are real signals (e.g. voltage or current), their Fourier transforms and are usually restricted to positive frequencies by convention. Therefore, in typical signal processing, the full CPSD is just one of the CPSDs scaled by a factor of two. For discrete signals and , the relationship between the cross-spectral density and the cross-covariance is Estimation The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. 
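A short sketch of the cross-spectral density in practice, assuming SciPy's signal module is available: two noisy signals sharing a common tone are compared, and the CSD of a signal with itself is checked against its ordinary PSD, as stated above. The signal parameters are illustrative assumptions.

```python
import numpy as np
from scipy import signal

fs = 1000.0                                   # sampling rate, Hz (arbitrary choice)
t = np.arange(0, 10.0, 1 / fs)
rng = np.random.default_rng(0)

# Two signals sharing a common 100 Hz component plus independent noise.
common = np.sin(2 * np.pi * 100 * t)
x = common + rng.normal(scale=1.0, size=t.size)
y = 0.5 * common + rng.normal(scale=1.0, size=t.size)

f, pxy = signal.csd(x, y, fs=fs, nperseg=1024)   # cross power spectral density (complex)
_, pxx = signal.welch(x, fs=fs, nperseg=1024)    # ordinary PSD of x

# The CSD of a signal with itself reduces to its PSD.
_, pxx_self = signal.csd(x, x, fs=fs, nperseg=1024)
print(np.allclose(pxx, pxx_self.real))           # should print True

# The shared 100 Hz component appears as the dominant peak of |Pxy|.
print(f[np.argmax(np.abs(pxy))])                 # approximately 100 Hz
```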
Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is the periodogram. The spectral density is usually estimated using Fourier transform methods (such as the Welch method), but other techniques such as the maximum entropy method can also be used. Related concepts The spectral centroid of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two equal parts. The spectral edge frequency (SEF), usually expressed as "SEF x", represents the frequency below which x percent of the total power of a given signal are located; typically, x is in the range 75 to 95. It is more particularly a popular measure used in EEG monitoring, in which case SEF has variously been used to estimate the depth of anesthesia and stages of sleep. A spectral envelope is the envelope curve of the spectrum density. It describes one point in time (one window, to be precise). For example, in remote sensing using a spectrometer, the spectral envelope of a feature is the boundary of its spectral properties, as defined by the range of brightness levels in each of the spectral bands of interest. The spectral density is a function of frequency, not a function of time. However, the spectral density of a small window of a longer signal may be calculated, and plotted versus time associated with the window. Such a graph is called a spectrogram. This is the basis of a number of spectral analysis techniques such as the short-time Fourier transform and wavelets. A "spectrum" generally means the power spectral density, as discussed above, which depicts the distribution of signal content over frequency. For transfer functions (e.g., Bode plot, chirp) the complete frequency response may be graphed in two parts: power versus frequency and phase versus frequency—the phase spectral density, phase spectrum, or spectral phase. Less commonly, the two parts may be the real and imaginary parts of the transfer function. This is not to be confused with the frequency response of a transfer function, which also includes a phase (or equivalently, a real and imaginary part) as a function of frequency. The time-domain impulse response cannot generally be uniquely recovered from the power spectral density alone without the phase part. Although these are also Fourier transform pairs, there is no symmetry (as there is for the autocorrelation) forcing the Fourier transform to be real-valued. See Ultrashort pulse#Spectral phase, phase noise, group delay. Sometimes one encounters an amplitude spectral density (ASD), which is the square root of the PSD; the ASD of a voltage signal has units of V Hz−1/2. This is useful when the shape of the spectrum is rather constant, since variations in the ASD will then be proportional to variations in the signal's voltage level itself. But it is mathematically preferred to use the PSD, since only in that case is the area under the curve meaningful in terms of actual power over all frequency or over a specified bandwidth. Applications Any signal that can be represented as a variable that varies in time has a corresponding frequency spectrum. 
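As a sketch of the non-parametric estimators and the spectral edge frequency described above, the following compares a raw periodogram with a Welch estimate and computes SEF 95 from the latter; the test signal and the use of SciPy are assumptions made for illustration.

```python
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 10.0, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.normal(size=t.size)   # 50 Hz tone plus noise

# Non-parametric PSD estimates: raw periodogram vs Welch's averaged method.
f_per, p_per = signal.periodogram(x, fs=fs)
f_w, p_w = signal.welch(x, fs=fs, nperseg=1024)    # smoother, lower-variance estimate

# Spectral edge frequency: frequency below which 95% of the total power lies.
cumulative = np.cumsum(p_w)
sef95 = f_w[np.searchsorted(cumulative, 0.95 * cumulative[-1])]
print(sef95)
```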
This includes familiar entities such as visible light (perceived as color), musical notes (perceived as pitch), radio/TV (specified by their frequency, or sometimes wavelength) and even the regular rotation of the earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to a sine wave component. And additionally there may be peaks corresponding to harmonics of a fundamental peak, indicating a periodic signal which is not simply sinusoidal. Or a continuous spectrum may show narrow frequency intervals which are strongly enhanced corresponding to resonances, or frequency intervals containing almost zero power as would be produced by a notch filter. Electrical engineering The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote sensing technology. Electronic instruments called spectrum analyzers are used to observe and measure the power spectra of signals. The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density. Cosmology Primordial fluctuations, density variations in the early universe, are quantified by a power spectrum which gives the power of the variations as a function of spatial scale.
Physical sciences
Electromagnetic radiation
Physics
202696
https://en.wikipedia.org/wiki/Marina
Marina
A marina (from Spanish , Portuguese and Italian : "related to the sea") is a dock or basin with moorings and supplies for yachts and small boats. A marina differs from a port in that a marina does not handle large passenger ships or cargo from freighters. The word marina may also refer to an inland wharf on a river or canal that is used exclusively by non-industrial pleasure craft such as canal narrowboats. Emplacement Marinas may be located along the banks of rivers connecting to lakes or seas and may be inland. They are also located on coastal harbors (natural or man made) or coastal lagoons, either as stand alone facilities or within a port complex. History In the 19th century, the few existing pleasure craft shared the same facilities as trading and fishing vessels. The marina appeared in the 20th century with the popularization of yachting. Facilities and services A marina may have refuelling, washing and repair facilities, marine and boat chandlers, stores and restaurants. A marina may include ground facilities such as parking lots for vehicles and boat trailers. Slipways (or boat ramps) transfer a trailered boat into the water. A marina may have a travel lift, a specialised crane used for lifting heavier boats out of the water and transporting them around the hard stand. A marina may provide in- or out-of-water boat storage. Fee-based services such as parking, use of picnic areas, pubs, and clubhouses for showers are usually included in long-term rental agreements. Visiting yachtsmen usually have the option of buying each amenity from a fixed schedule of fees; arrangements can be as wide as a single use, such as a shower, or several weeks of temporary berthing. The right to use the facilities is frequently extended at overnight or period rates to visiting yachtsmen. Since marinas are often limited by available space, it may take years on a waiting list to get a permanent berth. Moorings and access Boats are moored on buoys, on fixed or floating walkways tied to an anchoring piling by a roller or ring mechanism (floating docks, pontoons). Buoys are cheaper to rent but less convenient than being able to walk from land to boat. Harbor shuttles (water taxis) or launches, may transfer people between the shore and boats moored on buoys. The alternative is a tender such as an inflatable boat. Facilities offering fuel, boat ramps and stores will normally have a common-use dock set aside for such short term parking needs. Where the tidal range is large, marinas may use locks to maintain the water level for several hours before and after low water. Economic organisation Marinas may be owned and operated by a private club, especially yacht clubs — but also as private enterprises or municipal facilities. Marinas may be standalone private businesses, components of a resort, or owned and operated by public entities.
Technology
Coastal infrastructure
null
202898
https://en.wikipedia.org/wiki/Atmosphere%20of%20Earth
Atmosphere of Earth
The atmosphere of Earth is composed of a layer of gas mixture that surrounds the Earth's planetary surface (both lands and oceans), known collectively as air, with variable quantities of suspended aerosols and particulates (which create weather features such as clouds and hazes), all retained by Earth's gravity. The atmosphere serves as a protective buffer between the Earth's surface and outer space, shields the surface from most meteoroids and ultraviolet solar radiation, keeps it warm and reduces diurnal temperature variation (temperature extremes between day and night) through heat retention (greenhouse effect), redistributes heat and moisture among different regions via air currents, and provides the chemical and climate conditions allowing life to exist and evolve on Earth. By mole fraction (i.e., by quantity of molecules), dry air contains 78.08% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and small amounts of other trace gases (see Composition below for more detail). Air also contains a variable amount of water vapor, on average around 1% at sea level, and 0.4% over the entire atmosphere. Earth's early atmosphere consisted of accreted gases from the solar nebula, but the atmosphere changed significantly over time, affected by many factors such as volcanism, impact events, weathering and the evolution of life (particularly the photoautotrophs). Recently, human activity has also contributed to atmospheric changes, such as climate change (mainly through deforestation and fossil fuel-related global warming), ozone depletion and acid deposition. The atmosphere has a mass of about 5.15 × 10^18 kg, three quarters of which is within about 11 km of the surface. The atmosphere becomes thinner with increasing altitude, with no definite boundary between the atmosphere and outer space. The Kármán line, at 100 km or 1.57% of Earth's radius, is often used as the border between the atmosphere and outer space. Atmospheric effects become noticeable during atmospheric reentry of spacecraft at an altitude of around 120 km. Several layers can be distinguished in the atmosphere based on characteristics such as temperature and composition, namely the troposphere, stratosphere, mesosphere, thermosphere (formally the ionosphere) and exosphere. Air composition, temperature and atmospheric pressure vary with altitude. Air suitable for use in photosynthesis by terrestrial plants and respiration of terrestrial animals is found within the troposphere. The study of Earth's atmosphere and its processes is called atmospheric science (aerology), and includes multiple subfields, such as climatology and atmospheric physics. Early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann. The study of the historic atmosphere is called paleoclimatology. Composition The three major constituents of Earth's atmosphere are nitrogen, oxygen, and argon. Water vapor accounts for roughly 0.25% of the atmosphere by mass. The concentration of water vapor (a greenhouse gas) varies significantly from around 10 ppm by mole fraction in the coldest portions of the atmosphere to as much as 5% by mole fraction in hot, humid air masses, and concentrations of other atmospheric gases are typically quoted in terms of dry air (without water vapor). The remaining gases are often referred to as trace gases, among which are other greenhouse gases, principally carbon dioxide, methane, nitrous oxide, and ozone. Besides argon, the other noble gases neon, helium, krypton, and xenon are also present.
Filtered air includes trace amounts of many other chemical compounds. Many substances of natural origin may be present in locally and seasonally variable small amounts as aerosols in an unfiltered air sample, including dust of mineral and organic composition, pollen and spores, sea spray, and volcanic ash. Various industrial pollutants also may be present as gases or aerosols, such as chlorine (elemental or in compounds), fluorine compounds and elemental mercury vapor. Sulfur compounds such as hydrogen sulfide and sulfur dioxide (SO2) may be derived from natural sources or from industrial air pollution. The average molecular weight of dry air, which can be used to calculate densities or to convert between mole fraction and mass fraction, is about 28.946 or 28.964 g/mol. This is decreased when the air is humid. The relative concentration of gases remains constant until about . Stratification In general, air pressure and density decrease with altitude in the atmosphere. However, temperature has a more complicated profile with altitude and may remain relatively constant or even increase with altitude in some regions (see the temperature section). Because the general pattern of the temperature/altitude profile, or lapse rate, is constant and measurable by means of instrumented balloon soundings, the temperature behavior provides a useful metric to distinguish atmospheric layers. This atmospheric stratification divides the Earth's atmosphere into five main layers: Exosphere: Thermosphere: Mesosphere: Stratosphere: Troposphere: Exosphere The exosphere is the outermost layer of Earth's atmosphere (though it is so tenuous that some scientists consider it to be part of interplanetary space rather than part of the atmosphere). It extends from the thermopause (also known as the "exobase") at the top of the thermosphere to a poorly defined boundary with the solar wind and interplanetary medium. The altitude of the exobase varies from about to about in times of higher incoming solar radiation. The upper limit varies depending on the definition. Various authorities consider it to end at about or about —about halfway to the moon, where the influence of Earth's gravity is about the same as radiation pressure from sunlight. The geocorona visible in the far ultraviolet (caused by neutral hydrogen) extends to at least . This layer is mainly composed of extremely low densities of hydrogen, helium and several heavier molecules including nitrogen, oxygen and carbon dioxide closer to the exobase. The atoms and molecules are so far apart that they can travel hundreds of kilometres without colliding with one another. Thus, the exosphere no longer behaves like a gas, and the particles constantly escape into space. These free-moving particles follow ballistic trajectories and may migrate in and out of the magnetosphere or the solar wind. Every second, the Earth loses about 3 kg of hydrogen, 50 g of helium, and much smaller amounts of other constituents. The exosphere is too far above Earth for meteorological phenomena to be possible. However, Earth's auroras—the aurora borealis (northern lights) and aurora australis (southern lights)—sometimes occur in the lower part of the exosphere, where they overlap into the thermosphere. The exosphere contains many of the artificial satellites that orbit Earth. Thermosphere The thermosphere is the second-highest layer of Earth's atmosphere. 
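The quoted mean molecular weight of dry air can be reproduced from the mole fractions given earlier; the following is a minimal sketch that neglects the remaining trace gases.

```python
# Mole fraction and molar mass (g/mol) of the main constituents quoted above.
dry_air = {
    "N2":  (0.7808, 28.014),
    "O2":  (0.2095, 31.998),
    "Ar":  (0.0093, 39.948),
    "CO2": (0.0004, 44.009),
}

mean_molar_mass = sum(fraction * molar_mass for fraction, molar_mass in dry_air.values())
print(f"{mean_molar_mass:.3f} g/mol")   # ~28.97 g/mol, close to the 28.946-28.964 figure above
```

Adding water vapor lowers this value, since H2O (about 18 g/mol) is lighter than the average dry-air molecule, which is why humid air has a smaller mean molecular weight, as noted above.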
It extends from the mesopause (which separates it from the mesosphere) at an altitude of about up to the thermopause at an altitude range of . The height of the thermopause varies considerably due to changes in solar activity. Because the thermopause lies at the lower boundary of the exosphere, it is also referred to as the exobase. The lower part of the thermosphere, from above Earth's surface, contains the ionosphere. The temperature of the thermosphere gradually increases with height and can rise as high as , though the gas molecules are so far apart that its temperature in the usual sense is not very meaningful. The air is so rarefied that an individual molecule (of oxygen, for example) travels an average of between collisions with other molecules. Although the thermosphere has a high proportion of molecules with high energy, it would not feel hot to a human in direct contact, because its density is too low to conduct a significant amount of energy to or from the skin. This layer is completely cloudless and free of water vapor. However, non-hydrometeorological phenomena such as the aurora borealis and aurora australis are occasionally seen in the thermosphere. The International Space Station orbits in this layer, between . It is this layer where many of the satellites orbiting the Earth are present. Mesosphere The mesosphere is the third highest layer of Earth's atmosphere, occupying the region above the stratosphere and below the thermosphere. It extends from the stratopause at an altitude of about to the mesopause at above sea level. Temperatures drop with increasing altitude to the mesopause that marks the top of this middle layer of the atmosphere. It is the coldest place on Earth and has an average temperature around . Just below the mesopause, the air is so cold that even the very scarce water vapor at this altitude can condense into polar-mesospheric noctilucent clouds of ice particles. These are the highest clouds in the atmosphere and may be visible to the naked eye if sunlight reflects off them about an hour or two after sunset or similarly before sunrise. They are most readily visible when the Sun is around 4 to 16 degrees below the horizon. Lightning-induced discharges known as transient luminous events (TLEs) occasionally form in the mesosphere above tropospheric thunderclouds. The mesosphere is also the layer where most meteors burn up upon atmospheric entrance. It is too high above Earth to be accessible to jet-powered aircraft and balloons, and too low to permit orbital spacecraft. The mesosphere is mainly accessed by sounding rockets and rocket-powered aircraft. Stratosphere The stratosphere is the second-lowest layer of Earth's atmosphere. It lies above the troposphere and is separated from it by the tropopause. This layer extends from the top of the troposphere at roughly above Earth's surface to the stratopause at an altitude of about . The atmospheric pressure at the top of the stratosphere is roughly 1/1000 the pressure at sea level. It contains the ozone layer, which is the part of Earth's atmosphere that contains relatively high concentrations of that gas. The stratosphere defines a layer in which temperatures rise with increasing altitude. This rise in temperature is caused by the absorption of ultraviolet radiation (UV) from the Sun by the ozone layer, which restricts turbulence and mixing. Although the temperature may be at the tropopause, the top of the stratosphere is much warmer, and may be near 0 °C. 
The stratospheric temperature profile creates very stable atmospheric conditions, so the stratosphere lacks the weather-producing air turbulence that is so prevalent in the troposphere. Consequently, the stratosphere is almost completely free of clouds and other forms of weather. However, polar stratospheric or nacreous clouds are occasionally seen in the lower part of this layer of the atmosphere where the air is coldest. The stratosphere is the highest layer that can be accessed by jet-powered aircraft. Troposphere The troposphere is the lowest layer of Earth's atmosphere. It extends from Earth's surface to an average height of about , although this altitude varies from about at the geographic poles to at the Equator, with some variation due to weather. The troposphere is bounded above by the tropopause, a boundary marked in most places by a temperature inversion (i.e. a layer of relatively warm air above a colder one), and in others by a zone that is isothermal with height. Although variations do occur, the temperature usually declines with increasing altitude in the troposphere because the troposphere is mostly heated through energy transfer from the surface. Thus, the lowest part of the troposphere (i.e. Earth's surface) is typically the warmest section of the troposphere. This promotes vertical mixing (hence, the origin of its name in the Greek word τρόπος, tropos, meaning "turn"). The troposphere contains roughly 80% of the mass of Earth's atmosphere. The troposphere is denser than all its overlying layers because a larger atmospheric weight sits on top of the troposphere and causes it to be most severely compressed. Fifty percent of the total mass of the atmosphere is located in the lower of the troposphere. Nearly all atmospheric water vapor or moisture is found in the troposphere, so it is the layer where most of Earth's weather takes place. It has basically all the weather-associated cloud genus types generated by active wind circulation, although very tall cumulonimbus thunder clouds can penetrate the tropopause from below and rise into the lower part of the stratosphere. Most conventional aviation activity takes place in the troposphere, and it is the only layer accessible by propeller-driven aircraft. Other layers Within the five principal layers above, which are largely determined by temperature, several secondary layers may be distinguished by other properties: The ozone layer is contained within the stratosphere. In this layer ozone concentrations are about 2 to 8 parts per million, which is much higher than in the lower atmosphere but still very small compared to the main components of the atmosphere. It is mainly located in the lower portion of the stratosphere from about , though the thickness varies seasonally and geographically. About 90% of the ozone in Earth's atmosphere is contained in the stratosphere. The ionosphere is a region of the atmosphere that is ionized by solar radiation. It is responsible for auroras. During daytime hours, it stretches from and includes the mesosphere, thermosphere, and parts of the exosphere. However, ionization in the mesosphere largely ceases during the night, so auroras are normally seen only in the thermosphere and lower exosphere. The ionosphere forms the inner edge of the magnetosphere. It has practical importance because it influences, for example, radio propagation on Earth. The homosphere and heterosphere are defined by whether the atmospheric gases are well mixed. 
The surface-based homosphere includes the troposphere, stratosphere, mesosphere, and the lowest part of the thermosphere, where the chemical composition of the atmosphere does not depend on molecular weight because the gases are mixed by turbulence. This relatively homogeneous layer ends at the turbopause found at about , the very edge of space itself as accepted by the FAI, which places it about above the mesopause. Above this altitude lies the heterosphere, which includes the exosphere and most of the thermosphere. Here, the chemical composition varies with altitude. This is because the distance that particles can move without colliding with one another is large compared with the size of motions that cause mixing. This allows the gases to stratify by molecular weight, with the heavier ones, such as oxygen and nitrogen, present only near the bottom of the heterosphere. The upper part of the heterosphere is composed almost completely of hydrogen, the lightest element. The planetary boundary layer is the part of the troposphere that is closest to Earth's surface and is directly affected by it, mainly through turbulent diffusion. During the day the planetary boundary layer usually is well-mixed, whereas at night it becomes stably stratified with weak or intermittent mixing. The depth of the planetary boundary layer ranges from as little as about on clear, calm nights to or more during the afternoon in dry regions. The average temperature of the atmosphere at Earth's surface is or , depending on the reference. Physical properties Pressure and thickness The average atmospheric pressure at sea level is defined by the International Standard Atmosphere as . This is sometimes referred to as a unit of standard atmospheres (atm). Total atmospheric mass is , about 2.5% less than would be inferred from the average sea-level pressure and Earth's area of 51007.2 megahectares, this portion being displaced by Earth's mountainous terrain. Atmospheric pressure is the total weight of the air above unit area at the point where the pressure is measured. Thus air pressure varies with location and weather. If the entire mass of the atmosphere had a uniform density equal to sea-level density (about 1.2 kg/m3) from sea level upwards, it would terminate abruptly at an altitude of . Air pressure actually decreases exponentially with altitude, for altitudes up to around , dropping by half every , or by a factor of 1/e ≈ 0.368 every , which is called the scale height. However, the atmosphere is more accurately modeled with a customized equation for each layer that takes gradients of temperature, molecular composition, solar radiation and gravity into account. At heights over 100 km, an atmosphere may no longer be well mixed. Then each chemical species has its own scale height. In summary, the mass of Earth's atmosphere is distributed approximately as follows: 50% is below , 90% is below , 99.99997% is below , the Kármán line. By international convention, this marks the beginning of space where human travelers are considered astronauts. By comparison, the summit of Mount Everest is at ; commercial airliners typically cruise between , where the lower density and temperature of the air improve fuel economy; weather balloons reach and above; and the highest X-15 flight in 1963 reached . Even above the Kármán line, significant atmospheric effects such as auroras still occur. Meteors begin to glow in this region, though the larger ones may not burn up until they penetrate more deeply. 
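The exponential pressure decrease and the sea-level density mentioned above can be sketched with the isothermal barometric formula and the ideal gas law. The scale height used below (about 7.6 km) is an assumed mean value; as the text notes, a realistic profile requires layer-by-layer models with temperature gradients.

```python
import math

R = 8.314462          # J/(mol K), universal gas constant
M_AIR = 0.028964      # kg/mol, mean molar mass of dry air (figure from above)
P0 = 101_325.0        # Pa, standard sea-level pressure
H_SCALE = 7_640.0     # m, assumed mean scale height (~7.6 km)

def pressure(h_m: float) -> float:
    """Isothermal barometric approximation: p(h) = p0 * exp(-h / H)."""
    return P0 * math.exp(-h_m / H_SCALE)

def density(p_pa: float, t_k: float) -> float:
    """Ideal-gas equation of state for dry air: rho = p * M / (R * T)."""
    return p_pa * M_AIR / (R * t_k)

print(density(P0, 288.15))        # ~1.22 kg/m^3, the sea-level density quoted above
for h in (0, 5_600, 16_000, 31_000):
    print(h, round(pressure(h)))  # pressure falls off roughly exponentially with altitude
```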
The various layers of Earth's ionosphere, important to HF radio propagation, begin below 100 km and extend beyond 500 km. By comparison, the International Space Station and Space Shuttle typically orbit at 350–400 km, within the F-layer of the ionosphere, where they encounter enough atmospheric drag to require reboosts every few months; otherwise, orbital decay occurs, resulting in a return to Earth. Depending on solar activity, satellites can experience noticeable atmospheric drag at altitudes as high as 700–800 km. Temperature The division of the atmosphere into layers mostly by reference to temperature is discussed above. Temperature decreases with altitude starting at sea level, but variations in this trend begin above 11 km, where the temperature stabilizes over a large vertical distance through the tropopause and lower stratosphere. In the stratosphere, starting above about 20 km, the temperature increases with height, due to heating within the ozone layer caused by the capture of significant ultraviolet radiation from the Sun by the dioxygen and ozone gas in this region. Still another region of increasing temperature with altitude occurs at very high altitudes, in the aptly-named thermosphere above 90 km. Speed of sound Because in an ideal gas of constant composition the speed of sound depends only on temperature and not on pressure or density, the variation of the speed of sound in the atmosphere with altitude takes on the form of the complicated temperature profile (see illustration to the right), and does not mirror altitudinal changes in density or pressure. Density and mass The density of air at sea level is about 1.2 kg/m3 (1.2 g/L, 0.0012 g/cm3). Density is not measured directly but is calculated from measurements of temperature, pressure and humidity using the equation of state for air (a form of the ideal gas law). Atmospheric density decreases as the altitude increases. This variation can be approximately modeled using the barometric formula. More sophisticated models are used to predict the orbital decay of satellites. The average mass of the atmosphere is about 5 quadrillion tonnes, or 1/1,200,000 the mass of Earth. According to the American National Center for Atmospheric Research, "The total mean mass of the atmosphere is 5.1480 kg with an annual range due to water vapor of 1.2 or 1.5 kg, depending on whether surface pressure or water vapor data are used; somewhat smaller than the previous estimate. The mean mass of water vapor is estimated as 1.27 kg and the dry air mass as 5.1352 ±0.0003 kg." Tabulated properties Optical properties Solar radiation (or sunlight) is the energy Earth receives from the Sun. Earth also emits radiation back into space, but at longer wavelengths that humans cannot see. Part of the incoming and emitted radiation is absorbed or reflected by the atmosphere. In May 2017, glints of light, seen as twinkling from an orbiting satellite a million miles away, were found to be reflected light from ice crystals in the atmosphere. Scattering When light passes through Earth's atmosphere, photons interact with it through scattering. If the light does not interact with the atmosphere, it is called direct radiation and is what you would see if you looked directly at the Sun. Indirect radiation is light that has been scattered in the atmosphere. For example, on an overcast day when you cannot see your shadow, there is no direct radiation reaching you; it has all been scattered. 
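As a rough sketch of the two relationships just described, the snippet below computes dry-air density from pressure and temperature with the ideal-gas equation of state, and the speed of sound from temperature alone. The specific gas constant and heat-capacity ratio of dry air are assumed standard values rather than numbers taken from this article.

```python
import math

R_DRY = 287.05   # assumed specific gas constant of dry air, J/(kg*K)
GAMMA = 1.4      # assumed heat-capacity ratio of dry air

def air_density(pressure_pa: float, temperature_k: float) -> float:
    """Ideal-gas form of the equation of state for dry air: rho = p / (R * T)."""
    return pressure_pa / (R_DRY * temperature_k)

def speed_of_sound(temperature_k: float) -> float:
    """For an ideal gas of fixed composition the speed of sound depends only on T."""
    return math.sqrt(GAMMA * R_DRY * temperature_k)

print(air_density(101_325, 288.15))  # ~1.2 kg/m3 near sea level, matching the text
print(speed_of_sound(288.15))        # ~340 m/s near the surface
print(speed_of_sound(216.65))        # slower in the colder air near 11 km
```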
As another example, due to a phenomenon called Rayleigh scattering, shorter (blue) wavelengths scatter more easily than longer (red) wavelengths. This is why the sky looks blue; you are seeing scattered blue light. This is also why sunsets are red. Because the Sun is close to the horizon, the Sun's rays pass through more atmosphere than normal before reaching your eye. Much of the blue light has been scattered out, leaving the red light in a sunset. Absorption Different molecules absorb different wavelengths of radiation. For example, O2 and O3 absorb almost all radiation with wavelengths shorter than 300 nanometres. Water (H2O) absorbs at many wavelengths above 700 nm. When a molecule absorbs a photon, it increases the energy of the molecule. This heats the atmosphere, but the atmosphere also cools by emitting radiation, as discussed below. The combined absorption spectra of the gases in the atmosphere leave "windows" of low opacity, allowing the transmission of only certain bands of light. The optical window runs from around 300 nm (ultraviolet-C) up into the range humans can see, the visible spectrum (commonly called light), at roughly 400–700 nm and continues to the infrared to around 1100 nm. There are also infrared and radio windows that transmit some infrared and radio waves at longer wavelengths. For example, the radio window runs from about one centimetre to about eleven-metre waves. Emission Emission is the opposite of absorption: it occurs when an object emits radiation. Objects tend to emit amounts and wavelengths of radiation depending on their "black body" emission curves; therefore, hotter objects tend to emit more radiation, with shorter wavelengths, while colder objects emit less radiation, with longer wavelengths. For example, the Sun is approximately , so its radiation peaks near 500 nm and is visible to the human eye, while Earth is approximately , so its radiation peaks near 10,000 nm and is much too long in wavelength to be visible to humans. Because of its temperature, the atmosphere emits infrared radiation. For example, on clear nights Earth's surface cools down faster than on cloudy nights. This is because clouds (H2O) are strong absorbers and emitters of infrared radiation. This is also why it becomes colder at night at higher elevations. The greenhouse effect is directly related to this absorption and emission effect. Some gases in the atmosphere absorb and emit infrared radiation, but do not interact with sunlight in the visible spectrum. Common examples of these are CO2 and H2O. Refractive index The refractive index of air is close to, but just greater than, 1. Systematic variations in the refractive index can lead to the bending of light rays over long optical paths. One example is that, under some circumstances, observers on board ships can see other vessels just over the horizon because light is refracted in the same direction as the curvature of Earth's surface. The refractive index of air depends on temperature, giving rise to refraction effects when the temperature gradient is large. An example of such effects is the mirage. Circulation Atmospheric circulation is the large-scale movement of air through the troposphere, and the means (with ocean circulation) by which heat is distributed around Earth. The large-scale structure of the atmospheric circulation varies from year to year, but the basic structure remains fairly constant because it is determined by Earth's rotation rate and the difference in solar radiation between the equator and poles. 
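Two of the relationships above lend themselves to a quick worked example: Rayleigh scattering strength varies as 1/λ^4, which is why blue light is scattered more strongly than red, and a black body's emission peak follows Wien's displacement law, λ_max = b/T. The wavelengths and temperatures below are illustrative assumptions consistent with the discussion, not values quoted from it.

```python
# Rayleigh scattering: scattered intensity is proportional to 1 / wavelength**4
blue, red = 450e-9, 650e-9                 # assumed representative wavelengths, m
print(f"blue/red scattering ratio: {(red / blue) ** 4:.1f}")   # ~4.3x stronger for blue

# Wien's displacement law: peak emission wavelength = b / T
WIEN_B = 2.898e-3                          # Wien's displacement constant, m*K
for body, temperature_k in (("Sun", 5800.0), ("Earth", 288.0)):   # assumed temperatures, K
    peak_nm = WIEN_B / temperature_k * 1e9
    print(f"{body}: emission peak near {peak_nm:,.0f} nm")
# The Sun peaks near ~500 nm (visible); Earth near ~10,000 nm (thermal infrared)
```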
Evolution of Earth's atmosphere Earliest atmosphere The first atmosphere, during the Early Earth's Hadean eon, consisted of gases in the solar nebula, primarily hydrogen, and probably simple hydrides such as those now found in the gas giants (Jupiter and Saturn), notably water vapor, methane and ammonia. During this earliest era, the Moon-forming collision and numerous impacts with large meteorites heated the atmosphere, driving off the most volatile gases. The collision with Theia, in particular, melted and ejected large portions of Earth's mantle and crust and outgassed significant amounts of steam which eventually cooled and condensed to contribute to ocean water at the end of the Hadean. Second atmosphere The increasing solidification of Earth's crust at the end of the Hadean closed off most of the advective heat transfer to the surface, causing the atmosphere to cool, which condensed most of the water vapor out of the air precipitating into a superocean. Further outgassing from volcanism, supplemented by gases introduced by huge asteroids during the Late Heavy Bombardment, created the subsequent Archean atmosphere, which consisted largely of nitrogen plus carbon dioxide, methane and inert gases. A major part of carbon dioxide emissions dissolved in water and reacted with metals such as calcium and magnesium during weathering of crustal rocks to form carbonates that were deposited as sediments. Water-related sediments have been found that date from as early as 3.8 billion years ago. About 3.4 billion years ago, nitrogen formed the major component of the then-stable "second atmosphere". The influence of the evolution of life has to be taken into account rather soon in the history of the atmosphere because hints of earliest life forms appeared as early as 3.5 billion years ago. How Earth at that time maintained a climate warm enough for liquid water and life, if the early Sun put out 30% lower solar radiance than today, is a puzzle known as the "faint young Sun paradox". The geological record however shows a continuous relatively warm surface during the complete early temperature record of Earth – with the exception of one cold glacial phase about 2.4 billion years ago. In the late Neoarchean, an oxygen-containing atmosphere began to develop, apparently due to a billion years of cyanobacterial photosynthesis (see Great Oxygenation Event), which have been found as stromatolite fossils from 2.7 billion years ago. The early basic carbon isotopy (isotope ratio proportions) strongly suggests conditions similar to the current, and that the fundamental features of the carbon cycle became established as early as 4 billion years ago. Ancient sediments in the Gabon dating from between about 2.15 and 2.08 billion years ago provide a record of Earth's dynamic oxygenation evolution. These fluctuations in oxygenation were likely driven by the Lomagundi carbon isotope excursion. Third atmosphere The constant re-arrangement of continents by plate tectonics influences the long-term evolution of the atmosphere by transferring carbon dioxide to and from large continental carbonate stores. Free oxygen did not exist in the atmosphere until about 2.4 billion years ago during the Great Oxygenation Event and its appearance is indicated by the end of banded iron formations (which signals the depletion of substrates that can react with oxygen to produce ferric deposits) during the early Proterozoic eon. 
Before this time, any oxygen produced by cyanobacterial photosynthesis would be readily removed by the oxidation of reducing substances on the Earth's surface, notably ferrous iron, sulfur and atmospheric methane. Free oxygen molecules did not start to accumulate in the atmosphere until the rate of production of oxygen began to exceed the availability of reductant materials that removed oxygen. This point signifies a shift from a reducing atmosphere to an oxidizing atmosphere. O2 showed major variations during the Proterozoic, including a billion-year period of euxinia, until reaching a steady state of more than 15% by the end of the Precambrian. The rise of the more robust eukaryotic photoautotrophs (green and red algae) injected further oxygenation into the air, especially after the end of the Cryogenian global glaciation, which was followed by an evolutionary radiation event during the Ediacaran period known as the Avalon explosion, where complex metazoan life forms (including the earliest cnidarians, placozoans and bilaterians) first proliferated. The following time span from 539 million years ago to the present day is the Phanerozoic eon, during the earliest period of which, the Cambrian, more actively moving metazoan life began to appear and rapidly diversify in another radiation event called the Cambrian explosion, whose locomotive metabolism was fuelled by the rising oxygen level. The amount of oxygen in the atmosphere has fluctuated over the last 600 million years, reaching a peak of about 30% around 280 million years ago during the Carboniferous period, significantly higher than today's 21%. Two main processes govern changes in the atmosphere: the evolution of plants and their increasing role in carbon fixation, and the consumption of oxygen by rapidly diversifying animal faunas and also by plants for photorespiration and their own metabolic needs at night. The breakdown of pyrite and volcanic eruptions release sulfur into the atmosphere, which reacts with oxygen and hence reduces the amount of oxygen in the atmosphere. However, volcanic eruptions also release carbon dioxide, which can fuel oxygenic photosynthesis by terrestrial and aquatic plants. The cause of the variation of the amount of oxygen in the atmosphere is not precisely understood. Periods with more oxygen in the atmosphere were often associated with more rapid development of animals. Air pollution is the introduction of airborne chemicals, particulate matter or biological materials that cause harm or discomfort to organisms. Population growth, industrialization and motorization of human societies have significantly increased the amount of airborne pollutants in the Earth's atmosphere, causing noticeable problems such as smog, acid rain and pollution-related diseases. The depletion of the stratospheric ozone layer, which shields the surface from harmful ultraviolet radiation, is also caused by air pollution, chiefly from chlorofluorocarbons and other ozone-depleting substances. Since 1750, human activity, especially after the Industrial Revolution, has increased the concentrations of various greenhouse gases, most importantly carbon dioxide, methane and nitrous oxide. Greenhouse gas emissions, coupled with deforestation and destruction of wetlands via logging and land development, have caused an observed rise in global temperatures, with global average surface temperatures being higher in the 2011–2020 decade than they were in 1850. 
This has raised concerns about man-made climate change, which can have significant environmental impacts such as sea level rise, ocean acidification, glacial retreat (which threatens water security), more frequent extreme weather events and wildfires, ecological collapse and mass die-offs of wildlife.
Physical sciences
Earth science
null
202899
https://en.wikipedia.org/wiki/Atmosphere
Atmosphere
An atmosphere () is a layer of gases that envelops an astronomical object, held in place by the gravity of the object. A planet retains an atmosphere when its gravity is strong and the temperature of the atmosphere is low. A stellar atmosphere is the outer region of a star, which includes the layers above the opaque photosphere; stars of low temperature might have outer atmospheres containing compound molecules. The atmosphere of Earth is composed of nitrogen (78%), oxygen (21%), argon (0.9%), carbon dioxide (0.04%) and trace gases. Most organisms use oxygen for respiration; lightning and bacteria perform nitrogen fixation which produces ammonia that is used to make nucleotides and amino acids; plants, algae, and cyanobacteria use carbon dioxide for photosynthesis. The layered composition of the atmosphere minimises the harmful effects of sunlight, ultraviolet radiation, solar wind, and cosmic rays and thus protects the organisms from genetic damage. The current composition of the atmosphere of the Earth is the product of billions of years of biochemical modification of the paleoatmosphere by living organisms. Occurrence and compositions Origins Atmospheres are clouds of gas bound to and engulfing an astronomical focal point of sufficiently dominating mass, adding to its mass, possibly escaping from it or collapsing into it. Because of the latter, such a planetary nucleus can develop from interstellar molecular clouds or protoplanetary disks into rocky astronomical objects with varyingly thick atmospheres, gas giants or fusors. Composition and thickness are originally determined by the stellar nebula's chemistry and temperature, but can also be the product of processes within the astronomical body that outgas a different atmosphere. Compositions The atmospheres of the planets Venus and Mars are principally composed of carbon dioxide, with smaller amounts of nitrogen, argon and oxygen. The composition of Earth's atmosphere is determined by the by-products of the life that it sustains. Dry air (mixture of gases) from Earth's atmosphere contains 78.08% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and traces of hydrogen, helium, and other "noble" gases (by volume), but generally a variable amount of water vapor is also present, on average about 1% at sea level. The low temperatures and higher gravity of the Solar System's giant planets—Jupiter, Saturn, Uranus and Neptune—allow them more readily to retain gases with low molecular masses. These planets have hydrogen–helium atmospheres, with trace amounts of more complex compounds. Two satellites of the outer planets possess significant atmospheres. Titan, a moon of Saturn, and Triton, a moon of Neptune, have atmospheres mainly of nitrogen. When in the part of its orbit closest to the Sun, Pluto has an atmosphere of nitrogen and methane similar to Triton's, but these gases are frozen when it is farther from the Sun. Other bodies within the Solar System have extremely thin atmospheres not in equilibrium. These include the Moon (sodium gas), Mercury (sodium gas), Europa (oxygen), Io (sulfur), and Enceladus (water vapor). The first exoplanet whose atmospheric composition was determined is HD 209458b, a gas giant with a close orbit around a star in the constellation Pegasus. Its atmosphere is heated to temperatures over 1,000 K, and is steadily escaping into space. Hydrogen, oxygen, carbon and sulfur have been detected in the planet's inflated atmosphere. 
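The dry-air composition figures just quoted can be combined into a mean molar mass, a quantity that reappears in the pressure and scale-height discussion below. This is a small illustrative calculation; the molar masses of the individual gases are assumed standard reference values, not numbers given in this article.

```python
# Volume fraction from the text, paired with an assumed standard molar mass (g/mol)
dry_air = {
    "N2":  (0.7808, 28.014),
    "O2":  (0.2095, 31.998),
    "Ar":  (0.0093, 39.948),
    "CO2": (0.0004, 44.009),
}

mean_molar_mass = sum(fraction * molar_mass for fraction, molar_mass in dry_air.values())
print(f"Mean molar mass of dry air: {mean_molar_mass:.2f} g/mol")   # roughly 28.96 g/mol
```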
Atmospheres in the Solar System Atmosphere of the Sun Atmosphere of Mercury Atmosphere of Venus Atmosphere of Earth Atmosphere of the Moon Atmosphere of Mars Atmosphere of Ceres Atmosphere of Jupiter Atmosphere of Io Atmosphere of Callisto Atmosphere of Europa Atmosphere of Ganymede Atmosphere of Saturn Atmosphere of Titan Atmosphere of Enceladus Atmosphere of Uranus Atmosphere of Titania Atmosphere of Neptune Atmosphere of Triton Atmosphere of Pluto Structure of atmosphere Earth The atmosphere of Earth is composed of layers with different properties, such as specific gaseous composition, temperature, and pressure. The troposphere is the lowest layer of the atmosphere. This extends from the planetary surface to the bottom of the stratosphere. The troposphere contains 75–80% of the mass of the atmosphere, and is the atmospheric layer wherein the weather occurs; the height of the troposphere varies between 17 km at the equator and 7.0 km at the poles. The stratosphere extends from the top of the troposphere to the bottom of the mesosphere, and contains the ozone layer, at an altitude between 15 km and 35 km. It is the atmospheric layer that absorbs most of the ultraviolet radiation that Earth receives from the Sun. The mesosphere ranges from 50 km to 85 km and is the layer wherein most meteors are incinerated before reaching the surface. The thermosphere extends from an altitude of 85 km to the base of the exosphere at 690 km and contains the ionosphere, where solar radiation ionizes the atmosphere. The density of the ionosphere is greater at short distances from the planetary surface in the daytime and decreases as the ionosphere rises at night-time, thereby allowing a greater range of radio frequencies to travel greater distances. The exosphere begins at 690 to 1,000 km from the surface, and extends to roughly 10,000 km, where it interacts with the magnetosphere of Earth. Pressure Atmospheric pressure is the force (per unit-area) perpendicular to a unit-area of planetary surface, as determined by the weight of the vertical column of atmospheric gases. In said atmospheric model, the atmospheric pressure, the weight of the mass of the gas, decreases at high altitude because of the diminishing mass of the gas above the point of barometric measurement. The units of air pressure are based upon the standard atmosphere (atm), which is 101,325 Pa (equivalent to 760 Torr or 14.696 psi). The height at which the atmospheric pressure declines by a factor of e (an irrational number equal to 2.71828) is called the scale height (H). For an atmosphere of uniform temperature, the scale height is proportional to the atmospheric temperature and is inversely proportional to the product of the mean molecular mass of dry air, and the local acceleration of gravity at the point of barometric measurement. Escape Surface gravity differs significantly among the planets. For example, the large gravitational force of the giant planet Jupiter retains light gases such as hydrogen and helium that escape from objects with lower gravity. Secondly, the distance from the Sun determines the energy available to heat atmospheric gas to the point where some fraction of its molecules' thermal motion exceed the planet's escape velocity, allowing those to escape a planet's gravitational grasp. Thus, distant and cold Titan, Triton, and Pluto are able to retain their atmospheres despite their relatively low gravities. 
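Before continuing with atmospheric escape, the scale-height relation just stated, proportional to temperature and inversely proportional to the mean molecular mass times gravity, can be written as H = RT/(Mg) and evaluated for illustrative cases. The temperatures, molar masses and surface gravities below are assumed round reference values, not figures from this article.

```python
R = 8.314  # universal gas constant, J/(mol*K)

def scale_height_m(temperature_k: float, molar_mass_kg_per_mol: float, gravity: float) -> float:
    """Scale height of an isothermal atmosphere: H = R*T / (M*g)."""
    return R * temperature_k / (molar_mass_kg_per_mol * gravity)

# Assumed illustrative values: Earth with dry air, Jupiter with a hydrogen-helium mix
print(scale_height_m(288.0, 0.0290, 9.81))    # Earth: on the order of 8 km
print(scale_height_m(165.0, 0.0023, 24.8))    # Jupiter: a few tens of kilometres
```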
Since a collection of gas molecules may be moving at a wide range of velocities, there will always be some fast enough to produce a slow leakage of gas into space. Lighter molecules move faster than heavier ones with the same thermal kinetic energy, and so gases of low molecular weight are lost more rapidly than those of high molecular weight. It is thought that Venus and Mars may have lost much of their water when, after being photodissociated into hydrogen and oxygen by solar ultraviolet radiation, the hydrogen escaped. Earth's magnetic field helps to prevent this, as, normally, the solar wind would greatly enhance the escape of hydrogen. However, over the past 3 billion years Earth may have lost gases through the magnetic polar regions due to auroral activity, including a net 2% of its atmospheric oxygen. The net effect, taking the most important escape processes into account, is that an intrinsic magnetic field does not protect a planet from atmospheric escape and that for some magnetizations the presence of a magnetic field works to increase the escape rate. Other mechanisms that can cause atmosphere depletion are solar wind-induced sputtering, impact erosion, weathering, and sequestration—sometimes referred to as "freezing out"—into the regolith and polar caps. Terrain Atmospheres have dramatic effects on the surfaces of rocky bodies. Objects that have no atmosphere, or that have only an exosphere, have terrain that is covered in craters. Without an atmosphere, the planet has no protection from meteoroids, and all of them collide with the surface as meteorites and create craters. For planets with a significant atmosphere, most meteoroids burn up as meteors before hitting a planet's surface. When meteoroids do impact, the effects are often erased by the action of wind. Wind erosion is a significant factor in shaping the terrain of rocky planets with atmospheres, and over time can erase the effects of both craters and volcanoes. In addition, since liquids cannot exist without pressure, an atmosphere allows liquid to be present at the surface, resulting in lakes, rivers and oceans. Earth and Titan are known to have liquids at their surface and terrain on the planet suggests that Mars had liquid on its surface in the past. Outside the Solar System Atmosphere of HD 209458 b Circulation The circulation of the atmosphere occurs due to thermal differences when convection becomes a more efficient transporter of heat than thermal radiation. On planets where the primary heat source is solar radiation, excess heat in the tropics is transported to higher latitudes. When a planet generates a significant amount of heat internally, such as is the case for Jupiter, convection in the atmosphere can transport thermal energy from the higher temperature interior up to the surface. Importance From the perspective of a planetary geologist, the atmosphere acts to shape a planetary surface. Wind picks up dust and other particles which, when they collide with the terrain, erode the relief and leave deposits (eolian processes). Frost and precipitations, which depend on the atmospheric composition, also influence the relief. Climate changes can influence a planet's geological history. Conversely, studying the surface of the Earth leads to an understanding of the atmosphere and climate of other planets. For a meteorologist, the composition of the Earth's atmosphere is a factor affecting the climate and its variations. 
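The thermal-escape reasoning above, that lighter molecules move faster at a given temperature and are therefore lost first, can be made concrete by comparing a typical thermal speed, sqrt(3kT/m), with a planet's escape velocity, sqrt(2GM/r). The masses, radius and temperature used below are assumed reference values chosen only to illustrate the comparison.

```python
import math

K_B = 1.381e-23   # Boltzmann constant, J/K
G = 6.674e-11     # gravitational constant, m^3/(kg*s^2)

def thermal_speed(temperature_k: float, molecule_mass_kg: float) -> float:
    """Root-mean-square thermal speed: lighter molecules move faster at the same T."""
    return math.sqrt(3 * K_B * temperature_k / molecule_mass_kg)

def escape_velocity(body_mass_kg: float, radius_m: float) -> float:
    return math.sqrt(2 * G * body_mass_kg / radius_m)

m_h2, m_n2 = 3.35e-27, 4.65e-26                 # assumed molecular masses, kg
v_esc = escape_velocity(5.97e24, 6.37e6)        # Earth (assumed mass and radius)
print(f"Earth escape velocity:       {v_esc / 1e3:5.1f} km/s")
print(f"H2 thermal speed at 1000 K:  {thermal_speed(1000, m_h2) / 1e3:5.1f} km/s")
print(f"N2 thermal speed at 1000 K:  {thermal_speed(1000, m_n2) / 1e3:5.1f} km/s")
# Hydrogen's speed is a much larger fraction of the escape velocity, so it leaks away first.
```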
For a biologist or paleontologist, the Earth's atmospheric composition is closely dependent on the appearance of life and its evolution.
Physical sciences
Planetary science
null
203056
https://en.wikipedia.org/wiki/Spherical%20harmonics
Spherical harmonics
In mathematics and physical science, spherical harmonics are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations in many scientific fields. The table of spherical harmonics contains a list of common spherical harmonics. Since the spherical harmonics form a complete set of orthogonal functions and thus an orthonormal basis, each function defined on the surface of a sphere can be written as a sum of these spherical harmonics. This is similar to periodic functions defined on a circle that can be expressed as a sum of circular functions (sines and cosines) via Fourier series. Like the sines and cosines in Fourier series, the spherical harmonics may be organized by (spatial) angular frequency, as seen in the rows of functions in the illustration on the right. Further, spherical harmonics are basis functions for irreducible representations of SO(3), the group of rotations in three dimensions, and thus play a central role in the group theoretic discussion of SO(3). Spherical harmonics originate from solving Laplace's equation in the spherical domains. Functions that are solutions to Laplace's equation are called harmonics. Despite their name, spherical harmonics take their simplest form in Cartesian coordinates, where they can be defined as homogeneous polynomials of degree in that obey Laplace's equation. The connection with spherical coordinates arises immediately if one uses the homogeneity to extract a factor of radial dependence from the above-mentioned polynomial of degree ; the remaining factor can be regarded as a function of the spherical angular coordinates and only, or equivalently of the orientational unit vector specified by these angles. In this setting, they may be viewed as the angular portion of a set of solutions to Laplace's equation in three dimensions, and this viewpoint is often taken as an alternative definition. Notice, however, that spherical harmonics are not functions on the sphere which are harmonic with respect to the Laplace-Beltrami operator for the standard round metric on the sphere: the only harmonic functions in this sense on the sphere are the constants, since harmonic functions satisfy the Maximum principle. Spherical harmonics, as functions on the sphere, are eigenfunctions of the Laplace-Beltrami operator (see Higher dimensions). A specific set of spherical harmonics, denoted or , are known as Laplace's spherical harmonics, as they were first introduced by Pierre Simon de Laplace in 1782. These functions form an orthogonal system, and are thus basic to the expansion of a general function on the sphere as alluded to above. Spherical harmonics are important in many theoretical and practical applications, including the representation of multipole electrostatic and electromagnetic fields, electron configurations, gravitational fields, geoids, the magnetic fields of planetary bodies and stars, and the cosmic microwave background radiation. In 3D computer graphics, spherical harmonics play a role in a wide variety of topics including indirect lighting (ambient occlusion, global illumination, precomputed radiance transfer, etc.) and modelling of 3D shapes. History Spherical harmonics were first investigated in connection with the Newtonian potential of Newton's law of universal gravitation in three dimensions. 
In 1782, Pierre-Simon de Laplace had, in his Mécanique Céleste, determined that the gravitational potential at a point associated with a set of point masses located at points was given by Each term in the above summation is an individual Newtonian potential for a point mass. Just prior to that time, Adrien-Marie Legendre had investigated the expansion of the Newtonian potential in powers of and . He discovered that if then where is the angle between the vectors and . The functions are the Legendre polynomials, and they can be derived as a special case of spherical harmonics. Subsequently, in his 1782 memoir, Laplace investigated these coefficients using spherical coordinates to represent the angle between and . (See for more detail.) In 1867, William Thomson (Lord Kelvin) and Peter Guthrie Tait introduced the solid spherical harmonics in their Treatise on Natural Philosophy, and also first introduced the name of "spherical harmonics" for these functions. The solid harmonics were homogeneous polynomial solutions of Laplace's equation By examining Laplace's equation in spherical coordinates, Thomson and Tait recovered Laplace's spherical harmonics. (See Harmonic polynomial representation.) The term "Laplace's coefficients" was employed by William Whewell to describe the particular system of solutions introduced along these lines, whereas others reserved this designation for the zonal spherical harmonics that had properly been introduced by Laplace and Legendre. The 19th century development of Fourier series made possible the solution of a wide variety of physical problems in rectangular domains, such as the solution of the heat equation and wave equation. This could be achieved by expansion of functions in series of trigonometric functions. Whereas the trigonometric functions in a Fourier series represent the fundamental modes of vibration in a string, the spherical harmonics represent the fundamental modes of vibration of a sphere in much the same way. Many aspects of the theory of Fourier series could be generalized by taking expansions in spherical harmonics rather than trigonometric functions. Moreover, analogous to how trigonometric functions can equivalently be written as complex exponentials, spherical harmonics also possessed an equivalent form as complex-valued functions. This was a boon for problems possessing spherical symmetry, such as those of celestial mechanics originally studied by Laplace and Legendre. The prevalence of spherical harmonics already in physics set the stage for their later importance in the 20th century birth of quantum mechanics. The (complex-valued) spherical harmonics are eigenfunctions of the square of the orbital angular momentum operator and therefore they represent the different quantized configurations of atomic orbitals. Laplace's spherical harmonics Laplace's equation imposes that the Laplacian of a scalar field is zero. (Here the scalar field is understood to be complex, i.e. to correspond to a (smooth) function .) In spherical coordinates this is: Consider the problem of finding solutions of the form . By separation of variables, two differential equations result by imposing Laplace's equation: The second equation can be simplified under the assumption that has the form . Applying separation of variables again to the second equation gives way to the pair of differential equations for some number . 
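As an aside before the separation of variables continues, the Legendre expansion of the Newtonian potential described above, 1/|x − x′| = Σ_ℓ (r′^ℓ / r^(ℓ+1)) P_ℓ(cos γ) for r′ < r, can be checked numerically. The radii, angle and truncation degree below are arbitrary test values assumed for this sketch.

```python
import numpy as np
from scipy.special import eval_legendre

r, r_prime, gamma = 2.0, 0.7, 0.9   # arbitrary test configuration with r' < r
cos_gamma = np.cos(gamma)

exact = 1.0 / np.sqrt(r**2 + r_prime**2 - 2.0 * r * r_prime * cos_gamma)
series = sum((r_prime**l / r**(l + 1)) * eval_legendre(l, cos_gamma) for l in range(25))

print(exact, series)   # the truncated series matches the exact value to high accuracy
```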
A priori, is a complex constant, but because must be a periodic function whose period evenly divides , is necessarily an integer and is a linear combination of the complex exponentials . The solution function is regular at the poles of the sphere, where . Imposing this regularity in the solution of the second equation at the boundary points of the domain is a Sturm–Liouville problem that forces the parameter to be of the form for some non-negative integer with ; this is also explained below in terms of the orbital angular momentum. Furthermore, a change of variables transforms this equation into the Legendre equation, whose solution is a multiple of the associated Legendre polynomial . Finally, the equation for has solutions of the form ; requiring the solution to be regular throughout forces . Here the solution was assumed to have the special form . For a given value of , there are independent solutions of this form, one for each integer with . These angular solutions are a product of trigonometric functions, here represented as a complex exponential, and associated Legendre polynomials: which fulfill Here is called a spherical harmonic function of degree and order , is an associated Legendre polynomial, is a normalization constant, and and represent colatitude and longitude, respectively. In particular, the colatitude , or polar angle, ranges from at the North Pole, to at the Equator, to at the South Pole, and the longitude , or azimuth, may assume all values with . For a fixed integer , every solution , , of the eigenvalue problem is a linear combination of . In fact, for any such solution, is the expression in spherical coordinates of a homogeneous polynomial that is harmonic (see below), and so counting dimensions shows that there are linearly independent such polynomials. The general solution to Laplace's equation in a ball centered at the origin is a linear combination of the spherical harmonic functions multiplied by the appropriate scale factor , where the are constants and the factors are known as (regular) solid harmonics . Such an expansion is valid in the ball For , the solid harmonics with negative powers of (the irregular solid harmonics ) are chosen instead. In that case, one needs to expand the solution of known regions in Laurent series (about ), instead of the Taylor series (about ) used above, to match the terms and find series expansion coefficients . Orbital angular momentum In quantum mechanics, Laplace's spherical harmonics are understood in terms of the orbital angular momentum The is conventional in quantum mechanics; it is convenient to work in units in which . The spherical harmonics are eigenfunctions of the square of the orbital angular momentum Laplace's spherical harmonics are the joint eigenfunctions of the square of the orbital angular momentum and the generator of rotations about the azimuthal axis: These operators commute, and are densely defined self-adjoint operators on the weighted Hilbert space of functions f square-integrable with respect to the normal distribution as the weight function on R3: Furthermore, L2 is a positive operator. If is a joint eigenfunction of and , then by definition for some real numbers m and λ. Here m must in fact be an integer, for Y must be periodic in the coordinate φ with period a number that evenly divides 2π. Furthermore, since and each of Lx, Ly, Lz are self-adjoint, it follows that . 
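A small symbolic check of the eigenvalue relation discussed above can be done with SymPy, which provides the spherical harmonics as Ynm: applying the angular (Laplace–Beltrami) part of the Laplacian to Y_ℓ^m should return −ℓ(ℓ+1) times the function. The particular degree and order below are arbitrary test values, and this is only a sanity-check sketch, not part of the derivation.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
l, m = 2, 1                                      # arbitrary small test degree and order

Y = sp.Ynm(l, m, theta, phi).expand(func=True)   # explicit form of Y_2^1(theta, phi)

# Laplace-Beltrami operator on the unit sphere applied to Y
lap_Y = (sp.diff(sp.sin(theta) * sp.diff(Y, theta), theta) / sp.sin(theta)
         + sp.diff(Y, phi, 2) / sp.sin(theta)**2)

print(sp.simplify(lap_Y + l * (l + 1) * Y))      # 0, confirming the eigenvalue -l(l+1)
```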
Denote this joint eigenspace by , and define the raising and lowering operators by Then and commute with , and the Lie algebra generated by , , is the special linear Lie algebra of order 2, , with commutation relations Thus (it is a "raising operator") and (it is a "lowering operator"). In particular, must be zero for k sufficiently large, because the inequality must hold in each of the nontrivial joint eigenspaces. Let be a nonzero joint eigenfunction, and let be the least integer such that Then, since it follows that Thus for the positive integer . The foregoing has been all worked out in the spherical coordinate representation, but may be expressed more abstractly in the complete, orthonormal spherical ket basis. Harmonic polynomial representation The spherical harmonics can be expressed as the restriction to the unit sphere of certain polynomial functions . Specifically, we say that a (complex-valued) polynomial function is homogeneous of degree if for all real numbers and all . We say that is harmonic if where is the Laplacian. Then for each , we define For example, when , is just the 3-dimensional space of all linear functions , since any such function is automatically harmonic. Meanwhile, when , we have a 5-dimensional space: For any , the space of spherical harmonics of degree is just the space of restrictions to the sphere of the elements of . As suggested in the introduction, this perspective is presumably the origin of the term “spherical harmonic” (i.e., the restriction to the sphere of a harmonic function). For example, for any the formula defines a homogeneous polynomial of degree with domain and codomain , which happens to be independent of . This polynomial is easily seen to be harmonic. If we write in spherical coordinates and then restrict to , we obtain which can be rewritten as After using the formula for the associated Legendre polynomial , we may recognize this as the formula for the spherical harmonic (See Special cases.) Conventions Orthogonality and normalization Several different normalizations are in common use for the Laplace spherical harmonic functions . Throughout the section, we use the standard convention that for (see associated Legendre polynomials) which is the natural normalization given by Rodrigues' formula. In acoustics, the Laplace spherical harmonics are generally defined as (this is the convention used in this article) while in quantum mechanics: where are associated Legendre polynomials without the Condon–Shortley phase (to avoid counting the phase twice). In both definitions, the spherical harmonics are orthonormal where is the Kronecker delta and . This normalization is used in quantum mechanics because it ensures that probability is normalized, i.e., The disciplines of geodesy and spectral analysis use which possess unit power The magnetics community, in contrast, uses Schmidt semi-normalized harmonics which have the normalization In quantum mechanics this normalization is sometimes used as well, and is named Racah's normalization after Giulio Racah. It can be shown that all of the above normalized spherical harmonic functions satisfy where the superscript denotes complex conjugation. Alternatively, this equation follows from the relation of the spherical harmonic functions with the Wigner D-matrix. Condon–Shortley phase One source of confusion with the definition of the spherical harmonic functions concerns a phase factor of , commonly referred to as the Condon–Shortley phase in the quantum mechanical literature. 
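The orthonormality relation discussed above is easy to verify numerically. The sketch below uses scipy.special.sph_harm (whose argument order is m, ℓ, azimuth, polar angle, and which follows the orthonormal quantum-mechanics convention with the Condon–Shortley phase) together with a simple midpoint quadrature over the sphere; the grid size and the particular degrees and orders are arbitrary choices for the illustration.

```python
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuth, polar_angle)

# Midpoint quadrature grid over the sphere
n = 400
azimuth = (np.arange(n) + 0.5) * (2 * np.pi / n)
polar = (np.arange(n) + 0.5) * (np.pi / n)
AZ, POL = np.meshgrid(azimuth, polar, indexing='ij')
d_omega = np.sin(POL) * (2 * np.pi / n) * (np.pi / n)   # solid-angle element per cell

def inner_product(l1, m1, l2, m2):
    """Approximate <Y_l1^m1, Y_l2^m2>: integrate Y1 * conj(Y2) over the sphere."""
    return np.sum(sph_harm(m1, l1, AZ, POL) * np.conj(sph_harm(m2, l2, AZ, POL)) * d_omega)

print(inner_product(3, 2, 3, 2))   # close to 1: same harmonic, unit norm
print(inner_product(3, 2, 2, 1))   # close to 0: different harmonics are orthogonal
```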
In the quantum mechanics community, it is common practice to either include this phase factor in the definition of the associated Legendre polynomials, or to append it to the definition of the spherical harmonic functions. There is no requirement to use the Condon–Shortley phase in the definition of the spherical harmonic functions, but including it can simplify some quantum mechanical operations, especially the application of raising and lowering operators. The geodesy and magnetics communities never include the Condon–Shortley phase factor in their definitions of the spherical harmonic functions nor in the ones of the associated Legendre polynomials. Real form A real basis of spherical harmonics can be defined in terms of their complex analogues by setting The Condon–Shortley phase convention is used here for consistency. The corresponding inverse equations defining the complex spherical harmonics in terms of the real spherical harmonics are The real spherical harmonics are sometimes known as tesseral spherical harmonics. These functions have the same orthonormality properties as the complex ones above. The real spherical harmonics with are said to be of cosine type, and those with of sine type. The reason for this can be seen by writing the functions in terms of the Legendre polynomials as The same sine and cosine factors can be also seen in the following subsection that deals with the Cartesian representation. See here for a list of real spherical harmonics up to and including , which can be seen to be consistent with the output of the equations above. Use in quantum chemistry As is known from the analytic solutions for the hydrogen atom, the eigenfunctions of the angular part of the wave function are spherical harmonics. However, the solutions of the non-relativistic Schrödinger equation without magnetic terms can be made real. This is why the real forms are extensively used in basis functions for quantum chemistry, as the programs don't then need to use complex algebra. Here, the real functions span the same space as the complex ones would. For example, as can be seen from the table of spherical harmonics, the usual functions () are complex and mix axis directions, but the real versions are essentially just , , and . Spherical harmonics in Cartesian form The complex spherical harmonics give rise to the solid harmonics by extending from to all of as a homogeneous function of degree , i.e. setting It turns out that is basis of the space of harmonic and homogeneous polynomials of degree . More specifically, it is the (unique up to normalization) Gelfand-Tsetlin-basis of this representation of the rotational group and an explicit formula for in cartesian coordinates can be derived from that fact. The Herglotz generating function If the quantum mechanical convention is adopted for the , then Here, is the vector with components , , and is a vector with complex coordinates: The essential property of is that it is null: It suffices to take and as real parameters. In naming this generating function after Herglotz, we follow , who credit unpublished notes by him for its discovery. Essentially all the properties of the spherical harmonics can be derived from this generating function. 
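Returning briefly to the real form defined earlier in this section, the real (tesseral) harmonics can be assembled from the complex ones in code. The sketch below follows one common convention, taking scipy's complex harmonics (which include the Condon–Shortley phase) and combining them with the √2 factors indicated above; the exact sign convention should be treated as an assumption to check against whichever table is being used.

```python
import numpy as np
from scipy.special import sph_harm   # complex Y_l^m, sph_harm(m, l, azimuth, polar_angle)

def real_sph_harm(m: int, l: int, azimuth: float, polar: float) -> float:
    """Real spherical harmonic built from the complex ones (one common convention)."""
    if m > 0:    # cosine type
        return np.sqrt(2.0) * (-1) ** m * np.real(sph_harm(m, l, azimuth, polar))
    if m < 0:    # sine type
        return np.sqrt(2.0) * (-1) ** m * np.imag(sph_harm(-m, l, azimuth, polar))
    return np.real(sph_harm(0, l, azimuth, polar))   # m = 0 is already real

# Sample values at an arbitrary direction; the results are purely real by construction
print(real_sph_harm(1, 2, 0.3, 1.1), real_sph_harm(-1, 2, 0.3, 1.1))
```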
An immediate benefit of this definition is that if the vector is replaced by the quantum mechanical spin vector operator , such that is the operator analogue of the solid harmonic , one obtains a generating function for a standardized set of spherical tensor operators, : The parallelism of the two definitions ensures that the 's transform under rotations (see below) in the same way as the 's, which in turn guarantees that they are spherical tensor operators, , with and , obeying all the properties of such operators, such as the Clebsch-Gordan composition theorem, and the Wigner-Eckart theorem. They are, moreover, a standardized set with a fixed scale or normalization. Separated Cartesian form The Herglotzian definition yields polynomials which may, if one wishes, be further factorized into a polynomial of and another of and , as follows (Condon–Shortley phase): and for : Here and For this reduces to The factor is essentially the associated Legendre polynomial , and the factors are essentially . Examples Using the expressions for , , and listed explicitly above we obtain: It may be verified that this agrees with the function listed here and here. Real forms Using the equations above to form the real spherical harmonics, it is seen that for only the terms (cosines) are included, and for only the terms (sines) are included: and for m = 0: Special cases and values When , the spherical harmonics reduce to the ordinary Legendre polynomials: When , or more simply in Cartesian coordinates, At the north pole, where , and is undefined, all spherical harmonics except those with vanish: Symmetry properties The spherical harmonics have deep and consequential properties under the operations of spatial inversion (parity) and rotation. Parity The spherical harmonics have definite parity. That is, they are either even or odd with respect to inversion about the origin. Inversion is represented by the operator . Then, as can be seen in many ways (perhaps most simply from the Herglotz generating function), with being a unit vector, In terms of the spherical angles, parity transforms a point with coordinates to . The statement of the parity of spherical harmonics is then (This can be seen as follows: The associated Legendre polynomials gives and from the exponential function we have , giving together for the spherical harmonics a parity of .) Parity continues to hold for real spherical harmonics, and for spherical harmonics in higher dimensions: applying a point reflection to a spherical harmonic of degree changes the sign by a factor of . Rotations Consider a rotation about the origin that sends the unit vector to . Under this operation, a spherical harmonic of degree and order transforms into a linear combination of spherical harmonics of the same degree. That is, where is a matrix of order that depends on the rotation . However, this is not the standard way of expressing this property. In the standard way one writes, where is the complex conjugate of an element of the Wigner D-matrix. In particular when is a rotation of the azimuth we get the identity, The rotational behavior of the spherical harmonics is perhaps their quintessential feature from the viewpoint of group theory. The 's of degree provide a basis set of functions for the irreducible representation of the group SO(3) of dimension . Many facts about spherical harmonics (such as the addition theorem) that are proved laboriously using the methods of analysis acquire simpler proofs and deeper significance using the methods of symmetry. 
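The parity rule just stated, that a point inversion sends (θ, φ) to (π − θ, π + φ) and multiplies Y_ℓ^m by (−1)^ℓ, can be confirmed numerically at an arbitrary direction. The degree, order and angles below are arbitrary test values.

```python
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuth, polar_angle)

l, m = 3, 2
azimuth, polar = 0.7, 1.2            # arbitrary test direction

y_here = sph_harm(m, l, azimuth, polar)
y_antipode = sph_harm(m, l, azimuth + np.pi, np.pi - polar)

print(y_antipode)
print((-1) ** l * y_here)            # the two agree: the parity of Y_l^m is (-1)^l
```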
Spherical harmonics expansion The Laplace spherical harmonics form a complete set of orthonormal functions and thus form an orthonormal basis of the Hilbert space of square-integrable functions . On the unit sphere , any square-integrable function can thus be expanded as a linear combination of these: This expansion holds in the sense of mean-square convergence — convergence in L2 of the sphere — which is to say that The expansion coefficients are the analogs of Fourier coefficients, and can be obtained by multiplying the above equation by the complex conjugate of a spherical harmonic, integrating over the solid angle Ω, and utilizing the above orthogonality relationships. This is justified rigorously by basic Hilbert space theory. For the case of orthonormalized harmonics, this gives: If the coefficients decay in ℓ sufficiently rapidly — for instance, exponentially — then the series also converges uniformly to f. A square-integrable function can also be expanded in terms of the real harmonics above as a sum The convergence of the series holds again in the same sense, namely the real spherical harmonics form a complete set of orthonormal functions and thus form an orthonormal basis of the Hilbert space of square-integrable functions . The benefit of the expansion in terms of the real harmonic functions is that for real functions the expansion coefficients are guaranteed to be real, whereas their coefficients in their expansion in terms of the (considering them as functions ) do not have that property. Spectrum analysis Power spectrum in signal processing The total power of a function f is defined in the signal processing literature as the integral of the function squared, divided by the area of its domain. Using the orthonormality properties of the real unit-power spherical harmonic functions, it is straightforward to verify that the total power of a function defined on the unit sphere is related to its spectral coefficients by a generalization of Parseval's theorem (here, the theorem is stated for Schmidt semi-normalized harmonics, the relationship is slightly different for orthonormal harmonics): where is defined as the angular power spectrum (for Schmidt semi-normalized harmonics). In a similar manner, one can define the cross-power of two functions as where is defined as the cross-power spectrum. If the functions and have a zero mean (i.e., the spectral coefficients and are zero), then and represent the contributions to the function's variance and covariance for degree , respectively. It is common that the (cross-)power spectrum is well approximated by a power law of the form When , the spectrum is "white" as each degree possesses equal power. When , the spectrum is termed "red" as there is more power at the low degrees with long wavelengths than higher degrees. Finally, when , the spectrum is termed "blue". The condition on the order of growth of is related to the order of differentiability of in the next section. Differentiability properties One can also understand the differentiability properties of the original function in terms of the asymptotics of . In particular, if decays faster than any rational function of as , then is infinitely differentiable. If, furthermore, decays exponentially, then is actually real analytic on the sphere. The general technique is to use the theory of Sobolev spaces. Statements relating the growth of the to differentiability are then similar to analogous results on the growth of the coefficients of Fourier series. 
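As an aside, the expansion described at the start of this section can be carried out numerically for a concrete function on the sphere: compute the coefficients by integrating the function against the conjugate harmonics, then rebuild the truncated series and compare. The test function, grid size and truncation degree below are arbitrary choices for the sketch, and the residual is limited by both the truncation and the quadrature.

```python
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuth, polar_angle)

# Midpoint quadrature grid over the sphere
n = 300
azimuth = (np.arange(n) + 0.5) * (2 * np.pi / n)
polar = (np.arange(n) + 0.5) * (np.pi / n)
AZ, POL = np.meshgrid(azimuth, polar, indexing='ij')
d_omega = np.sin(POL) * (2 * np.pi / n) * (np.pi / n)

f = np.exp(np.cos(POL)) * np.cos(AZ)          # an arbitrary smooth test function

# Coefficients f_lm = integral of f * conj(Y_l^m), then a truncated reconstruction
l_max = 8
coeffs = {(l, m): np.sum(f * np.conj(sph_harm(m, l, AZ, POL)) * d_omega)
          for l in range(l_max + 1) for m in range(-l, l + 1)}
reconstruction = sum(c * sph_harm(m, l, AZ, POL) for (l, m), c in coeffs.items())

print(np.max(np.abs(f - reconstruction.real)))   # small residual for this smooth function
```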
Specifically, if then is in the Sobolev space . In particular, the Sobolev embedding theorem implies that is infinitely differentiable provided that for all . Algebraic properties Addition theorem A mathematical result of considerable interest and use is called the addition theorem for spherical harmonics. Given two vectors and , with spherical coordinates and , respectively, the angle between them is given by the relation in which the role of the trigonometric functions appearing on the right-hand side is played by the spherical harmonics and that of the left-hand side is played by the Legendre polynomials. The addition theorem states where is the Legendre polynomial of degree . This expression is valid for both real and complex harmonics. The result can be proven analytically, using the properties of the Poisson kernel in the unit ball, or geometrically by applying a rotation to the vector y so that it points along the z-axis, and then directly calculating the right-hand side. In particular, when , this gives Unsöld's theorem which generalizes the identity to two dimensions. In the expansion (), the left-hand side is a constant multiple of the degree zonal spherical harmonic. From this perspective, one has the following generalization to higher dimensions. Let be an arbitrary orthonormal basis of the space of degree spherical harmonics on the -sphere. Then , the degree zonal harmonic corresponding to the unit vector , decomposes as Furthermore, the zonal harmonic is given as a constant multiple of the appropriate Gegenbauer polynomial: Combining () and () gives () in dimension when and are represented in spherical coordinates. Finally, evaluating at gives the functional identity where is the volume of the (n−1)-sphere. Contraction rule Another useful identity expresses the product of two spherical harmonics as a sum over spherical harmonics Many of the terms in this sum are trivially zero. The values of and that result in non-zero terms in this sum are determined by the selection rules for the 3j-symbols. Clebsch–Gordan coefficients The Clebsch–Gordan coefficients are the coefficients appearing in the expansion of the product of two spherical harmonics in terms of spherical harmonics themselves. A variety of techniques are available for doing essentially the same calculation, including the Wigner 3-jm symbol, the Racah coefficients, and the Slater integrals. Abstractly, the Clebsch–Gordan coefficients express the tensor product of two irreducible representations of the rotation group as a sum of irreducible representations: suitably normalized, the coefficients are then the multiplicities. Visualization of the spherical harmonics The Laplace spherical harmonics can be visualized by considering their "nodal lines", that is, the set of points on the sphere where , or alternatively where . Nodal lines of are composed of ℓ circles: there are circles along longitudes and ℓ−|m| circles along latitudes. One can determine the number of nodal lines of each type by counting the number of zeros of in the and directions respectively. Considering as a function of , the real and imaginary components of the associated Legendre polynomials each possess ℓ−|m| zeros, each giving rise to a nodal 'line of latitude'. On the other hand, considering as a function of , the trigonometric sin and cos functions possess 2|m| zeros, each of which gives rise to a nodal 'line of longitude'. 
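Before moving on to the visualization, the addition theorem stated above, Σ_m Y_ℓ^m(x) conj(Y_ℓ^m(y)) = (2ℓ+1)/(4π) · P_ℓ(cos γ), can be checked at a pair of arbitrary directions. The degree and the two directions below are arbitrary test values for the sketch.

```python
import numpy as np
from scipy.special import sph_harm, eval_legendre   # sph_harm(m, l, azimuth, polar_angle)

l = 4
az1, pol1 = 0.4, 1.0                   # first test direction
az2, pol2 = 2.1, 2.3                   # second test direction

lhs = sum(sph_harm(m, l, az1, pol1) * np.conj(sph_harm(m, l, az2, pol2))
          for m in range(-l, l + 1))

# Angle between the two unit vectors, written out in spherical coordinates
cos_gamma = (np.cos(pol1) * np.cos(pol2)
             + np.sin(pol1) * np.sin(pol2) * np.cos(az1 - az2))
rhs = (2 * l + 1) / (4 * np.pi) * eval_legendre(l, cos_gamma)

print(lhs.real, rhs)                   # equal up to rounding error
```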
When the spherical harmonic order m is zero (upper-left in the figure), the spherical harmonic functions do not depend upon longitude, and are referred to as zonal. Such spherical harmonics are a special case of zonal spherical functions. When (bottom-right in the figure), there are no zero crossings in latitude, and the functions are referred to as sectoral. For the other cases, the functions checker the sphere, and they are referred to as tesseral. More general spherical harmonics of degree are not necessarily those of the Laplace basis , and their nodal sets can be of a fairly general kind. List of spherical harmonics Analytic expressions for the first few orthonormalized Laplace spherical harmonics that use the Condon–Shortley phase convention: Higher dimensions The classical spherical harmonics are defined as complex-valued functions on the unit sphere inside three-dimensional Euclidean space . Spherical harmonics can be generalized to higher-dimensional Euclidean space as follows, leading to functions . Let Pℓ denote the space of complex-valued homogeneous polynomials of degree in real variables, here considered as functions . That is, a polynomial is in provided that for any real , one has Let Aℓ denote the subspace of Pℓ consisting of all harmonic polynomials: These are the (regular) solid spherical harmonics. Let Hℓ denote the space of functions on the unit sphere obtained by restriction from The following properties hold: The sum of the spaces is dense in the set of continuous functions on with respect to the uniform topology, by the Stone–Weierstrass theorem. As a result, the sum of these spaces is also dense in the space of square-integrable functions on the sphere. Thus every square-integrable function on the sphere decomposes uniquely into a series of spherical harmonics, where the series converges in the sense. For all , one has where is the Laplace–Beltrami operator on . This operator is the analog of the angular part of the Laplacian in three dimensions; to wit, the Laplacian in dimensions decomposes as It follows from the Stokes theorem and the preceding property that the spaces are orthogonal with respect to the inner product from . That is to say, for and for . Conversely, the spaces are precisely the eigenspaces of . In particular, an application of the spectral theorem to the Riesz potential gives another proof that the spaces are pairwise orthogonal and complete in . Every homogeneous polynomial can be uniquely written in the form where . In particular, An orthogonal basis of spherical harmonics in higher dimensions can be constructed inductively by the method of separation of variables, by solving the Sturm-Liouville problem for the spherical Laplacian where φ is the axial coordinate in a spherical coordinate system on Sn−1. The end result of such a procedure is where the indices satisfy and the eigenvalue is . The functions in the product are defined in terms of the Legendre function Connection with representation theory The space of spherical harmonics of degree is a representation of the symmetry group of rotations around a point (SO(3)) and its double-cover SU(2). Indeed, rotations act on the two-dimensional sphere, and thus also on by function composition for a spherical harmonic and a rotation. The representation is an irreducible representation of SO(3). The elements of arise as the restrictions to the sphere of elements of : harmonic polynomials homogeneous of degree on three-dimensional Euclidean space . 
By polarization of , there are coefficients symmetric on the indices, uniquely determined by the requirement The condition that be harmonic is equivalent to the assertion that the tensor must be trace free on every pair of indices. Thus as an irreducible representation of , is isomorphic to the space of traceless symmetric tensors of degree . More generally, the analogous statements hold in higher dimensions: the space of spherical harmonics on the -sphere is the irreducible representation of corresponding to the traceless symmetric -tensors. However, whereas every irreducible tensor representation of and is of this kind, the special orthogonal groups in higher dimensions have additional irreducible representations that do not arise in this manner. The special orthogonal groups have additional spin representations that are not tensor representations, and are typically not spherical harmonics. An exception is the spin representations of SO(3): strictly speaking, these are representations of the double cover SU(2) of SO(3). In turn, SU(2) is identified with the group of unit quaternions, and so coincides with the 3-sphere. The spaces of spherical harmonics on the 3-sphere are certain spin representations of SO(3), with respect to the action by quaternionic multiplication. Connection with hemispherical harmonics Spherical harmonics can be separated into two sets of functions. One is the hemispherical harmonics (HSH), which are orthogonal and complete on the hemisphere. The other is the complementary hemispherical harmonics (CHSH). Generalizations The angle-preserving symmetries of the two-sphere are described by the group of Möbius transformations PSL(2,C). With respect to this group, the sphere is equivalent to the usual Riemann sphere. The group PSL(2,C) is isomorphic to the (proper) Lorentz group, and its action on the two-sphere agrees with the action of the Lorentz group on the celestial sphere in Minkowski space. The analog of the spherical harmonics for the Lorentz group is given by the hypergeometric series; furthermore, the spherical harmonics can be re-expressed in terms of the hypergeometric series, as is a subgroup of . More generally, hypergeometric series can be generalized to describe the symmetries of any symmetric space; in particular, hypergeometric series can be developed for any Lie group.
Physical sciences
Atomic physics
Physics
203082
https://en.wikipedia.org/wiki/Temperate%20coniferous%20forest
Temperate coniferous forest
Temperate coniferous forest is a terrestrial biome defined by the World Wide Fund for Nature. Temperate coniferous forests are found predominantly in areas with warm summers and cool winters, and vary in their kinds of plant life. In some, needleleaf trees dominate, while others are home primarily to broadleaf evergreen trees or a mix of both tree types. A separate habitat type, the tropical coniferous forests, occurs in more tropical climates. Temperate coniferous forests are common in the coastal areas of regions that have mild winters and heavy rainfall, or inland in drier climates or montane areas. Many species of trees inhabit these forests, including pine, cedar, fir, and redwood. The understory also contains a wide variety of herbaceous and shrub species. Temperate coniferous forests sustain the highest levels of biomass of any terrestrial ecosystem and are notable for trees of massive proportions in temperate rainforest regions. Structurally, these forests are rather simple, generally consisting of two layers: an overstory and an understory. However, some forests may support a layer of shrubs. Pine forests support an herbaceous ground layer that may be dominated by grasses and forbs that lend themselves to ecologically important wildfires. In contrast, the moist conditions found in temperate rain forests favor dominance by ferns and some forbs. Forest communities dominated by huge trees (e.g., giant sequoia, Sequoiadendron giganteum; redwood, Sequoia sempervirens) are unusual ecological phenomena and occur in western North America and southwestern South America, as well as in the Australasian region in such areas as southeastern Australia and northern New Zealand. The Klamath-Siskiyou ecoregion of western North America harbors diverse and unusual assemblages and displays notable endemism for a number of plant and animal taxa. Ecoregions Eurasia North America
Physical sciences
Forests
Earth science
203085
https://en.wikipedia.org/wiki/Tropical%20and%20subtropical%20grasslands%2C%20savannas%2C%20and%20shrublands
Tropical and subtropical grasslands, savannas, and shrublands
Tropical and subtropical grasslands, savannas, and shrublands is a terrestrial biome defined by the World Wide Fund for Nature. The biome is dominated by grass and/or shrubs located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes. Tropical grasslands are mainly found between 5 and 20 degrees both north and south of the Equator. Description Grasslands are dominated by grasses and other herbaceous plants. Savannas are grasslands with scattered trees. Shrublands are dominated by woody or herbaceous shrubs. Large expanses of land in the tropics do not receive enough rainfall to support extensive tree cover. The tropical and subtropical grasslands, savannas, and shrublands are characterized by rainfall levels between per year. Rainfall can be highly seasonal, with the entire year's rainfall sometimes occurring within a couple of weeks. African savannas occur between forest or woodland regions and grassland regions. Flora includes acacia and baobab trees, grass, and low shrubs. Acacia trees lose their leaves in the dry season to conserve moisture, while the baobab stores water in its trunk for the dry season. Many of these savannas are in Africa. Large mammals that have evolved to take advantage of the ample forage typify the biodiversity associated with these habitats. These large mammal faunas are richest in African savannas and grasslands. The most intact assemblages currently occur in East African Acacia savannas and Zambezian savannas consisting of mosaics of miombo, mopane, and other habitats. Large-scale migrations of tropical savanna herbivores, such as wildebeest (Connochaetes taurinus) and zebra (Equus quagga), are continuing to decline through habitat alteration and hunting. They now only occur to any significant degree in East Africa and the central Zambezian region. Much of the extraordinary abundance of the Guinean and Sahelian savannas has been eliminated, although large-scale migrations of Ugandan kob still occur in the savannas of the Sudd region. The Sudan type of climate is characterized by an alternating hot and rainy season and a cool and dry season. In the Northern Hemisphere, the hot rainy season normally begins in May and lasts until September. Rainfall varies from 25 cm to 150 cm and is usually unreliable. The rest of the year is cool and dry. Rainfall decreases as one moves north in the Northern Hemisphere or south in the Southern Hemisphere, away from the Equator. Drought is very common. Occurrence Tropical and subtropical grasslands, savannas, and shrublands occur on all continents but Antarctica. They are widespread in Africa, and are also found throughout South Asia and Southeast Asia, the northern parts of South America and Australia, and the southern United States. Ecoregions
Physical sciences
Grasslands
Earth science
203089
https://en.wikipedia.org/wiki/Flooded%20grasslands%20and%20savannas
Flooded grasslands and savannas
Flooded grasslands and savannas is a terrestrial biome of the World Wide Fund for Nature (WWF) biogeographical system, consisting of large expanses or complexes of flooded grasslands. These areas support numerous plants and animals adapted to the unique hydrologic regimes and soil conditions. Large congregations of migratory and resident waterbirds may be found in these regions. The relative importance of these habitat types for these birds as well as more vagile taxa typically varies as the availability of water and productivity annually and seasonally shifts among complexes of smaller and larger wetlands throughout a region. This habitat type is found on four of the continents on Earth. Some globally outstanding flooded savannas and grasslands occur in the Everglades, Pantanal, Lake Chad flooded savanna, Zambezian flooded grasslands, and the Sudd. The Everglades, with an area of , are the world's largest rain-fed flooded grassland on a limestone substrate, and feature some 11,000 species of seed-bearing plants, 25 varieties of orchids, 300 bird species, and 150 fish species. The Pantanal, with an area of , is the largest flooded grassland on Earth, supporting over 260 species of fish, 700 birds, 90 mammals, 160 reptiles, 45 amphibians, 1,000 butterflies, and 1,600 species of plants. The flooded savannas and grasslands are generally the largest complexes in each region.
Physical sciences
Wetlands
Earth science
203103
https://en.wikipedia.org/wiki/Montane%20grasslands%20and%20shrublands
Montane grasslands and shrublands
Montane grasslands and shrublands are a biome defined by the World Wildlife Fund. The biome includes high elevation grasslands and shrublands around the world. The term "montane" in the name of the biome refers to "high elevation", rather than the ecological term that denotes the region below the treeline. This biome includes high elevation (montane and alpine) grasslands and shrublands, including the puna and páramo in South America, subalpine heath in New Guinea and East Africa, steppes of the Tibetan Plateau, as well as other similar subalpine habitats around the world. The plants and animals of tropical montane páramos display striking adaptations to cool, wet conditions and intense sunlight. Around the world, characteristic plants of these habitats display features such as rosette structures, waxy surfaces, and abundant pilosity. The páramos of the northern Andes are the most extensive examples of this habitat type. Although ecoregion biotas are most diverse in the Andes, these ecosystems are distinctive wherever they occur in the tropics. The heathlands and moorlands of East Africa (e.g., Mount Kilimanjaro, Mount Kenya, Rwenzori Mountains), Mount Kinabalu of Borneo, and the Central Range of New Guinea are all limited in extent, isolated, and support endemic plants and animals. Drier subtropical montane grasslands, savannas, and woodlands include the Ethiopian Highlands, the Zambezian montane grasslands and woodlands, and the montane habitats of southeastern Africa. The montane grasslands of the Tibetan Plateau still support relatively intact migrations of Tibetan antelope (Pantholops hodgsonii) and kiang, or Tibetan wild ass (Equus hemionus). A unique feature of many tropical páramos is the presence of giant rosette plants from a variety of plant families, such as Lobelia (Africa), Puya (South America), Cyathea (New Guinea), and Argyroxiphium (Hawai’i). These plant forms can reach elevations of above sea level. Montane grassland and shrubland ecoregions
Physical sciences
Grasslands
Earth science
203104
https://en.wikipedia.org/wiki/Alpine%20tundra
Alpine tundra
Alpine tundra is a type of natural region or biome that does not contain trees because it is at high elevation, with an associated harsh climate. As the latitude of a location approaches the poles, the threshold elevation for alpine tundra gets lower until it reaches sea level, and alpine tundra merges with polar tundra. The high elevation causes an adverse climate, which is too cold and windy to support tree growth. Alpine tundra transitions to sub-alpine forests below the tree line; stunted forests occurring at the forest-tundra ecotone are known as krummholz. With increasing elevation it ends at the snow line where snow and ice persist through summer. Alpine tundra occurs in mountains worldwide. The flora of the alpine tundra is characterized by dwarf shrubs close to the ground. The cold climate of the alpine tundra is caused by adiabatic cooling of air, and is similar to polar climate. Geography Alpine tundra occurs at high enough altitude at any latitude. Portions of montane grasslands and shrublands ecoregions worldwide include alpine tundra. Large regions of alpine tundra occur in the North American Cordillera and parts of the northern Appalachian Mountains in North America, the Alps and Pyrenees of Europe, the Himalaya and Karakoram of Asia, the Andes of South America, the Eastern Rift mountains of Africa, the Snowy Mountains of Australia, the South Island of New Zealand, and the Scandinavian Mountains. Alpine tundra occupies high-mountain summits, slopes, and ridges above timberline. Aspect plays a role as well; the treeline often occurs at higher elevations on warmer equator-facing slopes. Because the alpine zone is present only on mountains, much of the landscape is rugged and broken, with rocky, snowcapped peaks, cliffs, and talus slopes, but also contains areas of gently rolling to almost flat topography. Averaging over many locations and local microclimates, the treeline rises when moving 1 degree south from 70 to 50°N, and per degree from 50 to 30°N. Between 30°N and 20°S, the treeline is roughly constant, between . Climate Alpine climate is the average weather (climate) for the alpine tundra. The climate becomes colder when reaching higher elevations—this characteristic is described by the lapse rate of air: air tends to get colder as it rises, since it expands. The dry adiabatic lapse rate is 10 °C per km (5.5 °F per 1000 ft) of elevation or altitude. Therefore, moving up on a mountain is roughly equivalent to moving 80 kilometers (45 miles or 0.75° of latitude) towards the pole. This relationship is only approximate, however, since local factors such as proximity to oceans can drastically modify the climate. In the alpine tundra, trees cannot tolerate the environmental conditions (usually cold temperatures, extreme snowpack, or associated lack of available moisture). Typical high-elevation growing seasons range from 45 to 90 days, with average summer temperatures near . Growing season temperatures frequently fall below freezing, and frost occurs throughout the growing season in many areas. Precipitation occurs mainly as winter snow, but soil water availability is highly variable with season, location, and topography. For example, snowfields commonly accumulate on the lee sides of ridges while ridgelines may remain nearly snow free due to redistribution by wind. Some alpine habitats may be up to 70% snow free in winter. High winds are common in alpine ecosystems, and can cause significant soil erosion and be physically and physiologically detrimental to plants. 
Also, wind coupled with high solar radiation can promote extremely high rates of evaporation and transpiration. Quantifying the climate There have been several attempts at quantifying what constitutes an alpine climate. Climatologist Wladimir Köppen demonstrated a relationship between the Arctic and Antarctic tree lines and the 10 °C summer isotherm; i.e., places where the average temperature in the warmest calendar month of the year is below 10 °C cannot support forests. See Köppen climate classification for more information. Otto Nordenskjöld theorized that winter conditions also play a role: His formula is W = 9 − 0.1 C, where W is the average temperature in the warmest month and C the average of the coldest month, both in degrees Celsius (this would mean, for example, that if a particular location had an average temperature of in its coldest month, the warmest month would need to average or higher for trees to be able to survive there). In 1947, Holdridge improved on these schemes, by defining biotemperature: the mean annual temperature, where all temperatures below 0 °C are treated as 0 °C (because it makes no difference to plant life, being dormant). If the mean biotemperature is between , Holdridge quantifies the climate as alpine. Flora Since the habitat of alpine vegetation is subject to intense radiation, wind, cold, snow, and ice, it grows close to the ground and consists mainly of perennial grasses, sedges, and forbs. Perennial herbs (including grasses, sedges, and low woody or semi-woody shrubs) dominate the alpine landscape; they have much more root and rhizome biomass than that of shoots, leaves, and flowers. The roots and rhizomes not only function in water and nutrient absorption but also play a very important role in over-winter carbohydrate storage. Annual plants are rare in this ecosystem and usually are only a few inches tall, with weak root systems. Other common plant life-forms include prostrate shrubs; tussock-forming graminoids; cushion plants; and cryptogams, such as bryophytes and lichens. Relative to lower elevation areas in the same region, alpine regions have a high rate of endemism and a high diversity of plant species. This taxonomic diversity can be attributed to geographical isolation, climate changes, glaciation, microhabitat differentiation, and different histories of migration or evolution or both. These phenomena contribute to plant diversity by introducing new flora and favoring adaptations, both of new species and the dispersal of pre-existing species. Though tundra covers only a minority of the Earth's surface (17-20%), the biodiversity of plant species is important to human nutrition. Of the 20 plant species that make up 80% of human food, 7 of them (35%) originated in this region. Plants have adapted to the harsh alpine environment. Cushion plants, looking like ground-hugging clumps of moss, escape the strong winds blowing a few inches above them. Many flowering plants of the alpine tundra have dense hairs on stems and leaves to provide wind protection or red-colored pigments capable of converting the sun's light rays into heat. Some plants take two or more years to form flower buds, which survive the winter below the surface and then open and produce fruit with seeds in the few weeks of summer. In various areas of alpine tundra, woody plant encroachment is observed. Alpine areas are unique because of the severity and complexity of their environmental conditions. 
Very small changes in topography – as small as 1 foot (0.3 m) or less – may mean the difference between a windswept area or an area of snow accumulation, changing the potential productivity and plant community drastically. Between these extremes of drought versus saturation, several intermediate environments may exist all within a few yards of each other, depending on topography, substrate, and climate. Alpine vegetation generally occurs in a mosaic of small patches with widely differing environmental conditions. Vegetation types vary from cushion and rosette plants on the ridges and in the rock crannies; to herbaceous and grassy vegetation along the slopes; dwarf shrubs with grasses and forbs below the melting snowdrifts; and sedges, grasses, low shrubs, and mosses in the bogs and along the brooks. Alpine meadows form where sediments from the weathering of rocks has produced soils well-developed enough to support grasses and sedges. Non-flowering lichens cling to rocks and soil. Their enclosed algal cells can photosynthesize at any temperature above , and the outer fungal layers can absorb more than their own weight in water. The adaptations for survival of drying winds and cold may make tundra vegetation seem very hardy, but in some respects the tundra is very fragile. Repeated footsteps often destroy tundra plants, allowing exposed soil to blow away; recovery may take hundreds of years. Fauna Because alpine tundra is located in various widely separated regions of the Earth, there is no animal species common to all areas of alpine tundra. Some animals of alpine tundra environments include the kea, marmot, mountain goat, bighorn sheep, chinchilla, Himalayan tahr, yak, snow leopard, and pika.
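The tree-line and alpine-climate criteria described in the climate section above can be expressed as a small calculation. In the sketch below, the Nordenskjöld relation W = 9 − 0.1 C and the Holdridge biotemperature definition are taken from the text; the 1.5–3 °C alpine biotemperature band is an assumed illustrative range (the exact bounds are elided above), and the sample monthly temperatures are invented.

```python
# Sketch of two criteria for alpine / tree-line climate described above.
# Assumptions: the alpine biotemperature band (1.5-3 C) and the example monthly
# temperatures are illustrative, not values taken from the article.

def nordenskjold_tree_limit(coldest_month_mean_c: float) -> float:
    """Warmest-month mean temperature (deg C) needed for trees to survive: W = 9 - 0.1*C."""
    return 9.0 - 0.1 * coldest_month_mean_c

def holdridge_biotemperature(monthly_means_c: list[float]) -> float:
    """Mean annual biotemperature: monthly means below 0 C are treated as 0 C."""
    return sum(max(t, 0.0) for t in monthly_means_c) / len(monthly_means_c)

# A site whose coldest month averages -20 C needs a warmest month of at least 11 C
# for trees to survive under the Nordenskjold criterion.
print(nordenskjold_tree_limit(-20.0))

# Twelve hypothetical monthly means for a high-elevation site:
months = [-12, -10, -8, -4, 0, 3, 6, 5, 2, -2, -7, -11]
bio_t = holdridge_biotemperature(months)
print(bio_t)                      # ~1.3 C
print(1.5 <= bio_t <= 3.0)        # falls below the assumed alpine band in this example
```

The example is only meant to show how the two rules of thumb are applied; real classifications use measured station data rather than invented monthly means.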
Physical sciences
Biomes: General
Earth science
203109
https://en.wikipedia.org/wiki/Temperate%20broadleaf%20and%20mixed%20forests
Temperate broadleaf and mixed forests
Temperate broadleaf and mixed forest is a temperate climate terrestrial habitat type defined by the World Wide Fund for Nature, with broadleaf tree ecoregions, and with conifer and broadleaf tree mixed coniferous forest ecoregions. These forests are richest and most distinctive in central China and eastern North America, with some other globally distinctive ecoregions in the Himalayas, Western and Central Europe, the southern coast of the Black Sea, Australasia, Southwestern South America and the Russian Far East. Ecology The typical structure of these forests includes four layers. The uppermost layer is the canopy composed of tall mature trees ranging from high. Below the canopy is the three-layered, shade-tolerant understory that is roughly shorter than the canopy. The top layer of the understory is the sub-canopy composed of smaller mature trees, saplings, and suppressed juvenile canopy layer trees awaiting an opening in the canopy. Below the sub-canopy is the shrub layer, composed of low growing woody plants. Typically the lowest growing (and most diverse) layer is the ground cover or herbaceous layer. Trees In the Northern hemisphere, characteristic dominant broadleaf trees in this biome include oaks (Quercus spp.), beeches (Fagus spp.), maples (Acer spp.), or birches (Betula spp.). The term "mixed forest" comes from the inclusion of coniferous trees as a canopy component of some of these forests. Typical coniferous trees include pines (Pinus spp.), firs (Abies spp.), and spruces (Picea spp.). In some areas of this biome, the conifers may be a more important canopy species than the broadleaf species. In the Southern Hemisphere, endemic genera such as Nothofagus and Eucalyptus occupy this biome, and most coniferous trees (members of the Araucariaceae and Podocarpaceae) occur in mixtures with broadleaf species, and are classed as broadleaf and mixed forests. Climate Temperate broadleaf and mixed forests occur in areas with distinct warm and cool seasons, including climates such as humid continental, humid subtropical, and oceanic, that give them moderate annual average temperatures: . These forests occur in relatively warm and rainy climates, sometimes also with a distinct dry season. A dry season occurs in the winter in East Asia and in summer on the wet fringe of the Mediterranean climate zones. Other areas, such as central eastern North America, have a fairly even distribution of rainfall; annual rainfall is typically over and often over , though it can go as low as in some parts of the Middle East and close to in the mountains of New Zealand and the Azores. Temperatures are typically moderate except in parts of Asia such as Ussuriland, or the Upper Midwest, where temperate forests can occur despite very harsh conditions with very cold winters. The climates are typically humid for much of the year, usually appearing in the humid subtropical climate and in the humid continental climate zones to the south of tundra and the generally subarctic taiga. In the Köppen climate classification they are represented respectively by Cfa, Dfa/Dfb southern range and Cfb, and more rarely, Csb, BSk and Csa. Ecoregions Australasia Eurasia Americas
Physical sciences
Forests
null
203111
https://en.wikipedia.org/wiki/Tropical%20and%20subtropical%20coniferous%20forests
Tropical and subtropical coniferous forests
Tropical and subtropical coniferous forests are a tropical forest habitat type defined by the World Wide Fund for Nature. These forests are found predominantly in North and Central America and experience low levels of precipitation and moderate variability in temperature. Tropical and subtropical coniferous forests are characterized by diverse species of conifers, whose needles are adapted to deal with the variable climatic conditions. Most tropical and subtropical coniferous forest ecoregions are found in the Nearctic and Neotropical realms, from Mexico to Nicaragua and on the Greater Antilles, Bahamas, and Bermuda. Other tropical and subtropical coniferous forests ecoregions occur in Asia. Mexico harbors the world's richest and most complex subtropical coniferous forests. The conifer forests of the Greater Antilles contain many endemics and relictual taxa. Many migratory birds and butterflies spend winter in tropical and subtropical conifer forests. This biome features a thick, closed canopy which blocks light to the floor and allows little underbrush. As a result, the ground is often covered with fungi and ferns. Shrubs and small trees compose a diverse understory. Tropical and subtropical coniferous forests ecoregions
Physical sciences
Forests
Earth science
203113
https://en.wikipedia.org/wiki/Tropical%20and%20subtropical%20dry%20broadleaf%20forests
Tropical and subtropical dry broadleaf forests
The tropical and subtropical dry broadleaf forest is a habitat type defined by the World Wide Fund for Nature and is located at tropical and subtropical latitudes. Though these forests occur in climates that are warm year-round, and may receive several hundred millimeters of rain per year, they have long dry seasons that last several months and vary with geographic location. These seasonal droughts have great impact on all living things in the forest. Deciduous trees predominate in most of these forests, and during the drought a leafless period occurs, which varies with species type. Because trees lose moisture through their leaves, the shedding of leaves allows trees such as teak and mountain ebony to conserve water during dry periods. The newly bare trees open up the canopy layer, enabling sunlight to reach ground level and facilitate the growth of thick underbrush. Trees on moister sites and those with access to ground water tend to be evergreen. Infertile sites also tend to support evergreen trees. Three tropical dry forest ecoregions, the East Deccan dry evergreen forests, the Sri Lanka dry-zone dry evergreen forests, and the Southeastern Indochina dry evergreen forests, are characterized by evergreen trees. Though less biologically diverse than rainforests, tropical dry forests are home to a wide variety of wildlife including monkeys, deer, large cats, parrots, various rodents, and ground dwelling birds. Mammalian biomass tends to be higher in dry forests than in rain forests, especially in Asian and African dry forests. Many of these species display extraordinary adaptations to the difficult climate. This biome is alternately known as the tropical and subtropical dry forest biome or the tropical and subtropical deciduous forest biome. Geographical variation Dry forests tend to exist in the drier areas north and south of the tropical rainforest belt, south or north of the subtropical deserts, generally in two bands: one between 10° and 20°N latitude and the other between 10° and 20°S latitude. The most diverse dry forests in the world occur in western and southern Mexico and in the Bolivian lowlands. The dry forests of the Pacific Coast of northwestern South America support a wealth of unique species due to their dry climate. The Maputaland-Pondoland bushland and thickets along the east coast of South Africa are diverse and support many endemic species. The dry forests of central India and Indochina are notable for their diverse large vertebrate faunas. Madagascar dry deciduous forests and New Caledonia dry forests are also highly distinctive (pronounced endemism and a large number of relictual taxa) for a wide range of taxa and at higher taxonomic levels. Trees use underground water during the dry seasons. Biodiversity patterns and requirements Species tend to have wider ranges than moist forest species, although in some regions many species do display highly restricted ranges; most dry forest species are restricted to tropical dry forests, particularly in plants; beta diversity and alpha diversity high but typically lower than adjacent moist forests. Effective conservation of dry broadleaf forests requires the preservation of large and continuous areas of forest. Large natural areas are required to maintain larger predators and other vertebrates, and to buffer sensitive species from hunting pressure. The persistence of riparian forests and water sources is critical for many dry forest species. 
Large swathes of intact forest are required to allow species to recover from occasional large events, like forest fires. Dry forests are highly sensitive to excessive burning and deforestation; overgrazing and invasive species can also quickly alter natural communities; restoration is possible but challenging, particularly if degradation has been intense and persistent. Ecoregions Afrotropical realm Cape Verde Islands dry forests Madagascar dry deciduous forests Zambezian cryptosepalum dry forests Australasian realm Lesser Sundas deciduous forests New Caledonia dry forests Sumba deciduous forests Timor and Wetar deciduous forests Indomalayan realm Central Deccan Plateau dry deciduous forests Central Indochina dry forests Chota Nagpur Plateau East Deccan dry evergreen forests Irrawaddy dry forests Khathiar–Gir dry deciduous forests Narmada Valley dry deciduous forests Northern dry deciduous forests South Deccan Plateau dry deciduous forests Southeastern Indochina dry evergreen forests Southern Vietnam lowland dry forests Sri Lanka dry-zone dry evergreen forests Nearctic realm Sonoran-Sinaloan transition subtropical dry forest Neotropical realm Apure–Villavicencio dry forests Atlantic dry forests Bahamian dry forests Bajío dry forests Balsas dry forests Bolivian montane dry forests Cayman Islands dry forests Central American dry forests Chiapas Depression dry forests Chiquitano dry forests Cuban dry forests Ecuadorian dry forests Gran Chaco Hispaniolan dry forests Jalisco dry forests Jamaican dry forests Lara–Falcón dry forests Leeward Islands dry forests Magdalena Valley dry forests Maracaibo dry forests Marañón dry forests Panamanian dry forests Patía Valley dry forests Puerto Rican dry forests Revillagigedo Islands Sierra de la Laguna dry forests Sinaloan dry forests Sinú Valley dry forests Southern Pacific dry forests Trinidad and Tobago dry forests Tumbes–Piura dry forests Veracruz dry forests Windward Islands dry forests Yucatán dry forests Oceanian realm Fiji tropical dry forests Hawaiian tropical dry forests Marianas tropical dry forests Yap tropical dry forests
Physical sciences
Forests
Earth science
203115
https://en.wikipedia.org/wiki/Tropical%20and%20subtropical%20moist%20broadleaf%20forests
Tropical and subtropical moist broadleaf forests
Tropical and subtropical moist broadleaf forests (TSMF), also known as tropical moist forest, is a subtropical and tropical forest habitat type defined by the World Wide Fund for Nature. Description TSMF is generally found in large, discontinuous patches centered on the equatorial belt and between the Tropic of Cancer and Tropic of Capricorn. TSMF are characterized by low variability in annual temperature and high levels of rainfall of more than annually. Forest composition is dominated by evergreen and semi-deciduous tree species. These forests are home to more species than any other terrestrial ecosystem on Earth: Half of the world's species may live in these forests, where a square kilometer may be home to more than 1,000 tree species. These forests are found around the world, particularly in the Indo-Malayan Archipelago, the Amazon Basin, and the African Congo Basin. The perpetually warm, wet climate makes these environments more productive than any other terrestrial environment on Earth and promotes explosive plant growth. A tree here may grow over in height in just 5 years. From above, the forest appears as an unending sea of green, broken only by occasional, taller "emergent" trees. These towering emergents are the realm of hornbills, toucans, and the harpy eagle. Generally, biodiversity is highest in the forest canopy. The canopy can be divided into five layers: overstory canopy with emergent crowns, a medium layer of canopy, lower canopy, shrub level, and finally understory. The canopy is home to many of the forest's animals, including apes and monkeys. Below the canopy, a lower understory hosts snakes and big cats. The forest floor, relatively clear of undergrowth due to the thick canopy above, is stalked by other animals such as gorillas and deer. All levels of these forests contain an unparalleled diversity of invertebrate species, including New Guinea's stick insects and butterflies that can grow over in length. Many forests are being cleared for farmland, while others are subject to large-scale commercial logging. An area the size of Ireland is destroyed every few years. Types The biome includes several types of forests: Lowland equatorial evergreen rain forests, commonly known as tropical rainforests, are forests which receive high rainfall (tropical rainforest climate with more than 2000 mm, or 80 inches, annually) throughout the year. These forests occur in a belt around the equator, with the largest areas in the Amazon basin of South America, the Congo Basin of central Africa, the Wet Tropics of Queensland in Australia and parts of the Malay Archipelago. About half of the world's tropical rainforests are in the South American countries of Brazil and Peru. Rainforests now cover less than 6% of Earth's land surface. Scientists estimate that more than half of all the world's plant and animal species live in tropical rainforests. Tropical seasonal forests, also known as moist deciduous, monsoon or semi-evergreen (mixed) seasonal forests, have a monsoon or wet savannah climates (as in the Köppen climate classification): receiving high overall rainfall with a warm summer wet season and (often) a cooler winter dry season. Some trees in these forests drop some or all of their leaves during the winter dry season. These forests are found in South Florida, parts of South America, in Central America and around the Caribbean, in coastal West Africa, parts of the Indian subcontinent, Northern Australia and across much of Indochina. 
Montane rain forests are found in cooler-climate mountainous areas. Those with elevations high enough to regularly encounter low-level cloud cover are known as cloud forests. Flooded forests, including freshwater swamp forests and peat swamp forests. Manigua a low, often impenetrable dense forest of tangled tropical shrub and small trees. It is usually found in marshy areas but also on dry land in certain places. The term is used in Cuba, the Dominican Republic, Puerto Rico and Colombia. Notable ecoregions A number of TSMF ecoregions are notable for their biodiversity and endemism: Southwest Amazon moist forests in Brazil, Peru and Bolivia Atlantic Forest in Brazil, Argentina and Paraguay Chocó–Darién moist forests in Colombia and Panama The Wet Tropics of Queensland in Australia Northwestern Andean montane forests of Colombia and Ecuador Guayanan Highlands moist forests Cuban moist forests Veracruz moist forests in Mexico Congolese rainforests Upper Guinean forests Albertine Rift montane forests from Uganda to Burundi Eastern Arc forests of Kenya and Tanzania Coastal forests of eastern Africa from Somalia to Mozambique Madagascar subhumid forests Puerto Rican moist forests Sri Lanka lowland rain forests Peninsular Malaysian peat swamp forests Borneo peat swamp forests New Caledonia rain forests Western Ghats
Physical sciences
Forests
Earth science
203116
https://en.wikipedia.org/wiki/Vulpes
Vulpes
Vulpes is a genus of the sub-family Caninae. The members of this genus are colloquially referred to as true foxes, meaning they form a proper clade. The word "fox" occurs in the common names of all species of the genus, but also appears in the common names of other canid species. True foxes are distinguished from members of the genus Canis, such as domesticated dogs, wolves, jackals and coyotes, by their smaller size (5–11 kg), longer, bushier tail, and flatter skull. They have black, triangular markings between their eyes and nose, and the tip of their tail is often a different color from the rest of their pelt. The typical lifespan for this genus is between two and four years, but can reach up to a decade. Extant species Within Vulpes, 12 separate extant species and four fossil species are described: Early history The oldest known fossil species within Vulpes is V. riffautae, dating back to the late Miocene of Chad, which is within the Neogene. The deposits where these fossils are found are about 7 million years old, which might make them the earliest Canidae in the Old World. They are estimated to have weighed between 1.5 and 3.5 lb. V. skinneri, from the Malapa Fossil Site from South Africa, is younger than V. riffautae by roughly 5 million years, and shows up in the early Pleistocene. Two other extinct, less documented fossils are known: V. praeglacialis and V. hassani. V. praeglacialis was discovered in the Petralona Cave in Chalkidiki, Greece. The age of the deposits (Early Pleistocene) makes it the earliest occurrence of Vulpes in Europe. V. hassani is found in a Miocene-Pliocene deposit in northwestern Africa. This species may have given rise to current Rüppell's fox, which lends support that the close phylogenetic clustering of Rüppels and Red foxes is the result of recent introgressive hybridization rather than recent speciation. In the Pleistocene, Vulpes had a fairly wide distribution, with eight species found in North America. Of these eight, six are not fossil, and three species still remain in North America (V. velox, V. macrotis, and V. vulpes). The remaining three moved on to sections of Africa over time. V. stenognathus is extinct, but has extant sister taxa including V. chama, V. rueppellii, V. velox, and V. vulpes, which fits with these species all evolving together in North America. Fossil species †Vulpes hassani †Vulpes odessana †Vulpes praeglacialis - Kormos (found in Petralona Cave, Greece) †Vulpes qiuzhudingi (2014) †Vulpes riffautae - Late Miocene †Vulpes rooki †Vulpes skinneri †Vulpes stenognathus †Vulpes gigas Description True foxes are small to medium-sized animals, usually smaller than other canines, such as wolves, dogs, and jackals. For example, the largest species, the red fox, weighs on average 4.1–8.7 kg and the smallest species, the fennec fox, weighs only 0.7–1.6 kg. They have long, dense fur, and a bushy, rounded tail that is at least half as long, or fully as long as, the head and body. They have a rather long body with shorter limbs, a long, narrow muzzle, and large, pointed ears. The forelimbs have five toes, while the hind legs have only four. The skull is light and slender, elongated. Sagittal crest not developed at all or weakly defined. Vulpes species have vertically slit pupils, which generally appear elliptical in strong light like those of cats, which provide them with significant advantages. Like most canids, true foxes have a muscular body, powerful jaws, and teeth for grasping prey. 
Blunt claws are especially useful for gripping the ground while tracking down their prey. Some species have a pungent "foxy" odor, arising mainly from a gland located on the dorsal surface of the tail, not far from its base. Not much sexual dimorphism is displayed, although males are slightly larger. In general, Vulpes has a bone structure very close to that of its canid relatives, but there are some variations. For example, although canid limbs are designed specifically for running quickly on land to catch prey, Vulpes species avoid rapid sprints, excluding when being chased, and have become more specialized for leaping and grasping prey. In Vulpes vulpes, for example, the adaptions for leaping, grasping, and climbing include the lengthening of hind limbs in relation to fore limbs, as well as overall slenderizing of both hind and fore limbs. Muscles are also emphasized along the axis of limbs. The length, color and density of the fur of fox species differ. Fennec foxes (and other desert-adapted fox species such as Vulpes macrotis) have large ears and a short coat to keep the body cool. On the other hand, the Arctic fox has small ears and a thick, insulating coat to keep the body warm. A solid color coat is seen in most animals, but there are occasions where the coat color varies over the year to enhance camouflage against the current seasons landscape. The red fox, Ruppell's fox, and Tibetan sand fox possess white-tipped tails. The Arctic fox's tail-tip is of the same color as the rest of the tail (white or blue-gray). Blanford's fox usually possesses a black-tipped tail, but a small number of specimens (2% in Israel, 24% in the United Arab Emirates) possess a light-tipped tail. The other foxes in this group (Bengal, Cape, corsac, fennec, kit, pale, and swift) all possess black-tipped or dark-tipped tails. Distribution and habitat The range of the genus is very wide, present in a wide variety of habitats, from the desert to the Arctic, and from high altitudes in the mountains to open plains. True foxes are opportunistic and thrive anywhere they can find food and shelter. They are also widespread in suburban and urban areas, where they can take advantage of human food supplies; however, they prefer to stay away from large industrial areas. In certain areas, foxes tend to do better where humans are present, including in many agricultural landscapes, forests and patchy woodlands. Behavior and ecology Most true foxes are nocturnal, but they can be active during the morning and dusk and occasionally hunt and scavenge in daylight during winter. Many fox species are solitary or nomadic, living most of their lives on their own, except for the mating season, when they have a monogamous relationship with a partner. Some live in small family groups, others are more gregarious. Vulpes have a high variation in social organization between species and populations. Their hierarchical society usually depends on population densities. As population density increases, there is also an increase in the formation of social groups. These groups consist of one dominant pair and a few other subordinate adults that tend to be related. Dominance is established within the den, and dominant kits have usually more access to food and often hold higher social status. If a dispute occurs, dominance is determined by fighting, and the loser may be rejected from its group. These social groups can consist of up to ten adults. Cape foxes likely have a matriarchal social organization. 
Diet This genus is omnivorous and prone to scavenging. The foods of choice for Vulpes consist of invertebrates, a variety of small vertebrates, grasses, and some angiosperms. The typical intake per day is about 1 kg. True foxes exhibit hoarding behavior, or caching, in which they store away food for another day out of sight of other animals. Predators Adult foxes have very few predators except coyotes, bears, and wolves, depending on the location. Juvenile foxes face a wider range of threats from small carnivores and large birds of prey, such as eagles. Reproduction Most true foxes are monogamous. However, they can form polyandrous and polygynous pairs. Breeding season varies between species and habitat, but they generally breed between late December and late March. Most foxes dig out dens to provide a safe underground space for raising their young. Born deaf and blind, kits or cubs require their mother's milk and complete supervision for the first four to five weeks out of the womb, but begin to be progressively weaned after the first month. Once fully weaned, kits seek out various insects. The parents supplement this diet with a variety of mammals and birds. During early to middle July, the kits are able to hunt on their own and soon move away from their parents. Relationship with humans Domestication The silver fox is a melanistic form of the wild red fox. Though rare, domestication has been documented in silver foxes. The most notable experiment was conducted in Novosibirsk, Russia, at the Siberian Institute of Cytology and Genetics. In this study, generations of silver foxes were divided into those with friendly traits and those with unfriendly traits. After 50 years, the friendly foxes developed “dog-like” domesticated traits such as spots, tail wagging, enjoyment of human touch, and barking. Fox hunting Fox hunting is an activity that originated in the United Kingdom in the 16th century and involves tracking, chasing, and killing a fox with the aid of foxhounds and horses. It has since spread to Europe, the United States, and Australia. Vulpes in culture and literature
Biology and health sciences
Canines
Animals
203200
https://en.wikipedia.org/wiki/Quark%20star
Quark star
A quark star is a hypothetical type of compact, exotic star, where extremely high core temperature and pressure have forced nuclear particles to form quark matter, a continuous state of matter consisting of free quarks. Background Some massive stars collapse to form neutron stars at the end of their life cycle, as has been both observed and explained theoretically. Under the extreme temperatures and pressures inside neutron stars, the neutrons are normally kept apart by a degeneracy pressure, stabilizing the star and hindering further gravitational collapse. However, it is hypothesized that under even more extreme temperature and pressure, the degeneracy pressure of the neutrons is overcome, and the neutrons are forced to merge and dissolve into their constituent quarks, creating an ultra-dense phase of quark matter based on densely packed quarks. In this state, a new equilibrium is supposed to emerge, as a new degeneracy pressure between the quarks, as well as repulsive electromagnetic forces, will occur and hinder total gravitational collapse. If these ideas are correct, quark stars might occur, and be observable, somewhere in the universe. Such a scenario is seen as scientifically plausible, but has not been proven observationally or experimentally; the very extreme conditions needed for stabilizing quark matter cannot be created in any laboratory and has not been observed directly in nature. The stability of quark matter, and hence the existence of quark stars, is for these reasons among the unsolved problems in physics. If quark stars can form, then the most likely place to find quark star matter would be inside neutron stars that exceed the internal pressure needed for quark degeneracy – the point at which neutrons break down into a form of dense quark matter. They could also form if a massive star collapses at the end of its life, provided that it is possible for a star to be large enough to collapse beyond a neutron star but not large enough to form a black hole. If they exist, quark stars would resemble and be easily mistaken for neutron stars: they would form in the death of a massive star in a Type II supernova, be extremely dense and small, and possess a very high gravitational field. They would also lack some features of neutron stars, unless they also contained a shell of neutron matter, because free quarks are not expected to have properties matching degenerate neutron matter. For example, they might be radio-silent, or have atypical sizes, electromagnetic fields, or surface temperatures, compared to neutron stars. History The analysis about quark stars was first proposed in 1965 by Soviet physicists D. D. Ivanenko and D. F. Kurdgelaidze. Their existence has not been confirmed. The equation of state of quark matter is uncertain, as is the transition point between neutron-degenerate matter and quark matter. Theoretical uncertainties have precluded making predictions from first principles. Experimentally, the behaviour of quark matter is being actively studied with particle colliders, but this can only produce very hot (above 1012 K) quark–gluon plasma blobs the size of atomic nuclei, which decay immediately after formation. The conditions inside compact stars with extremely high densities and temperatures well below 1012 K cannot be recreated artificially, as there are no known methods to produce, store or study "cold" quark matter directly as it would be found inside quark stars. 
The theory predicts quark matter to possess some peculiar characteristics under these conditions. Formation It is hypothesized that when the neutron-degenerate matter, which makes up neutron stars, is put under sufficient pressure from the star's own gravity or the initial supernova creating it, the individual neutrons break down into their constituent quarks (up quarks and down quarks), forming what is known as quark matter. This conversion may be confined to the neutron star's center or it might transform the entire star, depending on the physical circumstances. Such a star is known as a quark star. Stability and strange quark matter Ordinary quark matter consisting of up and down quarks has a very high Fermi energy compared to ordinary atomic matter and is stable only under extreme temperatures and/or pressures. This suggests that the only stable quark stars will be neutron stars with a quark matter core, while quark stars consisting entirely of ordinary quark matter will be highly unstable and re-arrange spontaneously. It has been shown that the high Fermi energy making ordinary quark matter unstable at low temperatures and pressures can be lowered substantially by the transformation of a sufficient number of up and down quarks into strange quarks, as strange quarks are, relatively speaking, a very heavy type of quark particle. This kind of quark matter is known specifically as strange quark matter and it is speculated and subject to current scientific investigation whether it might in fact be stable under the conditions of interstellar space (i.e. near zero external pressure and temperature). If this is the case (known as the Bodmer–Witten assumption), quark stars made entirely of quark matter would be stable if they quickly transform into strange quark matter. Strange stars Stars made of strange quark matter are known as strange stars. These form a distinct subtype of quark stars. Theoretical investigations have revealed that quark stars might not only be produced from neutron stars and powerful supernovas, they could also be created in the early cosmic phase separations following the Big Bang. If these primordial quark stars transform into strange quark matter before the external temperature and pressure conditions of the early Universe makes them unstable, they might turn out stable, if the Bodmer–Witten assumption holds true. Such primordial strange stars could survive to this day. Characteristics Quark stars have some special characteristics that separate them from ordinary neutron stars. Under the physical conditions found inside neutron stars, with extremely high densities but temperatures well below 1012 K, quark matter is predicted to exhibit some peculiar characteristics. It is expected to behave as a Fermi liquid and enter a so-called color-flavor-locked (CFL) phase of color superconductivity, where "color" refers to the six "charges" exhibited in the strong interaction, instead of the two charges (positive and negative) in electromagnetism. At slightly lower densities, corresponding to higher layers closer to the surface of the compact star, the quark matter will behave as a non-CFL quark liquid, a phase that is even more mysterious than CFL and might include color conductivity and/or several additional yet undiscovered phases. None of these extreme conditions can currently be recreated in laboratories so nothing can be inferred about these phases from direct experiments. 
Observed overdense neutron stars At least under the assumptions mentioned above, the probability of a given neutron star being a quark star is low, so in the Milky Way there would only be a small population of quark stars. If it is correct, however, that overdense neutron stars can turn into quark stars, that makes the possible number of quark stars higher than was originally thought, as observers would be looking for the wrong type of star. A neutron star without deconfinement to quarks and higher densities cannot have a rotational period shorter than about a millisecond; even with the enormous gravity of such a condensed object, faster rotation would eject matter from the surface because gravity could no longer supply the required centripetal force, so detection of a pulsar with a sub-millisecond period would be strong evidence of a quark star. Observations released by the Chandra X-ray Observatory on April 10, 2002, detected two possible quark stars, designated RX J1856.5−3754 and 3C 58, which had previously been thought to be neutron stars. Based on the known laws of physics, the former appeared much smaller and the latter much colder than expected, suggesting that they are composed of material denser than neutron-degenerate matter. However, these observations were met with skepticism by researchers who said the results were not conclusive; and since the late 2000s, the possibility that RX J1856 is a quark star has been excluded. Another star, XTE J1739-285, has been observed by a team led by Philip Kaaret of the University of Iowa and reported as a possible quark star candidate. In 2006, You-Ling Yue et al., from Peking University, suggested that PSR B0943+10 may in fact be a low-mass quark star. It was reported in 2008 that observations of supernovae SN 2006gy, SN 2005gj and SN 2005ap also suggest the existence of quark stars. It has been suggested that the collapsed core of supernova SN 1987A may be a quark star. In 2015, Zi-Gao Dai et al. from Nanjing University suggested that the supernova ASASSN-15lh is a newborn strange quark star. In 2022 it was suggested that GW190425, which likely formed as a merger between two neutron stars giving off gravitational waves in the process, could be a quark star. Other hypothesized quark formations Apart from ordinary quark matter and strange quark matter, other types of quark–gluon plasma might hypothetically occur or be formed inside neutron stars and quark stars. This includes the following, some of which have been observed and studied in laboratories: In 1977, Robert L. Jaffe suggested a four-quark state with strangeness (qs). In the same year, Jaffe also suggested the H dibaryon, a six-quark state with equal numbers of up, down, and strange quarks (represented as uuddss or udsuds). Bound multi-quark systems with heavy quarks (QQ). In 1987, a pentaquark state was first proposed with a charm anti-quark (qqqs). Pentaquark states with an antistrange quark and four light quarks consisting of up and down quarks only (qqqq). Light pentaquarks are grouped within an antidecuplet; the lightest candidate, Θ+, can also be described by the diquark model of Robert L. Jaffe and Wilczek (QCD), along with Θ++ and its antiparticle Θ−−. A doubly strange pentaquark (ssdd) is a member of the light pentaquark antidecuplet. A charmed pentaquark state, Θc(3100) (uudd), was detected by the H1 collaboration. Tetraquark particles might form inside neutron stars and under other extreme conditions. In 2008, 2013 and 2014 the tetraquark particle Z(4430) was discovered and investigated in laboratories on Earth.
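The mass-shedding argument quoted above, that a conventional neutron star cannot spin much faster than roughly once per millisecond before flinging matter off its equator, can be illustrated with a simple Newtonian estimate. The sketch below ignores general relativity and any realistic equation of state, and the 1.4-solar-mass, 10 km figures are assumed fiducial values for illustration, not measured properties of any particular star.

```python
# Rough Newtonian estimate of the mass-shedding (break-up) spin period: rotation
# faster than this Keplerian period would eject material from the equator.
# Assumptions: point-mass Newtonian gravity, rigid sphere, illustrative mass/radius.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def breakup_period(mass_kg: float, radius_m: float) -> float:
    """Keplerian period at the stellar equator: P = 2*pi*sqrt(R^3 / (G*M))."""
    return 2.0 * math.pi * math.sqrt(radius_m ** 3 / (G * mass_kg))

# A canonical 1.4 solar-mass compact star with a 10 km radius (assumed values):
p_min = breakup_period(1.4 * M_SUN, 10e3)
print(f"Break-up period ~ {p_min * 1e3:.2f} ms")  # on the order of half a millisecond
```

The estimate lands in the sub-millisecond range, consistent with the statement that an ordinary neutron star cannot sustain rotation much faster than about once per millisecond; a full treatment would need general relativity and a realistic equation of state.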
Physical sciences
Stellar astronomy
Astronomy
203314
https://en.wikipedia.org/wiki/W.%20M.%20Keck%20Observatory
W. M. Keck Observatory
The W. M. Keck Observatory is an astronomical observatory with two telescopes at an elevation of 4,145 meters (13,600 ft) near the summit of Mauna Kea in the U.S. state of Hawaii. Both telescopes have aperture primary mirrors, and, when completed in 1993 (Keck I) and 1996 (Keck II), they were the largest optical reflecting telescopes in the world. They have been the third and fourth largest since 2006. Overview With a concept first proposed in 1977, telescope designers Terry Mast, of the University of California, Berkeley, and Jerry Nelson of Lawrence Berkeley Laboratory had been developing the technology necessary to build a large, ground-based telescope. In 1985, Howard B. Keck of the W. M. Keck Foundation gave $70 million to fund the construction of the Keck I telescope, which began in September 1985. First light occurred on November 24, 1990, using 9 of the eventual 36 segments. When construction of the first telescope was well advanced, further donations allowed the construction of a second telescope starting in 1991. The Keck I telescope began science observations in May 1993, while first light for Keck II occurred on April 27, 1996. The key advance that allowed the construction of the Keck telescopes was the use of active optics to operate smaller mirror segments as a single, contiguous mirror. A mirror of similar size cast of a single piece of glass could not be made rigid enough to hold its shape precisely; it would sag microscopically under its own weight as it was turned to different positions, causing aberrations in the optical path. In the Keck telescopes, each primary mirror is made of 36 hexagonal segments that work together as a unit. Each segment is 1.8 meters wide and 7.5 centimeters thick and weighs half a ton. The mirrors were made in Lexington, Massachusetts by Itek Optical Systems from Zerodur glass-ceramic by the German company Schott AG. On the telescope, each segment is kept stable by a system of active optics, which uses extremely rigid support structures in combination with three actuators under each segment. During observation, the computer-controlled system of sensors and actuators dynamically adjusts each segment's position relative to its neighbors, keeping a surface shape accuracy of four nanometers. As the telescope moves, this twice-per-second adjustment counters the effects of gravity and other environmental and structural effects that can affect mirror shape. Each Keck telescope sits on an altazimuth mount. Most current 8–10 m class telescopes use altazimuth designs for their reduced structural requirements compared to older equatorial designs. Altazimuth mounting provides the greatest strength and stiffness with the least amount of steel, which, for Keck Observatory, totals about 270 tons per telescope, bringing each telescope's total weight to more than 300 tons. Two proposed designs for the next generation 30 and 40 m telescopes use the same basic technology pioneered at Keck Observatory: a hexagonal mirror array coupled with an altazimuth mounting. Each of the two telescopes has a primary mirror with an equivalent diameter of 10 meters (32.8 ft or 394 in), slightly smaller than the Gran Telescopio Canarias whose primary mirror has an equivalent diameter of 10.4 meters. The telescopes are equipped with a suite of cameras and spectrometers that allow observations across much of the visible and near-infrared spectrum. 
Management The Keck Observatory is managed by the California Association for Research in Astronomy, a non-profit 501(c)(3) organization whose board of directors includes representatives from Caltech and the University of California. Construction of the telescopes was made possible through private grants of over $140 million from the W.M. Keck Foundation. The National Aeronautics and Space Administration (NASA) joined the partnership in October 1996 when Keck II commenced observations. Telescope time is allocated by the partner institutions. Caltech, the University of Hawaii System, and the University of California accept proposals from their own researchers; NASA accepts proposals from researchers based in the United States. Jerry Nelson, Keck Telescope project scientist, contributed to later multi-mirror projects until his death in June 2017. He conceived one of the Kecks' innovations, a reflecting surface of multiple thin segments acting as one mirror. Instruments MOSFIRE MOSFIRE (Multi-Object Spectrometer for Infra-Red Exploration), a third-generation instrument, was delivered to Keck Observatory on February 8, 2012; first light was obtained on the Keck I telescope on April 4, 2012. A multi-object spectrograph and wide-field camera for the near-infrared (0.97 to 2.41 μm), its special feature is its cryogenic Configurable Slit Unit (CSU), which is reconfigurable by remote control in under six minutes without any thermal cycling. Bars move in from each side to form up to 46 short slits. When the bars are removed, MOSFIRE becomes a wide-field imager. It was developed by teams from the University of California, Los Angeles (UCLA), the California Institute of Technology (Caltech) and the University of California, Santa Cruz (UCSC). Its co-principal investigators are Ian S. McLean (UCLA) and Charles C. Steidel (Caltech), and the project was managed by WMKO Instrument Program Manager Sean Adkins. MOSFIRE was funded in part by the Telescope System Instrumentation Program (TSIP), operated by AURA and funded by the National Science Foundation, and by a private donation to WMKO from Gordon and Betty Moore. DEIMOS The Deep Extragalactic Imaging Multi-Object Spectrograph is capable of gathering spectra from 130 galaxies or more in a single exposure. In "Mega Mask" mode, DEIMOS can take spectra of more than 1,200 objects at once, using a special narrow-band filter. HIRES The largest and most mechanically complex of the Keck Observatory's main instruments, the High Resolution Echelle Spectrometer breaks up incoming light into its component colors to measure the precise intensity of each of thousands of color channels. Its spectral capabilities have resulted in many breakthrough discoveries, such as the detection of planets outside our solar system and direct evidence for a model of the Big Bang theory. The radial velocity precision is up to one meter per second (1.0 m/s). The instrument detection limit at 1 AU is . KCWI The Keck Cosmic Web Imager is an integral field spectrograph operating at wavelengths between 350 and 560 nm. LRIS The Low Resolution Imaging Spectrograph is a faint-light instrument capable of taking spectra and images of the most distant known objects in the universe. The instrument is equipped with a red arm and a blue arm to explore stellar populations of distant galaxies, active galactic nuclei, galactic clusters, and quasars. LWS The Long Wavelength Spectrometer for the Keck I telescope is an imaging, grating spectrometer working in the wavelength range of 3–25 microns.
Like NIRC, the LWS was a forward-CASS instrument, and was used for studying cometary, planetary, and extragalactic objects. The LWS is now retired from science observations. NIRC The Near Infrared Camera for the Keck I telescope was so sensitive it could detect the equivalent of a single candle flame on the Moon. This sensitivity made it ideal for ultra-deep studies of galactic formation and evolution, the search for proto-galaxies and images of quasar environments. It provided ground-breaking studies of the Galactic Center, and was also used to study protoplanetary disks and high-mass star-forming regions. NIRC was retired from science observations in 2010. NIRC-2 The second generation Near Infrared Camera works with the Keck Adaptive Optics system to produce the highest-resolution ground-based images and spectroscopy in the 1–5 micrometers (μm) range. Typical programs include mapping surface features on Solar System bodies, searching for planets around other stars, and analyzing the morphology of remote galaxies. NIRES The Near-Infrared Echellette Spectrometer is a spectrograph that provides simultaneous coverage of wavelengths from 0.94 to 2.45 microns. NIRSPEC The Near Infrared Spectrometer studies very high redshift radio galaxies, the motions and types of stars located near the Galactic Center, the nature of brown dwarfs, the nuclear regions of dusty starburst galaxies, active galactic nuclei, interstellar chemistry, stellar physics, and Solar System science. OSIRIS The OH-Suppressing Infrared Imaging Spectrograph is a near-infrared spectrograph for use with the Keck I adaptive optics system. OSIRIS takes spectra in a small field of view to provide a series of images at different wavelengths. The instrument allows astronomers to ignore wavelengths at which the Earth's atmosphere shines brightly from emissions of OH (hydroxyl) molecules, thus allowing the detection of objects 10 times fainter than previously available. Originally installed on Keck II, in January 2012 OSIRIS was moved to the Keck I telescope. Keck Interferometer The Interferometer allowed the light from both Keck telescopes to be combined into a single long-baseline, near-infrared optical interferometer. This long baseline gave the interferometer an effective angular resolution of 5 milliarcseconds (mas) at 2.2 μm, and 24 mas at 10 μm. Several back-end instruments allowed the interferometer to operate in a variety of modes in the H, K, and L near-infrared bands, as well as in nulling interferometry. In mid-2012 the Keck Interferometer was discontinued for lack of funding. Both Keck Observatory telescopes are equipped with laser guide star adaptive optics, which compensates for the blurring caused by atmospheric turbulence. It was the first such AO system operational on a large telescope and has been continually upgraded to expand its capability.
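The interferometer resolutions quoted above follow from the usual diffraction relation for a two-element interferometer, theta ≈ lambda / B. A minimal Python sketch, assuming a baseline of about 85 m (a value chosen here so that the numbers come out close to the quoted figures; it is not stated in the text):

import math

MAS_PER_RAD = 180 / math.pi * 3600 * 1000  # milliarcseconds per radian

def resolution_mas(wavelength_m, baseline_m):
    """Angular resolution theta ~ lambda / B, expressed in milliarcseconds."""
    return wavelength_m / baseline_m * MAS_PER_RAD

baseline = 85.0  # metres, assumed
for lam in (2.2e-6, 10e-6):
    print(f"lambda = {lam * 1e6:4.1f} um -> {resolution_mas(lam, baseline):5.1f} mas")

Both values come out near the quoted 5 mas at 2.2 μm and 24 mas at 10 μm, consistent with a baseline on the order of the physical separation between the two telescopes.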
Technology
Ground-based observatories
null
203318
https://en.wikipedia.org/wiki/Ritchey%E2%80%93Chr%C3%A9tien%20telescope
Ritchey–Chrétien telescope
A Ritchey–Chrétien telescope (RCT or simply RC) is a specialized variant of the Cassegrain telescope that has a hyperbolic primary mirror and a hyperbolic secondary mirror designed to eliminate off-axis optical errors (coma). The RCT has a wider field of view free of optical errors compared to a more traditional reflecting telescope configuration. Since the mid-20th century, a majority of large professional research telescopes have been Ritchey–Chrétien configurations; some well-known examples are the Hubble Space Telescope, the Keck telescopes and the ESO Very Large Telescope. History The Ritchey–Chrétien telescope was invented in the early 1910s by American astronomer George Willis Ritchey and French astronomer Henri Chrétien. Ritchey constructed the first successful RCT, which had an aperture diameter of 0.6 m (24 in), in 1927 (the Ritchey 24-inch reflector). The second RCT was a 1.0 m (40 in) instrument constructed by Ritchey for the United States Naval Observatory; that telescope is still in operation at the Naval Observatory Flagstaff Station. Design As with the other Cassegrain-configuration reflectors, the Ritchey–Chrétien telescope (RCT) has a very short optical tube assembly and compact design for a given focal length. The RCT offers good off-axis optical performance, but its mirrors require sophisticated techniques to manufacture and test. Hence the Ritchey–Chrétien configuration is most commonly found on high-performance professional telescopes. Two-mirror foundation A telescope with only one curved mirror, such as a Newtonian telescope, will always have aberrations. If the mirror is spherical, it will suffer primarily from spherical aberration. If the mirror is made parabolic, to correct the spherical aberration, then it still suffers from coma and astigmatism, since there are no additional design parameters one can vary to eliminate them. With two non-spherical mirrors, such as the Ritchey–Chrétien telescope, coma can be eliminated as well, by making the two mirrors' contribution to total coma cancel. This allows a larger useful field of view. However, such designs still suffer from astigmatism. The basic Ritchey–Chrétien two-surface design is free of third-order coma and spherical aberration. However, the two-surface design does suffer from fifth-order coma, severe large-angle astigmatism, and comparatively severe field curvature. Further corrections by a third element When focused midway between the sagittal and tangential focusing planes, stars appear as circles, making the Ritchey–Chrétien well suited for wide field and photographic observations. The remaining aberrations of the two-element basic design may be improved with the addition of smaller optical elements near the focal plane. Astigmatism can be cancelled by including a third curved optical element. When this element is a mirror, the result is a three-mirror anastigmat. Alternatively, a RCT may use one or several low-power lenses in front of the focal plane as a field-corrector to correct astigmatism and flatten the focal surface, as for example the SDSS telescope and the VISTA telescope; this can allow a field-of-view up to around 3° diameter. The Schmidt camera can deliver even wider fields up to about 7°. However, the Schmidt requires a full-aperture corrector plate, which restricts it to apertures below 1.2 meters, while a Ritchey–Chrétien can be much larger. Other telescope designs with front-correcting elements, such as the Lurie–Houghton design, are not limited by the practical problems of making a multiply-curved Schmidt corrector plate.
Aperture obstruction In a Ritchey–Chrétien design, as in most Cassegrain systems, the secondary mirror blocks a central portion of the aperture. This ring-shaped entrance aperture significantly reduces a portion of the modulation transfer function (MTF) over a range of low spatial frequencies, compared to a full-aperture design such as a refractor. This MTF notch has the effect of lowering image contrast when imaging broad features. In addition, the support for the secondary (the spider) may introduce diffraction spikes in images. Mirrors The radii of curvature of the primary and secondary mirrors, respectively, in a two-mirror Cassegrain configuration are R_1 = -2DF/(F - B) and R_2 = -2DB/(F - B - D), where F is the effective focal length of the system, B is the back focal length (the distance from the secondary to the focus), D is the distance between the two mirrors and M = F/f_1 is the secondary magnification. If, instead of B and D, the known quantities are the focal length of the primary mirror, f_1, and the distance to the focus behind the primary mirror, b, then D = f_1(F - b)/(F + f_1) and B = D + b. For a Ritchey–Chrétien system, the conic constants K_1 and K_2 of the two mirrors are chosen so as to eliminate third-order spherical aberration and coma; the solution is K_1 = -1 - (2/M^3)(B/D) and K_2 = -1 - (2/(M - 1)^3)[M(2M - 1) + B/D]. Note that K_1 and K_2 are less than -1 (since M > 1), so both mirrors are hyperbolic. (The primary mirror is typically quite close to being parabolic, however.) A short numerical sketch of these relations is given after the list of example telescopes below. The hyperbolic curvatures are difficult to test, especially with equipment typically available to amateur telescope makers or laboratory-scale fabricators; thus, older telescope layouts predominate in these applications. However, professional optics fabricators and large research groups test their mirrors with interferometers. A Ritchey–Chrétien then requires minimal additional equipment, typically a small optical device called a null corrector that makes the hyperbolic primary look spherical for the interferometric test. On the Hubble Space Telescope, this device was built incorrectly (a reflection from an unintended surface led to an incorrect measurement of lens position), leading to the error in the Hubble primary mirror. Incorrect null correctors have led to other mirror fabrication errors as well, such as in the New Technology Telescope. Additional flat mirrors In practice, each of these designs may also include any number of flat fold mirrors, used to bend the optical path into more convenient configurations. This article only discusses the mirrors required for forming an image, not those for placing it in a convenient location. Examples of large Ritchey–Chrétien telescopes Ritchey intended the 100-inch Mount Wilson Hooker telescope (1917) and the 200-inch (5 m) Hale Telescope to be RCTs. His designs would have provided sharper images over a larger usable field of view compared to the parabolic designs actually used. However, Ritchey and Hale had a falling-out. With the 100-inch project already late and over budget, Hale refused to adopt the new design, with its hard-to-test curvatures, and Ritchey left the project. Both projects were then built with traditional optics. Since then, advances in optical measurement and fabrication have allowed the RCT design to take over – the Hale telescope, dedicated in 1948, turned out to be the last world-leading telescope to have a parabolic primary mirror. The 10.4 m Gran Telescopio Canarias at Roque de los Muchachos Observatory on La Palma, Canary Islands, (Spain). The two 10.0 m telescopes of the Keck Observatory at Mauna Kea Observatory, (United States).
The four 8.2 m telescopes comprising the Very Large Telescope, (Chile). The 8.2 m Subaru telescope at Mauna Kea Observatory, (United States). The two 8.0 m telescopes comprising the Gemini Observatory at Mauna Kea Observatory, (United States) and Chile. The 4.1 m Visible and Infrared Survey Telescope for Astronomy at the Paranal Observatory, (Chile). The 4.1 m Southern Astrophysical Research Telescope at Cerro Pachón, (Chile). The 4.0 m Mayall Telescope at Kitt Peak National Observatory, (United States). The 4.0 m Blanco telescope at the Cerro Tololo Inter-American Observatory, (Chile). The 3.94 m telescope at Eastern Anatolia Observatory (DAG) in Erzurum, Turkey. The 3.9 m Anglo-Australian Telescope at Siding Spring Observatory, (Australia). The 3.6 m Devasthal Optical Telescope of Aryabhatta Research Institute of Observational Sciences, Nainital, (India). The 3.58 m Telescopio Nazionale Galileo at Roque de los Muchachos Observatory on La Palma, Canary Islands, (Spain). The 3.58 m New Technology Telescope at the European Southern Observatory, (Chile). The 3.5 m ARC telescope at Apache Point Observatory, New Mexico, (United States). The 3.5 m Calar Alto Observatory telescope at mount Calar Alto, (Spain). The 3.50 m WIYN Observatory at Kitt Peak National Observatory, (United States). The 3.4 m INO340 Telescope at Iranian National Observatory, (Iran). The 2.65 m VLT Survey Telescope at ESO’s Paranal Observatory, (Chile). The 2.56 m Nordic Optical Telescope on La Palma, Canary Islands, (Spain). The 2.50 m Sloan Digital Sky Survey telescope (modified design) at Apache Point Observatory, New Mexico, U.S. The 2.4 m Hubble Space Telescope currently in orbit around the Earth. The 2.4 m Thai National Observatory telescope on Doi Inthanon, (Thailand). The 2.3 m Aristarchos Telescope at Chelmos Observatory, Greece. The 2.2 m Calar Alto Observatory telescope at mount Calar Alto, (Spain). The 2.15 m Leoncito Astronomical Complex telescope in San Juan, Argentina. The 2.12 m telescope at San Pedro Martir, National Astronomical Observatory (Mexico). The 2.1 m telescope at Kitt Peak National Observatory, (United States). The 2.08 m Otto Struve Telescope at McDonald Observatory, (United States). The 2.0 m Liverpool Telescope (robotic telescope) on La Palma, Canary Islands, (Spain). The 2.0 m telescope at Rozhen Observatory, Bulgaria. The 2.0 m Himalayan Chandra Telescope of the Indian Astronomical Observatory, Hanle, (India). The 1.8 m Pan-STARRS telescopes at Haleakala on Maui, Hawaii. The 1.65 m telescope at Molėtai Astronomical Observatory, (Lithuania). The 1.6 m Mont-Mégantic Observatory telescope on Mont-Mégantic in Quebec, Canada. The 1.6 m Perkin-Elmer telescope on Pico dos Dias Observatory in Minas Gerais, Brazil. The 1.3 m telescope at Skinakas Observatory, on the island of Crete, Greece. The 1.0 m Ritchey Telescope at the United States Naval Observatory Flagstaff Station (the final telescope made by G. Ritchey before his death). The 1.0 m DFM Engineering telescope at Embry-Riddle Observatory in Daytona Beach, Florida, (United States). The four 1.0 m SPECULOOS telescopes at the Paranal Observatory in Chile dedicated to the search for Earth-sized exoplanets. The 0.85 m Spitzer Space Telescope, an infrared space telescope in an Earth-trailing orbit (retired by NASA on 30 January 2020). The 0.8 m Astelco Systems design Perren Telescope at the University College London Observatory in Mill Hill, London, (UK).
The 0.8 m DFM Engineering CCT-32 telescope at the University of Victoria in Victoria, British Columbia. The 0.208 m LOng Range Reconnaissance Imager (LORRI) camera on board the New Horizons spacecraft, currently beyond Pluto.
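As a companion to the mirror relations in the section above, here is a minimal Python sketch that evaluates the radii of curvature and conic constants for a two-mirror Ritchey–Chrétien prescription. The input values are purely illustrative and do not correspond to any telescope in the list.

def rc_prescription(F, f1, b):
    """Return (R1, R2, K1, K2) for a Ritchey–Chrétien system with effective
    focal length F, primary focal length f1, and distance b from the primary
    mirror to the focus behind it (all lengths in metres)."""
    D = f1 * (F - b) / (F + f1)    # separation between the two mirrors
    B = D + b                      # back focal length (secondary to focus)
    M = F / f1                     # secondary magnification
    R1 = -2 * D * F / (F - B)      # primary radius of curvature
    R2 = -2 * D * B / (F - B - D)  # secondary radius of curvature
    K1 = -1 - (2 / M**3) * (B / D)
    K2 = -1 - (2 / (M - 1)**3) * (M * (2 * M - 1) + B / D)
    return R1, R2, K1, K2

# Illustrative example: 16 m effective focal length, 2 m focal-length primary,
# focus 1 m behind the primary (assumed numbers).
R1, R2, K1, K2 = rc_prescription(F=16.0, f1=2.0, b=1.0)
print(f"R1 = {R1:.3f} m, R2 = {R2:.3f} m")
print(f"K1 = {K1:.4f}, K2 = {K2:.4f} (both below -1, i.e. hyperbolic)")

A quick sanity check on the output: the computed R1 equals -2·f1, as it must for a primary of focal length f1, and both conic constants come out below -1, matching the statement that both mirrors are hyperbolic.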
Technology
Telescope
null
203359
https://en.wikipedia.org/wiki/Magnetic%20resonance
Magnetic resonance
Magnetic resonance is a process by which a physical excitation (resonance) is set up via magnetism. This process was used to develop magnetic resonance imaging (MRI) and nuclear magnetic resonance spectroscopy (NMRS) technology. It is also being used to develop nuclear magnetic resonance quantum computers. History The first observation of electron-spin resonance was in 1944 by Y. K. Zavoisky, a Soviet physicist then teaching at Kazan State University (now Kazan Federal University). Nuclear magnetic resonance was first observed in 1946 in the US by a team led by Felix Bloch and, at the same time, by a separate team led by Edward Mills Purcell; the two leaders would later share the 1952 Nobel Prize in Physics. Resonant and non-resonant methods A natural way to measure the separation between two energy levels is to find a measurable quantity defined by this separation and measure it. However, the precision of this method is limited by measurement precision and thus may be poor. Alternatively, we can set up an experiment in which the system's behavior depends on the level separation. If we apply an external field of controlled frequency, we can measure the level separation by noting the frequency at which a qualitative change happens: at this frequency, the transition between the two states has a high probability. An example of such an experiment is a variation of the Stern–Gerlach experiment, in which the magnetic moment is measured by finding the resonance frequency for the transition between two spin states.
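To make the resonant method concrete, the sketch below converts a magnetic field strength into the frequency at which transitions between the two proton spin states become likely, using ΔE = hν with ν = (γ/2π)·B. The proton gyromagnetic ratio and Planck constant are standard values; the 1.5 T field is simply an illustrative choice (a typical clinical MRI field strength).

H_PLANCK = 6.62607015e-34         # Planck constant, J s
GAMMA_PROTON_HZ_PER_T = 42.577e6  # proton gyromagnetic ratio / (2*pi), Hz per tesla

B_field = 1.5  # tesla, assumed illustrative field strength

# Resonance frequency at which transitions between the two spin states occur,
# and the corresponding energy-level separation.
nu = GAMMA_PROTON_HZ_PER_T * B_field  # Hz
delta_E = H_PLANCK * nu               # joules

print(f"resonance frequency: {nu / 1e6:.1f} MHz")
print(f"level separation:    {delta_E:.3e} J")

Scanning the applied frequency through this value and watching for the qualitative change (strong absorption) is exactly the resonant measurement strategy described above; measuring such a tiny energy splitting directly would be far less precise.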
Physical sciences
Nuclear physics
Physics
203545
https://en.wikipedia.org/wiki/Nuthatch
Nuthatch
The nuthatches constitute a genus, Sitta, of small passerine birds belonging to the family Sittidae. Characterised by large heads, short tails, and powerful bills and feet, nuthatches advertise their territory using loud, simple songs. Most species exhibit grey or bluish upper parts and a black eye stripe. Most nuthatches breed in the temperate or montane woodlands of the Northern Hemisphere, although two species have adapted to rocky habitats in the warmer and drier regions of Eurasia. However, the greatest diversity is in Southern Asia, and similarities between the species have made it difficult to identify distinct species. All members of this genus nest in holes or crevices. Most species are non-migratory and live in their habitat year-round, although the North American red-breasted nuthatch migrates to warmer regions during the winter. A few nuthatch species have restricted ranges and face threats from deforestation. Nuthatches are omnivorous, eating mostly insects, nuts, and seeds. They forage for insects hidden in or under bark by climbing along tree trunks and branches, sometimes upside-down. They forage within their territories when breeding, but they may join mixed feeding flocks at other times. Their habit of wedging a large food item in a crevice and then hacking at it with their strong bills gives this group its English name. Taxonomy The nuthatch family, Sittidae, was described by René-Primevère Lesson in 1828. Sometimes the wallcreeper (Tichodroma muraria), which is restricted to the mountains of southern Eurasia, is placed in the same family as the nuthatches, but in a separate subfamily "Tichodromadinae", in which case the nuthatches are classified in the subfamily "Sittinae". However, the wallcreeper is more often placed in a separate family, the Tichodromadidae. The wallcreeper is intermediate in its morphology between the nuthatches and the treecreepers, but its appearance, the texture of its plumage, and the shape and pattern of its tail suggest that it is closer to the former taxon. The nuthatch vanga of Madagascar (formerly known as the coral-billed nuthatch) and the sittellas from Australia and New Guinea were once placed in the nuthatch family because of similarities in appearance and lifestyle, but they are not closely related. The resemblances arose via convergent evolution to fill an ecological niche. The nuthatches' closest relatives, other than the wallcreeper, are the treecreepers, and the two (or three) families are sometimes placed in a larger grouping with the wrens and gnatcatchers. This superfamily, the Certhioidea, is based on phylogenetic studies using mitochondrial and nuclear DNA, and was created to cover a clade of (four or) five families removed from a larger grouping of passerine birds, the Sylvioidea. Genus name The nuthatches are all in the genus Sitta Linnaeus, 1758, a name derived from σίττη (sittē), the Ancient Greek word for this bird. The English term nuthatch refers to the propensity of some species to wedge a large insect or seed in a crack and hack at it with their strong bills. Species boundaries Species boundaries in the nuthatches are difficult to define. The red-breasted nuthatch, Corsican nuthatch and Chinese nuthatch have breeding ranges separated by thousands of kilometres, but are similar in habitat preference, appearance and song. They were formerly considered to be one species, but are now normally split into three and comprise a superspecies along with Krüper's nuthatch and the Algerian nuthatch.
Unusually for nuthatches, all five species excavate their own nests. The Eurasian, chestnut-vented, Kashmir and chestnut-bellied nuthatches form another superspecies and replace each other geographically across Asia. They are currently considered to be four separate species, but the south Asian forms were once believed to be a subspecies of the Eurasian nuthatch. A recent change in this taxonomy is a split of the chestnut-bellied nuthatch into three species, namely the Indian nuthatch, Sitta castanea, found south of the Ganges, the Burmese nuthatch, Sitta neglecta, found in southeast Asia, and the chestnut-bellied nuthatch sensu stricto, S. cinnamoventris, which occurs in the Himalayas. Mitochondrial DNA studies have demonstrated that the white-breasted northern subspecies of Eurasian nuthatch, S. (europea) arctica, is distinctive, and also a possible candidate for full species status. This split has been accepted by the British Ornithologists' Union. A 2006 review of Asian nuthatches suggested that there are still unresolved problems in nuthatch taxonomy and proposed splitting the genus Sitta. This suggestion would move the red- and yellow-billed south Asian species (velvet-fronted, yellow-billed and sulphur-billed nuthatches) to a new genus, create a third genus for the blue nuthatch, and possibly a fourth for the beautiful nuthatch. The fossil record for this group appears to be restricted to a foot bone of an early Miocene bird from Bavaria which has been identified as an extinct representative of the climbing Certhioidea, a clade comprising the treecreepers, wallcreeper and nuthatches. It has been described as Certhiops rummeli. Two fossil species have been described in the genus Sitta: S. cuvieri Gervais, 1852 and S. senogalliensis Portis, 1888, but they probably do not belong to nuthatches. Description Nuthatches are compact birds with short legs, compressed wings, and square 12-feathered tails. They have long, sturdy, pointed bills and strong toes with long claws. Nuthatches have blue-grey backs (violet-blue in some Asian species, which also have red or yellow bills) and white underparts, which are variably tinted with buff, orange, rufous or lilac. Although head markings vary between species, a long black eye stripe, with contrasting white supercilium, dark forehead and blackish cap is common. The sexes look similar, but may differ in underpart colouration, especially on the rear flanks and under the tail. Juveniles and first-year birds can be almost indistinguishable from adults. The sizes of nuthatches vary, from the large giant nuthatch, at and , to the small brown-headed nuthatch and the pygmy nuthatch, both around in length and about . Nuthatches are very vocal, using an assortment of whistles, trills and calls. Their breeding songs tend to be simple and often identical to their contact calls but longer in duration. The red-breasted nuthatch, which coexists with the black-capped chickadee throughout much of its range, is able to understand the latter species' calls. The chickadee has subtle call variations that communicate information about the size and risk of potential predators. Many birds recognise the simple alarm calls produced by other species, but the red-breasted nuthatch is able to interpret the chickadees' detailed variations and to respond appropriately. Species The species diversity for Sittidae is greatest in southern Asia (possibly the original home of this family), where about 15 species occur, but it has representatives across much of the Northern Hemisphere. 
The currently recognised nuthatch species are tabulated below. Distribution and habitat Members of the nuthatch family live in most of North America and Europe and throughout Asia down to the Wallace Line. Nuthatches are sparsely represented in Africa; one species lives in a small area of northeastern Algeria and a population of the Eurasian nuthatch subspecies, S. e. hispaniensis, lives in the mountains of Morocco. Most species are resident year-round. The only significant migrant is the red-breasted nuthatch, which winters widely across North America, deserting the northernmost parts of its breeding range in Canada; it has been recorded as a vagrant in Bermuda, Iceland and England. Most nuthatches are woodland birds and the majority are found in coniferous or other evergreen forests, although each species has a preference for a particular tree type. The strength of the association varies from the Corsican nuthatch, which is closely linked with Corsican pine, to the catholic habitat of the Eurasian nuthatch, which prefers deciduous or mixed woods but breeds in coniferous forests in the north of its extensive range. However, the two species of rock nuthatches are not strongly tied to woodlands: they breed on rocky slopes or cliffs, although both move into wooded areas when not breeding. In parts of Asia where several species occur in the same geographic region, there is often an altitudinal separation in their preferred habitats. Nuthatches prefer a fairly temperate climate; northern species live near sea level whereas those further south are found in cooler highland habitats. Eurasian and red-breasted nuthatches are lowland birds in the north of their extensive ranges, but breed in the mountains further south; for example, the Eurasian nuthatch, which breeds where the July temperature range is , is found near sea level in Northern Europe, but between altitude in Morocco. The velvet-fronted nuthatch is the sole member of the family which prefers tropical lowland forests. Behaviour Nesting, breeding and survival All nuthatches nest in cavities; except for the two species of rock nuthatches, all use tree holes, making a simple cup lined with soft materials on which to rest eggs. In some species the lining consists of small woody objects such as bark flakes and seed husks, while in others it includes the moss, grass, hair and feathers typical of passerine birds. Members of the red-breasted nuthatch superspecies excavate their own tree holes, although most other nuthatches use natural holes or old woodpecker nests. Several species reduce the size of the entrance hole and seal up cracks with mud. The red-breasted nuthatch makes the nest secure by daubing sticky conifer resin globules around the entrance, the male applying the resin outside and the female inside. The resin may deter predators or competitors (the resident birds avoid the resin by diving straight through the entrance hole). The white-breasted nuthatch smears blister beetles around the entrance to its nest, and it has been suggested that the unpleasant smell from the crushed insects deters squirrels, its chief competitor for natural tree cavities. The western rock nuthatch builds an elaborate flask-shaped nest from mud, dung and hair or feathers, and decorates the nest's exterior and nearby crevices with feathers and insect wings. The nests are located in rock crevices, in caves, under cliff overhangs or on buildings. The eastern rock nuthatch builds a similar but less complex structure across the entrance to a cavity. 
Its nest can be quite small but may weigh up to 32 kg (70 lb). This species will also nest in river banks or tree holes and will enlarge its nest hole if the cavity is too small. Nuthatches are monogamous. The female produces eggs that are white with red or yellow markings; the clutch size varies, tending to be larger for northern species. The eggs are incubated for 12 to 18 days by the female alone, or by both parents, depending on the species. The altricial (naked and helpless) chicks take between 21 and 27 days to fledge. Both parents feed the young, and in the case of two American species, brown-headed and pygmy, helper males from the previous brood may assist the parents in feeding. For the few species on which data are available, the average nuthatch lifespan in the wild is between 2 and 3.5 years, although ages of up to 10 years have been recorded. The Eurasian nuthatch has an adult annual survival rate of 53% and the male Corsican nuthatch 61.6%. Nuthatches and other small woodland birds share the same predators: accipiters, owls, squirrels and woodpeckers. An American study showed that nuthatch responses to predators may be linked to reproductive strategies. It measured the willingness of males of two species to feed incubating females on the nest when presented with models of a sharp-shinned hawk, which hunts adult nuthatches, or a house wren, which destroys eggs. The white-breasted nuthatch is shorter-lived than the red-breasted nuthatch, but has more young, and was found to respond more strongly to the egg predator, whereas the red-breasted showed greater concern with the hawk. This supports the theory that longer-lived species benefit from adult survival and future breeding opportunities while birds with shorter life spans place more value on the survival of their larger broods. Cold can be a problem for small birds that do not migrate. Communal roosting in tight huddles can help conserve heat and several nuthatch species employ it; up to 170 pygmy nuthatches have been seen in a single roost. The pygmy nuthatch is able to lower its body temperature when roosting, conserving energy through hypothermia and a lowered metabolic rate. Feeding Nuthatches forage along tree trunks and branches and are members of the same feeding guild as woodpeckers. Unlike woodpeckers and treecreepers, however, they do not use their tails for additional support, relying instead on their strong legs and feet to progress in jerky hops. They are able to descend head-first and hang upside-down beneath twigs and branches. Krüper's nuthatch can even stretch downward from an upside-down position to drink water from leaves without touching the ground. Rock nuthatches forage with a similar technique to the woodland species, but seek food on rock faces and sometimes buildings. When breeding, a pair of nuthatches will only feed within their territory, but at other times will associate with passing tits or join mixed-species feeding flocks. Insects and other invertebrates are a major portion of the nuthatch diet, especially during the breeding season, when they rely almost exclusively on live prey, but most species also eat seeds during the winter, when invertebrates are less readily available. Larger food items, such as big insects, snails, acorns or seeds may be wedged into cracks and pounded with the bird's strong bill.
Unusually for a bird, the brown-headed nuthatch uses a piece of tree bark as a lever to pry up other bark flakes to look for food; the bark tool may then be carried from tree to tree or used to cover a seed cache. All nuthatches appear to store food, especially seeds, in tree crevices, in the ground, under small stones, or behind bark flakes, and these caches are remembered for as long as 30 days. Similarly, the rock nuthatches wedge snails into suitable crevices for consumption in times of need. European nuthatches have been found to avoid using their caches during benign conditions in order to save them for harsher times. Conservation status Some nuthatches, such as the Eurasian nuthatch and the North American species, have extensive ranges and large populations, and few conservation problems, although locally they may be affected by woodland fragmentation. In contrast, some of the more restricted species face severe pressures. The endangered white-browed nuthatch is found only in the Mount Victoria area of Burma, where forest up to above sea level has been almost totally cleared and habitat between is heavily degraded. Nearly 12,000 people live in the Natma Taung national park which includes Mount Victoria, and their fires and traps add to the pressure on the nuthatch. The population of the white-browed nuthatch, estimated at only a few thousand, is decreasing, and no conservation measures are in place. The Algerian nuthatch is found in only four areas of Algeria, and it is possible that the total population does not exceed 1,000 birds. Fire, erosion, and grazing and disturbance by livestock have reduced the quality of the habitat, despite its location in the Taza National Park. Deforestation has also caused population declines for the vulnerable Yunnan and yellow-billed nuthatches. The Yunnan nuthatch can cope with some tree loss, since it prefers open pine woodland, but although still locally common, it has disappeared from several of the areas in which it was recorded in the early 20th century. The threat to yellow-billed is particularly acute on Hainan, where more than 70% of the woodland has been lost in the past 50 years due to shifting cultivation and the use of wood for fuel during Chinese government re-settlement programmes. Krüper's nuthatch is threatened by urbanisation and development in and around mature coniferous forests, particularly in the Mediterranean coastal areas where the species was once numerous. A law promoting tourism came into force in Turkey in 2003, further exacerbating the threats to their habitat. The law reduced bureaucracy and made it easier for developers to build tourism facilities and summer houses in the coastal zone where woodland loss is a growing problem for the nuthatch.
Biology and health sciences
Passerida
Animals
203847
https://en.wikipedia.org/wiki/Buteo
Buteo
Buteo is a genus of medium to fairly large, wide-ranging raptors with a robust body and broad wings. In the Old World, members of this genus are called "buzzards", but "hawk" is used in the New World (Etymology: Buteo is the Latin name of the common buzzard). As both terms are ambiguous, buteo is sometimes used instead, for example, by the Peregrine Fund. Characteristics Buteos are fairly large birds. Total length can vary from and wingspan can range from . The lightest known species is the roadside hawk, at an average of although the lesser known white-rumped and Ridgway's hawks are similarly small in average wingspan around , and average length around in standard measurements. The largest species in length and wingspan is the upland buzzard, which averages around in length and in wingspan. The upland is rivaled in weight and outsized in foot measurements and bill size by the ferruginous hawk. In both of these largest buteos, adults typically weigh over , and in mature females, can exceed a mass of . All buteos may be noted for their broad wings and sturdy builds. They frequently soar on thermals at midday over openings and are most frequently seen while doing this. The flight style varies based on the body type and wing shape and surface size. Some long-winged species, such as rough-legged buzzards and Swainson's hawks, have a floppy, buoyant flight style, while others, such as red-tailed hawks and rufous-tailed hawks, tend to be relatively shorter-winged, soaring more slowly and flying with more labored, deeper flaps. Most small and some medium-sized species, from the roadside hawk to the red-shouldered hawk, often fly with an alternation of soaring and flapping, thus may be reminiscent of an Accipiter hawk in flight, but are still relatively larger-winged, shorter-tailed, and soar more extensively in open areas than Accipiter species do. Buteos inhabit a wide range of habitats across the world, but tend to prefer some access to both clearings, which provide ideal hunting grounds, and trees, which can provide nesting locations and security. Diet All Buteo species are to some extent opportunistic when it comes to hunting, and prey on almost any type of small animal as it becomes available to them. However, most have a strong preference for small mammals, mostly rodents. Rodents of almost every family in the world are somewhere preyed upon by Buteo species. Since many rodents are primarily nocturnal, most buteos mainly hunt rodents that may be partially active during the day, which can include squirrels and chipmunks, voles, and gerbils. More nocturnal varieties are hunted opportunistically and may be caught in the first or last few hours of light. Other smallish mammals, such as shrews, moles, pikas, bats, and weasels, tend to be minor secondary prey, although can locally be significant for individual species. Larger mammals, such as rabbits, hares, and marmots, including even adult specimens weighing as much as , may be hunted by the heaviest and strongest species, such as ferruginous, red-tailed and white-tailed hawks. Birds are taken occasionally, as well. Small to mid-sized birds, i.e. passerines, woodpeckers, waterfowl, pigeons, and gamebirds, are most often taken. However, since the adults of most smaller birds can successfully outmaneuver and evade buteos in flight, much avian prey is taken in the nestling or fledgling stages or adult birds if they are previously injured. 
An exception is the short-tailed hawk, which is a relatively small and agile species and is locally a small bird-hunting specialist. The Hawaiian hawk, which evolved on an isolated group of islands with no terrestrial mammals, was also initially a bird specialist, although today it preys mainly on introduced rodents. Other prey may include snakes, lizards, frogs, salamanders, fish, and even various invertebrates, especially beetles. In several Buteo species found in more tropical regions, such as the roadside hawk or grey-lined hawk, reptiles and amphibians may come to locally dominate the diet. Swainson's hawk, despite its somewhat large size, is something of an exceptional insect-feeding specialist and may rely almost fully on crickets and dragonflies when wintering in southern South America. Carrion is eaten occasionally by most species, but is almost always secondary to live prey. The importance of carrion in the Old World "buzzard" species is relatively higher since these often seem slower and less active predators than their equivalents in the Americas. Most Buteo species seem to prefer to ambush prey by pouncing down to the ground directly from a perch. In a secondary approach, many spot prey from a great distance while soaring and circle down to the ground to snatch it. Reproduction Buteos are typical accipitrids in most of their breeding behaviors. They all build their own nests, which are often constructed out of sticks and other materials they can carry. Nests are generally located in trees, which are generally selected based on large sizes and inaccessibility to climbing predators rather than by tree species. Most Buteos breed in stable pairs, which may mate for life or at least for several years even in migratory species in which pairs part ways during winter. Generally from 2 to 4 eggs are laid by the female and are mostly incubated by her, while the male mate provides food. Once the eggs hatch, the survival of the young is dependent upon how abundant appropriate food is and the security of the nesting location from potential nest predators and other (often human-induced) disturbances. As in many raptors, the nestlings hatch at intervals of a day or two and the older, strong siblings tend to have the best chances of survival, with the younger siblings often starving or being handled aggressively (and even killed) by their older siblings. The male generally does most of the hunting and the female broods, but the male may also do some brooding while the female hunts as well. Once the fledgling stage is reached, the female takes over much of the hunting. After a stage averaging a couple of weeks, the fledglings take the adults' increasing indifference to feeding them or occasional hostile behavior towards them as a cue to disperse on their own. Generally, young Buteos tend to disperse several miles away from their nesting grounds and wander for one to two years until they can court a mate and establish their own breeding range. Distribution The Buteo hawks include many of the most widely distributed, most common, and best-known raptors in the world. Examples include the red-tailed hawk of North America and the common buzzard of Eurasia. Most Northern Hemisphere species are at least partially migratory. In North America, species such as broad-winged hawks and Swainson's hawks are known for their huge numbers (often called "kettles") while passing over major migratory flyways in the fall. Up to tens of thousands of these Buteos can be seen each day during the peak of their migration.
Any of the previously mentioned common Buteo species may have total populations that exceed a million individuals. On the other hand, the Socotra buzzard and Galapagos hawks are considered vulnerable to extinction per the IUCN. The Ridgway's hawk is even more direly threatened and is considered Critically Endangered. These insular forms are threatened primarily by habitat destruction, prey reductions and poisoning. The latter reason is considered the main cause of a noted decline in the population of the more abundant Swainson's hawk, due to insecticides used in southern South America, which the hawks ingest through the crickets they eat, often with fatal results. Taxonomy and systematics The genus Buteo was erected by the French naturalist Bernard Germain de Lacépède in 1799 by tautonymy with the specific name of the common buzzard Falco buteo which had been introduced by Carl Linnaeus in 1758. Extant species in taxonomic order Fossil record A number of fossil species have been discovered, mainly in North America. Some are placed here primarily based on considerations of biogeography, Buteo being somewhat hard to distinguish from Geranoaetus based on osteology alone: †Buteo dondasi (Late Pliocene of Buenos Aires, Argentina) †Buteo fluviaticus (Brule Middle? Oligocene of Weld County, US) – possibly same as B. grangeri †Buteo grangeri (Brule Middle? Oligocene of Washabaugh County, South Dakota, US) †Buteo antecursor (Brule Late? Oligocene) †?Buteo sp. (Brule Late Oligocene of Washington County, US) †Buteo ales (Agate Fossil Beds Early Miocene of Sioux County, US) – formerly in Geranospiza or Geranoaetus †Buteo typhoius (Olcott Early? – Snake Creek Late Miocene of Sioux County, US) †Buteo pusillus (Middle Miocene of Grive-Saint-Alban, France) †Buteo sp. (Middle Miocene of Grive-Saint-Alban, France – Early Pleistocene of Bacton, England) †Buteo contortus (Snake Creek Late Miocene of Sioux County, US) – formerly in Geranoaetus †Buteo spassovi (Late Miocene of Chadžidimovo, Bulgaria) †Buteo conterminus (Snake Creek Late Miocene/Early Pliocene of Sioux County, US) – formerly in Geranoaetus †Buteo sp. (Late Miocene/Early Pliocene of Lee Creek Mine, North Carolina, US) †Buteo sanya (Late Pleistocene of Luobidang Cave, Hainan, China) †Buteo chimborazoensis (Late Pleistocene of Ecuador) †Buteo sanfelipensis (Late Pleistocene, Cuba) An unidentifiable accipitrid that occurred on Ibiza in the Late Pliocene/Early Pleistocene may also have been a Buteo. If this is so, the bird can be expected to aid in untangling the complicated evolutionary history of the common buzzard group. The prehistoric species "Aquila" danana, Buteogallus fragilis (Fragile eagle), and Spizaetus grinnelli were at one time also placed in Buteo.
Biology and health sciences
Accipitrimorphae
Animals
203859
https://en.wikipedia.org/wiki/Wallcreeper
Wallcreeper
The wallcreeper (Tichodroma muraria) is a small passerine bird found throughout the high mountains of the Palearctic from southern Europe to central China. It is the only extant member of both the genus Tichodroma and the family Tichodromidae. Taxonomy and systematics In the past, there was some disagreement among ornithologists as to where the wallcreeper belongs in the taxonomic order. Initially, Linnaeus included it in the treecreepers as Certhia muraria, and even when given a separate genus of its own, Tichodroma, by Johann Karl Wilhelm Illiger in 1811, it was long included in the treecreeper family Certhiidae. More recently, it was placed in its own monotypic family, Tichodromadidae, by Karel Voous in the influential List of Recent Holarctic Bird Species, while other authorities such as Charles Vaurie placed it in a monotypic subfamily, Tichodromadinae, within the nuthatch family Sittidae. In either case, it is closely related to the nuthatches; a 2016 phylogenetic study of members in the superfamily Certhioidea suggests it is a sister species to the Sittidae. At least one other species of wallcreeper is known from the fossil record, Tichodroma capeki (Late Miocene of Polgardi, Hungary). The genus name Tichodroma comes from the Ancient Greek teikhos, meaning "wall", and dromos, meaning "runner". The specific name muraria is Medieval Latin for "of walls", from Latin murus, "wall". The wallcreeper is also sometimes known as the red-winged wallcreeper. Subspecies Two subspecies are accepted: European wallcreeper (T. m. muraria) - (Linnaeus, 1766): Found from southern and eastern Europe to the Caucasus and western Iran Asian wallcreeper (T. m. nepalensis) - Bonaparte, 1850: Originally described as a separate species. Found from Kazakhstan, Turkmenistan and eastern Iran to eastern China Description The wallcreeper is long, with a weight of . Its plumage is primarily blue-grey, with darker flight and tail feathers. In summer, the males have a black throat grading into the grey of the rest of the body, and females can have either a white throat or a small dark patch on the throat; in autumn and winter, both sexes have a white throat. Its most striking plumage features, though, are its extraordinary crimson wings with white spots. Largely hidden when the wings are folded, this bright colouring covers most of the covert feathers, and the basal half of the primaries and secondaries. The tail is short, black with a narrow white fringe. Juveniles closely resemble the winter plumage. The subspecies T. m. nepalensis is slightly darker than the nominate race. Vocalisations Though largely silent, both male and female wallcreepers sing, the females generally only while defending feeding territories in the winter. The song is a high-pitched, drawn-out whistle, with notes that alternately rise and fall. During the breeding season, the male sings while perched or climbing. Distribution and habitat A bird of high mountains, the wallcreeper breeds at elevations ranging between in Europe, between in the Tien Shan, and in the Himalaya. It is largely resident across its range, but moves to lower elevations in winter, when it is found on buildings and in quarries. In France it regularly and repeatedly winters on cathedrals and viaducts in Brittany and Normandy. Birds have wintered as far afield as England and the Netherlands, where one spent two consecutive winters between 1989 and 1991 at the Vrije Universiteit in Amsterdam.
The species is resident across much of the Himalayas, ranging across India, Nepal, Bhutan and parts of Tibet and also as a winter visitor in Bangladesh. Behaviour and ecology This species can be quite tame, but is often surprisingly difficult to see on mountain faces. While it may be confiding in the breeding and non-breeding seasons, and vagrant birds especially are extremely tame, they will still hide when they are aware of being watched, and will hesitate before entering the nest and even take roundabout routes towards the nest during prolonged observations. Wallcreepers are territorial, and pairs vigorously defend their breeding territory during the summer. During the winter the wallcreeper is solitary, with males and females defending individual feeding territories. The size of these feeding territories is hard to estimate but may comprise a single large quarry or rock massif; or, alternatively, a series of smaller quarries and rock faces. Wallcreepers may travel some distances from roosting sites to feeding territories. They have also been shown to exhibit site fidelity to winter feeding territories in consecutive years. Breeding The female wallcreeper builds a cup nest of grass and moss, sheltered deep in a rock crevice, hole or cave. The nest is lined with softer materials, often including feathers or wool, and typically has two entrances. The female usually lays 4–5 eggs, though clutches as small as three have been found. The white eggs measure 21 mm long, and are marked with a small number of black or reddish-brown speckles. Once her entire clutch has been laid, the female incubates the eggs for 19–20 days, until they hatch. During incubation, she is regularly fed by her mate. Young are altricial, which means they are blind, featherless and helpless at birth. Both parents feed the nestlings for a period of 28–30 days, until the young birds fledge. Each pair raises a single brood a year. Feeding The wallcreeper is an insectivore, feeding on terrestrial invertebrates, primarily insects and spiders, gleaned from rock faces. It sometimes also chases flying insects in short sallies from a rock wall perch. Feeding birds move across a cliff face in short flights and quick hops, often with their wings partially spread.
Biology and health sciences
Passerida
Animals
203896
https://en.wikipedia.org/wiki/Adobe%20Acrobat
Adobe Acrobat
Adobe Acrobat is a family of application software and web services developed by Adobe Inc. to view, create, manipulate, print and manage Portable Document Format (PDF) files. The family comprises Acrobat Reader (formerly Reader), Acrobat (formerly Exchange) and Acrobat.com. The basic Acrobat Reader, available for several desktop and mobile platforms, is freeware; it supports viewing, printing, scaling or resizing and annotating of PDF files. Additional "Premium" services are available on paid subscription. The commercial proprietary Acrobat, available for Microsoft Windows, macOS, and mobile, can also create, edit, convert, digitally sign, encrypt, export and publish PDF files. Acrobat.com complements the family with a variety of enterprise content management and file hosting services. Purpose The main function of Adobe Acrobat is creating, viewing, and editing PDF documents. It can import popular document and image formats and save them as PDF. It is also possible to import a scanner's output, a website, or the contents of the Windows clipboard. Because of the nature of the PDF, however, once a PDF document is created, its natural organization and flow cannot be meaningfully modified. In other words, Adobe Acrobat is able to modify the contents of paragraphs and images, but doing so does not repaginate the whole document to accommodate a longer or shorter document. Acrobat can crop PDF pages, change their order, manipulate hyperlinks, digitally sign a PDF file, add comments, redact certain parts of the PDF file, and ensure its adherence to such standards as PDF/A. History Adobe Acrobat was launched in 1993 and had to compete with other products and proprietary formats that aimed to create digital documents: Common Ground from No Hands Software Inc. Envoy from WordPerfect Corporation Folio Views from NextPage Replica from Farallon Computing WorldView from Interleaf DjVu from AT&T Laboratories Adobe has renamed the Acrobat products several times, in addition to merging, splitting and discontinuing them. Initially, the offered products were called Acrobat Reader, Acrobat Exchange and Acrobat Distiller. "Acrobat Exchange" soon became "Acrobat". Over time, "Acrobat Reader" became "Reader". Between versions 3 and 5, Acrobat did not have several editions. In 1999, the Acrobat.com service came into being and introduced several web services whose names started with "Acrobat", but eventually, "Acrobat.com" was downgraded from the name of the family of services to that of one of those services. Unlike most other Adobe products, such as members of Adobe Creative Suite family, the Acrobat products do not have icons that display two letters on a colored rectangle. Document Cloud In April 2015, Adobe introduced the "Document Cloud" branding (alongside its Creative Cloud) to signify its adoption of the cloud storage and the software as a service model. Programs under this branding received a "DC" suffix. In addition, "Reader" was renamed back to "Acrobat Reader". Following the introduction of Document Cloud, Acrobat.com was discontinued as their features were integrated into the desktop programs and mobile apps. The GUI had major changes with the introduction of Acrobat DC in 2015, which supports Windows 7 and later, and OS X 10.9 and later.
Version numbers are now identified by the last two digits of the year of major release, and the month and year is specified; the previous version was 12, but examples of the DC (Document Cloud) Acrobat product family versions are DC June 2016, version 15.016.20045, released 2 June 2016 and DC Classic January 2016, version 15.006.30119, released 12 January 2016. From DC 2015 the Acrobat family is available in two tracks, the original track, now named Classic, and the Continuous track. Updates for the Classic track are released quarterly, and do not include new features, whereas updates for the Continuous track are issued more frequently, and implemented silently and automatically. The last pre-DC version, Acrobat XI, was updated to version 11.0.23 (the final release) on November 14, 2017; support for it had ended a month earlier, on October 15, 2017. In September 2020, Adobe released "Liquid Mode", a feature that uses its Sensei AI to make documents easier to read on phones. Adobe Acrobat family products Current services Acrobat.com is the web version of Acrobat developed by Adobe to edit, create, manipulate, print and manage files in a PDF. It is currently available for users with a web browser and an Adobe ID only. Acrobat Distiller is a software application for converting documents from PostScript format to PDF. Acrobat Pro is the professional full version of Acrobat developed by Adobe to edit, create, manipulate, print and manage files in a PDF. It is currently available for Windows and macOS. Acrobat Reader is the freeware version of Acrobat developed by Adobe to view, create, fill, print and format files in a PDF. It is currently available for Windows, macOS, iOS, and Android. Acrobat Standard is the standard full version of Acrobat developed by Adobe to edit, create, manipulate, print and manage files in a PDF. It is currently available for Windows. Document Cloud is part of the Acrobat family developed by Adobe to edit, create, save online, print and format files in a PDF. It is currently available for users with a web browser and an Adobe ID only. Fill & Sign is part of the Acrobat family developed by Adobe to fill, sign, and manage files in a PDF. It is currently available for Windows, macOS, iOS, and Android. Scan is part of the Acrobat family developed by Adobe Inc. to scan, crop, and manage files in a PDF. It is currently available for iOS and Android. Sign (formerly EchoSign and eSign) is part of the Acrobat family developed by Adobe Inc. to fill, sign, and manage files in a PDF. It is currently available for iOS and Android. Discontinued services Acrobat Approval allows users to deploy electronic forms based on a PDF. Acrobat Business Tools is a discontinued component of the Acrobat family that was distributed by Adobe Systems with collaboration and document review features. Acrobat Capture is a document processing utility for Windows from Adobe Systems that converts a scan of any paper document into a PDF file with selectable text through OCR technology. Acrobat Distiller Server is a discontinued server-based utility that was developed by Adobe Systems to perform centralized high-volume conversion of PostScript documents to PDF formats for workgroups. Acrobat eBook Reader is a PDF-based e-book reader from Adobe Systems. Features present in Acrobat eBook Reader later appeared in Digital Editions. Acrobat Elements was a very basic version of the Acrobat family that was released by Adobe Systems.
Its key feature advantage over the free Acrobat Reader was the ability to create reliable PDF files from Microsoft Office applications. Acrobat InProduction is a pre-press tools suite for Acrobat released by Adobe in 2000 to handle color separation and pre-flighting of PDF files for printing. Acrobat Messenger is a document utility for Acrobat users that was released by Adobe Systems in 2000 to convert paper documents into PDF files that can be e-mailed, faxed, or shared online. Acrobat Reader Touch is a free PDF document viewer developed and released on December 11, 2012, by Adobe Systems for the Windows Touch user interface. FormsCentral was a web form filling server for users with Windows, macOS, or a web browser and an Adobe ID only. It was discontinued on July 28, 2015, and replaced with Experience Manager Forms. Send & Track (formerly SendNow and Send) was a service that lets you send files as links, track files you send to specific individuals, and get confirmation receipts when others view your file. It was completely discontinued as of July 11, 2018. Hidden helper tools Acrobat Synchronizer is a tool installed along with Acrobat versions. While running in the background, it maintains the accuracy of Acrobat files imported to Acrobat. RdrCEF (also known as Adobe Reader Cloud Extension Feature) is a tool bundled with Acrobat that runs a process that handles cloud connectivity features. Supported file formats The table below contains some of the supported file formats that can be opened or accessed in Adobe Acrobat. Internationalization and localization Adobe Acrobat is available in the following languages: Arabic, Chinese Simplified, Chinese Traditional, Czech, Danish, Dutch, English, Finnish, French, German, Greek, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Romanian, Russian, Spanish, Swedish, Turkish and Ukrainian. Arabic and Hebrew versions are available from WinSoft International, Adobe Systems' internationalization and localization partner. Before Adobe Acrobat DC, separate Arabic and Hebrew versions were developed specifically for these languages, which are normally written right-to-left. These versions include special TouchUp properties to manage digits, ligatures option and paragraph direction in right-to-left Middle Eastern scripts such as Arabic, Hebrew, and Persian, as well as standard left-to-right Indian scripts such as Devanagari and Gujarati. The Web Capture feature can convert single web pages or entire web sites into PDF files, while preserving the content's original text encoding. Acrobat can also copy Arabic and Hebrew text to the system clipboard in its original encoding; if the target application is also compatible with the text encoding, then the text will appear in the correct script. Security A comprehensive list of security bulletins for most Adobe products and related versions is published on their Security bulletins and advisories page and in other related venues. In particular, the detailed history of security updates for all versions of Adobe Acrobat has been made public. From Version 3.02 onwards, Acrobat Reader has included support for JavaScript. This functionality allows a PDF document creator to include code which executes when the document is read. Malicious PDF files that attempt to attack security vulnerabilities can be attached to links on web pages or distributed as email attachments. 
While JavaScript is designed without direct access to the file system to make it "safe", vulnerabilities have been reported, including abuse of the JavaScript feature in Acrobat programs to distribute malicious code. Adobe applications had already become the most popular client-software targets for attackers during the last quarter of 2009. McAfee predicted that Adobe software, especially Reader and Flash, would be the primary target for software attacks in the year 2010. September 2006 warning On September 13, 2006, David Kierznowski provided sample PDF files illustrating JavaScript vulnerabilities. Since at least version 6, JavaScript can be disabled using the preferences menu, and embedded URLs that are launched are intercepted by a security warning dialog box that lets the user allow or block the website. February 2009 warning On February 19, 2009, Adobe released a Security Bulletin announcing JavaScript vulnerabilities in Adobe Reader and Acrobat versions 9 and earlier. As a workaround for this issue, US-CERT recommended disabling JavaScript in the affected Adobe products, canceling integration with the Windows shell and web browsers (with an extended de-integration for Internet Explorer), deactivating Adobe indexing services, and avoiding all PDF files from external sources. February 2013 warning Adobe identified critical vulnerabilities in Adobe Reader and Acrobat XI (11.0.01 and earlier) and in 9.x versions (9.5.3 and earlier) for Windows and Macintosh. These vulnerabilities could cause the application to crash and potentially allow an attacker to take control of the affected system. There have been reports of these vulnerabilities being exploited to trick Windows users into clicking on a malicious PDF file delivered in an email message. Adobe recommended users update their product installations. January 2016 warning Adobe has released security updates for Adobe Acrobat and Reader for Windows and Macintosh. These updates address critical vulnerabilities that could potentially allow an attacker to take control of the affected system.
Technology
Office and data management
null
203921
https://en.wikipedia.org/wiki/Crusher
Crusher
A crusher is a machine designed to reduce large rocks into smaller rocks, gravel, sand or rock dust. Crushers may be used to reduce the size, or change the form, of waste materials so they can be more easily disposed of or recycled, or to reduce the size of a solid mix of raw materials (as in rock ore), so that pieces of different composition can be differentiated. Crushing is the process of transferring a force amplified by mechanical advantage through a material made of molecules that bond together more strongly, and resist deformation more, than those in the material being crushed do. Crushing devices hold material between two parallel or tangent solid surfaces, and apply sufficient force to bring the surfaces together to generate enough energy within the material being crushed so that its molecules separate from (fracturing), or change alignment in relation to (deformation), each other. The earliest crushers were hand-held stones, used against a stone anvil, where the weight of the stone provided a boost to muscle power. Querns and mortars are types of these crushing devices. Background history In industry, crushers are machines which use a metal surface to break or compress materials into small fractional chunks or denser masses. Throughout most of industrial history, the greater part of the crushing and mining process was carried out by muscle power, with force concentrated in the tip of a miner's pick or a sledgehammer-driven drill bit. Before explosives came into widespread use in bulk mining in the mid-nineteenth century, most initial ore crushing and sizing was done by hand and hammers at the mine, or by water-powered trip hammers in the small charcoal-fired smithies and ironworks typical of the Renaissance through the early-to-middle Industrial Revolution. Crushing away from the mine face became widely necessary only after explosives, and later early powerful steam shovels, began producing large chunks of material; such chunks had previously been reduced by hammering in the mine before being loaded into sacks for the trip to the surface, and their growing size eventually also led to rails and mine railways transporting bulk aggregations. The earliest of these crushing operations were in the foundries, but as coal took hold the larger operations became the coal breakers that fueled industrial growth from the first decade of the 1600s, until breakers were replaced in the 1970s, and on through the fuel needs of the present day. The gradual coming of that era, and the displacement of cottage-industry economies, was itself accelerated first by the utility of wrought and cast iron as desired materials, which gave impetus to larger operations, and then, in the late sixteenth century, by the increasing scarcity of woodland for charcoal production. Charcoal was needed to make the newfangled window glass that, along with the chimney, had become 'all the rage' among the growing and increasingly affluent middle class of the sixteenth and seventeenth centuries, and, as always, to smelt metals, especially to produce the ever larger amounts of brass and bronze, pig iron, cast iron and wrought iron demanded by the new consumer classes. Other metallurgical developments, such as silver and gold mining, mirrored the practices and developments of the bulk material handling methods and technologies feeding the burgeoning appetite for more and more iron and glass, both of which were rare in personal possessions until the 1700s.
Pressure on timber only worsened when the English figured out how to cast the more economical iron cannon (1547), following on their feat of becoming the armorers of the European continent's powers as leading producers of brass and bronze guns; eventually, various acts of Parliament gradually banned or restricted the further cutting of trees for charcoal in larger and larger regions of the United Kingdom. In 1611, a consortium led by courtier Edward Zouch was granted a patent for the reverberatory furnace, a furnace using coal rather than precious national timber reserves, which was immediately employed in glass making. Sir Robert Mansell, an early politically connected and wealthy robber-baron figure, bought his way into the fledgling furnace company and wrested control of it, and by 1615 managed to have James I issue a proclamation forbidding the use of wood to produce glass, giving his family's extensive coal holdings a monopoly on both the source and the means of production for nearly half a century. A century later, Abraham Darby relocated to Bristol, where he established a growing brass and bronze industry by importing Dutch workers and drawing on Dutch techniques. Both materials were considered superior to iron for cannon and machinery, as their behaviour was better understood. But Darby would change the world in several key ways. Where the Dutch had failed in casting iron, one of Darby's apprentices, John Thomas, succeeded in 1707 and, as Burke put it, "had given England the key to the Industrial Revolution". At the time, mines and foundries were virtually all small enterprises, except for the tin mines (driven by the price and utility of brass), and materials came out of the mines already hammered small by legions of miners who had to stuff their work into carry sacks to be slung on pack animals. Concurrently, mines needed drainage, which resulted in Savery's and Newcomen's early steam-driven pumping systems. The deeper the mines went, the greater the demand for better pumps, for the iron to build them, and for the coal to smelt that iron, each feeding demand for the others. Seeing ahead clearly, Darby sold off his brass business interests and relocated to Coalbrookdale, with its plentiful coal mines, water power and nearby ore supplies. Over that decade his foundries developed iron casting technologies, and cast iron began to supplant other metals in many applications. He adopted coking of his fuel by copying brewers' practices. In 1822, the pumping industry's need for larger cylinders met Darby's ability to melt sufficient quantities of pig iron to cast large, inexpensive iron cylinders instead of costly brass ones, reducing the cost of cylinders by nine-tenths. With gunpowder increasingly applied to mining, rock chunks from the mining face became much larger, and blast-dependent mining itself came to depend upon an organized group of workers, not just an individual swinging a pick. Economies of scale gradually infused industrial enterprises, while transport became a key bottleneck as the volume of material moved continued to increase with demand. This spurred numerous canal projects and inspired the laying of first wooden, then iron-protected rails, with draft animals pulling loads in an emerging economy dependent on bulk goods transportation.
In the coal industry, which grew as coal became the preferred fuel for smelting ores, crushing and preparation (cleaning) was performed for over a hundred years in coal breakers, massive noisy buildings full of conveyors, belt-powered trip-hammer crushing stages and giant metal grading/sorting grates. Like mine pumps, the internal conveyors and trip-hammers were housed within these 7- to 11-storey buildings. Industrial use Mining operations use crushers, commonly classified by the degree to which they fragment the starting material, with primary and secondary crushers handling coarse materials, and tertiary and quaternary crushers reducing ore particles to finer gradations. Each crusher is designed to work with a certain maximum size of raw material, and often delivers its output to a screening machine which sorts and directs the product for further processing. As well as size, machines designed for each of these stages must take into account the basic methods of material size reduction: impact, compression, attrition, and cutting. Depending on the material properties and the desired outcome, some methods of crushing, and thus some machine designs, may be more appropriate for a given use case. Typically, crushing stages are followed by milling stages if the materials need to be further reduced. Additionally, rock breakers are typically located next to a crusher to reduce oversize material too large for the crusher. Crushers are used to reduce particle size enough so that the material can be processed into finer particles in a grinder. A typical processing line at a mine might consist of a crusher followed by a SAG mill followed by a ball mill. In this context, the SAG mill and ball mill are considered grinders rather than crushers. In operation, the raw material (of various sizes) is usually delivered to the primary crusher's hopper by dump trucks, excavators or wheeled front-end loaders. A feeder device such as an apron feeder, conveyor or vibrating grid controls the rate at which this material enters the crusher, and often contains a preliminary screening device which allows smaller material to bypass the crusher itself, thus improving efficiency. Primary crushing reduces the large pieces to a size which can be handled by the downstream machinery. Some crushers are mobile and can crush rocks as large as 1.5 meters (60 inches). Primarily used in-pit at the mine face, these units are able to move with the large infeed machines (mainly shovels) to increase the tonnage produced. In a mobile road operation, these crushed rocks are directly combined with concrete and asphalt, which are then deposited onto the road surface. This removes the need to haul oversized material to a stationary crusher and then back to the road surface. Additionally, small/mini crushers (in the approximately 6,000 to 60,000 pound range) are portable crushers typically moved from jobsite to jobsite. These small crushers may be track or wheel mounted and gas, diesel, or electrically powered, but are primarily of the jaw crusher variety. Types of crushers The following table describes typical uses of commonly used crushers: Jaw crusher A jaw crusher uses compressive force to break particles. This mechanical pressure is achieved by the crusher's two jaws, one of which is fixed while the other reciprocates.
A jaw or toggle crusher consists of a set of vertical jaws: one jaw is kept stationary and is called the fixed jaw, while the other, called the swing jaw, moves back and forth relative to it by a cam or pitman mechanism, acting like a class II lever or a nutcracker. The volume or cavity between the two jaws is called the crushing chamber. The movement of the swing jaw can be quite small, since complete crushing is not performed in one stroke. The inertia required to crush the material is provided by a flywheel that moves a shaft, creating an eccentric motion that causes the closing of the gap. Jaw crushers are heavy-duty machines and hence need to be robustly constructed. The outer frame is generally made of cast iron or steel. The jaws themselves are usually constructed from cast steel. They are fitted with replaceable liners, which are made of manganese steel or Ni-hard (a Ni-Cr alloyed cast iron). Jaw crushers are usually constructed in sections to ease transportation if they are to be taken underground for operations. Jaw crushers are classified on the basis of the position of the pivoting of the swing jaw: in the Blake crusher the swing jaw is fixed at the lower position; in the Dodge crusher the swing jaw is fixed at the upper position; and in the universal crusher the swing jaw is fixed at an intermediate position. The Blake crusher was patented by Eli Whitney Blake in 1858. The Blake type jaw crusher has a fixed feed area and a variable discharge area. Blake crushers are of two types: single toggle and double toggle jaw crushers. In the single toggle jaw crushers, the swing jaw is suspended on the eccentric shaft, which leads to a much more compact design than that of the double toggle jaw crusher. The swing jaw, suspended on the eccentric, undergoes two types of motion: a swing motion towards the fixed jaw due to the action of the toggle plate, and a vertical movement due to the rotation of the eccentric. These two motions, when combined, lead to an elliptical jaw motion. This motion is useful as it assists in pushing the particles through the crushing chamber. It gives single toggle jaw crushers a higher capacity, but it also results in higher wear of the crushing jaws. These types of jaw crushers are preferred for the crushing of softer particles. In the double toggle jaw crushers, the oscillating motion of the swing jaw is caused by the vertical motion of the pitman. The pitman moves up and down. The swing jaw closes, i.e., moves towards the fixed jaw, when the pitman moves upward, and opens during the downward motion of the pitman. This type is commonly used in mines due to its ability to crush tough and abrasive materials. In the Dodge type jaw crushers, the jaws are farther apart at the top than at the bottom, forming a tapered chute so that the material is crushed progressively smaller and smaller as it travels downward until it is small enough to escape from the bottom opening. The Dodge jaw crusher has a variable feed area and a fixed discharge area, which leads to choking of the crusher, and hence it is used only for laboratory purposes and not for heavy-duty operations. Gyratory crusher A gyratory crusher is similar in basic concept to a jaw crusher, consisting of a concave surface and a conical head; both surfaces are typically lined with manganese steel. The inner cone has a slight circular movement, but does not rotate; the movement is generated by an eccentric arrangement.
As with the jaw crusher, material travels downward between the two surfaces, being progressively crushed until it is small enough to fall out through the gap between them. A gyratory crusher is one of the main types of primary crushers in a mine or ore processing plant. Gyratory crushers are designated in size either by the gape and mantle diameter or by the size of the receiving opening. Gyratory crushers can be used for primary or secondary crushing. The crushing action is caused by the closing of the gap between the mantle line (movable), mounted on the central vertical spindle, and the concave liners (fixed), mounted on the main frame of the crusher. The gap is opened and closed by an eccentric on the bottom of the spindle that causes the central vertical spindle to gyrate. The vertical spindle is free to rotate around its own axis. The crusher illustrated is a short-shaft suspended spindle type, meaning that the main shaft is suspended at the top and that the eccentric is mounted above the gear. The short-shaft design has superseded the long-shaft design, in which the eccentric is mounted below the gear. Cone crusher Cone crushers can be divided into four types: the compound cone crusher, spring cone crusher, hydraulic cone crusher and gyratory crusher. By model, cone crushers include the vertical shaft cone (VSC) series cone crusher (compound cone crusher), the Symons cone crusher, the PY cone crusher, the single cylinder hydraulic cone crusher, the multi-cylinder hydraulic cone crusher, the gyratory crusher, and others. A cone crusher is similar in operation to a gyratory crusher, with less steepness in the crushing chamber and more of a parallel zone between crushing zones. A cone crusher breaks rock by squeezing the rock between an eccentrically gyrating spindle, which is covered by a wear-resistant mantle, and the enclosing concave hopper, covered by a manganese concave or a bowl liner. As rock enters the top of the cone crusher, it becomes wedged and squeezed between the mantle and the bowl liner or concave. Large pieces of ore are broken once, and then fall to a lower position (because they are now smaller) where they are broken again. This process continues until the pieces are small enough to fall through the narrow opening at the bottom of the crusher. A cone crusher is suitable for crushing a variety of mid-hard and above mid-hard ores and rocks. Its advantages include reliable construction, high productivity, good granularity and shape of the finished product, easy adjustment and lower operating costs. The spring release system of a cone crusher acts as overload protection, allowing tramp material to pass through the crushing chamber without damaging the crusher. Compound cone crusher The compound cone crusher (VSC series cone crusher) can crush materials of over medium hardness. It is mainly used in mining, the chemical industry, road and bridge construction, and building. The VSC series offers a choice of four crushing cavities (coarse, medium, fine and superfine). Its combination of crushing frequency and eccentricity is intended to give a higher degree of comminution and higher yield, and its enhanced laminating crushing effect on material particles gives the crushed product a more cubic shape.
Symons cone crusher The Symons cone crusher (spring cone crusher) can crush materials of above medium hardness and is widely used in metallurgy, building, hydropower, transportation and the chemical industry. When used with a jaw crusher, it can serve as a secondary, tertiary or quaternary crusher. Generally speaking, the standard type of Symons cone crusher is applied to medium crushing, the medium type to fine crushing, and the short head type to the finest crushing. As a cast steel construction is adopted, the machine has good rigidity and high strength. Single cylinder hydraulic cone crusher The single cylinder hydraulic cone crusher is mainly composed of a main frame, transmission device, eccentric shaft, bowl-shaped bearing, crushing cone, mantle, bowl liner, adjusting device, adjusting sleeve, hydraulic control system, hydraulic safety system, dust-proof ring and feed plate. It is applied in cement mills, mining, building construction, road and bridge construction, railway construction, metallurgy and some other industries. Multi-cylinder hydraulic cone crusher The multi-cylinder hydraulic cone crusher is mainly composed of a main frame, eccentric shaft, crushing cone, mantle, bowl liner, adjusting device, dust ring, transmission device, bowl-shaped bearing, adjusting sleeve, hydraulic control system and hydraulic safety system. The electric motor of the cone crusher drives the eccentric shaft in a periodic swinging movement about the shaft axis, so that the surface of the mantle alternately approaches and leaves the surface of the bowl liner and the material is crushed by squeezing and grinding inside the crushing chamber. The safety cylinder of the machine provides overload protection: when the crushing chamber suddenly becomes blocked, the hydraulic system lifts the supporting sleeve and static cone and automatically clears the blockage. Because blockages can be removed without disassembling the machine, the maintenance burden is greatly reduced and production efficiency is greatly improved. Impact crusher Impact crushers involve the use of impact rather than pressure to crush material. The material is contained within a cage, with openings on the bottom, end, or side of the desired size to allow pulverized material to escape. There are two types of impact crushers: horizontal shaft impactor and vertical shaft impactor. Horizontal shaft impactor (HSI) / hammermill HSI crushers break rock by impacting the rock with hammers that are fixed upon the outer edge of a spinning rotor. HSI machines are sold in stationary, trailer-mounted and crawler-mounted configurations. HSIs are used in recycling, hard rock and soft materials. In earlier years the practical use of HSI crushers was limited to soft, non-abrasive materials such as limestone, phosphate, gypsum and weathered shales; however, improvements in metallurgy have changed the application of these machines. Mobile crusher Mobile crushers are versatile and efficient machines designed for on-site crushing in mining and construction, offering flexibility and mobility to process materials directly at the job site. Mobile crushers are available in various types and configurations: Mobile Jaw Crushers: These crushers feature a stationary jaw and a movable jaw, enabling primary crushing of materials with varying hardness and abrasiveness.
Mobile Impact Crushers: Impact crushers utilize the principle of rapid impact to crush materials, making them suitable for secondary and tertiary crushing of various rocks and minerals. Mobile Cone Crushers: Cone crushers employ a cone-shaped crushing chamber, ideal for producing finely crushed aggregates and sands for construction and mining applications. Mobile Vertical Shaft Impact (VSI) Crushers: VSI crushers utilize a high-speed rotor with wear-resistant tips to crush materials, offering superior shaping capabilities for producing high-quality aggregates with excellent particle shape. Mobile Jaw and Cone Combination Crushers: These crushers combine the features of jaw and cone crushers, offering versatility and efficiency for processing diverse materials in various applications. Vertical shaft impactor (VSI) VSI crushers use a different approach, involving a high-speed rotor with wear-resistant tips and a crushing chamber against which the rock is 'thrown'. VSI crushers utilize velocity rather than surface force as the predominant force to break rock. In its natural state, rock has a jagged and uneven surface. Applying surface force (pressure) results in unpredictable and typically non-cubical particles. Utilizing velocity rather than surface force allows the breaking force to be applied evenly both across the surface of the rock and through the mass of the rock. Rock, regardless of size, has natural fissures (faults) throughout its structure. As rock is 'thrown' by a VSI rotor against a solid anvil, it fractures and breaks along these fissures. Final particle size can be controlled by 1) the velocity at which the rock is thrown against the anvil and 2) the distance between the end of the rotor and the impact point on the anvil. The product resulting from VSI crushing is generally of a consistent cubical shape, such as that required by modern Superpave highway asphalt applications. This method also allows materials of much higher abrasiveness to be crushed than is possible with an HSI and most other crushing methods. VSI crushers generally utilize a high-speed spinning rotor at the center of the crushing chamber and an outer impact surface of either abrasion-resistant metal anvils or crushed rock. Utilizing cast metal surfaces ('anvils') is traditionally referred to as a "shoe and anvil VSI". Utilizing crushed rock on the outer walls of the crusher for new rock to be crushed against is traditionally referred to as "rock on rock VSI". VSI crushers can be used in static plant set-ups or in mobile tracked equipment. Mineral sizers Mineral sizers are a variety of roll crushers which use two rotors with large teeth, on small-diameter shafts, driven at a low speed by a direct high-torque drive system. This design produces a three-stage breaking action when breaking materials using sizer technology. The gripping: in the first stage, the material is gripped by the leading faces of opposed rotor teeth. These subject the rock to multiple point loading, inducing stress in the material to exploit any natural weaknesses. In the second stage, material is broken in tension by being subjected to three-point loading, applied between the front tooth faces on one rotor and the rear tooth faces on the other rotor. Any lumps of material that still remain oversize are broken as the rotors chop through the fixed teeth of the breaker bar, thereby achieving a three-dimensionally controlled product size.
The rotating screen effect: The interlaced toothed rotor design allows free-flowing undersize material to pass through the continuously changing gaps generated by the relatively slow-moving shafts. The deep scroll tooth pattern: The deep scroll conveys the larger material to one end of the machine and helps to spread the feed across the full length of the rotors. This feature can also be used to reject oversize material from the machine. Their primary advantage is a compact geometry and size, which is valuable in the mining industry, e.g. in underground hard-rock mining. Crusher bucket A crusher bucket is an attachment for hydraulic excavators. It works as a bucket with two crushing jaws inside, one of which is fixed while the other moves back and forth relative to it, as in a jaw crusher. Crusher buckets are manufactured with a high-inertia power train, circular jaw movement and an anti-stagnation plate, which prevents large pieces from becoming stuck in the bucket's mouth, where they would be unable to enter the crushing jaws. The crushing jaws are also placed in a cross position; this position, together with the circular jaw motion, gives these crusher buckets the ability to grind wet material. Technology For the most part, advances in crusher design have come slowly. Jaw crushers have remained virtually unchanged for sixty years. More reliability and higher production have been added to basic cone crusher designs that have also remained largely unchanged. Increases in rotating speed have provided the largest variation. For instance, a 48-inch (120 cm) cone crusher manufactured in 1960 may be able to produce 170 tons/h of crushed rock, whereas the same size crusher manufactured today may produce 300 tons/h. These production improvements come from speed increases and better crushing chamber designs. The largest advance in cone crusher reliability has been seen in the use of hydraulics to protect crushers from being damaged when uncrushable objects enter the crushing chamber. Foreign objects, such as steel, can cause extensive damage to a cone crusher, and additional costs in lost production. The advance of hydraulic relief systems has greatly reduced downtime and improved the life of these machines. Due to safety requirements and the weight of the jaws, many companies and OEM providers have produced a suite of lifting tools to safely install and fit the jaws into the crusher. In many countries, the lifting tool is mandatory due to legislative and policy-based safety requirements.
Technology
Metallurgy
null
203970
https://en.wikipedia.org/wiki/Hoopoe
Hoopoe
Hoopoes are colourful birds found across Africa, Asia, and Europe, notable for their distinctive "crown" of feathers which can be raised or lowered at will. Three living and one extinct species are recognized, though for many years all of the extant species were lumped as a single species, Upupa epops. In fact, some taxonomists still consider all three species conspecific. Some authorities also keep the African and Eurasian hoopoe together but split the Madagascar hoopoe. The Eurasian hoopoe is common in its range and has a large population, so it is evaluated as Least Concern on The IUCN Red List of Threatened Species. However, their numbers are declining in Western Europe. Conversely, the hoopoe has been increasing in numbers at the tip of the South Sinai, at Sharm el-Sheikh, where dozens of nesting pairs remain resident all year round. Taxonomy The genus Upupa was introduced in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. The type species is the Eurasian hoopoe (Upupa epops). Upupa and ἔποψ (epops) are respectively the Latin and Ancient Greek names for the hoopoe; both, like the English name, are onomatopoeic forms which imitate the cry of the bird. The hoopoe was classified in the clade Coraciiformes, which also includes kingfishers, bee-eaters, and rollers. A close relationship between the hoopoe and the wood hoopoes is also supported by the shared and unique nature of their stapes. In the Sibley-Ahlquist taxonomy, the hoopoe is separated from the Coraciiformes as a separate order, the Upupiformes. Some authorities place the wood hoopoes in the Upupiformes as well. The consensus now is that both the hoopoes and the wood hoopoes belong with the hornbills in the Bucerotiformes. The fossil record of the hoopoes is very incomplete, with the earliest fossil coming from the Quaternary. The fossil record of their relatives is older, with fossil wood hoopoes dating back to the Miocene and those of an extinct related family, the Messelirrisoridae, dating from the Eocene. Species Formerly considered a single species, the hoopoe has been split into three separate species: the Eurasian hoopoe, the Madagascar hoopoe and the resident African hoopoe. One accepted separate species, the Saint Helena hoopoe, lived on the island of St Helena but became extinct in the 16th century, presumably due to introduced species. The genus Upupa was created by Linnaeus in his Systema naturae in 1758. It then included three other species with long curved bills: U. eremita (now Geronticus eremita), the northern bald ibis; U. pyrrhocorax (now Pyrrhocorax pyrrhocorax), the red-billed chough; and U. paradisea. Formerly, the greater hoopoe-lark was also considered to be a member of this genus (as Upupa alaudipes). Extant species Distribution and habitat Hoopoes are widespread in Europe, Asia, North Africa, Sub-Saharan Africa and Madagascar. Most European and north Asian birds migrate to the tropics in winter. In contrast, the African populations are sedentary all year. The species has been a vagrant in Alaska; U. e. saturata was recorded there in 1975 in the Yukon Delta. Hoopoes have been known to breed north of their European range, and in southern England during warm, dry summers that provide plenty of grasshoppers and similar insects, although as of the early 1980s northern European populations were reported to be in decline, possibly due to changes in climate.
The hoopoe has two basic requirements of its habitat: bare or lightly vegetated ground on which to forage and vertical surfaces with cavities (such as trees, cliffs or even walls, nestboxes, haystacks, and abandoned burrows) in which to nest. These requirements can be provided in a wide range of ecosystems, and as a consequence the hoopoe inhabits a wide range of habitats such as heathland, wooded steppes, savannas and grasslands, as well as forest glades. The Madagascar species also makes use of more dense primary forest. The modification of natural habitats by humans for various agricultural purposes has led to hoopoes becoming common in olive groves, orchards, vineyards, parkland and farmland, although they are less common and are declining in intensively farmed areas. Hunting is of concern in southern Europe and Asia. Hoopoes make seasonal movements in response to rain in some regions such as in Ceylon and in the Western Ghats. Birds have been seen at high altitudes during migration across the Himalayas. One was recorded at about by the first Mount Everest expedition. Behaviour and ecology In what was long thought to be a defensive posture, hoopoes sunbathe by spreading out their wings and tail low against the ground and tilting their head up; they often fold their wings and preen halfway through. They also enjoy taking dust and sand baths. Adults may begin their moult after the breeding season and continue after they have migrated for the winter. Diet and feeding The diet of the hoopoe is mostly composed of insects, although small reptiles, frogs and plant matter such as seeds and berries are sometimes taken as well. It is a solitary forager which typically feeds on the ground. More rarely they will feed in the air, where their strong and rounded wings make them fast and manoeuverable, in pursuit of numerous swarming insects. More commonly their foraging style is to stride over relatively open ground and periodically pause to probe the ground with the full length of their bill. Insect larvae, pupae and mole crickets are detected by the bill and either extracted or dug out with the strong feet. Hoopoes will also feed on insects on the surface, probe into piles of leaves, and even use the bill to lever large stones and flake off bark. Common diet items include crickets, locusts, beetles, earwigs, cicadas, ant lions, bugs and ants. These can range from in length, with a preferred prey size of around . Larger prey items are beaten against the ground or a preferred stone to kill them and remove indigestible body parts such as wings and legs. Breeding Hoopoes are monogamous, although the pair bond apparently only lasts for a single season. They are also territorial. The male calls frequently to advertise his ownership of the territory. Chases and fights between rival males (and sometimes females) are common and can be brutal. Birds will try to stab rivals with their bills, and individuals are occasionally blinded in fights. The nest is in a hole in a tree or wall, and has a narrow entrance. It may be unlined, or various scraps may be collected. The female alone is responsible for incubating the eggs. Clutch size varies with location: Northern Hemisphere birds lay more eggs than those in the Southern Hemisphere, and birds at higher latitudes have larger clutches than those closer to the equator. In central and northern Europe and Asia the clutch size is around 12, whereas it is around four in the tropics and seven in the subtropics. 
The eggs are round and milky blue when laid, but quickly discolour in the increasingly dirty nest. They weigh . A replacement clutch is possible. When food is bountiful, the female will lay a few extra eggs for the purpose of providing food for chicks that have already hatched. In a study done in Spain, it was found that nests with a higher incidence of cannibalism successfully fledged more chicks than nests where hatchlings were not fed to older chicks. Hoopoes have well-developed anti-predator defences in the nest. The uropygial gland of the incubating and brooding female is quickly modified to produce a foul-smelling liquid, and the glands of nestlings do so as well. These secretions are rubbed into the plumage. The secretion, which smells like rotting meat, is thought to help deter predators, as well as deter parasites and possibly act as an antibacterial agent. The secretions cease shortly before the young leave the nest. From the age of six days, nestlings can also direct streams of faeces at intruders, and will hiss at them in a snake-like fashion. The young also strike with their bill or with one wing. The incubation period for the species is between 15 and 18 days, during which time the male feeds the female. Incubation begins as soon as the first egg is laid, so the chicks are born asynchronously. The chicks hatch with a covering of downy feathers. By around day three to five, feather quills emerge which will become the adult feathers. The chicks are brooded by the female for between 9 and 14 days. The female later joins the male in the task of bringing food. The young fledge in 26 to 29 days and remain with the parents for about a week more. Relationship with humans The diet of the hoopoe includes many species considered by humans to be pests, such as the pupae of the processionary moth, a damaging forest pest which few other birds will eat because of its irritating hairs. For this reason the species is afforded protection under the law in many countries. In folklore, myth and religion Hoopoes are distinctive birds and have made a cultural impact over much of their range. They were considered sacred in Ancient Egypt, and were "depicted on the walls of tombs and temples". In the Old Kingdom, the hoopoe was used in iconography as a symbolic code to indicate that the child was the heir and successor of his father. They achieved a similar standing in Minoan Crete. In the Torah, Leviticus 11:13–19, hoopoes were listed among the animals that are detestable and should not be eaten. They are also listed in Deuteronomy as not kosher. The hoopoe also appears with King Solomon in the Quran, in Surah 27 (The Ant). The connection of the hoopoe with Solomon and the Queen of Sheba in the Qur'anic tradition is mentioned in passing in Rudyard Kipling's Just So story "The Butterfly that Stamped". In the pre-Islamic Vainakh religion of Chechnya, Ingushetia and Dagestan the hoopoe was sacred to the goddess Tusholi and known as "Tusholi's hen". As her bird, it could only be hunted with the express permission of the goddess's high priest, and even then only for strictly medicinal purposes. Hoopoes were seen as a symbol of virtue in Persia. A hoopoe leads the birds in the Persian book of poems The Conference of the Birds (by Attar); when the birds seek a king, the hoopoe points out that the Simurgh is the king of the birds. Hoopoes were thought of as thieves across much of Europe, and harbingers of war in Scandinavia.
In Estonian tradition, hoopoes are strongly connected with death and the underworld; their song is believed to foreshadow death for many people or cattle. In medieval ritual magic, the hoopoe was thought to be an evil bird. The Munich Manual of Demonic Magic, a collection of magical spells compiled in Germany, frequently requires the sacrifice of a hoopoe to summon demons and carry out other magical intentions. Tereus, transformed into the hoopoe, is the king of the birds in the Ancient Greek comedy The Birds by Aristophanes. In Ovid's Metamorphoses, book 6, King Tereus of Thrace rapes Philomela, his wife Procne's sister, and cuts out her tongue. In revenge, Procne kills their son Itys and serves him as a stew to his father. When Tereus sees the boy's head, which is served on a platter, he grabs a sword, but just as he attempts to kill the sisters, they are turned into birds: Procne into a swallow and Philomela into a nightingale. Tereus himself is turned into an epops (6.674), translated as lapwing by Dryden and lappewincke (lappewinge) by John Gower in his Confessio Amantis, or hoopoe in A.S. Kline's translation. The bird's crest indicates his royal status, and his long, sharp beak is a symbol of his violent nature. English translators and poets probably had the northern lapwing in mind, considering its crest. As emblem The Eurasian hoopoe was chosen as the national bird of Israel in May 2008 in conjunction with the country's 60th anniversary, following a national survey of 155,000 citizens, outpolling the white-spectacled bulbul. The hoopoe appears on the logo of the University of Johannesburg and is the official mascot of the university's sports teams. The municipalities of Armstedt and Brechten, Germany, have a hoopoe in their coats of arms, as does Mārupe Municipality since 2021. Use in folk medicine In Morocco, hoopoes are traded live and as medicinal products in the markets, primarily in herbalist shops. This trade is unregulated and a potential threat to local populations. In Manipur, one of the states comprising Northeast India, the hoopoe is still used by traditional Muslim healers in a variety of preparations believed locally to benefit a number of conditions, both medical and spiritual. Manipur abuts upon Myanmar and has been a cultural crossroads and melting pot of cultures for over 2,500 years. Its traditional medicine may thus reflect influences from an unusually wide area, including not only the Indian subcontinent but also Central Asia, Southeast Asia, East Asia and even the further-flung regions of Siberia, the Arctic, Micronesia and Polynesia. Ibopishak and Bimola record four Manipuri folk medicinal uses of the hoopoe which specify neither the body part of the bird used nor its method of preparation: as a tranquilizer; in the treatment of abdominal pain; in the treatment of kidney and bladder disorders; and in the "prevention of leprosy". More specifically, it is believed that if an essence (method of preparation unspecified) prepared from the bird is dropped into the eye, it will remove superfluous eyelashes and strengthen the memory. Furthermore, the authors record the following local Manipuri beliefs concerning specific body parts of the hoopoe: that its meat prevents frequent urination; that its feathers have the insecticidal property of killing ants and fleas; that its blood banishes fairies (jinn) and nightmares; that its heart cures (unspecified) diseases; and that its claws can be used to cure speech disorders.
While Ibopishak and Bimola are unable to find any discernible effect of hoopoe tissue alone upon the dissolution of kidney stones, they do note that their experiments reveal that hoopoe tissue potentiates the effects of the Manipuri medicinal plant Cissus javana, when employed to treat such calculi (local healers use bird and plant in just such a combination for this purpose). Since, however, there was no control used involving the tissues of any other bird species, it remains unclear whether there are any medicinal properties peculiar to hoopoe tissue deriving from a distinctive chemistry. In popular culture Harrison Tordoff, a World War II fighter ace and later a noted ornithologist, named his P-51 Mustang Upupa epops, the scientific name of the hoopoe bird. A talking hoopoe named Almost Brilliant is a character in Nghi Vo's Singing Hills Cycle, first appearing in The Empress of Salt and Fortune.
Biology and health sciences
Coraciiformes
null
204002
https://en.wikipedia.org/wiki/Directed%20acyclic%20graph
Directed acyclic graph
In mathematics, particularly graph theory, and computer science, a directed acyclic graph (DAG) is a directed graph with no directed cycles. That is, it consists of vertices and edges (also called arcs), with each edge directed from one vertex to another, such that following those directions will never form a closed loop. A directed graph is a DAG if and only if it can be topologically ordered, by arranging the vertices as a linear ordering that is consistent with all edge directions. DAGs have numerous scientific and computational applications, ranging from biology (evolution, family trees, epidemiology) to information science (citation networks) to computation (scheduling). Directed acyclic graphs are also called acyclic directed graphs or acyclic digraphs. Definitions A graph is formed by vertices and by edges connecting pairs of vertices, where the vertices can be any kind of object that is connected in pairs by edges. In the case of a directed graph, each edge has an orientation, from one vertex to another vertex. A path in a directed graph is a sequence of edges having the property that the ending vertex of each edge in the sequence is the same as the starting vertex of the next edge in the sequence; a path forms a cycle if the starting vertex of its first edge equals the ending vertex of its last edge. A directed acyclic graph is a directed graph that has no cycles. A vertex v of a directed graph is said to be reachable from another vertex u when there exists a path that starts at u and ends at v. As a special case, every vertex is considered to be reachable from itself (by a path with zero edges). If a vertex can reach itself via a nontrivial path (a path with one or more edges), then that path is a cycle, so another way to define directed acyclic graphs is that they are the graphs in which no vertex can reach itself via a nontrivial path. Mathematical properties Reachability relation, transitive closure, and transitive reduction The reachability relation of a DAG can be formalized as a partial order ≤ on the vertices of the DAG. In this partial order, two vertices u and v are ordered as u ≤ v exactly when there exists a directed path from u to v in the DAG; that is, when u can reach v (or v is reachable from u). However, different DAGs may give rise to the same reachability relation and the same partial order. For example, a DAG with two edges a → b and b → c has the same reachability relation as the DAG with three edges a → b, b → c, and a → c. Both of these DAGs produce the same partial order, in which the vertices are ordered as a ≤ b ≤ c. The transitive closure of a DAG is the graph with the most edges that has the same reachability relation as the DAG. It has an edge u → v for every pair of vertices (u, v) in the reachability relation ≤ of the DAG, and may therefore be thought of as a direct translation of the reachability relation into graph-theoretic terms. The same method of translating partial orders into DAGs works more generally: for every finite partially ordered set (S, ≤), the graph that has a vertex for every element of S and an edge for every pair of elements related by ≤ is automatically a transitively closed DAG, and has ≤ as its reachability relation. In this way, every finite partially ordered set can be represented as a DAG. The transitive reduction of a DAG is the graph with the fewest edges that has the same reachability relation as the DAG. It has an edge u → v for every pair of vertices (u, v) in the covering relation of the reachability relation ≤ of the DAG.
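As a concrete illustration of these definitions, the following Python sketch (standard library only; the three-vertex example graph and all function names are chosen here purely for illustration) computes the reachability relation of the small DAG a → b → c with the extra edge a → c, and then derives its transitive closure and transitive reduction directly from that relation.

```python
from collections import defaultdict

# Example DAG as an adjacency list: a -> b -> c plus the "shortcut" edge a -> c.
edges = [("a", "b"), ("b", "c"), ("a", "c")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)

def reachable(adj, start):
    """All vertices reachable from start by a path of one or more edges."""
    seen, stack = set(), list(adj[start])
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return seen

# Transitive closure: one edge (u, v) for every v reachable from u.
closure = {(u, v) for u in list(adj) for v in reachable(adj, u)}

# Transitive reduction: keep u -> v only if no out-neighbor w of u other than v
# can also reach v (i.e. there is no longer path from u to v).
reduction = {(u, v) for u, v in edges
             if not any(v in reachable(adj, w) for w in adj[u] if w != v)}

print(sorted(closure))    # [('a', 'b'), ('a', 'c'), ('b', 'c')]
print(sorted(reduction))  # [('a', 'b'), ('b', 'c')] -- the shortcut a -> c is redundant
```

The redundant edge a → c appears in the closure but is discarded by the reduction, matching the example above.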
The transitive reduction is a subgraph of the DAG, formed by discarding the edges u → v for which the DAG also contains a longer directed path from u to v. Like the transitive closure, the transitive reduction is uniquely defined for DAGs. In contrast, for a directed graph that is not acyclic, there can be more than one minimal subgraph with the same reachability relation. Transitive reductions are useful in visualizing the partial orders they represent, because they have fewer edges than other graphs representing the same orders and therefore lead to simpler graph drawings. A Hasse diagram of a partial order is a drawing of the transitive reduction in which the orientation of every edge is shown by placing the starting vertex of the edge in a lower position than its ending vertex. Topological ordering A topological ordering of a directed graph is an ordering of its vertices into a sequence, such that for every edge the start vertex of the edge occurs earlier in the sequence than the ending vertex of the edge. A graph that has a topological ordering cannot have any cycles, because the edge into the earliest vertex of a cycle would have to be oriented the wrong way. Therefore, every graph with a topological ordering is acyclic. Conversely, every directed acyclic graph has at least one topological ordering. The existence of a topological ordering can therefore be used as an equivalent definition of directed acyclic graphs: they are exactly the graphs that have topological orderings. In general, this ordering is not unique; a DAG has a unique topological ordering if and only if it has a directed path containing all the vertices, in which case the ordering is the same as the order in which the vertices appear in the path. The family of topological orderings of a DAG is the same as the family of linear extensions of the reachability relation for the DAG, so any two graphs representing the same partial order have the same set of topological orders. Combinatorial enumeration The graph enumeration problem of counting directed acyclic graphs was studied by Robinson (1973). The number of DAGs on n labeled vertices, for n = 0, 1, 2, 3, … (without restrictions on the order in which these numbers appear in a topological ordering of the DAG) is 1, 1, 3, 25, 543, 29281, 3781503, … . These numbers may be computed by the recurrence relation a_n = \sum_{k=1}^{n} (-1)^{k-1} \binom{n}{k} 2^{k(n-k)} a_{n-k}, with a_0 = 1. Eric W. Weisstein conjectured, and McKay et al. (2004) proved, that the same numbers count the (0,1) matrices for which all eigenvalues are positive real numbers. The proof is bijective: a matrix A is an adjacency matrix of a DAG if and only if A + I is a (0,1) matrix with all eigenvalues positive, where I denotes the identity matrix. Because a DAG cannot have self-loops, its adjacency matrix must have a zero diagonal, so adding I preserves the property that all matrix coefficients are 0 or 1.
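As a quick check of the enumeration above, the short Python sketch below (the function name is illustrative) evaluates the recurrence just given and reproduces the quoted sequence.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def labeled_dags(n):
    """Number of DAGs on n labeled vertices, via the recurrence given above."""
    if n == 0:
        return 1
    return sum((-1) ** (k - 1) * comb(n, k) * 2 ** (k * (n - k)) * labeled_dags(n - k)
               for k in range(1, n + 1))

print([labeled_dags(n) for n in range(7)])
# [1, 1, 3, 25, 543, 29281, 3781503]
```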
Related families of graphs A multitree (also called a strongly unambiguous graph or a mangrove) is a DAG in which there is at most one directed path between any two vertices. Equivalently, it is a DAG in which the subgraph reachable from any vertex induces an undirected tree. A polytree (also called a directed tree) is a multitree formed by orienting the edges of an undirected tree. An arborescence is a polytree formed by orienting the edges of an undirected tree away from a particular vertex, called the root of the arborescence. Computational problems Topological sorting and recognition Topological sorting is the algorithmic problem of finding a topological ordering of a given DAG. It can be solved in linear time. Kahn's algorithm for topological sorting builds the vertex ordering directly. It maintains a list of vertices that have no incoming edges from other vertices that have not already been included in the partially constructed topological ordering; initially this list consists of the vertices with no incoming edges at all. Then, it repeatedly adds one vertex from this list to the end of the partially constructed topological ordering, and checks whether its neighbors should be added to the list. The algorithm terminates when all vertices have been processed in this way. Alternatively, a topological ordering may be constructed by reversing a postorder numbering of a depth-first search graph traversal. It is also possible to check whether a given directed graph is a DAG in linear time, either by attempting to find a topological ordering and then testing for each edge whether the resulting ordering is valid, or alternatively, for some topological sorting algorithms, by verifying that the algorithm successfully orders all the vertices without meeting an error condition. Construction from cyclic graphs Any undirected graph may be made into a DAG by choosing a total order for its vertices and directing every edge from the earlier endpoint in the order to the later endpoint. The resulting orientation of the edges is called an acyclic orientation. Different total orders may lead to the same acyclic orientation, so an n-vertex graph can have fewer than n! acyclic orientations. The number of acyclic orientations is equal to |χ(−1)|, where χ is the chromatic polynomial of the given graph. Any directed graph may be made into a DAG by removing a feedback vertex set or a feedback arc set, a set of vertices or edges (respectively) that touches all cycles. However, the smallest such set is NP-hard to find. An arbitrary directed graph may also be transformed into a DAG, called its condensation, by contracting each of its strongly connected components into a single supervertex. When the graph is already acyclic, its smallest feedback vertex sets and feedback arc sets are empty, and its condensation is the graph itself. Transitive closure and transitive reduction The transitive closure of a given DAG, with n vertices and m edges, may be constructed in time O(mn) by using either breadth-first search or depth-first search to test reachability from each vertex. Alternatively, it can be solved in time O(n^ω), where ω is the exponent for matrix multiplication algorithms; this is a theoretical improvement over the O(mn) bound for dense graphs. In all of these transitive closure algorithms, it is possible to distinguish pairs of vertices that are reachable by at least one path of length two or more from pairs that can only be connected by a length-one path. The transitive reduction consists of the edges that form length-one paths that are the only paths connecting their endpoints. Therefore, the transitive reduction can be constructed in the same asymptotic time bounds as the transitive closure. Closure problem The closure problem takes as input a vertex-weighted directed acyclic graph and seeks the minimum (or maximum) weight of a closure – a set of vertices C, such that no edges leave C. The problem may be formulated for directed graphs without the assumption of acyclicity, but with no greater generality, because in this case it is equivalent to the same problem on the condensation of the graph. It may be solved in polynomial time using a reduction to the maximum flow problem.
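Returning to the topological sorting and recognition problem described at the start of this section, the following self-contained Python sketch of Kahn's algorithm (the function name and example graph are illustrative, not taken from any particular library) returns a topological order when the input is a DAG and reports failure otherwise, which doubles as the linear-time acyclicity test mentioned above.

```python
from collections import deque

def kahn_topological_sort(vertices, edges):
    """Return a topological order of the graph, or None if it contains a cycle."""
    adj = {v: [] for v in vertices}
    indegree = {v: 0 for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1

    # Start with every vertex that has no incoming edges at all.
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        # Removing u may leave some of its neighbors with no remaining incoming edges.
        for v in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)

    # If some vertices were never processed, the leftover edges contain a cycle.
    return order if len(order) == len(vertices) else None

# Example: a must precede b and c; b and c must both precede d.
print(kahn_topological_sort("abcd", [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))
# One valid output: ['a', 'b', 'c', 'd']
```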
Path algorithms Some algorithms become simpler when used on DAGs instead of general graphs, based on the principle of topological ordering. For example, it is possible to find shortest paths and longest paths from a given starting vertex in DAGs in linear time by processing the vertices in a topological order, and calculating the path length for each vertex to be the minimum or maximum length obtained via any of its incoming edges. In contrast, for arbitrary graphs the shortest path may require slower algorithms such as Dijkstra's algorithm or the Bellman–Ford algorithm, and longest paths in arbitrary graphs are NP-hard to find. Applications Scheduling Directed acyclic graph representations of partial orderings have many applications in scheduling for systems of tasks with ordering constraints. An important class of problems of this type concern collections of objects that need to be updated, such as the cells of a spreadsheet after one of the cells has been changed, or the object files of a piece of computer software after its source code has been changed. In this context, a dependency graph is a graph that has a vertex for each object to be updated, and an edge connecting two objects whenever one of them needs to be updated earlier than the other. A cycle in this graph is called a circular dependency, and is generally not allowed, because there would be no way to consistently schedule the tasks involved in the cycle. Dependency graphs without circular dependencies form DAGs. For instance, when one cell of a spreadsheet changes, it is necessary to recalculate the values of other cells that depend directly or indirectly on the changed cell. For this problem, the tasks to be scheduled are the recalculations of the values of individual cells of the spreadsheet. Dependencies arise when an expression in one cell uses a value from another cell. In such a case, the value that is used must be recalculated earlier than the expression that uses it. Topologically ordering the dependency graph, and using this topological order to schedule the cell updates, allows the whole spreadsheet to be updated with only a single evaluation per cell. Similar problems of task ordering arise in makefiles for program compilation and instruction scheduling for low-level computer program optimization. A somewhat different DAG-based formulation of scheduling constraints is used by the program evaluation and review technique (PERT), a method for management of large human projects that was one of the first applications of DAGs. In this method, the vertices of a DAG represent milestones of a project rather than specific tasks to be performed. Instead, a task or activity is represented by an edge of a DAG, connecting two milestones that mark the beginning and completion of the task. Each such edge is labeled with an estimate for the amount of time that it will take a team of workers to perform the task. The longest path in this DAG represents the critical path of the project, the one that controls the total time for the project. Individual milestones can be scheduled according to the lengths of the longest paths ending at their vertices. Data processing networks A directed acyclic graph may be used to represent a network of processing elements. In this representation, data enters a processing element through its incoming edges and leaves the element through its outgoing edges. 
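A minimal sketch of this style of processing follows (the "cells" below could equally be spreadsheet cells or processing elements in a dataflow network; the names, formulas and values are invented for illustration): each element is evaluated exactly once, after all of the elements it depends on, by following the dependency DAG depth-first, which amounts to evaluating in a topological order.

```python
# Dependency DAG: each cell lists the cells it reads and a function of their values.
# The input is assumed to be acyclic (no circular dependencies).
cells = {
    "a": ([], lambda: 2),
    "b": ([], lambda: 3),
    "c": (["a", "b"], lambda a, b: a + b),   # c depends on a and b
    "d": (["c"], lambda c: 10 * c),          # d depends on c
}

def evaluate(cells):
    """Evaluate every cell exactly once, in a topological order of the dependencies."""
    values, done = {}, set()

    def visit(name):                  # depth-first: evaluate inputs first, then the cell
        if name in done:
            return
        inputs, fn = cells[name]
        for dep in inputs:
            visit(dep)
        values[name] = fn(*(values[d] for d in inputs))
        done.add(name)

    for name in cells:
        visit(name)
    return values

print(evaluate(cells))   # {'a': 2, 'b': 3, 'c': 5, 'd': 50}
```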
For instance, in electronic circuit design, static combinational logic blocks can be represented as an acyclic system of logic gates that computes a function of an input, where the input and output of the function are represented as individual bits. In general, the output of these blocks cannot be used as the input unless it is captured by a register or state element, which maintains its acyclic properties. Electronic circuit schematics, either on paper or in a database, are a form of directed acyclic graph, using instances or components to form a directed reference to a lower-level component. Electronic circuits themselves are not necessarily acyclic or directed. Dataflow programming languages describe systems of operations on data streams, and the connections between the outputs of some operations and the inputs of others. These languages can be convenient for describing repetitive data processing tasks, in which the same acyclically-connected collection of operations is applied to many data items. They can be executed as a parallel algorithm in which each operation is performed by a parallel process as soon as another set of inputs becomes available to it. In compilers, straight-line code (that is, sequences of statements without loops or conditional branches) may be represented by a DAG describing the inputs and outputs of each of the arithmetic operations performed within the code. This representation allows the compiler to perform common subexpression elimination efficiently. At a higher level of code organization, the acyclic dependencies principle states that the dependencies between modules or components of a large software system should form a directed acyclic graph. Feedforward neural networks are another example. Causal structures Graphs in which vertices represent events occurring at a definite time, and where the edges always point from the early time vertex to a late time vertex of the edge, are necessarily directed and acyclic. The lack of a cycle follows because the time associated with a vertex always increases as you follow any path in the graph, so you can never return to a vertex on a path. This reflects our natural intuition that causality means events can only affect the future, never the past, and thus we have no causal loops. Examples of this type of directed acyclic graph are the graphs encountered in the causal set approach to quantum gravity, though in this case the graphs considered are transitively complete. In the version history example below, each version of the software is associated with a unique time, typically the time the version was saved, committed or released. In the citation graph examples below, the documents are published at one time and can only refer to older documents. Sometimes events are not associated with a specific physical time. Provided that pairs of events have a purely causal relationship, that is, the edges represent causal relations between the events, we will have a directed acyclic graph. For instance, a Bayesian network represents a system of probabilistic events as vertices in a directed acyclic graph, in which the likelihood of an event may be calculated from the likelihoods of its predecessors in the DAG. In this context, the moral graph of a DAG is the undirected graph created by adding an (undirected) edge between all parents of the same vertex (sometimes called marrying), and then replacing all directed edges by undirected edges. 
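As a small illustration of the factorization that such a DAG encodes, the sketch below computes the joint probability of one full assignment in a toy Bayesian network by multiplying each vertex's conditional probability given its parents; the network, the variable names, and the probability tables are invented for the example.

```python
def joint_probability(parents, cpt, assignment):
    """Joint probability of a full truth assignment in a Bayesian network.

    `parents` maps each variable to the tuple of its parent variables,
    `cpt` maps each variable to a function giving P(var=True | parent values),
    and `assignment` maps each variable to True or False.
    """
    p = 1.0
    for var, pa in parents.items():
        parent_values = tuple(assignment[q] for q in pa)
        p_true = cpt[var](parent_values)
        p *= p_true if assignment[var] else 1.0 - p_true
    return p

# Toy network: rain influences the sprinkler, and both influence wet grass.
parents = {"rain": (), "sprinkler": ("rain",), "wet": ("rain", "sprinkler")}
cpt = {
    "rain": lambda _: 0.2,
    "sprinkler": lambda pv: 0.01 if pv[0] else 0.4,   # less likely if raining
    "wet": lambda pv: {(True, True): 0.99, (True, False): 0.8,
                       (False, True): 0.9, (False, False): 0.0}[pv],
}
print(joint_probability(parents, cpt,
                        {"rain": True, "sprinkler": False, "wet": True}))
# 0.2 * 0.99 * 0.8  ->  approximately 0.1584
```

Summing this product over the assignments of any unobserved variables would give marginal likelihoods, which is what inference algorithms on the DAG compute more efficiently.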
Another type of graph with a similar causal structure is an influence diagram, the vertices of which represent either decisions to be made or unknown information, and the edges of which represent causal influences from one vertex to another. In epidemiology, for instance, these diagrams are often used to estimate the expected value of different choices for intervention. The converse is also true. That is, in any application represented by a directed acyclic graph there is a causal structure, either an explicit order or time in the example, or an order which can be derived from the graph structure. This follows because all directed acyclic graphs have a topological ordering, i.e., there is at least one way to put the vertices in an order such that all edges point in the same direction along that order. Genealogy and version history Family trees may be seen as directed acyclic graphs, with a vertex for each family member and an edge for each parent-child relationship. Despite the name, these graphs are not necessarily trees because of the possibility of marriages between relatives (so a child has a common ancestor on both the mother's and father's side) causing pedigree collapse. The graphs of matrilineal descent (mother-daughter relationships) and patrilineal descent (father-son relationships) are trees within this graph. Because no one can become their own ancestor, family trees are acyclic. The version history of a distributed revision control system, such as Git, generally has the structure of a directed acyclic graph, in which there is a vertex for each revision and an edge connecting pairs of revisions that were directly derived from each other. These are not trees in general due to merges (a small reachability sketch for such revision DAGs is given below, after the citation graph examples). In many randomized algorithms in computational geometry, the algorithm maintains a history DAG representing the version history of a geometric structure over the course of a sequence of changes to the structure. For instance, in a randomized incremental algorithm for Delaunay triangulation, the triangulation changes by replacing one triangle by three smaller triangles when each point is added, and by "flip" operations that replace pairs of triangles by a different pair of triangles. The history DAG for this algorithm has a vertex for each triangle constructed as part of the algorithm, and edges from each triangle to the two or three other triangles that replace it. This structure allows point location queries to be answered efficiently: to find the location of a query point in the Delaunay triangulation, follow a path in the history DAG, at each step moving to the replacement triangle that contains the query point. The final triangle reached in this path must be the Delaunay triangle that contains the query point. Citation graphs In a citation graph the vertices are documents with a single publication date. The edges represent the citations from the bibliography of one document to other necessarily earlier documents. The classic example comes from the citations between academic papers, as pointed out in the 1965 article "Networks of Scientific Papers" by Derek J. de Solla Price, who went on to produce the first model of a citation network, the Price model. In this case the citation count of a paper is just the in-degree of the corresponding vertex of the citation network. This is an important measure in citation analysis. Court judgements provide another example, as judges support their conclusions in one case by recalling other earlier decisions made in previous cases. 
A final example is provided by patents, which must refer to earlier prior art: earlier patents that are relevant to the current patent claim. By taking the special properties of directed acyclic graphs into account, one can analyse citation networks with techniques not available when analysing the general graphs considered in many studies using network analysis. For instance, transitive reduction gives new insights into the citation distributions found in different applications, highlighting clear differences in the mechanisms creating citation networks in different contexts. Another technique is main path analysis, which traces the citation links and suggests the most significant citation chains in a given citation graph. The Price model is too simple to be a realistic model of a citation network, but it is simple enough to allow for analytic solutions for some of its properties. Many of these can be found by using results derived from the undirected version of the Price model, the Barabási–Albert model. However, since Price's model gives a directed acyclic graph, it is a useful model when looking for analytic calculations of properties unique to directed acyclic graphs. For instance, the length of the longest path, from the n-th node added to the network to the first node in the network, scales as . Data compression Directed acyclic graphs may also be used as a compact representation of a collection of sequences. In this type of application, one finds a DAG in which the paths form the given sequences. When many of the sequences share the same subsequences, these shared subsequences can be represented by a shared part of the DAG, allowing the representation to use less space than it would take to list out all of the sequences separately. For example, the directed acyclic word graph is a data structure in computer science formed by a directed acyclic graph with a single source and with edges labeled by letters or symbols; the paths from the source to the sinks in this graph represent a set of strings, such as English words. Any set of sequences can be represented as paths in a tree, by forming a tree vertex for every prefix of a sequence and making the parent of one of these vertices represent the sequence with one fewer element; the tree formed in this way for a set of strings is called a trie. A directed acyclic word graph saves space over a trie by allowing paths to diverge and rejoin, so that a set of words with the same possible suffixes can be represented by a single vertex. The same idea of using a DAG to represent a family of paths occurs in the binary decision diagram, a DAG-based data structure for representing binary functions. In a binary decision diagram, each non-sink vertex is labeled by the name of a binary variable, and each sink and each edge is labeled by a 0 or 1. The function value for any truth assignment to the variables is the value at the sink found by following a path, starting from the single source vertex, that at each non-sink vertex follows the outgoing edge labeled with the value of that vertex's variable. Just as directed acyclic word graphs can be viewed as a compressed form of tries, binary decision diagrams can be viewed as compressed forms of decision trees that save space by allowing paths to rejoin when they agree on the results of all remaining decisions.
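As a concrete illustration of how a function value is read off a binary decision diagram in the way just described, here is a small sketch; the node representation and the example function are invented for the example rather than taken from any BDD library. Note how the two paths that agree on the remaining decision share a single node for the last variable.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Node:
    """A non-sink BDD vertex: test `var`, follow `low` on 0/False and
    `high` on 1/True. Sinks are represented by the integers 0 and 1."""
    var: str
    low: Union["Node", int]
    high: Union["Node", int]

def evaluate(node, assignment):
    """Follow the path selected by `assignment` until a sink (0 or 1) is reached."""
    while not isinstance(node, int):
        node = node.high if assignment[node.var] else node.low
    return node

# BDD for f(x1, x2, x3) = (x1 AND x2) OR x3 with variable order x1 < x2 < x3.
# The x3 node is shared: two paths rejoin because the remaining decision agrees.
x3_node = Node("x3", low=0, high=1)
x2_node = Node("x2", low=x3_node, high=1)
root = Node("x1", low=x3_node, high=x2_node)

print(evaluate(root, {"x1": 1, "x2": 1, "x3": 0}))  # 1
print(evaluate(root, {"x1": 0, "x2": 1, "x3": 0}))  # 0
```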
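Returning to the Git-style version histories discussed earlier, a basic operation on such a revision DAG is the reachability query "is one revision an ancestor of another?". The sketch below answers it by walking parent edges breadth-first; the data layout and the function name are illustrative assumptions, not any particular tool's interface.

```python
from collections import deque

def is_ancestor(parents, older, newer):
    """Return True if `older` is reachable from `newer` along parent edges.

    `parents` maps each revision id to the list of revisions it was
    directly derived from (two parents for a merge, none for the root).
    """
    seen = {newer}
    queue = deque([newer])
    while queue:
        rev = queue.popleft()
        if rev == older:
            return True
        for p in parents.get(rev, ()):
            if p not in seen:    # a revision may be reachable along many paths
                seen.add(p)
                queue.append(p)
    return False

# A small history with a branch and a merge:  r1 <- r2 <- r4  and  r1 <- r3 <- r4.
history = {"r1": [], "r2": ["r1"], "r3": ["r1"], "r4": ["r2", "r3"]}
print(is_ancestor(history, "r1", "r4"))  # True
print(is_ancestor(history, "r2", "r3"))  # False: parallel branches
```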
Mathematics
Graph theory
null
204037
https://en.wikipedia.org/wiki/3C%20273
3C 273
3C 273 is a quasar located at the center of a giant elliptical galaxy in the constellation of Virgo. It was the first quasar ever to be identified and is the visually brightest quasar in the sky as seen from Earth, with an apparent visual magnitude of 12.9. The derived distance to this object is . The mass of its central supermassive black hole is approximately 886 million times the mass of the Sun. Observation 3C 273 is visible from March to July in both the northern and southern hemispheres. Situated in the Virgo constellation, it is bright enough to be observed visually with an amateur telescope. Due in part to its radio luminosity and its discovery as the first identified quasar, 3C 273's right ascension in the Fifth Fundamental Catalog (FK5) is used to standardize the positions of 23 extragalactic radio sources used to define the International Celestial Reference System (ICRS). Given its distance from Earth and visual magnitude, 3C 273 is the most distant celestial object average amateur astronomers are likely to see through their telescopes. Properties This is the optically brightest quasar in the sky as seen from Earth, with an apparent visual magnitude of ~12.9, and one of the closest, with a redshift, z, of 0.158. A luminosity distance of DL = may be calculated from z. Using parallax methods with the Very Large Telescope interferometer yields a distance estimate of  (). It is one of the most luminous quasars known, with an absolute magnitude of −26.7, meaning that if it were only as distant as Pollux (~10 parsecs) it would appear nearly as bright in the sky as the Sun. Since the Sun's absolute magnitude is 4.83, this means that the quasar is over 4 trillion times more luminous than the Sun at visible wavelengths. The luminosity of 3C 273 is variable at nearly every wavelength from radio waves to gamma rays, on timescales of a few days to decades. Polarization with coincident orientation has been observed in the radio, infrared, and optical light emitted from the large-scale jet; these emissions are therefore almost certainly synchrotron in nature. The radiation is created by a jet of charged particles moving at relativistic speeds. VLBI radio observations of 3C 273 have revealed proper motion of some of the radio-emitting regions, further suggesting the presence of relativistic jets of material. This is a prototype of an active galactic nucleus, demonstrating that the energy is being produced through accretion by a supermassive black hole (SMBH). No other astrophysical source can produce the observed energy. The mass of its central SMBH has been measured to be approximately 886 million solar masses through broad emission-line reverberation mapping. Large-scale jet The quasar has a large-scale visible jet, which measures ~ long, having an apparent size of 23″. Such jets are believed to be created by the interaction of the central black hole and the accretion disk. In 1995, optical imaging of the jet using the Hubble Space Telescope revealed a structured morphology evidenced by repeated bright knots interlaced by areas of weak emission. The viewing angle of the jet is about 6° as seen from Earth. The jet was observed to abruptly change direction by an intrinsic angle of 2° in 2003, which is larger than the jet's intrinsic opening angle of 1.1°. An expanding cocoon of heated gas is being generated by the jet, which may be impacting an inclined disk of gas within the central . Host galaxy 3C 273 lies at the center of a giant elliptical galaxy with an apparent magnitude of 16 and an apparent size of 29 arcseconds. 
The morphological classification of the host galaxy is E4, indicating a moderately flattened elliptical shape. The galaxy has an estimated mass of . History The name signifies that it was the 273rd object (ordered by right ascension) of the Third Cambridge Catalog of Radio Sources (3C), published in 1959. After accurate positions were obtained using lunar occultation by Cyril Hazard at the Parkes Radio Telescope, the radio source was quickly associated with an optical counterpart, an unresolved stellar object. In 1963, Maarten Schmidt and Bev Oke published a pair of papers in Nature reporting that 3C 273 has a substantial redshift of 0.158, placing it several billion light-years away. Prior to the discovery of 3C 273, several other radio sources had been associated with optical counterparts, the first being 3C 48. Also, many active galaxies had been misidentified as variable stars, including the famous BL Lac, W Com and AU CVn. However, it was not understood what these objects were, since their spectra were unlike those of any known stars. Its spectrum did not resemble that of any normal stars with typical stellar elements. 3C 273 was the first object to be identified as a quasar—an extremely luminous object at an astronomical distance. 3C 273 is a radio-loud quasar, and was also one of the first extragalactic X-ray sources discovered in 1970. However, even to this day, the process which gives rise to the X-ray emissions is controversial.
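As a worked check of the luminosity comparison quoted in the Properties discussion above, using the standard conversion from an absolute-magnitude difference to a luminosity ratio and the figures given in the text:

L_{3C\,273} / L_{\odot} = 10^{\,0.4\,(M_{\odot} - M_{3C\,273})} = 10^{\,0.4\,(4.83 - (-26.7))} = 10^{\,12.6} \approx 4 \times 10^{12}

which is consistent with the statement that the quasar is over 4 trillion times more luminous than the Sun at visible wavelengths.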
Physical sciences
Other notable objects
null
204092
https://en.wikipedia.org/wiki/Therapsida
Therapsida
Therapsida is a clade comprising a major group of eupelycosaurian synapsids that includes mammals and their ancestors and close relatives. Many of the traits today seen as unique to mammals had their origin within early therapsids, including limbs that were oriented more underneath the body, resulting in a more "standing" quadrupedal posture, as opposed to the lower sprawling posture of many reptiles and amphibians. Therapsids evolved from earlier synapsids commonly called "pelycosaurs", specifically within the Sphenacodontia, more than 279.5 million years ago. They replaced the pelycosaurs as the dominant large land animals in the Guadalupian through to the Early Triassic. In the aftermath of the Permian–Triassic extinction event, therapsids declined in relative importance to the rapidly diversifying archosaurian sauropsids (pseudosuchians, dinosaurs and pterosaurs, etc.) during the Middle Triassic. The therapsids include the cynodonts, the group that gave rise to mammals (Mammaliaformes) in the Late Triassic around 225 million years ago, the only therapsid clade that survived beyond the end of the Triassic. The only other group of therapsids to have survived into the Late Triassic, the dicynodonts, became extinct towards the end of the period. The last surviving group of non-mammaliaform cynodonts were the Tritylodontidae, which became extinct during the Early Cretaceous. Characteristics Jaw and teeth Therapsids' temporal fenestrae were larger than those of the pelycosaurs. The jaws of some therapsids were more complex and powerful, and the teeth were differentiated into frontal incisors for nipping, great lateral canines for puncturing and tearing, and molars for shearing and chopping food. Posture Therapsid legs were positioned more vertically beneath their bodies than were the sprawling legs of reptiles and pelycosaurs. Also compared to these groups, the feet were more symmetrical, with the first and last toes short and the middle toes long, an indication that the foot's axis was placed parallel to that of the animal, not sprawling out sideways. This orientation would have given a more mammal-like gait than the lizard-like gait of the pelycosaurs. Physiology The physiology of therapsids is poorly understood. Most Permian therapsids had a pineal foramen, indicating that they had a parietal eye like many modern reptiles and amphibians. The parietal eye serves an important role in thermoregulation and the circadian rhythm of ectotherms, but is absent in modern mammals, which are endothermic. Near the end of the Permian, dicynodonts, therocephalians and cynodonts show parallel trends towards loss of the pineal foramen, and the foramen is completely absent in probainognathian cynodonts. Evidence from oxygen isotopes, which are correlated with body temperature, suggests that most Permian therapsids were ectotherms and that endothermy evolved convergently in dicynodonts and cynodonts near the end of the Permian. In contrast, evidence from histology suggests that endothermy is shared across Therapsida, whereas estimates of blood flow rate and lifespan in the mammaliaform Morganucodon suggest that even early mammaliaforms had reptile-like metabolic rates. Evidence for respiratory turbinates, which have been hypothesized to be indicative of endothermy, was reported in the therocephalian Glanosuchus, but subsequent study showed that the apparent attachment sites for turbinates may simply be the result of distortion of the skull. 
Integument The evolution of integument in therapsids is poorly known, and there are few fossils that provide direct evidence for the presence or absence of fur. The most basal synapsids with unambiguous direct evidence of fur are docodonts, which are mammaliaforms very closely related to crown-group mammals. Two "mummified" juvenile specimens of the dicynodont Lystrosaurus murrayi preserve skin impressions; the skin is hairless, leathery, and dimpled, somewhat comparable to elephant skin. Fossilized facial skin from the dinocephalian Estemmenosuchus has been described as showing that the skin was glandular and lacked both scales and hair. Coprolites containing what appear to be hairs have been found from the Late Permian. Though the source of these hairs is not known with certainty, they may suggest that hair was present in at least some Permian therapsids. The closure of the pineal foramen in probainognathian cynodonts may indicate a mutation in the regulatory gene Msx2, which is involved in both the closure of the skull roof and the maintenance of hair follicles in mice. This suggests that hair may have first evolved in probainognathians, though it does not entirely rule out an earlier origin of fur. Whiskers probably evolved in probainognathian cynodonts. Some studies had inferred an earlier origin for whiskers based on the presence of foramina on the snout of therocephalians and early cynodonts, but the arrangement of foramina in these taxa actually closely resembles lizards, which would make the presence of mammal-like whiskers unlikely. Evolutionary history Therapsids evolved from a group of pelycosaurs called sphenacodonts. Therapsids became the dominant land animals in the Middle Permian, displacing the pelycosaurs. Therapsida consists of four major clades: the dinocephalians, the herbivorous anomodonts, the carnivorous biarmosuchians, and the mostly carnivorous theriodonts. After a brief burst of evolutionary diversity, the dinocephalians died out in the later Middle Permian (Guadalupian) but the anomodont dicynodonts as well as the theriodont gorgonopsians and therocephalians flourished, being joined at the very end of the Permian by the first of the cynodonts. Like all land animals, the therapsids were seriously affected by the Permian–Triassic extinction event, with the very successful gorgonopsians and the biarmosuchians dying out altogether and the remaining groups—dicynodonts, therocephalians and cynodonts—reduced to a handful of species each by the earliest Triassic. Surviving dicynodonts were represented by two families of disaster taxa (Lystrosauridae and Myosauridae), the scarcely known Kombuisia, and a single group of large stocky herbivores, the Kannemeyeriiformes, which were the only dicynodont lineage to thrive during the Triassic. They and the medium-sized cynodonts (including both carnivorous and herbivorous forms) flourished worldwide throughout the Early and Middle Triassic. They disappear from the fossil record across much of Pangea at the end of the Carnian (Late Triassic), although they continued for some time longer in the wet equatorial band and the south. Some exceptions were the still further derived eucynodonts. At least three groups of them survived. They all appeared in the Late Triassic period. The extremely mammal-like family, Tritylodontidae, survived into the Early Cretaceous. Another extremely mammal-like family, Tritheledontidae, are unknown later than the Early Jurassic. 
Mammaliaformes was the third group, including Morganucodon and similar animals. Some taxonomists refer to these animals as "mammals", though most limit the term to the mammalian crown group. The non-eucynodont cynodonts survived the Permian–Triassic extinction; Thrinaxodon, Galesaurus and Platycraniellus are known from the Early Triassic. By the Middle Triassic, however, only the eucynodonts remained. The therocephalians, relatives of the cynodonts, managed to survive the Permian–Triassic extinction and continued to diversify through the Early Triassic period. Approaching the end of the period, however, the therocephalians were in decline to eventual extinction, likely outcompeted by the rapidly diversifying Saurian lineage of diapsids, equipped with sophisticated respiratory systems better suited to the very hot, dry and oxygen-poor world of the End-Triassic. Dicynodonts were among the most successful groups of therapsids during the Late Permian; they survived through to near the end of the Triassic. Mammals are the only living therapsids. The mammalian crown group, which evolved in the Early Jurassic period, radiated from a group of mammaliaforms that included the docodonts. The mammaliaforms themselves evolved from probainognathians, a lineage of the eucynodont suborder. Classification Six major groups of therapsids are generally recognized: Biarmosuchia, Dinocephalia, Anomodontia, Gorgonopsia, Therocephalia and Cynodontia. A clade uniting therocephalians and cynodonts, called Eutheriodontia, is well supported, but relationships among the other four clades are controversial. The most widely accepted hypothesis of therapsid relationships, the Hopson and Barghausen paradigm, was first proposed in 1986. Under this hypothesis, biarmosuchians are the earliest-diverging major therapsid group, with the other five groups forming the Eutherapsida, and within Eutherapsida, gorgonopsians are the sister taxon of eutheriodonts, together forming the Theriodontia. Hopson and Barghausen did not initially come to a conclusion about how dinocephalians, anomodonts and theriodonts were related to each other, but subsequent studies suggested that anomodonts and theriodonts should be classified together as the Neotherapsida. However, there remains debate over these relationships; in particular, some studies have suggested that anomodonts, not gorgonopsians, are the sister taxon of Eutheriodontia, other studies have found dinocephalians and anomodonts to form a clade, and both the phylogenetic position and monophyly of Biarmosuchia remain controversial. In addition to the six major groups, there are several other lineages and species of uncertain classification. Raranimus from the early Middle Permian of China is likely to be the earliest-diverging known therapsid. Tetraceratops from the Early Permian of the United States has been hypothesized to be an even earlier-diverging therapsid, but more recent study has suggested it is more likely to be a non-therapsid sphenacodontian. Biarmosuchia Biarmosuchia is the most recently recognized therapsid clade, first recognized as a distinct lineage by Hopson and Barghausen in 1986 and formally named by Sigogneau-Russell in 1989. Most biarmosuchians were previously classified as gorgonopsians. Biarmosuchia includes the distinctive Burnetiamorpha, but support for the monophyly of Biarmosuchia is relatively low. Many biarmosuchians are known for extensive cranial ornamentation. Dinocephalia Dinocephalia comprises two distinctive groups, the Anteosauria and Tapinocephalia. 
Historically, carnivorous dinocephalians, including both anteosaurs and titanosuchids, were called titanosuchians and classified as members of Theriodontia, while the herbivorous Tapinocephalidae were classified as members of Anomodontia. Anomodontia Anomodontia includes the dicynodonts, a clade of tusked, beaked herbivores, and the most diverse and long-lived clade of non-cynodont therapsids. Other members of Anomodontia include Suminia, which is thought to have been a climbing form. Gorgonopsia Gorgonopsia is an abundant but morphologically homogeneous group of saber-toothed predators. Therocephalia It has been suggested that Therocephalia might not be monophyletic, with some species more closely related to cynodonts than others. However, most studies regard Therocephalia as monophyletic. Cynodontia Cynodonts are the most diverse and longest-lived of the therapsid groups, as Cynodontia includes mammals. Cynodonts are the only major therapsid clade to lack a Middle Permian fossil record, with the earliest-known cynodont being Charassognathus from the Wuchiapingian age of the Late Permian. Non-mammalian cynodonts include both carnivorous and herbivorous forms.
Biology and health sciences
Proto-mammals
Animals