59107367
https://en.wikipedia.org/wiki/Unbihexium
Unbihexium
Unbihexium, also known as element 126 or eka-plutonium, is a hypothetical chemical element; it has atomic number 126 and placeholder symbol Ubh. Unbihexium and Ubh are the temporary IUPAC name and symbol, respectively, until the element is discovered, confirmed, and a permanent name is decided upon. In the periodic table, unbihexium is expected to be a g-block superactinide and the eighth element in the 8th period. Unbihexium has attracted attention among nuclear physicists, especially in early predictions targeting properties of superheavy elements, as 126 may be a magic number of protons near the center of an island of stability, leading to longer half-lives, especially for 310Ubh or 354Ubh, which may also have magic numbers of neutrons. Early interest in possible increased stability led to the first attempted synthesis of unbihexium in 1971 and searches for it in nature in subsequent years. Despite several reported observations, more recent studies suggest that these experiments were insufficiently sensitive; hence, no unbihexium has been found naturally or artificially. Predictions of the stability of unbihexium vary greatly among different models; some suggest the island of stability may instead lie at a lower atomic number, closer to copernicium and flerovium. Unbihexium is predicted to be a chemically active superactinide, exhibiting a variety of oxidation states from +1 to +8, and possibly being a heavier congener of plutonium. An overlap in energy levels of the 5g, 6f, 7d, and 8p orbitals is also expected, which complicates predictions of chemical properties for this element.

History

Synthesis attempts

The first and only attempt to synthesize unbihexium, which was unsuccessful, was performed in 1971 at CERN (European Organization for Nuclear Research) by René Bimbot and John M. Alexander using the hot fusion reaction:

232Th + 84Kr → 316Ubh* → no atoms

High-energy (13–15 MeV) alpha particles were observed and taken as possible evidence for the synthesis of unbihexium. Subsequent unsuccessful experiments with higher sensitivity suggest that the 10 mb sensitivity of this experiment was too low; hence, the formation of unbihexium nuclei in this reaction was deemed highly unlikely.

Possible natural occurrence

A study in 1976 by a group of American researchers from several universities proposed that primordial superheavy elements, mainly livermorium, unbiquadium, unbihexium, and unbiseptium, with half-lives exceeding 500 million years, could be a cause of unexplained radiation damage (particularly radiohalos) in minerals. This prompted many researchers to search for them in nature from 1976 to 1983. A group led by Tom Cahill, a professor at the University of California at Davis, claimed in 1976 that they had detected alpha particles and X-rays with the right energies to cause the damage observed, supporting the presence of these elements, especially unbihexium. Others claimed that none had been detected, and questioned the proposed characteristics of primordial superheavy nuclei. In particular, they cited that the magic number N = 228 necessary for enhanced stability would create a neutron-rich nucleus in unbihexium that might not be beta-stable, although several calculations suggest that 354Ubh may indeed be stable against beta decay. This activity was also proposed to be caused by nuclear transmutations in natural cerium, casting further ambiguity on this claimed observation of superheavy elements.
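Those half-life figures make the survival question quantitative. Below is a minimal Python sketch (the 4.5-billion-year age assumed for the host minerals is an input chosen for illustration, not a number from this article) estimating what fraction of a primordial nuclide with the 500-million-year half-life cited above would remain today.

```python
# Fraction of a primordial nuclide surviving after time t:
# N/N0 = 2 ** (-t / t_half)
def surviving_fraction(t_years: float, half_life_years: float) -> float:
    return 2.0 ** (-t_years / half_life_years)

AGE_YEARS = 4.5e9   # assumed time since mineral formation
HALF_LIFE = 5.0e8   # the 500 Myr lower bound cited above

print(f"{surviving_fraction(AGE_YEARS, HALF_LIFE):.2%}")  # ~0.20% (= 2**-9)
```

Even at that optimistic lower bound on the half-life, only about one part in 500 of the original inventory would survive, consistent with the observation below that any such elements may by now have decayed to mere traces.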
Unbihexium has received particular attention in these investigations, for its speculated location in the island of stability may increase its abundance relative to other superheavy elements. Any naturally occurring unbihexium is predicted to be chemically similar to plutonium and may exist with primordial 244Pu in the rare earth mineral bastnäsite. In particular, plutonium and unbihexium are predicted to have similar valence configurations, leading to the existence of unbihexium in the +4 oxidation state. Therefore, should unbihexium occur naturally, it may be possible to extract it using techniques similar to those used for the accumulation of cerium and plutonium. Likewise, unbihexium could also exist in monazite with other lanthanides and actinides that would be chemically similar. Recent doubt on the existence of primordial 244Pu casts uncertainty on these predictions, however, as the nonexistence (or minimal existence) of plutonium in bastnäsite would inhibit possible identification of unbihexium as its heavier congener.

The possible extent of primordial superheavy elements on Earth today is uncertain. Even if they are confirmed to have caused the radiation damage long ago, they might now have decayed to mere traces, or even be completely gone. It is also uncertain if such superheavy nuclei may be produced naturally at all, as spontaneous fission is expected to terminate the r-process responsible for heavy element formation between mass number 270 and 290, well before elements such as unbihexium may be formed. A recent hypothesis tries to explain the spectrum of Przybylski's Star by naturally occurring flerovium, unbinilium, and unbihexium.

Naming

Using the 1979 IUPAC recommendations, the element should be temporarily called unbihexium (symbol Ubh) until it is discovered, the discovery is confirmed, and a permanent name is chosen. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations are mostly ignored among scientists who work theoretically or experimentally on superheavy elements, who call it "element 126", with the symbol E126, (126), or 126. Some researchers have also referred to unbihexium as eka-plutonium, a name derived from the system Dmitri Mendeleev used to predict unknown elements, though such an extrapolation might not work for g-block elements with no known congeners, and eka-plutonium would instead refer to element 146 or 148 when the term is meant to denote the element directly below plutonium.

Prospects for future synthesis

Every element from mendelevium onward was produced in fusion-evaporation reactions, culminating in the discovery of the heaviest known element, oganesson, in 2002 and most recently tennessine in 2010. These reactions approached the limit of current technology; for example, the synthesis of tennessine required 22 milligrams of 249Bk and an intense 48Ca beam for six months. The intensity of beams in superheavy element research cannot exceed 10¹² projectiles per second without damaging the target and detector, and producing larger quantities of increasingly rare and unstable actinide targets is impractical. Consequently, future experiments must be done at facilities such as the superheavy element factory (SHE-factory) at the Joint Institute for Nuclear Research (JINR) or RIKEN, which will allow experiments to run for longer time periods with increased detection capabilities and enable otherwise inaccessible reactions.
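To see why these beam and cross-section limits are so restrictive, it helps to fold them into a rough production-rate estimate: events/s = beam intensity × target areal density × cross section. In the Python sketch below, the 0.5 mg/cm² target thickness is a hypothetical illustrative value, not a figure from this article.

```python
AVOGADRO = 6.022e23

def event_rate(beam_per_s: float, target_mg_cm2: float,
               molar_mass: float, sigma_fb: float) -> float:
    """Events/s = beam intensity x target atoms per cm^2 x cross section."""
    atoms_cm2 = target_mg_cm2 * 1e-3 / molar_mass * AVOGADRO
    return beam_per_s * atoms_cm2 * sigma_fb * 1e-39  # 1 fb = 1e-39 cm^2

# 1e12 projectiles/s (the practical limit quoted above), a hypothetical
# 0.5 mg/cm^2 target of 249Cf, and an optimistic 1 fb cross section:
r = event_rate(1e12, 0.5, 249.0, 1.0)
print(f"{r:.1e} events/s -> one atom per ~{1 / (r * 3.15e7):.0f} years")
```

At roughly one atom every few decades, even a 1 fb cross section sits at the edge of practicality, matching the conclusion below that such an obstacle could only be overcome with more sensitive equipment.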
Even so, it will likely be a great challenge to synthesize elements beyond unbinilium (120) or unbiunium (121), given their short predicted half-lives and low predicted cross sections. It has been suggested that fusion-evaporation will not be feasible as a route to unbihexium. As 48Ca cannot be used for the synthesis of elements beyond atomic number 118 or possibly 119, the only alternatives are increasing the atomic number of the projectile or studying symmetric or near-symmetric reactions. One calculation suggests that the cross section for producing unbihexium from 249Cf and 64Ni may be nine orders of magnitude lower than the detection limit; such results are also suggested by the non-observation of unbinilium and unbibium in reactions with heavier projectiles and by experimental cross section limits. If Z = 126 represents a closed proton shell, compound nuclei may have greater survival probability, and the use of 64Ni may be more feasible for producing nuclei with 122 < Z < 126, especially for compound nuclei near the closed shell at N = 184. However, the cross section still might not exceed 1 fb, posing an obstacle that may only be overcome with more sensitive equipment.

Predicted properties

Nuclear stability and isotopes

Extensions of the nuclear shell model predicted that the next magic numbers after Z = 82 and N = 126 (corresponding to 208Pb, the heaviest stable nucleus) were Z = 126 and N = 184, making 310Ubh the next candidate for a doubly magic nucleus. These speculations led to interest in the stability of unbihexium as early as 1957; Gertrude Scharff Goldhaber was one of the first physicists to predict a region of increased stability in the vicinity of, and possibly centered at, unbihexium. This notion of an "island of stability" comprising longer-lived superheavy nuclei was popularized by University of California professor Glenn Seaborg in the 1960s. In this region of the periodic table, N = 184 and N = 228 have been suggested as closed neutron shells, and various atomic numbers, including Z = 126, have been proposed as closed proton shells. The extent of stabilizing effects in the region of unbihexium is uncertain, however, due to predictions of shifting or weakening of the proton shell closure and possible loss of double magicity. More recent research predicts the island of stability to instead be centered at beta-stable isotopes of copernicium (291Cn and 293Cn) or flerovium (Z = 114), which would place unbihexium well above the island and result in short half-lives regardless of shell effects.

Earlier models suggested the existence of long-lived nuclear isomers resistant to spontaneous fission in the region near 310Ubh, with half-lives on the order of millions or billions of years. However, more rigorous calculations as early as the 1970s yielded contradictory results; it is now believed that the island of stability is not centered at 310Ubh, and thus will not enhance the stability of this nuclide. Instead, 310Ubh is thought to be very neutron-deficient and susceptible to alpha decay and spontaneous fission in less than a microsecond, and it may even lie at or beyond the proton drip line. A 2016 calculation on the decay properties of 288–339Ubh upholds these predictions; the isotopes lighter than 313Ubh (including 310Ubh) may indeed lie beyond the drip line and decay by proton emission, 313–327Ubh will alpha decay, possibly reaching flerovium and livermorium isotopes, and heavier isotopes will decay by spontaneous fission.
This study and a quantum tunneling model predict alpha-decay half-lives under a microsecond for isotopes lighter than 318Ubh, rendering them impossible to identify experimentally. Hence, the isotopes 318–327Ubh may be synthesized and detected, and may even constitute a region of increased stability against fission around N ~ 198 with half-lives up to several seconds, though such a region of increased stability is completely absent in other models. A "sea of instability" defined by very low fission barriers (caused by greatly increasing Coulomb repulsion in superheavy elements) and consequently fission half-lives on the order of 10⁻¹⁸ seconds is predicted across various models. Although the exact limit of stability for half-lives over one microsecond varies, stability against fission is strongly dependent on the N = 184 and N = 228 shell closures and drops off rapidly beyond the influence of the shell closure. Such an effect may be reduced, however, if nuclear deformation in intermediate isotopes leads to a shift in magic numbers; a similar phenomenon was observed in the deformed doubly magic nucleus 270Hs. This shift could then lead to longer half-lives, perhaps on the order of days, for isotopes such as 342Ubh that would also lie on the beta-stability line. A second island of stability for spherical nuclei may exist in unbihexium isotopes with many more neutrons, centered at 354Ubh and conferring additional stability in N = 228 isotones near the beta-stability line. Originally, a short spontaneous-fission half-life of 39 milliseconds was predicted for 354Ubh, though a partial alpha-decay half-life for this isotope was predicted to be 18 years. More recent analysis suggests that this isotope may have a half-life on the order of 100 years should the closed shells have strong stabilizing effects, placing it at the peak of an island of stability. It may also be possible that 354Ubh is not doubly magic, as the Z = 126 shell is predicted to be relatively weak, or in some calculations, completely nonexistent. This suggests that any relative stability in unbihexium isotopes would be due only to neutron shell closures that may or may not have a stabilizing effect at Z = 126.

Chemical

Unbihexium is expected to be the sixth member of a superactinide series. It may have similarities to plutonium, as both elements have eight valence electrons over a noble gas core. In the superactinide series, the Aufbau principle is expected to break down due to relativistic effects, and an overlap of the energy levels of the 7d, 8p, and especially 5g and 6f orbitals is expected, which renders predictions of chemical and atomic properties of these elements very difficult. The ground state electron configuration of unbihexium is thus predicted to be [Og] 5g2 6f2 7d1 8s2 8p1 or [Og] 5g1 6f4 8s2 8p1, in contrast to [Og] 5g6 8s2 derived from the Aufbau principle. As with the other early superactinides, it is predicted that unbihexium will be able to lose all eight valence electrons in chemical reactions, rendering a variety of oxidation states up to +8 possible. The +4 oxidation state is predicted to be most common, in addition to +2 and +6. Unbihexium should be able to form the tetroxide UbhO4 and the hexahalides UbhF6 and UbhCl6, the latter with a fairly strong bond dissociation energy of 2.68 eV.
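A simple consistency check on the competing ground-state configurations quoted above is that each must account for all 126 electrons once the 118-electron [Og] core is included. A minimal Python sketch:

```python
# Occupancies beyond the 118-electron [Og] core for each predicted
# ground-state configuration of unbihexium (Z = 126).
OG_CORE = 118
candidates = {
    "[Og] 5g2 6f2 7d1 8s2 8p1": {"5g": 2, "6f": 2, "7d": 1, "8s": 2, "8p": 1},
    "[Og] 5g1 6f4 8s2 8p1":     {"5g": 1, "6f": 4, "8s": 2, "8p": 1},
    "[Og] 5g6 8s2 (Aufbau)":    {"5g": 6, "8s": 2},
}
for name, occupancy in candidates.items():
    print(name, "-> Z =", OG_CORE + sum(occupancy.values()))  # 126 for all
```

All three candidates distribute the same eight valence electrons over the [Og] core, which is why oxidation states up to +8 remain plausible regardless of which configuration wins.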
Calculations suggest that a diatomic UbhF molecule will feature a bond between the 5g orbital in unbihexium and the 2p orbital in fluorine, thus characterizing unbihexium as an element whose 5g electrons should actively participate in bonding. It is also predicted that the Ubh6+ ion (in particular, in UbhF6) and the Ubh7+ ion will have the electron configurations [Og] 5g2 and [Og] 5g1, respectively, in contrast to the [Og] 6f1 configuration seen in Ubt4+ and Ubq5+, which bears more resemblance to that of their actinide homologs. The activity of 5g electrons may influence the chemistry of superactinides such as unbihexium in new ways that are difficult to predict, as no known elements have electrons in a g orbital in the ground state.
Physical sciences
Periods
Chemistry
61747300
https://en.wikipedia.org/wiki/2I/Borisov
2I/Borisov
2I/Borisov, originally designated C/2019 Q4 (Borisov), is the first observed rogue comet and the second observed interstellar interloper after ʻOumuamua. It was discovered by the Crimean amateur astronomer and telescope maker Gennadiy Borisov on 29 August 2019 UTC (30 August local time). 2I/Borisov has a heliocentric orbital eccentricity of 3.36 and is not bound to the Sun. The comet passed through the ecliptic of the Solar System at the end of October 2019, and made its closest approach to the Sun, at just over 2 AU, on 8 December 2019. The comet passed closest to Earth on 28 December 2019. In November 2019, astronomers from Yale University said that the comet's tail was 14 times the size of Earth, and stated, "It's humbling to realize how small Earth is next to this visitor from another solar system."

Nomenclature

The comet is formally called "2I/Borisov" by the International Astronomical Union (IAU), with "2I" or "2I/2019 Q4" being its designation and "Borisov" being its name, but it is sometimes referred to as "Comet Borisov", especially in the popular press. As the second observed interstellar interloper after 1I/ʻOumuamua, it was given the "2I" designation, where "I" stands for interstellar. The name Borisov follows the tradition of naming comets after their discoverers. Before its final designation as 2I/Borisov, the object was referred to by other names: early orbit solutions suggested that the comet could be a near-Earth object, and it was thus listed on the IAU Minor Planet Center's (MPC) Near-Earth Object Confirmation Page (NEOCP) as gb00234. Further refinements after thirteen days of observation made clear that the object was a hyperbolic comet, and it was given the designation C/2019 Q4 (Borisov) by the Minor Planet Center on 11 September 2019. A number of other astronomers, including Davide Farnocchia, Bill Gray, and David Tholen, concluded that the comet was interstellar. On 24 September 2019, the IAU announced that the Working Group for Small Body Nomenclature had kept the name Borisov, giving the comet the interstellar designation 2I/Borisov and formally announcing that the comet was indeed interstellar.

Characteristics

Unlike ʻOumuamua, which had an asteroidal appearance, 2I/Borisov's nucleus was surrounded by a coma, a cloud of dust and gas.

Size and shape

Early estimates of the diameter of 2I/Borisov's nucleus varied widely. Unlike Solar System comets, 2I/Borisov noticeably shrank during its passage through the Solar System, losing at least 0.4% of its mass before perihelion. The amplitude of its non-gravitational acceleration places an upper limit of 0.4 km on the nucleus size, consistent with a previous Hubble Space Telescope upper limit of 0.5 km. The comet did not come much closer to Earth than 300 million km, which prevented the use of radar to directly determine its size and shape. This could be done using the occultation of a star by 2I/Borisov, but an occultation would be difficult to predict, requiring a precise determination of its orbit, and the detection would necessitate a network of small telescopes.

Rotation

A study using observations from Hubble could not find a variation in the light curve. According to this study, the rotational period must be longer than 10 hours. A study with CSA's NEOSSat found a period of 13.2 ± 0.2 days, which is unlikely to be the nuclear spin. Monte Carlo simulations based on the available orbit determinations suggest that the equatorial obliquity of 2I/Borisov could be about 59 degrees or 90 degrees; the latter is favored by the latest orbit determination.
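The eccentricity of 3.36 quoted in the lead, together with the perihelion distance of just over 2 AU given above, fixes the hyperbolic excess velocity. The Python sketch below applies the standard two-body relations for a heliocentric hyperbola; the inputs are rounded values from this article, so treat the output as an estimate rather than the fitted orbit solution.

```python
import math

GM_SUN = 1.327e11   # km^3/s^2, solar gravitational parameter
AU_KM = 1.496e8     # km per astronomical unit

def v_infinity(e: float, q_au: float) -> float:
    """Hyperbolic excess velocity in km/s.
    For a hyperbola, |a| = q / (e - 1) and v_inf = sqrt(GM / |a|)."""
    return math.sqrt(GM_SUN * (e - 1.0) / (q_au * AU_KM))

print(f"{v_infinity(3.36, 2.0):.0f} km/s")  # ~32 km/s
```

The result, about 32 km/s, dwarfs the few km/s that planetary perturbations could impart, which is why the eccentricity and excess velocity together are treated as decisive indicators of an interstellar origin in the trajectory discussion below.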
Chemical makeup and nucleus structure

David Jewitt and Jane Luu estimate from the size of its coma that the comet was producing 2 kg/s of dust and losing 60 kg/s of water. They extrapolate that it became active in June 2019, when it was between 4 and 5 AU from the Sun. A search of image archives found precovery observations of 2I/Borisov as early as 13 December 2018, but not on 21 November 2018, indicating that it became active between these dates. 2I/Borisov's composition appears uncommon yet not unseen in Solar System comets, being relatively depleted in water and diatomic carbon (C2) but enriched in carbon monoxide and amines (R-NH2). The molar ratio of carbon monoxide to water in 2I/Borisov's tail is 35–105%, resembling the unusual blue-tailed comet C/2016 R2 (PANSTARRS), in contrast to the average ratio of 4% for Solar System comets. 2I/Borisov has also produced a minor amount of neutral nickel emission, attributed to an unknown volatile compound of nickel. The nickel-to-iron abundance ratio is similar to that of Solar System comets.

Trajectory

As seen from Earth, the comet was in the northern sky from September until mid-November. It crossed the ecliptic plane on 26 October near the star Regulus, and the celestial equator on 13 November 2019, entering the southern sky. On 8 December 2019, the comet reached perihelion (closest approach to the Sun) and was near the inner edge of the asteroid belt. In late December, it made its closest approach to Earth, 1.9 AU, and had a solar elongation of about 80°. Due to its 44° orbital inclination, 2I/Borisov did not make any notable close approaches to the planets. 2I/Borisov entered the Solar System from the direction of Cassiopeia, near the border with Perseus. This direction indicates that it originates from the galactic plane rather than from the galactic halo. It will leave the Solar System in the direction of Telescopium. In interstellar space, 2I/Borisov takes roughly 9,000 years to travel a light-year relative to the Sun.

2I/Borisov's trajectory is extremely hyperbolic, having an orbital eccentricity of 3.36. This is much higher than that of the 300+ known weakly hyperbolic comets, with heliocentric eccentricities just over 1, and even ʻOumuamua, with an eccentricity of 1.2. 2I/Borisov also has a hyperbolic excess velocity of about 32 km/s, much higher than could be explained by perturbations, which can produce velocities at an infinite distance from the Sun of no more than a few km/s. These two parameters are important indicators of 2I/Borisov's interstellar origin. For comparison, the Voyager 1 spacecraft, which is leaving the Solar System, is traveling at about 17 km/s. 2I/Borisov has a much larger eccentricity than ʻOumuamua due to its higher excess velocity and its significantly higher perihelion distance. At this larger distance, the Sun's gravity is less able to alter its path as it passes through the Solar System.

Observation

Discovery

The comet was discovered on 30 August 2019 by amateur astronomer Gennadiy Borisov at his personal observatory MARGO in Nauchnyy, Crimea, using a 0.65-meter telescope he designed and built himself. The discovery has been compared to the discovery of Pluto by Clyde Tombaugh. Tombaugh was also an amateur astronomer who was building his own telescopes, although he discovered Pluto using Lowell Observatory's astrograph. At discovery, the comet was inbound toward the Sun with a solar elongation of 38°. 2I/Borisov's interstellar origin required a couple of weeks to confirm.
Early orbital solutions based on initial observations included the possibility that the comet could be a near-Earth object 1.4 AU from the Sun in an elliptical orbit with an orbital period of less than 1 year. Later, using 151 observations over 12 days, NASA Jet Propulsion Laboratory's Scout gave an eccentricity range of 2.9–4.5. But with an observation arc of only 12 days, there was still some doubt that it was interstellar, because the observations were at a low solar elongation, which could introduce biases in the data such as differential refraction. By assuming large non-gravitational forces on the highly eccentric orbit, a solution could be generated with an eccentricity of about 1, a small Earth minimum orbit intersection distance (MOID), and a perihelion at 0.90 AU around 30 December 2019. However, based on available observations, the orbit could only be parabolic if non-gravitational forces (thrust due to outgassing) affected its orbit more than for any previously known comet. Eventually, with more observations, the orbit converged to the hyperbolic solution that indicated an interstellar origin, and non-gravitational forces could not explain the motion.

Observation

The last observations were in July 2020, seven months after perihelion. Observation of 2I/Borisov was aided by the fact that the comet was detected while inbound towards the Solar System. ʻOumuamua had been discovered as it was leaving the system, and thus could only be observed for 80 days before it was out of range. Because its closest approach occurred near traditional year-end holidays, and because of the capability for extended observations, some astronomers called 2I/Borisov a "Christmas comet". Observations using the Hubble Space Telescope began on 12 October, when the comet moved far enough from the Sun to be safely observed by the telescope. Hubble is less affected by the confounding effects of the coma than ground-based telescopes, allowing it to study the rotational light curve of 2I/Borisov's nucleus and thereby facilitating an estimate of its size and shape.

Comet chemistry

A preliminary (low-resolution) visible spectrum of 2I/Borisov was similar to those of typical Oort Cloud comets. Its color indexes also resemble those of the Solar System's long-period comets. Emission lines indicated the presence of cyanide (CN), which is typically the first gas detected in Solar System comets, including comet Halley. This was the first detection of gas emissions from an interstellar object. The non-detection of diatomic carbon was also reported in October 2019, with upper limits on the C2-to-CN ratio of 0.095 and 0.3 from different analyses. Diatomic carbon was positively detected in November 2019; the measured C2-to-CN ratio resembles that of a carbon-chain-depleted group of comets, which are either Jupiter-family comets or rare blue-colored carbon monoxide comets exemplified by C/2016 R2. By the end of November 2019, C2 production had dramatically increased, and the C2-to-CN ratio reached 0.61, along with the appearance of bright amine (NH2) bands. Atomic oxygen has also been detected; from this, observers estimated an outgassing of water at a rate similar to that of Solar System comets. Initially, neither water nor OH lines were directly detected in September 2019. The first unambiguous detection of OH lines came on 1 November 2019, and OH production peaked in early December 2019.

Suspected nucleus fragmentation

The comet did come within about 2 AU of the Sun, a distance at which many small comets have been found to disintegrate.
The probability that a comet disintegrates strongly depends on the size of its nucleus; Guzik et al. estimated a probability of 10% that this would happen to 2I/Borisov. Jewitt and Luu compared 2I/Borisov to C/2019 J2 (Palomar), another comet of similar size that disintegrated in May 2019 at a distance of 1.9 AU from the Sun. In the event that the nucleus disintegrated, as is sometimes seen with small comets, Hubble could be used to study the evolution of the disintegration process. The severe outburst in February–March 2020 led to suspicion of "ongoing nucleus fragmentation" by 12 March. Indeed, images from the Hubble Space Telescope taken on 30 March 2020 showed a non-stellar core, indicating that 2I/Borisov had ejected a large fragment sunward. The ejection is estimated to have begun around 7 March, and may have occurred during one of the outbursts near that time. A follow-up study, reported on 6 April 2020, observed only a single object, noting that the fragment component had disappeared. Later analysis of the event showed that the ejected dust and fragments had a combined mass of about 0.1% of the total mass of the nucleus, making the event a large outburst rather than a true fragmentation.

Exploration

The high hyperbolic excess velocity of 2I/Borisov makes it hard for a spacecraft to reach the comet with existing technology. According to a team of the Initiative for Interstellar Studies, a 202 kg (445 lb) spacecraft could theoretically have been sent in July 2018 to intercept 2I/Borisov using a Falcon Heavy-class launcher, or 765 kg (1,687 lb) on a Space Launch System (SLS)-class booster, but only if the object had been discovered much earlier than it was, in time to meet the optimal launch date. Launches after the actual discovery date would have ruled out Falcon Heavy-class rockets, requiring Oberth maneuvers near Jupiter and near the Sun and a larger launch vehicle. Even an SLS-class launcher would only have been able to deliver a payload (such as a CubeSat) into a trajectory that would intercept 2I/Borisov in 2045, at a high relative speed. According to congressional testimony, NASA may need at least five years of preparation to launch such an intercepting mission.
Physical sciences
Other notable objects
Astronomy
50530370
https://en.wikipedia.org/wiki/Neon%20compounds
Neon compounds
Neon compounds are chemical compounds containing the element neon (Ne) with other molecules or elements from the periodic table. Compounds of the noble gas neon were long believed not to exist, but there are now known to be molecular ions containing neon, as well as temporary excited neon-containing molecules called excimers. Several neutral neon molecules have also been predicted to be stable, but are yet to be discovered in nature. Neon has been shown to crystallize with other substances and form clathrates or Van der Waals solids. Neon has a high first ionization potential of 21.564 eV, exceeded only by that of helium (24.587 eV), requiring too much energy to make stable ionic compounds. Neon's polarisability of 0.395 Å³ is the second lowest of any element (only helium's is lower). Low polarisability means there will be little tendency to link to other atoms. Neon has a Lewis basicity, or proton affinity, of 2.06 eV. Neon is theoretically less reactive than helium, making it the least reactive of all the elements.

Van der Waals molecules

Van der Waals molecules are those where neon is held onto other components by London dispersion forces. The forces are very weak, so the bonds are disrupted if there is too much molecular vibration, which happens if the temperature is too high (above that of solid neon). Neon atoms themselves can be linked together to make clusters of atoms. The dimer Ne2, the trimer Ne3, and the neon tetramer Ne4 have all been characterised by Coulomb explosion imaging. The molecules are made in an expanding supersonic jet of neon gas. The neon dimer has an average distance of 3.3 Å between atoms. The neon trimer is shaped approximately like an equilateral triangle with sides 3.3 Å long. However, the shape is floppy, and isosceles triangle shapes are also common. The first excited state of the neon trimer is 2 meV above the ground state. The neon tetramer takes the form of a tetrahedron with sides around 3.2 Å. Van der Waals molecules with metals include LiNe. Other Van der Waals molecules include CF4Ne, CCl4Ne, Ne2Cl2, Ne3Cl2, I2Ne, I2Ne2, I2Ne3, I2Ne4, and I2NexHey (x = 1–5, y = 1–4). Van der Waals molecules formed with organic molecules in the gas phase include aniline, dimethyl ether, 1,1-difluoroethylene, pyrimidine, chlorobenzene, cyclopentanone, cyanocyclobutane, and cyclopentadienyl.

Ligands

Neon can form a very weak bond to a transition metal atom as a ligand, for example in Cr(CO)5Ne, Mo(CO)5Ne, and W(CO)5Ne. NeNiCO is predicted to have a binding energy of 2.16 kcal/mol. The presence of neon changes the bending frequency of Ni−C−O by 36 cm⁻¹. NeAuF and NeBeS have been isolated in noble gas matrices. NeBeCO3 has been detected by infrared spectroscopy in a solid neon matrix; it was made from beryllium gas, dioxygen, and carbon monoxide. The cyclic molecule Be2O2 can be made by evaporating beryllium with a laser in the presence of oxygen and an excess of inert gas. It coordinates two noble gas atoms, and its spectra have been measured in solid neon matrices. Known neon-containing molecules are the homoleptic Ne.Be2O2.Ne and the heteroleptic Ne.Be2O2.Ar and Ne.Be2O2.Kr. The neon atoms are attracted to the beryllium atoms, which carry a positive charge in this molecule. Beryllium sulfite molecules, BeO2S, can also coordinate neon onto the beryllium atom. The dissociation energy for neon is 0.9 kcal/mol. When neon is added to the cyclic molecule, the ∠O−Be−O angle decreases and the O−Be bond lengths increase.

Solids

High-pressure Van der Waals solids include (N2)6Ne7.
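The weakness of the dispersion bonds described above can be illustrated with a standard Lennard-Jones model of the Ne–Ne pair. The well depth (ε/kB ≈ 36 K) and size parameter (σ ≈ 2.75 Å) in the Python sketch below are commonly quoted literature values, not numbers from this article, so treat it as indicative only.

```python
import math

EPS_K = 36.0   # assumed well depth / k_B, in kelvin (literature value)
SIGMA = 2.75   # assumed size parameter, in angstroms (literature value)

def lj_energy_kelvin(r_angstrom: float) -> float:
    """Lennard-Jones pair energy in kelvin at separation r."""
    x = (SIGMA / r_angstrom) ** 6
    return 4.0 * EPS_K * (x * x - x)

r_min = 2 ** (1 / 6) * SIGMA   # classical potential minimum, ~3.09 angstroms
print(f"minimum at {r_min:.2f} A, depth {lj_energy_kelvin(r_min):.0f} K")
print(f"energy at the observed 3.3 A separation: {lj_energy_kelvin(3.3):.0f} K")
```

A well only a few tens of kelvins deep is why these molecules survive only near or below the temperature of solid neon, and why the observed mean Ne–Ne separation (3.3 Å) sits slightly outside the classical minimum: zero-point vibration spreads the floppy dimer outward.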
Neon hydrate or neon clathrate, a clathrate, can form in ice II at 480 MPa pressure between 70 K and 260 K. Other neon hydrates are also predicted, resembling hydrogen clathrate and the clathrates of helium; these include the C0, ice Ih, and ice Ic forms. Neon atoms can be trapped inside fullerenes such as C60 and C70. The isotope 22Ne is strongly enriched in carbonaceous chondrite meteorites, at more than 1,000 times its occurrence on Earth. This neon is given off when a meteorite is heated. An explanation for this is that originally, when carbon was condensing from the aftermath of a supernova explosion, cages of carbon formed that preferentially trapped sodium atoms, including 22Na. Forming fullerenes trap sodium orders of magnitude more often than neon, so Na@C60 is formed rather than the more common 20Ne@C60. The 22Na@C60 then decays radioactively to 22Ne@C60, without any other neon isotopes. To make buckyballs with neon inside, buckminsterfullerene can be heated to 600 °C with neon under pressure. At three atmospheres for one hour, about 1 in 8,500,000 molecules ends up as Ne@C60. The concentration inside the buckyballs is about the same as in the surrounding gas. This neon comes back out when heated to 900 °C. Dodecahedrane can trap neon from a neon ion beam to yield Ne@C20H20. Neon also forms an intercalation compound (or alloy) with fullerenes like C60. In this, the Ne atom is not inside the ball, but packs into the spaces in a crystal made from the balls. It intercalates under pressure, but is unstable at standard conditions, degassing in under 24 hours. However, at low temperatures Ne•C60 is stable. Neon can be trapped inside some metal-organic framework compounds. In NiMOF-74, neon can be absorbed at 100 K at pressures up to 100 bar, and shows hysteresis, being retained down to lower pressures. The pores easily take up six atoms per unit cell, in a hexagonal arrangement, with each neon atom close to a nickel atom. A seventh neon atom can be forced under pressure into the centre of the neon hexagons. Neon is pushed into crystals of ammonium iron formate (NH4Fe(HCOO)3) and ammonium nickel formate (NH4Ni(HCOO)3) at 1.5 GPa to yield Ne•NH4Fe(HCOO)3 and Ne•NH4Ni(HCOO)3. The neon atoms become trapped in a cage of five metal triformate units. The windows in the cages are blocked by ammonium ions. Argon does not undergo this, probably because its atoms are too big. Neon can penetrate TON zeolite under pressure. Each unit cell contains up to 12 neon atoms in the Cmc21 structure below 600 MPa; this is double the number of argon atoms that can be inserted into that zeolite. At 270 MPa, occupancy is around 20%. Above 600 MPa, this neon-penetrated phase transforms to a Pbn21 structure, which can be brought back to zero pressure; however, all the neon escapes as it is depressurized. Neon causes the zeolite to remain crystalline; otherwise, at a pressure of 20 GPa, it would collapse and become amorphous. Silica glass also absorbs neon under pressure: at 4 GPa there are 7 atoms of neon per nm³.

Ions

Ionic molecules can include neon, such as clusters in which m goes from 1 to 7 and n from 1 to over 20. HeNe+ (the helium neonide cation) has a relatively strong covalent bond, with the charge distributed across both atoms. When metals are evaporated into a thin gas of hydrogen and neon in a strong electric field, ions are formed that are called neonides or neides.
Ions observed include TiNe+, TiH2Ne+, ZnNe2+, ZrNe2+, NbNe2+, NbHNe2+, MoNe2+, RhNe2+, PdNe+, TaNe3+, WNe2+, WNe3+, ReNe3+, IrNe2+, and (possibly) AuNe+. SiF2Ne2+ can be made from neon using mass spectrometer technology; it has a bond from neon to silicon, a very weak bond to fluorine, and a high electron affinity. NeCCH+, a substituted acetylene, is predicted to be energetically stable by 5.9 kcal/mol, making it one of the most stable neon-containing organic ions. A neon-containing molecular anion was unknown for a long time; in 2020, the observation of the molecular anion [B12(CN)11Ne]− was reported. The vacant boron site in the anion [B12(CN)11]− is very electrophilic and is able to bind neon. [B12(CN)11Ne]− was found to be stable up to 50 K, significantly above the Ne condensation temperature of 25 K; this remarkably high temperature nevertheless indicates a weak chemical interaction.

Ionic clusters

Metal ions can attract multiple neon atoms to form clusters. The shape of the cluster molecules is determined by repulsion between neon atoms and d-orbital electrons from the metal atom. For copper, neonides are known with up to 24 neon atoms, Cu+Ne1–24. Cu+Ne4 and Cu+Ne12 occur in much greater numbers than clusters with more neon atoms. Cu+Ne2 is predicted to be linear. Cu+Ne3 is predicted to be planar and T-shaped, with an Ne−Cu−Ne angle of 91°. Cu+Ne4 is predicted to be square planar (not tetrahedral), with D4h symmetry. For alkali and alkaline earth metals, the M+Ne4 cluster is tetrahedral. Cu+Ne5 is predicted to have a square pyramidal shape. Cu+Ne6 has a seriously distorted octahedral shape. Cu+Ne12 has an icosahedral shape. Anything beyond that is less stable, with extra neon atoms having to start a second shell of atoms around an icosahedral core.

Neonium

The ion NeH+, formed by protonating neon, is called neonium. It is produced in an AC electric discharge through a mixture of neon and hydrogen, with more produced when neon outnumbers hydrogen molecules by 36:1. The dipole moment is 3.004 D. Neonium is also formed by the excited dihydrogen cation reacting with neon:

Ne + H2+* → NeH+ + H

The infrared spectrum around 3 μm has also been measured.

Excimers

The neon excimer Ne2* exists in an excited state in an excimer lamp using a microhollow cathode. This emits strongly in the vacuum ultraviolet between 75 and 90 nm, with a peak at 83 nm. A practical problem is that no window material can transmit these short wavelengths, so the lamp must be used in a vacuum. If about one part in a thousand of hydrogen gas is included, most of the energy is transferred to hydrogen atoms, and there is a strong monochromatic Lyman-alpha emission at 121.567 nm. Cesium can form an excimer molecule with neon, CsNe*. A hydrogen-neon excimer is also known to exist: fluorescence was observed by Möller due to a bound-free transition in the Rydberg molecule NeH*. NeH is metastable, and its existence was proved by mass spectroscopy in which the NeH+ ion is neutralized and then reionized. The spectrum of NeH includes lines at 1.81, 1.60, and 1.46 eV, with a small band at 1.57 eV. The bond length in NeH is calculated as 1.003 Å. A helium-neon excimer can be found in a mixed plasma of helium and neon. Some other excimers are found in solid neon, with luminescence peaking around 11.65 eV, around 10.16–10.37 eV, and at 8.55 eV.

Minerals

Bokiy's crystallochemical classification of minerals included "compounds of neon" as type 82; however, no such minerals were known.
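This section quotes emission features in both wavelength and photon-energy units; the relation E [eV] ≈ 1239.84 / λ [nm] ties them together. A minimal conversion sketch:

```python
HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def nm_to_ev(wavelength_nm: float) -> float:
    return HC_EV_NM / wavelength_nm

print(f"83 nm excimer peak     -> {nm_to_ev(83):.1f} eV")       # ~14.9 eV
print(f"121.567 nm Lyman-alpha -> {nm_to_ev(121.567):.2f} eV")  # ~10.20 eV
print(f"11.65 eV luminescence  -> {HC_EV_NM / 11.65:.0f} nm")   # ~106 nm
```

All of the solid-neon luminescence bands quoted above (8.55–11.65 eV) therefore fall in the vacuum ultraviolet, the same windowless regime as the 75–90 nm excimer-lamp emission.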
Predicted compounds

Analogously to the known ArBeO and the predicted HeBeO (beryllium oxide noble gas adducts), NeBeO is expected to exist, albeit with a very weak bond dissociation energy of 9 kJ/mol. The bond is enhanced by a dipole-induced positive charge on beryllium and a vacancy in the σ orbital on beryllium where it faces the neon.
Physical sciences
Noble gas compounds
Chemistry
44163394
https://en.wikipedia.org/wiki/486958%20Arrokoth
486958 Arrokoth
486958 Arrokoth (provisional designation 2014 MU69; formerly nicknamed Ultima Thule) is a trans-Neptunian object located in the Kuiper belt. Arrokoth became the farthest and most primitive object in the Solar System visited by a spacecraft when the NASA space probe New Horizons conducted a flyby on 1 January 2019. Arrokoth is an elongated contact binary composed of two planetesimals joined along their major axes. With an orbital period of about 298 years and a low orbital inclination and eccentricity, Arrokoth is classified as a cold classical Kuiper belt object. Arrokoth was discovered on 26 June 2014 by astronomer Marc Buie and the New Horizons Search Team using the Hubble Space Telescope as part of a search for a Kuiper belt object for New Horizons to target in its first extended mission; it was chosen over two other candidates to become the primary target of the mission.

Name

When Arrokoth was first observed by the Hubble Space Telescope in 2014, it was given a survey designation in the context of the telescope's search for Kuiper belt objects and was nicknamed "11" for short. Its existence as a potential target of the New Horizons probe was announced by NASA in October 2014, and it was unofficially designated "Potential Target 1" (PT1). Its official provisional designation, 2014 MU69, was assigned by the Minor Planet Center in March 2015, after sufficient orbital information had been gathered. The provisional designation indicates that Arrokoth was the 1745th minor planet to be assigned a provisional designation during the second half of June 2014. After further observations refining its orbit, it was given the permanent minor planet number 486958 on 12 March 2017.

Ultima Thule

Before the flyby on 1 January 2019, NASA invited suggestions from the public on a nickname to be used for the object. One of the choices, Ultima Thule, was selected on 13 March 2018. Thule is the northernmost location mentioned in ancient Greek and Roman literature and cartography, while in classical and medieval literature ultima Thule (Latin for "farthermost Thule") acquired a metaphorical meaning of any distant place located beyond the "borders of the known world". Once it was determined that the body was a bilobate contact binary, the New Horizons team nicknamed the larger lobus "Ultima" and the smaller lobus "Thule". They are now formally named "Wenu" and "Weeyo", respectively. In November 2019, the International Astronomical Union (IAU) announced the object's permanent official name, Arrokoth.

Arrokoth

The name Arrokoth was chosen by the New Horizons team to represent the Powhatan people indigenous to the Tidewater region of Virginia and Maryland in the eastern United States. The Hubble Space Telescope and the Johns Hopkins University Applied Physics Laboratory, which were prominently involved in Arrokoth's discovery, were both operated from the Tidewater region of Maryland. With the permission of the elders of the Pamunkey Indian Tribe of the Powhatan nation, the name Arrokoth was proposed to the IAU and formally announced by the New Horizons team in a ceremony held at NASA Headquarters in the District of Columbia on 12 November 2019. Prior to the ceremony, the name was accepted by the IAU's Minor Planet Center on 8 November, and the New Horizons team's naming citation was published in a Minor Planet Circular on 12 November. The Powhatan language became extinct in the late 18th century, and little was recorded of it.
In an old word list, arrokoth is glossed as 'sky', and this was the meaning intended by the New Horizons team, but it would seem that it actually meant 'cloud'.

Shape

Arrokoth is a contact binary consisting of two lobes (lobi) attached by a narrow neck or waist, which is encircled by a bright band named Akasa Linea. The lobi were likely once two separate objects that later merged in a slow collision. The larger lobus, Wenu, is longer across its longest axis than the smaller lobus, Weeyo. Wenu is lenticular in shape, being highly flattened and moderately elongated; based on shape models of Arrokoth constructed from images taken by the New Horizons spacecraft, Weeyo is less flattened than Wenu. As a whole, Arrokoth is much longer than it is thick. Given the volume-equivalent diameters of the lobi, the volume ratio of Wenu to the smaller Weeyo is approximately 1.9:1.0, meaning that Wenu's volume is nearly twice that of Weeyo. The overall volume of Arrokoth remains largely uncertain due to weak constraints on the thicknesses of the lobi.

Prior to the New Horizons flyby of Arrokoth, stellar occultations by Arrokoth had provided evidence for its bilobate shape. The first detailed image of Arrokoth confirmed its double-lobed appearance and was described as a "snowman" by Alan Stern, as the lobi appeared distinctively spherical. On 8 February 2019, one month after the New Horizons flyby, Arrokoth was found to be more flattened than initially thought, based on additional images taken by New Horizons after its closest approach. The flattened lobus Wenu was described as a "pancake", while Weeyo was described as a "walnut", as it appeared less flattened. By observing how the unseen sections of Arrokoth occulted background stars, scientists were able to outline the shapes of both lobi. The cause of Arrokoth's unexpectedly flattened shape is uncertain, with explanations including sublimation and centrifugal forces. The longest axes of the lobi are nearly aligned with the rotational axis, which is situated between them. This near-parallel alignment of the lobi suggests that they were mutually locked to each other, likely due to tidal forces, before merging. The alignment of the lobi supports the idea that the two had individually formed from the coalescence of a cloud of icy particles.

Geology

Spectra and surface

Measurements of Arrokoth's absorption spectrum by the New Horizons LEISA spectrometer show that Arrokoth's spectrum exhibits a strong red spectral slope extending from red to infrared wavelengths at 1.2–2.5 μm. Spectral measurements from LEISA revealed the presence of methanol and complex organic compounds on the surface of Arrokoth, but no evidence of water ice. One particular absorption band in Arrokoth's spectrum, at 1.8 μm, indicates that these organic compounds are sulfur-rich. Given the abundance of methanol on Arrokoth's surface, it is predicted that formaldehyde-based compounds resulting from irradiation should also be present, albeit in the form of complex macromolecules. Arrokoth's spectrum shares similarities with that of the centaur 5145 Pholus, which also displays a strong red spectral slope along with signs of methanol present on its surface.
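The 1.9:1.0 volume ratio in the shape discussion above follows directly from treating each lobus as a triaxial ellipsoid. The dimensions used in the Python sketch below are the New Horizons shape-model values published elsewhere (they are not preserved in this text), so they serve here only as assumed inputs.

```python
import math

def ellipsoid_volume(a: float, b: float, c: float) -> float:
    """Volume (km^3) of a triaxial ellipsoid with full axis lengths a, b, c (km)."""
    return math.pi / 6.0 * a * b * c

# Assumed published dimensions (km): Wenu ~20.6 x 19.9 x 9.4 and
# Weeyo ~15.4 x 13.8 x 9.8 -- not figures from the text above.
wenu = ellipsoid_volume(20.6, 19.9, 9.4)
weeyo = ellipsoid_volume(15.4, 13.8, 9.8)
print(f"Wenu ~{wenu:.0f} km^3, Weeyo ~{weeyo:.0f} km^3, "
      f"ratio ~{wenu / weeyo:.1f}:1")   # ~1.9:1, matching the quoted figure
```

With those inputs the ratio comes out near 1.9:1; the short, flattened c-axes are also why the total volume is so sensitive to the poorly constrained lobus thicknesses.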
Preliminary observations by the Hubble Space Telescope in 2016 revealed that Arrokoth has a red coloration, similar to other Kuiper belt objects and centaurs like Pholus. Arrokoth's color is redder than that of Pluto; thus it belongs to the "ultra-red" population of cold classical Kuiper belt objects. The red coloration of Arrokoth is caused by the presence of a mix of complex organic compounds called tholins, which are produced from the photolysis of various simple organic and volatile compounds by cosmic rays and ultraviolet solar radiation. The presence of sulfur-rich tholins on Arrokoth's surface implies that volatiles such as methane, ammonia, and hydrogen sulfide were once present on Arrokoth but were quickly lost due to Arrokoth's small mass. However, less volatile materials such as methanol, acetylene, ethane, and hydrogen cyanide could be retained over a longer period of time, and likely account for the reddening and production of tholins on Arrokoth. The photoionization of organic compounds and volatiles on Arrokoth was also thought to produce hydrogen gas that would interact with the solar wind, though the New Horizons SWAP and PEPSSI instruments did not detect any signature of solar wind interaction around Arrokoth. Color and spectral measurements show subtle color variation among Arrokoth's surface features. Spectral images of Arrokoth show that the Akasa (neck) region and the lineation features appear less red compared to the central region of the smaller lobus Weeyo. The larger lobus Wenu also displays redder regions, informally known as "thumbprints" by the New Horizons team; the thumbprint features are located near Wenu's limb. The surface albedo or reflectivity of Arrokoth varies from 5 percent to 12 percent due to various bright features on its surface. Its overall geometric albedo, the quantity of reflected light in the visible spectrum, is measured at 21 percent, typical for most Kuiper belt objects. The overall Bond albedo (the quantity of reflected light at any wavelength) of Arrokoth is measured at 6.3 percent.

Craters

The surface of Arrokoth is lightly cratered and smooth in appearance. Arrokoth's surface has few small craters (down to the limits of photographic resolution), implying a paucity of impacts throughout its history. The occurrence of impact events in the Kuiper belt is thought to be uncommon, with a very low impact rate over the course of one billion years. Due to the slower orbital speeds of Kuiper belt objects, the speed of objects impacting Arrokoth is expected to be low. At such slow impact speeds, large craters on Arrokoth are expected to be rare. With a low frequency of impact events along with the slow speeds of impacts, Arrokoth's surface has remained preserved since its formation. The preserved surface of Arrokoth could possibly give hints to its formation process, as well as signs of accreted material. Numerous small pits on Arrokoth's surface were identified in high-resolution images from the New Horizons spacecraft; the pits are of broadly similar size. The exact cause of these pits is unknown; explanations include impact events, the collapse of material, the sublimation of volatile materials, or the venting and escape of volatile gases from the interior of Arrokoth.

Surface features

The surfaces of each lobus of Arrokoth display regions of varying brightness along with various geological features such as troughs and hills.
These geological features are thought to have originated from the clumping of the smaller planetesimals that came to form the lobi of Arrokoth. The brighter regions of Arrokoth's surface, especially its bright lineation features, are thought to have resulted from the deposition of material that has rolled down from hills on Arrokoth, as surface gravity on Arrokoth is sufficient for this to occur. The smaller lobus, Weeyo, bears a large depression feature named Sky (previously dubbed "Maryland" after the home state of the New Horizons team). Sky is likely an impact crater formed by a much smaller object; assuming Sky has a circular shape, its diameter and depth have been estimated from imaging. Two notably bright streaks of similar size are present within Sky and may be remnants of avalanches in which bright material rolled into the depression. Four subparallel troughs are present near the terminator of Weeyo, along with two possible kilometer-sized impact craters on the rim of Sky. The surface of Weeyo exhibits bright mottled regions separated by broad, dark regions (dm) which may have undergone scarp retreat, in which they were eroded due to the sublimation of volatiles, exposing lag deposits of darker material irradiated by sunlight. Another bright region (rm), located at the equatorial end of Weeyo, exhibits rough terrain along with several topographic features that have been identified as possible pits, craters, or mounds. Weeyo does not display distinct units of rolling topography near Sky, likely as a result of resurfacing caused by the impact event that created the crater.

As on Weeyo, troughs and pit crater chains are also present along the terminator of the larger lobus Wenu. Wenu consists of eight distinctive units or blocks of rolling topography of similar size, separated by relatively bright boundary regions. The similar sizes of the units suggest that each was once a small planetesimal, and that they coalesced to form Wenu. The planetesimals are expected to have accreted slowly by astronomical standards (at speeds of several meters per second), though they must have had a very low mechanical strength in order to merge and form compact bodies at these speeds. The central unit ('mh') is encircled by a bright annular feature, Kaʼan Arcus (initially dubbed "The Road to Nowhere"). From stereographic analysis, the central unit appears to be relatively flat compared to the surrounding units. Stereographic analysis of Arrokoth has also shown that one particular unit located at Wenu's limb ('md') appears to have a higher elevation and tilt than the others.

Akasa Linea, the neck region connecting the two lobi, has a brighter and less red appearance than the surfaces of either lobus. The brightness of Akasa Linea is likely due to a composition of more reflective material than the surfaces of the lobi. One hypothesis suggests the bright material originated in the deposition of small particles that had fallen from the lobi over time: since Arrokoth's center of gravity lies between the lobi, small particles are likely to roll down the steep slopes toward the neck between them. Another proposal suggests the bright material is produced by the deposition of ammonia ice: ammonia vapor present on the surface of Arrokoth would solidify around Akasa Linea, where gases cannot escape due to the concave shape of the neck. The brightness of Akasa is thought to be maintained by the strong seasonal effects of Arrokoth's high axial tilt as it orbits the Sun.
Over the course of its orbit, Akasa Linea is shadowed when the lobi are coplanar with the direction of the Sun, at which times the neck region receives no sunlight, cooling and trapping volatiles in the region. In May 2020, the IAU's Working Group for Planetary System Nomenclature (WGPSN) formally established a naming theme for all features of Arrokoth, which are to be named after words for "sky" in the languages of the world, past and present. In 2021, the first few names were approved, including Sky Crater on the small lobus, later named Weeyo Lobus. In 2022, Kaʼan Arcus was approved for the circular arc on Wenu Lobus.

Internal structure

Topography variations at the limb of Arrokoth suggest that its interior is likely composed of mechanically strong material consisting of mostly amorphous water ice and rocky material. Trace amounts of methane and other volatile gases in the form of vapors may also be present in Arrokoth's interior, trapped in water ice. Under the assumption that Arrokoth has a low, comet-like density, its internal structure is expected to be porous, as volatile gases trapped in Arrokoth's interior are thought to escape from the interior to the surface. Assuming that Arrokoth has an internal heat source caused by the radioactive decay of radionuclides, the trapped volatile gases inside Arrokoth would migrate outward and escape from the surface, similarly to the outgassing of comets. The escaped gases may subsequently freeze and be deposited on Arrokoth's surface, possibly accounting for the presence of ices and tholins there.

Orbit and classification

Arrokoth orbits the Sun at an average distance of about 44.6 AU, taking 297.7 years to complete a full orbit. With a low orbital eccentricity of 0.042, Arrokoth follows a nearly circular orbit around the Sun, varying in distance only slightly, from 42.7 AU at perihelion to 46.4 AU at aphelion. Because Arrokoth has a low orbital eccentricity, it does not approach close enough to Neptune for its orbit to become perturbed. (Arrokoth's minimum orbital intersection distance from Neptune is 12.75 AU.) Arrokoth's orbit appears to be stable over the long term; simulations by the Deep Ecliptic Survey show that its orbit will not significantly change over the next 10 million years. At the time of the New Horizons flyby in January 2019, Arrokoth's distance from the Sun was about 43.4 AU; at this distance, light from the Sun takes over six hours to reach Arrokoth. Arrokoth last passed aphelion around 1906 and is currently approaching the Sun at a rate of approximately 0.13 AU per year; it will reach perihelion around 2055. With an observation arc of 851 days, Arrokoth's orbit is fairly well determined, with an uncertainty parameter of 2 according to the Minor Planet Center. Hubble Space Telescope observations in May and July 2015, as well as in July and October 2016, greatly reduced the uncertainties in Arrokoth's orbit, which prompted the Minor Planet Center to assign its permanent minor planet number. In contrast to the orbit calculated by the Minor Planet Center, Arrokoth's observation arc in the JPL Small-Body Database does not include these additional observations and reports the orbit as highly uncertain, with an uncertainty parameter of 5. Arrokoth is generally classified as a distant minor planet or trans-Neptunian object by the Minor Planet Center, as it orbits in the outer Solar System beyond Neptune.
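The orbital elements quoted above are mutually consistent, which Kepler's third law makes easy to verify: the semi-major axis is the mean of the apsidal distances, and for a heliocentric orbit the period in years is a^(3/2) with a in AU. A quick check:

```python
q, Q = 42.7, 46.4            # perihelion and aphelion from the text, in AU
a = (q + Q) / 2.0            # semi-major axis, AU
period = a ** 1.5            # Kepler's third law for a heliocentric orbit, years
ecc = (Q - q) / (Q + q)      # eccentricity from the apsides
print(f"a = {a:.2f} AU, P = {period:.0f} yr, e = {ecc:.3f}")
# -> a = 44.55 AU, P ~ 297 yr, e ~ 0.042, matching the quoted values
```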
Having a non-resonant orbit within the Kuiper belt region 39.5–48 AU from the Sun, Arrokoth is formally classified as a classical Kuiper belt object, or cubewano. Arrokoth's orbit is inclined to the ecliptic plane by 2.45 degrees, relatively low compared to other classical Kuiper belt objects. Since Arrokoth has a low orbital inclination and eccentricity, it is part of the dynamically cold population of classical Kuiper belt objects, which are unlikely to have undergone significant perturbations by Neptune during its outward migration in the past. The cold classical population of Kuiper belt objects is thought to consist of remnant planetesimals left over from the accretion of material during the formation of the Solar System.

Rotation and temperature

Results from photometric Hubble Space Telescope observations show that the brightness of Arrokoth varies by around 0.3 magnitudes as it rotates. Though the rotation period and light curve amplitude of Arrokoth could not be determined from Hubble observations, the subtle brightness variations suggested that Arrokoth's rotational axis was either pointed toward the Earth or that the object was being viewed equator-on with a nearly spherical shape, with a constrained best-fit a/b aspect ratio of around 1.0–1.15. Upon the New Horizons spacecraft's approach to Arrokoth, no rotational light curve amplitude was detected despite Arrokoth's irregular shape. To explain the lack of a rotational light curve, scientists surmised that Arrokoth is rotating on its side, with its rotational axis pointing nearly directly at the approaching New Horizons spacecraft. Subsequent images of Arrokoth from New Horizons upon approach confirmed that its rotation is tilted, with its south pole facing towards the Sun. The rotational axis of Arrokoth is tilted 99 degrees to its orbit. Based on occultation and New Horizons imaging data, Arrokoth's rotation period is determined to be 15.938 hours.

Due to the high axial tilt of its rotation, the solar irradiance of the northern and southern hemispheres of Arrokoth varies greatly over the course of its orbit around the Sun. As it orbits the Sun, one polar region of Arrokoth faces the Sun continuously while the other faces away. The solar irradiance of Arrokoth also varies by 17 percent due to the low eccentricity of its orbit. The average temperature of Arrokoth is estimated to be a few tens of kelvins, with a maximum at the illuminated subsolar point. Radiometric measurements from the New Horizons REX instrument indicate that the mean surface temperature of Arrokoth's unilluminated face is higher than the modeled range. The higher temperature of Arrokoth's unilluminated face as measured by REX implies that thermal radiation is emitted from Arrokoth's subsurface, which was predicted to be intrinsically warmer than the exterior surface.

Mass and density

The mass and density of Arrokoth are unknown. A definitive mass and density estimate cannot be given, as the lobi are in contact rather than orbiting each other. Although a natural satellite orbiting Arrokoth could have helped determine its mass, no such satellite was found. Under the assumption that both lobi are bound by self-gravity, with the mutual gravity of the two overcoming the centrifugal forces that would otherwise separate them, Arrokoth is estimated to have a very low, comet-like minimum density.
In order to maintain the shape of the neck, the density of Arrokoth must be less than the maximum possible density of , otherwise the neck would be excessively compressed by the mutual gravity of the lobi and the entire object would gravitationally collapse into a spheroid.

Formation

Arrokoth is thought to have formed from two separate progenitor objects that coalesced over time from a rotating cloud of small, icy bodies after the formation of the Solar System 4.6 billion years ago. Arrokoth likely formed in a cold environment within a dense, opaque region of the early Kuiper belt, where the Sun appeared heavily obscured by dust. Icy particles within the early Kuiper belt experienced streaming instability, in which they slowed down due to drag against the surrounding gas and dust and gravitationally coalesced into clumps of larger particles. Because there have been few to no disruptive impacts on Arrokoth since it formed, the details of its formation have been preserved. From the differing present appearances of the lobi, each is thought to have accreted separately while in orbit around the other. Both progenitor objects are believed to have formed from a single source of material, as they appear homogeneous in albedo, color, and composition. The presence of rolling topography units on the larger object indicates that it likely formed from the coalescence of smaller planetesimal units prior to merging with the smaller object. The larger lobus Wenu appears to be an aggregate of 8 or so smaller components, each approximately across.

Flattening and merging

It is unclear how Arrokoth attained its present flattened shape, though two leading hypotheses have been put forward to explain the mechanisms behind it. The New Horizons team hypothesizes that the two progenitor objects formed with initially rapid rotations, causing their shapes to become flattened by centrifugal forces. Over time, the rotation rates of the progenitor objects gradually slowed as they experienced impacts by small objects and transferred their angular momentum to other orbiting debris left over from their formation. Eventually, this loss of angular momentum, caused by impacts and by momentum shifting to other bodies in the cloud, caused the pair to slowly spiral closer until they touched; over time the joint fused, forming the present bilobate shape. In an alternative hypothesis formulated by researchers of the Chinese Academy of Sciences and the Max Planck Institute in 2020, the flattening of Arrokoth may have resulted from sublimation-driven mass loss over a timescale of several million years after the merging of the lobi. At the time of formation, Arrokoth's composition had a higher volatile concentration from the accretion of condensed volatiles within the dense and opaque Kuiper belt. After the surrounding dust and nebular gas subsided, solar radiation was no longer obstructed, allowing photon-induced sublimation to occur in the Kuiper belt. Due to Arrokoth's high rotational obliquity, one polar region faces the Sun continuously for half of its orbital period, resulting in extensive heating and consequent sublimation and loss of frozen volatiles at Arrokoth's poles. Regardless of the uncertainty surrounding the mechanisms that flattened Arrokoth, the subsequent merging of the bodies ancestral to the lobi appears to have been gentle.
The present appearance of Arrokoth does not indicate deformation or compression fractures, suggesting that the two progenitor objects merged very slowly, at a speed of , comparable to the average walking speed of a person. The progenitor objects must also have merged obliquely, at angles greater than 75 degrees, in order to account for the present shape of Arrokoth's thin neck while keeping the lobi intact. By the time the two progenitor objects merged, both had already become tidally locked in synchronous rotation. The long-term frequency of impact events on Arrokoth has been low owing to the slow relative speeds of objects in the Kuiper belt. Over a period of 4.5 billion years, photon-induced sputtering of water ice on Arrokoth's surface would reduce its size only minimally, by . With the lack of frequent cratering events and perturbations of its orbit, the shape and appearance of Arrokoth have remained virtually pristine since the conjoining of the two separate objects that formed its bilobate shape.

Observation

Discovery

Arrokoth was discovered on 26 June 2014 using the Hubble Space Telescope during a preliminary survey to find a suitable Kuiper belt object for the New Horizons spacecraft to fly by. Scientists of the New Horizons team were searching for an object in the Kuiper belt that the spacecraft could study after Pluto, and their next target had to be reachable with New Horizons' remaining fuel. Using large ground-based telescopes on Earth, researchers began looking for candidate objects in 2011 and searched multiple times per year for several years. However, none of the objects found were reachable by the New Horizons spacecraft, and most potentially suitable Kuiper belt objects were simply too distant and faint to be seen through Earth's atmosphere. In order to find these fainter Kuiper belt objects, the New Horizons team initiated a search for suitable targets with the Hubble Space Telescope on 16 June 2014. Arrokoth was first imaged by Hubble on 26 June 2014, 10 days after the New Horizons team began their search for potential targets. Astronomer Marc Buie, a member of the New Horizons team, identified Arrokoth while digitally processing images from Hubble. Buie reported his finding to the search team for subsequent analysis and confirmation. Arrokoth was the second object found during the search, after . Three more candidate targets were later discovered with Hubble, though follow-up astrometric observations eventually ruled them out. Of the five potential targets found with Hubble, Arrokoth was deemed the most feasible target for the spacecraft, as its flyby trajectory required the least fuel compared to that for , the second most feasible target for New Horizons. On 28 August 2015, Arrokoth was officially selected by NASA as a flyby target for the New Horizons spacecraft. Arrokoth is too small and distant for its shape to be observed directly from Earth, but scientists were able to take advantage of an astronomical event called a stellar occultation, in which the object passes in front of a star as seen from Earth. Since an occultation event is only visible from certain parts of the Earth, the New Horizons team combined data from Hubble and the European Space Agency's Gaia space observatory to figure out exactly when and where on Earth's surface Arrokoth would cast a shadow.
They determined that occultations would occur on 3 June, 10 July, and 17 July 2017, and set off for places around the world where they could see Arrokoth cover up a different star on each of these dates. Based on this string of three occultations, scientists were able to trace out the object's shape.

2017 occultations

In June and July 2017, Arrokoth occulted three background stars. The New Horizons team formed a specialized "KBO Chasers" group led by Marc Buie to observe these stellar occultations from South America, Africa, and the Pacific Ocean. On 3 June 2017, two teams of NASA scientists tried to detect the shadow of Arrokoth from Argentina and South Africa. When they found that none of their telescopes had observed the object's shadow, it was initially speculated that Arrokoth might be neither as large nor as dark as previously expected, and that it might be highly reflective or even a swarm. Additional data taken with the Hubble Space Telescope in June and July 2017 revealed that the telescopes had been placed in the wrong location and that these speculations were incorrect. On 10 July 2017, the airborne telescope SOFIA was successfully placed close to the predicted centerline for the second occultation while flying over the Pacific Ocean from Christchurch, New Zealand. The main purpose of those observations was the search for hazardous material like rings or dust near Arrokoth that could threaten the New Horizons spacecraft during its flyby in 2019. Data collection was successful. A preliminary analysis suggested that the central shadow was missed; only in January 2018 was it realized that SOFIA had indeed observed a very brief dip from the central shadow. The data collected by SOFIA also proved valuable for placing constraints on dust near Arrokoth. Detailed results of the search for hazardous material were presented at the 49th Meeting of the AAS Division for Planetary Sciences on 20 October 2017. On 17 July 2017, the Hubble Space Telescope was used to check for debris around Arrokoth, setting constraints on rings and debris within the Hill sphere of Arrokoth at distances of up to from the main body. For the third and final occultation, team members set up another ground-based "fence line" of 24 mobile telescopes along the predicted ground track of the occultation shadow in southern Argentina (Chubut and Santa Cruz provinces) to better constrain the size of Arrokoth. The average spacing between these telescopes was around . Using the latest observations from Hubble, the position of Arrokoth was known with much better precision than for the 3 June occultation, and this time the shadow of Arrokoth was successfully observed by at least five of the mobile telescopes. Combined with the SOFIA observations, this put constraints on possible debris near Arrokoth. Results from the occultation on 17 July showed that Arrokoth could have a very oblong, irregular shape or be a close or contact binary. According to the durations of the observed chords, Arrokoth was shown to have two "lobes", with diameters of approximately and , respectively. A preliminary analysis of all collected data suggested that Arrokoth was accompanied by an orbiting moonlet about away from the primary. It was later realized, however, that an error in the data processing software had shifted the apparent location of the target. After accounting for the bug, the short dip observed on 10 July was considered to be a detection of the primary body. By combining data about its light curve, spectra (e.g.
color), and stellar occultation data, illustrators could draw on known data to create a concept of what Arrokoth might look like prior to the spacecraft flyby.

2018 occultations

There were two potentially useful Arrokoth occultations predicted for 2018: one on 16 July and one on 4 August. Neither was as good as the three 2017 events. No attempts were made to observe the 16 July 2018 occultation, which took place over the South Atlantic and the Indian Ocean. For the 4 August 2018 event, two teams, consisting of about 50 researchers in total, went to locations in Senegal and Colombia. The event gathered media attention in Senegal, where it was used as an opportunity for science outreach. Despite some stations being affected by bad weather, the event was successfully observed, as reported by the New Horizons team. Initially, it was unclear whether a chord on the target had been recorded. On 6 September 2018, NASA confirmed that the star had indeed been seen to dip by at least one observer, providing important information about the size and shape of Arrokoth. Hubble observations were carried out on 4 August 2018 to support the occultation campaign. Hubble could not be placed in the narrow path of the occultation, but due to its favourable location at the time of the event, the space telescope was able to probe the region down to from Arrokoth. This is much closer than the region that could be observed during the 17 July 2017 occultation. No brightness changes of the target star were seen by Hubble, ruling out any optically thick rings or debris down to from Arrokoth. Results of the 2017 and 2018 occultation campaigns were presented at the 50th meeting of the American Astronomical Society Division for Planetary Sciences on 26 October 2018.

Exploration

Having completed its flyby of Pluto in July 2015, the New Horizons spacecraft made four course changes in October and November 2015 to place itself on a trajectory towards Arrokoth. Arrokoth is the first object to be targeted for a flyby that was discovered after the visiting spacecraft was launched, and it is the farthest object in the Solar System ever to be visited by a spacecraft. Moving at a speed of , New Horizons passed by Arrokoth at a distance of , equivalent to a few minutes of travel at the craft's speed and one third of the distance of the spacecraft's closest encounter with Pluto. Closest approach occurred on 1 January 2019 at 05:33 UTC (Spacecraft Event Time, SCET), at which point Arrokoth was from the Sun in the direction of the constellation Sagittarius. At this distance, the one-way transit time for radio signals between Earth and New Horizons was 6 hours. The science objectives of the flyby included characterizing the geology and morphology of Arrokoth and mapping the surface composition (searching for ammonia, carbon monoxide, methane, and water ice). Surveys of the surrounding environment to detect possible orbiting moonlets, a coma, or rings were conducted. Images with resolutions showing details of to were expected. From Hubble observations, faint, small satellites orbiting Arrokoth at distances greater than have been excluded to a depth of >29th magnitude. The object has no detectable atmosphere and no large rings or satellites larger than in diameter. Nonetheless, a search for a related moon (or moons) continues, which may help better explain the formation of Arrokoth from two individual orbiting objects. New Horizons made its first detection of Arrokoth on 16 August 2018, from a distance of .
At that time, Arrokoth was visible at magnitude 20, in the direction of the constellation Sagittarius. Arrokoth was expected to reach magnitude 18 by mid-November and magnitude 15 by mid-December. It reached naked-eye brightness (magnitude 6), from the spacecraft's point of view, just 3–4 hours before closest approach. If obstacles had been detected, the spacecraft had the option of diverting to a more distant rendezvous, though no moons, rings or other hazards were seen. High-resolution images from New Horizons were taken on 1 January. The first images, of mediocre resolution, arrived the next day. The downlink of data collected from the flyby was expected to last 20 months, through September 2020.
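The six-hour one-way radio transit time quoted in the exploration section above fixes the heliocentric distance at the time of the flyby, even though the distance figure itself is elided in the text. A minimal sketch of the conversion (variable names are illustrative):

```python
# Convert the quoted one-way radio transit time into a distance.
C = 299_792_458          # speed of light, m/s
AU = 1.495978707e11      # astronomical unit, m

t = 6 * 3600             # 6 hours, in seconds
print(round(C * t / AU, 1))   # -> about 43.3 AU, which falls between the
                              #    perihelion (42.7 AU) and aphelion (46.4 AU)
                              #    distances quoted earlier.
```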
Physical sciences
Solar System
Astronomy
39913055
https://en.wikipedia.org/wiki/Imperial%20and%20US%20customary%20measurement%20systems
Imperial and US customary measurement systems
The imperial and US customary measurement systems are both derived from an earlier English system of measurement, which in turn can be traced back to Ancient Roman units of measurement and to Carolingian and Saxon units of measure. The US customary system of units was developed and used in the United States after the American Revolution, based on a subset of the English units used in the Thirteen Colonies; it is the predominant system of units in the United States and in U.S. territories (except for Puerto Rico and Guam, where the metric system, introduced when both territories were Spanish colonies, is also officially used and is predominant). The imperial system of units was developed and used in the United Kingdom and its empire beginning in 1824. The metric system has, to varying degrees, replaced the imperial system in the countries that once used it. Most of the units of measure have been adapted in one way or another since the Norman Conquest (1066). The units of linear measure have changed the least – the yard (which replaced the ell) and the chain were measures derived in England. The foot used by craftsmen supplanted the longer foot used in agriculture. The agricultural foot was reduced to of its former size, causing the rod, pole or perch to become (rather than the older 15) agricultural feet. Once the acre became a measure of the size of a piece of land rather than of its value, it and the furlong remained relatively unchanged. In the last thousand years, three principal pounds were used in England. The troy pound (5760 grains) was used for precious metals, the apothecaries' pound (also 5760 grains) was used by pharmacists, and the avoirdupois pound (7000 grains) was used for general purposes. The apothecaries' and troy pounds are divided into 12 ounces (of 480 grains) while the avoirdupois pound has 16 ounces (of 437.5 grains). The unit of volume, the gallon, has different values in the United States and in the United Kingdom – the US fluid gallon being about 0.83 imperial gallons and the US dry gallon being about 0.97 imperial gallons. The US fluid gallon was based on the wine gallon used in England prior to 1826. After the United States Declaration of Independence, the units of measurement in the United States developed into what is now known as customary units. The United Kingdom overhauled its system of measurement in 1826, when it introduced the imperial system of units. This resulted in the two countries having different gallons. Later in the century, efforts were made to align the definitions of the pound and the yard in the two countries by using copies of the standards adopted by the British Parliament in 1855. However, these standards were of poor quality compared with those produced for the Convention of the Metre. In 1960, the two countries agreed to common definitions of the yard and the pound based on definitions of the metre and the kilogram. This change, which amounted to a few parts per million, had little effect in the United Kingdom, but resulted in the United States having two slightly different systems of linear measure – the international system and the surveyors' system.

English units of measure

English units of measure were derived from a combination of Roman, Carolingian and Saxon units of measure. They were a precursor to both the imperial system of units (first defined in 1824, to take effect in 1826) and United States customary units, which evolved from English units from 1776 onwards.
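The pound and gallon figures quoted above are internally consistent, which a few lines of arithmetic confirm. A minimal sketch (variable names are illustrative):

```python
# Pound subdivisions quoted above, expressed in grains.
troy_pound = 12 * 480            # 12 troy ounces of 480 grains
avoirdupois_pound = 16 * 437.5   # 16 ounces of 437.5 grains
print(troy_pound, avoirdupois_pound)   # -> 5760 7000.0, as stated

# Gallon comparison: the US fluid gallon (231 cubic inches) and the
# US dry gallon (268.8 cubic inches) against the imperial gallon,
# using the later remeasured value of 277.42 cubic inches.
imperial = 277.42
print(round(231.0 / imperial, 2))   # -> 0.83 imperial gallons
print(round(268.8 / imperial, 2))   # -> 0.97 imperial gallons
```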
The earliest records of English units of measure involve the weight (and therefore the value) of Saxon coins. The penny introduced by Offa was about 20 grains (1.296 g). Edward the Elder increased the weight of the English penny to 26 grains (1.685 g), thereby aligning it with the penny of Charlemagne. By the time of the Norman Conquest (1066), it had decreased to 24 grains (1.555 g). This value was subsequently called the pennyweight and formed the basis of the troy units of weight, including the troy ounce used to this day for weighing precious metals. Edward I (1272–1307) broke the link between a coin's value and its weight when he debased the English coinage by introducing a groat (four pence) which weighed 89 grains rather than the expected 96 grains. The groat was further devalued in the 1350s, when its weight was reduced to 72 grains. During Saxon times, land was measured both in terms of its economic value and in terms of its size. The Domesday Book used the hide, an economic unit of measure. In other references, the furlong and the rood appear to be units related to ploughing procedures. Of particular interest was the rood, which was 15 North German feet in length, the North German foot being equivalent to 335 mm (13.2 inches). Craftsmen, on the other hand, used a shorter Roman foot. Standardization of weights and measures was a recurring issue for monarchs. In 965 AD, King Edgar decreed "that only one weight and one measure should pass throughout the King's dominion". In 1197, Richard I decreed that the measures of corn and pulse, and of wine and ale, should be the same throughout all England. Magna Carta, signed by King John in 1215, extended this to include cloth. Some time between 1266 and 1303, the weights and measures of England were radically revised by a law known as the Composition of Yards and Perches (Compositio ulnarum et perticarum), often known as the Compositio for short. This law, attributed to either Henry III or his successor Edward I, instituted a new foot that was exactly the length of the old foot, with corresponding reductions in the size of the yard, ell, inch, and barleycorn. (Furlongs remained the same, but the rod changed from 15 old feet to 16½ new feet.) In 1324, Edward II systematized units of length by defining the inch as 3 barleycorns, the foot as 12 inches, the yard as 3 feet, the perch as 5½ yards, and the acre as an area 4 by 40 perches. Apart from the ell (45 inches or 114.3 cm, which continued to be used in the cloth trade) and the chain (introduced by Edmund Gunter in 1620 and used in land surveying), these units formed the basis of the units of length of the English system of measurement. The units were, however, redefined many times: during Henry VIII's time, standard yards and ells made of brass were manufactured; during Elizabeth I's time, these were replaced with standards made of bronze; and in 1742, after scientific comparisons showed a variation of up to 0.2% from the mean, a definitive standard yard was proposed (but not manufactured). During the medieval era, agricultural products other than wool were mostly sold by volume, with various bushels and gallons being introduced over the years for different commodities. In the early fourteenth century, the wool trade's traditional use of the avoirdupois system of weights was formalized by Edward III in 1340. At the same time, the stone, when used to weigh wool, was formalized as being 14 pounds. During the Tudor period, numerous reforms were made to English weights and measures.
In 1496, Henry VII ordered that reference copies of the yard, pound and gallon be made of brass and distributed to specified towns and cities throughout the kingdom. Many weights and measures that had crept into use were banned: in 1527 Henry VIII banned the Tower pound (5400 grains against the 5760 grains of the apothecaries' and troy pounds) and the mercantile pound (6750 grains against the 7000 grains of the pound avoirdupois), and in 1592 Elizabeth I ordered the use of the "statute mile" (5280 feet against the 5000 feet of the London or Old English mile). Under the Act of Union of 1707, Scotland, which had developed its own system of weights and measures independently of England, abandoned it in favour of English weights and measures. The Acts of Union 1800, which united Ireland with Great Britain, had less of an effect on weights and measures, Irish weights and measures having been based on the English foot and pound avoirdupois since 1351, though the Irish acre and mile were based on a perch of 7 yards, not 5½ yards as in England. By the early nineteenth century, many commodities had their own sets of units: the wool and cloth industries used units of measure specific to those commodities, albeit derived from the pound avoirdupois or the foot, while wine and beer used units with the same names but different sizes – the wine gallon being 231 cubic inches and the beer or ale gallon being 282 cubic inches. Agricultural produce was sold by the bushel, which was based on yet another gallon – the dry gallon of 268.8 cubic inches. Even though not explicitly permitted by statute, many markets used bushels based on weight rather than volume when selling wheat and barley.

Imperial units

The British Weights and Measures Act 1824 repealed all existing British weights and measures legislation, some dating back to the 1300s, and redefined the existing units of measure. In particular, a new standard yard and troy pound were manufactured as the standards for length and weight respectively. A new measure, the imperial gallon, which replaced the many gallons in use, was defined as the volume of 10 pounds of water at 62 °F, which, after the authorized experiments, was found to be 277.274 cubic inches. The bushel, which, like the gallon, had had definitions reflecting the various gallons, was defined as 8 imperial gallons. The Weights and Measures Act 1824 also introduced some changes to the administration of the standards of weights and measures: previously Parliament had been given custody of the standards, but the act passed this responsibility on to the Exchequer. The act also set up an inspectorate for weights and measures. The standard yard and pound were lost in 1834 when a fire partially destroyed the Palace of Westminster. Following a report published in 1841 by a commission, a new standard yard and pound were manufactured using the best available secondary sources. Unlike the previous standard, the new pound standard was a pound avoirdupois. They were accepted by an Act of Parliament as the standards for length and weight in 1855. Following the debacle over the different gallons that had been adopted by the United States and the United Kingdom thirty years earlier, one of the copies of the standard yard was offered to and accepted by the United States Government. The Weights and Measures Act 1835 tidied up a number of shortcomings in the 1824 Act.
In response to representations from traders, the stone and the hundredweight were formally defined as 14 pounds and 112 pounds respectively, and the experiment of defining a "heaped" measure, as outlined in the 1824 Act, was abandoned. Not all trades followed the 14 lb stone: Britten, for example, catalogued in 1880 a number of different values of the stone in various British towns and cities, ranging from 4 lb to 26 lb. The 1835 Act also restricted the use of troy measure to precious metals and required that coal be sold by weight and not by volume. The Weights and Measures Act 1878 overhauled the inspection regime for weights and measures used in trade. The act also reaffirmed the use of the brass standard yard and the platinum standard pound as the standards for use in the United Kingdom, reaffirmed the use of apothecaries' measures in the pharmaceutical industry, reaffirmed the 1824 definition of the gallon, removed the troy pound from the list of legal units of measure, added the fathom to the list of legal units, and fixed the ratio of metric to imperial units at one metre being equal to 39.3708 inches and one kilogram being equal to 15432.3487 grains (1 lb = 0.453592654 kg). Subsequent to the passing of the act, the volume of the gallon, which had been defined as the volume of 10 lb of distilled water at , was remeasured and set at 277.42 cubic inches, though HM Customs and Excise continued to use the 1824 definition for excise purposes. The Weights and Measures Act 1878 effectively prohibited the use of metric weights for trade, the United Kingdom having declined to sign the Convention of the Metre three years previously. The standard imperial yard was not stable – in 1947 its rate of shrinkage was quantified and found to be one part per million every 23 years. In April 1884, HJ Chaney, Warden of Standards in London, unofficially contacted the BIPM (custodians of the standard metre) inquiring whether the BIPM would calibrate some metre standards that had been manufactured in the United Kingdom. Broch, director of the BIPM, replied that he was not authorised to perform any such calibrations for non-member states. On 17 September 1884, the British Government signed the convention on behalf of the United Kingdom. The Weights and Measures Act 1897 authorized the use of metric units for trade, with a list of metric to imperial equivalents published the following year. Under the Weights and Measures Act 1824, custody of the standard yard and pound and the administration of weights and measures were entrusted to the Exchequer, but verification was administered locally. The Weights and Measures Act 1835 formally described the office and duties of Inspectors of Weights and Measures and required every borough to appoint such officers, and the Standards of Weights, Measures, and Coinage Act 1866 passed responsibility for weights and measures to the Board of Trade. In 1900 the Board of Trade established the National Physical Laboratory (NPL) to provide laboratory facilities for weights and measures. After the passage of the Weights and Measures (Metric System) Act 1897, weights and measures in the United Kingdom remained relatively unchanged until after the Second World War. By the middle of the century, the difference of 2 parts per million between the British and US standard yards was causing problems: in 1900 a tolerance of 10 parts per million had been adequate for science, but by 1950 this tolerance had shrunk to 0.25 parts per million.
In 1960, representatives of the NPL and other national laboratories from the United States and the Commonwealth agreed to redefine the yard as exactly 0.9144 metres, an action that was ratified by the British Government as part of the Weights and Measures Act 1963. Metrication in the United Kingdom began in the mid-1960s. Initially this metrication was voluntary, and by 1985 many traditional and imperial units of measure had been voluntarily removed from use in the retail trade. The Weights and Measures Act 1985 formalized their removal from use in trade, though imperial units were retained for use on road signs, and the most common imperial units such as the foot, inch, pound, ounce, gallon and pint continued to be used in the retail trade for the sale of loose goods or goods measured or weighed in front of the customer. Since 1 January 2000, it has been unlawful to use imperial units for weights and measures in retail trade in the United Kingdom, except as supplementary units, for the sale of draught beer and cider by the pint, or for milk sold in returnable containers.

British Empire

When colonies attained dominion status, they also attained the right to control their own systems of weights and measures. Many adopted the imperial system of units with local variations. India and Hong Kong supplemented the imperial system of units with their own indigenous units of measure; parts of Canada and South Africa included land survey units of measure from earlier colonial masters in their systems of measure; and many territories used only a subset of the units used in the United Kingdom – in particular, the stone, quarter and cental were not catalogued in, amongst others, Australian, Canadian and Indian legislation. Furthermore, Canada aligned its ton with US measures by cataloguing the ton of 2000 lb as legal for trade, but kept the imperial gallon. The standardization of the yard in 1960 required the agreement not only of the United States and the United Kingdom, but also of Canada, Australia, New Zealand and South Africa, all of which had their own standards laboratories.

United States customary units

Prior to the United States Declaration of Independence in 1776, the Thirteen Colonies that were to become the United States used the English system of measurement. The Articles of Confederation, which predated the Constitution, gave the central government "the sole and exclusive right and power of...fixing the Standard of Weights and Measures throughout the United States." Subsequent to the formation of the United States, the Constitution reaffirmed the right of Congress to "fix the Standard of Weights and Measures" but reserved the right to regulate commerce and weights and measures to the individual states. During the First Congress of the United States in 1789, Thomas Jefferson was detailed to draw up a plan for the currency and the weights and measures that would be used in the new republic. In his 1790 response, he noted that the existing system of measure was sound but that the base artefacts were not under the control of the United States. His report suggested a means of manufacturing a local standard and also left the way open for the adoption of a decimal-based system should this be appropriate. In the event, the existing standards were retained. For many years no action was taken at the federal level to ensure harmony in units of measure – the units acquired by the early colonists appeared to serve their purpose.
Congress did nothing, but Ferdinand Hassler, Superintendent of the East Coast survey, acquired a copy of the [French] mètre des Archives using contacts in his native Switzerland. In 1810, Hassler was dispatched to Europe by the Treasury to acquire measuring instruments and standards. In 1827, Albert Gallatin, United States minister at London, acquired an "exact copy" of the troy pound held by the British Government, which in 1828 was adopted as the reference copy of weight in the United States. In 1821, John Quincy Adams, then Secretary of State, submitted a report, based on research commissioned by the Senate in 1817, which recommended against adoption of the metric system. Congress did nothing, and in 1832 the Treasury adopted the yard of 36 inches as the unit of length for customs purposes, the avoirdupois pound of 7000 grains as the unit of weight, and the gallon of 231 cubic inches (the "Queen Anne gallon") and the bushel of 2150.42 cubic inches as the units of volume. Congress did little to promote standards across the United States other than fixing the size of the yard and the gallon. Throughout the nineteenth century, individual states developed their own standards, and in particular a variety of bushels based on weight (mass) rather than volume emerged, dependent on both commodity and state. This lack of uniformity, which was crippling US economic growth, led the National Bureau of Standards in 1905 to call a meeting of the states to discuss the lack of uniform standards and, in many cases, of regulatory oversight. A meeting was held the following year and subsequently became an annual gathering known as the National Conference on Weights and Measures (NCWM). In 1915 the conference published its first model standards. The bushel was not fully standardized, and the Chicago Mercantile Exchange still (May 2013) uses different bushels for different commodities: a bushel of corn being 56 lb, a bushel of oats 38 lb, a bushel of soybeans 60 lb, and a bushel of red winter wheat (both hard and soft) also 60 lb. Other commodities at the exchange are reckoned in pounds, in short tons or in metric tons. One of the actions taken by Congress was to permit the use of the metric system in trade (1866), a move made at the height of the metrication process in Latin America. Other actions were to ratify the Metre Convention in 1875 and, under the Mendenhall Order of 1893, to redefine the pound and the yard in terms of the international prototype of the kilogram and the international prototype of the metre respectively. In 1901 the administration of weights and measures was handed to a federal agency, the National Bureau of Standards, which in 1988 became the National Institute of Standards and Technology. The NCWM remains the de facto controlling body for weights and measures in the United States, though in respect of international relations, such as membership of the General Conference on Weights and Measures (an intergovernmental organization), the US Government itself has to take the lead.
During the twentieth century, the principal change in the customary system of weights and measures was an agreement between NIST and the corresponding bodies in Australia, Canada, New Zealand, South Africa and the United Kingdom, signed in 1960, that redefined the yard and the pound in terms of the metre and the kilogram respectively. These new units became known as the international yard and pound. Congress has neither endorsed nor repudiated this action. (See ).

Energy, power, and temperature

Imperial and US customary units have long been used in many branches of engineering. Two of the earliest such units of measure to come into use were the horsepower and the degree Fahrenheit. The horsepower was defined by James Watt in 1782 as the power required to raise 33,000 pounds of water through a height of one foot in one minute, and the degree Fahrenheit was first defined by Daniel Fahrenheit in about 1713 as part of a temperature scale having its lower calibration point (0 °F) at the temperature at which a supersaturated salt/ice mixture froze and its upper calibration point at body temperature (96 °F). In 1777 the Royal Society, under the chairmanship of Henry Cavendish, proposed that the definition of the Fahrenheit scale be modified such that the temperature corresponding to the melting point of ice be 32 °F and the boiling point of water under standard atmospheric conditions be 212 °F. The British thermal unit (Btu) is defined as the heat needed to raise the temperature of one pound of water by one degree Fahrenheit. It was in use before 1859 as a unit of heat based on imperial units rather than the metric units used by the French, Clément-Desormes having defined the calorie in terms of the kilogram and degrees Celsius ('centigrade') in 1824. In 1873 a committee of the British Association for the Advancement of Science, under the chairmanship of William Thomson (Lord Kelvin), introduced the concept of coherence into units of measure and proposed the names dyne and erg for the units of force and work in the CGS system of units. Two years later James Thomson, older brother of William Thomson, introduced the term poundal as a coherent unit of force in the foot–pound–second (FPS) system of measurement. The FPS unit of work is the foot-poundal. Other systems for the measurement of dynamic quantities that use imperial and US customary units are the British Gravitational System (BG), proposed by Arthur Mason Worthington, and the English Engineering System (EE). Both systems depend on the gravitational acceleration and use the pound-force as the unit of force, but they apply Newton's laws of motion in different ways. In the BG system, force, rather than mass, has a base unit, while the slug is a derived unit of inertia (rather than mass). The EE system, on the other hand, introduces the acceleration due to gravity (g) into its equations. Both approaches led to slight variations in the meaning of the pound-force (and also of the kilogram-force) in different parts of the world. Various countries published standard values to be used for g, and in 1901 the CGPM published a standard value for g for use in the "International Service of Weights and Measures", namely 9.80665 m/s², which is equal to the value of g at 45° latitude.
Newton's second law in these systems becomes:

BG: force (lbf) = inertia (slugs) × acceleration (ft/s²)
EE: force (lbf) = mass (lb) × acceleration (ft/s²) ÷ g
AE: force (poundals) = mass (lb) × acceleration (ft/s²)

AE is ignored in many engineering courses and textbooks, while some, such as Darby, use only EE (alongside SI), describing the BG and AE systems as "archaic" (a numerical comparison of the three conventions is sketched at the end of this section).

Metric equivalents

The standard yard and [troy] pound were lost in 1834 when a fire partially destroyed the Palace of Westminster. Following a report published in 1841 by a commission, a new standard yard and pound were manufactured using the best available secondary sources. Unlike the previous standard, the new pound standard, made of platinum, was a pound avoirdupois. The new yard, slightly longer than a yard to prevent the wear experienced with the , was made of brass and had two gold plugs close to its ends. Scratch marks on the plugs denoted the length of the yard. They were accepted by an Act of Parliament as the standards for length and weight in 1855. Following the debacle over the different gallons that had been adopted by the United States and the United Kingdom thirty years earlier, one of the copies of the standard yard and avoirdupois pound (known in the United States as the "Mint pound") was offered to and accepted by the United States government. In the years that followed the passing of the 1878 act, the standard imperial yard was found to be shrinking at a rate, confirmed in 1950, of nearly one part per million every 30 years. On the other hand, the international prototype metre, manufactured by a British firm from a platinum-iridium alloy rather than brass, which in 1889 replaced the as the standard for the metre, was found to be more stable than the standard yard. Both the United States and the United Kingdom, as signatories of the Metre Convention, took delivery of copies of both the standard metre and the standard kilogram. The "Mint pound" was also found to be of poor workmanship. In 1866 the United States government legalised the use of metric units in contract law, defining them in terms of the equivalent customary units to five significant figures, which was sufficient for the purposes of trade. In 1893, under the Mendenhall Order, the United States abandoned the 1855 yard as its standard of length and the "Mint pound" as its standard of mass, redefining them in terms of the metre and kilogram using the values of the 1866 legislation. In the United Kingdom, fresh comparisons of the imperial and metric standards of length and mass were made and were used in the Weights and Measures (Metric System) Act 1897 (60 & 61 Vict. c. 46) to redefine the yard and pound in terms of the metre and kilogram respectively. In addition, the definitions of both the yard and the pound in terms of the artifacts held by the British government were reaffirmed, giving both the yard and the pound two different definitions. The differences between the British and the US yard and pound were of the order of a few parts per million. By the end of the Second World War, the standards laboratories of Canada, Australia, New Zealand and South Africa also had their own copies of the pound and the yard. These legal and technical discrepancies, described by McGreevy (pg 290) as "unsound", led to the Commonwealth Science Conference of 1946 proposing that the Commonwealth countries and the United States should all redefine the yard and the pound in terms of an agreed fraction of the metre and kilogram respectively.
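As flagged above, the three force-unit conventions can be illustrated numerically. A minimal sketch, assuming the standard value g = 32.174 ft/s² (variable names are illustrative):

```python
# Force required to accelerate a 10 lb mass at 5 ft/s^2, expressed in
# the BG, EE and AE conventions listed earlier in this section.
g = 32.174            # standard acceleration of gravity, ft/s^2

m_lb = 10.0           # mass in pounds
a = 5.0               # acceleration in ft/s^2

f_ee = m_lb * a / g   # EE: lbf = lb x (ft/s^2) / g
f_ae = m_lb * a       # AE: poundals = lb x (ft/s^2)
f_bg = (m_lb / g) * a # BG: lbf = slugs x (ft/s^2); 10 lb = 10/g slugs

print(f"EE: {f_ee:.3f} lbf")        # -> 1.554 lbf
print(f"AE: {f_ae:.1f} poundals")   # -> 50.0 poundals
print(f"BG: {f_bg:.3f} lbf")        # -> 1.554 lbf
# The three answers agree: 1 lbf = g poundals ≈ 32.174 poundals,
# so 50 poundals = 50 / 32.174 lbf ≈ 1.554 lbf.
```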
Agreement was reached by the standards laboratories in 1960 to redefine the yard and the pound as

1 international yard = 0.9144 metres
1 international pound = 0.45359237 kilograms

The final digit of the value for the pound was chosen so as to make the number divisible by 7 without a repeating decimal, making the grain exactly 64.79891 milligrams. This agreement was ratified by the United Kingdom in 1963, while Canada pre-empted the decision by adopting these values in 1951, nine years ahead of the full international agreement. The United States Congress has neither ratified nor repudiated the agreement.

Comparison of imperial and US customary systems

Prior to 1960, the imperial and customary yard and pound were sufficiently close to each other that for most practical purposes the differences in the sizes of units of length, area, volume and mass could be disregarded, though there were differences in usage – for example, in the United States short road distances are specified in feet, while in the United Kingdom they are specified in yards. The introduction of the international yard in 1960 caused small but noticeable effects in surveying in the United States, which resulted in some states retaining the original definitions of the customary units of measure, now known as the survey mile, survey foot, and so on, while other states adopted the international foot. According to the National Institute of Standards and Technology, the survey foot is obsolete as of January 1, 2023, and its use is discouraged. The definitions of units of weight above a pound differed between the customary and the imperial systems – the imperial system employed the stone of 14 pounds, the hundredweight of 8 stone and the ton of 2240 pounds (20 hundredweight), while the customary system of units did not employ the stone but had a hundredweight of 100 pounds and a ton of 2000 pounds. In international trade, the ton of 2240 pounds was often referred to as the "long ton" and the ton of 2000 pounds as the "short ton". When using customary units, it is usual to express body weight in pounds; when using imperial units, stones and pounds are used. In his Plan for Establishing Uniformity in the Coinage, Weights, and Measures of the United States, Thomas Jefferson, then secretary of state, identified 14 different gallons in English statutes varying in size from 224 to 282 cubic inches (3.67 to 4.62 litres). In 1832, in the absence of any direction by Congress, the United States Treasury chose the second smallest gallon, the "Queen Anne gallon" of 231 cubic inches (3.785 litres), to be the official gallon in the United States for fiscal purposes. Sixteen US fluid ounces make a US pint, and 8 pints make 1 gallon in both the customary and imperial systems. During the reform of weights and measures legislation in the United Kingdom in 1824, the old gallons were replaced by the new imperial gallon, which was defined to be the volume of 10 pounds of water at 62 °F (17 °C) and was determined experimentally to be 277.42 cubic inches (4.54609 litres). Twenty imperial fluid ounces make an imperial pint, the imperial fluid ounce being 0.96 US fluid ounces. The US customary system of units also makes use of a set of dry units of capacity that have names similar to those of liquid capacity, though different volumes: the dry pint has a volume of 33.6 cubic inches (550 ml), against the US fluid pint's volume of 28.875 cubic inches (473 ml) and the imperial pint of 34.68 cubic inches (568 ml). The imperial system of measure does not have an equivalent to the US customary system of "dry measure".
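The divisibility property just described, and the pint and fluid-ounce comparisons above, can be verified directly. A minimal sketch using exact rational arithmetic (variable names are illustrative):

```python
from fractions import Fraction

# The international pound, 0.45359237 kg, expressed in milligrams.
pound_mg = Fraction(45359237, 100)
grain_mg = pound_mg / 7000          # the grain is 1/7000 of a pound
print(grain_mg)                     # -> 6479891/100000 = 64.79891 mg exactly

# Pint comparisons quoted above.
us_pint = Fraction(231, 8)          # cubic inches: 8 pints per 231 in^3 gallon
print(float(us_pint))               # -> 28.875 cubic inches, as stated

# Fluid ounces: 160 imp fl oz per 4.54609 L; 128 US fl oz per 3.785411784 L.
imp_floz_ml = 4546.09 / 160
us_floz_ml = 3785.411784 / 128
print(round(imp_floz_ml / us_floz_ml, 2))   # -> 0.96, as stated
```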
In the international commodities markets, the barrel (, ≈159 litres) is used in both London and New York/Chicago for trading in crude oil, and the troy ounce (≈31.10 grams) for trading in precious metals, except that the London markets use metric units while the Chicago Board of Trade uses customary units.

Units in use

The tables below catalogue the imperial units of measure that were permitted for use in trade in the United Kingdom on the eve of metrication (1976) and the customary "units of measurement that have traditionally been used in the United States". In addition, named units of measure used in the engineering industry are also catalogued. Prior to metrication, the units of measure used in Ireland were the same as those used in the United Kingdom, while those used in the British Commonwealth and in South Africa were in most cases a subset of those used in the United Kingdom, with, in certain cases, local differences. Unless otherwise specified, the units of measure quoted below were used in both the United States and the United Kingdom. The SI equivalents are quoted to four significant figures.

Units of length

In 1893 the United States fixed the yard at metres, making the yard 0.9144018 metres, and in 1896 the British authorities fixed the yard as being 0.9143993 metres. At the time, the discrepancy of about two parts per million was considered insignificant. In 1960, the United Kingdom, United States, Australia, Canada and South Africa standardised their units of length by defining the "international yard" as 0.9144 metres exactly. This change affected land surveyors in the United States and led to the old units being renamed "survey feet", "survey miles", etc. However, the introduction of the metric-based Ordnance Survey National Grid in the United Kingdom in 1938 meant that British surveyors were unaffected by the change.
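The scale of the pre-1960 discrepancy quoted above is easy to make concrete. The following minimal sketch (variable names are illustrative) expresses the difference between the 1893 US and 1896 British yards in parts per million and as a distance over one mile:

```python
# Difference between the 1893 US yard and the 1896 British yard.
us_yard = 0.9144018    # metres (1893 US value, quoted above)
uk_yard = 0.9143993    # metres (1896 British value, quoted above)

ppm = (us_yard - uk_yard) / uk_yard * 1e6
print(f"{ppm:.1f} parts per million")          # -> about 2.7 ppm

# Over a mile of 1760 yards, the two definitions differ by ~4.4 mm.
mm_per_mile = (us_yard - uk_yard) * 1760 * 1000
print(f"{mm_per_mile:.1f} mm per mile")
```

This is consistent with the "about two parts per million" characterization in the text, and it shows why the discrepancy mattered only for precise surveying.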
Physical sciences
Measurement systems
Basics and measurement
54210779
https://en.wikipedia.org/wiki/Berman%20flow
Berman flow
In fluid dynamics, Berman flow is a steady flow created inside a rectangular channel with two equally porous walls. The concept is named after Abraham S. Berman, who formulated the problem in 1953.

Flow description

Consider a rectangular channel whose width is much larger than its height. Let the distance between the top and bottom walls be and choose the coordinates such that lies midway between the two walls, with pointing perpendicular to the planes. Let both walls be porous, with equal velocity . Then the continuity equation and the Navier–Stokes equations for an incompressible fluid become with boundary conditions The boundary conditions at the center are due to symmetry. Since the solution is symmetric about the plane , it is enough to describe only half of the flow, say for . If we look for a solution that is independent of , the continuity equation dictates that the horizontal velocity can be at most a linear function of . Therefore Berman introduced the following form, where is the average value (averaged cross-sectionally) of at , that is to say This constant drops out of the problem and has no influence on the solution. Substituting this into the momentum equation leads to Differentiating the second equation with respect to gives this can be substituted into the first equation, after taking its derivative with respect to , which leads to where is the Reynolds number. Integrating once, we get with boundary conditions This third-order nonlinear ordinary differential equation requires three boundary conditions, and the fourth boundary condition determines the constant . The equation is found to possess multiple solutions. Numerical solutions are straightforward to obtain at low Reynolds numbers, but solving the equation for large Reynolds numbers is not a trivial computation.

Limiting solutions

In the limit , the solution can be written as In the limit , the leading-order solution is given by The above solution satisfies all the necessary boundary conditions even though the Reynolds number is infinite (see also Taylor–Culick flow).

Axisymmetric case

The corresponding problem in porous pipe flows was addressed by S. W. Yuan and A. Finkelstein in 1955.
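Since the equations themselves are elided above, the following sketch restates and solves the problem numerically in the standard similarity notation, which is an assumption here rather than the notation of the text: walls at y = ±h with uniform wall velocity V, η = y/h, and cross-flow Reynolds number Re = Vh/ν. In this notation, Berman's reduction is the third-order ODE f''' + Re (f'² − f f'') = K on 0 ≤ η ≤ 1, with symmetry conditions f(0) = 0 and f''(0) = 0 at the midplane, wall conditions f(1) = 1 and f'(1) = 0, and K determined as part of the solution:

```python
# Numerical solution of the Berman channel-flow ODE (standard notation,
# assumed here since the symbols are elided in the text above):
#     f''' + Re*(f'**2 - f*f'') = K,   0 <= eta <= 1,
#     f(0) = 0, f''(0) = 0  (midplane symmetry),
#     f(1) = 1, f'(1) = 0   (porous wall with uniform cross-flow),
# with the constant K treated as an unknown parameter fixed by the
# fourth boundary condition.
import numpy as np
from scipy.integrate import solve_bvp

Re = 5.0  # modest cross-flow Reynolds number; large Re is much harder

def rhs(eta, y, p):
    f, fp, fpp = y
    K = p[0]
    return np.vstack([fp, fpp, K - Re * (fp**2 - f * fpp)])

def bc(ya, yb, p):
    # Four conditions: three for the third-order ODE plus one for K.
    return np.array([ya[0], ya[2], yb[0] - 1.0, yb[1]])

eta = np.linspace(0.0, 1.0, 41)
# Initial guess: the Re -> 0 solution f = (3*eta - eta**3)/2, for which
# f''' = -3, so the parameter K is started at -3.
y_guess = np.vstack([(3 * eta - eta**3) / 2,
                     (3 - 3 * eta**2) / 2,
                     -3 * eta])
sol = solve_bvp(rhs, bc, eta, y_guess, p=[-3.0])
print("converged:", sol.status == 0, " K =", round(float(sol.p[0]), 4))
```

In the Re → 0 limit the solver reproduces the cubic profile f = (3η − η³)/2, and as Re → ∞ the solution approaches sin(πη/2), the Taylor–Culick profile mentioned in the limiting-solutions discussion above.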
Physical sciences
Fluid mechanics
Physics
52880788
https://en.wikipedia.org/wiki/Hypernova
Hypernova
A hypernova is a very energetic supernova which is believed to result from an extreme core-collapse scenario. In this case, a massive star (>30 solar masses) collapses to form a rotating black hole emitting twin astrophysical jets and surrounded by an accretion disk. It is a type of stellar explosion that ejects material with an unusually high kinetic energy, an order of magnitude higher than most supernovae, with a luminosity at least 10 times greater. Hypernovae release intense gamma rays and often appear similar to a type Ic supernova, but with unusually broad spectral lines indicating an extremely high expansion velocity. Hypernovae are one of the mechanisms for producing long gamma-ray bursts (GRBs), which range from 2 seconds to over a minute in duration. They have also been referred to as superluminous supernovae, though that classification also includes other types of extremely luminous stellar explosions that have different origins.

History

In the 1980s, the term hypernova was used to describe a theoretical type of supernova now known as a pair-instability supernova. It referred to the extremely high energy of the explosion compared to typical core-collapse supernovae. The term had previously been used to describe hypothetical explosions from diverse events such as hyperstars, extremely massive population III stars in the early universe, or events such as black hole mergers. In February 1997, the Dutch–Italian satellite BeppoSAX was able to trace GRB 970508 to a faint galaxy roughly 6 billion light years away. From analyzing the spectroscopic data for both GRB 970508 and its host galaxy, Bloom et al. concluded in 1998 that a hypernova was the likely cause. That same year, hypernovae were hypothesized in greater detail by Polish astronomer Bohdan Paczyński as supernovae from rapidly spinning stars. Since the late 20th century, the usage of the term hypernova has been refined to refer to those supernovae with unusually large kinetic energy. The first hypernova observed was SN 1998bw, with a luminosity 100 times higher than a standard type Ib. This supernova was the first to be associated with a gamma-ray burst (GRB), and it produced a shockwave containing an order of magnitude more energy than a normal supernova. Other scientists prefer to call these objects simply broad-lined type Ic supernovae. Since then the term has been applied to a variety of objects, not all of which meet the standard definition; for example ASASSN-15lh. In 2023, the observation of the highly energetic, non-quasar transient event AT2021lwx was published, with extremely strong emission from mid-infrared to X-ray wavelengths and an overall energy of 1.5 × 10⁴⁶ joules. This object is not thought to be a hypernova; instead, it is likely a huge gas cloud being absorbed by a massive black hole. The event was also assigned the random name "ZTF20abrbeie" by the Zwicky Transient Facility. This name and the seeming ferocity of the event led to the nickname "Scary Barbie", drawing the attention of the mainstream press.

Properties

Hypernovae are thought to be supernovae with ejecta having a kinetic energy larger than about , an order of magnitude higher than a typical core-collapse supernova. The ejected nickel masses are large and the ejection velocities reach up to 99% of the speed of light. Hypernovae are typically of type Ic, and some are associated with long-duration gamma-ray bursts.
The electromagnetic energy released by these events ranges from levels comparable to those of other type Ic supernovae to those of some of the most luminous supernovae known, such as SN 1999as. The archetypal hypernova, SN 1998bw, was associated with GRB 980425. Its spectrum showed no hydrogen and no clear helium features, but strong silicon lines identified it as a type Ic supernova. The main absorption lines were extremely broadened, and the light curve showed a very rapid brightening phase, reaching the brightness of a type Ia supernova at day 16. The total ejected mass was about and the mass of nickel ejected about . All supernovae associated with GRBs have shown the high-energy ejecta that characterises them as hypernovae. Unusually bright radio supernovae have been observed as counterparts to hypernovae and have been termed "radio hypernovae".

Astrophysical models

Models for hypernovae focus on the efficient transfer of energy into the ejecta. In normal core-collapse supernovae, 99% of the neutrinos generated in the collapsing core escape without driving the ejection of material. It is thought that rotation of the supernova progenitor drives a jet that accelerates material away from the explosion at close to the speed of light. Binary systems are increasingly being studied as the best method both for stripping stellar envelopes to leave a bare carbon–oxygen core and for inducing the spin conditions necessary to drive a hypernova.

Collapsar model

The collapsar model describes a type of supernova that produces a gravitationally collapsed object, or black hole. The word "collapsar", short for "collapsed star", was formerly used to refer to the end product of stellar gravitational collapse, a stellar-mass black hole. The word is now sometimes used to refer to a specific model for the collapse of a fast-rotating star. When core collapse occurs in a star with a core at least around fifteen times the Sun's mass () – though chemical composition and rotational rate are also significant – the explosion energy is insufficient to expel the outer layers of the star, and it will collapse into a black hole without producing a visible supernova outburst. A star with a core mass slightly below this level, in the range of , will undergo a supernova explosion, but so much of the ejected mass falls back onto the core remnant that it still collapses into a black hole. If such a star is rotating slowly, then it will produce a faint supernova, but if the star is rotating quickly enough, then the fallback onto the black hole will produce relativistic jets. These powerful jets plough through the stellar material and produce strong shock waves, while vigorous winds blow newly formed 56Ni off the accretion disk, detonating the hypernova explosion. The radioactive decay of the ejected 56Ni renders the visible outburst substantially more luminous than a standard supernova. The jets also beam high-energy particles and gamma rays directly outward and thereby produce X-ray or gamma-ray bursts; the jets can last for several seconds or longer and correspond to long-duration gamma-ray bursts, but they do not appear to explain short-duration gamma-ray bursts.

Binary models

The stripped progenitor of type Ic supernovae, a carbon–oxygen star lacking any significant hydrogen or helium, was once thought to be produced by an extremely evolved massive star, for example a type WO Wolf–Rayet star whose dense stellar wind expelled all its outer layers. Observations have failed to detect any such progenitors.
It has still not been conclusively shown that the progenitors are actually a different type of object, but several cases suggest that lower-mass "helium giants" are the progenitors. These stars are not sufficiently massive to expel their envelopes simply by stellar winds, and they would instead be stripped by mass transfer to a binary companion. Helium giants are increasingly favoured as the progenitors of type Ib supernovae, but the progenitors of type Ic supernovae are still uncertain. One proposed mechanism for producing gamma-ray bursts is induced gravitational collapse, where a neutron star is triggered to collapse into a black hole by the core collapse of a close companion consisting of a stripped carbon-oxygen core. The induced neutron star collapse allows for the formation of jets and high-energy ejecta that have been difficult to model from a single star.
Physical sciences
Stellar astronomy
Astronomy
59126142
https://en.wikipedia.org/wiki/Open%20source
Open source
Open source is source code that is made freely available for possible modification and redistribution. Products include permission to use and view the source code, design documents, or content of the product. The open source model is a decentralized software development model that encourages open collaboration. A main principle of open source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open source movement in software began as a response to the limitations of proprietary code. The model is used for projects such as open-source appropriate technology and open-source drug discovery. Open source promotes universal access via an open-source or free license to a product's design or blueprint, and universal redistribution of that design or blueprint. Before the phrase open source became widely adopted, developers and producers used a variety of other terms, such as free software, shareware, and public domain software. Open source took hold with the rise of the Internet. The open-source software movement arose to clarify copyright, licensing, domain, and consumer issues. Generally, open source refers to a computer program in which the source code is available to the general public for use or modification from its original design. Code is released under the terms of a software license. Depending on the license terms, others may then download, modify, and publish their version (fork) back to the community. Many large formal institutions have sprung up to support the development of the open-source movement, including the Apache Software Foundation, which supports community projects such as the open-source framework Apache Hadoop and the open-source web server Apache HTTP Server. History The sharing of technical information predates the Internet and the personal computer considerably. For instance, in the early years of automobile development, a group of capital monopolists owned the rights to a 2-cycle gasoline-engine patent originally filed by George B. Selden. By controlling this patent, they were able to monopolize the industry and force car manufacturers to adhere to their demands, or risk a lawsuit. In 1911, independent automaker Henry Ford won a challenge to the Selden patent. The result was that the Selden patent became virtually worthless and a new association (which would eventually become the Motor Vehicle Manufacturers Association) was formed. The new association instituted a cross-licensing agreement among all US automotive manufacturers: although each company would develop technology and file patents, these patents were shared openly and without the exchange of money among all the manufacturers. By the time the US entered World War II, 92 Ford patents and 515 patents from other companies were being shared among these manufacturers, without any exchange of money (or lawsuits). Early instances of the free sharing of source code include IBM's source releases of its operating systems and other programs in the 1950s and 1960s, and the SHARE user group that formed to facilitate the exchange of software. Beginning in the 1960s, ARPANET researchers used an open "Request for Comments" (RFC) process to encourage feedback in early telecommunication network protocols. This led to the birth of the early Internet in 1969. The sharing of source code on the Internet began when the Internet was relatively primitive, with software distributed via UUCP, Usenet, IRC, and Gopher.
BSD, for example, was first widely distributed by posts to Usenet newsgroups, which is also where its development was discussed. Linux followed this model. Open source as a term The term open source was coined in the late 1990s by a group of people in the free software movement who were critical of the political agenda and moral philosophy implied in the term "free software" and sought to reframe the discourse to reflect a more commercially minded position. In addition, the ambiguity of the term "free software" was seen as discouraging business adoption. However, the ambiguity of the word "free" exists primarily in English, as it can also refer to cost. The group included Christine Peterson, Todd Anderson, Larry Augustin, Jon Hall, Sam Ockman, Michael Tiemann and Eric S. Raymond. Peterson suggested "open source" at a meeting held in Palo Alto, California, in reaction to Netscape's announcement in January 1998 of a source code release for Navigator. Linus Torvalds gave his support the following day, and Phil Hughes backed the term in Linux Journal. Richard Stallman, who had founded the Free Software Foundation (FSF) in 1985, quickly decided against endorsing the term. The FSF's goal was to promote the development and use of free software, which they defined as software that grants users the freedom to run, study, share, and modify the code. This concept is similar to open source but places a greater emphasis on the ethical and political aspects of software freedom. Netscape released its source code under the Netscape Public License and later under the Mozilla Public License. Raymond was especially active in the effort to popularize the new term. He made the first public call to the free software community to adopt it in February 1998. Shortly after, he founded the Open Source Initiative in collaboration with Bruce Perens. The term gained further visibility through an event organized in April 1998 by technology publisher Tim O'Reilly. Originally titled the "Freeware Summit" and later known as the "Open Source Summit", the event was attended by the leaders of many of the most important free and open-source projects, including Linus Torvalds, Larry Wall, Brian Behlendorf, Eric Allman, Guido van Rossum, Michael Tiemann, Paul Vixie, Jamie Zawinski, and Eric Raymond. At that meeting, alternatives to the term "free software" were discussed. Tiemann argued for "sourceware" as a new term, while Raymond argued for "open source." The assembled developers took a vote, and the winner was announced at a press conference the same evening. Economics Some economists agree that open source is an information good or "knowledge good" whose original creation involves a significant amount of time, money, and effort. The cost of reproducing the work is low enough that additional users may be added at zero or near-zero cost; this is referred to as the marginal cost of a product. Copyright creates a monopoly, so the price charged to consumers can be significantly higher than the marginal cost of production. This allows the author to recoup the cost of making the original work. Copyright thus creates access costs for consumers who value the work more than the marginal cost but less than the initial production cost. Access costs also pose problems for authors who wish to create a derivative work—such as a copy of a software program modified to fix a bug or add a feature, or a remix of a song—but are unable or unwilling to pay the copyright holder for the right to do so.
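As a toy numeric illustration of the marginal-cost argument above, the following sketch (Python; every figure is invented for illustration, not taken from the article) shows how the average cost of supplying an information good collapses toward its near-zero marginal cost as the number of copies grows:

# Illustrative economics of an information good: a large one-time creation
# cost plus a near-zero marginal cost per copy. All numbers are hypothetical.

FIXED_COST = 100_000.0   # assumed one-time cost of creating the original work
MARGINAL_COST = 0.01     # assumed cost of distributing one additional copy

def average_cost(copies: int) -> float:
    """Total cost per copy served; approaches MARGINAL_COST as copies grow."""
    return (FIXED_COST + MARGINAL_COST * copies) / copies

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} copies -> average cost per copy: ${average_cost(n):.4f}")

With these assumed numbers, the average cost falls from roughly $100 per copy at a thousand copies to effectively the marginal cost at ten million; the gap between that marginal cost and a monopoly price is the access cost the text describes.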
Being organized as effectively a "consumers' cooperative", open source eliminates some of the access costs of consumers and creators of derivative works by reducing the restrictions of copyright. Basic economic theory predicts that lower costs would lead to higher consumption and also more frequent creation of derivative works. Organizations such as Creative Commons host websites where individuals can file for alternative "licenses", or levels of restriction, for their works. These self-made protections free society at large from the costs of policing copyright infringement. Others argue that since consumers do not pay for their copies, creators are unable to recoup the initial cost of production and thus have little economic incentive to create in the first place. By this argument, consumers would lose out because some of the goods they would otherwise purchase would not be available. In practice, content producers can choose whether to adopt a proprietary license and charge for copies, or an open license. Some goods which require large amounts of professional research and development, such as pharmaceuticals (an industry that depends largely on patents, not copyright, for intellectual property protection), are almost exclusively proprietary, although increasingly sophisticated technologies are being developed on open-source principles. There is evidence that open-source development creates enormous value. For example, in the context of open-source hardware design, digital designs are shared for free and anyone with access to digital manufacturing technologies (e.g. RepRap 3D printers) can replicate the product for the cost of materials. The original sharer may receive feedback and potentially improvements on the original design from the peer production community. Many open-source projects have a high economic value. The Battery Open Source Software Index (BOSS) ranks the ten economically most important open-source projects, based on project activity in online discussions and on GitHub, on search activity in search engines, and on influence on the labour market. Licensing alternatives Alternative arrangements have also been shown to result in good creation outside of the proprietary license model. Examples include: Creation for its own sake – For example, Wikipedia editors add content for recreation. Artists have a drive to create. Both communities benefit from free starting material. Voluntary after-the-fact donations – used by shareware, street performers, and public broadcasting in the United States. Patron – For example, open-access publishing relies on institutional and government funding of research faculty, who also have a professional incentive to publish for reputation and career advancement. Works of the US government are automatically released into the public domain. Freemium – Give away a limited version for free and charge for a premium version (potentially using a dual license). Give away the product and charge something related – charge for support of open-source enterprise software, give away music but charge for concert admission. Give away work to gain market share – used by artists, and in corporate software to spoil a dominant competitor (for example in the browser wars and the Android operating system). For own use – Businesses or individual software developers often create software to solve a problem, bearing the full cost of initial creation.
They will then open source the solution, and benefit from the improvements others make for their own needs. Communalizing the maintenance burden distributes the cost across more users; free riders can also benefit without undermining the creation process. Blockchain-based licensing – Developers register their contributions on a blockchain, and when usage licenses are generated, the revenue is shared through the blockchain. Open collaboration The open-source model is a decentralized software development model that encourages open collaboration, meaning "any system of innovation or production that relies on goal-oriented yet loosely coordinated participants who interact to create a product (or service) of economic value, which they make available to contributors and noncontributors alike." A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open-source movement in software began as a response to the limitations of proprietary code. The model is used for projects such as in open-source appropriate technology, and open-source drug discovery. The open-source model for software development inspired the use of the term to refer to other forms of open collaboration, such as in Internet forums, mailing lists and online communities. Open collaboration is also thought to be the operating principle underlying a gamut of diverse ventures, including TEDx and Wikipedia. Open collaboration is the principle underlying peer production, mass collaboration, and wikinomics. It was observed initially in open-source software, but can also be found in many other instances, such as in Internet forums, mailing lists, Internet communities, and many instances of open content, such as Creative Commons. It also explains some instances of crowdsourcing, collaborative consumption, and open innovation. Riehle et al. define open collaboration as collaboration based on three principles of egalitarianism, meritocracy, and self-organization. Levine and Prietula define open collaboration as "any system of innovation or production that relies on goal-oriented yet loosely coordinated participants who interact to create a product (or service) of economic value, which they make available to contributors and noncontributors alike." This definition captures multiple instances, all joined by similar principles. For example, all of the elements – goods of economic value, open access to contribute and consume, interaction and exchange, purposeful yet loosely coordinated work – are present in an open-source software project, in Wikipedia, or in a user forum or community. They can also be present in a commercial website that is based on user-generated content. In all of these instances of open collaboration, anyone can contribute and anyone can freely partake in the fruits of sharing, which are produced by interacting participants who are loosely coordinated. An annual conference dedicated to the research and practice of open collaboration is the International Symposium on Wikis and Open Collaboration (OpenSym, formerly WikiSym). As per its website, the group defines open collaboration as "collaboration that is egalitarian (everyone can join, no principled or artificial barriers to participation exist), meritocratic (decisions and status are merit-based rather than imposed) and self-organizing (processes adapt to people rather than people adapt to pre-defined processes)."
Open-source license Open source promotes universal access via an open-source or free license to a product's design or blueprint, and universal redistribution of that design or blueprint. Before the phrase open source became widely adopted, developers and producers used a variety of other terms. Open source took hold in part due to the rise of the Internet. The open-source software movement arose to clarify copyright, licensing, domain, and consumer issues. An open-source license is a type of license for computer software and other products that allows the source code, blueprint or design to be used, modified or shared (with or without modification) under defined terms and conditions. This allows end users and commercial companies to review and modify the source code, blueprint or design for their own customization, curiosity or troubleshooting needs. Open-source licensed software is mostly available free of charge, though this does not necessarily have to be the case. Licenses that only permit non-commercial redistribution, or modification of the source code for personal use, are generally not considered open-source licenses. However, open-source licenses may have some restrictions, particularly regarding acknowledgment of the origin of the software, such as a requirement to preserve the names of the authors and a copyright statement within the code, or a requirement to redistribute the licensed software only under the same license (as in a copyleft license). One popular set of open-source software licenses is those approved by the Open Source Initiative (OSI) based on their Open Source Definition (OSD). Applications Social and political views have been affected by the growth of the concept of open source. Advocates in one field often support the expansion of open source in other fields. But Eric Raymond and other founders of the open-source movement have sometimes publicly argued against speculation about applications outside software, saying that strong arguments for software openness should not be weakened by overreaching into areas where the story may be less compelling. The broader impact of the open-source movement, and the extent of its role in the development of new information sharing procedures, remain to be seen. The open-source movement has inspired increased transparency and liberty in biotechnology research, for example at CAMBIA. Even the research methodologies themselves can benefit from the application of open-source principles. It has also given rise to the rapidly expanding open-source hardware movement. Computer software Open-source software is software whose source code is published and made available to the public, enabling anyone to copy, modify and redistribute the source code without paying royalties or fees. LibreOffice and the GNU Image Manipulation Program are examples of open-source software. As they do with proprietary software, users must accept the terms of a license when they use open-source software—but the legal terms of open-source licenses differ dramatically from those of proprietary licenses. Open-source code can evolve through community cooperation. These communities are composed of individual programmers as well as large companies. Some of the individual programmers who start an open-source project may end up establishing companies offering products or services incorporating open-source programs.
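In practice, the attribution conditions described above are often satisfied by a short notice kept with the source itself. A minimal sketch of such a header on a trivial Python module (the author name is hypothetical and the license choice is just one example; the SPDX identifier scheme is a real, widely used convention):

# SPDX-License-Identifier: MIT
# Copyright (c) 2024 Example Author
#
# Redistribution of this file, with or without modification, must retain
# the copyright notice above -- the kind of attribution requirement an
# open-source license may impose without restricting use.

def hello(name: str) -> str:
    """Placeholder function standing in for the licensed code."""
    return f"Hello, {name}!"

Anyone who downloads, modifies, and republishes (forks) the file keeps the notice with it, which is what distinguishes an attribution condition from a restriction on use.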
Examples of open-source software products are: Linux (which runs much of the world's servers) MediaWiki (which Wikipedia is based upon) Many more: List of free and open-source software packages List of formerly proprietary software The Google Summer of Code, often abbreviated to GSoC, is an international annual program in which Google awards stipends to contributors who successfully complete a free and open-source software coding project during the summer. GSoC is a large-scale program, with 202 participating organizations in 2021. There are similar smaller-scale projects, such as the Talawa Project run by the Palisadoes Foundation (a non-profit based in California, originally founded to promote the use of information technology in Jamaica, but now also supporting underprivileged communities in the US). Electronics Open-source hardware is hardware whose initial specification, usually in a software format, is published and made available to the public, enabling anyone to copy, modify and redistribute the hardware and source code without paying royalties or fees. Open-source hardware evolves through community cooperation. These communities are composed of individual hardware/software developers, hobbyists, and very large companies. Examples of open-source hardware initiatives are: Openmoko: a family of open-source mobile phones, including the hardware specification and the operating system. OpenRISC: an open-source microprocessor family, with the architecture specification licensed under the GNU GPL and the implementation under the LGPL. Sun Microsystems's OpenSPARC T1 multicore processor, which Sun released under the GPL. Arduino, a microcontroller platform for hobbyists, artists and designers. Simputer, an open hardware handheld computer, designed in India for use in environments where computing devices such as personal computers are deemed inappropriate. LEON: a family of open-source microprocessors distributed in a library with peripheral IP cores, an open SPARC V8 specification, and an implementation available under the GNU GPL. Tinkerforge: a system of open-source stackable microcontroller building blocks, which allows motors to be controlled and sensors to be read out using the programming languages C, C++, C#, Object Pascal, Java, PHP, Python and Ruby over a USB or Wi-Fi connection on Windows, Linux and Mac OS X. All of the hardware is licensed under the CERN OHL (CERN Open Hardware License). Open Compute Project: designs for computer data centers, including power supplies, Intel motherboards, AMD motherboards, chassis, racks, battery cabinets, and aspects of electrical and mechanical design. Food and beverages Some publishers of open-access journals have argued that data from food science and gastronomy studies should be freely available to aid reproducibility. A number of people have published Creative Commons-licensed recipe books. Open-source colas – cola soft drinks, similar to Coca-Cola and Pepsi, whose recipes are open source and developed by volunteers. The taste is said to be comparable to that of the standard beverages. Most corporations producing beverages keep their formulas secret and unknown to the general public. Free Beer (originally Vores Øl) – an open-source beer created by students at the IT University of Copenhagen together with Superflex, an artist collective, to illustrate how open-source concepts might be applied outside the digital world. Digital content Open-content projects organized by the Wikimedia Foundation – Sites such as Wikipedia and Wiktionary have embraced the open-content Creative Commons content licenses.
These licenses were designed to adhere to principles similar to various open-source software development licenses. Many of these licenses ensure that content remains free for re-use, that source documents are made readily available to interested parties, and that changes to content are accepted easily back into the system. Important sites embracing open-source-like ideals are Project Gutenberg and Wikisource, both of which post many books whose copyright has expired and which are thus in the public domain, ensuring that anyone has free, unlimited access to that content. Open ICEcat is an open catalog for the IT, CE and lighting sectors, with product data-sheets based on the Open Content License agreement. The digital content is distributed in XML and URL formats. SketchUp's 3D Warehouse is an open-source design community centered around the use of proprietary software that is distributed free of charge. The University of Waterloo Stratford Campus invites students every year to use its three-storey Christie MicroTiles wall as a digital canvas for their creative work. Medicine Pharmaceuticals – There have been several proposals for open-source pharmaceutical development, which led to the establishment of the Tropical Disease Initiative and the Open Source Drug Discovery for Malaria Consortium. Genomics – The term "open-source genomics" refers to the combination of rapid release of sequence data (especially raw reads) and crowdsourced analyses from bioinformaticians around the world that characterised the analysis of the 2011 E. coli O104:H4 outbreak. OpenEMR – OpenEMR is an ONC-ATCB Ambulatory EHR 2011–2012 certified electronic health records and medical practice management application. It features fully integrated electronic health records, practice management, scheduling, and electronic billing, and is the base for many EHR programs. Science and engineering Research – The Science Commons was created as an alternative to the expensive legal costs of sharing and reusing scientific works in journals etc. Research – The Open Solar Outdoors Test Field (OSOTF) is a grid-connected photovoltaic test system, which continuously monitors the output of a number of photovoltaic modules and correlates their performance to a long list of highly accurate meteorological readings. The OSOTF is organized under open-source principles – all data and analysis are to be made freely available to the entire photovoltaic community and the general public. Engineering – Hyperloop, a form of high-speed transport proposed by entrepreneur Elon Musk, which he describes as "an elevated, reduced-pressure tube that contains pressurized capsules driven within the tube by a number of linear electric motors". Construction – WikiHouse is an open-source project for designing and building houses. Energy research – The Open Energy Modelling Initiative promotes open-source models and open data in energy research and policy advice. Robotics An open-source robot is a robot whose blueprints, schematics, or source code are released under an open-source model. Other Open-source principles can be applied to technical areas such as digital communication protocols and data storage formats. Open design – which involves applying open-source methodologies to the design of artifacts and systems in the physical world. It is very nascent but has huge potential. Open-source appropriate technology (OSAT) refers to technologies that are designed in the same fashion as free and open-source software.
These technologies must be "appropriate technology" (AT) – meaning technology that is designed with special consideration to the environmental, ethical, cultural, social, political, and economic aspects of the community it is intended for. An example of this application is the use of open-source 3D printers like the RepRap to manufacture appropriate technology. Teaching – which involves applying the concepts of open source to instruction, using a shared web space as a platform to improve upon learning, organizational, and management challenges. An example of open-source courseware is the Java Education & Development Initiative (JEDI). Other examples include Khan Academy and Wikiversity. At the university level, the use of open-source appropriate technology classroom projects has been shown to be successful in forging the connection between science/engineering and social benefit: this approach has the potential to use university students' access to resources and testing equipment in furthering the development of appropriate technology. Similarly, OSAT has been used as a tool for improving service learning. There are few examples of business information (methodologies, advice, guidance, practices) using the open-source model, although this is another case where the potential is enormous. ITIL is close to open source. It uses the Cathedral model (no mechanism exists for user contribution) and the content must be bought for a fee that is small by business consulting standards (hundreds of British pounds). Various checklists are published by governments, banks or accounting firms. An open-source group emerged in 2012 that is attempting to design a firearm that may be downloaded from the internet and "printed" on a 3D printer. Calling itself Defense Distributed, the group wants to facilitate "a working plastic gun that could be downloaded and reproduced by anybody with a 3D printer". Agrecol, a German NGO, has developed an open-source licence for seeds operating with copyleft and created OpenSourceSeeds as a corresponding service provider. Breeders who apply the license to their newly developed material protect it from the threat of privatisation and help to establish a commons-based breeding sector as an alternative to the commercial sector. Open Source Ecology, farm equipment and global village construction kit. "Open" versus "free" versus "free and open" Free and open-source software (FOSS) or free/libre and open-source software (FLOSS) is openly shared source code that is licensed without any restrictions on usage, modification, or distribution. Confusion persists about this definition because "free", also known as "libre", refers to the freedom of the product, not the price, expense, cost, or charge. For example, "being free to speak" is not the same as "free beer". Conversely, Richard Stallman argues the "obvious meaning" of the term "open source" is that the source code is public/accessible for inspection, without necessarily any other rights granted, although the proponents of the term say the conditions in the Open Source Definition must be fulfilled. "Free and open" should not be confused with public ownership (state ownership), deprivatization (nationalization), anti-privatization (anti-corporate activism), or transparent behavior.
GNU GNU Manifesto Richard Stallman Gratis versus libre (no cost vs no restriction) Software Generally, open source refers to a computer program in which the source code is available to the general public for use for any (including commercial) purpose, or modification from its original design. Open-source code is meant to be a collaborative effort, where programmers improve upon the source code and share the changes within the community. Code is released under the terms of a software license. Depending on the license terms, others may then download, modify, and publish their version (fork) back to the community. List of free and open-source software packages Open-source license, a copyright license that makes the source code available with a product The Open Source Definition, as used by the Open Source Initiative for open source software Open-source model, a decentralized software development model that encourages open collaboration Open-source software, software which permits the use and modification of its source code History of free and open-source software Open-source software advocacy Open-source software development Open-source-software movement Open-source video games List of open-source video games Business models for open-source software Comparison of open-source and closed-source software Diversity in open-source software MapGuide Open Source, a web-based map-making platform to develop and deploy web mapping applications and geospatial web services (not to be confused with OpenStreetMap (OSM), a collaborative project to create a free editable map of the world). Hardware RISC-V Agriculture, economy, manufacturing and production Open-source appropriate technology (OSAT), is designed for environmental, ethical, cultural, social, political, economic, and community aspects Open-design movement, development of physical products, machines and systems via publicly shared design information, including free and open-source software and open-source hardware, among many others: Open Architecture Network, improving global living conditions through innovative sustainable design OpenCores, a community developing digital electronic open-source hardware Open Design Alliance, develops Teigha, a software development platform to create engineering applications including CAD software Open Hardware and Design Alliance (OHANDA), sharing open hardware and designs via free online services Open Source Ecology (OSE), a network of farmers, engineers, architects and supporters striving to manufacture the Global Village Construction Set (GVCS) OpenStructures (OSP), a modular construction model where everyone designs on the basis of one shared geometrical OS grid Open manufacturing or "Open Production" or "Design Global, Manufacture Local", a new socioeconomic production model to openly and collaboratively produce and distribute physical objects Open-source architecture (OSArc), emerging procedures in imagination and formation of virtual and real spaces within an inclusive universal infrastructure Open-source cola, cola soft drinks made to open-sourced recipes Open-source hardware, or open hardware, computer hardware, such as microprocessors, that is designed in the same fashion as open source software List of open-source hardware projects Open-source product development (OSPD), collaborative product and process openness of open-source hardware for any interested participants Open-source robotics, physical artifacts of the subject are offered by the open design movement Open Source Seed Initiative, open source 
varieties of crop seeds, as an alternative to patent-protected seeds sold by large agriculture companies. Science and medicine Open science, the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional Open science data, a type of open data focused on publishing observations and results of scientific activities available for anyone to analyze and reuse Open Science Framework and the Center for Open Science Open Source Lab (disambiguation), several laboratories Open-Source Lab (book), a 2014 book by Joshua M. Pearce Open-notebook science, the practice of making the entire primary record of a research project publicly available online as it is recorded Open Source Physics (OSP), a National Science Foundation and Davidson College project to spread the use of open source code libraries that take care of much of the heavy lifting for physics Open Source Geospatial Foundation NASA Open Source Agreement (NOSA), an OSI-approved software license List of open-source software for mathematics List of open-source bioinformatics software List of open-source health software List of open-source health hardware Media Open-source film, open source movies List of open-source films Open Source Cinema, a collaborative website to produce a documentary film Open-source journalism, which commonly describes a spectrum of online publications, forms of innovative publishing of online journalism, and content voting, rather than the sourcing of news stories by "professional" journalists Open-source investigation
Technology
Basics_4
null
39923620
https://en.wikipedia.org/wiki/Lithium%20%28medication%29
Lithium (medication)
Certain lithium compounds, also known as lithium salts, are used as psychiatric medication, primarily for bipolar disorder and for major depressive disorder. Lithium is taken orally (by mouth). Common side effects include increased urination, shakiness of the hands, and increased thirst. Serious side effects include hypothyroidism, diabetes insipidus, and lithium toxicity. Blood level monitoring is recommended to decrease the risk of potential toxicity. If levels become too high, diarrhea, vomiting, poor coordination, sleepiness, and ringing in the ears may occur. Lithium is teratogenic and can cause birth defects at high doses, especially during the first trimester of pregnancy. The use of lithium while breastfeeding is controversial; however, many international health authorities advise against it, and the long-term outcomes of perinatal lithium exposure have not been studied. The American Academy of Pediatrics lists lithium as contraindicated for pregnancy and lactation. The United States Food and Drug Administration categorizes lithium as having positive evidence of risk for pregnancy and possible hazardous risk for lactation. Lithium salts are classified as mood stabilizers. Lithium's mechanism of action is not known. In the nineteenth century, lithium was used in people who had gout, epilepsy, and cancer. Its use in the treatment of mental disorders began with Carl Lange in Denmark and William Alexander Hammond in New York City, who used lithium to treat mania from the 1870s onwards, based on now-discredited theories involving its effect on uric acid. Use of lithium for mental disorders was re-established (on a different theoretical basis) in 1948 by John Cade in Australia. Lithium carbonate is on the World Health Organization's List of Essential Medicines, and is available as a generic medication. In 2022, it was the 212th most commonly prescribed medication in the United States, with more than 1 million prescriptions. It appears to be underused in older people and in certain countries, for reasons including patients' negative beliefs about lithium. Medical uses In 1970, lithium was approved by the United States Food and Drug Administration (FDA) for the treatment of bipolar disorder, which remains its primary use in the US. It is sometimes used when other treatments are not effective in a number of other conditions, including major depression, schizophrenia, disorders of impulse control, and some psychiatric disorders in children. Because the FDA has not approved lithium for the treatment of other disorders, such use is off-label. Bipolar disorder Lithium is primarily used as a maintenance drug in the treatment of bipolar disorder to stabilize mood and prevent manic episodes, but it may also be helpful in the acute treatment of manic episodes. Although recommended by treatment guidelines for the treatment of depression in bipolar disorder, the evidence that lithium is superior to placebo for acute depression is low-quality; atypical antipsychotics are considered more effective for treating acute depressive episodes. Lithium carbonate treatment was previously considered to be unsuitable for children; however, more recent studies show its effectiveness for treatment of early-onset bipolar disorder in children as young as eight. The required dosage is slightly less than the toxic level (representing a low therapeutic index), requiring close monitoring of blood levels of lithium carbonate during treatment. Within the therapeutic range, there is a dose-response relationship.
A limited amount of evidence suggests lithium carbonate may contribute to the treatment of substance use disorders for some people with bipolar disorder. Although it is believed that lithium prevents suicide in people with bipolar disorder, a 2022 systematic review found that "Evidence from randomised trials is inconclusive and does not support the idea that lithium prevents suicide or suicidal behaviour." Schizophrenic disorders Lithium is recommended for the treatment of schizophrenic disorders only after other antipsychotics have failed; it has limited effectiveness when used alone. The results of different clinical studies of the efficacy of combining lithium with antipsychotic therapy for treating schizophrenic disorders have varied. Major depressive disorder Lithium is widely prescribed as an adjunct treatment for depression. Augmentation If therapy with antidepressants (such as selective serotonin reuptake inhibitors [SSRIs]) does not fully resolve the symptoms of major depressive disorder (MDD), a situation known as refractory depression or treatment-resistant depression (TRD), then a second augmentation agent is sometimes added to the therapy. Lithium is one of the few augmentation agents for antidepressants to demonstrate efficacy in treating MDD in multiple randomized controlled trials, and it has been prescribed (off-label) for this purpose since the 1980s. A 2019 systematic review found some evidence of the clinical utility of adjunctive lithium, but the majority of supportive evidence is dated. While SSRIs have been mentioned above as a drug class that lithium is used to augment, there are other classes to which lithium is added to increase effectiveness. Such classes are antipsychotics (used for bipolar disorder) as well as antiepileptic drugs (used for both psychiatric and epileptic cases). Lamotrigine and topiramate are two specific antiepileptic drugs that lithium is used to augment. Monotherapy A few older studies indicate efficacy of lithium for acute depression, with lithium having the same efficacy as tricyclic antidepressants. A more recent study concluded that lithium works best on chronic and recurrent depression when compared to a modern antidepressant (citalopram), but not for patients with no history of depression. A 2019 systematic review found no evidence to support the use of lithium as monotherapy. Prevention of suicide Lithium is widely believed to prevent suicide and is often used in clinical practice towards that end. However, meta-analyses, faced with evidence base limitations, have yielded differing results, and it therefore remains unclear whether or not lithium is efficacious in the prevention of suicide. Nevertheless, some evidence suggests it is effective in significantly reducing the risk of self-harm and unintentional injury in bipolar disorder in comparison to no treatment and to antipsychotics or valproate. According to meta-analyses, the increased presence of lithium in drinking water is correlated with lower overall suicide rates, especially among men. It is noted that further testing is needed to confirm this benefit. Alzheimer's disease Alzheimer's disease affects forty-five million people and is the fifth leading cause of death in the 65-plus population. There is currently no cure for the disease. However, lithium is being evaluated for its effectiveness as a potential therapeutic measure.
One of the leading causes of Alzheimer's is the hyperphosphorylation of the tau protein by the enzyme GSK-3, which leads to the overproduction of amyloid peptides that cause cell death. To combat this toxic amyloid aggregation, lithium upregulates the production of neuroprotectors and neurotrophic factors, as well as inhibiting the GSK-3 enzyme. Lithium also stimulates neurogenesis within the hippocampus, making it thicker. Yet another cause of Alzheimer's disease is the dysregulation of calcium ions within the brain. Too much or too little calcium within the brain can lead to cell death. Lithium can restore intracellular calcium homeostasis by inhibiting the aberrant influx of calcium upstream. It also promotes the redirection of the influx of calcium ions into the lumen of the endoplasmic reticulum of the cells to reduce the oxidative stress within the mitochondria. In 2009, Hampel and colleagues performed a study in which patients with Alzheimer's took a low dose of lithium daily for three months; it resulted in a significant slowing of cognitive decline, benefiting patients in the prodromal stage the most. In a secondary analysis, the brains of the Alzheimer's patients were studied and shown to have an increase in BDNF markers, consistent with the cognitive improvement observed. Another study, a population study this time by Kessing et al., showed a negative correlation between Alzheimer's disease deaths and the presence of lithium in drinking water. Areas with increased lithium in their drinking water showed less dementia overall in their population. Monitoring Those who use lithium should receive regular serum level tests and should monitor thyroid and kidney function for abnormalities, as lithium interferes with the regulation of sodium and water levels in the body and can cause dehydration. Dehydration, which is compounded by heat, can result in increasing lithium levels. The dehydration is due to lithium inhibition of the action of antidiuretic hormone, which normally enables the kidney to reabsorb water from urine. This causes an inability to concentrate urine, leading to consequent loss of body water and thirst. Lithium concentrations in whole blood, plasma, serum, or urine may be measured using instrumental techniques as a guide to therapy, to confirm the diagnosis in potential poisoning victims, or to assist in the forensic investigation in a case of fatal overdosage. Serum lithium concentrations are usually in the range of 0.5–1.3 mmol/L (0.5–1.3 mEq/L) in well-controlled people, but may increase to 1.8–2.5 mmol/L in those who accumulate the drug over time and to 3–10 mmol/L in acute overdose. Lithium salts have a narrow therapeutic/toxic ratio, so they should not be prescribed unless facilities for monitoring plasma concentrations are available. Doses are adjusted to achieve plasma concentrations of 0.4 to 1.2 mmol/L on samples taken 12 hours after the preceding dose. Given the rates of thyroid dysfunction, thyroid parameters should be checked before lithium is instituted and monitored after 3–6 months and then every 6–12 months. Given the risks of kidney malfunction, serum creatinine and eGFR should be checked before lithium is instituted and monitored after 3–6 months at regular intervals.
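Purely to make the numbers above concrete, the following sketch maps a measured 12-hour serum level onto the ranges quoted in this section. It is an illustration of the stated bands only, not clinical guidance; real interpretation depends on timing, trend, kidney function, and the individual patient:

# Illustration only -- NOT clinical guidance. The bands are the figures
# quoted in the text (mmol/L, sample taken 12 hours after the last dose).
# For a monovalent ion like lithium, mmol/L and mEq/L are numerically equal.

def describe_lithium_level(level_mmol_per_l: float) -> str:
    """Relate a 12-hour serum lithium level to the ranges quoted above."""
    if level_mmol_per_l < 0.4:
        return "below the usual 0.4-1.2 mmol/L dosing target"
    if level_mmol_per_l <= 1.3:
        return "within the usual therapeutic range"
    if level_mmol_per_l < 3.0:
        return "elevated; 1.8-2.5 mmol/L is reported with accumulation over time"
    return "in the 3-10 mmol/L range reported in acute overdose"

for level in (0.3, 0.8, 2.0, 4.5):
    print(f"{level:.1f} mmol/L: {describe_lithium_level(level)}")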
Patients who have a rise in creatinine on three or more occasions, even if their eGFR is >60 mL/min/1.73 m², require further evaluation, including a urinalysis for haematuria and proteinuria, a review of their medical history with attention paid to cardiovascular, urological, and medication history, and blood pressure control and management. Overt proteinuria should be further quantified with a urine protein-to-creatinine ratio. Discontinuation For patients who have achieved long-term remission, it is recommended to discontinue lithium gradually and in a controlled fashion. Discontinuation symptoms may occur in patients stopping the medication, including irritability, restlessness, and somatic symptoms like vertigo, dizziness, or lightheadedness. Symptoms occur within the first week and are generally mild and self-limiting within weeks. Cluster headaches, migraine, and hypnic headache Studies testing prophylactic use of lithium in cluster headaches (when compared to verapamil), migraine attacks, and hypnic headache indicate good efficacy. Adverse effects The adverse effects of lithium include: Very common (>10% incidence) adverse effects Confusion Constipation (usually transient, but can persist in some) Decreased memory Diarrhea (usually transient, but can persist in some) Dry mouth EKG changes – usually benign changes in T waves Hand tremor (usually transient, but can persist in some) with an incidence of 27%. If severe, the psychiatrist may lower the lithium dosage, change the lithium salt type, or modify the lithium preparation from long- to short-acting (despite lacking evidence for these procedures), or use pharmacological help Headache Hyperreflexia — overresponsive reflexes Leukocytosis — elevated white blood cell count Muscle weakness (usually transient, but can persist in some) Myoclonus — muscle twitching Nausea (usually transient) Polydipsia — increased thirst Polyuria — increased urination Renal (kidney) toxicity, which may lead to chronic kidney failure, although some cases may be misattributed Vomiting (usually transient, but can persist in some) Vertigo Common (1–10%) adverse effects Acne Extrapyramidal side effects — movement-related problems such as muscle rigidity, parkinsonism, dystonia, etc. Euthyroid goitre — i.e. the formation of a goitre despite normal thyroid functioning Hypothyroidism — a deficiency of thyroid hormone, though this condition is already common among patients with bipolar disorder Hair loss/hair thinning Weight gain — 5% incidence; tends to start fast and then plateau, usually levelling off at 1–2 kg Unknown incidence Sexual dysfunction Hypoglycemia Glycosuria In addition to tremors, lithium treatment appears to be a risk factor for development of parkinsonism-like symptoms, although the causal mechanism remains unknown. In the average bipolar patient, chronic lithium use is not associated with cognitive decline. Most side effects of lithium are dose-dependent. The lowest effective dose is used to limit the risk of side effects. Hypothyroidism The rate of hypothyroidism is around six times higher in people who take lithium. Low thyroid hormone levels in turn increase the likelihood of developing depression. People taking lithium should thus routinely be assessed for hypothyroidism and treated with synthetic thyroxine if necessary. Because lithium competes with the antidiuretic hormone in the kidney, it increases water output into the urine, a condition called nephrogenic diabetes insipidus.
Clearance of lithium by the kidneys is usually successful with certain diuretic medications, including amiloride and triamterene. It increases the appetite and thirst ("polydipsia") and reduces the activity of thyroid hormone (hypothyroidism). The latter can be corrected by treatment with thyroxine and does not require the lithium dose to be adjusted. Lithium is also believed to cause renal dysfunction, although this does not appear to be common. Lambert et al. (2016), comparing the rate of hypothyroidism in patients with bipolar disorder treated with 9 different medications, found that lithium users do not have a particularly high rate of hypothyroidism (8.8%) among BD patients – only 1.39 times the rate in oxcarbazepine users (6.3%). Lithium and quetiapine are not statistically different in terms of hypothyroidism rates. However, lithium users are tested much more frequently for hypothyroidism than those using other drugs. The authors write that there may be an element of surveillance bias in understanding lithium's effects on the thyroid glands, as lithium users are tested 2.3–3.1 times as often. Furthermore, the authors argue that because hypothyroidism is common among BD patients regardless of lithium treatment, regular thyroid testing should be applied to all BD patients, not just those on lithium. Pregnancy Lithium is a teratogen, which can cause birth defects in a small number of newborns. Case reports and several retrospective studies have demonstrated possible increases in the rate of congenital heart defects, including Ebstein's anomaly, if lithium is taken during pregnancy. Teratogenicity is affected by trimester and dose of lithium, most significantly affecting first-trimester cardiac development, with greater effects at higher doses. As the risks of stopping lithium can be significant, patients are sometimes recommended to stay on this medicine while pregnant. Careful weighing of the risks and benefits should be made in consultation with a psychiatric physician. For patients who are exposed to lithium, or plan to stay on the medication throughout their pregnancy, fetal echocardiography is routinely performed to monitor for cardiac anomalies. While lithium is typically the most effective treatment, possible alternatives to lithium include lamotrigine and second-generation antipsychotics for the treatment of acute bipolar depression or for the management of bipolar patients with normal mood during pregnancy. Breastfeeding While only small amounts of lithium are transmitted to the infant in breast milk, there are limited data on the safety of breastfeeding while on lithium. Medical evaluation and monitoring of infants who consume breast milk while the mother is taking lithium may be indicated. Kidney damage Lithium has been associated with several forms of kidney injury. It is estimated that impaired urinary concentrating ability is present in at least half of individuals on chronic lithium therapy, a condition called lithium-induced nephrogenic diabetes insipidus. Continued use of lithium can lead to more serious kidney damage in an aggravated form of diabetes insipidus. In rare cases, some forms of lithium-caused kidney damage may be progressive and lead to end-stage kidney failure, with a reported incidence of 0.2% to 0.7%. Some reports of kidney damage may be wrongly attributed to lithium, increasing the apparent rate of this adverse effect. Nielsen et al.
(2018), citing 6 large observational studies since 2010, argue that findings of decreased kidney function are partially inflated by surveillance bias. Furthermore, modern data does not show that lithium increases the risk of end-stage kidney disease. Davis et al. (2018), using literature from a wider timespan (1977–2018), also found that lithium's association with chronic kidney disease is unproven with various contradicting results. They also find contradicting results regarding end-stage kidney disease. A 2015 nationwide study suggests that chronic kidney disease can be avoided by maintaining the serum lithium concentration at a level of 0.6–0.8 mmol/L and by monitoring serum creatinine every 3–6 months. Hyperparathyroidism Lithium-associated hyperparathyroidism is the leading cause of hypercalcemia in lithium-treated patients. Lithium may lead to exacerbation of pre-existing primary hyperparathyroidism or cause an increased set-point of calcium for parathyroid hormone suppression, leading to parathyroid hyperplasia. Interactions Lithium plasma concentrations are known to be increased with concurrent use of diuretics—especially loop diuretics (such as furosemide) and thiazides—and non-steroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen. Lithium concentrations can also be increased with concurrent use of ACE inhibitors such as captopril, enalapril, and lisinopril. Lithium is primarily cleared from the body through glomerular filtration, but some is then reabsorbed together with sodium through the proximal tubule. Its levels are therefore sensitive to water and electrolyte balance. Diuretics act by lowering water and sodium levels; this causes more reabsorption of lithium in the proximal tubules so that the removal of lithium from the body is less, leading to increased blood levels of lithium. ACE inhibitors have also been shown in a retrospective case-control study to increase lithium concentrations. This is likely due to constriction of the afferent arteriole of the glomerulus, resulting in decreased glomerular filtration rate and clearance. Another possible mechanism is that ACE inhibitors can lead to a decrease in sodium and water. This will increase lithium reabsorption and its concentrations in the body. Some drugs can increase the clearance of lithium from the body, which can result in decreased lithium levels in the blood. These drugs include theophylline, caffeine, and acetazolamide. Additionally, increasing dietary sodium intake may also reduce lithium levels by prompting the kidneys to excrete more lithium. Lithium is known to be a potential precipitant of serotonin syndrome in people concurrently on serotonergic medications such as antidepressants, buspirone and certain opioids such as pethidine (meperidine), tramadol, oxycodone, fentanyl and others. Lithium co-treatment is also a risk factor for neuroleptic malignant syndrome in people on antipsychotics and other antidopaminergic medications. High doses of haloperidol, fluphenazine, or flupenthixol may be hazardous when used with lithium; irreversible toxic encephalopathy has been reported. Indeed, these and other antipsychotics have been associated with an increased risk of lithium neurotoxicity, even with low therapeutic lithium doses. Classical psychedelics such as psilocybin and LSD may cause seizures if taken while using lithium, although further research is needed. Overdose Lithium toxicity, which is also called lithium overdose and lithium poisoning, is the condition of having too much lithium in the blood. 
This condition can also occur in persons taking lithium whose lithium levels are raised by drug interactions in the body. In acute toxicity, people have primarily gastrointestinal symptoms such as vomiting and diarrhea, which may result in volume depletion. During acute toxicity, lithium distributes into the central nervous system only later, resulting in mild neurological symptoms, such as dizziness. In chronic toxicity, people have primarily neurological symptoms which include nystagmus, tremor, hyperreflexia, ataxia, and change in mental status. During chronic toxicity, the gastrointestinal symptoms seen in acute toxicity are less prominent. The symptoms are often vague and nonspecific. If the lithium toxicity is mild or moderate, lithium dosage is reduced or stopped entirely. If the toxicity is severe, lithium may need to be removed from the body. Mechanism of action The specific biochemical mechanism of lithium action in stabilizing mood is unknown. Upon ingestion, lithium becomes widely distributed in the central nervous system and interacts with a number of neurotransmitters and receptors, decreasing norepinephrine release and increasing serotonin synthesis. Unlike many other psychoactive drugs, lithium typically produces no obvious psychotropic effects (such as euphoria) in normal individuals at therapeutic concentrations. Lithium may also increase the release of serotonin by neurons in the brain. In vitro studies performed on serotonergic neurons from rat raphe nuclei have shown that when these neurons are treated with lithium, serotonin release is enhanced during a depolarization compared to no lithium treatment and the same depolarization. Lithium both directly and indirectly inhibits GSK3β (glycogen synthase kinase 3β), which results in the activation of mTOR. This leads to an increase in neuroprotective mechanisms by facilitating the Akt signaling pathway. GSK-3β is a downstream target of monoamine systems. As such, it is directly implicated in cognition and mood regulation. During mania, GSK-3β is activated via dopamine overactivity. GSK-3β inhibits the transcription factors β-catenin and cyclic AMP (cAMP) response element binding protein (CREB), by phosphorylation. This results in a decrease in the transcription of important genes encoding for neurotrophins. In addition, several authors have proposed that pAp-phosphatase could be one of the therapeutic targets of lithium. This hypothesis was supported by the low Ki of lithium for human pAp-phosphatase, which is compatible with the range of therapeutic concentrations of lithium in the plasma of people (0.8–1 mM). The Ki of human pAp-phosphatase is ten times lower than that of GSK3β (glycogen synthase kinase 3β). Inhibition of pAp-phosphatase by lithium leads to increased levels of pAp (3′-5′ phosphoadenosine phosphate), which was shown to inhibit PARP-1. Another mechanism proposed in 2007 is that lithium may interact with the nitric oxide (NO) signaling pathway in the central nervous system, which plays a crucial role in neural plasticity. The NO system could be involved in the antidepressant effect of lithium in the Porsolt forced swimming test in mice. It was also reported that NMDA receptor blockade augments antidepressant-like effects of lithium in the mouse forced swimming test, indicating the possible involvement of NMDA receptor/NO signaling in the action of lithium in this animal model of learned helplessness. Lithium possesses neuroprotective properties by preventing apoptosis and increasing cell longevity.
Another mechanism, proposed in 2007, is that lithium may interact with the nitric oxide (NO) signaling pathway in the central nervous system, which plays a crucial role in neural plasticity. The NO system could be involved in the antidepressant effect of lithium in the Porsolt forced swimming test in mice. It was also reported that NMDA receptor blockade augments the antidepressant-like effects of lithium in the mouse forced swimming test, indicating the possible involvement of NMDA receptor/NO signaling in the action of lithium in this animal model of learned helplessness. Lithium possesses neuroprotective properties by preventing apoptosis and increasing cell longevity.

Although the search for a novel lithium-specific receptor is ongoing, the high concentration of lithium compounds required to elicit a significant pharmacological effect leads mainstream researchers to believe that the existence of such a receptor is unlikely.

Oxidative metabolism

Evidence suggests that mitochondrial dysfunction is present in patients with bipolar disorder. Oxidative stress and reduced levels of antioxidants (such as glutathione) lead to cell death. Lithium may protect against oxidative stress by up-regulating complexes I and II of the mitochondrial electron transport chain.

Dopamine and G-protein coupling

During mania, there is an increase in neurotransmission of dopamine that causes a secondary homeostatic down-regulation, resulting in decreased neurotransmission of dopamine, which can cause depression. Additionally, the post-synaptic actions of dopamine are mediated through G-protein coupled receptors. Once dopamine is coupled to the G-protein receptors, it stimulates other secondary messenger systems that modulate neurotransmission. Autopsy studies (which do not necessarily reflect living people) found that people with bipolar disorder had increased G-protein coupling compared to people without bipolar disorder. Lithium treatment alters the function of certain subunits of the dopamine-associated G-protein, which may be part of its mechanism of action.

Glutamate and NMDA receptors

Glutamate levels are observed to be elevated during mania. Lithium is thought to provide long-term mood stabilization and have anti-manic properties by modulating glutamate levels. It is proposed that lithium competes with magnesium for binding to the NMDA glutamate receptor, increasing the availability of glutamate in post-synaptic neurons and leading to a homeostatic increase in glutamate re-uptake, which reduces glutamatergic transmission. The NMDA receptor is also affected by other neurotransmitters such as serotonin and dopamine. The effects observed appear exclusive to lithium and have not been observed with other monovalent ions such as rubidium and cesium.

GABA receptors

GABA is an inhibitory neurotransmitter that plays an important role in regulating dopamine and glutamate neurotransmission. Patients with bipolar disorder have been found to have lower GABA levels, which results in excitotoxicity and can cause apoptosis (cell loss). Lithium has been shown to increase the level of GABA in plasma and cerebrospinal fluid. Lithium counteracts these degrading processes by decreasing pro-apoptotic proteins and stimulating the release of neuroprotective proteins. Lithium's regulation of both excitatory dopaminergic and glutamatergic systems through GABA may play a role in its mood-stabilizing effects.

Cyclic AMP secondary messengers

Lithium's therapeutic effects are thought to be partially attributable to its interactions with several signal transduction mechanisms. The cyclic AMP secondary messenger system is modulated by lithium, which was found to increase the basal levels of cyclic AMP but impair receptor-coupled stimulation of cyclic AMP production. It is hypothesized that these dual effects are due to the inhibition of G-proteins that mediate cyclic AMP production. Over a long period of lithium treatment, cyclic AMP and adenylate cyclase levels are further changed by gene transcription factors.
Inositol depletion hypothesis

Lithium treatment has been found to inhibit the enzyme inositol monophosphatase, which is involved in degrading inositol monophosphate to the inositol required for PIP2 synthesis. This leads to lower levels of inositol triphosphate, created by decomposition of PIP2. This effect has been suggested to be further enhanced with an inositol triphosphate reuptake inhibitor. Inositol disruptions have been linked to memory impairment and depression. It is well established that signals from receptors coupled to phosphoinositide signal transduction are affected by lithium. myo-Inositol (mI) is also regulated by the high-affinity sodium mI transport system (SMIT). Lithium is hypothesized to inhibit mI from entering cells and to mitigate the function of SMIT. Reduction of cellular levels of myo-inositol results in inhibition of the phosphoinositide cycle.

Neurotrophic factors

Lithium's actions on GSK3β result in activation of CREB, leading to higher expression of BDNF. (Valproate, another mood stabilizer, also increases the expression of BDNF.) As expected from increased BDNF expression, chronic lithium treatment leads to increased grey matter volume in brain areas implicated in emotional processing and cognitive control. Bipolar patients treated with lithium also have higher white matter integrity compared to those taking other drugs. Lithium also increases the expression of mesencephalic astrocyte-derived neurotrophic factor (MANF), another neurotrophic factor, via the AP-1 transcription factor. MANF is able to regulate proteostasis by interacting with GRP78, a protein involved in the unfolded protein response.

History

Lithium was first used in the 19th century as a treatment for gout after scientists discovered that, at least in the laboratory, lithium could dissolve uric acid crystals isolated from the kidneys. The levels of lithium needed to dissolve urate in the body, however, were toxic. Because of prevalent theories linking excess uric acid to a range of disorders, including depressive and manic disorders, Carl Lange in Denmark and William Alexander Hammond in New York City used lithium to treat mania from the 1870s onwards. By the turn of the 20th century, as theory regarding mood disorders evolved and so-called "brain gout" disappeared as a medical entity, the use of lithium in psychiatry was largely abandoned; however, several lithium preparations were still produced for the control of renal calculi and uric acid diathesis. As accumulating knowledge indicated a role for excess sodium intake in hypertension and heart disease, lithium salts were prescribed to patients for use as a replacement for dietary table salt (sodium chloride). This practice and the sale of lithium itself were both banned in the United States in February 1949, following the publication of reports detailing side effects and deaths.

Also in 1949, the Australian psychiatrist John Cade and the Australian biochemist Shirley Andrews rediscovered the usefulness of lithium salts in treating mania while working at the Royal Park Psychiatric Hospital in Victoria. They were injecting rodents with urine extracts taken from manic patients in an attempt to isolate a metabolic compound that might be causing mental symptoms. Since uric acid in gout was known to be psychoactive (adenosine receptors on neurons are stimulated by it; caffeine blocks them), they needed soluble urate for a control.
They used lithium urate, already known to be the most soluble urate compound, and observed that it caused the rodents to become tranquil. Cade and Andrews traced the effect to the lithium ion itself, and after Cade ingested lithium himself to ensure its safety in humans, he proposed lithium salts as tranquilizers. He soon succeeded in controlling mania in chronically hospitalized patients with them. This was one of the first successful applications of a drug to treat mental illness, and it opened the door for the development of medicines for other mental problems in the following decades. The rest of the world was slow to adopt this treatment, largely because of deaths that resulted from even relatively minor overdosing, including those reported from the use of lithium chloride as a substitute for table salt. Largely through the research and other efforts of Denmark's Mogens Schou and Paul Baastrup in Europe, and Samuel Gershon and Baron Shopsin in the U.S., this resistance was slowly overcome. Following the recommendation of the APA Lithium Task Force (William Bunney, Irvin Cohen (Chair), Jonathan Cole, Ronald R. Fieve, Samuel Gershon, Robert Prien, and Joseph Tupin), the use of lithium in manic illness was approved by the United States Food and Drug Administration in 1970, making the United States the 50th country to approve it.

Lithium has now become a part of Western popular culture. Characters in Pi, Premonition, Stardust Memories, American Psycho, Garden State, and An Unmarried Woman all take lithium. It is the chief constituent of the calming drug in Ira Levin's dystopian This Perfect Day. Sirius XM Satellite Radio in North America has a 1990s alternative rock station called Lithium, and several songs refer to the use of lithium as a mood stabilizer. These include: "Equilibrium met Lithium" by South African artist Koos Kombuis, "Lithium" by Evanescence, "Lithium" by Nirvana, "Lithium and a Lover" by Sirenia, "Lithium Sunset" from the album Mercury Falling by Sting, and "Lithium" by Thin White Rope.

7 Up

As with cocaine in Coca-Cola, lithium was widely marketed as one of several patent medicine products popular in the late 19th and early 20th centuries and was the medicinal ingredient of a refreshment beverage. Charles Leiper Grigg, who launched his St. Louis-based company The Howdy Corporation, invented a formula for a lemon-lime soft drink in 1920. The product, originally named "Bib-Label Lithiated Lemon-Lime Soda", was launched two weeks before the Wall Street Crash of 1929. It contained the mood stabilizer lithium citrate and was one of many patent medicine products popular in the late 19th and early 20th centuries. Its name was soon changed to 7 Up. All American beverage makers were forced to remove lithium from beverages in 1948. Despite the ban, in 1950, the Painesville Telegraph still carried an advertisement for a lithiated lemon beverage.

Salts and product names

Lithium carbonate (Li2CO3), the lithium salt of carbonic acid, is the most commonly used form. Other lithium salts used as medication include lithium citrate, lithium sulfate, lithium chloride, and lithium orotate. Nanoparticles and microemulsions have also been developed as drug-delivery mechanisms. As of 2020, there is a lack of evidence that alternative formulations or salts of lithium reduce the need for monitoring serum lithium levels or lower systemic toxicity.
As of 2017, lithium was marketed under many brand names worldwide, including Cade, Calith, Camcolit, Carbolim, Carbolit, Carbolith, Carbolithium, Carbolitium, Carbonato de Litio, Carboron, Ceglution, Contemnol, Efadermin (Lithium and Zinc Sulfate), Efalith (Lithium and Zinc Sulfate), Elcab, Eskalit, Eskalith, Frimania, Hypnorex, Kalitium, Karlit, Lalithium, Li-Liquid, Licarb, Licarbium, Lidin, Ligilin, Lilipin, Lilitin, Limas, Limed, Liskonum, Litarex, Lithane, Litheum, Lithicarb, Lithii carbonas, Lithii citras, Lithioderm, Lithiofor, Lithionit, Lithium, Lithium aceticum, Lithium asparagicum, Lithium Carbonate, Lithium Carbonicum, Lithium Citrate, Lithium DL-asparaginat-1-Wasser, Lithium gluconicum, Lithium-D-gluconat, Lithiumcarbonaat, Lithiumcarbonat, Lithiumcitrat, Lithiun, Lithobid, Lithocent, Lithotabs, Lithuril, Litiam, Liticarb, Litijum, Litio, Litiomal, Lito, Litocarb, Litocip, Maniprex, Milithin, Neurolepsin, Plenur, Priadel, Prianil, Prolix, Psicolit, Quilonium, Quilonorm, Quilonum, Téralithe, and Theralite.

Research

Tentative evidence suggests that lithium may slow the progression of Alzheimer's disease. Lithium has also been studied for its potential use in the treatment of amyotrophic lateral sclerosis (ALS), but a study showed that it had no effect on ALS outcomes.
Biology and health sciences
Psychiatric drugs
Health
51367694
https://en.wikipedia.org/wiki/Pnictogen%20hydride
Pnictogen hydride
Pnictogen hydrides or hydrogen pnictides are binary compounds of hydrogen with pnictogen atoms (from Greek pnigein, "to choke", and -gen, "generator"; the elements of group 15: nitrogen, phosphorus, arsenic, antimony, bismuth, and moscovium) covalently bonded to hydrogen.

Pnictogen trihydrides

The simplest series has the chemical formula XH3 (less commonly H3X), with X representing any of the pnictogens. They take on a pyramidal structure (as opposed to the trigonal planar arrangement of the group 13 hydrides) and are therefore polar. The pnictogen trihydrides generally become increasingly unstable and poisonous with the heavier elements. These gases have no smell in pure form, instead gaining one on contact with air. Ammonia has an infamous, intense odour resembling urine and/or fish, commonly the result of the decomposition of urea. Phosphine smells like fish or garlic, and stibine like rotten eggs, similar to hydrogen sulfide and selenide.

Dipnictogen tetrahydrides

Dipnictogen tetrahydrides have the chemical formula X2H4. These are generally less stable than the trihydrides, commonly decomposing to the trihydride and the pnictogen involved.

Higher derivatives

Polyphosphanes exist with the formula PnHn+2 (n = 1–9), and linear and branched isomers have been detected. Other cyclic and condensed polyphosphane series are known, from PnHn to PnHn−18, amounting to 85 known phosphanes in 1997.

Properties

Noncyclic hydrogen pnictides follow the formula XnHn+2. Ammonia is produced industrially on a larger scale than any other compound. Like water, ammonia exhibits hydrogen bonding, which results in a high melting and boiling point compared to the other pnictogen hydrides, although about 26% of the hydrogen bonding is lost on melting, a further 7% as the liquid is heated to boiling, and the remaining 67% upon boiling. Other effects of hydrogen bonding are a high dielectric constant as well as low values of density, viscosity, and electrical conductivity. Like water, it is an excellent and often-used ionising solvent. Over twenty other hydrides of nitrogen are known, the most important being hydrazine (N2H4) and hydrogen azide (HN3). Hydrazine has physical properties that are remarkably similar to those of water: its melting and boiling points are 2.0 °C and 113.5 °C, the density of the solid at −5 °C is 1.146 g/cm3, while that of the liquid at 25 °C is 1.00 g/cm3. The azanes are a series which includes ammonia, hydrazine and triazane.

Phosphine, a toxic, colourless gas, is the most stable phosphorus hydride. It is insoluble in water but soluble in organic liquids (as well as carbon disulfide and trichloroacetic acid). Phosphine is a reducing agent. Arsine, stibine, and bismuthine are highly toxic, thermally unstable, colourless gases. No appreciable hydrogen bonding is found in phosphine, arsine, stibine or bismuthine, and there is no appreciable tendency to self-ionise as ammonia does (2 MH3 ⇌ MH4+ + MH2−, where M = P, As, Sb, Bi). The pnictogen hydrides become denser down the group and the M–H bond lengths increase, while the H–M–H bond angle decreases slightly. The standard enthalpies of formation reflect the increasing thermal instability going down the group: arsine decomposes to arsenic and hydrogen at 250–300 °C, stibine to antimony and hydrogen at room temperature, and bismuthine to bismuth and hydrogen above −45 °C. Arsine and stibine are very easily oxidised to arsenic or antimony trioxide and water; a similar reaction occurs with sulfur or selenium.
Reaction with metals at elevated temperatures leads to arsenides and antimonides. A few lower hydrides are known, such as As2H4, but they are even more unstable and their properties are unknown. Imidogen, a radical composed of one hydrogen atom and one nitrogen atom (NH), can be classed as a pnictogen hydride.
Physical sciences
Hydrogen compounds
Chemistry
49037459
https://en.wikipedia.org/wiki/Domesticated%20quail
Domesticated quail
A domesticated quail is a domestic form of the quail, a collective name for a group of several small species of fowl. Thousands of years of breeding and domestication have guided the bird's evolution. Humans domesticated quails for meat and egg production; additionally, quails can be kept as pets. Domesticated quails are commonly kept in long wire cages and are fed game bird feed. The most common domesticated type is the Coturnix quail (also known as the Japanese quail). Quails live on the ground and rarely fly unless forced to do so.

Breeds

Twenty types of wild quail exist, along with 70 domestic breeds/strains, including laboratory and commercial lines. Due to their large size, Coturnix quails are kept for meat and egg consumption; this breed carries more meat and produces more eggs than the others. Button quails (also known as King, Chinese-Painted, and Blue-Breasted quails) are rarely kept for food production because they are smaller and produce fewer eggs. They are kept in large aviaries to clean up the leftover seeds that fall to the floor. California, Gambel's, Bobwhite, and Scaled quails, among others, are less common and are rarely kept as pets.

Quail breeds
Coturnix or Japanese quail
Button, King, Chinese-Painted or Blue-Breasted quail
(Northern) Bobwhite quail
Gambel's quail
Mearn's quail
Mountain quail
Scaled quail
California (Valley) quail
Manipur Bush quail
Jungle Bush quail

Both Button and Coturnix quails show varied feather coloring due to years of breeding. The common and wild Coturnix quail color is the Pharaoh, which is a brown feather color. The Button quail has a red belly, blue body, black and white head, and a brown back all in one (only present in males; females are brown all over). The Manipur Bush quail is found mainly along the river Brahmaputra, in Assam, Manipur, Meghalaya, Nagaland and West Bengal in India.

Coturnix (Japanese) quail feather coloring
Pharaoh - Rusty-brown underbelly with the original brown color on the head and upper body.
English White - White all over in both males and females.
Manchurian Golden - Light rusty all over with a pattern. Males have a darker rusty color on the head, while females are lighter in color.
Italian - Beige with striated markings. Males have brown faces.
Tibetan (Dark British Range) - Dark chocolate all over with a spot of white under the beak.
Rosetta (British Range) - Red-brown chocolate all over.
Silver - Light grey all over.
Tuxedo - White and brown mix.
Cinnamon (Red Range) - Light brown all over.
Scarlet (Red Golden) - Red-brown all over.
Roux - Lighter than the Pharaoh (wild) version.
Golden Tuxedo - White feathers all over with blonde feathers present.
Other colors seen may be mutations.

Button quail feather coloring
Wild (Common) - Red breast, blue body, black and white face, and a brown back. Females are brown all over.
Silver - Another common feather coloring. Both females and males are a light grey. Males have a black and white face.
White - Plain white all over in both males and females.
Red Breasted - Large red underbelly, much like the wild feather coloring.
Blue Faced - Blue underbelly and dark brown back in males; females are dark brown all over.
Cinnamon - Light brown.
Golden Pearl - Females are a lighter brown.
Tuxedo Pied - A white and brown color mix.
Other colors seen may be mutations.
Biology and health sciences
Galliformes
Animals
54241771
https://en.wikipedia.org/wiki/Biblical%20mile
Biblical mile
Biblical mile (Hebrew: mīl) is a unit of distance on land, or linear measure, principally used by Jews during the Herodian dynasty to ascertain distances between cities and to mark the Sabbath limit, equivalent to about ⅔ of an English statute mile, or about four furlongs (four stadia). The basic Jewish traditional unit of distance was the cubit, each cubit being roughly between 48 and 58 centimetres (see Divergent methods below). The standard measurement of the biblical mile, or what is sometimes called tǝḥūm šabbat (Sabbath limit; Sabbath boundary), was 2,000 cubits.

Etymology

The word mīl, as used in Hebrew texts between the 2nd and 5th centuries CE, is a Roman loanword, believed to be a shortened adaptation of the Latin mīliarium, literally "milestone," a word which signifies "a thousand" [passuum, paces of two steps each]; hence: Roman mile. The word appears in the Mishnah, a compendium of Jewish oral law compiled by Rabbi Judah the Prince in 189 CE, and is used to this day by religious Jews in the application of certain halachic laws.

Halachic applications

On Shabbat, one is not allowed to travel further than 1 biblical mile outside one's city; this law is known as techum shabbat. A procedure known as eruv techumin allows one to travel up to one more biblical mile.

The rabbinic ordinance of washing hands prior to eating bread requires people travelling the roads to go as far as 4 biblical miles if there is a known water source that can be used for washing. This applies only when the water source lies in one's general direction of travel; if the traveller has already passed the water source, he is not obligated to backtrack unless the distance is within 1 biblical mile.

Sliced pieces of meat that are to be cooked in a pot require salting before they are cooked. The first step is rinsing in water, followed by salting with any coarse salt while the meat is laid over a grating or colander to allow for drainage. The salt is allowed to remain on the meat for the time that it takes to walk one biblical mile (approximately 18–24 minutes). Afterwards, the residue of salt is rinsed away with water, and the meat is cooked. Salting in this way helps to draw out the blood.

Divergent methods

Nearly two thousand years of Jewish exile from the Land of Israel have given rise to disputes over the precise length of the biblical mile observed by the ancients. Some hold the biblical mile to be 1,152 m, while others hold it to be 960 m, depending on the length they prescribe to each cubit. Originally, the 2,000-cubit Sabbath limit was measured with a standard 50-cubit rope. Another dispute concerns the time it takes an average man to walk a biblical mile: most authorities hold that a biblical mile can be traversed in 18 minutes, and four biblical miles in 72 minutes, whereas Maimonides held the view that an average man walks a biblical mile in about 20 to 24 minutes. (A worked conversion between these standards is sketched after the list below.)

Distances between cities

Hamath to Tiberias = 1 mil (before the two cities converged as one)
Beit Maon to Tiberias = 1 mil (before the two cities converged as one)
Migdal Nunia ('the Fish Tower') to Tiberias = 1 mil
Migdal to Hamath = 1 mil
Sepphoris to Tiberias = 18 mil
Lod (Lydda) to Ono = 3 mil
Beth-jeshimoth to Abel-shittim = 12 mil
Zoar to Sodom = 5 mil
Modiin (Modiith) to Jerusalem = 15 mil
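Because the mil is defined as 2,000 cubits, every conversion above follows mechanically once a cubit length and a walking time are chosen. The sketch below works through the two traditional standards described under Divergent methods; the function names are ours, and the figures are simply those given in the text.

```python
# Convert biblical miles (mil) to metres and walking time under the two
# traditional standards discussed above. A mil is 2,000 cubits by definition.

CUBITS_PER_MIL = 2000

# The two disputed mil lengths (metres) imply two cubit lengths:
STANDARDS = {
    "long standard (1,152 m mil)": 1152 / CUBITS_PER_MIL,  # cubit = 0.576 m
    "short standard (960 m mil)": 960 / CUBITS_PER_MIL,    # cubit = 0.480 m
}

MINUTES_PER_MIL_MAJORITY = 18    # most authorities
MINUTES_PER_MIL_MAIMONIDES = 24  # upper bound of Maimonides' 20-24 minutes

def mil_to_metres(mil: float, cubit_m: float) -> float:
    return mil * CUBITS_PER_MIL * cubit_m

for name, cubit in STANDARDS.items():
    # e.g. the 4-mil limit for finding water to wash before bread:
    print(f"{name}: 4 mil = {mil_to_metres(4, cubit):,.0f} m, "
          f"walked in {4 * MINUTES_PER_MIL_MAJORITY} min "
          f"(up to {4 * MINUTES_PER_MIL_MAIMONIDES} min per Maimonides)")
```

Under the long standard the 4-mil limit comes to 4,608 m; under the short standard, 3,840 m. Either way the 72-minute figure follows directly from 4 × 18 minutes.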
Physical sciences
Other
Basics and measurement
57557247
https://en.wikipedia.org/wiki/Seismic%20zone
Seismic zone
In seismology, a seismic zone or seismic belt is an area of seismicity potentially sharing a common cause; it may also be called an earthquake belt. It may also be a region on a map for which a common areal rate of seismicity is assumed for the purpose of calculating probabilistic ground motions. An obsolete definition is a region on a map in which a common level of seismic design is required.

The major seismic zones

One type of seismic zone is a Wadati–Benioff zone, which corresponds to the down-going slab in a subduction zone. The world's greatest seismic belt, the Circum-Pacific seismic belt, is where the majority of the Earth's earthquakes occur; approximately 81% of major earthquakes occur along this belt. The Circum-Pacific seismic belt is often referred to by its nickname, the Ring of Fire, a ring-like formation that encompasses much of the Pacific Ocean. The notorious San Andreas Fault, responsible for many major quakes on the West Coast of the United States, lies within the Circum-Pacific seismic belt, or Ring of Fire.

Examples

Charlevoix seismic zone (Quebec, Canada)
New Madrid seismic zone (Midwestern United States)
South West seismic zone (Western Australia)
Physical sciences
Seismology
Earth science
57561320
https://en.wikipedia.org/wiki/Scooter-sharing%20system
Scooter-sharing system
A scooter-sharing system or kicksharing system is a shared transport service in which electric motorized scooters (also referred to as e-scooters) are made available for short-term rentals. E-scooters are typically "dockless", meaning that they do not have a fixed home location and are dropped off and picked up at arbitrary locations within the service area. Scooter-sharing systems aim to provide the public with a fast and convenient mode of transport for last-mile mobility in urban areas. Due to the growing popularity of scooter-sharing, municipal governments have imposed regulations on e-scooters to increase rider and pedestrian safety while avoiding the accrual of visual pollution. Scooter-sharing systems are one of the least expensive and most popular micromobility options.

Scooter-sharing industry

Rise of e-scooter industry

In 2012, Scoot Networks released a moped-style vehicle that provided short-range scooter rentals. In 2016, Neuron Mobility introduced e-scooter docking stations in Singapore. In 2017, Bird Global and Lime introduced dockless electric kick scooters. Since its launch in Santa Monica, California, United States, Bird has expanded its services to over 100 cities and reached a valuation of 2 billion dollars in 2018. In the same year, Lime amassed over 11.5 million rides. Early 2018 also saw India-based Yulu launch its IoT-enabled smart bicycles in Bengaluru, followed in 2019 by the launch of its shared electric vehicles. Lyft and Uber, the largest ride-sharing companies in the U.S., introduced their own electric scooter sharing services in 2018. By 2030, the global scooter market is expected to be valued at 300 billion to 500 billion dollars.

Technology

Apps

To rent a dockless e-scooter, users download a smartphone application. The application shows users a map of nearby e-scooters and enables them to unlock them; it also includes a secure payment gateway such as PayPal. Scooters are equipped with built-in GPS chips and cellular connectivity, which allows them to broadcast their location in real time during a trip. Through GPS and cellular tracking, companies can gather usage statistics, track which scooters are being used, and charge customers for the time spent per trip.

Anti-theft

E-scooters have built-in features to prevent theft and hacking. Hackers steal e-scooters and replace the existing hardware to convert the scooter for personal use. Users are only able to unlock and ride e-scooters by using a smartphone application; when a user has completed a trip, they use the app to lock the e-scooter and immobilize the wheels. Bird and Lime e-scooters have built-in alarms that trigger if someone attempts to move or tamper with an e-scooter without using the app to unlock it. In response to the growing problem of scooter hacking, Lime claims it has developed custom scooter hardware that cannot be easily replaced with third-party parts.
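Both the per-trip billing described above and the geofenced speed restrictions that appear later in this article reduce to simple computations over a scooter's GPS stream. The following sketch is illustrative only: the fare rates, zone coordinates, and function names are hypothetical, not any operator's actual values or API.

```python
import math
from datetime import datetime

# Hypothetical fare structure; real operators' rates vary by city.
BASE_FARE_USD = 1.00
PER_MINUTE_USD = 0.15

def trip_fare(unlock_time: datetime, lock_time: datetime) -> float:
    """Charge for the time spent per trip, as reported via GPS/cellular."""
    minutes = (lock_time - unlock_time).total_seconds() / 60
    return BASE_FARE_USD + PER_MINUTE_USD * minutes

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical circular slow zone: (centre_lat, centre_lon, radius_m, cap_kmh).
SLOW_ZONES = [(55.7539, 37.6208, 500, 10)]

def speed_cap_kmh(lat: float, lon: float, default_kmh: float = 25) -> float:
    """Return the speed limit to enforce at the scooter's current GPS fix."""
    for zlat, zlon, radius, cap in SLOW_ZONES:
        if haversine_m(lat, lon, zlat, zlon) <= radius:
            return cap
    return default_kmh
```

In deployed systems the zones are typically polygons agreed with the city, and the cap is enforced by the motor-controller firmware rather than by the app; the circular zone here just keeps the geometry short.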
International expansion

Asia

The market for the Asian scooter-sharing industry is currently less than 4 percent of the North American market size. The Singaporean ride-sharing startups Grab and Neuron Mobility were the first movers in the Southeast Asian e-scooter sector. Grab is valued at 10 billion dollars and currently provides e-scooters from only a single location in Singapore. In 2018, Uber secured 27.5 percent of Grab's equity to compete in the Southeast Asian market. Neuron Mobility owns and operates the most expansive fleet of e-scooters in Thailand and Singapore. Lime has selected Singapore as the headquarters for its operations in Asia and was the first foreign company permitted to provide e-scooters within the city. Starting in 2019, Bird and Lime have been working alongside Japanese traffic regulators and testing local markets to assess the viability of an expansion to Japan. In 2022, Beam, a Singaporean startup which operates e-scooters and e-bikes in 35 cities, raised 135 million dollars of funding to expand.

Europe

Estonian mobility technology company Bolt launched scooter-sharing services on its mobile app platform in 2019. It has since become the largest micromobility operator in Europe, with operations in more than 130 cities across 20 countries. At the end of 2021, Bolt became the first company to launch scooter charging docks in Europe. In April 2022, Bolt announced plans to invest 150 million euros to further expand its scooter offering, pledging to operate 230,000 scooters across Europe by the end of the year. Lime launched the first large-scale European expansion of scooter-sharing systems in Paris in June 2018. By October 2018, Lime's app had become the top-ranked travel application on Apple's App Store in France. As of 2019, Lime provides scooter-sharing systems to more than 50 European cities including Paris, Berlin, London, Rome, Madrid, and Athens. Bird launched its own European market-development strategy in Paris in August 2018, and its coverage has expanded to more than 20 major European municipalities. Uber's Jump entered the European market in April 2019 through a test launch in Madrid, Spain; within a 7-month window, Jump expanded its service from Madrid to 10 of Europe's most populated urban centers. The European e-scooter start-ups Voi Technology of Sweden and Tier Mobility of Germany accrued 80 million dollars and 28 million dollars of funding, respectively. In 2020, Tier subsequently raised a further 250 million dollars, valuing the company at just under 1 billion dollars. Amsterdam-based Felyx has been active in the Netherlands since 2017 and in Brussels since 2019. From 2017 to 2018, the number of shared e-scooters in Europe increased by nearly 200 percent, and European demand for scooter-sharing systems is expected to grow 26.2 percent annually through 2025. Since 2019, the Turkey-based micromobility platform Scootable has provided services in three countries with more than 1,500 scooters. In addition to scooters, the company also provides software infrastructure for many electric vehicles such as forklifts, street sweepers, cargo e-bikes, golf carts, scissor lifts, farm buggies, electric boats, and baggage towing tractors. Since 2018, kicksharing has been available in Moscow, Russia, where 42,000 scooters are currently offered across 5 rental services. Scooters must be parked in designated places, and there are speed-restricted zones in the city in which scooters automatically reduce speed to 5–15 km/h (as in the geofencing sketch above).

South America

Until 2019, the Brazilian startup Yellow was the largest e-scooter service in South America. The startup set the South American record for an initial fundraising round at 63 million dollars of investment. At the start of 2019, Yellow merged with the Mexican e-scooter service Grin to form the conglomerate Grow Mobility. Grow Mobility is the largest scooter-sharing service in South America, with 100,000 e-scooters and plans to double this coverage by the end of 2019. Other competitors in the South American market include the Colombian e-scooter start-up Cosmic Go and the multinational mobility service Movo, headquartered in Spain.
Effects

Right-of-way obstruction and visual pollution

Visual pollution is a major concern caused by scooter-sharing in cities, as users illegally park e-scooters on sidewalks, entryways, roads, and access points. E-scooters that are incorrectly parked litter sidewalks and block pedestrian walkways. Riding e-scooters on the sidewalk is discouraged because it disturbs pedestrians and poses a safety risk at high speeds. The term "scooter rage" or "scooter war" describes a movement by displeased city residents to illegally dump e-scooters into waterways or bury them so that users are unable to find and rent them.

Injuries, fatalities and safety

There is limited information on the overall scale of injuries caused by electric scooters. However, in a three-month study, 20 people were injured for every 100,000 rides. A slight majority were head injuries, and of those cases, 15 percent were traumatic. Broken bones; ligament, tendon, or nerve impairments; severe bleeding; and organ damage are other injuries experienced by electric scooter riders. Non-riders have also fallen victim to electric scooter injuries through collisions or by tripping over the devices in the streets. In the United States, 11 fatalities occurred between the start of 2018 and mid-2019. Accidents occur most commonly during working and rush hours. 33 percent of all injuries occur on sidewalks and 55 percent occur on streets. Several accidents involved cars and obstacles on the ground, such as curbs, poles, or manhole covers. Mechanical problems, such as failing brakes and wheels, and distracted riders were other contributing factors in accidents. 60 percent of injured people reported having reviewed the training created by the electric scooter companies before riding, yet only 4 percent of injured riders are reported to have worn helmets, even though helmets significantly reduce head injuries. Lime and Bird are redesigning the devices with sturdier brakes to help reduce the mechanical troubles of riding the scooters. The companies have also been working alongside cities to develop infrastructure, such as bike lanes, that will be safer for people to travel on.

Last-mile problem and micromobility

The last-mile problem is a public transportation dilemma regarding the difficulty of moving passengers from private residences to mass-transit centers, i.e. bus stops, train stations, etc. This spatial inefficiency forces passengers to use personal transportation (cars, motorcycles, etc.) to commute the short distance between transportation hubs and their homes. The last-mile problem reduces the intended benefits of public transportation: reduced carbon emissions, reduced traffic congestion, and increased convenience. Micromobility options provide a solution to the last-mile problem and are characterized as lightweight, communal, and designed for short-distance travel. Scooter-sharing systems are one of the most heavily adopted micromobility services. The ease of access and intuitive usability of scooter-sharing systems may increase the adoption of public transportation and reduce the usage of personal vehicles. Citizens may also gain secondary benefits such as increased access to job opportunities, reduced traffic congestion, and reduced air and noise pollution.

Traffic

Traffic congestion is amplified by the increased usage of personal-automobile transportation as a means of overcoming the last-mile problem.
46 percent of all vehicle congestion in the United States can be attributed to drivers making trips within a three-mile radius, and over 60 percent of car trips fall within the micromobility range of 0–5 miles. E-scooters provide a means of avoiding congestion and can exceed the 9-mile-per-hour average speed of automobile traffic in many major urban hubs. At the individual level, reduced commute time is associated with increased economic mobility and advancement. In the United States alone, an estimated 87 billion dollars are lost to time spent waiting in traffic. Micromobility investor Oliver Bruce has asserted that 4 trillion miles of automobile travel globally could realistically be replaced with scooter-sharing and other micromobility alternatives. As more drivers transition to scooter-sharing systems, personal-automobile traffic is reduced.

Sustainability

E-scooters are powered by electricity and therefore have zero direct carbon emissions. The reduced carbon impact of e-scooters relative to personal automobiles has been a central tenet in the value propositions of market leaders Bird and Lime, though these propositions have been called into question, with research finding that most of the time scooter riders would otherwise have walked, biked, or taken public transportation. E-scooters are more energy-efficient than alternative electric vehicle options; the same amount of energy will propel a scooter twenty times farther than an electric automobile. The ridership of e-scooters yields a neutral primary carbon footprint, but the production, distribution, and charging of e-scooters create a significant secondary carbon footprint. In comparison to personal automobiles and dockless e-bikes, dockless e-scooters have a smaller aggregate carbon footprint; buses, bicycles, and personal electric bipedal vehicles maintain smaller carbon footprints than dockless e-scooters. Some e-scooter rental companies say they are seeking ways to reduce part of their secondary carbon footprint. A life cycle assessment of e-scooter sharing systems performed by researchers at North Carolina State University calls the claimed sustainability benefits of the programs into question, finding that nearly two thirds of the time people use shared e-scooters, they create more emissions than they would have if scooter sharing were not an option.

Privacy concerns

Scooter-sharing companies collect GPS and cellular-based data on customer rides; this data helps companies and cities plan the building of new bike lanes and enforce program rules such as parking and allowed service areas. Cities require companies to share data that contains the precise details of when and where e-scooters are used. In November 2019, the Los Angeles Department of Transportation (LADOT), in California, United States, temporarily suspended Uber subsidiary Jump's permit to rent e-scooters and bikes following Uber's failure to transmit real-time data detailing the start point, endpoint, and travel time of all rides as part of the city's one-year pilot permit program. Uber, backed by several data privacy organizations, argues that the city's policy "constitutes government surveillance" and that little analysis is required to generate a precise log of an individual's movements. LADOT said that the data is necessary to monitor which scooter-sharing companies are complying with the permit program's rules, such as the number of scooters deployed and the operation of scooters in prohibited areas. LADOT does not collect specific data about users beyond trip details, but precise mobility data may contain personally identifiable information. In a 2013 study, researchers examined location information from cell towers for 1.5 million individuals and were able to uniquely identify the mobility traces of 95 percent of them using only four data points.
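The re-identification result above rests on a simple counting argument: given a handful of (place, time) points, one checks how many people in the dataset are consistent with all of them. A minimal sketch of that check follows; the data layout and function names are ours, not those of the 2013 study.

```python
import random
from collections import defaultdict

# Each trace is a set of (cell_tower_id, hour_bucket) points per user.
# Toy data layout for illustration; the 2013 study used telecom records.

def build_index(traces: dict[str, set]) -> dict:
    """Map each (tower, hour) point to the set of users seen there."""
    index = defaultdict(set)
    for user, points in traces.items():
        for point in points:
            index[point].add(user)
    return index

def is_unique(target_points: list, index: dict) -> bool:
    """True if exactly one user in the dataset matches all given points."""
    candidates = set.intersection(*(index[p] for p in target_points))
    return len(candidates) == 1

def uniqueness_rate(traces: dict[str, set], k: int = 4) -> float:
    """Fraction of users pinned down by k random points of their own trace."""
    index = build_index(traces)
    hits = 0
    for user, points in traces.items():
        sample = random.sample(sorted(points), min(k, len(points)))
        hits += is_unique(sample, index)
    return hits / len(traces)
```

With realistic mobility data, even k = 4 points typically isolate a single trace; that is the study's 95 percent figure, and it is why "trip details" alone can function as personally identifiable information.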
Response and regulations

Several United States cities have introduced regulations on e-scooters and scooter-sharing companies to address safety concerns and the illegal dumping of e-scooters. In May 2018, shortly after the initial launch of e-scooters in San Francisco, the city issued a cease-and-desist order to Bird, Spin, and Lime after receiving about 1,900 complaints from residents regarding sidewalk congestion due to the illegal parking of e-scooters. As of June 2018, prospective scooter-sharing companies are required by the SFMTA to submit a business plan addressing safety concerns and sidewalk clutter in order to receive a permit to rent and own e-scooters. In August 2018, San Francisco awarded permits to Scoot Networks and Skip, allowing each company to launch 625 e-scooters to jumpstart a year-long pilot program. In August 2019, the Nashville Metro Council in Tennessee, United States, voted against a ban on e-scooters in the city; all seven scooter-sharing companies in the city will continue to operate until a selection process to allow a maximum of three companies to continue operations is finalized. In the meantime, councilmembers approved legislation in July to cut existing scooter fleets in half, restrict hours of operation, and introduce no-ride and safe zones. Washington, D.C.'s district council has proposed legislation to establish rules defining where e-scooters can be parked, enforcing speed limits, and restricting hours of operation. In September 2019, France banned the riding of e-scooters on sidewalks following an increase in accidents and sidewalk congestion; users who violate the ban will be fined 135 euros. Singapore also banned e-scooters from sidewalks as of November 2019 after a rise in accidents, including at least one fatality; violators face a fine of 2,000 Singapore dollars and/or up to three months in jail.

In response to backlash from city regulators and lawmakers, scooter-sharing companies have launched initiatives that include charity, outreach to low-income communities, and infrastructure improvements. Lime introduced a donation module on its app called Lime Hero so that customers can opt in to donate a portion of their ride fare to a nonprofit organization. Lime also introduced Lime Access, which grants qualifying low-income users a 50 percent discount on its e-scooters and bikes. Similarly, Bird waived its one-dollar base ride fee for qualifying customers, who are only required to pay a 15-cent-per-mile fee. In addition, Bird is setting aside one dollar per day per scooter to help cities build and maintain bike lanes.

Citizens of Paris have raised concerns about scooter riding, including riders not wearing helmets, scooters travelling at up to 27 km/h, and children as young as 12 renting the devices. In 2023, Paris Mayor Anne Hidalgo called a referendum on rental e-scooters. The referendum, which sought to ban battery-powered rental e-scooters, passed with 91,300 votes, about 90% of the roughly 103,000 cast; with 1.38 million eligible voters recorded in the city's electoral register, total turnout was less than eight percent.
Employment

Electric chargers

The scooter-sharing system introduced charging jobs that compensate people for finding and charging scooters. Bird can approve workers after receiving personal, tax, and bank-account information. The process does not require a background check and attracts students and young professionals who want a flexible way to earn extra money. Companies even offer additional bonuses for missing or hard-to-find scooters; however, the incentives have backfired, as some chargers intentionally hide devices to reap the extra cash. Earnings depend on the device's charge and location but often range from 5 to 20 dollars. Typically, a scooter needs half a kilowatt-hour of electricity to charge, which costs about 5 cents. Competition over collecting scooters has escalated to criminal acts, including impersonating company officials to retrieve hoarded scooters and stealing account information through Facebook groups.

Mechanics

To sustain the condition of its scooters, Bird hires three levels of mechanics, L1, L2, and L3, to repair devices. The most rudimentary level, L1, focuses on minor repairs of brakes, tires, and throttles. Compensation depends on the extent of damage and generally ranges from 5 to 20 dollars.

Developments and innovations

Usability

Jump has invested in improving the durability and safety of e-scooters by increasing the size of the vehicle and adding more effective handbrakes. Bird has increased its vehicle size by up to 55 percent to make e-scooters last longer, and Lime has doubled the usable life of its scooters through its own design changes. Third-party software companies such as Maas have sought to ease access to e-scooters by developing mapping programs that compile adjacent micromobility options from multiple providers.

Compliance

In 2018, Skip debuted the first dockless e-scooters fitted with cameras that take periodic snapshots to monitor riding patterns, ensure that patrons are not riding on sidewalks, and confirm that vehicles are properly parked. Skip released a second scooter in 2018 featuring a locking mechanism to reduce theft and encourage riders to use designated parking areas. Working alongside municipalities since 2018, Bird has developed a 'GovTech' program that gives city governments visibility into Bird's usage data, such as localized ridership or congestion. Bird has also instituted geofences and geo-speed limits that restrict the functionality of the scooters within prohibited spatial boundaries. Bird has publicly advocated and provided funding for city governments to increase the number of bike lanes and improve the safety of existing routes.

Gender gap

A large-scale questionnaire survey conducted by Portland State University demonstrated the gender gap in e-scooter usage: 64% of respondents identified as men, 34% as women, and 2% as transgender or non-binary. A big-social-data study led by the University of Washington reported a similar gender gap, with 34.86% identified as female and 65.14% as male.

Conservancy

Partnering with a French green-energy provider has allowed Lime to convert the entirety of its charging infrastructure to renewable energy. All non-battery materials in Lime's e-scooters are recycled for future production. Both Bird and Lime have invested in carbon offset projects to mitigate the carbon impact of transporting and distributing e-scooters.
Jump and Skip have sought to reduce their secondary carbon footprint by introducing swappable batteries for e-scooters; swappable batteries minimize the role of sub-contracted chargers that collect scooters using carbon-emitting vehicles.
Technology
Motorized road transport
null
41347987
https://en.wikipedia.org/wiki/Infectious%20diseases%20%28medical%20specialty%29
Infectious diseases (medical specialty)
Infectious diseases (ID), also known as infectiology, is a medical specialty dealing with the diagnosis and treatment of infections. An infectious diseases specialist's practice consists of managing nosocomial (healthcare-acquired) infections or community-acquired infections. An ID specialist investigates and determines the cause of a disease (bacteria, viruses, parasites, fungi, or prions). Once the cause is known, an ID specialist can then run various tests to determine the best drug to treat the disease. While infectious diseases have always been around, the infectious disease specialty did not exist until the late 20th century, after scientists and physicians in the 19th century paved the way with research on the sources of infectious disease and the development of vaccines.

Scope

Infectious diseases specialists typically serve as consultants to other physicians in cases of complex infections, and often manage patients with HIV/AIDS and other forms of immunodeficiency. Although many common infections are treated by physicians without formal expertise in infectious diseases, specialists may be consulted for cases where an infection is difficult to diagnose or manage. They may also be asked to help determine the cause of a fever of unknown origin. Specialists in infectious diseases can practice both in hospitals (inpatient) and clinics (outpatient). In hospitals, specialists in infectious diseases help ensure the timely diagnosis and treatment of acute infections by recommending the appropriate diagnostic tests to identify the source of the infection and by recommending appropriate management, such as prescribing antibiotics to treat bacterial infections. For certain types of infections, involvement of specialists in infectious diseases may improve patient outcomes. In clinics, specialists in infectious diseases can provide long-term care to patients with chronic infections such as HIV/AIDS.

History

Infectious diseases are historically associated with hygiene and epidemiology, due to periodic outbreaks ravaging countries, especially in cities before the advent of sanitation, but also with travel medicine and tropical medicine, as many diseases acquired in tropical and subtropical areas are infectious in nature. Western innovations for treating infectious diseases originated in Ancient Greece: before infectious disease was even conceptualized, the Greek physician Hippocrates formed the Hippocratic Corpus. Included in this collection of 70 documents was a text, the Epidemiai volumes, that described illness-causing infectious diseases and played a key role in forming the Western approach to infectious disease. Galen of Pergamon, a physician during the Roman Empire, also greatly influenced the Western perception of infectious disease with his multiple treatises. These treatises gave insight into the Antonine Plague, which we now recognize as smallpox based on the description in Galen's treatises. Between the 16th and 18th centuries, medical professionals were educating more people, learning more from their research, and gaining access to information from other professionals in the field, thanks to printing presses such as Gutenberg's and the mass production of medical books. These books, now in the hands of many, included observations of infectious diseases such as syphilis, malaria, and smallpox. In the late 18th century, vaccination emerged, beginning with the first vaccine against smallpox.
Although there were records of individual infectious diseases spread across medical documents, a combined perception of infectious disease as an area of medicine did not exist at that time. During the 19th century, modern medicine began to develop, and the sources of infectious diseases became clearer. Robert Koch, a German physician who studied pathogens, discovered three major pathogens: the causes of anthrax, tuberculosis, and cholera. Louis Pasteur was a pioneer in the creation of vaccines for infectious diseases, one being a vaccine for anthrax. He also developed the germ theory of infectious diseases, which influenced Joseph Lister to adopt surgical practices that reduce the growth of disease-causing pathogens. Although infectious disease started to become a more unified concept in the 19th century, it was not considered a medical specialty until the 1970s, following a number of newly discovered diseases and vaccines.

Investigations

When diagnosing, a medical professional must first determine whether a patient has an infectious disease or another condition that is not caused by infection but exhibits similar symptoms. Once the illness is confirmed to be caused by an infection, infectious diseases specialists employ a variety of diagnostic tests to help identify the pathogen that is causing it. Common tests include staining, culture tests, serological tests, susceptibility tests, genotyping, nucleic acid-based tests, and polymerase chain reaction. Since samples of bodily fluid or tissue are used in these tests, a specialist has to distinguish between the non-disease-causing and disease-causing bacteria inhabiting the body in order to effectively identify and treat the infection.

Staining is a method of testing that uses a special dye to change the color of pathogens and a microscope to view them. The change in color helps doctors distinguish the pathogen from its surroundings and identify what it is. This method is only successful when the pathogen is large and plentiful; it therefore fails with viruses, which are too small to be seen under a light microscope. Staining is most informative for bacteria, where a violet-colored stain is used; this is called Gram staining. If the bacteria appear blue they are considered Gram-positive, and if they appear red, Gram-negative.

Culture tests are done when there is not enough of the pathogen to be seen through other tests: ID specialists grow the pathogen in the lab until they have enough to work with. Although cultures work for some pathogens, such as the bacteria that cause strep throat, they are ineffective for many others, such as syphilis. A test to identify the pathogen, such as staining, would take place after a culture test.

Susceptibility tests are done by ID specialists to discover which antimicrobial drug would be most effective at killing the pathogen. Cultures can also be used as a form of susceptibility testing by adding the drug to the cultured pathogens and observing whether it kills them and how much of the drug is needed to do so.

Nucleic acid-based tests are used to detect genetic material. For pathogens that cannot be cultured, ID specialists can identify them by looking for specific DNA or RNA. Polymerase chain reaction (PCR), a type of nucleic acid-based test, is similar to culture tests in that genetic material from the pathogen is duplicated. This method is mainly used when a specific pathogen is suspected.
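The "how much of the drug is needed" question in susceptibility testing is usually answered with a serial-dilution series: the lowest drug concentration that still prevents visible growth is reported as the minimum inhibitory concentration (MIC). Here is a minimal sketch of that bookkeeping, assuming twofold dilutions and a boolean growth readout; the data and function names are illustrative, not a clinical protocol.

```python
# Determine the minimum inhibitory concentration (MIC) from a twofold
# dilution series: the lowest tested concentration with no visible growth.

def mic(dilution_results: dict[float, bool]) -> float | None:
    """dilution_results maps drug concentration (ug/mL) -> growth observed.

    Returns the MIC, or None if the pathogen grew at every concentration
    tested (i.e. it is resistant over the tested range).
    """
    inhibitory = [conc for conc, grew in dilution_results.items() if not grew]
    return min(inhibitory) if inhibitory else None

# Example: hypothetical twofold series for one isolate and one antibiotic.
results = {32.0: False, 16.0: False, 8.0: False, 4.0: True, 2.0: True}
print(f"MIC = {mic(results)} ug/mL")  # -> 8.0
```

In practice the MIC is then compared against published breakpoints to classify the isolate as susceptible or resistant to the drug at achievable doses.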
Treatments

Infectious diseases specialists employ a variety of antimicrobial agents to help treat infections. The type of antimicrobial depends on the organism that is causing the infection: antibiotics are used to treat bacterial infections, antiviral agents treat viral infections, and antifungal agents treat fungal infections.

Training

United States

In the United States, infectious diseases is a subspecialty of internal medicine and pediatrics. In order to "sit" for the infectious diseases board certification examination (administered by the American Board of Internal Medicine or the American Board of Pediatrics), physicians must have completed their residency (in internal medicine or pediatrics) and then undergone additional fellowship training (for at least two or three years, respectively). The exam has been given as a subspecialty of internal medicine since 1972 and as a subspecialty of pediatrics since 1994.
Biology and health sciences
Fields of medicine
Health
54258215
https://en.wikipedia.org/wiki/Local%20Hole
Local Hole
The KBC Void (or Local Hole) is an immense, comparatively empty region of space, named after astronomers Ryan Keenan, Amy Barger, and Lennox Cowie, who studied it in 2013. The existence of a local underdensity has been the subject of extensive research literature. The underdensity is proposed to be roughly spherical, approximately 2 billion light-years (600 megaparsecs, Mpc) in diameter. As with other voids, it is not completely empty; it contains the Milky Way, the Local Group, and the larger part of the Laniakea Supercluster. The Milky Way is within a few hundred million light-years of the void's center.

It is debated whether the existence of the KBC void is consistent with the ΛCDM model. While Haslbauer et al. say that voids as large as the KBC void are inconsistent with ΛCDM, Sahlén et al. argue that the existence of supervoids such as the KBC void is consistent with ΛCDM. Galaxies inside a void experience a net gravitational pull toward the denser matter outside it; the resulting outflow yields a larger local value for the Hubble constant, a cosmological measure of how fast the universe expands. Some authors have proposed the structure as the cause of the discrepancy between measurements of the Hubble constant using galactic supernovae and Cepheid variables (72–75 km/s/Mpc) and from the cosmic microwave background and baryon acoustic oscillation data (67–68 km/s/Mpc). Other work has found no evidence for this in observations, finding the scale of the claimed underdensity to be incompatible with observations that extend beyond its radius. Important deficiencies were subsequently pointed out in this analysis, leaving open the possibility that the Hubble tension is indeed caused by outflow from the KBC void, albeit in the context of MOND gravity rather than general relativity. It was later discovered that this outflow model successfully predicted the bulk flow curve, an important measure of the velocity field in the local Universe.
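The connection between a void outflow and an inflated local Hubble constant is simply the addition of a peculiar velocity to the Hubble flow: an observer fits v = H0·d to galaxies that actually obey v = H0,true·d + v_out. The sketch below estimates the size of the effect using only the Hubble-constant values quoted above and an assumed observation distance; it is an order-of-magnitude illustration, not a fit from the literature.

```python
# How large an outflow would a void need to mimic the Hubble tension?
# Observed recession: v = H0_true * d + v_out, so the locally inferred
# constant is H0_local = v / d = H0_true + v_out / d.

H0_TRUE = 67.4        # km/s/Mpc, CMB/BAO-based value from the text
H0_LOCAL = 73.0       # km/s/Mpc, supernova/Cepheid-based value
DISTANCE_MPC = 100.0  # assumed typical distance of calibration galaxies

required_outflow = (H0_LOCAL - H0_TRUE) * DISTANCE_MPC  # km/s
print(f"Outflow needed at {DISTANCE_MPC:.0f} Mpc: {required_outflow:.0f} km/s")
# -> roughly 560 km/s; peculiar velocities of this order are exactly what
#    the bulk-flow measurements mentioned above probe.
```

The required outflow shrinks with distance inside the void and vanishes for galaxies well beyond it, which is why surveys extending past the void's radius are the key observational test of the proposal.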
Physical sciences
Notable patches of universe
Astronomy
62786585
https://en.wikipedia.org/wiki/SARS-CoV-2
SARS-CoV-2
Severe acute respiratory syndrome coronavirus 2 (SARS‑CoV‑2) is a strain of coronavirus that causes COVID-19, the respiratory illness responsible for the COVID-19 pandemic. The virus previously had the provisional name 2019 novel coronavirus (2019-nCoV), and has also been called human coronavirus 2019 (HCoV-19 or hCoV-19). The virus was first identified in the city of Wuhan, Hubei, China; the World Health Organization designated the outbreak a public health emergency of international concern from January 30, 2020, to May 5, 2023.

SARS‑CoV‑2 is a positive-sense single-stranded RNA virus that is contagious in humans. SARS‑CoV‑2 is a strain of the species Betacoronavirus pandemicum (SARSr-CoV), as is SARS-CoV-1, the virus that caused the 2002–2004 SARS outbreak. There are animal-borne coronavirus strains more closely related to SARS-CoV-2, the closest known relative being the BANAL-52 bat coronavirus. SARS-CoV-2 is of zoonotic origin; its close genetic similarity to bat coronaviruses suggests it emerged from such a bat-borne virus. Research is ongoing as to whether SARS‑CoV‑2 came directly from bats or indirectly through any intermediate hosts. The virus shows little genetic diversity, indicating that the spillover event introducing SARS‑CoV‑2 to humans is likely to have occurred in late 2019.

Epidemiological studies estimate that, in the period between December 2019 and September 2020, each infection resulted in an average of 2.4–3.4 new infections when no members of the community were immune and no preventive measures were taken. However, some subsequent variants have become more infectious. The virus is airborne and primarily spreads between people through close contact and via aerosols and respiratory droplets that are exhaled when talking, breathing, or otherwise exhaling, as well as those produced from coughs and sneezes. It enters human cells by binding to angiotensin-converting enzyme 2 (ACE2), a membrane protein that regulates the renin–angiotensin system.

Terminology

During the initial outbreak in Wuhan, China, various names were used for the virus; some names used by different sources included "the coronavirus" or "Wuhan coronavirus". In January 2020, the World Health Organization (WHO) recommended "2019 novel coronavirus" (2019-nCoV) as the provisional name for the virus. This was in accordance with WHO's 2015 guidance against using geographical locations, animal species, or groups of people in disease and virus names. On 11 February 2020, the International Committee on Taxonomy of Viruses adopted the official name "severe acute respiratory syndrome coronavirus 2" (SARS‑CoV‑2). To avoid confusion with the disease SARS, the WHO sometimes refers to SARS‑CoV‑2 as "the COVID-19 virus" in public health communications, and the name HCoV-19 was included in some research articles. Referring to COVID-19 as the "Wuhan virus" has been described as dangerous by WHO officials and as xenophobic by many journalists and academics.

Infection and transmission

Human-to-human transmission of SARS‑CoV‑2 was confirmed on 20 January 2020 during the COVID-19 pandemic. Transmission was initially assumed to occur primarily via respiratory droplets from coughs and sneezes at close range. Laser light scattering experiments suggest that speaking is an additional, far-reaching mode of transmission indoors with little air flow. Other studies have suggested that the virus may be airborne as well, with aerosols potentially being able to transmit the virus.
During human-to-human transmission, between 200 and 800 infectious SARS-CoV-2 virions are thought to initiate a new infection. If confirmed, aerosol transmission has biosafety implications: a major concern when working with emerging viruses in the laboratory is the generation of aerosols from various laboratory activities, which are not immediately recognizable and may affect other scientific personnel. Indirect contact via contaminated surfaces is another possible cause of infection. Preliminary research indicates that the virus may remain viable on plastic (polypropylene) and stainless steel (AISI 304) for up to three days, but it does not survive on cardboard for more than one day or on copper for more than four hours. The virus is inactivated by soap, which destabilizes its lipid bilayer. Viral RNA has also been found in stool samples and semen from infected individuals. The degree to which the virus is infectious during the incubation period is uncertain, but research has indicated that the pharynx reaches peak viral load approximately four days after infection, or in the first week of symptoms, and declines thereafter. The duration of SARS-CoV-2 RNA shedding is generally between 3 and 46 days after symptom onset. A study by a team of researchers from the University of North Carolina found that the nasal cavity is seemingly the dominant initial site of infection, with subsequent aspiration-mediated seeding of the virus into the lungs in SARS-CoV-2 pathogenesis. They found an infection gradient from high in proximal to low in distal pulmonary epithelial cultures, with focal infection of ciliated cells and type 2 pneumocytes in the airway and alveolar regions, respectively. Studies have identified a range of animals—such as cats, ferrets, hamsters, non-human primates, minks, tree shrews, raccoon dogs, fruit bats, and rabbits—that are susceptible and permissive to SARS-CoV-2 infection. Some institutions have advised that those infected with SARS-CoV-2 restrict their contact with animals.
Asymptomatic and presymptomatic transmission
On 1 February 2020, the World Health Organization (WHO) indicated that "transmission from asymptomatic cases is likely not a major driver of transmission". One meta-analysis found that 17% of infections are asymptomatic and that asymptomatic individuals were 42% less likely to transmit the virus. However, an epidemiological model of the beginning of the outbreak in China suggested that "pre-symptomatic shedding may be typical among documented infections" and that subclinical infections may have been the source of a majority of infections. That may explain how, of 217 people on board a cruise liner that docked at Montevideo, only 24 of the 128 who tested positive for viral RNA showed symptoms. Similarly, a study of ninety-four patients hospitalized in January and February 2020 estimated that patients began shedding virus two to three days before symptoms appeared and that "a substantial proportion of transmission probably occurred before first symptoms in the index case". The authors later published a correction showing that shedding began even earlier than first estimated, four to five days before symptoms appeared.
Reinfection
There is uncertainty about reinfection and long-term immunity. It is not known how common reinfection is, but reports have indicated that it occurs with variable severity.
The first reported case of reinfection was a 33-year-old man from Hong Kong who first tested positive on 26 March 2020, was discharged on 15 April 2020 after two negative tests, and tested positive again on 15 August 2020 (142 days later); the reinfection was confirmed by whole-genome sequencing showing that the viral genomes from the two episodes belonged to different clades. The findings imply that herd immunity may not eliminate the virus if reinfection is not uncommon, and that vaccines may not provide lifelong protection against the virus. Another case study described a 25-year-old man from Nevada who tested positive for SARS-CoV-2 on 18 April 2020 and again on 5 June 2020 (separated by two negative tests). Since genomic analyses showed significant genetic differences between the SARS-CoV-2 variants sampled on those two dates, the case study authors determined this was a reinfection. The man's second infection was symptomatically more severe than the first, but the mechanisms that could account for this are not known.
Reservoir and origin
No natural reservoir for SARS-CoV-2 has been identified. Prior to the emergence of SARS-CoV-2 as a pathogen infecting humans, there had been two previous zoonosis-based coronavirus epidemics, caused by SARS-CoV-1 and MERS-CoV. The first known infections from SARS-CoV-2 were discovered in Wuhan, China. The original source of viral transmission to humans remains unclear, as does whether the virus became pathogenic before or after the spillover event. Because many of the early infectees were workers at the Huanan Seafood Market, it has been suggested that the virus might have originated from the market. However, other research indicates that visitors may have introduced the virus to the market, which then facilitated rapid expansion of the infections. A March 2021 WHO-convened report stated that human spillover via an intermediate animal host was the most likely explanation, with direct spillover from bats next most likely. Introduction through the food supply chain and the Huanan Seafood Market was considered another possible, but less likely, explanation. An analysis in November 2021, however, said that the earliest-known case had been misidentified and that the preponderance of early cases linked to the Huanan Market argued for it being the source. For a virus recently acquired through cross-species transmission, rapid evolution is expected. The mutation rate estimated from early cases of SARS-CoV-2 was per site per year. Coronaviruses in general have high genetic plasticity, but SARS-CoV-2's viral evolution is slowed by the RNA-proofreading capability of its replication machinery. For comparison, the in vivo mutation rate of SARS-CoV-2 has been found to be lower than that of influenza. Research into the natural reservoir of the virus that caused the 2002–2004 SARS outbreak has resulted in the discovery of many SARS-like bat coronaviruses, most originating in horseshoe bats. The closest matches by far, published in Nature in February 2022, were the viruses BANAL-52 (96.8% sequence identity to SARS-CoV-2), BANAL-103, and BANAL-236, collected from three different species of bats in Feuang, Laos. An earlier source, published in February 2020, identified the virus RaTG13, collected from bats in Mojiang, Yunnan, China, as the closest to SARS-CoV-2, with 96.1% sequence identity. None of the above is its direct ancestor. Bats are considered the most likely natural reservoir of SARS-CoV-2.
Differences between the bat coronaviruses and SARS-CoV-2 suggest that humans may have been infected via an intermediate host, although the source of introduction into humans remains unknown. Although the role of pangolins as an intermediate host was initially posited (a study published in July 2020 suggested that pangolins are an intermediate host of SARS-CoV-2-like coronaviruses), subsequent studies have not substantiated their contribution to the spillover. Evidence against this hypothesis includes the fact that pangolin virus samples are too distant from SARS-CoV-2: isolates obtained from pangolins seized in Guangdong were only 92% identical in sequence to the SARS-CoV-2 genome (a match above 90 percent may sound high, but in genomic terms it is a wide evolutionary gap). In addition, despite similarities in a few critical amino acids, pangolin virus samples exhibit poor binding to the human ACE2 receptor.
Phylogenetics and taxonomy
SARS-CoV-2 belongs to the broad family of viruses known as coronaviruses. It is a positive-sense single-stranded RNA (+ssRNA) virus with a single linear RNA segment. Coronaviruses infect humans, other mammals (including livestock and companion animals), and avian species. Human coronaviruses are capable of causing illnesses ranging from the common cold to more severe diseases such as Middle East respiratory syndrome (MERS, fatality rate ~34%). SARS-CoV-2 is the seventh known coronavirus to infect people, after 229E, NL63, OC43, HKU1, MERS-CoV, and the original SARS-CoV. Like the SARS-related coronavirus implicated in the 2003 SARS outbreak, SARS-CoV-2 is a member of the subgenus Sarbecovirus (beta-CoV lineage B). Coronaviruses undergo frequent recombination. The mechanism of recombination in unsegmented RNA viruses such as SARS-CoV-2 is generally copy-choice replication, in which genetic material switches from one RNA template molecule to another during replication. The SARS-CoV-2 RNA sequence is approximately 30,000 bases in length, relatively long for a coronavirus (coronaviruses in turn carry the largest genomes among all RNA virus families). Its genome consists almost entirely of protein-coding sequences, a trait shared with other coronaviruses. A distinguishing feature of SARS-CoV-2 is its incorporation of a polybasic site cleaved by furin, which appears to be an important element enhancing its virulence. It has been suggested that the acquisition of the furin cleavage site in the SARS-CoV-2 S protein was essential for zoonotic transfer to humans. The furin protease recognizes the canonical peptide sequence R-X-[R/K]-R↓X, where the cleavage site is indicated by a down arrow and X is any amino acid. In SARS-CoV-2, the recognition site is formed by the incorporated 12-nucleotide sequence CCT CGG CGG GCA, which encodes the amino acid sequence PRRA. This sequence lies immediately upstream of an arginine and a serine, which together form the S1/S2 cleavage site (PRRAR↓S) of the spike protein. Although such sites are a common naturally occurring feature of other viruses within the subfamily Orthocoronavirinae, they appear in few other viruses of the genus Betacoronavirus, and SARS-CoV-2 is unique among members of its subgenus in having such a site. The furin cleavage site PRRAR↓ is highly similar to that of the feline coronavirus, an alphacoronavirus 1 strain.
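The motif notation above can be made concrete with a short script. The sketch below is illustrative only: the sequence fragment is an abbreviated stand-in for the spike S1/S2 region, and the helper name is invented. Note that, written strictly, the canonical pattern R-X-[R/K]-R does not match RRAR (an alanine occupies the [R/K] slot), which is why the SARS-CoV-2 site is often described in terms of the relaxed minimal furin motif R-X-X-R↓.

import re

# Furin cleavage motifs over one-letter amino acid codes:
# strict canonical form R-X-[R/K]-R and relaxed minimal form R-X-X-R.
CANONICAL = re.compile(r"R[A-Z][RK]R")
MINIMAL = re.compile(r"R[A-Z]{2}R")

def furin_sites(seq: str, motif) -> list:
    """Return end positions of motif matches; furin cleaves after the final R."""
    return [m.end() for m in motif.finditer(seq)]

# Abbreviated, illustrative fragment spanning the S1/S2 boundary (...PRRAR|SV...).
fragment = "SPRRARSV"
print(furin_sites(fragment, CANONICAL))  # [] -- alanine sits in the [R/K] slot
print(furin_sites(fragment, MINIMAL))    # [6] -- cleavage falls after ...PRRAR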
Viral genetic sequence data can provide critical information about whether viruses separated by time and space are likely to be epidemiologically linked. With a sufficient number of sequenced genomes, it is possible to reconstruct a phylogenetic tree of the mutation history of a family of viruses. By 12 January 2020, five genomes of SARS-CoV-2 had been isolated from Wuhan and reported by the Chinese Center for Disease Control and Prevention (CCDC) and other institutions; the number of genomes increased to 42 by 30 January 2020. A phylogenetic analysis of those samples showed they were "highly related with at most seven mutations relative to a common ancestor", implying that the first human infection occurred in November or December 2019. Examination of the topology of the phylogenetic tree at the start of the pandemic also found high similarities between human isolates. Later, 3,422 SARS-CoV-2 genomes, belonging to 19 strains and sampled on all continents except Antarctica, were publicly available. On 11 February 2020, the International Committee on Taxonomy of Viruses announced that, according to existing rules that compute hierarchical relationships among coronaviruses based on five conserved sequences of nucleic acids, the differences between what was then called 2019-nCoV and the virus from the 2003 SARS outbreak were insufficient to make them separate viral species. Therefore, it identified 2019-nCoV as a strain of the species Severe acute respiratory syndrome–related coronavirus. In July 2020, scientists reported that a more infectious SARS-CoV-2 variant with spike protein variant G614 had replaced D614 as the dominant form in the pandemic. Coronavirus genomes and subgenomes encode six open reading frames (ORFs). In October 2020, researchers discovered a possible overlapping gene, named ORF3d, in the SARS-CoV-2 genome. It is unknown whether the protein produced by ORF3d has any function, but it provokes a strong immune response. ORF3d has been identified before, in a variant of coronavirus that infects pangolins.
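The comparisons behind statements such as "at most seven mutations relative to a common ancestor" reduce to counting nucleotide differences between aligned genomes. A minimal sketch with toy sequences (real analyses use full ~30,000-base alignments and dedicated phylogenetics software):

from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Count differing positions between two aligned sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(x != y for x, y in zip(a, b))

# Toy aligned fragments standing in for sequenced isolates.
isolates = {
    "isolate_A": "ACGTACGTAC",
    "isolate_B": "ACGTACGTCC",  # 1 difference from isolate_A
    "isolate_C": "ACGAACGTCC",  # 2 differences from isolate_A
}

for (n1, s1), (n2, s2) in combinations(isolates.items(), 2):
    print(n1, n2, hamming(s1, s2))

Isolates separated by the fewest differences are grouped together; tree-building methods such as neighbor joining or maximum likelihood then infer the branching order.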
Phylogenetic tree
Variants
There are many thousands of variants of SARS-CoV-2, which can be grouped into much larger clades. Several different clade nomenclatures have been proposed. Nextstrain divides the variants into five clades (19A, 19B, 20A, 20B, and 20C), while GISAID divides them into seven (L, O, V, S, G, GH, and GR). Several notable variants of SARS-CoV-2 emerged in late 2020. The World Health Organization has currently declared five variants of concern, as follows:
Alpha: Lineage B.1.1.7 emerged in the United Kingdom in September 2020, with evidence of increased transmissibility and virulence. Notable mutations include N501Y and P681H. An E484K mutation in some lineage B.1.1.7 virions has been noted and is also tracked by various public health agencies.
Beta: Lineage B.1.351 emerged in South Africa in May 2020, with evidence of increased transmissibility and changes to antigenicity, with some public health officials raising alarms about its impact on the efficacy of some vaccines. Notable mutations include K417N, E484K and N501Y.
Gamma: Lineage P.1 emerged in Brazil in November 2020, also with evidence of increased transmissibility and virulence, alongside changes to antigenicity. Similar concerns about vaccine efficacy have been raised. Notable mutations also include K417N, E484K and N501Y.
Delta: Lineage B.1.617.2 emerged in India in October 2020. There is also evidence of increased transmissibility and changes to antigenicity.
Omicron: Lineage B.1.1.529 emerged in Botswana in November 2021.
Other notable variants include six other WHO-designated variants under investigation and Cluster 5, which emerged among mink in Denmark and resulted in a mink euthanasia campaign that rendered the variant virtually extinct.
Virology
Virus structure
Each SARS-CoV-2 virion is in diameter; its mass within the global human populace has been estimated as being between 0.1 and 10 kilograms. Like other coronaviruses, SARS-CoV-2 has four structural proteins, known as the S (spike), E (envelope), M (membrane), and N (nucleocapsid) proteins; the N protein holds the RNA genome, and the S, E, and M proteins together create the viral envelope. Coronavirus S proteins are glycoproteins and also type I membrane proteins (membrane proteins containing a single transmembrane domain, with the N-terminus oriented on the extracellular side). They are divided into two functional parts (S1 and S2). In SARS-CoV-2, the spike protein, which has been imaged at the atomic level using cryogenic electron microscopy, is the protein responsible for allowing the virus to attach to and fuse with the membrane of a host cell; specifically, its S1 subunit catalyzes attachment, the S2 subunit fusion.
Genome
As of early 2022, about 7 million SARS-CoV-2 genomes had been sequenced and deposited into public databases, and another 800,000 or so were being added each month. By September 2023, the GISAID EpiCoV database contained more than 16 million genome sequences. SARS-CoV-2 has a linear, positive-sense, single-stranded RNA genome about 30,000 bases long. Its genome has a bias against cytosine (C) and guanine (G) nucleotides, like other coronaviruses. The genome has the highest composition of U (32.2%), followed by A (29.9%), and a similar composition of G (19.6%) and C (18.3%). The nucleotide bias arises from the mutation of guanines and cytosines to adenosines and uracils, respectively. The mutation of CG dinucleotides is thought to arise to avoid the zinc finger antiviral protein–related defense mechanism of cells, and to lower the energy needed to unbind the genome during replication and translation (adenosine and uracil base-pair via two hydrogen bonds, cytosine and guanine via three). The depletion of CG dinucleotides in its genome has led the virus to have a noticeable codon usage bias. For instance, arginine's six different codons have a relative synonymous codon usage of AGA (2.67), CGU (1.46), AGG (0.81), CGC (0.58), CGA (0.29), and CGG (0.19). A similar codon usage bias trend is seen in other SARS-related coronaviruses.
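The relative synonymous codon usage (RSCU) values quoted above are each codon's observed count divided by the count that would be expected if all synonymous codons for that amino acid were used equally. A minimal sketch (the codon counts are hypothetical, chosen only to roughly reproduce the published ratios):

ARG_CODONS = ["CGU", "CGC", "CGA", "CGG", "AGA", "AGG"]

def rscu(codon_counts: dict, codons: list) -> dict:
    """RSCU: observed codon count divided by the mean count expected
    under equal use of all synonymous codons."""
    total = sum(codon_counts.get(c, 0) for c in codons)
    expected = total / len(codons)
    return {c: round(codon_counts.get(c, 0) / expected, 2) for c in codons}

# Hypothetical arginine codon counts (not real SARS-CoV-2 data).
counts = {"AGA": 80, "CGU": 44, "AGG": 24, "CGC": 17, "CGA": 9, "CGG": 6}
print(rscu(counts, ARG_CODONS))
# {'CGU': 1.47, 'CGC': 0.57, 'CGA': 0.3, 'CGG': 0.2, 'AGA': 2.67, 'AGG': 0.8}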
Replication cycle
Virus infections start when viral particles bind to host surface cellular receptors. Protein modeling experiments on the spike protein of the virus soon suggested that SARS-CoV-2 has sufficient affinity for the receptor angiotensin-converting enzyme 2 (ACE2) on human cells to use it as a mechanism of cell entry. By 22 January 2020, a group in China working with the full virus genome and a group in the United States using reverse genetics methods independently and experimentally demonstrated that ACE2 could act as the receptor for SARS-CoV-2. Studies have shown that SARS-CoV-2 has a higher affinity for human ACE2 than the original SARS virus. SARS-CoV-2 may also use basigin to assist in cell entry. Initial spike protein priming by transmembrane protease serine 2 (TMPRSS2) is essential for entry of SARS-CoV-2. The host protein neuropilin 1 (NRP1) may aid the virus in host cell entry using ACE2. After a SARS-CoV-2 virion attaches to a target cell and binds the host receptor ACE2, the cell's protease TMPRSS2 cuts open the spike protein of the virus, exposing a fusion peptide in the S2 subunit. After fusion, an endosome forms around the virion, separating it from the rest of the host cell. The virion escapes when the pH of the endosome drops or when cathepsin, a host cysteine protease, cleaves it. The virion then releases RNA into the cell and forces the cell to produce and disseminate copies of the virus, which infect more cells. SARS-CoV-2 produces at least three virulence factors that promote shedding of new virions from host cells and inhibit immune response. Whether they include downregulation of ACE2, as seen in similar coronaviruses, remains under investigation (as of May 2020).
Treatment and drug development
Very few drugs are known to effectively inhibit SARS-CoV-2. Masitinib was found to inhibit the SARS-CoV-2 main protease and produced a greater than 200-fold reduction in viral titers in the lungs and noses of mice; however, it is not approved for the treatment of COVID-19 in humans. In December 2021, the United States granted emergency use authorization to nirmatrelvir/ritonavir for the treatment of the virus; the European Union, United Kingdom, and Canada followed suit with full authorization soon after. One study found that nirmatrelvir/ritonavir reduced the risk of hospitalization and death by 88%. COVID Moonshot is an international collaborative open-science project started in March 2020 with the goal of developing an unpatented oral antiviral drug for the treatment of SARS-CoV-2.
Epidemiology
Retrospective tests collected within the Chinese surveillance system revealed no clear indication of substantial unrecognized circulation of SARS-CoV-2 in Wuhan during the latter part of 2019. A meta-analysis from November 2020 estimated the basic reproduction number (R0) of the virus to be between 2.39 and 3.44. This means each infection from the virus is expected to result in 2.39 to 3.44 new infections when no members of the community are immune and no preventive measures are taken. The reproduction number may be higher in densely populated conditions such as those found on cruise ships. Human behavior affects the R0 value, and hence estimates of R0 differ between countries, cultures, and social norms. For instance, one study found relatively low R0 values (~3.5) in Sweden, Belgium, and the Netherlands, while Spain and the US had significantly higher values (5.9 and 6.4, respectively). There have been about 96,000 confirmed cases of infection in mainland China. While the proportion of infections that result in confirmed cases or progress to diagnosable disease remains unclear, one mathematical model estimated that 75,815 people were infected in Wuhan alone on 25 January 2020, at a time when the number of confirmed cases worldwide was only 2,015. Before 24 February 2020, over 95% of all deaths from COVID-19 worldwide had occurred in Hubei province, where Wuhan is located.
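The meaning of the basic reproduction number can be illustrated with a simple generation-by-generation calculation under the idealized assumptions stated above (fully susceptible population, no interventions); the function name and generation count are arbitrary illustrations:

def expected_cases(r0: float, generations: int, seed: float = 1.0) -> float:
    """Expected cumulative infections after a number of transmission
    generations, each infection causing r0 new ones on average."""
    total, current = seed, seed
    for _ in range(generations):
        current *= r0
        total += current
    return total

# With the November 2020 meta-analysis range of R0 = 2.39-3.44:
for r0 in (2.39, 3.44):
    print(f"R0={r0}: ~{expected_cases(r0, generations=5):.0f} cumulative cases")

Five generations at the upper estimate already yield several hundred cumulative cases from a single introduction, which is why seemingly small differences in R0 matter.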
Biology and health sciences
Infectious disease
null
47318223
https://en.wikipedia.org/wiki/Kepler-452b
Kepler-452b
Kepler-452b (sometimes called Earth 2.0 or Earth's Cousin based on its characteristics; also known by its Kepler object of interest designation KOI-7016.01) is a candidate super-Earth exoplanet orbiting near the inner edge of the habitable zone of the Sun-like star Kepler-452, and it is the only planet in the system discovered by the Kepler space telescope. It is located about from Earth in the constellation of Cygnus. Kepler-452b orbits its host star at a distance of (nearly the same as the Earth–Sun distance), with an orbital period of roughly 385 days; it has a mass at least five times that of Earth and a radius around 1.5 times that of Earth. It is the first potentially rocky super-Earth discovered orbiting within the habitable zone of a very Sun-like star. However, it is unknown whether it is habitable, as it receives slightly more energy from its star than Earth does and could be subject to a runaway greenhouse effect. The Kepler space telescope identified the exoplanet, and its discovery was announced by NASA on 23 July 2015. The planet is about away from the Solar System. At the speed of the New Horizons spacecraft, about , it would take approximately 30 million years to get there.
Physical characteristics
Mass, radius and temperature
Kepler-452b has a probable mass five times that of Earth, and its surface gravity is nearly twice Earth's, though calculations of mass for exoplanets are only rough estimates. If it is a terrestrial planet, it is most likely a super-Earth with many active volcanoes due to its higher mass and density. Clouds on the planet would be thick and misty, covering much of the surface as viewed from space. The planet takes 385 Earth days to orbit its star. Its radius is 50% larger than Earth's, and its orbit lies within the conservative habitable zone of its parent star. It has an equilibrium temperature of , a little warmer than Earth's.
Host star
The host star, Kepler-452, is a G-type star of about the same mass as the Sun, only 3.7% more massive and 11% larger. It has a surface temperature of 5757 K, nearly the same as the Sun's 5778 K. The star's age is estimated at about 6 billion years, about 1.5 billion years older than the Sun, whose age is about 4.6 billion years. Kepler-452b has been in Kepler-452's habitable zone for most of its existence, a duration of just over six billion years. From the surface of Kepler-452b, its star would look almost identical to the Sun as viewed from the Earth. The star's apparent magnitude, or how bright it appears from Earth's perspective, is 13.426; it is therefore too dim to be seen with the naked eye.
Orbit
Kepler-452b orbits its host star with an orbital period of 385 days and an orbital radius of about 1.04 AU, nearly the same as Earth's (1 AU). Kepler-452b is most likely not tidally locked, and it has a circular orbit. Its host star, Kepler-452, is about 20% more luminous than the Sun (L = 1.2 in solar units).
Potential habitability
It is not known for certain whether Kepler-452b is a rocky planet, but based on its small radius, it is likely to be rocky. It is not clear whether Kepler-452b offers habitable environments. It orbits a G2V-type star, like the Sun, which is 20% more luminous, with nearly the same temperature and mass. However, the star is roughly 6 billion years old, making it 1.5 billion years older than the Sun.
At this point in its star's evolution, Kepler-452b is receiving about 10% more energy from its parent star than Earth receives from the Sun. If Kepler-452b is a rocky planet, it may be subject to a runaway greenhouse effect similar to that seen on Venus.
"Delayed" runaway greenhouse effect
However, because Kepler-452b is 50 percent larger than Earth in radius, it is likely to have a mass of about 5 Earth masses, which could allow it to hold on to any oceans it may have for a longer period, staving off a runaway greenhouse effect for another 500 million years. This, in turn, would be accompanied by a "buffering" of the carbonate–silicate cycle, extending its lifetime due to increased volcanic activity on Kepler-452b. This could allow any potential life on the surface to inhabit the planet for another 500–900 million years before the habitable zone is pushed beyond Kepler-452b's orbit.
Discovery and follow-up studies
In 2009, NASA's Kepler space telescope was observing stars with its photometer, the instrument it uses to detect transit events, in which a planet crosses in front of and dims its host star for a brief and roughly regular interval. In these observations, Kepler monitored stars in the Kepler Input Catalog, including Kepler-452; the preliminary light curves were sent to the Kepler science team for analysis, which chose obvious planetary candidates for follow-up by other telescopes. Observations of the potential exoplanet candidates took place between 13 May 2009 and 17 March 2012. Kepler-452b exhibited transits occurring roughly every 385 days, and it was eventually concluded that a planetary body was responsible. The discovery was announced by NASA on 23 July 2015. At a distance of nearly , Kepler-452b is too remote for current telescopes, or the next generation of planned telescopes, to determine its true mass or whether it has an atmosphere. The Kepler space telescope focused on a single small region of the sky, but next-generation planet-hunting space telescopes such as TESS and CHEOPS will examine nearby stars throughout the sky, with follow-up studies of these closer exoplanets planned for the James Webb Space Telescope and future large ground-based telescopes, to analyze their atmospheres, determine masses, and infer compositions. A study in 2018 by Mullally et al. claimed that, statistically, Kepler-452b has not been proven to exist and must still be considered a candidate. However, Kepler-452b remains a possible planet and has not been shown to be a false positive.
SETI targeting
Scientists with the SETI Institute (Search for Extraterrestrial Intelligence) have already begun targeting Kepler-452b, the first near-Earth-size world found in the habitable zone of a Sun-like star. SETI Institute researchers are using the Allen Telescope Array, a collection of 6-meter (20 ft) telescopes in the Cascade Mountains of California, to scan for radio transmissions from Kepler-452b. As of July 2015, the array had scanned the exoplanet on over 2 billion frequency bands, with no result. The telescopes will continue to scan a total of 9 billion channels, searching for alien radio signals.
Observation and exploration
Kepler-452b is from Earth. The fastest current spacecraft, the New Horizons uncrewed probe that passed Pluto in July 2015, travels at just . At that speed, it would take a spacecraft about 26 million years to reach Kepler-452b from Earth, if it were going in that direction.
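The gravity and insolation figures quoted in this article follow from simple scaling relations relative to Earth: surface gravity scales as M/R², and stellar flux at the planet as L/a² (the inverse-square law). A minimal sketch using the parameters given above (variable names are arbitrary):

# Scaling relations relative to Earth, using the values quoted in this article.
mass_earths = 5.0        # estimated mass, in Earth masses
radius_earths = 1.5      # radius, in Earth radii
luminosity_suns = 1.2    # host star luminosity, in solar units
orbit_au = 1.04          # orbital distance, in astronomical units

surface_gravity = mass_earths / radius_earths**2   # in Earth gravities
stellar_flux = luminosity_suns / orbit_au**2       # relative to Earth's insolation

print(f"surface gravity ~ {surface_gravity:.1f} g")  # ~2.2 g, i.e. roughly twice Earth's
print(f"stellar flux ~ {stellar_flux:.2f}")          # ~1.11, i.e. about 10% more energy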
Physical sciences
Notable exoplanets
Astronomy
64179842
https://en.wikipedia.org/wiki/Metallization%20pressure
Metallization pressure
Metallization pressure is the pressure required for a non-metallic chemical element to become a metal. Every material is predicted to turn into a metal if the pressure is high enough and the temperature low enough. Some of these pressures are beyond the reach of diamond anvil cells and are thus theoretical predictions. Neon has the highest predicted metallization pressure of any element. The value for phosphorus refers to pressurizing black phosphorus. The value for arsenic refers to pressurizing metastable black arsenic; grey arsenic, the standard state, is already a metallic conductor at standard conditions. No value is known or theoretically predicted for astatine or radon.
Physical sciences
Phase transitions
Physics
52968860
https://en.wikipedia.org/wiki/Biodiversity%20loss
Biodiversity loss
Biodiversity loss happens when plant or animal species disappear completely from Earth (extinction) or when there is a decrease or disappearance of species in a specific area. Biodiversity loss means that there is a reduction in biological diversity in a given area. The decrease can be temporary or permanent. It is temporary if the damage that led to the loss is reversible in time, for example through ecological restoration. If this is not possible, the decrease is permanent. Most biodiversity loss is caused, generally speaking, by human activities that push the planetary boundaries too far. These activities include habitat destruction (for example, deforestation) and land use intensification (for example, monoculture farming). Further problem areas are air and water pollution (including nutrient pollution), over-exploitation, invasive species, and climate change. Many scientists, along with the Global Assessment Report on Biodiversity and Ecosystem Services, say that the main reason for biodiversity loss is human population growth, because it leads to overpopulation and excessive consumption. Others disagree, saying that loss of habitat is caused mainly by "the growth of commodities for export" and that population has very little to do with overall consumption; more important are wealth disparities between and within countries. Climate change is another threat to global biodiversity. For example, coral reefs—which are biodiversity hotspots—will be lost by the year 2100 if global warming continues at the current rate. Still, it is general habitat destruction (often for the expansion of agriculture), not climate change, that is currently the bigger driver of biodiversity loss. Invasive species and other disturbances have become more common in forests in the last several decades. These tend to be directly or indirectly connected to climate change and can cause a deterioration of forest ecosystems. Groups that care about the environment have been working for many years to stop the decrease in biodiversity. Nowadays, many global policies include activities to stop biodiversity loss. For example, the UN Convention on Biological Diversity aims to prevent biodiversity loss and to conserve wilderness areas. However, a 2020 United Nations Environment Programme report found that most of these efforts had failed to meet their goals. For example, of the 20 biodiversity goals laid out by the Aichi Biodiversity Targets in 2010, only six were "partially achieved" by 2020. This ongoing global extinction is also called the Holocene extinction or sixth mass extinction.
Global estimates across all species
The current rate of global biodiversity loss is estimated to be 100 to 1,000 times higher than the (naturally occurring) background extinction rate, faster than at any other time in human history, and is expected to grow in the coming years. The fast-growing extinction trends of animal groups such as mammals, birds, reptiles, amphibians, and fish have led scientists to declare a current biodiversity crisis in both land and ocean ecosystems. In 2006, many more species were formally classified as rare, endangered, or threatened; moreover, scientists have estimated that millions more species are at risk that have not been formally recognized. Deforestation also plays a large role in biodiversity loss: more than half of the world's biodiversity is hosted in tropical rainforests. Regions subject to especially high loss of biodiversity are referred to as biodiversity hotspots.
Since 1988, the number of hotspots has increased from 10 to 34. Of the 34 hotspots currently recognized, 16 are in tropical regions (as of 2006). Researchers noted in 2006 that although biodiversity hotspots cover only 2.3% of the world's surface, they host a large fraction (about 50%) of vascular plant species. In 2021, about 28 percent of the 134,400 species assessed using the IUCN Red List criteria were listed as threatened with extinction—a total of 37,400 species, compared with 16,119 threatened species in 2006. A 2022 study that surveyed more than 3,000 experts found that "global biodiversity loss and its impacts may be greater than previously thought", and estimated that roughly 30% of species "have been globally threatened or driven extinct since the year 1500". Research published in 2023 found that, out of 70,000 species, about 48% are facing decreasing populations due to human activities, while only 3% are seeing population increases.
Methods to quantify loss
Biologists define biodiversity as the "totality of genes, species and ecosystems of a region". To measure biodiversity loss rates for a particular location, scientists record the species richness and its variation over time in that area. In ecology, local abundance is the relative representation of a species in a particular ecosystem; it is usually measured as the number of individuals found per sample. The ratio of the abundance of one species to that of one or more other species living in an ecosystem is called relative species abundance. Both indicators are relevant for computing biodiversity. There are many different biodiversity indexes, investigating different scales and time spans, and biodiversity itself has various scales and subcategories (e.g. phylogenetic diversity, species diversity, genetic diversity, nucleotide diversity). The question of net loss in confined regions is often a matter of debate.
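One widely used index of the kind mentioned above is the Shannon diversity index, H' = −Σ pᵢ ln pᵢ, computed from the relative abundance pᵢ of each species. A minimal sketch with made-up survey counts (the site data are hypothetical):

import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p * ln p) over species proportions;
    higher values indicate greater diversity."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical surveys: individuals counted per species at two sites.
even_site = [25, 25, 25, 25]   # four species, evenly abundant
skewed_site = [85, 5, 5, 5]    # same richness, one dominant species

print(round(shannon_index(even_site), 2))    # 1.39 (= ln 4, the maximum for 4 species)
print(round(shannon_index(skewed_site), 2))  # 0.59 -- lower despite equal richness

Tracking such an index over repeated surveys of the same area is one way to quantify a decline that a raw species count alone would miss.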
Observations by type of life
Wildlife in general
An October 2020 analysis by Swiss Re found that one-fifth of all countries are at risk of ecosystem collapse as the result of anthropogenic habitat destruction and increased wildlife loss. If these losses are not reversed, a total ecosystem collapse could ensue. In 2022, the World Wildlife Fund reported an average population decline of 68% between 1970 and 2016 for 4,400 animal species worldwide, encompassing nearly 21,000 monitored populations.
Terrestrial invertebrates
Insects
Earthworms
Scientists have studied the loss of earthworms in several long-term agronomic trials. They found relative biomass losses of 50–100% (with a mean of 83%), matching or exceeding those reported for other faunal groups. Thus it is clear that earthworms are similarly depleted in the soils of fields used for intensive agriculture. Earthworms play an important role in ecosystem function, helping with biological processing in soil and water and even with greenhouse gas balancing. There are five reasons for the decline of earthworm diversity: "(1) soil degradation and habitat loss, (2) climate change, (3) excessive nutrient and other forms of contamination load, (4) over-exploitation and unsustainable management of soil, and (5) invasive species". Factors like tillage practices and intensive land use decimate the soil and plant roots that earthworms use to create their biomass, which interferes with carbon and nitrogen cycles. Knowledge of earthworm species diversity is quite limited, as fewer than 50% of earthworm species have been described. Sustainable agriculture methods, for example reduced tillage, could help prevent earthworm diversity decline. The Secretariat of the Convention on Biological Diversity is trying to take action and promote the restoration and maintenance of the many diverse species of earthworms.
Amphibians
Wild mammals
Birds
Some pesticides, like insecticides, likely play a role in reducing the populations of specific bird species. According to a study funded by BirdLife International, 51 bird species are critically endangered and eight could be classified as extinct or in danger of extinction. Nearly 30% of bird extinctions are due to hunting and trapping for the exotic pet trade. Deforestation, caused by unsustainable logging and agriculture, could be the next extinction driver, because birds lose their habitat and their food.
Plants
Trees
While plants are essential for human survival, they have not received the same conservation attention as animals. It is estimated that a third of all land plant species are at risk of extinction, and 94% have yet to be evaluated in terms of their conservation status. Because plants exist at the lowest trophic level, their conservation is needed to reduce negative impacts at higher trophic levels. In 2022, scientists warned that a third of tree species are threatened with extinction. This would significantly alter the world's ecosystems because their carbon, water, and nutrient cycles would be affected. Forest areas are degraded by common factors such as logging, fire, and firewood harvesting. The Global Tree Assessment (GTA) has determined that "17,510 (29.9%) tree species are considered threatened with extinction. In addition, there are 142 tree species recorded as Extinct or Extinct in the Wild." Possible solutions can be found in silvicultural methods of forest management that promote tree biodiversity, such as selective logging, thinning or crop tree management, and clear cutting and coppicing. Without such measures, secondary forests can take 50 years to recover the species richness of primary forest, although 80% of that richness may return within 20 years.
Flowering plants
Freshwater species
Freshwater ecosystems such as swamps, deltas, and rivers make up 1% of Earth's surface. They are important because they are home to approximately one-third of vertebrate species. Freshwater species are declining at twice the rate of species that live on land or in the ocean. This rapid loss has already placed 27% of the 29,500 species dependent on fresh water on the IUCN Red List. Global populations of freshwater fish are collapsing due to water pollution and overfishing. Migratory fish populations have declined by 76% since 1970, and large "megafish" populations have fallen by 94%, with 16 species declared extinct in 2020.
Marine species
Marine biodiversity encompasses any living organism that resides in the ocean or in estuaries. By 2018, approximately 240,000 marine species had been documented, but many—estimates range between 178,000 and 10 million oceanic species—remain to be described. It is therefore likely that a number of rare species (not seen for decades in the wild) have already disappeared or are on the brink of extinction, unnoticed. Human activities have a strong and detrimental influence on marine biodiversity. The main drivers of marine species extinction are habitat loss, pollution, invasive species, and overexploitation.
Greater pressure is placed on marine ecosystems near coastal areas because of the human settlements there. Overexploitation has resulted in the extinction of over 25 marine species, including seabirds, marine mammals, algae, and fish. Examples of extinct marine species include Steller's sea cow (Hydrodamalis gigas) and the Caribbean monk seal (Monachus tropicalis). Not all extinctions are caused by humans: for example, in the 1930s, the eelgrass limpet (Lottia alveus) became extinct in the Atlantic after the Zostera marina seagrass population declined upon exposure to a disease. Lottia alveus was severely affected because Zostera marina was its sole habitat.
Causes
The main causes of current biodiversity loss are as follows:
Habitat loss, fragmentation, and degradation; for example, habitat fragmentation for commercial and agricultural uses (specifically monoculture farming)
Land use intensification (and the ensuing land loss/habitat loss); a significant factor in the loss of ecological services through direct effects as well as biodiversity loss
Nutrient pollution and other forms of pollution (air and water pollution)
Overexploitation and unsustainable use (for example, unsustainable fishing methods, overfishing, overconsumption, and human overpopulation)
Invasive species that effectively compete for a niche, replacing indigenous species
Climate change (e.g. extinction risk from climate change, effects of climate change on plant biodiversity)
Jared Diamond describes an "Evil Quartet" of habitat destruction, overkill, introduced species, and secondary extinctions. Edward O. Wilson suggested the acronym HIPPO for the main causes of biodiversity loss: Habitat destruction, Invasive species, Pollution, human over-Population, and Over-harvesting.
Habitat destruction
Habitat loss is, for example, one of the causes of the decline in insect populations.
Urban growth and habitat fragmentation
The direct effects of urban growth on habitat loss are well understood: building construction often results in habitat destruction and fragmentation. This leads to selection for species that are adapted to urban environments. Small habitat patches cannot support the level of genetic or taxonomic diversity they formerly could, and some more sensitive species may become locally extinct. Species abundance is reduced as the remaining habitat becomes smaller and more fragmented; this increases species isolation and forces species toward edge habitats and to adapt to foraging elsewhere. Infrastructure development in Key Biodiversity Areas (KBAs) is a major driver of biodiversity loss, with infrastructure present in roughly 80% of KBAs. Infrastructure development leads to conversion and fragmentation of natural habitat, pollution, and disturbance. There can also be direct harm to animals through collisions with vehicles and structures. These impacts can extend beyond the infrastructure site itself.
Land use intensification
Humans are changing the uses of land in various ways, and each can lead to habitat destruction and biodiversity loss. The 2019 Global Assessment Report on Biodiversity and Ecosystem Services found that industrial agriculture is the primary driver of biodiversity collapse. The UN's Global Biodiversity Outlook 2014 estimated that 70% of the projected loss of terrestrial biodiversity is caused by agricultural use. According to a 2005 publication, "Cultivated systems [...] cover 24% of Earth's surface".
The publication defined cultivated areas as "areas in which at least 30% of the landscape is in croplands, shifting cultivation, confined livestock production, or freshwater aquaculture in any particular year". More than 17,000 species are at risk of losing habitat by 2050 as agriculture continues to expand to meet future food needs (as of 2020). A global shift toward largely plant-based diets would free up land, allowing for the restoration of ecosystems and biodiversity. In the 2010s, over 80% of all global farmland was used to rear animals. As of 2022, 44% of Earth's land area required conservation attention, which may include declaring protected areas and following land-use policies.
Nutrient pollution and other forms of pollution
Air pollution
Air pollution adversely affects biodiversity. Pollutants are emitted into the atmosphere by, for example, the burning of fossil fuels and biomass. Industrial and agricultural activity releases the pollutants sulfur dioxide and nitrogen oxides. Once sulfur dioxide and nitrogen oxides are introduced into the atmosphere, they can react with cloud droplets (cloud condensation nuclei), raindrops, or snowflakes to form sulfuric acid and nitric acid. Through the interaction of water droplets with sulfuric and nitric acids, wet deposition occurs, producing acid rain. A 2009 review studied four air pollutants (sulfur, nitrogen, ozone, and mercury) and several types of ecosystems. Air pollution affects the functioning and biodiversity of terrestrial as well as aquatic ecosystems. For example, "air pollution causes or contributes to acidification of lakes, eutrophication of estuaries and coastal waters, and mercury bioaccumulation in aquatic food webs".
Noise pollution
Noise generated by traffic, ships, vehicles, and aircraft can affect the survivability of wildlife species and can reach otherwise undisturbed habitats. Noise pollution is common in marine ecosystems, affecting at least 55 marine species. One study found that as seismic noise and naval sonar increase in marine ecosystems, cetacean diversity (including whales and dolphins) decreases. Multiple studies have found that fewer fish, such as cod, haddock, rockfish, herring, sand eel, and blue whiting, are seen in areas with seismic noise, with catch rates declining by 40–80%. Noise pollution has also altered avian communities and diversity. Noise can reduce reproductive success, shrink nesting areas, increase stress responses, and reduce species abundance. Noise pollution can also alter the distribution and abundance of prey species, which can in turn affect predator populations.
Pollution from fossil fuel extraction
Fossil fuel extraction and the associated oil and gas pipelines have major impacts on the biodiversity of many biomes through land conversion, habitat loss and degradation, and pollution. An example is the Western Amazon region, where exploitation of fossil fuels has had significant impacts on biodiversity. As of 2018, many protected areas with rich biodiversity lay in areas containing unexploited fossil fuel reserves worth between $3 and $15 trillion; these protected areas may come under threat in the future.
Overexploitation
Continued overexploitation can lead to the destruction of the resource, as it becomes unable to replenish itself. The term applies to natural resources such as water aquifers, grazing pastures and forests, wild medicinal plants, fish stocks, and other wildlife.
Overfishing
A 2019 Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services report found that overfishing is the main driver of mass species extinction in the oceans. Overfishing has reduced fish and marine mammal biomass by 60% since the 1800s, and it is currently pushing over one-third of sharks and rays toward extinction. Many commercial fish species have been overharvested: a 2020 FAO report classified 34% of the fish stocks of the world's marine fisheries as overfished. By 2020, global fish populations had declined 38% since 1970. Many regulatory measures are available for controlling overfishing. These include fishing quotas, bag limits, licensing, closed seasons, size limits, and the creation of marine reserves and other marine protected areas.
Human overpopulation and overconsumption
The world's population numbered nearly 7.6 billion as of mid-2017 and is forecast to peak toward the end of the 21st century at 10–12 billion people. Scholars have argued that population size and growth, along with overconsumption, are significant factors in biodiversity loss and soil degradation. Review articles, including the 2019 IPBES report, have also noted that human population growth and overconsumption are significant drivers of species decline. A 2022 study warned that conservation efforts will continue to fail if the primary drivers of biodiversity loss, including population size and growth, continue to be ignored. Other scientists have criticized the assertion that population growth is a key driver of biodiversity loss. They argue that the main driver is the loss of habitat, caused by "the growth of commodities for export, particularly soybean and oil-palm, primarily for livestock feed or biofuel consumption in higher income economies". Because of the wealth disparities between countries, there is a negative correlation between a country's total population and its per capita footprint; on the other hand, the correlation between a country's GDP and its footprint is strong. The study argues that population as a metric is unhelpful and counterproductive for tackling environmental challenges.
Invasive species
The term invasive is poorly defined and often very subjective. The European Union defines invasive alien species as those outside their natural distribution area that threaten biological diversity. Biotic invasion is considered one of the top five drivers of global biodiversity loss, and it is increasing because of tourism and globalization. This may be particularly true in poorly regulated freshwater systems, though quarantines and ballast water rules have improved the situation. Invasive species may drive local native species to extinction via competitive exclusion, niche displacement, or hybridisation with related native species. Alien invasions may therefore result in extensive changes in the structure, composition, and global distribution of the biota at sites of introduction, leading to the homogenisation of the world's fauna and flora and to biodiversity loss.
Climate change
Climate change is another threat to global biodiversity, although habitat destruction, for example for the expansion of agriculture, is currently a more significant driver of biodiversity loss. A 2021 collaborative report by scientists from the IPBES and the IPCC found that biodiversity loss and climate change must be addressed simultaneously, as they are inextricably linked and have similar effects on human well-being.
In 2022, Frans Timmermans, Vice-President of the European Commission, said that people are less aware of the threat of biodiversity loss than of the threat of climate change. The interaction between climate change and invasive species is complex and not easy to assess. Climate change is likely to favour some invasive species and harm others, but few authors have identified specific consequences of climate change for invasive species. Invasive species and other disturbances have become more common in forests in the last several decades; these tend to be directly or indirectly connected to climate change and have negative consequences for forest ecosystems.
Extinction risks
Impacts
On ecosystems
Biodiversity loss harms the functioning of ecosystems. This in turn affects humans, because the affected ecosystems can no longer provide the same quality of ecosystem services, such as crop pollination, cleaning of air and water, decomposition of waste, and the provision of forest products as well as areas for recreation and tourism. Two key statements of a 2012 comprehensive review of the previous 20 years of research are that "there is now unequivocal evidence that biodiversity loss reduces the efficiency by which ecological communities capture biologically essential resources, produce biomass, decompose and recycle biologically essential nutrients", and that "impacts of diversity loss on ecological processes might be sufficiently large to rival the impacts of many other global drivers of environmental change". Permanent global species loss (extinction) is a more dramatic phenomenon than regional changes in species composition, but even minor changes from a healthy stable state can have a dramatic influence on the food web and the food chain, because reductions in one species can adversely affect the entire chain (coextinction). This can lead to an overall reduction in biodiversity, unless alternative stable states of the ecosystem are possible. For example, a grassland experiment that manipulated plant diversity found that ecosystems with higher biodiversity show greater resistance of their productivity to climate extremes.
On food and agriculture
In 2019, the UN's Food and Agriculture Organization (FAO) produced its first report on The State of the World's Biodiversity for Food and Agriculture, which warned that "Many key components of biodiversity for food and agriculture at genetic, species and ecosystem levels are in decline." The report also said, "Many of the drivers that have negative impacts on BFA (biodiversity for food and agriculture), including overexploitation, overharvesting, pollution, overuse of external inputs, and changes in land and water management, are at least partially caused by inappropriate agricultural practices" and that the "transition to intensive production of a reduced number of species, breeds and varieties, remain major drivers of loss of BFA and ecosystem services." To reduce biodiversity loss related to agricultural practices, the FAO encourages the use of "biodiversity-friendly management practices in crop and livestock production, forestry, fisheries and aquaculture".
On health and medicines
The WHO has analyzed how biodiversity and human health are connected: "Biodiversity and human health, and the respective policies and activities, are interlinked in various ways. First, biodiversity gives rise to health benefits. For example, the variety of species and genotypes provide nutrients and medicines."
The ongoing drivers and effects of biodiversity loss have the potential to lead to future zoonotic disease outbreaks like the COVID-19 pandemic. Medicinal and aromatic plants are widely used in traditional medicine as well as in the cosmetic and food industries. The WHO estimated in 2015 that about "60,000 species are used for their medicinal, nutritional and aromatic properties". There is a global trade in plants for medicinal purposes. Biodiversity contributes to the development of pharmaceuticals: a significant proportion of medicines are derived from natural products, either directly or indirectly, and many of these natural products come from marine ecosystems. However, unregulated and inappropriate over-harvesting (bioprospecting) could potentially lead to overexploitation, ecosystem degradation, and loss of biodiversity. Users and traders harvest plants for traditional medicine either by cultivating them or by collecting them in the wild; in both cases, sustainable management of medicinal resources is important.
Proposed solutions
Scientists are investigating what can be done to address biodiversity loss and climate change together. For both of these crises, there is a need to "conserve enough nature and in the right places". A 2020 study found that "beyond the 15% land area currently protected, 35% of land area is needed to conserve additional sites of particular importance for biodiversity and stabilize the climate". Additional measures for protecting biodiversity, beyond environmental protection alone, are important. Such measures include addressing the drivers of land use change, increasing efficiency in agriculture, and reducing the need for animal agriculture; the latter could be achieved by increasing the share of plant-based diets.
Convention on Biological Diversity
Many governments have conserved portions of their territories under the Convention on Biological Diversity (CBD), a multilateral treaty signed in 1992–1993. The 20 Aichi Biodiversity Targets, part of the CBD's Strategic Plan 2011–2020, were published in 2010. Aichi Target 11 aimed to protect 17% of terrestrial and inland water areas and 10% of coastal and marine areas by 2020. Of the 20 biodiversity goals laid out by the Aichi Biodiversity Targets in 2010, only six were partially achieved by 2020. The 2020 CBD report highlighted that if the status quo does not change, biodiversity will continue to decline because of "currently unsustainable patterns of production and consumption, population growth and technological developments". The report also singled out Australia, Brazil, Cameroon, and the Galapagos Islands (Ecuador), each of which had lost an animal species to extinction in the previous ten years. Following this, the leaders of 64 nations and the European Union pledged to halt environmental degradation and restore the natural world. The pledge was not signed by leaders of some of the world's biggest polluters, namely China, India, Russia, Brazil, and the United States. Some experts contend that the United States' refusal to ratify the Convention on Biological Diversity is harming global efforts to halt the extinction crisis. Scientists say that even if the targets for 2020 had been met, no substantial reduction of extinction rates would likely have resulted. Others have raised concerns that the Convention on Biological Diversity does not go far enough, arguing that the goal should be zero extinctions by 2050, along with cutting the impact of unsustainable food production on nature by half.
That the targets are not legally binding has also drawn criticism. In December 2022, every country except the United States and the Holy See signed on to the Kunming–Montreal Global Biodiversity Framework at the 2022 United Nations Biodiversity Conference. The framework calls for protecting 30% of land and oceans by 2030 ("30 by 30") and sets out 22 other targets intended to reduce biodiversity loss. At the time the agreement was signed, only 17% of land territory and 10% of ocean territory were protected. The agreement includes protecting the rights of Indigenous peoples and changing current subsidy policy to one better suited to biodiversity protection, but it takes a step backward from the Aichi Targets in protecting species from extinction. Critics said the agreement does not go far enough to protect biodiversity and that the process was rushed.
Other international and national action
In 2019, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) published the Global Assessment Report on Biodiversity and Ecosystem Services, which said that up to a million plant and animal species are facing extinction because of human activity. The IPBES is an international organization with a role similar to that of the Intergovernmental Panel on Climate Change (IPCC), except that it focuses on biodiversity and ecosystem services rather than climate change. The United Nations' Sustainable Development Goal 15 (SDG 15), "Life on Land", includes biodiversity targets. Its fifth target is: "Take urgent and significant action to reduce the degradation of natural habitats, halt the loss of biodiversity and, by 2020, protect and prevent the extinction of threatened species." This target has one indicator: the Red List Index. Nearly three-quarters of bird species, two-thirds of mammals, and more than half of hard corals have been recorded at World Heritage Sites, even though these sites cover less than 1% of the planet. Countries with World Heritage Sites can include them in their national biodiversity strategies and action plans.
Biology and health sciences
Ecology
Biology
55865352
https://en.wikipedia.org/wiki/Seismic%20intensity%20scales
Seismic intensity scales
Seismic intensity scales categorize the intensity or severity of ground shaking (quaking) at a given location, such as resulting from an earthquake. They are distinguished from seismic magnitude scales, which measure the magnitude or overall strength of an earthquake, which may or may not cause perceptible shaking. Intensity scales are based on the observed effects of the shaking, such as the degree to which people or animals were alarmed, and the extent and severity of damage to different kinds of structures or natural features. The maximal intensity observed, and the extent of the area where shaking was felt (see isoseismal map, below), can be used to estimate the location and magnitude of the source earthquake; this is especially useful for historical earthquakes where there is no instrumental record. Ground shaking Ground shaking can be caused in various ways (volcanic tremors, avalanches, large explosions, etc.), but shaking intense enough to cause damage is usually due to ruptures of the Earth's crust known as earthquakes. The intensity of shaking depends on several factors: The "size" or strength of the source event, such as measured by various seismic magnitude scales. The type of seismic wave generated, and its orientation. The depth of the event. The distance from the source event. Site response due to local geology. Site response is especially important as certain conditions, such as unconsolidated sediments in a basin, can amplify ground motions as much as ten times. Where an earthquake is not recorded on seismographs, an isoseismal map showing the intensities felt at different areas can be used to estimate the location and magnitude of the quake. Such maps are also useful for estimating the shaking intensity, and thereby the likely level of damage, to be expected from a future earthquake of similar magnitude. In Japan this kind of information is used when an earthquake occurs to anticipate the severity of damage to be expected in different areas. The intensity of local ground-shaking depends on several factors besides the magnitude of the earthquake, one of the most important being soil conditions. For instance, thick layers of soft soil (such as fill) can amplify seismic waves, often at a considerable distance from the source. At the same time, sedimentary basins will often resonate, increasing the duration of shaking. This is why, in the 1989 Loma Prieta earthquake, the Marina district of San Francisco was one of the most damaged areas, though it was far from the epicenter. Geological structures were also significant, such as where seismic waves passing under the south end of San Francisco Bay reflected off the base of the Earth's crust towards San Francisco and Oakland. A similar effect channeled seismic waves between the other major faults in the area. History The first simple classification of earthquake intensity was devised by Domenico Pignataro in the 1780s. The first recognizable intensity scale in the modern sense of the word was drawn up by the German mathematician Peter Caspar Nikolaus Egen in 1828. However, the first modern mapping of earthquake intensity was made by Robert Mallet, an Irish engineer who was sent by the Royal Society of London to research the December 1857 Basilicata earthquake, also known as the Great Neapolitan Earthquake of 1857. The first widely adopted intensity scale, the 10-grade Rossi–Forel scale, was introduced in the late 19th century. 
In 1902, the Italian seismologist Giuseppe Mercalli created the Mercalli Scale, a new 12-grade scale. Significant improvements were achieved, mainly by Charles Francis Richter during the 1950s, when (1) a correlation was found between seismic intensity and peak ground acceleration (PGA; see the equation Richter found for California), and (2) a definition of the strength of buildings and their subdivision into groups (called types of buildings) was made. The seismic intensity was then evaluated based on the degree of damage to a given type of structure. That gave the Mercalli Scale, as well as the European MSK-64 scale that followed, a quantitative element representing the vulnerability of the building type. Since then, that scale has been called the Modified Mercalli intensity scale (MMS), and evaluations of seismic intensity are more reliable. In addition, more intensity scales have been developed and are used in different parts of the world.
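Richter's California correlation is commonly quoted as log10(a) = I/3 - 1/2, where a is the peak ground acceleration in cm/s² (Gal) and I the intensity. The following Python sketch, an illustration of that relation rather than an engineering tool (the function names are ours), converts between the two quantities:

import math

def intensity_from_pga(pga_gal: float) -> float:
    # Richter's California correlation: log10(a) = I/3 - 1/2,
    # with peak ground acceleration a in cm/s^2 (Gal).
    return 3.0 * (math.log10(pga_gal) + 0.5)

def pga_from_intensity(intensity: float) -> float:
    # Inverse relation: a = 10 ** (I/3 - 1/2).
    return 10.0 ** (intensity / 3.0 - 0.5)

print(round(intensity_from_pga(98.0), 1))  # shaking of ~0.1 g maps to ~7.5 on this relation

Modern intensity-PGA relations (such as those behind present-day shaking maps) are regression-based and region-specific, so this single equation should be read only as the historical starting point the text describes.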
Physical sciences
Seismology
Earth science
60705059
https://en.wikipedia.org/wiki/Paleotempestology
Paleotempestology
Paleotempestology is the study of past tropical cyclone activity by means of geological proxies as well as historical documentary records. The term was coined by American meteorologist Kerry Emanuel. The usual approach in paleotempestology is the identification of deposits left by storms. Most commonly, these are overwash deposits in waterbodies close to the coast; other means are oxygen isotope ratio variations caused by tropical cyclone rainfall in trees or speleothems (cave deposits), and identifying beach ridges kicked up by storm waves. The occurrence rate of tropical cyclones, and sometimes also their intensity – typically the stronger events are the most easily recognizable ones – can then be inferred from these deposits by comparing them to deposits left by historical events. Paleotempestological research has shown that on the coast of the Gulf of Mexico and in Australia, the occurrence rate of intense tropical cyclones is about once every few centuries, and there are long-term variations in occurrence which are caused, for example, by shifts in their paths. Common problems in paleotempestology are confounding factors such as tsunami-generated deposits, and the fact that only some parts of the world have been investigated. Definition and rationale Paleotempestology is the estimation of tropical cyclone activity with the help of proxy data. The name was coined by Kerry Emanuel of the Massachusetts Institute of Technology; the field has seen increased activity since the 1990s, and the first studies were carried out on the East Coast of the United States. The realisation that one cannot rely solely on historical records to infer past storm activity was a major driving force for the development of paleotempestology. The historical record in many places is too short (one century at most) to properly determine the hazard produced by tropical cyclones, especially the rare very intense ones, which at times are undersampled by historical records; in the United States, for example, only about 150 years of record are available, and only a small number of hurricanes classified as category 4 or 5 – the most destructive ones on the Saffir-Simpson scale – have come ashore, making it difficult to estimate the hazard level. Such records may also not be representative for future weather patterns. Information about past tropical cyclone occurrences can be used to constrain how their occurrences may change in the future, or how they respond to large-scale climate modes, such as sea surface temperature changes, or to check the accuracy of climate models. In general, the origin and behaviour of tropical cyclone systems are poorly understood, and there is concern that human-caused global warming will increase the intensity of tropical cyclones and the frequency of strong events by increasing sea surface temperatures. Techniques In general, paleotempestology is a complex field of science that overlaps with other disciplines like climatology and coastal geomorphology. A number of techniques have been used to estimate the past hazards from tropical cyclones. Many of these techniques have also been applied to studying extratropical storms, although research in this field is less advanced than on tropical cyclones. Overwash deposits Overwash deposits in coastal atolls, coastal lakes, marshes or reef flats or even archeological sites are the most important paleoclimatological evidence of tropical cyclone strikes. 
When storms hit these areas, currents and waves can overtop barriers, erode these and other beach structures, and lay down deposits in the water bodies behind barriers. Isolated breaches and especially widespread overtopping of coastal barriers during storms can generate fan-like, layered deposits behind the barrier. Individual layers can be correlated to particular storms in favourable circumstances; in addition, they are often separated by a clear boundary from earlier sediments. Such deposits have been observed in North Carolina after Hurricane Isabel in 2003, for example. The intensity and impacts of the tropical cyclone can also be inferred from overwash deposits by comparing the deposits to those formed by known storms and analyzing their lithology (their physical characteristics). Additionally, thicker sediment layers usually correspond to stronger storm systems. This procedure is not always clear-cut, however. Several techniques have been applied to separate out storm overwash deposits from other sediments: Compared to the normal sedimentation in such places, tropical cyclone deposits are coarser and can be detected with sieving, laser-dependent technologies or x-ray fluorescence techniques. In sediment cores, deposits formed by tropical cyclones may be denser due to a larger proportion of mineral content associated with overwashes, which can be detected with x-ray fluorescence techniques. They may contain less organic matter than deposits formed through steady sedimentation, which can be detected by combusting the deposits and measuring the resulting mass loss. This and sediment grain sizes are the most common research tools for sediment cores. A little-used technique is the analysis of organic material in sediment cores; there are characteristic changes in carbon and nitrogen isotope ratios after flooding and the entry of seawater, including a general increase in biological productivity. Overwash deposits can contain elements that do not normally occur at the site, such as strontium; this can be detected with x-ray fluorescence techniques. Overwash deposits usually have brighter colors than those generated during steady sedimentation and different quantities of coarse fragments. Storm surges can carry into such deposits organisms that do not normally occur in these settings. Droughts or the entry of water unrelated to a storm can confound such records, however. Thus, this method is often supplemented with other proxies. The most common organisms employed here are foraminifera, although bivalves, diatoms, dinoflagellates, ostracods and pollen have also been used. Marine foraminifera, however, are not always present in deposits formed by historical storms. Generally, sites suitable for obtaining paleotempestological records are not found along the entire length of the coastline, and depending on the properties of the site, such as vegetation cover, they might only track storms approaching from a certain direction. Prerequisites for successful correlation of overwash deposits to tropical cyclones are: The absence of tsunamis in the region, as their deposits usually cannot be easily distinguished from storm deposits. The investigation area should have low biological activity, as bioturbation can otherwise erase evidence of storm deposits. Low biological activity can be found in sites with high salt or low oxygen concentrations. A high geomorphic stability of the site. High sedimentation rates can facilitate the preservation of storm deposits. 
Tides can destroy layered storm deposits; thus non-tidal waterbodies are ideally used. In tidally active waterbodies, correlations involving various sediment cores can be applied. Dating and intensity determination Various dating techniques can then be used to produce a chronology of tropical cyclone strikes at a given location and thus a recurrence rate; for example, at Lake Shelby in Alabama a return period of once every 318 years was determined. The storms in the Lake Shelby record must have been more intense than Hurricane Ivan, which made landfall in the region in 2004 but did not leave a deposit; based on geological considerations, a minimum windspeed for the storms recorded there has been estimated. For dating purposes, radiometric dating procedures involving carbon-14, cesium-137, and lead-210 are most commonly used, often in combination. Uranium series dating, optically stimulated luminescence, and correlations to human land use can also be used in some places. Beach ridges Beach ridges and cheniers form when storm surges, storm waves or tides deposit debris in ridges, with one ridge typically corresponding to one storm. Ridges can be formed by coral rubble where coral reefs lie at the coast, and can contain complicated layer structures, shells, pumice, and gravel. A known example is the ridge that Cyclone Bebe generated on Funafuti atoll in 1971. Beach ridges are common on the deltaic shores of China, and are indicative of increased typhoon activity. They have also been found on the Australian coast facing the Great Barrier Reef, where they are formed from reworked corals. The height of each ridge appears to correlate with the intensity of the storm that produced it, and thus the intensity of the forming storm can be inferred by numerical modelling and comparison to known storms and known storm surges. Ridges tend to be older the farther inland they are; they can also be dated through optically stimulated luminescence and radiocarbon dating. In addition, no tsunami-generated beach ridges have been observed, which is useful because tsunamis are otherwise important confounding factors in paleotempestology. Wind-driven erosion or accumulation can alter the elevation of such ridges, and, in addition, the same ridge can be formed by more than one storm event, as has been observed in Australia. Beach ridges can also shift around through non-storm processes after their formation and can form through processes other than tropical cyclones. Sedimentary texture can be used to infer the origin of a ridge from storm surges. Isotope ratios Precipitation in tropical cyclones has a characteristic isotope composition with a depletion of heavy oxygen isotopes; carbon and nitrogen isotope data have also been used to infer tropical cyclone activity. Corals can store oxygen isotope ratios which in turn reflect water temperatures, precipitation and evaporation; these in turn can be related to tropical cyclone activity. Fish otoliths and bivalves can also store such records, as can trees, where the oxygen isotope ratios of precipitation are reflected in the cellulose and can be inferred with the help of tree rings. However, confounding factors like natural variation and soil properties also influence the oxygen isotope ratios of tree cellulose. For these reasons, only the frequency of storms can be reliably estimated from tree ring isotopic records, not their intensity. 
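The δ18O notation underlying these proxies expresses a sample's 18O/16O ratio as a per mil deviation from a reference standard. A minimal Python sketch, assuming the VSMOW standard ratio (about 0.0020052) as the reference, illustrates the computation and the sign convention behind the "depleted" excursions described above:

def delta_o18(r_sample: float, r_standard: float = 0.0020052) -> float:
    # delta-18-O in per mil: deviation of the sample's 18O/16O ratio
    # from the reference standard (default: VSMOW).
    return (r_sample / r_standard - 1.0) * 1000.0

# Tropical cyclone rain is depleted in 18O, so material recording it
# (tree cellulose, speleothem calcite) shows a negative excursion:
print(round(delta_o18(0.0019852), 2))  # about -10 per mil

The function and its example ratio are illustrative only; real studies report δ18O relative to material-specific standards (e.g. VPDB for carbonates) and require careful calibration.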
Speleothems, deposits formed in caves through the dissolution and redeposition of dolomite and limestone, can store isotope signatures associated with tropical cyclones, especially in fast-growing speleothems, areas with thin soils, and speleothems which have undergone little alteration. Such deposits have a high temporal resolution and are also protected from many confounding factors, although the extraction of annual layers has become possible only recently, with a two-week resolution (two separate layers correlated to two hurricanes that struck two weeks apart) achieved in one case. However, the suitability of speleothems depends on the characteristics of the cave they are found in; caves that flood frequently may have their speleothems eroded or otherwise damaged, for example, making them less suitable for paleotempestology research. Caves where speleothems form mainly during the offseason are also likely to miss tropical cyclones. Very old records can be obtained from oxygen isotope ratios in rocks. Other techniques Historical documents such as county gazettes in China, diaries, logbooks of travellers, official histories and old newspapers can contain information on tropical cyclones. In China such records go back over a millennium, while elsewhere they are usually confined to the last 130 years. Such historical records, however, are often ambiguous or unclear; they only record landfalling storms and sometimes mistake non-tropical systems or intense convective storms for tropical cyclones. The frequency of shipwrecks has been used to infer past tropical cyclone occurrence, as has been done with a database of shipwrecks that the Spaniards suffered in the Caribbean and with wrecks in the Paracel Islands in the South China Sea. Aside from oxygen isotope ratios, tree rings can also record information on storm-caused plant damage or vegetation changes, such as thin tree rings due to storm-induced damage to a tree canopy, or saltwater intrusion and the resulting slowdown in tree growth. The term "dendrotempestology" is used in this context. The tree ring approach tends to measure rainy storms rather than strong storms, and cannot always distinguish tropical cyclones from other weather systems. Speleothems can also store trace elements that signal tropical cyclone activity, as well as mud layers formed by storm-induced cave flooding. Droughts, on the other hand, can cause groundwater levels to drop enough that subsequent storms cannot induce flooding and thus fail to leave a record, as has been noted in Yucatan. Other techniques: Rhythmites in river mouths. These are formed when storms resuspend sediments; the sediments fall out as the storm wanes and form the deposits, especially in places with high sediment supplies. Carbon isotope and chemical data can be used to distinguish them from non-storm sedimentation. Sand dunes on coastlines are influenced by storm surge height, and sand splays can be formed when sand is swept off these dunes by storm surges and waves; such deposits, however, are better studied in the context of tsunamis, and there is no clear way to distinguish between tsunami- and storm-formed splays. Hummocky deposits in shallow seas, known as tempestites. The mechanics of their formation are still controversial, and such deposits are prone to reworking, which wipes out the traces of a storm. Boulders and coral blocks can be moved by storms, and such moved blocks can potentially be dated to obtain the age of the storm, if certain conditions are met. 
They can be correlated to storms with the help of oxygen isotope excursions, for example. This technique has also been applied to islands formed by storm-moved blocks. Wave-driven erosion during storms can create scarps which can be dated with the assistance of optically stimulated luminescence. Such scarps, however, tend to be altered over time – later storms can erode away older scarps, for example – and their preservation and formation is often strongly dependent on the local geology. Other techniques involve the identification of freshwater flood deposits left by storms, such as humic acid and other evidence in corals, a lack of bromine – which is common in marine sediments – in flood-related deposits, and oyster bed kills caused by sediments suspended by storms (oyster kills, however, can also be caused by non-storm phenomena). Luminescence of coral deposits has been used to infer tropical cyclone activity. Tridacna shells record trace elements on a daily or hourly basis, as well as growth impairments caused by tropical cyclones. Structures in submarine karst areas, like blue holes and submarine caves, accumulate sediments as there is little erosion and currents transport sediment into the depressions. During storms, larger quantities of often coarser sediments are transported, forming distinct storm layers. Identifying storm layers among the normal sedimentation can be a challenge. Timespans A database of tropical cyclones going back to 6,000 BC has been compiled for the western North Atlantic Ocean. In the Gulf of Mexico, records go back five millennia, but only a few typhoon records go back 5,000–6,000 years. In general, tropical cyclone records do not go farther back than 5,000–6,000 years ago, when the Holocene sea level rise levelled off; tropical cyclone deposits formed during sea level lowstands were likely reworked during sea level rise. Only tentative evidence exists of deposits from the last interglacial. Tempestite deposits and oxygen isotope ratios in much older rocks have also been used to infer the existence of tropical cyclone activity as far back as the Jurassic. Results Paleotempestological information has been used by the insurance industry in risk analysis in order to set insurance rates. The industry has also funded paleotempestological research. Paleotempestological information is also of interest to archeologists, ecologists, and forest and water resource managers. Recurrence rates The recurrence rate, the time gap between storms, is an important metric used to estimate tropical cyclone risk, and it can be determined by paleotempestological research. In the Gulf of Mexico, catastrophic hurricane strikes at given locations occur about once every 350 years over the last 3,800 years, or about a 0.39%–0.48% annual frequency at any given site, with a recurrence rate of 300 years or 0.33% annual probability at sites in the Caribbean and Gulf of Mexico; storms of category 3 or greater occur at a rate of 0.1–3.9 per century in the northern Gulf of Mexico. Elsewhere, tropical cyclones with intensities of category 4 or more occur about every 350 years in the Pearl River Delta (China), one storm every 100–150 years at Funafuti and a similar rate in French Polynesia, one category 3 or stronger every 471 years in St. 
Catherines Island (Georgia), 0.3% each year for an intense storm in eastern Hainan, one storm every 140–180 years in Nicaragua, one intense storm every 200–300 years in the Great Barrier Reef – formerly their recurrence rate was estimated to be one strong event every few millennia – and one storm of category 2–4 intensity every 190–270 years at Shark Bay in Western Australia. Steady rates have been found for the Gulf of Mexico and the Coral Sea for timespans of several millennia. However, it has also been found that the occurrence rates of tropical cyclones measured with instrumental data over historical time can differ significantly from the actual occurrence rates. In the past, tropical cyclones were far more frequent in the Great Barrier Reef and the northern Gulf of Mexico than today; in Apalachee Bay, strong storms occur every 40 years, not every 400 years as documented historically. Serious storms in New York occurred twice in 300 years, not once every millennium or less. In general, the area of Australia appears to be unusually inactive in recent times by the standards of the past 550–1500 years, and the historical record underestimates the incidence of strong storms in Northeastern Australia. Long term fluctuations Long-term variations of tropical cyclone activity have also been found. The Gulf of Mexico saw increased activity between 3,800 and 1,000 years ago, with a fivefold increase of category 4–5 hurricane activity, and activity at St. Catherines Island and Wassaw Island was also higher between 2,000 and 1,100 years ago. This appears to have been a stage of increased tropical cyclone activity spanning the region from New York to Puerto Rico, while the last 1,000 years have been inactive both there and on the Gulf Coast. Before 1400 AD, the Caribbean and the Gulf of Mexico were active while the East Coast of the United States was inactive, followed by a reversal that lasted until 1675 AD; in an alternative interpretation, the US Atlantic coast and the Caribbean saw low activity between 950 AD and 1700 with a sudden increase around 1700. It is unclear whether Atlantic hurricane activity is modulated more regionally or basin-wide. Such fluctuations appear to mainly concern strong tropical cyclone systems, at least in the Atlantic; weaker systems have a steadier pattern of activity. Rapid fluctuations over short timespans have also been observed. In the Atlantic Ocean, the so-called "Bermuda High" hypothesis stipulates that changes in the position of this anticyclone can cause storm paths to alternate between landfalls on the East Coast and on the Gulf Coast or Nicaragua. Paleotempestological data support this theory, although additional findings on Long Island and Puerto Rico have demonstrated that storm frequency is more complex, as active periods appear to correlate between the three sites. A southward shift of the High has been inferred to have occurred 3,000–1,000 years ago, and has been linked with the "hurricane hyperactivity" period in the Gulf of Mexico between 3,400 and 1,000 years ago. Conversely, a decrease in hurricane activity is recorded after the mid-millennium period, and after 1100 AD the Atlantic changed from a pattern of widespread activity to a more geographically confined one. Between 1100 and 1450 the Bahamas and the Florida Gulf Coast were frequently struck, while between 1450 and 1650 activity was higher in New England. 
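Recurrence rates like those above translate directly into annual strike probabilities and, under the common simplifying assumption that strikes are independent in time (a Poisson process), into the chance of at least one strike within a planning window. A minimal Python sketch using the Lake Shelby figure quoted earlier:

import math

def annual_probability(return_period_years: float) -> float:
    # Average annual strike probability implied by a recurrence rate.
    return 1.0 / return_period_years

def prob_at_least_one(return_period_years: float, window_years: float) -> float:
    # Chance of one or more strikes in a window, assuming a Poisson
    # process with the given long-term rate.
    return 1.0 - math.exp(-window_years / return_period_years)

print(round(annual_probability(318) * 100, 2))      # ~0.31% per year
print(round(prob_at_least_one(318, 100) * 100, 1))  # ~27% chance per century

The independence assumption is exactly what the long-term fluctuations discussed in this section call into question, which is why recurrence rates derived from a single epoch must be applied with caution.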
Furthermore, a tendency toward a more northerly storm track may be associated with a strong North Atlantic Oscillation, while the Neoglacial cooling is associated with a southward shift. In East Asia, high activity in the South China Sea and the southern parts of the basin coincides with low activity in Japan and the mid-latitudes, and vice versa. Role of climate modes The influence of natural trends on tropical cyclone activity has been recognized in paleotempestology records, such as a correlation of Atlantic hurricane tracks and activity with the status of the ITCZ; the position of the Loop Current (for Gulf of Mexico hurricanes); El Niño-Southern Oscillation activity; the North Atlantic Oscillation both in East Asia and the Atlantic; sea surface temperatures and the strength of the West African Monsoon; ENSO activity and Sahara dust with East Asian typhoons; and Australian cyclone activity and the Pacific Decadal Oscillation. Increased insolation – whether from solar activity or from orbital variations – has been found to be detrimental to tropical cyclone activity in some regions, but not in the northeastern Gulf of Mexico. In the first millennium AD, warmer sea surface temperatures in the Atlantic, as well as more restricted anomalies, may have been responsible for stronger regional hurricane activity. The climate mode dependency of tropical cyclone activity may be more pronounced in temperate regions, where tropical cyclones find less favourable conditions. Among the known climate modes that influence tropical cyclone activity in paleotempestological records are ENSO phase variations, which influence tropical cyclone activity in Australia and the Atlantic, but also their paths, as has been noted for typhoons. More general global correlations have been found, such as a negative correlation between tropical cyclone activity in Japan on the one hand and in the North Atlantic, Gulf of Thailand and South China on the other, and correlations between the Atlantic and Australia, and between Australia and French Polynesia. Influence of long-term temperature variations The effects of general climate variations have also been found. Hurricane and typhoon tracks tend to shift north (e.g. Amur Bay) during warm periods and south (e.g. South China) during cold periods, patterns that might be mediated by shifts in the subtropical anticyclones. This pattern (northward shift with warming) has been observed as a consequence of human-induced global warming and the end of the Little Ice Age, and in reverse after volcanic eruptions (southward shift with cooling); some volcanic eruptions have been linked to decreased hurricane activity, although this observation is not universal. The Dark Ages Cold Period has been linked to decreased activity off Belize. Initially the Medieval Climate Anomaly featured increased activity across the Atlantic, but later activity decreased along the US East Coast. During the interval from 1350 to the present, which includes the Little Ice Age, there were more but weaker storms in the Gulf of Mexico, while hurricane activity did not decrease in western Long Island. Colder waters may have impeded tropical cyclone activity in the Gulf of Mexico during the Little Ice Age. Increased hurricane activity during the last 300 years in the Caribbean may also correlate to the Little Ice Age. The Little Ice Age may have been accompanied by more but weaker storms in the South China Sea relative to preceding or following periods, leading to increased ship loss rates. 
The response of tropical cyclones to future global warming is of great interest. The Holocene Climatic Optimum did not induce increased tropical cyclone strikes in Queensland, and phases of higher hurricane activity on the Gulf Coast are not associated with global warming; however, warming has been correlated with typhoon activity in the Gulf of Thailand, marine warming with typhoon activity in the South China Sea, and warmth with increased hurricane activity in Belize (which rose during the Medieval Warm Period) and with activity during the Mesozoic, when carbon dioxide caused warming episodes such as the Toarcian anoxic event. After-effects of tropical cyclones A correlation between hurricane strikes and subsequent wildfire activity and vegetation changes has been noted in the Alabamian and Cuban paleotempestological records. On St. Catherines Island, cultural activity ceased at the time of increased storm activity, and both the Taino settlement of the Bahamas and the Polynesian expansion across the Pacific may have been correlated with decreased tropical cyclone activity. Tropical cyclone-induced alterations in oxygen isotope ratios may mask isotope ratio variations caused by other climate phenomena, which may thus be misinterpreted. On the other hand, the Classic Maya collapse may or may not have coincided with, and been caused by, a decrease in tropical cyclone activity. Tropical cyclones are important for preventing droughts in the southeastern US. Paleotempestology has found evidence that the Kamikaze typhoons that impeded the Mongol invasions of Japan did, in fact, exist. Other patterns Sites in the Bahamas show more strong storms in the northern Bahamas than the southern ones, presumably because storms approaching the southern Bahamas have previously passed over the Greater Antilles and lost much of their intensity there. Atmospheric conditions favourable for tropical cyclone activity in the "main development region" of the Atlantic are correlated to unfavourable conditions along the East Coast. The anti-correlation of Gulf of Mexico and Bahamas activity with US East Coast activity may be due to active hurricane seasons – which tend to increase storm activity in the former – being accompanied by unfavourable climatological conditions along the East Coast. Problems Paleotempestological reconstructions are subject to a number of limitations, including the availability of sites suited to obtaining paleotempestological records; changes in the hydrological properties of a site, due for example to sea level rise, which increase its sensitivity to weaker storms; and "false positives" caused by, for example, floods unrelated to tropical cyclones, sediment winnowing, wind-driven transport, tides, tsunamis, bioturbation and non-tropical storms such as nor'easters or winter storms (though the latter usually produce lower surges). In particular, tsunamis are a problem for paleotempestological studies in the Indian and Pacific Oceans; one technique that has been used to differentiate the two is the identification of traces of runoff, which occurs during storms but not during tsunamis. Coastal paleotempestology records are based on storm surge, and do not always reflect wind speeds, e.g. in large and slow-moving storms. Not all of the world has been investigated with paleotempestological methods; among the places thus researched are Belize, the Carolinas of North America, the northern coasts of the Gulf of Mexico, the northeastern United States, and (to a lesser extent) the South Pacific islands and tropical Australia. 
Conversely, China, Cuba, Florida, Hispaniola, Honduras, the Lesser Antilles and North America north of Canada are poorly researched. The presence of research institutions active in paleotempestology, of suitable sites for paleotempestological research, and of tropical cyclone landfalls may influence whether a given location is researched or not. In the Atlantic Ocean, research has been concentrated on regions where hurricanes are common rather than more marginal areas. Paleotempestological records mostly cover activity during the Holocene and tend to record mainly catastrophic storms, as these are the ones most likely to leave evidence. In addition, there has been little effort to compile comprehensive databases of paleotempestological data or to attempt regional reconstructions from local results. Different sites have different intensity thresholds and thus capture different storm populations, and the same layer can be produced by a weaker storm making landfall closer to the site or by a stronger storm making landfall at a larger distance. Also, paleotempestological records, especially overwash records in marshes, are often grossly incomplete, with questionable geochronology. Deposition mechanisms are poorly documented, and it is often not clear how to identify storm deposits. The magnitude of overwash deposits is fundamentally a function of storm surge height, which, however, is not a straightforward function of storm intensity. Overwash deposits are regulated by the height of the overwashed barrier, and there is no expectation that it will remain stable over time; tropical cyclones themselves have been observed eroding such barriers, and such barrier height decreases (e.g. through storm erosion or sea level rise) may induce a spurious increase of tropical cyclone deposits over time. Successive overwash deposits can be difficult to distinguish, and they are easily eroded by subsequent storms. Storm deposits can vary strongly even a short distance from the landfall point, even over a few tens of metres, and changes in tropical cyclone activity recorded at one site might simply reflect the stochastic nature of tropical cyclone landfalls. In particular, in core tropical cyclone regions, weather variations rather than large-scale climate modes may control tropical cyclone activity. Application to non-tropical storms Paleotempestological research has been mostly carried out in low-latitude regions, but research into past storm activity has also been conducted in the British Isles, France and the Mediterranean. Increases in storm activity on the European Atlantic coast have been noted for AD 1350–1650, AD 250–850, 950–550 BC, 1550–1350 BC, 3550–3150 BC, and 5750–5150 BC. In southern France, a recurrence rate of 0.2% per year of catastrophic storms has been inferred for the last 2,000 years. Storm records indicate increased storm activity during colder periods such as the Little Ice Age, the Dark Ages Cold Period and the Iron Age Cold Epoch. During cold periods, increased temperature gradients between the polar and low-latitude regions increase baroclinic storm activity. Changes in the North Atlantic Oscillation may also play a role. During the last millennium, the cross-referencing of sedimentological data with historical archives in Western Europe has also highlighted the intense storms of 1351–1352, 1469, 1645, 1711 and 1751, which caused severe damage and long-lasting flooding along much of Europe's coastline.
Physical sciences
Paleoclimate
Earth science
65681263
https://en.wikipedia.org/wiki/Repeating%20firearm
Repeating firearm
A repeating firearm or repeater is any firearm (either a handgun or long gun) that is designed for multiple, repeated firings before the gun has to be reloaded with new ammunition. Unlike single-shot firearms, which can only hold and fire a single round of ammunition, a repeating firearm can store multiple cartridges inside a magazine (as in pistols, rifles, or shotguns), a cylinder (as in revolvers), or a belt (as in machine guns), and uses a moving action to manipulate each cartridge into and out of the battery position (within the chamber and in alignment with the bore). This allows the weapon to be discharged repeatedly in relatively quick succession before manual reloading is needed. Typically the term "repeaters" refers to the more ubiquitous single-barreled variants. Multiple-barrel firearms such as derringers, pepperbox guns, double-barreled shotguns/rifles, combination guns, and volley guns can also hold and fire more than one cartridge (one in each chamber of every barrel) before needing to be reloaded, but they do not use magazines for ammunition storage and also lack any moving action to facilitate ammunition feeding, which makes them technically just bundled assemblies of multiple single-shot barrels fired in succession and/or simultaneously; they are therefore not considered true repeating firearms despite their functional resemblance. In contrast, rotary-barrel firearms (e.g. Gatling guns), though also multi-barreled, do use belts and/or magazines with moving actions for feeding ammunition, which allows each barrel to fire repeatedly just like any single-barreled repeater, and therefore they still qualify as a type of repeating firearm from a technical viewpoint. Although repeating flintlock breechloading firearms (e.g. the Lorenzoni repeater, Cookson repeater, and Kalthoff repeater) had been invented as early as the 17th century, the first repeating firearms to receive widespread use were revolvers and lever-action repeating rifles in the latter half of the 19th century. These were a significant improvement over the preceding single-shot breechloading guns, as they allowed a much greater rate of fire as well as a longer interval between reloads for more sustained firing, and the widespread use of metallic cartridges also made reloading these weapons quicker and more convenient. Revolvers became very popular sidearms after their introduction by the Colt's Patent Firearms Manufacturing Company in the mid-1830s, and repeating rifles saw use in the early 1860s during the American Civil War. Repeating pistols were first invented during the 1880s and became widely adopted in the early 20th century, with important design contributions from inventors such as John Browning and Georg Luger. The first repeating gun to see military service was actually not a firearm but an airgun. The Girardoni air rifle, designed by Italian inventor Bartolomeo Girardoni circa 1779 and more famously associated with the Lewis and Clark Expedition into the western region of North America during the early 19th century, was one of the first guns to make use of a tubular magazine. Early repeaters Multiple-barrel firearm Revolver (15th century) Superposed load (1558) Volley gun (1570s) Breechloader (16th century) Kalthoff repeater (about 1630) Cookson repeater (about 1650) Blowback and Recoil operation (1663) Chelembron system (1668) Lagatz rifle: a modification of the Lorenzoni System, designed by Danzig gunsmith Daniel Lagatz around the year 1700. 
Puckle gun (1718) Pepper-box (1739) Harmonica gun (1742) Fafting/Fasting rifle: In 1774 a rifle was invented by a Norwegian or Danish colonel named Fafting or Fasting, capable of firing 18 to 20 shots a minute and of being used as an ordinary rifle by taking off a spring-loaded container attached to the gun's lock. It was also stated that the inventor was working on a gun capable of firing up to 30 times in a minute on more or less the same principles. Belton flintlock (1777) Girandoni air rifle (1779) Break Action Flintlock (18th century) Boxlock action (1782) 1789 French rifle: In 1791 it was mentioned in a book published in France that there had existed since at least 1789 a rifle that held 5 or 6 shots and was capable of being reloaded three times in a minute, for a total of 15 or 18 shots a minute. A rifle similar in type to this was also stated to be kept at the Hôtel de la Guerre. Joseph Manton's shotguns (1790s) Church and Bartemy/Bartholomew gun: A repeating rifle designed by the Americans William Church and Chrostus Bartemy or Bartholomew in 1813, with three separate magazines containing up to 42 charges of ammunition and capable of firing 25 shots a minute. It could be reloaded in one minute. Thomson rifle: a flintlock repeating rifle patented in 1814, using multiple breeches to obtain repeating fire. Leroy rifle: In 1815 (sometimes incorrectly dated as 1825) a French inventor called Julien Leroy patented a flintlock and percussion revolving rifle with a mechanically indexed cylinder and a priming magazine. Lepage guns: In 1819 a French gunsmith called Lepage invented and presented at the French industrial exposition of that year percussion 2-shot and 4-shot turn-over rifles. In 1823 he exhibited a volley rifle that fired 7 rifled barrels simultaneously, as well as a turn-over carbine. In 1827, the same inventor exhibited at another French industrial exposition 11 percussion and 1 flintlock firearms, which included a 4-shot turn-over rifle, a 'double rifle' with a 5-charge cylinder, and a 'single rifle' and a pair of pistols also with 5-charge cylinders. Sutherland magazine pistol: In 1821 the British gunmakers R and R Sutherland advertised for auction, amongst a variety of firearms, a single-barrelled six-shot magazine pistol. Pirmet-Baucheron revolving rifle: In 1822 a French gunsmith called Pirmet-Baucheron presented a revolving rifle with 7 shots and a single lock. Hewson magazine gun: In 1824 an English gunsmith called W. P. Hewson advertised, amongst other firearms and one air gun, a magazine gun. Jobard rifle: a turret rifle with 14 shots, patented in Belgium in 1826 and presented to the government in 1835. Henry rifle: a French 14-shot flintlock rifle in the style of the Kalthoff and Lorenzoni rifles, patented in 1831 (granted in 1835) by Francois-Antoine Henry, though possibly based on an earlier design published in 1809 by the same author. Baker pistols: In 1833 an English gunsmith called T. H. Baker advertised one-, two-, four-, five- and seven-shot pistols for sale. Kavanagh pistols: In 1834 a variety of pistols were exhibited by the Irish gunsmith William Kavanagh, one of which had a 'revolving breech' capable of firing 7 or 8 times, invented by a clergyman called Robert Carey, as well as a 'self-loading pistol'. Olive pistol: In 1835 it was mentioned in a French periodical that a French inventor called Jean-Francois-Augustin Olive, who was seeking funds for developing a breech-loading, 8-shot pistol into a 30-shot version, had been arrested. 
Osterried guns: In 1835 it was mentioned in a French newspaper that an Osterried of Bavaria had invented a rifle and 3 different kinds of pistols: the first had 2 barrels and 4 hammers for firing 4 successive shots; the second had one barrel and 6 'mouths', no hammer, and was actuated by the trigger; and the third had 8 'mouths' and could be fired 16 successive times. In response to this announcement, it was mentioned in an Austrian newspaper that similar inventions had already been known in their country for a long time, citing as an example a pistol invented a few years earlier by the head gunsmith of the local imperial armoury, called Ulrich, which was claimed to be able to fire 14 successive times from 7 barrels that were all loaded at once, and which could fit comfortably inside a user's pocket. Irish Magazine pistol: In 1836 a magazine pistol was advertised for auction in Ireland. Silas Day magazine gun: A percussion revolving rifle to which was attached a loose-powder-and-ball magazine, patented in the US in 1837. Colt ring lever rifles (1837) Bailey, Ripley and Smith Magazine rifle: In 1838 the Americans Lebbeus Bailey, John B. Ripley and William B. Smith patented a percussion repeating rifle with a gravity-operated tubular magazine in the stock which could hold up to 15 re-useable steel cartridge-chambers. Eaton rifle: In 1838 a percussion rifle invented in America by James Eaton was described as being capable of holding 24 rounds in a rotating magazine and discharging them all in four minutes, for a rate of fire of 6 rounds per minute. Kratsch rifle: In 1839 it was reported that a mechanic called Kratsch from Bayreuth had invented a rifle capable of firing 30 times in a minute and being reloaded in one minute. Branch pistols: In 1842 an English gunsmith called T. Branch advertised two six-shot 'self acting' pistols for sale. Devisme guns: In 1844 a French gunsmith known as Devisme presented a variety of repeating firearms at the French Industrial Exposition of 1844, including an 18-shot pistol with no visible hammer or lock, a 6-shot pistol, a rifle with 6 shots and a 'revolving breech', and a four-shot 'double acting' rifle. Jennings Magazine rifle: In 1847 Walter Hunt patented in Britain a repeating rifle he called "the Volitional Repeater". He would patent it again in the United States in 1849. This rifle featured a tubular magazine beneath the barrel and a lever mechanism to raise cartridges into the chamber. Unable to finance the building of the rifle, Hunt sold the rights to George Arrowsmith, who in turn had an employee, Lewis Jennings, improve the lever mechanism. Courtland Palmer, a hardware merchant, placed the first order for the "Jennings Magazine rifle" with the firm Robbins & Lawrence. The rifle did not sell well, as the ammunition was a hollow-based bullet containing gunpowder. Most of the guns were later converted to single-shot rifles. Two employees at Robbins & Lawrence, Horace Smith and Daniel B. Wesson, improved the design and sold it as the "Smith-Jennings Repeating Rifle". At first they used a slightly modified Flobert cartridge, patented in 1853, but later they would switch to a modified Rocket Ball type of ammunition, altered so as to function as a self-contained centerfire cartridge. Cass Repeating Belt gun: A percussion repeating rifle patented in 1848 in the US, using a chain or belt in the stock which carried paper cartridges to the breech of the gun. 
Buchel Cartridge Magazine gun: The first tubular cartridge magazine gun patented in the United States, in February 1849. Perry 'Faucet-Breech' gun: A hinged or tilting breech repeating rifle patented in the US in December 1849 by Alonzo Perry, using paper cartridges contained in several gravity-operated tubular magazines in the stock and a separate magazine for the fulminate pills which were used for ignition. Porter self-loading gun: In February 1851 a loose-powder-and-ball percussion magazine gun invented by a Parry W. Porter – better known for the turret rifle he invented, to which the magazine for his loose-powder-and-ball gun was to be attached – was reported on in American newspapers, and later in the same year the inventor procured a patent. Needham self-loading carbine: A self-loading carbine demonstrated in June 1851 at the Great Exhibition by Joseph Needham. Renette self-loading pocket pistol: A self-loading pocket pistol demonstrated in 1851 at the Great Exhibition in London by the French inventor Gastinne Renette, using cylindro-conoidal bullets. Bertonnet self-loading firearm: It is mentioned in Hunt's Handbook to the Official Catalogues of the Great Exhibition of 1851 that a French inventor called Bertonnet demonstrated a self-loading firearm in 1851 at the Great Exhibition, though no details are provided. Dixon self-loading and self-priming gun: A repeating gun demonstrated by a C. S. Dixon which won a silver award at the Annual Fair of the American Institute in October 1851. The first slide-action patent: Issued in Britain in 1854 to Alexander Bain, who modified the mechanism of a harmonica gun. 1854 Lindner revolving rifle: In 1854 the German Edward Lindner patented in the United States and Britain a repeating rifle which used a revolving cylinder to elevate cartridges from a tubular magazine located under the barrel to the breech; the cartridges were paper, and could either be self-contained needlefire cartridges or use external percussion caps for ignition. Colette gravity pistol: a repeating saloon gun premiered at the 1855 World's Fair. Despite popularly being known as the Colette Gravity Pistol, it was actually invented by a Belgian called Jean Nicolas Herman. Colt revolving rifle (1855) Leroux magazine gun: At the Exposition Universelle (1855) in France a French gunsmith called Leroux demonstrated a repeating carbine with a magazine for 36 Flobert cartridges, which featured a novel cartridge extractor. Spencer repeating rifle (1860) Roper repeating shotgun (1866) Mechanisms Manual In a manually operated repeating firearm (or "manual repeater" for short), the user needs to manually apply force to the action to operate it, either directly to a handle on the bolt or an external hammer, or indirectly through a linkage connected to a lever or slide. Revolver action Revolvers use a rotating cylinder containing multiple chambers, which functions similarly to a rotary magazine (each chamber holding one cartridge). When the hammer is cocked (either directly by hand, or indirectly via trigger pull), internal linkage rotates the cylinder and indexes the next chamber into alignment with the barrel bore. When firing, the bullet makes a slight "jump" across the gap between the cylinder and the barrel, creating a small "breech blast" as hot, high-pressure propellant gas leaks out of the gap. The breech portion of the bore is also often widened slightly into a funnel-like "cone" to better facilitate the bullet's jump across the cylinder gap. 
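As a purely illustrative sketch of the indexing behaviour just described, a single-action revolver can be modelled in Python as a rotating buffer of chambers; the class and its methods are hypothetical teaching constructs, not any real library's API:

from collections import deque

class Revolver:
    # Toy model: cocking indexes the cylinder so the next chamber aligns
    # with the bore; pulling the trigger discharges that chamber.

    def __init__(self, chambers: int = 6):
        self.cylinder = deque([None] * chambers)  # None = empty chamber
        self.cocked = False

    def load(self, cartridge: str) -> None:
        # Fill the first empty chamber found.
        for i, contents in enumerate(self.cylinder):
            if contents is None:
                self.cylinder[i] = cartridge
                return

    def cock(self) -> None:
        # The hand/pawl rotates the cylinder one chamber and locks it in
        # line with the barrel as the hammer is drawn back.
        self.cylinder.rotate(-1)
        self.cocked = True

    def pull_trigger(self) -> str:
        # Single-action behaviour: the hammer must be cocked first.
        if not self.cocked:
            return "click (not cocked)"
        self.cocked = False
        contents, self.cylinder[0] = self.cylinder[0], "spent case"
        return "bang" if contents not in (None, "spent case") else "click"

gun = Revolver()
for _ in range(6):
    gun.load(".38 cartridge")
gun.cock()
print(gun.pull_trigger())  # bang

A double-action model would simply fold cock() into pull_trigger(), mirroring how a double-action trigger both rotates the cylinder and releases the hammer in one pull.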
Although multiple-barrel "pepper-box" guns had existed for centuries and were popular handguns in the early 19th century, the revolver was the first true repeating handgun. In 1836, Samuel Colt applied for a patent for a "revolving gun", later named the Colt Paterson; he was granted the patent on 25 February 1836 (later numbered 9430X). This instrument and patent No. 1304, dated 29 August 1836, protected the basic principles of his revolving-breech-loading, folding-trigger firearm and gave him a monopoly on revolver manufacture until 1857. It was the first practical revolver and the first practical repeating firearm, and became an industrial and cultural legacy as well as a contribution to the development of war technology, represented ironically by the name of one of his company's later innovations, the "Peacemaker". While some early long guns were also made using the revolver mechanism, these did not last, as the mechanism posed a problem with long guns: without special sealing details, the cylinder produces a gas discharge close to the face when the weapon is fired from the shoulder, as is the common approach with rifles. The Milkor MGL is a lightweight 40 mm grenade launcher based on a six-shot revolver mechanism, designed to significantly increase a squad's firepower when compared to traditional single-shot grenade launchers like the M203. Although intended primarily for military combat, the launcher is also suitable as a riot gun for mob control and other law enforcement operations using tear gas or non-/less-lethal munitions. Revolver cannon A revolver cannon is a large-caliber gun (cannon) that uses a revolver-like cylinder to speed up the loading-firing-ejection cycle. Unlike a rotary cannon, a revolver cannon has only a single gun barrel. An early precursor was the Puckle gun of 1718, a large manually operated flintlock gun, whose design was impractical because it was far ahead of what 18th-century technology could achieve. During the 19th century, the Confederate Army used a single 2-inch revolver cannon with 5 manually rotated chambers during the Siege of Petersburg. The gun was captured in Danville, Virginia by the Union Army on 27 April 1865. Modern revolver cannons are actually automatically operated weapons. In 1905, C. M. Clarke patented the first fully automatic, gas-operated rotary chamber gun, but his design was ignored at the time, as it came when reciprocating-bolt automatic weapons like the Maxim gun and the Browning gun were peaking in popularity. In 1932, the Soviet ShKAS machine gun, a 7.62 mm calibre aircraft ordnance, used a twelve-round-capacity, revolver-style feeding mechanism with a single barrel and single chamber to achieve firing rates of well over 1,800 rounds per minute, and as high as 3,000 rounds per minute in special test versions in 1939, all operating from internal gas-operated reloading. Some 150,000 ShKAS weapons were produced for arming Soviet military aircraft through 1945. Around 1935, Silin, Berezin and Morozenko worked on a 6,000 rpm 7.62 mm aircraft machine gun using a revolver design, called SIBEMAS (СИБЕМАС), but the project was abandoned. It was not until the mid-1940s that the first practical modern revolver cannon emerged. The archetypal revolver cannon is the Mauser MK 213, from which almost all current revolver cannons are derived. In the immediate post-war era, Mauser engineers spread out from Germany and developed similar weapons around the world. 
Both the British and the French made outright copies of the 30 mm version of the MK 213, as the ADEN and DEFA, respectively. Switzerland produced the Oerlikon KCA. The American M39 cannon used the 20 mm version, re-chambered for a slightly longer 102 mm cartridge, intermediate between the MK 213's 82 mm and the Hispano-Suiza HS.404's 110 mm. Several generations of the basic ADEN/DEFA weapons followed, remaining largely unchanged into the 1970s. Around that time, a new generation of weapons was developed, based on the proposed NATO 25 mm caliber standard and the Mauser 27 mm round. A leading example is the Mauser BK-27. In the 1980s, the French developed the GIAT 30, a newer-generation power-driven revolver cannon. The Rheinmetall RMK30 modifies the GIAT system further by venting the gas to the rear to eliminate recoil. Larger experimental weapons have also been developed for anti-aircraft use, like the Anglo-Swiss twin-barrel but single-chamber 42 mm Oerlikon RK 421, given the code name "Red King", and the related single-barrel "Red Queen" – all of which were cancelled during development. The largest to see service is the Rheinmetall Millennium 35 mm Naval Gun System. Soviet revolver cannons are less common than Western ones, especially on aircraft. A mechanism for a Soviet revolver-based machine gun was patented in 1944. The virtually unknown Rikhter R-23 was fitted only to some Tu-22 models, but was later abandoned in favor of the two-barrel Gast-type Gryazev-Shipunov GSh-23 in the Tu-22M. The Rikhter R-23 does have the distinction of having been fired from the space station Salyut 3. The Soviet navy has also adopted a revolver design, the NN-30, typically in a dual mount in the AK-230 turret. Lever-action In a classic Henry–Winchester type lever-action firearm, cartridges are loaded in tandem into a tubular magazine below the barrel. A short bolt is manipulated via linkage to a pivoted cocking lever. Once closed, an over-center toggle action helps lock the bolt in place and prevents the breech from opening accidentally when the weapon is fired. The cocking lever is often integral with the trigger guard, and is manually swung down and forward when operated. An interlock prevents firing unless the toggle is fully closed. The famous Model 1873 Winchester is exemplary of this type. Later lever-action designs, such as Marlin lever guns and those designed for Winchester by John Browning, use one or two vertical locking blocks instead of a toggle link. There also exist lever-action rifles and shotguns that feed from a box magazine, which allows them to use pointed bullets. Some of the early manual repeating pistols (e.g. the Volcanic pistol) also used a scaled-down version of the lever-action. A one-off example of lever-action loading on an automatic firearm is the M1895 Colt–Browning machine gun. This weapon had a swinging lever beneath its barrel that was actuated by a gas bleed in the barrel, unlocking the breech to reload. This unique operation earned it the nickname "potato digger", as the lever swung each time the weapon fired. Pump-action In a pump-action firearm, the action is operated by sliding a movable handguard on the fore-end backward and forward, which manipulates the bolt via linkage to extract and eject a spent round and chamber a fresh round of ammunition. Pump-actions are usually associated with shotguns, but an example of a pump-action rifle is the Remington Model 7600 series. 
This type of rifle remains popular with some local law enforcement branches, as it is easier to train officers who are already familiar with the pump-action shotgun.

Bolt-action
In bolt-action firearms, the bolt is operated by directly gripping a bolt handle (usually on the right side) to extract the spent cartridge case, push a new round into the chamber, and reset the hammer/striker to ready the weapon for firing again. Most bolt-action firearms use a rotating-bolt ("turn-and-pull") design. When the bolt is closed against the breech end of the barrel, it is locked onto the receiver via protruding lugs (usually on the bolt head), occasionally aided by the bolt handle itself fitting into a notch. To unlock the bolt, the handle must first be rotated upwards, which shifts the locking lugs out of their corresponding sockets. This allows the bolt to be pulled rearwards, opening the barrel breech. An extractor on the bolt hooks onto the rim and pulls out any cartridge (fired or unused) remaining in the chamber, allowing it to be ejected from the gun. When the bolt is pulled fully to the rear, the hammer/striker is compressed against a spring and caught by the sear, a process known as cocking. At the same time, the magazine lifts another of its stored cartridges into the path of the bolt head, so moving the bolt forward pushes this new round into the chamber. The bolt handle is then rotated downward to relock, leaving the gun ready for another shot. The Mauser Gewehr 98 rifle is the most famous and influential bolt-action design, with many similar weapons derived from its pioneering design concept, such as the Karabiner 98 Kurz (often abbreviated Kar98k or simply K98), the M1903 Springfield and the Arisaka Type 38 rifles. The Russian Mosin–Nagant, the British Lee–Enfield, and the Norwegian Krag–Jørgensen are examples of alternative bolt-action designs. Another, much rarer type of bolt action is the straight-pull system, which uses more complex bolt-head mechanisms to achieve locking. Straight-pull designs do not require the bolt handle to be rotated, allowing the user to cycle the action linearly; this reduces the movements needed from four to two and therefore significantly increases the rate of fire. Examples of such firearms include the Schmidt–Rubin, the Mannlicher M1886/M1888/M1890/M1895, the M1895 Lee Navy, the Ross rifle, the Anschütz 1827 Fortner, the Blaser R93/R8 and the VKS.

Autoloading
Self-loading (or autoloading) repeating firearms use some of the excess energy released by propellant combustion to cycle the action and load subsequent rounds into the chamber, without requiring the user to operate the action by hand. Depending on whether the action automatically performs both the loading and ignition procedures, or only loads the next round while requiring a separate trigger actuation for each shot, self-loading repeaters are categorized as fully automatic or semi-automatic firearms.

Blowback
In blowback operation, the bolt is not actually locked at the moment of firing. To prevent violent recoil, in most firearms using this mechanism the opening of the bolt is delayed in some way. In many small arms, the round is fired while the bolt is still travelling forward, and the bolt does not open until this forward momentum is overcome.
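To make the inertia argument concrete, here is a minimal momentum sketch in Python. All masses and the velocity are illustrative assumptions, not figures for any particular firearm, and gas momentum and spring force are ignored for brevity; this is a back-of-the-envelope illustration, not a ballistics model.

```python
# Minimal momentum sketch for simple blowback (illustrative numbers only).
# The bolt is held closed only by its inertia and the recoil spring, so the
# momentum imparted to the bolt roughly mirrors that of the projectile.

bullet_mass_kg = 0.008      # assumed 8 g projectile
muzzle_velocity_ms = 350.0  # assumed pistol-class muzzle velocity

# Conservation of momentum (gas momentum and spring force ignored):
for bolt_mass_kg in (0.25, 0.5, 1.0):
    bolt_velocity_ms = bullet_mass_kg * muzzle_velocity_ms / bolt_mass_kg
    print(f"bolt mass {bolt_mass_kg:.2f} kg -> ~{bolt_velocity_ms:.1f} m/s rearward")

# A heavier bolt moves back more slowly, keeping the breech effectively
# closed while chamber pressure is high, which is one reason simple
# blowback is limited to low-powered cartridges.
```

The same bookkeeping suggests why delayed-blowback designs can use a lighter bolt: the delay mechanism substitutes for brute bolt mass.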
Other methods delay the opening mechanically, for example until two rollers have been forced back into recesses in the receiver that carries the bolt. Simple blowback is inexpensive to manufacture but limited in the power it can handle, so it is seen on small-caliber weapons such as machine pistols and submachine guns. Lever-delayed blowback, as seen for example in the French FAMAS assault rifle, can handle more powerful cartridges but is more complicated and expensive to manufacture.

Blow-forward
Blow-forward firearms have a frame with a fixed breech face, and the barrel moves away from the breech (frame) during the cycle of operation; blowback firearms, by contrast, have the barrel fixed to the frame while the breech face moves relative to it. The breech face is part of the moving slide or bolt, depending on the layout of the blowback firearm. During firing, the friction of the bullet traveling down the barrel and the bore pressure pull the barrel forward. This mechanism contains a minimum of moving parts (the barrel and spring are generally the only ones) and is more compact than other operating mechanisms of equal barrel length. However, because of the reduced mass of rearward-moving parts and the increased mass of the forward-moving parts (the barrel plus the bullet and propellant gases), recoil energy is significantly greater than in other operating mechanisms. Most blow-forward guns rely partly on the inertia of the barrel as the rest of the firearm recoils away from it. The first blow-forward firearm was the Mannlicher M1894 pistol. The principle has been used in a few other weapons, including the Schwarzlose Model 1908, the Hino Komuro M1908, the HIW VSK, the Mk 20 Mod 0 grenade launcher, the Pancor Jackhammer and the Howa Type 96.

Recoil-operated
In a recoil-operated firearm, the breech is locked, and the barrel recoils as part of the firing cycle. In long-recoil actions, such as the Browning Auto-5 shotgun, the barrel and breechblock remain locked together for the full recoil travel and separate on the return; in short-recoil actions, typical of most semiautomatic handguns (e.g. the Colt M1911), the barrel recoils only a short distance before decoupling from the breechblock.

Gas-operated
In a gas-operated mechanism, a portion of the gas propelling the bullet down the barrel is tapped off and used to drive a piston. The motion of this piston in turn unlocks and operates the bolt, which extracts the spent cartridge and, via spring action, chambers the next round. Almost all modern military rifles use mechanisms of this type.

Rotary-barrel
Rotary-barrel firearms (or rotary guns for short) use multiple paraxial barrels in a rotating assembly, with each barrel firing automatically when rotated to a designated position, achieving a rate of fire proportional to the speed of barrel rotation. Rotary guns are typically belt-fed, though earlier versions used top-mounted box magazines. Each barrel is paired with a cam-driven reciprocating action, so every barrel-action group is technically an independent repeater unit whose operating status corresponds to its rotational position within the assembly; at any moment, the groups are all at different stages of the operating cycle, as sketched below. Because they can tolerate extremely rapid fire (much higher than single-barreled automatic weapons of the same caliber), rotary guns are frequently used to deliver direct saturation fire for suppression and area denial.
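The rate-sharing arithmetic behind this tolerance, spelled out with concrete figures in the next paragraph, can be shown in a few lines of Python; the totals below are the illustrative numbers used there, not data for a specific weapon.

```python
# Per-barrel load of a rotary gun: each barrel fires once per revolution,
# so the system's rate of fire divides evenly across the barrels.

system_rate_rpm = 1000  # illustrative total rounds per minute
barrels = 5

per_barrel_rpm = system_rate_rpm / barrels
print(f"each barrel fires {per_barrel_rpm:.0f} rounds per minute")

# Each barrel-action group also gets a whole rotation to complete its
# loading-firing-ejection cycle, rather than the single-barrel cycle time:
single_barrel_cycle_s = 60 / system_rate_rpm   # 0.06 s per cycle
rotary_cycle_budget_s = 60 / per_barrel_rpm    # 0.30 s per cycle
print(f"cycle budget: {single_barrel_cycle_s:.2f} s vs {rotary_cycle_budget_s:.2f} s")
```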
Early rotary guns were manually powered and, though quite successful in their day, were largely displaced from the battlefield before the turn of the 20th century by newer and more reliable machine guns such as the Maxim gun; they made a comeback during the Cold War in the form of automatic rotary cannons. One of the main reasons for the resurgence of these electrically or hydraulically powered multiple-barrel guns is the system's inherent tolerance for continuous high rates of fire. For example, 1,000 rounds per minute of continuous fire from a conventional single-barrel weapon ordinarily results in rapid barrel overheating followed by action stoppages, also caused by overheating; in contrast, a five-barreled rotary gun firing 1,000 rounds per minute subjects each barrel to only 200 rounds per minute. The other factor is that while single-barrel designs can achieve high cycling rates, each loading-extraction cycle can only begin after the previous cycle is physically complete, or else the system jams mechanically, and the risk of such malfunction rises sharply at higher cycling rates; a multiple-barrel design, however, allows several barrel-action groups to work simultaneously in overlapped, differentially timed cycles, diffusing the operational stress on each action across the duration of an entire barrel rotation (many times longer than the cycle time of a single-barrel automatic firearm with the same firing rate). The design also solves the problem of defective ammunition, which can cause a typical single-barrel machine gun to cease operation when a cartridge fails to load, fire or eject; because a rotary gun is normally powered by an external power source, barrel rotation continues regardless, ejecting any defective rounds as part of the operating cycle, and firing merely pauses briefly for the non-firing barrel before the other barrels resume normal fire.

Manual
The earliest rotary-barrel firearm was the Gatling gun, invented by Richard Jordan Gatling in 1861 and patented on 4 November 1862. The Gatling gun was operated by a hand-crank mechanism, with six barrels revolving around a central shaft (although some models had as many as ten). Each barrel fired once per revolution at about the same 4 o'clock position. The barrels, a carrier and a lock cylinder were separate, all mounted on a solid plate that sat on an oblong fixed frame. Manually turning the crank rotated the shaft. The carrier was grooved, and the lock cylinder was drilled with holes corresponding to the barrels. Cartridges, held in a hopper-like magazine on top, dropped individually into the grooves of the carrier. The lock was simultaneously forced by the cam to move forward and load the cartridge, and when the cam reached its highest point, the cocking ring freed the lock and fired the cartridge. After the cartridge was fired, the continuing action of the cam drew back the lock, bringing with it the spent casing, which then dropped to the ground.

The Gatling gun was first used in combat during the American Civil War. Twelve of the guns were purchased personally by Union Army commanders and used in the trenches during the Siege of Petersburg (June 1864 – April 1865). Eight other Gatling guns were fitted on gunboats. The gun was not accepted by the Army until 1866, when a sales representative of the manufacturing company demonstrated it in combat. On 17 July 1863, Gatling guns were purportedly used to overawe New York anti-draft rioters.
After the Civil War, a Pennsylvania National Guard unit brought two Gatling guns from Philadelphia for use against strikers in the Pittsburgh railway riots. During the American Indian Wars, Gatling guns saw frequent service, though famously not at the Battle of the Little Bighorn, where Gen. George Armstrong Custer chose not to bring any with his main force. In 1885, Lieutenant Arthur L. Howard of the Connecticut National Guard took a personally owned Gatling gun to Saskatchewan, Canada, for use with the Canadian military against Métis rebels during Louis Riel's North-West Rebellion.

Gatling guns were used by the U.S. Army during both the Spanish–American War and the Philippine–American War. A four-gun battery of Colt-made Model 1895 ten-barrel Gatling guns in .30 Army was formed into a separate detachment led by Lt. John "Gatling Gun" Parker. The detachment proved very effective, supporting the advance of American forces at the Battle of San Juan Hill, where three of the Gatlings on swivel mountings were used with great success against the Spanish defenders. Despite this, the Gatling's weight and cumbersome artillery carriage hindered its ability to keep up with infantry over difficult ground, particularly in Cuba and the Philippines, where outside the major cities there were heavily foliaged forests and steep mountain paths, and the roads were often little more than jungle footpaths.

Elsewhere, a Gatling gun was purchased in April 1867 for the Argentine Army by minister Domingo F. Sarmiento under instructions from president Bartolomé Mitre. Captain Luis Germán Astete of the Peruvian Navy brought dozens of Gatling guns from the United States in December 1879 for use in the War of the Pacific between Peru and Chile, especially in the Battle of Tacna (May 1880) and the Battle of San Juan (January 1881). The Gatling gun was used most successfully to expand European colonial empires in Africa, defeating massed attacks by indigenous warriors (e.g. the Zulu, Bedouin, and Mahdists). Imperial Russia purchased 400 Gatling guns and used them against Turkmen cavalry and other nomads of Central Asia. The British Army first deployed the Gatling gun in 1873–74 during the Anglo-Ashanti wars, then used it extensively during the later actions of the 1879 Anglo-Zulu War. The Royal Navy used Gatling guns during the 1882 Anglo-Egyptian War.

Automatic
After the original Gatling gun was replaced in service by newer recoil- and gas-operated machine guns, the approach of using multiple rotating barrels fell into disuse for many decades, although some prototypes were developed during the interwar years; these were rarely used. During World War I, Imperial Germany worked on the Fokker-Leimberger, an externally powered 12-barrel Gatling gun nicknamed the "nutcracker" that was claimed to fire more than 7,200 rounds per minute, though many regarded the figure as exaggerated. Failures during the war were attributed to the poor quality of German wartime ammunition, although the type of breech employed also showed ruptured-case problems in a British experimental weapon of the 1950s. Fokker continued to experiment with this type of breech after his post-war move to the United States; a different Fokker prototype in a US museum attests to the failure of this line of development.

After World War II, the U.S. Army Air Force determined that an improved automatic cannon with an extremely high rate of fire was required against fast-moving enemy jet aircraft.
Drawing on experience gained from the Luftwaffe's MG 151 and MK 108 cannons, engineers deemed a larger-caliber cannon shell desirable for the new gun. In June 1946, the General Electric Company was awarded a U.S. military defense contract to develop a high-rate-of-fire aircraft gun, which GE termed "Project Vulcan". While researching prior work, ordnance engineers recalled the experimental electrically driven Gatling weapons from the turn of the 20th century. In 1946, a Model 1903 Gatling gun borrowed from a museum was fitted with an electric motor and test-fired, briefly managing a rate of 5,000 rounds per minute. In 1949, GE began testing the first model of its modified Gatling design, now called the Vulcan Gun. The first prototype was designated the T45 (Model A), firing at about 2,500 rounds per minute from six barrels, and in 1950 GE delivered ten initial Model A .60 cal. T45 guns for evaluation. Thirty-three Model C T45 guns in three calibers (.60 cal., 20 mm and 27 mm) were delivered in 1952 for additional testing. After extensive testing, the 20 mm T171 gun was selected for further development and was standardized by the U.S. Army and U.S. Air Force in 1956 as the M61 Vulcan gun.
Technology
Mechanisms_2
null
52970582
https://en.wikipedia.org/wiki/Diagnostic%20microbiology
Diagnostic microbiology
Diagnostic microbiology is the study of microbial identification. Since the advent of the germ theory of disease, scientists have been finding ways to isolate specific organisms. Using methods such as differential media or genome sequencing, physicians and scientists can observe novel functions in organisms, allowing more effective and accurate identification. Methods used in diagnostic microbiology typically exploit a particular difference between organisms to determine the species, often by reference to previous studies. New studies provide information that others can reference, giving scientists a baseline understanding of the organism they are examining.

Aerobic vs anaerobic
Anaerobic organisms require an oxygen-free environment. When culturing anaerobic microbes, broths are often flushed with nitrogen gas to purge any oxygen present, and growth can also take place on media in an oxygen-free chamber. Sodium resazurin can be added to indicate redox potential. Cultures are incubated in an oxygen-free environment for 48 hours at 35 °C before growth is examined. Anaerobic bacteria can be collected from a variety of patient samples, including blood, bile, bone marrow, cerebrospinal fluid, direct lung aspirates, tissue biopsies from a normally sterile site, fluid from a normally sterile site (such as a joint), dental abscesses, abdominal or pelvic abscesses, knife, gunshot, or surgical wounds, and severe burns.

Incubation length
Incubation times vary with the microbe being cultured. Traditional culturing techniques, for example, require less than 24 hours of culture time for Escherichia coli but 6–8 weeks for successful culturing of Mycobacterium tuberculosis before definitive results are available. A benefit of non-culture tests is that physicians and microbiologists are not constrained by such waiting periods. Incubation follows a growth curve that varies for each microorganism. Cultures pass through lag, log, stationary, and finally death phases. The lag phase is not well understood, but it is thought to consist of the microorganism adjusting to its environment by synthesizing proteins suited to the surrounding habitat. The log phase is the period during which a culture undergoes logarithmic growth until nutrients become scarce. The stationary phase is when culture concentration is highest and cells stop reproducing. As nutrients in the environment are depleted, organisms enter the death phase, in which toxic metabolites accumulate and nutrients run out to the point where cell death exceeds reproduction.
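The practical consequence of the log phase is that culture time is governed by doubling time. The following is a minimal sketch of that arithmetic, assuming pure exponential growth; the doubling times are commonly cited approximations (about 20 minutes for E. coli, on the order of a day for M. tuberculosis), and the starting and detectable cell counts are illustrative assumptions.

```python
import math

# Time for a culture to grow from an initial count to a detectable count,
# assuming pure log-phase growth: N(t) = N0 * 2 ** (t / t_double).

def time_to_detect(n0, n_detect, doubling_time_h):
    doublings = math.log2(n_detect / n0)
    return doublings * doubling_time_h

n0, n_detect = 1e2, 1e8  # assumed inoculum and detection threshold

# Approximate, commonly cited doubling times (illustrative):
for name, t_double_h in [("E. coli", 20 / 60), ("M. tuberculosis", 24.0)]:
    t = time_to_detect(n0, n_detect, t_double_h)
    print(f"{name}: ~{t:.0f} hours (~{t / 24:.1f} days)")
```

Even this toy calculation reproduces the orders of magnitude quoted above: hours for E. coli versus weeks for M. tuberculosis.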
Rapid identification after culture

Automated culturing systems
Automatic cell culturing systems are becoming popular because of their ability to maintain a sterile growth environment and to relieve laboratory staff of repetitive work. Laboratories can also set incubation times to adjust for the lag period in bacterial growth.

Blood cultures
Blood cultures can yield diagnostic results after culturing. The recent development of DNA-based PCR diagnostics has provided faster results than overnight biochemical tests. DNA diagnostic tests have nearly the same specificity as biochemical tests, giving the same diagnostic result in 90% of cases.

Breath tests
Breath tests for microbial diagnosis have been used in clinical settings for bacteria including Helicobacter pylori. Diagnostic tests using a patient's breath look for excreted metabolites manufactured by the infectious microorganism. H. pylori is detected by measuring the patient's exhaled CO2 concentration, which is increased by the organism's ability to convert urea into other derivatives.

Conventional tests

Antibody detection
A benefit of antibody detection (ELISA) is that identifying a protein on a microorganism is faster than with a western blot. Antibody detection works by attaching an indicator to an antibody of known specificity and observing whether the antibody attaches. ELISA can also indicate viral presence and is highly sensitive, with detection limits of 10−9–10−12 moles per litre. By knowing the epitope sequence of the antibody, ELISA can also be used for antigen detection in a sample.

Histological detection and culture
Histological methods are useful in microbiology because of their ability to quickly identify a disease present in a tissue biopsy.

Microscopy

Staining
Stains used in microbiological identification include the Gram stain, acid-fast stain, Giemsa stain, India ink stain, and Ziehl–Neelsen stain.

Wet Prep

Rapid antigen tests

Immunofluorescence
Immunofluorescence is performed by producing anti-antibodies with a fluorescent molecule attached, which glows when subjected to ultraviolet light. Antibodies are added to a bacterial solution, providing an antigen to which the fluorescent anti-antibodies bind.

Mass spectrometry
MALDI-TOF (matrix-assisted laser desorption/ionization time-of-flight) is a specific type of mass spectrometry able to identify microorganisms. A pure culture is isolated and spread directly on a stainless steel or disposable target. The cells are lysed and overlaid with a matrix, which forms complexes with the bacterial proteins. The instrument fires a laser and ionizes the protein complexes, which break off and travel up the vacuum tube, where they are detected based on mass and charge. The resulting protein spectrum is compared to a database of previously catalogued organisms, giving a rapid diagnosis. Recent studies have suggested that these tests can become specific enough to diagnose down to the sub-species level by observing novel biomarkers. The MALDI-TOF identification method requires pure cultures less than 72 hours old. This places the organism in log phase, with an abundance of ribosomal proteins, the most common proteins detected in the spectra. Identifications with this technology can also be affected if the culture has been exposed to cold temperatures, as this changes the typical protein distribution.

Biochemical Profile-based Microbial Identification Systems
Phenotypic tests are used to identify microbes based on the metabolic and biochemical pathways present in them. Many automated and semi-automated commercial systems are available. These methods can be very informative but are not as accurate as MALDI-TOF or genotypic methods.
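How such systems reduce a battery of reactions to a single identifier can be illustrated with the triplet-weighting scheme used by analytical profile index strips (described below). The test results and the lookup table here are toy assumptions, not real diagnostic data.

```python
# Toy profile-number encoding: tests are grouped in threes, positives in
# each group are weighted 1, 2 and 4, and the group sums concatenate into
# a numeric profile that is looked up in a reference database.

results = [True, False, True,   # group 1 -> 1 + 4 = 5
           False, False, True,  # group 2 -> 4
           True, True, False]   # group 3 -> 1 + 2 = 3

digits = []
for i in range(0, len(results), 3):
    group = results[i:i + 3]
    digits.append(sum(w for w, r in zip((1, 2, 4), group) if r))

profile = "".join(str(d) for d in digits)
print(f"profile number: {profile}")  # -> 543

toy_database = {"543": "example organism A", "101": "example organism B"}
print(toy_database.get(profile, "no match; retest or sequence"))
```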
6.5% salt broth
The 6.5% salt broth test analyzes the tolerance of various bacteria to halophilic conditions. It is useful because most organisms cannot survive in high salt concentrations, while Staphylococci, Enterococci, and Aerococci are all expected to tolerate a 6.5% NaCl concentration.

Acetate utilization
The acetate utilization test is used primarily to differentiate Escherichia coli from members of the genus Shigella. Many E. coli strains can utilize acetate as a sole carbon and energy source, while Shigella cannot. Since acetate utilization raises the pH, an indicator is added that changes color when acetate is being utilized.

ALA
An ALA (delta-aminolevulinic acid) test checks for the presence of porphyrin and cytochrome compounds. Finding hemin synthesis indicates that the organism is likely Haemophilus.

Aminopeptidase
The aminopeptidase test analyzes bacteria for production of the enzyme L-alanine aminopeptidase, found in many gram-negative bacteria. L-alanine-4-nitroanilide hydrochloride added to a bacterial culture works as an indicator, turning yellow in the presence of L-alanine aminopeptidase.

Analytical profile index
An analytical profile index is a fast identification system based on batteries of biochemical incubation tests. It is typically used to quickly diagnose clinically relevant bacteria by running about 20 tests at one time (the profile-number encoding is sketched above).

Antibiotic disks
Antibiotic disks are used to test the ability of an antibiotic to inhibit the growth of a microorganism. This method, commonly used with Mueller–Hinton agar, works by seeding bacteria evenly over a Petri dish and applying an antibiotic-treated disk to the top of the agar. The ring around the disk where bacterial growth is absent, the zone of inhibition, indicates the susceptibility of the organism to the antibiotic.

Bile esculin agar
The bile esculin test is used to differentiate members of the genus Enterococcus from Streptococcus.

Bile solubility
Bile solubility tests for Streptococcus pneumoniae, which is uniquely lysed by sodium deoxycholate. Lysis indicates S. pneumoniae; the absence of lysis does not.

CAMP
A CAMP test differentiates Streptococcus agalactiae from other species of beta-hemolytic Streptococcus. It exploits the fact that Streptococcus agalactiae secretes the CAMP factor, making it slightly more hemolytic, which can be observed on blood agar media.

Catalase
The catalase test determines whether a microbe produces the enzyme catalase, which catalyzes the breakdown of hydrogen peroxide. Smearing a colony sample onto a glass slide and adding a solution of hydrogen peroxide (3% H2O2) shows whether the enzyme is present: bubbling is a positive result, while no reaction is negative.

Cetrimide agar
Cetrimide agar slants are a selective medium used to isolate Pseudomonas aeruginosa.

CLO tests
The CLO test is used to diagnose H. pylori in patient biopsies. A sample of the biopsy is placed in a medium containing urea, which H. pylori can use in some of its biochemical pathways. Consumption of urea indicates a positive result.

Coagulase
The coagulase test determines whether an organism can produce the enzyme coagulase, which causes plasma to clot. Inoculating a tube of plasma with the microbe shows whether coagulase is produced: a clot indicates its presence, while no clot indicates its absence.

DNA hydrolysis
DNase agar tests whether a microbe can produce the exoenzyme deoxyribonuclease (DNase), which hydrolyzes DNA. Methyl green is used as an indicator in the growth medium because it is a cation that renders the medium opaque in the presence of negatively charged DNA strands.
When the DNA is cleaved, the medium becomes clear, showing DNase activity: the organism is grown on a DNase test agar plate (providing nutrients and DNA), and if it hydrolyzes the DNA, the green color fades and the colony is surrounded by a colorless zone.

Gelatin
The gelatin test analyzes whether a microbe can hydrolyze gelatin with the enzyme gelatinase. The gelatin keeps the medium solid, so if an organism produces gelatinase and consumes gelatin as an energy and carbon source, the medium liquefies during growth.

Gonocheck II
The Gonochek II test, a commercial biochemical test, differentiates between Neisseria lactamica, Neisseria meningitidis, N. gonorrhoeae and Moraxella catarrhalis. The principle is to use enzymes native to the organism to create a colored product from added substrate molecules. The chemical 5-bromo-4-chloro-3-indolyl-beta-D-galactoside is used because N. lactamica produces β-galactosidase, which hydrolyzes it and turns the solution blue. Gamma-glutamyl-p-nitroanilide indicates whether the bacterium is N. meningitidis, which hydrolyzes the molecule with the enzyme gamma-glutamylaminopeptidase, producing a yellow end product. Prolyl-4-methoxynaphthylamide identifies N. gonorrhoeae, which hydrolyzes the molecule with the enzyme hydroxyprolylaminopeptidase, creating a red-pink derivative. M. catarrhalis has none of these enzymes, leaving the solution colorless. This identification process takes approximately 30 minutes in total.

Hippurate
The hippurate test differentiates between Gardnerella vaginalis, Campylobacter jejuni, Listeria monocytogenes and group B streptococci using the chemical hippurate. Hippurate hydrolysis, carried out by organisms with the necessary enzymes, produces glycine as a byproduct. The indicator ninhydrin, which changes color in the presence of glycine, yields either a colorless product (a negative result) or a dark blue color (a positive result).

Indole butyrate disk
An indole butyrate disk differentiates between Neisseria gonorrhoeae (negative result) and Moraxella catarrhalis (positive result). A butyrate disk smeared with a culture changes to a blue color after 5 minutes of incubation for a positive result.

Lysine iron agar slant
The lysine iron agar slant test shows whether an organism can decarboxylate lysine and/or produce hydrogen sulfide.

Lysostaphin
The lysostaphin test differentiates between Staphylococcus and Micrococcus. Lysostaphin can lyse Staphylococcus, whereas Micrococcus is resistant to it.

Methyl red test
The methyl red test analyzes whether a bacterium produces acids through sugar fermentation.

Microdase
Microdase is a modified oxidase test used to differentiate Micrococcus from Staphylococcus by testing for the presence of cytochrome c. A positive result produces a dark color around the inoculum; a negative result produces no color change.

Nitrite test
The nitrite test is commonly used to diagnose urinary tract infections by measuring the concentration of nitrite in solution, indicating the presence of a gram-negative organism.
A simple nitrite test can be performed by adding 4 M sulfuric acid to the sample until it is acidic, then adding 0.1 M iron(II) sulfate. A positive test for nitrite is indicated by a dark brown solution, arising from the iron–nitric oxide complex ion.

Oxidase
The oxidase test indicates whether a microbe can perform aerobic respiration. The chemical N,N,N′,N′-tetramethyl-1,4-phenylenediamine, an electron donor that changes color when oxidized by cytochrome c oxidase, is used to deduce whether the enzyme is present. A color change to purple indicates oxidative respiration, while no color change provides evidence that the organism lacks cytochrome c oxidase.

Phenylalanine deaminase
The phenylalanine deaminase test shows whether an organism produces the deaminase enzyme that converts the amino acid phenylalanine into ammonia and phenylpyruvic acid. The test is performed by adding phenylalanine to the growth medium and allowing growth to occur. After incubation, 10% ferric chloride is added, which reacts with any phenylpyruvic acid in solution to give a dark green color, a positive result.

PYR
The PYR test checks whether an organism has enzymes to hydrolyze L-pyrrolidonyl-β-naphthylamide. A positive result indicates that the organism is group A streptococcus and/or group D enterococcus.

Reverse CAMP
The reverse CAMP test utilizes the synergistic hemolytic action of the CAMP factor produced by Streptococcus agalactiae with the α-toxin produced by Clostridium perfringens. Streaking these two organisms perpendicular to each other on a blood agar plate yields a "bow-tie" clearing of the blood agar, produced by the hemolytic action of the two organisms' toxins. Incubation requires 24 hours at 37 °C.

Simmons' citrate agar
Simmons' citrate agar tests whether an organism can use citrate as its sole carbon source.

Spot indole
The spot indole test determines whether a microbe can break down tryptophan to produce indole. A piece of filter paper is saturated with Kovacs' indole reagent and a portion of a colony is smeared onto it. A change to a pink-red color indicates a positive result, while no color change indicates the lack of tryptophanase.

Sulphide indole motility medium
The sulfide indole motility medium is a three-part test of an organism's ability to produce hydrogen sulfide, to produce indole, and to move (motility).

TSI slant
The triple sugar iron (TSI) test uses a differential medium to show whether an organism can ferment glucose, sucrose, and/or lactose, and whether it can produce hydrogen sulfide gas.

Urea agar slant
The urease agar slant measures an organism's ability to produce urease, an enzyme that hydrolyzes urea into carbon dioxide and ammonia. Because ammonia is alkaline, the medium contains phenol red, an indicator that changes from orange to pink when the pH rises above 8.1. When enough ammonia accumulates, the medium turns pink, indicating urease production.

Voges–Proskauer test
The Voges–Proskauer test detects whether a bacterium produces acetoin from the digestion of glucose.

Cellular fatty acid based identification

Mycolic acid analysis
Mycolic acid analysis has been an evolving application of gas-liquid chromatography, as it offers a way around the slow growth rates of Mycobacterium.
Mycolic acids are fatty acids found in the bacteria that cause tuberculosis, offering a chemical target for diagnosticians to look for.

Nucleic acid extraction techniques

Cesium chloride / Ethidium bromide density gradient centrifugation
In high-speed buoyant density ultracentrifugation, a density gradient is created with caesium chloride in water. DNA migrates to the point in the gradient that matches its own density, and ethidium bromide is then added to make the nucleic acid band visible.

Magnetic bead method
Newer extraction techniques use magnetic beads to purify nucleic acids, taking advantage of the charged, polymeric nature of long strands of DNA. Some beads are left uncoated to increase surface area and yield, while others are made more selective by coating them with functional groups that interact with the polymers present in microbes. One common method uses polyethylene glycol (PEG) to drive DNA binding to the magnetic beads; the molecular weight and concentration of the PEG control what molecular weight of DNA binds.

Phenol–chloroform extraction
Phenol–chloroform extraction is a liquid-liquid method used by biochemists to separate nucleic acids from proteins and lipids after cells have been lysed. The method has fallen out of favor with scientists and microbiologists, as easier methods requiring less hazardous chemicals are available.

Solid phase extraction
Solid phase extraction separates long polymers such as DNA from other substances found in cells. It is similar in principle to the magnetic bead method: a fixed solid phase selectively binds a cellular component, allowing its isolation.

Methods with electrophoretic outputs
Gel electrophoresis is a technique for separating macromolecules that takes advantage of the charge carried by nucleic acids and proteins. It is also the key method behind Sanger sequencing: fluorescently labeled DNA fragments move through a polymer and are separated with single-base precision; a laser excites the fluorescent tag, the emission is captured by a camera, and the result is an electropherogram from which the DNA sequence is read.

Restriction enzyme based

Optical mapping
Optical mapping uses multiple restriction enzymes to create a genomic "barcode" that can be referenced against known patterns to diagnose an unknown microbe.

Pulsed-field gel electrophoresis
Pulsed-field gel electrophoresis separates large DNA fragments in an electric field that periodically changes direction. After segments of the DNA are cut with restriction enzymes, pulsed-field electrophoresis can separate them by size.

Restriction enzymes then gel electrophoresis
Restriction enzymes are first used to recognize and cut specific nucleic acid sequences. The cut pieces of DNA can then be run through gel electrophoresis, allowing identification of the organism by comparison with previous electrophoresis results (a toy digest is sketched below).
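The fragment pattern these methods rely on can be sketched with a toy digest. The sequence below is made up, and GAATTC is the textbook EcoRI recognition site; the real enzyme cuts at a fixed offset within the site, which this sketch ignores for simplicity.

```python
# Toy restriction digest: cut a DNA sequence at every occurrence of a
# recognition site and report fragment lengths -- the raw pattern that
# gel electrophoresis then resolves by size.

def digest(sequence, site):
    fragments, start = [], 0
    pos = sequence.find(site)
    while pos != -1:
        fragments.append(sequence[start:pos])  # cut at the site start
        start = pos
        pos = sequence.find(site, pos + 1)
    fragments.append(sequence[start:])
    return fragments

dna = "ATGCGAATTCTTAGGGAATTCCGATAGCGAATTCAT"  # made-up sequence
pieces = digest(dna, "GAATTC")                # EcoRI-style site
print([len(p) for p in pieces])               # -> [4, 11, 13, 8]
```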
Ribotyping
Ribotyping is a rapid, automated method for microbial diagnostics that tests for rRNA in bacteria using restriction enzyme digestion and Southern blot technology.

PCR-based

Multiple loci VNTR analysis
Multiple-locus VNTR analysis detects variable-number tandem repeats, which act as a DNA fingerprint in microbial diagnostics.

DNA sequence-based methods

Multi-locus sequence typing
Multilocus sequence typing (MLST) is the sequencing of numerous loci to diagnose an organism by comparing the DNA sequences to a database of known organisms. This method is often used to compare isolates or strains of the same species to see whether they are indistinguishable or different from each other, which is common in tracking food-borne illnesses and public health outbreaks. Most MLST assays are published in scientific journals, so consistent methods are used worldwide, and public databases are available for tracking and comparison.

Single-locus sequence typing
Single-locus sequence typing (SLST) is the sequencing of a single locus of an organism to produce data that can be used for strain-level comparisons between isolates of the same species.

Genotypic identifications
For bacterial identification, microbiologists sequence the 16S rRNA gene; for fungal identification, they sequence the ITS regions. Both regions are part of the ribosomal operon, so they are well conserved yet provide enough variation to allow species-level identification. Accurate identification requires high-quality sequence data, a robust data analysis, and a broad microbial database of known organisms. It is also useful to build a neighbor-joining tree or apply some other phylogenetic approach to make the identification.

Whole genome sequencing (WGS)
Whole genome sequencing and genomics applications can be used for large-scale alignment and comparative analysis with both bacteria and fungi. WGS can be used to diagnose, identify, or characterize an organism down to individual base pairs by sequencing the entire genome. WGS can also be used to compare the genomes, or the average nucleotide identity (ANI) of the shared genes, between two strains; this is a robust way to compare genetic relatedness and is often used for investigating organisms involved in foodborne illness and other outbreaks (a toy ANI calculation is sketched below).
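A minimal sketch of the identity calculation behind ANI-style comparisons, assuming the shared genes have already been found and aligned (real ANI pipelines perform that alignment first); the two fragment pairs are made up.

```python
# Toy average nucleotide identity over pre-aligned, equal-length fragments.

def identity(a, b):
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

aligned_pairs = [
    ("ATGGCGTACGTTAGC", "ATGGCGTACGTAAGC"),  # one mismatch
    ("TTGACCGGATTACCA", "TTGACCGGATTACCA"),  # identical
]

ani = sum(identity(a, b) for a, b in aligned_pairs) / len(aligned_pairs)
print(f"ANI over shared fragments: {ani:.3%}")  # ~96.7%
# Values around 95-96% ANI are a commonly used bacterial species boundary.
```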
Biology and health sciences
Diagnostics
Health
42814957
https://en.wikipedia.org/wiki/Green%20iguana
Green iguana
The green iguana (Iguana iguana), also known as the American iguana or the common green iguana, is a large, arboreal, mostly herbivorous species of lizard of the genus Iguana. Usually, this animal is simply called the iguana. The green iguana ranges over a large geographic area; it is native from southern Brazil and Paraguay as far north as Mexico. A herbivore, it has adapted significantly in locomotion and osmoregulation as a result of its diet. It typically grows to around 1.5 m in length from head to tail, although a few specimens have grown to more than 2 m, with bodyweights upward of 9 kg. Commonly kept in captivity as a pet due to its calm disposition and bright colors, it can be very demanding to care for properly: space requirements and the need for special lighting and heat can prove challenging to the hobbyist.

Taxonomy
The species was first officially described by the Swedish naturalist Carl Linnaeus in 1758. Since then, numerous subspecies have been identified, but these were later classified as merely regional variants of the same species. Using nuclear and mitochondrial DNA-sequence data to explore the phylogenetic history of the green iguana, scientists from El Salvador, Mexico, and the United States studied animals collected from 17 countries. The topology of the phylogeny indicated that the species originated in South America and eventually radiated through Central America and the Caribbean. The study revealed no unique mitochondrial DNA haplotypes warranting subspecific status, but did indicate a deep lineage divergence between Central and South American populations. Naturalists once classified the Central American iguanas as a separate subspecies (I. i. rhinolopha), but this classification was later found to be invalid based on mitochondrial DNA: iguanas with similar nose projections appear randomly in other populations and interbreed freely with those that lack the trait. Genetic studies in the late 2010s nevertheless recovered I. rhinolopha as a distinct species, along with several other cryptic lineages present within I. iguana, suggesting that only the South American populations may be the "true" green iguana. Two new insular subspecies (I. i. insularis and I. i. sanctaluciae), endemic to St. Lucia and to Saint Vincent and the Grenadines and Grenada, were also identified in 2019; a 2020 study recovered both of these subspecies as part of a distinct species, the southern Antillean horned iguana (I. insularis). The study also found the Saban black iguana (I. melanoderma), described in that study, to be the sister group of South American I. iguana, with the clade containing both being sister to that of I. insularis. The Reptile Database disagrees with these conclusions and groups all of these within the green iguana, with four subspecies: I. i. melanoderma, I. i. insularis, I. i. sanctaluciae, and I. i. iguana.

Etymology
The word "iguana" is derived from a Spanish form of the Taíno name for the species: iwana. In some Spanish-speaking countries, males of the species are referred to as garrobo or ministro, and juveniles are called iguanita or garrobito.

Distribution and habitat
The native range of the green iguana extends from southern Mexico to central Brazil, Paraguay, and Bolivia and the Caribbean, specifically Grenada, Aruba, Curaçao, Bonaire, Trinidad and Tobago, St. Lucia, St. Vincent, Montserrat, Saba and Útila.
They have been introduced to Grand Cayman, Puerto Rico, Hispaniola (on the Dominican Republic side), Saint Martin, Guadeloupe, Martinique, Saint Vincent and the Grenadines, Singapore, Thailand, Taiwan, Texas, Florida, Hawaii, and the U.S. Virgin Islands. Furthermore, green iguanas colonised the island of Anguilla in 1995 after being washed ashore following a hurricane, providing direct evidence of oceanic dispersal as a mechanism allowing species to colonise areas where they did not previously occur. Though the species is not native to Martinique, a small wild colony of released or escaped green iguanas endures at historic Fort Saint Louis. The green iguana has been introduced from South America to Puerto Rico and is very common throughout the island, where it is considered an invasive species; in the United States, feral populations also exist in South Florida (including the Florida Keys), Hawaii, the U.S. Virgin Islands and the Rio Grande Valley of Texas. The green iguana has become rare in parts of its native range in Central and South America due to hunting of wild iguanas for food; there, iguanas have earned the nickname gallina de palo ("bamboo chicken" or "chicken of the trees"). Overhunting resulted in a partial closure of markets in Nicaragua in 1976, while the government of Panama had taken action by the late 1960s to protect iguanas.

Green iguanas are diurnal, arboreal, and often found near water. Agile climbers, they can fall from heights of up to 15 m and land unhurt, using their hind leg claws to clasp leaves and branches to break a fall. During cold, wet weather, green iguanas prefer to stay on the ground for greater warmth. When swimming, iguanas remain submerged, letting their legs hang limply against their sides, and propel themselves through the water with powerful tail strokes. While they may often be found in trees, these animals are also well-known burrowers, and their burrows vary considerably in depth and diameter. They have been observed burrowing in canals, levees, and dikes and along seawalls in southern Florida. If individuals do not dig their own burrows, they may use gopher tortoise burrows or usurp those of the Florida burrowing owl.

Description
The green iguana is a large lizard and is probably the largest species in the iguana family, though a few species in the genus Cyclura may match or exceed it in weight. Adults typically grow to around 1.5 m in length from head to tail. As in all iguanas, the tail comprises much of this length, and the snout-to-vent length of most green iguanas is 30 to 42 cm. A typical adult male weighs around 4 kg, while the smaller adult female typically weighs 1.2 to 3 kg. A few large males can reach or exceed 8 kg in weight and 2 m in length, and some specimens have reportedly been measured at body weights greater than 9 kg.

Despite their name, green iguanas occur in different colours and types. In the southern countries of their range, such as Peru, green iguanas appear bluish in colour, with bold blue markings. On islands such as Bonaire, Curaçao, Aruba, and Grenada, a green iguana's colour may vary from green to lavender, black, and even reddish brown. Green iguanas from the western region of Costa Rica are red, and animals of the northern ranges, such as Mexico, appear orange. Juvenile green iguanas from El Salvador are often bright blue, but lose this colour as they get older. Adult iguanas found on most of St. Lucia, mainly on the northeastern coast, Louvette, and Grand Anse, differ in many ways from other green iguana populations: they are light green with predominant black stripes.
Instead of the typical orange dewlap, the iguanas of St. Lucia have a black dewlap, and compared with the common green iguana, the females lay about half the number of eggs (25 instead of 50). The scales toward the back of their heads, near the jawbone, are smaller, and their irises are white or cream, whereas other green iguanas have yellow irises.

Green iguanas possess a row of spines along their backs and tails that helps protect them from predators. Their whip-like tails can deliver painful strikes, and like many other lizards, an iguana grabbed by the tail can allow it to break off, escaping and eventually regenerating a new one. In addition, iguanas have a well-developed dewlap, which helps regulate body temperature and is used in courtship and territorial displays.

Green iguanas have excellent vision, enabling them to detect shapes and motions at long distances. As they have only a few rod cells, they have poor vision in low-light conditions; at the same time, they have cells called double-cone cells that give them sharp colour vision and enable them to see ultraviolet wavelengths. This ability is highly useful when basking, ensuring the animal absorbs enough sunlight to produce vitamin D. Green iguanas have a white photosensory organ on the top of their heads called the parietal eye (also called the third eye, pineal eye, or pineal gland), in contrast to most other lizards, which have lost this primitive feature. This "eye" has only a rudimentary retina and lens and cannot form images, but it is sensitive to changes in light and dark and can detect movement, helping the iguana detect predators stalking it from above.

Green iguanas have very sharp teeth capable of shredding leaves and even human skin. The teeth are shaped like leaves, broad and flat with serrated edges. The similarity of these teeth to those of one of the first dinosaurs discovered led to that dinosaur being named Iguanodon, meaning "iguana tooth", and to the incorrect assumption that it resembled a gigantic iguana. The teeth are situated on the inner sides of the jawbones, which is why they are hard to see in smaller specimens.

Being primarily herbivorous presents green iguanas with a special problem for osmoregulation: plant matter contains more potassium, and because it is nutritionally less dense, more of it must be eaten to meet metabolic needs. As green iguanas cannot produce liquid urine more concentrated than their bodily fluids, they excrete nitrogenous wastes as urate salts, as birds do, through a salt gland. As a result, green iguanas have developed a lateral nasal gland to supplement renal salt secretion by expelling excess potassium and sodium chlorides.

Green iguanas from Guatemala and southern Mexico (which may belong to the distinct species I. rhinolopha) predominantly have small horns on their snouts between their eyes and nostrils, whereas others do not.

Ecology

Reproductive biology
Male green iguanas have highly developed femoral pores on the underside of their thighs, which secrete a scent (females have femoral pores as well, but they are smaller than those of the males). In addition, the dorsal spines that run along a green iguana's back are noticeably longer and thicker in males than in females, making the animals somewhat sexually dimorphic. Male green iguanas tend to display more dominance behaviours, such as head bobbing and tail whipping.
They also tend to develop a taller dorsal crest than females, as well as taller dorsal spines (or spikes). Large, round, very pronounced jowls are generally a male characteristic; the jowls sit under the jaw and are protected by the subtympanic plate, a large, green, circular scale.

Green iguanas are oviparous, with females laying clutches of 20 to 71 eggs once per year during a synchronized nesting period. The female green iguana provides no parental protection after egg laying, apart from defending the nesting burrow during excavation. In Panama, the green iguana has been observed sharing nest sites with American crocodiles, and in Honduras with spectacled caimans. The hatchlings emerge from the nest after 10–15 weeks of incubation. Once hatched, the young iguanas look similar to the adults in colour and shape, though they resemble adult females more than males and lack dorsal spines. Juveniles stay in familial groups for the first year of their lives. Male green iguanas in these groups often use their own bodies to shield and protect females from predators; it appears to be the only species of reptile that does this.

Behavior
When frightened by a predator, green iguanas attempt to flee and, if near a body of water, dive into it and swim away. If cornered by a threat, the green iguana extends and displays the dewlap under its neck, stiffens and puffs up its body, hisses, and bobs its head at the aggressor. If the threat persists, the iguana can lash with its tail, bite, and use its claws in defense. Wounded individuals are more inclined to fight than uninjured ones. Green iguanas use "head bobs" and dewlaps in a variety of social interactions, such as greeting another iguana or courting a possible mate; the frequency and number of head bobs have particular meanings to other iguanas. Green iguanas are hunted by predatory birds, and their fear of these is exploited as a ploy to catch them in the wild: a hunter imitates the sound of a hawk by whistling or screaming, causing the iguana to freeze and making it easier to capture.

Diet
Green iguanas are primarily herbivores, with captives feeding on leaves such as turnip greens, mustard greens, and dandelion greens, along with flowers, fruit, and the growing shoots of upwards of 100 different plant species. In Panama, one of the green iguana's favorite foods is the wild plum (Spondias mombin). Although they consume a wide variety of foods if offered, green iguanas are naturally herbivorous and require a precise two-to-one ratio of calcium to phosphorus in their diet. Captive iguanas must have a variety of leafy greens along with fruits and vegetables such as turnip greens, collard greens, butternut squash, acorn squash, mango, and parsnip. Juvenile iguanas often eat feces from adults to acquire the essential microflora needed to digest their low-quality, hard-to-process vegetation-only diet.

Some debate exists as to whether captive green iguanas should be fed animal protein. There is some evidence of wild iguanas eating grasshoppers and tree snails, usually as a byproduct of eating plant material, and wild adult green iguanas have been observed eating birds' eggs and chicks. They occasionally eat a small amount of carrion or invertebrates. Zoologists such as Adam Britton believe that a diet containing protein is unhealthy for the animal's digestive system, resulting in severe long-term health damage, including kidney failure, and leading to premature death.
On the other side of the argument, green iguanas at the Miami Seaquarium in Key Biscayne, Florida, have been observed eating dead fish, and individuals kept in captivity have been known to eat mice without any ill effects. De Vosjoli writes that captive animals have been known to survive and thrive on nothing but whole rodent block or monkey chow, and in one instance on romaine lettuce with vitamin and calcium supplements. When found in unnatural habitats, especially areas of high human population, they have also been known to feed on human garbage and poultry feces. Captive iguanas should not be fed lettuce or meat, and should instead receive the vitamins and minerals they need from a purely herbivorous diet.

As an invasive species

Caribbean
In the aftermath of Hurricane Luis and Hurricane Marilyn in 1995, a raft of uprooted trees carrying 15 or more green iguanas landed on the eastern side of Anguilla, an island where green iguanas had never been recorded before. These iguanas were apparently caught up in the trees by accident and rafted across the ocean from Guadeloupe, where green iguanas are an introduced species. Examination of weather patterns and ocean currents indicated that the iguanas had probably spent three weeks at sea before arriving on Anguilla. Evidence that this new colony was breeding on the island was found within two years of its arrival. In February 2012, the government of Puerto Rico proposed that the islands' iguanas, said to number 4 million and considered a non-native nuisance, be eradicated and sold for meat. Iguanas have also established introduced populations on islands in the Lesser Antilles, such as most of the French West Indies, Sint Eustatius, and Dominica.

Fiji
The green iguana is present as an invasive species on some of the islands of Fiji, where it is known as the American iguana. It poses a threat to the native iguanas through the potential spread of disease, and to humans by spreading Salmonella. Green iguanas were initially brought to Qamea in 2000 by an American who wanted them to eat the numerous insects on the island, although they are primarily herbivorous. They are now on the islands of Laucala, Matagi and Taveuni.

United States
The green iguana is established on Oahu and Maui, Hawaii, as a feral invasive species, despite strict legislation banning the importation of any reptiles, and in the Rio Grande Valley of Texas. As most reptiles carry Salmonella spp., this is a concern and a reason legislation has been sought to regulate the trade in green iguanas. Due to a combination of events, the green iguana is considered an invasive species in South Florida and is found along Florida's east coast, as well as the Gulf Coast, from Key West to Pinellas County. The original small populations in the Florida Keys were stowaways on ships carrying fruit from South America. Over the years, other iguanas were introduced into the wild, mostly through the pet trade; some escaped and some were intentionally released by their owners, and these iguanas survived and then thrived in their new habitat. They commonly hide in the attics of houses and on beaches, and they often destroy gardens and landscaping.
They seem to be fond of eating a native endangered plant, Cordia globosa, and of feeding on nickernut (Caesalpinia), a primary food plant of the endangered Miami blue butterfly (Cyclargus thomasi bethunebakeri); additionally, on Marco Island, green iguanas have been observed using the burrows of the Florida burrowing owl (Athene cunicularia floridana), a species of special concern. All of this can make them a more serious threat to Florida's ecosystem than originally believed. The damage green iguanas have caused has become significant and is expected to increase, but controversy remains over how to deal with the problem. According to the Florida Fish and Wildlife Conservation Commission, for example, green iguanas are not protected in Florida except by anti-cruelty law and can be humanely killed on private property with the landowner's permission. In January 2008, large numbers of iguanas established in Florida dropped from the trees in which they lived, because unseasonably cold nights had put them into a state of torpor and caused them to lose their grip on the branches. Though no specific numbers were provided by local wildlife officials, local media described the phenomenon as a "frozen iguana shower" in which dozens "littered" local bike paths. Upon the return of daytime warmth, many (but not all) of the iguanas "woke up" and resumed their normal activities. This occurred again in January 2010, January 2018, and December 2020, after prolonged cold fronts once again hit southern Florida.

Other countries
Iguanas are also present on Ishigaki Island and in Singapore, Thailand, and Taiwan.

Captivity
Green iguanas are by far the most globally traded reptiles, representing 46% of the total reptile trade in the US between 1996 and 2012, with annual imports reaching 1 million in 1996. The American pet trade has placed a great demand on the green iguana: 800,000 iguanas were imported into the U.S. in 1995 alone, primarily from captive farming operations in their native countries (Honduras, El Salvador, Colombia, and Panama). However, these animals are demanding to care for properly over their lifetimes, and many die within a few years of acquisition. Recently, an increase in illegal trading has been identified, and a ban on trade and transport within and out of the Lesser Antilles has been suggested. Green iguanas thrive only in temperatures of 26 to 35 °C and must have appropriate sources of UVB and UVA lighting; otherwise their bodies cannot produce the vitamin D that promotes calcium absorption, which can result in a metabolic bone disease that can be fatal. In some locales (such as New York City and Hawaii), iguanas are considered exotic pets and ownership is prohibited. Due to the potential impact of an introduced species on Hawaii's ecosystem, the state has strict regulations regarding the import and possession of green iguanas; violators can spend three years in jail and be fined up to $200,000.

Conservation
The green iguana is listed under Appendix II of the Convention on International Trade in Endangered Species (CITES), meaning that international trade is regulated through the CITES permit system. In addition, the green iguana is listed as Least Concern by the IUCN, with a note that habitat depletion from development may become a concern for green iguana populations in the future. Historically, green iguana meat and eggs have been eaten as a source of protein throughout the species' native range, and they are prized for their alleged medicinal and aphrodisiac properties.
In the past, there have been efforts to raise green iguanas in captivity as a food source, in an attempt to encourage more sustainable land use in Panama and Costa Rica. In 2020, iguana researchers collaborated to create an extended, "live" database on genetic variation within the green iguana. The intent of the database is primarily to guide population management, hybrid identification, and the monitoring of invasions and illegal trade.

Cultural references
The Moche people of ancient Peru worshipped animals and often depicted green iguanas in their art. The green iguana and its relative the black iguana (Ctenosaura similis) have been used as a food source in Central and South America for the past 7,000 years. It is possible that some of the populations in the Caribbean were translocated there from the mainland by various tribes as a food source. In Central and South America, green iguanas are still used as a source of meat and are often referred to as gallina de palo ("bamboo chicken" or "chicken of the tree"), because they are said to taste like chicken.
Climate change in India
India was ranked seventh among the countries most affected by climate change in 2019. India emits about 3 gigatonnes (Gt) CO2eq of greenhouse gases each year, about two and a half tonnes per person, which is less than the world average; the country emits 7% of global emissions despite having 17% of the world's population. In the 2021 Climate Change Performance Index, India ranked eighth among the 63 countries that together account for 92% of all GHG emissions.

Temperature rises on the Tibetan Plateau are causing Himalayan glaciers to retreat, threatening the flow rate of the Ganges, Brahmaputra, Yamuna and other major rivers. A 2007 World Wide Fund for Nature (WWF) report states that the Indus River may run dry for the same reason. Severe landslides and floods are projected to become increasingly common in states such as Assam. The frequency and intensity of heat waves are increasing in India because of climate change. Temperatures in India rose between 1901 and 2018. According to some current projections, the number and severity of droughts in India will have markedly increased by the end of the present century.

Greenhouse gas emissions
Greenhouse gas emissions by India are the third largest in the world, and the main source is coal. India emitted 2.8 Gt of CO2eq in 2016 (2.5 Gt including LULUCF); 79% was carbon dioxide, 14% methane and 5% nitrous oxide. India emits about 3 Gt CO2eq of greenhouse gases each year, about two tonnes per person, which is half the world average; the country emits 7% of global emissions. In 2023, India's emissions increased by 190 million tonnes due to strong GDP growth and reduced hydroelectricity production following a weak monsoon, with its per capita emissions remaining significantly below the global average.

Cutting greenhouse gas emissions, and therefore air pollution, in India would have health benefits worth 4 to 5 times the cost, which would be the most cost-effective in the world. India's Paris Agreement commitments included a reduction of the emission intensity of GDP by 33–35% by 2030. India's annual emissions per person are less than the global average, and the UNEP forecasts that by 2030 they will be between 3 and 4 tonnes. In 2019, China is estimated to have emitted 27% of world GHG, followed by the US with 11% and India with 6.6%. An Indian national carbon trading scheme may be created in 2026.
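As a rough sanity check on the per-person figures quoted in this section, the short Python sketch below divides national totals by population; the population and world-total values are illustrative assumptions, not figures from this article.

# Per-capita emissions are just national totals divided by population.
# Population and world-total figures here are rough assumptions for
# illustration, not values taken from this article.
INDIA_EMISSIONS_GT = 3.0     # ~3 Gt CO2eq per year (from the text)
INDIA_POPULATION = 1.4e9     # assumed ~1.4 billion people
WORLD_EMISSIONS_GT = 50.0    # assumed ~50 Gt CO2eq per year
WORLD_POPULATION = 8.0e9     # assumed ~8 billion people

def per_capita_tonnes(total_gt, population):
    """Convert a total in Gt CO2eq to tonnes of CO2eq per person."""
    return total_gt * 1e9 / population

print(f"India: {per_capita_tonnes(INDIA_EMISSIONS_GT, INDIA_POPULATION):.1f} t/person")
print(f"World: {per_capita_tonnes(WORLD_EMISSIONS_GT, WORLD_POPULATION):.1f} t/person")
# India: ~2.1 t/person, world: ~6.2 t/person, so India sits below the world average.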
Electricity generation
As of September 2021, India generates 39.8% of its electricity from renewable energy sources and 60.2% from fossil fuels, of which 51% is generated from coal.

Coal-fired power stations
As well as mining coal domestically, India also imports coal to burn in its coal-fired power stations. New plants are unlikely to be built, old and dirty plants may be shut down, and more coal may be burnt in the remaining plants.

Household fuel
Switching from traditional fuels to liquefied petroleum gas and electricity provides health and climate benefits.

Industry
The industrial sector, including the production of cement, iron, and steel, is a major contributor to global emissions, accounting for about a quarter of the total. From 2000 to 2014, fuel consumption in this sector surged by 406%, reflecting its rapid expansion and escalating energy demand. By 2014, the industry was also responsible for 42% of total energy consumption, highlighting the significant environmental impacts associated with industrial activities. India, the world's second-largest steel producer, accounts for 7% of global crude steel production. With rising domestic demand, emissions from this sector could significantly increase. Implementing standards for low- and near-zero-emissions steel is crucial to mitigate this risk. These standards facilitate essential policies and mechanisms such as procurement, financing, carbon pricing, and emissions trading systems, which are key to supporting India's transition to net-zero emissions and reducing the environmental impacts of its industrial sector.

Agriculture
Agricultural emissions increased 25% between 2005 and 2014, in part due to significant increases in the use of artificial fertilizers and the burning of crops.

Waste
Waste emitted 78 Mt of CO2eq in 2014.

Impacts on the natural environment

Temperature and weather changes
Temperatures in India rose between 1901 and 2018, thereby changing the climate in India. In May 2022, a severe heatwave was recorded in Pakistan and India, with temperatures reaching 51 °C. Climate change makes such heatwaves 100 times more likely. Without climate change, heatwaves more severe than those that occurred in 2010 would be expected once in 312 years; now they are expected to occur every 3 years. A 2018 study projects droughts to increase in northern and north-western India in the near future. Around the end of the century, most parts of India will likely face more frequent and more severe droughts. Severe landslides and floods are projected to become increasingly common in states such as Assam.

Sea level rise
Meghalaya and other northeastern states are concerned that rising sea levels will submerge much of Bangladesh and spawn a refugee crisis. If severe climate change occurs, Bangladesh and the parts of India that border it may lose vast tracts of coastal land. Thousands of people have already been displaced by ongoing sea level rise that has submerged low-lying islands in the Sundarbans.

Water resources
Temperature rises on the Tibetan Plateau are causing Himalayan glaciers to retreat, threatening the flow rate of the Ganga, Brahmaputra, Yamuna, and other major rivers; the livelihoods of hundreds of thousands of farmers depend on these rivers. A 2007 World Wide Fund for Nature (WWF) report states that the Indus River may run dry for the same reason.

Ecosystems
Ecological disasters, such as a 1998 coral bleaching event that killed off more than 70% of corals in the reef ecosystems off Lakshadweep and the Andamans, brought on by elevated ocean temperatures tied to global warming, are also projected to become increasingly common.

Impacts on people

Economic impacts
India has the world's highest social cost of carbon. A report by the London-based global think tank Overseas Development Institute found that India may lose around 3–10% of its GDP annually by 2100, and that its poverty rate may rise by 3.5% in 2040, due to climate change.

Reduced crop yields
Climate change in India will have a disproportionate impact on the more than 400 million people who make up India's poor, because so many depend on natural resources for their food, shelter and income. More than 56% of people in India work in agriculture, while many others earn their living in coastal areas. The impact of climate change on Indian agriculture was investigated through the National Innovations in Climate Resilient Agriculture (NICRA) study. The findings indicate that rainfed rice yields in India are expected to experience a marginal reduction of less than 2.5% in the 2050 and 2080 scenarios.
Irrigated rice yields, on the other hand, are projected to decline by 7% in the 2050 scenario and 10% in the 2080 scenario. The study also forecasts a decrease in wheat yields of 6% to 25% by 2100, while maize yields are estimated to decrease by 18% to 23% over the same period. However, there is a potential positive impact on chickpea, with anticipated productivity increases of 23% to 54% in future climates.

Health impacts
Air pollution, which reflects sunlight, and irrigation, which cools the air by evaporation, have counteracted climate change since 1970. These two factors do, however, increase the impact of heat waves, as both lead to increased mortality.

Heat waves
The frequency and intensity of heat waves are increasing in India because of climate change. In 2019, the temperature reached 50.6 °C and 36 people were killed. High temperatures were expected to impact 23 states in 2019, up from nine in 2015 and 19 in 2018. The number of heat wave days has increased, and not just day temperatures: night temperatures have increased as well. 2018 was the country's sixth hottest year on record, and 11 of its 15 warmest years have occurred since 2004. The capital New Delhi broke its all-time record with a high of 48 °C. Exposure to heat waves in India is projected to increase eightfold between 2021 and 2050, and by 300% by the end of this century. The number of Indians exposed to heat waves increased by 200% from 2010 to 2016. Heat waves also affect farm labour productivity.

Heat waves affect central and northwestern India the most, and the eastern coast and Telangana have also been affected; in 2015, the latter places witnessed at least 2,500 deaths. In 2016, for the first time in history, Kerala reported a heat wave. The government is advised by the Indian Institute of Tropical Meteorology in predicting and mitigating heat waves. The government of Andhra Pradesh, for instance, is creating a Heat Wave Action Plan. The death toll from India's heat waves decreased over four years: more than 2,000 people died in 2015, 375 in 2017 and 20 in 2018. "Officials say this is because the government has made an effort to reduce the death toll by encouraging residents to reduce or alter the time spent working on hot days and by providing free drinking water to hard-hit populations". The government also used water to cool streets and had police guard water tankers in Madhya Pradesh state after fights over supply turned deadly. Those measures cost a lot of money and water, and the government's resources were limited in 2019 by the country's national election. The heat wave may continue, as monsoon rains were delayed that year.

Impacts on migration
Around seven million people are projected to be displaced due to, among other factors, the submersion of parts of Mumbai and Chennai if global temperatures were to rise by 2 °C (3.6 °F). By 2050, India is expected to witness a significant increase in climate-related displacement, with around 45 million people compelled to migrate from their homes due to climate disasters, three times the current count of people displaced by extreme weather events. According to the "State of India's Environment 2022" report, India ranks as the fourth worst-affected country globally in terms of climate change-induced migration, with over three million people forced to abandon their residences in 2020–2021.
These statistics emphasize the escalating impact of climate change on migration patterns within the country. Villagers in India's northeastern state of Meghalaya are also concerned that rising sea levels will submerge neighbouring low-lying Bangladesh, resulting in an influx of refugees into Meghalaya, which has few resources to handle such a situation.

Mitigation

Greenhouse gas sinks
Land use, land-use change, and forestry absorbed 300 Mt of CO2eq in 2014, and in 2020 the total carbon stored in forests was 7,000 Mt.

Energy policy
The National Energy Plan is in accord with the Paris Agreement target of 2 °C global warming, but if India stopped building coal-fired power stations it would meet the 1.5 °C aspiration. India pledged to achieve 40% of its electric power generation from non-fossil-fuel energy by 2030. In its Biennial Update Report to the United Nations Framework Convention on Climate Change (UNFCCC) submitted in February, India said it has progressively continued to decouple economic growth from greenhouse gas emissions. India's emission intensity of gross domestic product (GDP) fell by 24% between 2005 and 2016. India is therefore on track to meet its voluntary declaration to reduce the emission intensity of GDP by 20–25% from 2005 levels by 2020, making India the only G20 nation to meet its climate goals. India's Intended Nationally Determined Contribution includes reducing emission intensity by a third by 2030.

India has adequate carbon-neutral resources, such as biomass, wind, solar and hydro power (including pumped storage), to achieve net zero carbon emissions. With accelerated coal plant closures and an anticipated surge in renewables, thermal power will account for only an estimated 42.7% of installed capacity across India by 2027, down dramatically from 66.8% in 2017. Cutting greenhouse gas emissions, and therefore air pollution, in India would have health benefits worth 4 to 5 times the cost, which would be the most cost-effective in the world. India has made significant strides in the energy sector, and the country is now a global leader in renewable energy.

Policies and legislation
The Indian Government as well as various state governments have taken steps in accordance with India's energy policy and the Paris Agreement, among them:
Doubling India's renewable energy target to 450 gigawatts (GW) by 2030
The National Solar Mission
Wind power in India

In 2008, India published its National Action Plan on Climate Change (NAPCC), which contains several goals for the country. These goals include, but are not limited to: covering one third of the country with forests and trees, increasing the renewable energy supply to 6% of the total energy mix by 2022, and the further maintenance of disaster management. All of these actions work to improve the resiliency of the country as a whole, which is important because India's economy is closely tied to its natural resource base and to climate-sensitive sectors such as agriculture, water, and forestry.

While presenting the fiscal year 2020–2021 state budget for the Indian state of Odisha, the state's Finance Minister Niranjan Pujari introduced a climate budget, which aims to track the expenses made by the government on climate change and on supporting mitigation and adaptation actions to address it. According to the document, it will help the government decide whether to redesign or safeguard existing projects based on their impact on climate change.
Odisha thus became the first state in India to introduce a climate budget. Niti Aayog is in the process of devising a policy framework and deployment mechanism for carbon capture and utilization or storage (CCUS) in India, to reduce greenhouse emissions per unit of economic activity. The right "to be free from adverse impacts of climate change" was legally recognized as a fundamental right in India by the Supreme Court in 2024; this decision may shape further climate legislation in India. To achieve the aims of the Paris Agreement, India must peak power-sector emissions by 2026. Until recently the country was expected to reach this target, but the recent governmental push for coal has undermined it.

Carbon emission trading and pricing
Carbon emission trading is yet to be implemented in India. However, related instruments such as energy saving certificates (PAT), various renewable purchase obligations (RPO), and renewable energy certificates (REC) are traded on the power exchanges regularly. India does not have a carbon tax, but since 2010 the country has had a tax on both domestically produced and imported coal, which powers more than half of its electricity generation. Originally set at ₹50 per tonne of coal, it was raised to ₹100 in 2014 and ₹200 in 2015. As of 2020 the coal tax stands at ₹400 per tonne.

International cooperation
As a party to the Paris Agreement, India is due to submit its first biennial transparency report to the UNFCCC by 2024, along with inventory figures in the standard format. In September 2021, India announced that it would submit a new Nationally Determined Contribution before COP26. At COP26, India set its latest target date, planning to be net-zero by 2070. This was the first time that a date for carbon neutrality had been given as part of India's climate policy. At COP26, Indian prime minister Narendra Modi announced five main commitments, called Panchamrit, as "India's gift to the world":
Reaching carbon neutrality by 2070.
Expanding energy capacity not coming from fossil fuels to 500 GW by 2030.
Cutting the carbon intensity of the economy by 45% by 2030.
Drawing half of the country's energy requirement from renewable sources by 2030.
Cutting 1 billion tonnes of GHG emissions from the amount projected for 2030.

The prime minister also proposed a new agenda, LIFE (Lifestyle for Environment), meaning changing lifestyles for the benefit of the environment. Even though the net-zero date is far behind those of China and the US, and India's government wants to continue using coal, Indian environmentalists and economists applauded the decision, describing it as bold climate action.

Adaptation
An ice stupa designed by Sonam Wangchuk brings glacial water to farmers in the Himalayan desert of Ladakh, India. A research project conducted between 2014 and 2018 in five districts (Puri, Khordha, Jagatsinghpur, Kendrapara and Bhadrak) of the Mahanadi Delta, Odisha, and two districts (North and South 24 Parganas) of the Indian Bengal Delta (which includes the Indian Sundarbans), West Bengal, provides evidence on the kinds of adaptations practiced by delta dwellers. In the Mahanadi delta, the top three adaptations practiced were changing the amount of fertiliser used on the farm, the use of loans, and the planting of trees around homes. In the Indian Bengal Delta, the top three adaptations were changing the amount of fertiliser used on the farm, making changes to irrigation practices, and the use of loans.
Migration as an adaptation option is practiced in both these deltas but is not considered a successful adaptation. In the Indian Sundarbans of West Bengal, farmers are cultivating salt-tolerant rice varieties, which have been revived to combat the increasing problem of soil salinity. Other agricultural adaptations include mixed farming, diversifying crops, rainwater harvesting, drip irrigation, the use of neem-based pesticide, and ridge-and-furrow land shaping techniques in which "the furrows help with drainage and the less-saline ridges can be used to grow vegetables". These have helped farmers to grow a second crop of vegetables besides the monsoon paddy crop. In the Puri district of Odisha, waterlogging is a hazard that affects people yearly. In Totashi village, many women are turning the "water logging in their fields to their advantage" by cultivating vegetables in the waterlogged fields, boosting their families' income and nutrition.

Education is an integral tool for adapting to climate change, and measures established to curb climate change should include the education system. By improving people's knowledge of climate change, it becomes easier for them to adopt mitigation measures, and a culture of good environmental practice can be instilled in the younger generation. The government must ensure that the systems of learning which undergird adaptation are supported.

Society and culture

Media coverage
A qualitative analysis of some mainstream Indian newspapers (particularly opinion and editorial pieces) during the release of the IPCC Fourth Assessment Report, and during the Nobel Peace Prize win by Al Gore and the IPCC, found that Indian media strongly pursue the frame of scientific certainty in their coverage of climate change. This is in contrast to the skepticism displayed by American newspapers at the time. Alongside this, Indian media highlight frames of energy challenge, social progress, public accountability and looming disaster. This sort of coverage finds parallels in European media narratives as well and helps build a transnational, globalized discourse on climate change. Another study found that the media in India are divided along the lines of a north–south, risk-responsibility discourse.

Activism
Calculations in 2021 showed that, to give the world a 50% chance of avoiding a temperature rise of 2 degrees or more, India should increase its climate commitments by 55%; for a 95% chance, it should increase them by 147%; and to give a 50% chance of staying below 1.5 degrees, by 191%. There have been school strikes for climate organised by activists such as Disha Ravi.

Tribal people in India's remote northeast planned to honor former U.S. Vice President Al Gore in 2007 with an award for promoting awareness of climate change, which they say will have a devastating impact on their homeland. Meghalaya, meaning "Abode of the Clouds" in Hindi, is home to the towns of Cherrapunji and Mawsynram, which are credited with being the wettest places in the world due to their high rainfall.
But scientists state that global climate change is causing these areas to experience increasingly sparse and erratic rainfall and a lengthened dry season, affecting the livelihoods of thousands of villagers who cultivate paddy and maize. Some areas are also facing water shortages. People are becoming aware of the ills of global warming. Taking the initiative on their own, people from Sangamner, Maharashtra (near Shirdi) have started a tree-planting campaign known as Dandakaranya, the Green Movement. It was started by the freedom fighter, the late Shri Bhausaheb Thorat, in 2005. To date, they have sown more than 12 million seeds and planted half a million plants.
Bouncing ball
The physics of a bouncing ball concerns the physical behaviour of a bouncing ball, particularly its motion before, during, and after impact against the surface of another body. Several aspects of a bouncing ball's behaviour serve as an introduction to mechanics in high school or undergraduate level physics courses. However, the exact modelling of the behaviour is complex and of interest in sports engineering. The motion of a ball is generally described by projectile motion (which can be affected by gravity, drag, the Magnus effect, and buoyancy), while its impact is usually characterized through the coefficient of restitution (which can be affected by the nature of the ball, the nature of the impacting surface, the impact velocity, rotation, and local conditions such as temperature and pressure). To ensure fair play, many sports governing bodies set limits on the bounciness of their ball and forbid tampering with the ball's aerodynamic properties. The bounciness of balls has been a feature of sports as ancient as the Mesoamerican ballgame.

Forces during flight and effect on motion
The motion of a bouncing ball obeys projectile motion. Many forces act on a real ball, namely the gravitational force (FG), the drag force due to air resistance (FD), the Magnus force due to the ball's spin (FM), and the buoyant force (FB). In general, one has to use Newton's second law, taking all forces into account, to analyze the ball's motion:

FG + FD + FM + FB = m a = m dv/dt = m d²r/dt²

where m is the ball's mass. Here, a, v, r represent the ball's acceleration, velocity, and position over time t.

Gravity
The gravitational force is directed downwards and is equal to

FG = m g

where m is the mass of the ball, and g is the gravitational acceleration, which on Earth varies between about 9.78 m/s² (at the equator) and 9.83 m/s² (at the poles). Because the other forces are usually small, the motion is often idealized as being only under the influence of gravity. If only the force of gravity acts on the ball, the mechanical energy will be conserved during its flight. In this idealized case, the equations of motion are given by

a(t) = −g ŷ
v(t) = v0 − g t ŷ
r(t) = r0 + v0 t − ½ g t² ŷ

where a, v, and r denote the acceleration, velocity, and position of the ball, and v0 and r0 are the initial velocity and position of the ball, respectively. More specifically, if the ball is bounced at an angle θ with the ground, the motion in the x- and y-axes (representing horizontal and vertical motion, respectively) is described, taking the bounce point as the origin, by

vx(t) = v0 cos θ,  x(t) = v0 t cos θ
vy(t) = v0 sin θ − g t,  y(t) = v0 t sin θ − ½ g t²

The equations imply that the maximum height (H), range (R) and time of flight (T) of a ball bouncing on a flat surface are given by

H = v0² sin²θ / (2g),  R = v0² sin(2θ) / g,  T = 2 v0 sin θ / g

Further refinements to the motion of the ball can be made by taking into account air resistance (and related effects such as drag and wind), the Magnus effect, and buoyancy. Because lighter balls accelerate more readily, their motion tends to be affected more by such forces.

Drag
Air flow around the ball can be either laminar or turbulent depending on the Reynolds number (Re), defined as

Re = ρ v D / μ

where ρ is the density of air, μ the dynamic viscosity of air, D the diameter of the ball, and v the velocity of the ball through the air. At a temperature of 20 °C, ρ ≈ 1.2 kg/m³ and μ ≈ 1.8×10⁻⁵ Pa·s. If the Reynolds number is very low (Re < 1), the drag force on the ball is described by Stokes' law:

FD = 6 π μ r v

where r is the radius of the ball. This force acts in opposition to the ball's motion (in the direction of −v). For most sports balls, however, the Reynolds number will be between 10⁴ and 10⁵, and Stokes' law does not apply.
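As a quick numerical illustration of the relations above, here is a minimal Python sketch; the example speed, angle and ball diameter are assumed values chosen for illustration, not figures from this article.

import math

G = 9.81  # standard gravitational acceleration (m/s^2)

def projectile(v0, theta_deg):
    """Maximum height H, range R and time of flight T for an ideal
    bounce at speed v0 (m/s) and angle theta_deg (degrees), gravity only."""
    th = math.radians(theta_deg)
    H = (v0 * math.sin(th)) ** 2 / (2 * G)
    R = v0 ** 2 * math.sin(2 * th) / G
    T = 2 * v0 * math.sin(th) / G
    return H, R, T

def reynolds(v, d, rho=1.2, mu=1.8e-5):
    """Reynolds number Re = rho*v*D/mu, with defaults for air at about 20 degrees C."""
    return rho * v * d / mu

H, R, T = projectile(10.0, 45.0)  # assumed example: 10 m/s at 45 degrees
print(f"H = {H:.2f} m, R = {R:.2f} m, T = {T:.2f} s")  # H = 2.55 m, R = 10.19 m, T = 1.44 s
print(f"Re = {reynolds(10.0, 0.22):.1e}")  # ~1.5e5 for a football-sized ball: drag-equation regime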
At these higher values of the Reynolds number, the drag force on the ball is instead described by the drag equation:

FD = ½ Cd ρ A v²

where Cd is the drag coefficient and A the cross-sectional area of the ball. Drag will cause the ball to lose mechanical energy during its flight, and will reduce its range and height, while crosswinds will deflect it from its original path. Both effects have to be taken into account by players in sports such as golf.

Magnus effect
The spin of the ball will affect its trajectory through the Magnus effect. According to the Kutta–Joukowski theorem, for a spinning sphere with an inviscid flow of air, the Magnus force is equal to

FM = (16/3) π² r³ ρ ω v

where r is the radius of the ball, ω the angular velocity (or spin rate) of the ball, ρ the density of air, and v the velocity of the ball relative to the air. This force is directed perpendicular to the motion and perpendicular to the axis of rotation (in the direction of ω × v). The force is directed upwards for backspin and downwards for topspin. In reality, flow is never inviscid, and the Magnus lift is better described by

FM = ½ CL ρ A v²

where ρ is the density of air, CL the lift coefficient, A the cross-sectional area of the ball, and v the velocity of the ball relative to the air. The lift coefficient is a complex factor which depends, amongst other things, on the ratio rω/v, the Reynolds number, and surface roughness. In certain conditions, the lift coefficient can even be negative, changing the direction of the Magnus force (the reverse Magnus effect).

In sports like tennis or volleyball, the player can use the Magnus effect to control the ball's trajectory (e.g. via topspin or backspin) during flight. In golf, the effect is responsible for slicing and hooking, which are usually a detriment to the golfer, but it also helps with increasing the range of a drive and other shots. In baseball, pitchers use the effect to create curveballs and other special pitches. Ball tampering is often illegal, and is often at the centre of cricket controversies such as the one between England and Pakistan in August 2006. In baseball, the term "spitball" refers to the illegal coating of the ball with spit or other substances to alter its aerodynamics.

Buoyancy
Any object immersed in a fluid such as water or air will experience an upwards buoyant force. According to Archimedes' principle, this buoyant force is equal to the weight of the fluid displaced by the object. In the case of a sphere of radius r in a fluid of density ρ, this force is equal to

FB = (4/3) π r³ ρ g

The buoyant force is usually small compared to the drag and Magnus forces and can often be neglected. However, in the case of a basketball, the buoyant force can amount to about 1.5% of the ball's weight. Since buoyancy is directed upwards, it will act to increase the range and height of the ball.
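Before turning to impact, the following sketch gives a rough feel for the relative size of the four in-flight forces. The mass, radius, speed and drag and lift coefficients are assumed, roughly football-like values chosen for illustration; none of them come from this article.

import math

RHO_AIR, G = 1.2, 9.81   # air density (kg/m^3) and gravity (m/s^2)

# Assumed, roughly football-like parameters, for illustration only.
m, r, v = 0.43, 0.11, 25.0   # mass (kg), radius (m), speed (m/s)
Cd, CL = 0.25, 0.25          # assumed drag and lift coefficients

A = math.pi * r ** 2         # cross-sectional area (m^2)

FG = m * G                                     # gravity
FD = 0.5 * Cd * RHO_AIR * A * v ** 2           # drag equation
FM = 0.5 * CL * RHO_AIR * A * v ** 2           # Magnus lift, same form with CL
FB = (4 / 3) * math.pi * r ** 3 * RHO_AIR * G  # Archimedes' buoyancy

for name, F in [("gravity", FG), ("drag", FD), ("Magnus", FM), ("buoyancy", FB)]:
    print(f"{name:9s} {F:6.3f} N")
# With these assumptions, drag and Magnus are the same order of magnitude
# as gravity, while buoyancy is only a couple of percent of the ball's weight.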
Impact
When a ball impacts a surface, the surface recoils and vibrates, as does the ball, creating both sound and heat, and the ball loses kinetic energy. Additionally, the impact can impart some rotation to the ball, transferring some of its translational kinetic energy into rotational kinetic energy. This energy loss is usually characterized (indirectly) through the coefficient of restitution (or COR, denoted e):

e = |vf − uf| / |vi − ui|

where vf and vi are the final and initial velocities of the ball, and uf and ui are the final and initial velocities of the impacting surface, respectively. In the specific case where a ball impacts on an immovable surface, the COR simplifies to

e = |vf| / |vi|

For a ball dropped against a floor, the COR will therefore vary between 0 (no bounce, total loss of energy) and 1 (perfectly bouncy, no energy loss). A COR value below 0 or above 1 is theoretically possible, but would indicate that the ball went through the surface (e < 0), or that the surface was not "relaxed" when the ball impacted it (e > 1), as in the case of a ball landing on a spring-loaded platform.

To analyze the vertical and horizontal components of the motion, the COR is sometimes split into a normal COR (ey) and a tangential COR (ex), defined as

ey = |vyf − uyf| / |vyi − uyi|
ex = |(vxf − r ωf) − (uxf − R Ωf)| / |(vxi − r ωi) − (uxi − R Ωi)|

where r and ω denote the radius and angular velocity of the ball, while R and Ω denote the radius and angular velocity of the impacting surface (such as a baseball bat). In particular, rω is the tangential velocity of the ball's surface, while RΩ is the tangential velocity of the impacting surface. These are especially of interest when the ball impacts the surface at an oblique angle, or when rotation is involved.

For a straight drop on the ground with no rotation, with only the force of gravity acting on the ball, the COR can be related to several other quantities by

e = |vf| / |vi| = √(Kf/Ki) = √(Uf/Ui) = √(Hf/Hi) = Tf/Ti

Here, K and U denote the kinetic and potential energy of the ball, H is the maximum height of the ball, and T is the time of flight of the ball. The "i" and "f" subscripts refer to the initial (before impact) and final (after impact) states of the ball. Likewise, the energy loss at impact can be related to the COR by

ΔK / Ki = 1 − e²

The COR of a ball can be affected by several things, mainly:
the nature of the impacting surface (e.g. grass, concrete, wire mesh)
the material of the ball (e.g. leather, rubber, plastic)
the pressure inside the ball (if hollow)
the amount of rotation induced in the ball at impact
the impact velocity

External conditions such as temperature can change the properties of the impacting surface or of the ball, making them either more flexible or more rigid. This will, in turn, affect the COR. In general, the ball will deform more at higher impact velocities and will accordingly lose more of its energy, decreasing its COR.
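The height relation above suggests a simple drop-test estimate of the COR; here is a minimal sketch, assuming a rigid, immovable floor and a straight vertical drop with no rotation. The drop and rebound heights are assumed example values.

import math

def cor_from_heights(h_drop, h_bounce):
    """COR for a straight drop on a rigid floor: e = sqrt(Hf / Hi)."""
    return math.sqrt(h_bounce / h_drop)

e = cor_from_heights(1.00, 0.64)   # assumed example drop test (metres)
loss = 1 - e ** 2                  # fraction of kinetic energy lost at impact
print(f"e = {e:.2f}, energy lost = {loss:.0%}")  # e = 0.80, energy lost = 36%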
Spin and angle of impact
Upon impacting the ground, some translational kinetic energy can be converted to rotational kinetic energy and vice versa, depending on the ball's impact angle and angular velocity. If the ball moves horizontally at impact, friction will have a "translational" component in the direction opposite to the ball's motion; for a ball moving to the right, this component pushes the ball to the left. Additionally, if the ball is spinning at impact, friction will have a "rotational" component in the direction opposite to the ball's rotation; for a ball spinning clockwise, the point impacting the ground moves to the left with respect to the ball's center of mass, so the rotational component of friction pushes the ball to the right. Unlike the normal force and the force of gravity, these frictional forces exert a torque on the ball and change its angular velocity (ω).

Three situations can arise:
If a ball is propelled forward with backspin, the translational and rotational friction will act in the same direction. The ball's angular velocity will be reduced after impact, as will its horizontal velocity, and the ball is propelled upwards, possibly even exceeding its original height. It is also possible for the ball to start spinning in the opposite direction, and even to bounce backwards.
If a ball is propelled forward with topspin, the translational and rotational friction will act in opposite directions, and what exactly happens depends on which of the two components dominates. If the ball is spinning much more rapidly than it is moving, rotational friction will dominate: the ball's angular velocity will be reduced after impact, but its horizontal velocity will be increased. The ball will be propelled forward but will not exceed its original height, and will keep spinning in the same direction.
If the ball is moving much more rapidly than it is spinning, translational friction will dominate: the ball's angular velocity will be increased after impact, but its horizontal velocity will be decreased. The ball will not exceed its original height and will keep spinning in the same direction.

If the surface is inclined by some amount θ, the entire picture is rotated by θ, but the force of gravity remains pointing downwards (forming an angle θ with the surface). Gravity then has a component parallel to the surface, which contributes to friction and thus to rotation.

In racquet sports such as table tennis or racquetball, skilled players will use spin (including sidespin) to suddenly alter the ball's direction when it impacts a surface, such as the ground or their opponent's racquet. Similarly, in cricket, there are various methods of spin bowling that can make the ball deviate significantly off the pitch.

Non-spherical balls
The bounce of an oval-shaped ball (such as those used in gridiron football or rugby football) is in general much less predictable than the bounce of a spherical ball. Depending on the ball's alignment at impact, the normal force can act ahead of or behind the centre of mass of the ball, and friction from the ground will depend on the alignment of the ball, as well as on its rotation, spin, and impact velocity. Where the forces act with respect to the centre of mass of the ball changes as the ball rolls on the ground, and all forces can exert a torque on the ball, including the normal force and the force of gravity. This can cause the ball to bounce forward, bounce back, or sideways. Because it is possible to transfer some rotational kinetic energy into translational kinetic energy, it is even possible for the COR to be greater than 1, or for the forward velocity of the ball to increase upon impact.

Multiple stacked balls
A popular demonstration involves the bounce of multiple stacked balls. If a tennis ball is stacked on top of a basketball, and the two of them are dropped at the same time, the tennis ball will bounce much higher than it would have if dropped on its own, even exceeding its original release height. The result is surprising as it apparently violates conservation of energy. However, upon closer inspection, the basketball does not bounce as high as it would have if the tennis ball had not been on top of it, having transferred some of its energy into the tennis ball and propelled it to a greater height.

The usual explanation involves considering two separate impacts: the basketball impacting with the floor, and then the basketball impacting with the tennis ball. Assuming perfectly elastic collisions, a basketball impacting the floor at 1 m/s would rebound at 1 m/s.
The tennis ball, still moving downward at 1 m/s, would then have a relative impact velocity of 2 m/s, which means it would rebound at 2 m/s relative to the basketball, or 3 m/s relative to the floor, tripling its rebound velocity compared to impacting the floor on its own. Since rebound height is proportional to the square of the rebound velocity, this implies that the ball would bounce to 9 times its original height. In reality, due to inelastic collisions, the tennis ball will increase its velocity and rebound height by a smaller factor, but it will still bounce faster and higher than it would have on its own. While the assumption of separate impacts is not actually valid (the balls remain in close contact with each other during most of the impact), this model will nonetheless reproduce experimental results with good agreement, and is often used to understand more complex phenomena such as the core collapse of supernovae or gravitational slingshot manoeuvres.

Sport regulations
Several sports governing bodies regulate the bounciness of a ball in various ways, some direct, some indirect:
AFL: Regulates the gauge pressure of the football to lie within a specified range.
FIBA: Regulates the gauge pressure so the basketball bounces between 1035 mm and 1085 mm (measured at the bottom of the ball) when it is dropped from a height of 1800 mm (bottom of the ball). This corresponds to a COR between 0.758 and 0.776.
FIFA: Regulates the gauge pressure of the soccer ball to be between 61 and 111 kPa at sea level.
FIVB: Regulates the gauge pressure of the volleyball to be between 29.4 and 31.9 kPa for indoor volleyball, and between 17.2 and 22.1 kPa for beach volleyball.
ITF: Regulates the height of the tennis ball bounce when dropped on a "smooth, rigid and horizontal block of high mass". Different types of ball are allowed for different types of surfaces; when dropped from a specified height, the bounce must fall within a prescribed range for each type. This roughly corresponds to a COR of 0.735–0.775 (Type 1 balls), 0.728–0.762 (Type 2 and Type 3 balls), and 0.693–0.728 (High Altitude balls) when dropped on the testing surface.
ITTF: Regulates the playing surface so that the table tennis ball bounces approximately 23 cm when dropped from a height of 30 cm. This roughly corresponds to a COR of about 0.876 against the playing surface.
NBA: Regulates the gauge pressure of the basketball to be between 7.5 and 8.5 psi (51.7 to 58.6 kPa).
NFL: Regulates the gauge pressure of the American football to be between 12.5 and 13.5 psi (86 to 93 kPa).
R&A/USGA: Limits the COR of the golf ball directly; the COR should not exceed 0.83 against a golf club.

The pressure of an American football was at the center of the deflategate controversy. Some sports do not regulate the bouncing properties of balls directly, but instead specify a construction method. In baseball, the introduction of a cork-based ball helped to end the dead-ball era and trigger the live-ball era.
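Several of the COR figures quoted above follow directly from e = √(Hf/Hi); the short sketch below reproduces the FIBA and ITTF values from the drop and rebound heights given in their rules.

import math

def cor_from_heights(h_drop, h_bounce):
    """COR from a drop test: e = sqrt(h_bounce / h_drop)."""
    return math.sqrt(h_bounce / h_drop)

# FIBA: dropped from 1800 mm, the ball must bounce between 1035 mm and 1085 mm.
print(f"FIBA: {cor_from_heights(1800, 1035):.3f} to {cor_from_heights(1800, 1085):.3f}")
# FIBA: 0.758 to 0.776

# ITTF: the ball bounces about 23 cm when dropped from 30 cm.
print(f"ITTF: {cor_from_heights(30, 23):.3f}")  # ITTF: 0.876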
Climate change in Indonesia
Due to its geographical and natural diversity, Indonesia is one of the countries most susceptible to the impacts of climate change; Jakarta has been listed as the world's most vulnerable city with regard to climate change. Indonesia is also among the countries that have contributed most to greenhouse gas emissions, owing to its high rate of deforestation and reliance on coal power. Made up of more than 17,000 islands and with a long coastline, Indonesia is particularly vulnerable to the effects of rising sea levels and extreme weather events such as floods, droughts, and storms. Its vast areas of tropical forest are vital in mitigating climate change by absorbing carbon dioxide from the atmosphere. Projected impacts on Indonesia's agricultural sector, national economy and health are also significant issues. Indonesia has committed to reducing its emissions within the frameworks of the Copenhagen Accord and the Paris Agreement. Despite the significant impacts of climate change on the country, surveys show that Indonesia has a high proportion of climate change deniers.

Greenhouse gas emissions
Indonesia is one of the world's largest emitters of greenhouse gases due to its large-scale deforestation and forest degradation. Since 2010, Indonesia has been actively involved in the REDD+ program (Reducing Emissions from Deforestation and Forest Degradation), which incentivizes developing countries to reduce deforestation and forest degradation to lower their greenhouse gas emissions. The country strives to achieve these goals by collaborating with national and local stakeholders, setting up a monitoring system to track emissions and forest cover, and integrating policies and institutional frameworks. The REDD+ program not only reduces Indonesia's greenhouse gas emissions but also protects biodiversity and benefits local communities. While the program looks promising for the future, its implementation in Indonesia is hindered by various obstacles, such as poor governance and institutional capacity, insufficient funding, and tenure issues.

Apart from REDD+, Indonesia has the potential to leverage other forest-based climate change mitigation measures such as sustainable forest management and agroforestry. These are important because they ensure that forests are managed in a way that balances economic, social, and environmental objectives, promoting the conservation and sustainable use of forest resources while also maintaining their carbon stocks.

Despite the goal of reducing greenhouse gas emissions by 29% by the end of 2030, Indonesia has made little progress in reducing emissions in recent years. This can be traced back to a lack of financial support, the prevalence of coal-fired power plants, and ongoing deforestation. From 2014 to 2019, Indonesia's emissions increased by 2.2%. To counter these challenges, the Indonesian government aims to increase the use of renewable energy sources and to phase out coal. Achieving this will require more concrete action and effective policies to address greenhouse gas emissions.

Impacts on the natural environment

Temperature and climate change projections
Indonesia is almost entirely dominated by a tropical climate, with air humidity of up to 90% and hot average temperatures of 28 °C in warmer areas. Precipitation occurs mainly in low-lying areas and in regions of higher altitude with colder temperatures.
During El Niño events there is less precipitation, and during La Niña events there is more rainfall. The climate can be divided into a wet season from November to April and a dry season from May to October. According to climate projections, average temperatures will rise by 1.6 °C by 2050 and by 3.9 °C by 2100 under a high-emissions scenario with no limits on greenhouse gas emissions. Precipitation estimates are highly uncertain under all scenarios because of the diverse regional patterns found throughout the country. It is estimated that, under a high-emissions scenario centered at 2050 (with respect to the reference period 1985–2014), heatwaves will be around 8% longer and 98% more frequent. This entails more extreme weather events such as droughts, as well as increased runoff leading to flooding and other destructive processes.

Marine ecosystems
As Indonesia forms the largest archipelago in the world, marine environments are of high importance for the livelihoods and food security of millions of people, and changing climate trends gravely impact these ecosystems. Ocean warming and enrichment in CO2 concentrations, due to higher greenhouse gas content in the atmosphere, affect the health of coral reef areas and can lead to bleaching and ultimately the death of the ecosystem. This in turn affects the health, diversity and abundance of species in the whole area and in indirectly connected marine parts of the country. Not only does the warming and acidification of seawater cause lasting harm to coral reefs through bleaching, it also triggers declines in plankton abundance in general. This changes the balance of the entire food web, since plankton serves as a food source for a variety of marine organisms. Due to the increased incidence of extreme weather events such as storms and typhoons predicted for the future climate, vulnerable marine environments like coral reefs will experience further damage.

Rising sea levels are already particularly challenging for Indonesia. Estimates show that around 42 million people living less than 10 meters above sea level are at risk. Effects will include coastal erosion, flooding, and the loss of habitats crucial for biodiversity, such as mangrove forests, which create breeding grounds for fish and a high number of other marine species. If these areas of high biodiversity decrease in size and abundance, fish populations will decline. Increased temperatures coupled with changing climatic conditions may also alter ocean currents and the distribution of fish populations, creating fluctuations in the availability and distribution of stocks and causing imbalances in the food web.

Terrestrial environment
The impact of climate change upon the terrestrial environment of Indonesia is varied. Indonesia has one of the highest rates of deforestation in the world, much of it driven by the agricultural and logging industries. A study in 2022 estimated that the emissions impact from deforestation fires in Indonesia and Brazil was 3.7 (±0.4) and 1.9 (±0.2) Gt CO2eq in 2019 and 2020, respectively. Consequently, Indonesia's terrestrial environment has suffered from land-use changes, deforestation, changes to the groundwater table, reduction in biodiversity and structural changes to ecosystems. An increase in extreme weather events due to climate change, notably forest fires, has further contributed to Indonesia's greenhouse gas emissions.
The estimated anthropogenic effects upon bioregions have been measured using Human Footprint analysis. The human footprint is a measure of the pressure that human populations, transportation infrastructure, housing and land transformation place upon the integrity of natural systems and environments. Between 2012 and 2017, the human footprint of all bioregions within national parks, and in a 10 km buffer area outside the parks, was reported to have increased in Indonesia. Around 2.2 million ha of degraded forest exists within "protected areas" in Indonesia, accounting for about 10% of total protected areas. The majority of peatlands in Indonesia have been subject to logging, agricultural expansion and plantation, resulting in the drainage of peat. The drainage of peatlands is associated with increased erosion, the release of carbon dioxide due to the exposure of organic material, loss of biodiversity, and changes in the topography of the landscape through processes such as subsidence.

Peatlands and fire
In Indonesia, peatlands began to accumulate following the last glacial period as a result of extremely wet climate conditions. There are between 160,000 and 270,000 km² of peatlands, the biggest part of which is located on the sub-coastal lowlands. Not only are they home to numerous species, but they serve as a natural carbon sink, are used for agriculture and settlements, act as a water control system, and stabilize the landscape against erosion. In recent decades, extensive degradation due to human activities has risen in Indonesia, making the nation the fourth-largest contributor to carbon dioxide emissions. Peatlands are vital wetland ecosystems on land, where waterlogged conditions inhibit the complete decomposition of plant material. The organic matter accumulates as peat, which can store a large amount of carbon, and peatlands are known to play a crucial role in the mitigation of climate change because they sequester carbon from the atmosphere. In the last 20 years (2001–2021), however, an increase in fires has led to an 18% decrease in tree cover in Indonesia, producing 19.7 Gt of CO2 emissions. Over 90% of this tree cover loss is due to deforestation. The burning of peatlands is a major source of carbon emissions, releasing carbon dioxide and other greenhouse gases which contribute to climate change. These peat fires are responsible for up to 5% of the world's total annual emissions, as well as significant air pollution that can have serious health implications for local communities. It is therefore essential that effective strategies are put in place to prevent and manage peatland burning, both now and in the future.

Biodiversity
Indonesia is home to a wide variety of flora and fauna. The main factors in the loss of biodiversity in Indonesia are habitat degradation, fragmentation, introduced species, overexploitation, climate change, fires, and economic and political crisis. Indonesia is home to about 12% of the world's mammal species (515 species), ranking it second for faunal diversity after Brazil. The cumulative effect of climate change and anthropogenic activities has contributed to the decline of animal populations and biodiversity in Indonesia. It has been estimated that 25% of Indonesia's native mammals are endangered. The population of Sumatran elephants is estimated to have dropped by 35% since the 1990s. Tiger and Sumatran primate population levels have not been maintained in protected areas.
The Sumatran tiger and orangutan are also critically endangered in Indonesia, despite efforts to increase forest density in nature parks. An estimated 80% of disasters in Indonesia from 1998 to 2018 were climate-related, among them flooding (18%), wind storms (26%), landslides (22%) and drought (8%). The increased frequency of such extreme weather events can have direct and indirect impacts on species richness through habitat destruction, fragmentation, habitat loss and the alteration of ecosystem processes. Indonesia has about 10% of the world's flowering plant species, 16% of the world's reptiles and 17% of the total species of birds. Although Indonesia ranks highly on species richness and diversity, logging, deforestation, agricultural practices and disasters place its species under constant threat.

Sea level rise due to climate change has been associated with a loss of mangrove forest habitat. Indonesia contains 24% of the world's mangrove forests; over the past three decades, 40% of its mangroves have been degraded or lost. These forests provide a breeding ground for many fish, marine species, birds and reptiles. Damage to the mangrove forests on the east coast of North Sumatra has resulted in two-thirds of the area's fish species becoming harder to catch. Indonesia has implemented several initiatives to restore mangrove habitats in an effort to preserve ecosystems and stabilise the fauna populations that rely on the mangroves as habitat, such as the proboscis monkey and the estuarine crocodile.

Sea level rise and land subsidence
The mean global sea level rise has been 3–10 mm per year, while the subsidence rate for Jakarta has been around 75–100 mm per year, making the relative rise in sea level nearly 10 cm per year. Continued carbon emissions at the 2019 rate, in combination with unlicensed groundwater extraction, are predicted to submerge 95% of Northern Jakarta by 2050. Some studies have suggested that climate change-induced sea level rise may be minimal compared to the rise induced by the lack of water infrastructure and rapid urban development. The Indonesian government views land subsidence, mostly due to the over-extraction of groundwater, as the primary threat to Jakarta's infrastructure and development.

Dutch urban planning is in large part to blame for today's water crisis. Canals built during the colonial era intentionally subdivided the city, segregating indigenous people from Europeans and providing clean water access and infrastructure almost exclusively to European settlers. Due to the lack of access to clean water in Jakarta outside of wealthier communities, many locals have been pushed to extract groundwater without permits. Jakarta's growing population and rapid urban development have been eating away at the surrounding agricultural land, destroying natural flood mitigation such as forests and polluting the river systems relied on by predominantly poorer locals, pushing them to depend on groundwater. In 2019, water pipes in Jakarta reached only sixty percent of the population. Although this is a very pressing issue in the city, almost half of the local population is unaware of the link between groundwater extraction, land subsidence and increased flooding, making an organized approach to the issue much more difficult. The issue has persisted so long that Indonesia has confirmed the relocation of the nation's capital from Jakarta to a new city in East Kalimantan on the island of Borneo, citing land subsidence as a primary reason. The move to Borneo, in part, minimizes the effects of natural disasters due to its strategic location, but the rapid pace of the planned relocation may exacerbate environmental issues on the island in the near future, particularly biodiversity loss.
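The "nearly 10 cm per year" figure above is simply the sum of global sea-level rise and local land subsidence; here is a minimal arithmetic sketch in Python using the ranges quoted in the text.

# Relative sea-level rise at Jakarta = global sea-level rise + local subsidence.
# Ranges below are the ones quoted in the text, in mm per year.
global_rise = (3, 10)      # global mean sea-level rise
subsidence = (75, 100)     # Jakarta land subsidence

low = global_rise[0] + subsidence[0]
high = global_rise[1] + subsidence[1]
print(f"Relative rise: {low}-{high} mm/yr (~{high / 10:.0f} cm/yr at the upper end)")
# Relative rise: 78-110 mm/yr (~11 cm/yr at the upper end)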
Impacts upon people

Agriculture
The agricultural sector forms the basis of income for millions of Indonesians. The country's top export products are palm oil, cocoa, coffee, rice, spices, tea, coconuts, fruit and tobacco. Temperatures, potentially rising by up to 1.5 °C by 2050 in a high-emissions scenario, have a direct influence on agricultural productivity and thereby on local food security. Higher heat stress combined with long-lasting and intensifying droughts reduces yields and comes with a higher incidence of pests and plant diseases. Depending on the region, future climate projections show complex variability in rainfall. Increasingly severe extreme events like floods, and locally higher average precipitation, will lead to a surplus of water in some areas, while generally higher temperatures along with intense droughts will create large deficits in others. These disparities will directly impact agricultural productivity as well as the quantity and quality of the goods that can be harvested. Connected to missing or excessive rainfall, soil degradation significantly reduces the fertility of land and therefore agricultural productivity, causing economic losses.

To harvest efficiently, it is becoming increasingly important to develop efficient water strategies for the irrigation of crops. Currently, more than half of the total irrigated agricultural area is estimated to have insufficiently maintained water infrastructure. Given that agricultural water demand is estimated to rise to 52.1%, these inadequate water management conditions pose a threat to both the amount of water that can be supplied and its quality. For areas that depend heavily on irrigation systems, this is highly problematic. In 2024, Indonesian President Joko Widodo unveiled a plan to swiftly deploy 20,000 water pumps nationwide to shield crops from extreme weather and bolster food security, with a focus on regions that produce rice, a staple food for over 270 million Indonesians.

Fishery
Indonesia's fishing sector contributed 2.77% of the country's GDP in 2021 and employs around 12 million people directly and indirectly. With over 5.8 million km² of sea, Indonesia is home to diverse habitats such as coral reefs, mangroves, estuaries and deep sea, which enable diverse fishery activity. With this come overfishing, illegal fishing and, in many places, insufficient management of fishing authorization. Due to climate change, fish catch potential is estimated to fall by around 20.3% if temperatures rise by 1.5 °C by 2050, and with warmer waters the acidification of the ocean increases substantially. In the private sector, fishing represents an important part of Indonesian culture. Traditional methods and equipment will no longer be safe or sufficient in many parts of the country, given the changing climatic circumstances and a higher vulnerability to natural catastrophes.
The application of adaptive methods should therefore be reinforced so that sustainable small-scale fishing can remain self-sufficient in the future. In the 2020s, seaweed farming along the coasts of Eastern Indonesia has been negatively impacted by ongoing climate change, with declines in revenue and seaweed harvests occurring as a result. Mangrove ecosystems are also being rapidly transformed into aquaculture units. Since Indonesia has the highest mangrove coverage on the planet, the degradation and deforestation of its mangrove environments is particularly problematic, as this type of ecosystem serves as a major carbon sink and creates natural barriers protecting inland areas during extreme weather events.

Infrastructure
The increased frequency of flooding, heavy storm events and sea level rise are the major threats of climate change to infrastructure in Indonesia. Currently, sea level rise is approximately 3.9 ± 0.4 mm per year. Experts predict that before 2050, thousands of islands and houses located along coastal areas in Indonesia will disappear. A recent analysis conducted by one of Indonesia's biggest newspapers estimates that 199 out of 514 cities and districts could be affected by tidal flooding by 2050. Cracking of housing, sinking, tilting of buildings and drainage problems are examples of infrastructure damage associated with flooding and subsidence. An increased frequency of heavy storms is further associated with infrastructure damage, building loss and the displacement of people from their homes and jobs. Expenditure will be required to invest in flood protection strategies, rebuild roads and buildings, and relocate people out of affected areas.

Forestry and mining
Indonesia is a country abundant in natural resources, with strong industries linked to forestry and mining. These industries have been heavily affected by climate change (temperature increases, changes in precipitation patterns, forest degradation, and more frequent and intense forest fires), which in turn has had an immense impact on the environment. For example, deforestation contributes to global greenhouse gas emissions, which accelerates climate change even further, as well as destroying animal habitats and biodiversity. Such effects of climate change have posed a direct threat to Indonesia's forestry industry, hindering its development and limiting its potential.

Mining is an important industry in Indonesia; the country is a major producer of coal, gold, and nickel. However, mining carries significant environmental risks, including water pollution, soil erosion, and deforestation. Climate change is exacerbating these risks further, with changing rainfall patterns leading to reduced water availability along with an increased risk of flooding and landslides. Additionally, deforestation and mining activities release greenhouse gases such as carbon dioxide into the atmosphere, which contribute to global warming. This highlights the importance of sustainable mining and forestry practices, which minimize environmental damage while also helping to slow down climate change. Indonesia has taken steps to address the interrelated issues of climate change and the forestry and mining industries: to mitigate deforestation, the government has implemented the Indonesia Forest Moratorium and the REDD+ program, as well as regulations regarding environmental impact assessments and the monitoring of mining activities.
In addition, since these industries themselves contribute to climate change, addressing these impacts requires a collaborative effort from all stakeholders (government, industry, civil society) to promote sustainable practices, reduce greenhouse gas emissions and ultimately create a more sustainable future for Indonesia.

Tourism and trade

Tourism

Tourism accounts for approximately 4% of Indonesia's total economy. Climate change is expected to affect the tourism sector in a multitude of ways. Sea level rise will limit the locations of accommodation available to incoming tourists and disproportionately impact low-lying islands that provide tourism services. Tidung Island, Bidadari Island and Pramuka Island are examples of coastal tourism hotspots in Indonesia that might be affected by rising sea levels. A recent study found that a 1% increase in temperature and in relative humidity is associated with a decrease in the number of international tourists to Indonesia of 1.37% and 0.59% respectively. These findings can inform climate change adaptation policies for policymakers and climate change experts in Indonesia. The Minister for Tourism and Creative Economy has established a campaign called the ‘Every Step Matters’ movement, which aims to reduce carbon dioxide emissions from the tourism sector by up to 50% by 2030 and to achieve zero emissions by 2045.

Trade

Trade is expected to be affected by climate change on both a local and a national scale. On a local level, potential consequences include the reduced production capacity of farms and the disruption of local transportation routes by more frequent extreme weather events. A notable example of how climate change is affecting trade is the agricultural industry: rising temperatures, changing precipitation patterns and more frequent extreme weather events threaten food security and crop yields, thereby affecting the efficiency of transportation systems for importing and exporting goods, the quantity of goods produced, and supply chain networks. On a national level, the increased frequency of weather events such as floods and heavy storms has the potential to disrupt supply chain networks, increase delays and the cost of goods, and reduce the overall efficiency of trading systems.

Health impacts

The effects of climate change can also be seen in the health of people in Indonesia (heat-related illnesses, respiratory disease, vector-borne disease, waterborne disease, malnutrition). Several studies show correlations between climate change and health issues such as respiratory disease, malaria transmission, and increased risk of vector-borne disease. Poor water and air quality and malnutrition are further indirect effects of climate change on people's health. Collectively, these studies demonstrate that urgent action is necessary both to limit further damage from climate change and to adapt current public health strategies accordingly.

Mitigation and adaptation

Policies and legislation

Indonesia has committed to reducing its greenhouse gas emissions since the Conference of Parties (COP) 15 of 2009, more commonly known as the Copenhagen Summit. Regarding mitigation, Indonesia pledged to reduce its greenhouse gas emissions by 26% on its own, and by 41% with international assistance, by 2020.
Indonesia has established a payment for ecosystem services (PES) scheme to encourage the uptake of climate-friendly practices. The program focuses on assisting local and rural communities and encouraging sustainable agricultural practices; offering monetary incentives to farmers helps build resilience in the landscape and reduces the risk of soil erosion, forest fires and landslides. The government has also maintained a moratorium on forest clearing permits, first issued in 2011; this policy has been labeled as ‘propaganda’, and activists are skeptical that the renewed moratorium will do much to reduce the rate of deforestation. Indonesia has further established a forest conservation program that aims to create a number of protected national parks, wildlife reserves and forest conservation areas. In 2015, the Indonesian government submitted its Intended Nationally Determined Contributions (INDCs) to the United Nations Framework Convention on Climate Change (UNFCCC). Indonesia's INDC outlined its commitment to reducing greenhouse gas emissions by 29% by 2030, compared to business-as-usual emissions. At the state level, Indonesia is implementing policies such as feed-in tariffs for renewable energy producers, tax incentives for renewable energy projects and the development of geothermal power plants to achieve these targets.

Paris Agreement

Indonesia is a signatory to the Paris Agreement, committing to reduce its greenhouse gas emissions by 29% by 2030. It has further agreed to reduce greenhouse gas emissions from deforestation and forest degradation by 90% by 2030, which also includes restoring 12 million hectares of degraded peatlands and forest. Indonesia is committed to transitioning to greener energy sources, aiming to increase the share of renewables in its energy mix to 23% by 2025 and 31% by 2030. However, the country is still a long way from achieving these targets. Indonesia has taken some action to reduce greenhouse gas emissions from deforestation and peatland areas by establishing a One Map policy to improve monitoring and conflict resolution between stakeholders. According to Global Forest Watch, Indonesia lost 4.3 million hectares of tree cover between 2001 and 2020. Regarding progress in adopting renewable energy sources, Indonesia's renewable energy mix was 9.8% in 2015 and increased to 11.2% in 2020. Regarding national greenhouse gas emissions, Indonesia emitted 602.6 million tonnes of carbon dioxide into the atmosphere in 2021, making it one of the largest greenhouse gas emitters among developing nations. Although Indonesia has made progress in decreasing its greenhouse gas emissions, additional assistance and work are required to meet its 2030 target.

Society and culture

A 2019 survey by YouGov and the University of Cambridge concluded that, at 18%, Indonesia has "the biggest percentage of climate deniers, followed by Saudi Arabia (16 percent) and the U.S. (13 percent)." Climate education is not a part of the school curriculum.
Physical sciences
Climate change
Earth science
47376875
https://en.wikipedia.org/wiki/Past%20sea%20level
Past sea level
Global or eustatic sea level has fluctuated significantly over Earth's history. The main factors affecting sea level are the amount and volume of available water and the shape and volume of the ocean basins. The primary influences on water volume are the temperature of the seawater, which affects density, and the amounts of water retained in other reservoirs such as rivers, aquifers, lakes, glaciers, polar ice caps and sea ice. Over geological timescales, changes in the shape of the oceanic basins and in land/sea distribution affect sea level. In addition to eustatic changes, local changes in sea level are caused by uplift and subsidence of the Earth's crust. Over geologic time, sea level has fluctuated by more than 300 metres, possibly by more than 400 metres. The main reasons for sea level fluctuations in the last 15 million years are the Antarctic ice sheet and Antarctic post-glacial rebound during warm periods. The current sea level is about 130 metres higher than the historical minimum. Historically low levels were reached during the Last Glacial Maximum (LGM), about 20,000 years ago. The last time the sea level was higher than today was during the Eemian, about 130,000 years ago. Over a shorter timescale, the low level reached during the LGM rebounded in the early Holocene, between about 14,000 and 6,500 years ago, producing a 110 m sea level rise. Sea levels have been comparatively stable over the past 6,500 years, with a 0.50 m sea level rise over the past 1,500 years. For example, about 10,200 years ago the last land bridge between mainland Europe and Great Britain was submerged, leaving behind a salt marsh. By 8,000 years ago the marshes were drowned by the sea, leaving no trace of any former dry land connection. Observational and modeling studies of mass loss from glaciers and ice caps indicate a contribution to sea-level rise of 2 to 4 cm over the 20th century.

Glaciers and ice caps

Each year water from the entire surface of the oceans falls onto the Antarctic and Greenland ice sheets as snowfall. Slightly more water returns to the ocean in icebergs, from ice melting at the edges, and from rivers of meltwater flowing from the ice sheets to the sea. The change in the total mass of ice on land, called the mass balance, is important because it causes changes in global sea level. High-precision satellite gravimetry has determined that in 2006, the Greenland and Antarctic ice sheets experienced a combined mass loss of 475 ± 158 Gt/yr, equivalent to 1.3 ± 0.4 mm/yr of sea level rise. Notably, the acceleration in ice sheet loss over the period 1988–2006 was 22 ± 1 Gt/yr² for Greenland and 14.5 ± 2 Gt/yr² for Antarctica, for a total of 36 ± 2 Gt/yr². By 2010 the acceleration had increased to over 50 Gt/yr². This acceleration is three times larger than that for mountain glaciers and ice caps (12 ± 6 Gt/yr²). Ice shelves float on the surface of the sea and, if they melt, to first order they do not change sea level. Likewise, the melting of the northern polar ice cap, which is composed of floating pack ice, would not significantly contribute to rising sea levels. However, because floating pack ice is lower in salinity than seawater, its melting would cause a very small increase in sea level, so small that it is generally neglected. Scientists previously lacked knowledge of changes in terrestrial storage of water.
Surveys of water retention by soil absorption and by artificial reservoirs ("impoundment") show that a total volume of water just under the size of Lake Huron has been impounded on land since 1930, masking part of the sea level rise that would otherwise have been observed over that time. Conversely, estimates of excess global groundwater extraction during 1900–2008 total ~4,500 km³, equivalent to a sea-level rise of more than 6% of the total. Furthermore, the rate of groundwater depletion has increased markedly since about 1950, with maximum rates occurring during the most recent period studied (2000–2008), when it averaged ~145 km³/yr (equivalent to 0.40 mm/yr of sea-level rise, or 13% of the reported rate of 3.1 mm/yr during this recent period). If the small glaciers and polar ice caps on the margins of Greenland and the Antarctic Peninsula were to melt, the projected rise in sea level would be comparatively small. Melting of the Greenland ice sheet would produce a sea-level rise of several metres, and melting of the Antarctic ice sheet a rise roughly an order of magnitude larger; the collapse of the grounded interior reservoir of the West Antarctic Ice Sheet alone would raise sea level by several metres. The snowline altitude is the altitude of the lowest elevation interval in which minimum annual snow cover exceeds 50%. This ranges from several kilometres above sea level at the equator down to sea level at about 70° N and S latitude, depending on regional temperature amelioration effects. Permafrost then appears at sea level and extends deeper below sea level polewards. As most of the Greenland and Antarctic ice sheets lie above the snowline and/or the base of the permafrost zone, they will melt more slowly than ice shelves; some estimates have them melting over several millennia, even if temperatures continue to rise. However, rising temperatures shift the permafrost zone, and the ice sheets also contribute to sea level rise through enhanced flow and iceberg calving. By the 2010s, Greenland was contributing roughly 0.8 mm/yr to sea level rise and Antarctica roughly 0.4 mm/yr, both accelerating by about 10%/yr (a doubling time of 7 years). Climate models estimate that they will contribute 1–2 m to sea level rise by 2100, mostly in the latter half of the century. As of the early 2000s, the rise in sea level observed from tide gauges, about 3.4 mm/yr, is within the range estimated from the combination of factors above, but active research continues in this field.
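The ice-sheet and groundwater figures above can be cross-checked with a single unit conversion: one gigatonne of water occupies about one cubic kilometre, and spreading a volume over the ocean surface gives the equivalent sea level change. A minimal sketch in Python, assuming a standard round value of about 3.61 × 10⁸ km² for the global ocean area (a figure not stated in this article):

    import math

    OCEAN_AREA_KM2 = 3.61e8  # assumed global ocean surface area

    def water_to_sea_level_mm(volume_km3: float) -> float:
        """Depth = volume / area; 1 Gt of water is ~1 km^3, and 1 km = 1e6 mm."""
        return volume_km3 / OCEAN_AREA_KM2 * 1e6

    # Combined Greenland + Antarctic mass loss in 2006: 475 Gt/yr
    print(water_to_sea_level_mm(475))   # ~1.32 mm/yr, matching "1.3 +/- 0.4 mm/yr"

    # Recent groundwater depletion: ~145 km^3/yr
    print(water_to_sea_level_mm(145))   # ~0.40 mm/yr, matching the quoted figure

    # A contribution accelerating by 10%/yr doubles in ln(2)/ln(1.1) years
    print(math.log(2) / math.log(1.1))  # ~7.3, the "doubling time of 7 years"

Both sea level figures reproduce the values reported above, which is a useful sanity check on the unit conversions.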
Geological influences

At times during Earth's long history, the configuration of the continents and sea floor has changed due to plate tectonics. This affects global sea level by altering the depths of the ocean basins and by altering the distribution of glaciers, with resulting changes in glacial-interglacial cycles. The depth of the ocean basins is a function of the age of the oceanic lithosphere (the tectonic plates beneath the floors of the world's oceans). As plates age, they become denser and sink, allowing newer plates to rise and take their place. Therefore, a configuration with many small oceanic plates that rapidly recycle the oceanic lithosphere would produce shallower ocean basins and (all other things being equal) higher sea levels; a configuration with fewer plates and more cold, dense oceanic lithosphere would, on the other hand, result in deeper ocean basins and lower sea levels. When there was much continental crust near the poles, the rock record shows unusually low sea levels during ice ages, because there was much polar land mass on which snow and ice could accumulate. During times when the land masses clustered around the equator, ice ages had much less effect on sea level. Over most of geologic time, the long-term mean sea level has been higher than today; only at the Permian-Triassic boundary, about 250 million years ago, was the long-term mean sea level lower than today. Long-term changes in the mean sea level are the result of changes in the oceanic crust, with a downward trend expected to continue in the very long term. During the glacial-interglacial cycles of the past few million years, the mean sea level has varied by somewhat more than a hundred metres, primarily due to the growth and decay of ice sheets (mostly in the northern hemisphere) built from water evaporated from the sea. The Mediterranean Basin's gradual growth as the Neotethys basin, begun in the Jurassic, did not suddenly affect ocean levels. While the Mediterranean was forming during the past 100 million years, the average ocean level was generally 200 metres above current levels. However, the largest known example of marine flooding was when the Atlantic breached the Strait of Gibraltar at the end of the Messinian Salinity Crisis, about 5.2 million years ago. This restored Mediterranean sea levels at the sudden end of the period during which that basin had dried up, apparently owing to geologic forces in the area of the Strait.

Changes through geologic time

Sea level has changed over geologic time. The lowest level occurred at the Permian-Triassic boundary, about 250 million years ago. During the most recent ice age (at its maximum about 20,000 years ago), the world's sea level was about 130 m lower than today, due to the large amount of sea water that had evaporated and been deposited as snow and ice, mostly in the Laurentide Ice Sheet. Most of this had melted by about 10,000 years ago. Hundreds of similar glacial cycles have occurred throughout the Earth's history. Geologists who study the positions of coastal sediment deposits through time have noted dozens of similar basinward shifts of shorelines associated with a later recovery. This results in sedimentary cycles which in some cases can be correlated around the world with great confidence. This relatively new branch of geological science linking eustatic sea level to sedimentary deposits is called sequence stratigraphy. The most up-to-date chronology of sea level change through the Phanerozoic shows the following long-term trends:
- gradually rising sea level through the Cambrian;
- relatively stable sea level in the Ordovician, with a large drop associated with the end-Ordovician glaciation;
- relative stability at the lower level during the Silurian;
- a gradual fall through the Devonian, continuing through the Mississippian to a long-term low at the Mississippian/Pennsylvanian boundary;
- a gradual rise until the start of the Permian, followed by a gentle decrease lasting until the Mesozoic.

Sea level rise since the last glacial maximum

During deglaciation, beginning about 19,000 years ago, sea level rose at extremely high rates as the result of the rapid melting of the British-Irish Sea, Fennoscandian, Laurentide, Barents-Kara, Patagonian and Innuitian ice sheets and parts of the Antarctic ice sheet.
At the onset of deglaciation, about 19,000 years ago, a brief, at most 500-year-long glacio-eustatic event may have contributed as much as 10 m to sea level, with an average rate of about 20 mm/yr. During the rest of the early Holocene, the rate of sea level rise varied from a low of about 6.0–9.9 mm/yr to as high as 30–60 mm/yr during brief periods of accelerated sea level rise. Solid geological evidence, based largely upon analysis of deep cores of coral reefs, exists only for three major periods of accelerated sea level rise, called meltwater pulses, during the last deglaciation: Meltwater pulse 1A, between circa 14,600 and 14,300 years ago; Meltwater pulse 1B, between circa 11,400 and 11,100 years ago; and Meltwater pulse 1C, between 8,200 and 7,600 years ago. Meltwater pulse 1A was a 13.5 m rise over about 290 years, centered at 14,200 years ago, and Meltwater pulse 1B was a 7.5 m rise over about 160 years, centered at 11,000 years ago. In sharp contrast, the period between 14,300 and 11,100 years ago, which includes the Younger Dryas interval, was an interval of reduced sea level rise at about 6.0–9.9 mm/yr. Meltwater pulse 1C was centered at 8,000 years ago and produced a rise of 6.5 m in less than 140 years; sea levels 5,000 years ago were around 3 m lower than present day, as evidenced in many locations by fossil beaches. Such rapid rates of sea level rise during meltwater events clearly implicate major ice-loss events related to ice sheet collapse. The primary source may have been meltwater from the Antarctic ice sheet; other studies suggest a Northern Hemisphere source, the Laurentide Ice Sheet. It has recently become widely accepted that late Holocene sea level (3,000 calendar years ago to present) was nearly stable prior to an acceleration in the rate of rise that is variously dated between 1850 and 1900 AD. Late Holocene rates of sea level rise have been estimated using evidence from archaeological sites and late Holocene tidal marsh sediments, combined with tide gauge and satellite records and geophysical modeling. For example, this research included studies of Roman wells in Caesarea and of Roman piscinae in Italy. These methods in combination suggest a mean eustatic component of 0.07 mm/yr for the last 2,000 years. Since 1880, the ocean has risen briskly, causing extensive erosion worldwide and costing billions. Sea level rose by 6 cm during the 19th century and 19 cm in the 20th century. Evidence for this includes geological observations, the longest instrumental records and the observed rate of 20th-century sea level rise. For example, geological observations indicate that during the last 2,000 years, sea level change was small, with an average rate of only 0.0–0.2 mm per year, compared with an average rate of 1.7 ± 0.5 mm per year for the 20th century. Baart et al. (2012) show that it is important to account for the effect of the 18.6-year lunar nodal cycle before concluding that sea level rise is accelerating. Based on tide gauge data, the rate of global average sea level rise during the 20th century lies in the range 0.8 to 3.3 mm/yr, with an average rate of 1.8 mm/yr.
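The meltwater pulse rates quoted above are simple averages, rise divided by duration. As a worked check, using the figures given in this section:

    \bar{r} = \frac{\Delta h}{\Delta t}: \quad
    \text{MWP-1A: } \frac{13.5\ \text{m}}{290\ \text{yr}} \approx 47\ \text{mm/yr}, \quad
    \text{MWP-1B: } \frac{7.5\ \text{m}}{160\ \text{yr}} \approx 47\ \text{mm/yr}, \quad
    \text{MWP-1C: } \frac{6.5\ \text{m}}{140\ \text{yr}} \approx 46\ \text{mm/yr}.

All three averages sit near the top of the 30–60 mm/yr range cited for brief periods of accelerated rise, and are roughly 25 times the 1.8 mm/yr mean rate reported for the 20th century.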
Physical sciences
Stratigraphy
Earth science
47398884
https://en.wikipedia.org/wiki/African%20wolf
African wolf
The African wolf (see below for other names; Canis lupaster) is a canine native to North Africa, West Africa, the Sahel, northern East Africa, and the Horn of Africa. It is listed as least concern on the IUCN Red List. In the Middle Atlas in Morocco, it has been sighted at high elevations. It is primarily a predator of invertebrates and of mammals as large as gazelle fawns, though larger animals are sometimes taken. Its diet also includes animal carcasses, human refuse, and fruit. African wolves are monogamous and territorial; offspring remain with the parents to help raise their younger siblings. The African wolf was previously classified as an African variant of the golden jackal, but a series of analyses of the species' mitochondrial DNA and nuclear genome in 2015 demonstrated that it is a distinct species more closely related to the gray wolf and the coyote. It is nonetheless still close enough to the golden jackal to produce hybrid offspring, as indicated by genetic tests on jackals in Israel and by a 19th-century captive crossbreeding experiment. Further studies demonstrated that it is the descendant of a genetically admixed canid of 72% gray wolf and 28% Ethiopian wolf ancestry. It plays a prominent role in some African cultures: it was considered sacred in ancient Egypt, particularly in Lycopolis, where it was venerated as a god. In North African folklore, it is viewed as an untrustworthy animal whose body parts can be used for medicinal or ritualistic purposes, while it is held in high esteem in Senegal's Serer religion as the first creature created by the god Roog.

Names

The taxon is known under the following names: African wolf, African golden wolf, golden wolf, African golden jackal, North African jackal, African jackal, gray jackal, wolf jackal, jackal wolf, Egyptian wolf, Egyptian jackal.

Description

The African wolf is intermediate in size between the African jackals (L. mesomelas and L. adusta) and the small subspecies of gray wolves, standing about 40 cm in height. There is, however, a high degree of geographic size variation, with Western and Northern African specimens being larger than their East African cousins. It has a relatively long snout and ears, while the tail is comparatively short, measuring about 20 cm in length. Fur color varies individually, seasonally and geographically, though the typical coloration is yellowish to silvery grey, with slightly reddish limbs and black speckling on the tail and shoulders. The throat, abdomen and facial markings are usually white, and the eyes are amber-colored. Females bear two to four pairs of teats. Although superficially similar to the golden jackal (particularly in East Africa), the African wolf has a more pointed muzzle and sharper, more robust teeth. The ears are longer in the African wolf, and the skull has a more elevated forehead. Phenotypes of C. lupaster range from gracile jackal-like morphs to more robust wolf-like ones.

Taxonomy

Early writings

Aristotle wrote of wolves living in Egypt, mentioning that they were smaller than the Greek kind. Georg Ebers wrote of the wolf being among the sacred animals of Egypt, describing it as a "smaller variety" of wolf compared with those of Europe, and noting how the name Lykopolis, the Ancient Egyptian city dedicated to Anubis, means "city of the wolf".
The African wolf was first recognised as a species separate from the golden jackal by Frédéric Cuvier in 1820, who described it as a more elegant animal, with a more melodic voice and a less strong odour. The binomial name he chose for it was derived from the Arcadian Anthus family described by Pliny the Elder in his Natural History, whose members would draw lots to become werewolves. Eduard Rüppell proposed that the animal was the ancestor of Egyptian sighthounds and named it Wolfs-hund (wolf dog), while C.H. Smith named it "thoa" or "thous dog". An attempt was also made in 1821 to hybridise the two species in captivity, resulting in the birth of five pups, three of which died before weaning. The two survivors were noted never to play with each other, and had completely contrasting temperaments: one pup inherited the golden jackal's shyness, while the other was affectionate toward its human captors. The English biologist G.J. Mivart emphasized the differences between the African wolf and the golden jackal in his writings. The canids present in Egypt in particular were noted to be so much more gray wolf-like than populations elsewhere in Africa that W.F. Hemprich and C.G. Ehrenberg gave them the binomial name Canis lupaster in 1832. Likewise, T.H. Huxley, upon noting the similarities between the skulls of lupaster and Indian wolves, classed the animal as a subspecies of the gray wolf. However, the animal was subsequently synonymised with the golden jackal by Ernst Schwarz in 1926; the question was revisited by the Finnish paleontologist Björn Kurtén in 1965. In 1981, the zoologist Walter Ferguson argued in favor of lupaster being a subspecies of the gray wolf based on cranial measurements, stating that the classing of the animal as a jackal was based solely on its small size and predated the discovery of smaller gray wolf subspecies intermediate in size between typical gray wolves and lupaster.

21st-century discoveries

Further doubts over its being conspecific with the golden jackal of Eurasia arose in December 2002, when a canid was sighted in Eritrea's Danakil Desert whose appearance did not correspond to that of the golden jackal or the six other recognized canid species of the area, but strongly resembled that of the gray wolf. The area had previously been largely unexplored because of its harsh climate and its embroilment in the Eritrean War of Independence and the subsequent Eritrean–Ethiopian War, though local Afar tribesmen knew of the animal and referred to it as wucharia (wolf). The animal's wolf-like qualities were confirmed in 2011, when several golden "jackal" populations in Egypt and the Horn of Africa classed as Canis aureus lupaster were found to have mtDNA sequences more closely resembling those found in gray wolves than those of golden jackals. These wolf-like mtDNA sequences were found to occur over a 6,000 km wide area, encompassing Algeria, Mali and Senegal. Furthermore, the sampled African specimens displayed much more nucleotide and haplotype diversity than is present in Indian and Himalayan wolves, indicating a larger ancestral population and an effective extant population of around 80,000 females. Both of these studies proposed reclassifying Canis aureus lupaster as a subspecies of the gray wolf.
In 2015, a more thorough comparative study of mitochondrial and nuclear genomes on a larger sample of wolf-like African canids from northern, eastern and western Africa showed that they were in fact all distinct from the golden jackal, with a genetic divergence of around 6.7%, greater than that between gray wolves and coyotes (4%) and that between gray wolves and domestic dogs (0.2%). Furthermore, the study showed that these African wolf-like canids (renamed Canis lupaster, or African wolves) were more closely related to gray wolves and coyotes than to golden jackals, and that C. l. lupaster merely represents a distinct phenotype of the African wolf rather than an actual gray wolf. A phylogenetic tree based on nuclear sequences supports this arrangement. It was estimated that the African wolf diverged from the wolf–coyote clade 1.0–1.7 million years ago, during the Pleistocene; its superficial similarity to the golden jackal (particularly in East Africa, where African wolves are similar in size to golden jackals) would therefore be a case of parallel evolution. Considering its phylogenetic position and the canid fossil record, it is likely that the African wolf evolved from larger ancestors that became progressively more jackal-like in size upon populating Africa, on account of interspecific competition with both larger and smaller indigenous carnivores. Traces of African wolf DNA were identified in golden jackals in Israel, which adjoins Egypt, indicating the presence of a hybrid zone. The study's findings were corroborated that same year by Spanish, Mexican and Moroccan scientists analyzing the mtDNA of wolves in Morocco, who found that the specimens analyzed were distinct from both golden jackals and gray wolves but bore a closer relationship to the latter. Studies of RAD sequences found instances of African wolves hybridizing with both feral dogs and Ethiopian wolves. In 2017, scientists at the universities of Oslo and Helsinki proposed that the binomial name C. anthus was a nomen dubium, as Cuvier's 1820 description of the holotype, a female collected from Senegal, seems to describe the side-striped jackal rather than the actual African wolf, and does not match the appearance of a male specimen described by Cuvier in his later writings. This ambiguity, coupled with the disappearance of the holotype's remains, led the scientists to propose giving priority to Hemprich and Ehrenberg's name C. lupaster, the type specimen of which has a more detailed and consistent description and remains that are still examinable at the Museum für Naturkunde. The following year, a major genetic study of Canis species also referred to the African wolf as Canis lupaster. In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group recommended that, because the identity of the specimen designated Canis anthus Cuvier, 1820 was uncertain, the species should be known as Canis lupaster Hemprich and Ehrenberg, 1832 until Canis anthus can be validated.

Admixture with other Canis species

In 2018, whole-genome sequencing was used to compare members of the genus Canis. The study supports the African wolf being distinct from the golden jackal, with the Ethiopian wolf being genetically basal to both. Two genetically distinct African wolf populations exist, in northwestern and eastern Africa. This suggests that Ethiopian wolves – or a close, extinct relative – once had a much larger range within Africa in which to admix with other canids.
There is evidence of gene flow between the eastern population and the Ethiopian wolf, which has led to the eastern population becoming distinct from the northwestern population. The common ancestor of both African wolf populations was a genetically admixed canid of 72% gray wolf and 28% Ethiopian wolf ancestry. There is also evidence of gene flow between African wolves, golden jackals, and gray wolves. One African wolf from the Egyptian Sinai Peninsula showed high admixture with Middle Eastern gray wolves and dogs, highlighting the role of the land bridge between Africa and the other continents in canid evolution. African wolves form a sister clade to Middle Eastern gray wolves based on mitochondrial DNA, but to coyotes and gray wolves based on nuclear DNA.

Relationship to the Himalayan wolf

Between 2011 and 2015, two mtDNA studies found that the Himalayan wolf and the Indian wolf were closer to the African wolf than they were to the Holarctic gray wolf. In 2017, a study of mitochondrial DNA, X-chromosome (maternal lineage) markers and Y-chromosome (paternal lineage) markers found that the Himalayan wolf is genetically basal to the Holarctic gray wolf. The Himalayan wolf shares a maternal lineage with the African wolf, and possesses a unique paternal lineage that falls between the gray wolf and the African wolf.

Subspecies

Although several attempts have been made in the past to synonymise many of the proposed names, the taxonomic position of West African wolves in particular is too confused to allow any precise conclusion, as the collected study materials are few. Prior to 1840, six of the ten supposed West African subspecies were named or classed almost entirely on the basis of their fur color. The species' high individual variation, coupled with the scarcity of samples and the lack of physical barriers on the continent preventing gene flow, brings into question the validity of some of the West African forms. However, a study showed that the genetic divergence of all of the African wolves occurred between 50,000 and 10,500 years ago, with most divergence occurring between 30,000 and 16,000 years ago, during the Last Glacial Maximum (33,000–16,000 years ago), when conditions across the Sahara were very dry. The study proposes that these wolves were confined to refugia and therefore isolated for hundreds of generations, leading to genetic divergence.

Behavior

Social and reproductive behaviors

The African wolf's social organisation is extremely flexible, varying according to the availability and distribution of food. The basic social unit is a breeding pair, followed by its current offspring, or by offspring from previous litters staying on as "helpers". Large groups are rare and have only been recorded in areas with abundant human waste. Family relationships among African wolves are comparatively peaceful relative to those of the black-backed jackal; although the sexual and territorial behavior of grown pups is suppressed by the breeding pair, they are not actively driven off once they attain adulthood. African wolves also lie together and groom each other much more frequently than black-backed jackals do. In the Serengeti, pairs defend permanent territories encompassing 2–4 km², and will vacate their territories only to drink or when lured by a large carcass. The pair patrols and marks its territory in tandem.
Both partners and helpers react aggressively towards intruders, though the greatest aggression is reserved for intruders of the same sex; pair members do not assist each other in repelling intruders of the opposite sex. The African wolf's courtship rituals are remarkably long, during which the breeding pair remains almost constantly together. Prior to mating, the pair patrols and scent-marks its territory. Copulation is preceded by the female holding her tail out, angled in such a way that her genitalia are exposed. The two approach each other whimpering, lifting their tails and bristling their fur, displaying varying intensities of offensive and defensive behavior. The female sniffs and licks the male's genitals, whilst the male nuzzles the female's fur. They may circle each other and fight briefly. The copulatory tie lasts roughly four minutes. Towards the end of estrus, the pair drifts apart, with the female often approaching the male in a comparatively more submissive manner. In anticipation of the role he will take in raising pups, the male regurgitates or surrenders any food he has to the female. In the Serengeti, pups are born in December–January and begin eating solid food after a month. Weaning starts at the age of two months and ends at four months. At this stage, the pups are semi-independent, venturing up to 50 meters from the den and even sleeping in the open. Their play becomes increasingly aggressive, with the pups competing for rank, which is established after six months. The female feeds the pups more frequently than the male or the helpers do, though the presence of the latter allows the breeding pair to leave the den and hunt without leaving the litter unprotected. The African wolf's life centers around a home burrow, which usually consists of an abandoned and modified aardvark or warthog earth. The interior structure of this burrow is poorly understood, though it is thought to consist of a single central chamber with two to three escape routes. The home burrow can be located either in secluded areas or surprisingly near the dens of other predators.

Communication

African wolves frequently groom one another, particularly during courtship, when grooming can last up to 30 minutes. Nibbling of the face and neck is observed during greeting ceremonies. When fighting, the African wolf slams its opponents with its hips, and bites and shakes the shoulder. The species' postures are typically canine, and it has more facial mobility than the black-backed and side-striped jackals, being able to expose its canine teeth like a dog. The vocalisations of the African wolf are similar to those of the domestic dog, with seven sounds having been recorded, including howls, barks, growls, whines and cackles. Subspecies can be recognised by differences in their howls. One of the most commonly heard sounds is a high, keening wail, of which there are three varieties: a long, single-toned, continuous howl; a wail that rises and falls; and a series of short, staccato howls. These howls are used to repel intruders and attract family members. Howling in chorus is thought to reinforce family bonds and establish territorial status. A comparative analysis of the howls of the African wolf and some gray wolf subspecies demonstrated that the former's howls resemble those of the Indian wolf, being high-pitched and of relatively short duration.

Hunting behavior

The African wolf rarely catches hares, due to their speed.
Gazelle mothers (often working in groups of two or three) are formidable when defending their young against single wolves, which are much more successful in hunting gazelle fawns when working in pairs. A pair of wolves will methodically search for concealed gazelle fawns within herds, tall grass, bushes and other likely hiding places. Although it is known to kill animals up to three times its own weight, the African wolf targets mammalian prey much less frequently than the black-backed jackal overall. On capturing large prey, the African wolf makes no attempt to kill it; instead, it rips open the belly and eats the entrails. Small prey is typically killed by shaking, though snakes may be eaten alive from the tail end. The African wolf often carries away more food than it can consume and caches the surplus, which is generally recovered within 24 hours. When foraging for insects, the African wolf turns over dung piles to find dung beetles. During the dry seasons, it excavates dung balls to reach the larvae inside. Grasshoppers and flying termites are caught either in mid-air or by pouncing on them while they are on the ground. It is fiercely intolerant of other scavengers, having been known to dominate vultures on kills – a single wolf can hold dozens of vultures at bay by threatening, snapping and lunging at them.

Ecology

Distribution and habitat

C. lupaster has a wide range across the upper half of Africa, occurring in Senegal, Burkina Faso, Cameroon, the Central African Republic, Djibouti, Eritrea, Ethiopia, Guinea, Mali, Mauritania, Niger, Somalia, South Sudan, Sudan, Western Sahara, Nigeria, Chad, Morocco, Algeria, Tunisia, Libya, Kenya, Egypt, and Tanzania. Fossil finds dating back to the Pleistocene indicate that the species' range was not always restricted to Africa, with remains having been found in the Levant and Saudi Arabia. In Tanzania, the African wolf is limited to a small area of the north between the western slopes of Mount Kilimanjaro and the centre of the Serengeti. In the latter area, it occurs mostly in the short-grass plains, the floor of the Ngorongoro Crater, and the plains between the Olmoti and Empakai Craters, being relatively rare in Serengeti National Park, Loliondo and the Maswa game reserve. The species also inhabits the Lake Natron area and West Kilimanjaro, and is sometimes found in the northern part of Arusha National Park and as far south as Manyara. In areas where it is common, such as the short-grass plains of Serengeti National Park and the Ngorongoro Crater, population densities can range between 0.5 and 1.5 specimens per km². A population decrease of 60% has been recorded in the southern plains of Serengeti National Park since the early 1970s, though the reasons are unknown. The African wolf inhabits a number of different habitats: in Algeria it lives in Mediterranean, coastal and hilly areas (including hedged farmland, scrubland, pinewoods and oak forests), while populations in Senegal inhabit tropical, semi-arid climate zones, including Sahelian savannahs. Wolf populations in Mali have been documented in arid Sahelian massifs. In Egypt, the African wolf inhabits agricultural areas, wastelands, desert margins, rocky areas, and cliffs; at Lake Nasser, it lives close to the lakeshore. In 2012, African wolves were photographed in Morocco's Azilal Province at an elevation of 1,800 meters. It apparently does well in areas where human density is high and natural prey populations are low, as is the case in the Enderta district in northern Ethiopia.
This wolf has also been reported in the very dry Danakil Depression desert on the coast of Eritrea, in eastern Africa.

Diet

In West Africa, the African wolf mostly confines itself to small prey, such as hares, rats, ground squirrels and cane rats. Other prey items include lizards, snakes, and ground-nesting birds such as francolins and bustards. It also consumes a large number of insects, including dung beetles, larvae, termites and grasshoppers, and will kill young gazelles, duikers and warthogs. In East Africa, it consumes invertebrates and fruit, though 60% of its diet consists of rodents, lizards, snakes, birds, hares and Thomson's gazelles. During the wildebeest calving season, African wolves feed almost exclusively on afterbirth. In the Serengeti and Ngorongoro Crater, less than 20% of its diet comes from scavenging. In Senegal, where both C. l. anthus and C. l. lupaster coexist, some degree of niche segregation is apparent in their choice of prey: the former is reputed to feed primarily on lambs, whereas the latter attacks larger prey, such as sheep, goats and cattle.

Enemies and competitors

The African wolf generally manages to avoid competing with black-backed and side-striped jackals by occupying a different habitat (grassland, as opposed to the closed and open woodlands favored by the latter two species) and by being more active during the daytime. Nevertheless, the African wolf has been known to kill the pups of black-backed jackals, but has in turn been observed being dominated by adults during disputes over carcasses. It often eats alongside African wild dogs, and will stand its ground if the dogs try to harass it. Encounters with Ethiopian wolves are usually antagonistic, with Ethiopian wolves dominating African wolves that enter their territories, and vice versa. Although African wolves are inefficient rodent hunters and thus not in direct competition with Ethiopian wolves, it is likely that heavy human persecution prevents the former from attaining numbers large enough to completely displace the latter. Nevertheless, there is at least one record of an African wolf pack adopting a male Ethiopian wolf. African wolves will feed alongside spotted hyenas, though they will be chased if they approach too closely. Spotted hyenas will sometimes follow wolves during the gazelle fawning season, as wolves are effective at tracking and catching young animals. Hyenas do not take readily to eating wolf flesh; four hyenas were once reported to take half an hour to eat one. Overall, the two animals typically ignore each other when no food or young is at stake. Wolves will confront a hyena that approaches too close to their dens by taking turns biting its hocks until it retreats. African wolves in the Serengeti are known to carry canine parvovirus, canine herpesvirus, canine coronavirus and canine adenovirus.

In culture

The wolf was the template of numerous Ancient Egyptian deities, including Anubis, Wepwawet and Duamutef. The wolf was sacred in Lycopolis, whose inhabitants would mummify wolves and store them in chambers, in contrast to other areas of Egypt, where wolves were buried at their place of death. According to Diodorus Siculus in the Bibliotheca historica, there were two reasons why the wolf was held in such high regard: the first was the animal's affinity to the dog, and the second a legend that told of how Lycopolis received its name after a pack of wolves repelled an Ethiopian invasion.
Plutarch noted in his On the Worship of Isis and Osiris that Lycopolis was the only nome in Egypt where people consumed sheep, a practice associated with the wolf, which was revered there as a god. The importance of the wolf in Lycopolite culture continued through the Roman period, when images of the animal were minted on the reverse sides of coins. Herodotus mockingly wrote of a festival commemorating Rhampsinit's descent to the underworld, during which a priest would be led by two wolves to the temple of Ceres. Arab Egyptian folklore holds that the wolf can cause chickens to faint from fear simply by passing underneath their roosts, and associates its body parts with various forms of folk magic: placing a wolf's tongue in a house is believed to cause the inhabitants to argue, and its meat is thought to be useful in treating insanity and epilepsy. Its heart is believed to protect the bearer from wild animal attacks, while its eye can protect against the evil eye. Although considered haram under Islamic dietary laws, the wolf is important in Moroccan folk medicine. Edvard Westermarck recorded several remedies derived from the wolf in Morocco, including the use of its fat as a lotion, the consumption of its meat to treat respiratory ailments, and the burning of its intestines in fumigation rituals meant to increase the fertility of married couples. The wolf's gall bladder was said to have various uses, including curing sexual impotence and serving as a charm for women wishing to divorce their husbands. Westermarck noted, however, that the wolf was also associated with more nefarious qualities: it was said that a child who eats wolf flesh before reaching puberty will be forever cursed with misfortune, and that scribes and saintly persons refrain from consuming it even in areas where doing so is socially acceptable, as it would render their charms useless. The African wolf is not common in Neolithic rock art, though it does occasionally appear; a definite portrayal in the Kef Messiouer cave in Algeria's Tébessa Province shows it feeding on a wild boar carcass alongside a lion pride. It plays a role in Berber mythology, particularly that of the Ait Seghrouchen of Morocco, where it fills a role in folktales similar to that of the red fox in medieval European fables, though it is often the victim of the more cunning hedgehog. The African wolf plays a prominent role in the Serer religion's creation myth, in which it is viewed as the first living creature created by Roog, the Supreme God and Creator. In one aspect, it can be viewed as an earth-diver sent to Earth by Roog; in another, as a fallen prophet punished for disobeying the laws of the divine. The wolf was the first intelligent creature on Earth, and it is believed that it will remain on Earth after human beings have returned to the divine. The Serer believe that not only does it know in advance who will die, but it traces in advance the tracks of those who will go to funerals. The movements of the wolf are carefully observed, because the animal is viewed as a seer that came from the transcendent and maintains links with it. Although believed to be shunned in the bush by other animals and deprived of its original intelligence, it is still respected because it dared to resist the supreme being, who still keeps it alive.
Biology and health sciences
Canines
Animals
42870316
https://en.wikipedia.org/wiki/Ghost%20pepper
Ghost pepper
The ghost pepper, also known as bhüt jolokia in Assamese, is an interspecific hybrid chili pepper cultivated in Northeast India. It is a hybrid of Capsicum chinense and Capsicum frutescens. In 2007, Guinness World Records certified the ghost pepper as the world's hottest chili pepper, 170 times hotter than Tabasco sauce. The ghost chili is rated at more than one million Scoville Heat Units (SHUs), far surpassing the rating of a cayenne pepper. However, in the race to grow the hottest chili pepper, the ghost chili was superseded by the Trinidad Scorpion Butch T pepper in 2011, the Carolina Reaper in 2013 and Pepper X in 2023.

Etymology and regional names

The name bhüt jolokia means 'Bhutanese pepper' in Assamese; the first element bhüt, meaning 'Bhutanese', was mistakenly confused with the near-homonym bhut, meaning 'ghost'. In Assam, the pepper is also known as bih zôlôkia, meaning 'poison chili', from Assamese bih ('poison') and zôlôkia ('chili pepper'), denoting the plant's heat. Similarly, in Nagaland, one of the regions of cultivation, the chili is called Raja Mirja, meaning 'king chili' ('Naga king chili'; also romanized nôga zôlôkia), and bhut jolokia (also romanized bhût zôlôkiya). This name is especially common in other regions where it is grown, such as Assam and Manipur. It has also been called the Tezpur chili, after the Assamese city of Tezpur. In Manipur, the chili is called umorok. In Northeast India, bhut jolokia is also known as the "king chili" or "king cobra chili". Other usages on the subcontinent are saga jolokia, 'Indian mystery chili' and 'Indian rough chili'.

Scoville rating

In 2000, India's Defence Research Laboratory (DRL) reported a Scoville rating for the ghost pepper of 855,000 SHUs, and in 2004 a rating of 1,041,427 SHUs was obtained using HPLC analysis. For comparison, Tabasco red pepper sauce rates at 2,500–5,000 SHUs, and pure capsaicin (the chemical responsible for the pungency of pepper plants) rates at 16,000,000 SHUs. In 2005, New Mexico State University's Chile Pepper Institute in Las Cruces, New Mexico, found ghost peppers grown from seed in southern New Mexico to have a Scoville rating of 1,001,304 SHUs by HPLC. Unlike most peppers, ghost peppers produce capsaicin in vesicles not only in the placenta around the seeds but throughout the fruit.

Characteristics

Ripe peppers are red, yellow, orange, or chocolate in color. The unselected strain of ghost peppers from India is an extremely variable plant, with a wide range of fruit sizes and of fruit production per plant. Ghost pepper pods are unique among peppers because of their characteristic shape and very thin skin. The red fruit variety comes in two types: rough, dented fruit and smooth fruit. The rough-fruited plants are taller, with more fragile branches, while the smooth-fruited plants yield more fruit and are compact, with sturdier branches. Seeds take about 7–12 days to germinate at 32–38 °C.

Uses

Culinary

Ghost peppers are used as a food and a spice, in both fresh and dried forms, to heat up curries, pickles and chutneys. They are popularly used in combination with pork or with dried or fermented fish. The pepper's intense heat makes it a fixture in competitive chili pepper eating.

Animal control

In northeastern India, the peppers are smeared on fences or incorporated in smoke bombs as a safety precaution to keep wild elephants at a distance.
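The "times hotter" comparisons in the Scoville section above are simple ratios of SHU ratings. A minimal sketch in Python, using only figures quoted in this article (the 170× figure certified by Guinness is consistent with dividing the DRL measurement by the upper bound for Tabasco sauce):

    # Ratios of the Scoville Heat Unit (SHU) ratings quoted above
    GHOST_PEPPER_DRL = 855_000      # DRL measurement, 2000
    GHOST_PEPPER_HPLC = 1_041_427   # HPLC measurement, 2004
    TABASCO_SAUCE_MAX = 5_000       # top of the 2,500-5,000 range
    PURE_CAPSAICIN = 16_000_000

    print(GHOST_PEPPER_DRL / TABASCO_SAUCE_MAX)   # 171.0 -> "170 times hotter"
    print(PURE_CAPSAICIN / GHOST_PEPPER_HPLC)     # ~15.4x hotter than the ghost pepper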
Chili grenades

In 2009, scientists at India's Defence Research and Development Organisation (DRDO) announced plans to use the peppers in hand grenades as a nonlethal method of controlling rioters, and in pepper sprays for self-defence. The DRDO said that ghost pepper-based aerosol sprays could be used as a "safety device", and that "civil variants" of chili grenades could be used to control and disperse mobs. Chili grenades made from ghost peppers were successfully used by the Indian Army in August 2015 to flush out a terrorist hiding in a cave.
Biology and health sciences
Botanical fruits used as culinary vegetables
Plants
41451915
https://en.wikipedia.org/wiki/Solid%20acid
Solid acid
Solid acids are acids that are insoluble in the reaction medium. They are often used as heterogeneous catalysts. Many solid acids are zeolites. A variety of techniques are used to quantify the strength of solid acids.

Examples

Examples of inorganic solid acids include silico-aluminates (zeolites, alumina, silico-aluminophosphates) and sulfated zirconia. Many transition metal oxides are acidic, including titania, zirconia, and niobia. Such acids are used in cracking. Many solid Brønsted acids are also employed industrially, including polystyrene sulfonate, solid phosphoric acid, niobic acid, and heteropolyoxometallates.

Applications

Solid acids are used as catalysts in many industrial chemical processes, from large-scale catalytic cracking in petroleum refining to the synthesis of various fine chemicals. One large-scale application is alkylation, e.g., the combination of benzene and ethylene to give ethylbenzene. Another application is the rearrangement of cyclohexanone oxime to caprolactam. Many alkylamines are prepared by amination of alcohols, catalyzed by solid acids. Acylations are also catalyzed by solid acids. Solid acids can also be used as electrolytes in fuel cells.
Physical sciences
Concepts
Chemistry
53037756
https://en.wikipedia.org/wiki/Dipole%20repeller
Dipole repeller
The dipole repeller is a center of effective repulsion in the large-scale flow of galaxies in the neighborhood of the Milky Way, first detected in 2017. It is thought to represent a large supervoid, the Dipole Repeller Void. The dipole repeller is directly opposed to the Shapley Attractor, an over-density of galaxies located in the Shapley Supercluster. The dipole repeller's apparent repulsion arises because matter in its vicinity is being pulled towards the Shapley Attractor, along with the Great Attractor. As a result, the dipole repeller region has likely become devoid of matter, producing an apparent repulsion of galaxies lying between the repeller and the Shapley Attractor.

Discovery

The Local Group of galaxies is moving relative to the cosmic microwave background (CMB). There is also a pattern of bulk flow in the motion of neighboring galaxies extending to distances of over 250 megaparsecs (Mpc). There is a known overdensity – the Shapley Supercluster – creating an attraction in the flow of galaxies. The repeller appears to be located at a distance of about 220 Mpc and is anticipated to coincide with a void in galaxy density. That single center of attraction, together with a roughly equal single repeller, appear to be the most significant contributors to the CMB dipole. The authors of the article published in Nature Astronomy in January 2017 argue that no single observed concentration of (gravitationally attractive) matter can explain the distance–velocity measurements; according to these authors, the observed velocities and directions of galaxies therefore also indicate an additional, effectively repulsive influence whose nature the measurements do not specify. One of the authors, Hoffman, discussed the result with The Guardian, Wired and IFLScience, and the CNRS took the same position in a press release. The same research team identified a second void with an apparently repulsive effect in September 2017: the Cold Spot Repeller. These voids, which appear to repel through the absence of gravitational attraction, are among the main components of the cosmic "V-web".

Controversy about the Dipole Repeller and its 'repulsive force'

Nevertheless, the discovery of the dipole repeller was discussed by astrophysicists and journalists in the mainstream media without invoking a repulsive force. This was the case for Peter Coles, author of the blog "In the Dark", for Ethan Siegel in an article published by Forbes, and for an article published by Ars Technica. Gravitation is an attractive force, but an underdense region apparently acts as a gravitational repeller: there is less attraction in the direction of the underdensity, and the greater attraction due to the higher density in other directions acts to pull objects away from the underdensity. In other words, the apparent repulsion is not an active force, but simply the lack of a force counteracting the attraction.
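This "repulsion without a repulsive force" has a standard formalization in Newtonian perturbation theory; the following relation is a textbook sketch, not a formula taken from the discovery paper. Writing the density contrast as \delta(\mathbf{r}) = (\rho(\mathbf{r}) - \bar{\rho})/\bar{\rho}, the peculiar gravitational acceleration at position \mathbf{r} is

    \mathbf{g}(\mathbf{r}) = G \bar{\rho} \int \delta(\mathbf{r}')\,
        \frac{\mathbf{r}' - \mathbf{r}}{|\mathbf{r}' - \mathbf{r}|^{3}}\, \mathrm{d}^{3}r'

The uniform background contributes no net pull, so only the contrast \delta matters: an overdensity (\delta > 0) such as the Shapley concentration contributes an acceleration directed toward it, while a void (\delta < 0) contributes a term of the opposite sign, directed away from it. The apparent "push" of the dipole repeller is thus produced entirely by ordinary attractive gravity acting on an uneven density field.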
Physical sciences
Other notable objects
Astronomy
70103301
https://en.wikipedia.org/wiki/Environmental%20conflict
Environmental conflict
Environmental conflicts, socio-environmental conflicts or ecological distribution conflicts (EDCs) are social conflicts caused by environmental degradation or by the unequal distribution of environmental resources. The Environmental Justice Atlas documented 3,100 environmental conflicts worldwide as of April 2020 and emphasised that many more conflicts remained undocumented. Parties involved in these conflicts include locally affected communities, states, companies and investors, and social or environmental movements; typically, environmental defenders are protecting their homelands from resource extraction or hazardous waste disposal. Resource extraction and hazardous waste activities often create resource scarcities (such as by overfishing or deforestation), pollute the environment, and degrade the living space for humans and nature, resulting in conflict. A particular case of environmental conflicts are forestry conflicts, or forest conflicts, which "are broadly viewed as struggles of varying intensity between interest groups, over values and issues related to forest policy and the use of forest resources". In recent decades, a growing number of these have been identified globally. Environmental conflicts frequently centre on environmental justice issues, the rights of indigenous people, the rights of peasants, or threats to communities whose livelihoods depend on the ocean. Outcomes of local conflicts are increasingly influenced by trans-national environmental justice networks that make up the global environmental justice movement. Environmental conflict can complicate responses to natural disasters or exacerbate existing conflicts, especially in the context of geopolitical disputes or where communities have been displaced to create environmental migrants. The study of these conflicts is related to the fields of ecological economics, political ecology, and environmental justice.

Causes

The origin of environmental conflicts can be directly linked to the industrial economy. As less than 10% of materials and energy are recycled, the industrial economy constantly expands energy and material extraction at commodity frontiers through two main processes:
- appropriating new natural resources through territorial claims and land grabs;
- making the exploitation of existing sites more efficient through investment or social and technical innovation.
EDCs are caused by the unfair distribution of environmental costs and benefits. These conflicts arise from social inequality, contested claims over territory, the proliferation of extractive industries, and the impacts of economic industrialization over the past centuries. The oil, mining, and agriculture industries are focal points of environmental conflicts.

Types of conflicts

A 2020 paper mapped the arguments and concerns of environmental defenders in over 2,743 conflicts found in the Environmental Justice Atlas (EJAtlas). The analysis found that the industrial sectors most frequently challenged by environmental conflicts were mining (21%), fossil energy (17%), biomass and land uses (15%), and water management (14%). Killings of environmental defenders occurred in 13% of the reported cases. There was also a distinct difference in the types of conflict found in high- and low-income countries.
There were more conflicts around conservation, water management, and biomass and land use in low-income countries, while in high-income countries almost half of conflicts focused on waste management, tourism, nuclear power, industrial zones, and other infrastructure projects. The study also found that most conflicts start with self-organized local groups defending against infringement, with a focus on non-violent tactics. Water protectors and land defenders who defend indigenous rights are criminalized at a much higher rate than participants in other conflicts. Environmental conflicts can be classified based on the different stages of the commodity chain: during the extraction of energy sources or materials, in the transportation and production of goods, or at the final disposal of waste.
EJAtlas Categories
The EJAtlas was founded and is co-directed by Leah Temper and Joan Martinez-Alier, and it is coordinated by Daniela Del Bene. Its aim is "to document, understand and analyse the political outcomes that emerge or that may emerge" from ecological distribution conflicts. It is housed at the ICTA of the Universitat Autònoma de Barcelona. Since 2012, academics and activists have collaborated to write the entries, reaching 3,500 by July 2021. The EJAtlas identifies ten categories of ecological distribution conflicts:
Biodiversity conservation conflicts
Biomass and land conflicts (Forests, Agriculture, Fisheries and Livestock Management)
Fossil Fuels and Climate Justice/Energy
Industrial and Utilities Conflicts
Infrastructure and Built Environment
Mineral Ores and Building Materials Extraction
Nuclear
Tourism Recreation
Waste Management
Water Management
Ecological distribution conflicts
Ecological Distribution Conflicts (EDCs) were introduced as a concept in 1995 by Joan Martínez-Alier and Martin O'Connor to facilitate more systematic documentation and analysis of environmental conflicts and to produce a more coherent body of academic, activist, and legal work around them. EDCs arise from unfair access to natural resources and unequally distributed burdens of environmental pollution, and relate to the exercise of power by different social actors when they enter into disputes over access to, or impacts on, natural resources. For example, a factory may pollute a river, affecting the community whose livelihood depends on the river's water. The same can apply to the climate crisis, which may cause sea level rise on some Pacific islands. This type of damage is often not valued by the market, preventing those affected from being compensated. Ecological conflicts occur at both global and local scales. Often conflicts take place between the global South and the global North, e.g. a Finnish forest company operating in Indonesia, or in economic peripheries, although there is a growing emergence of conflicts in Europe, including violent ones. There are also local conflicts that occur within a short commodity chain (e.g. local extraction of sand and gravel for a nearby cement factory).
Intellectual history
Since its conception, the term Ecological Distribution Conflict has been linked to research in the fields of political ecology, ecological economics, and ecofeminism. It has also been adopted in non-academic settings through the environmental justice movement, where it bridges academia and activism to assist social movements in legal struggles.
In his 1847 lectures 'Wage Labour and Capital', Karl Marx introduced the idea that economic relations under capitalism are inherently exploitative, meaning economic inequality is an inevitability of the system. He theorised that this is because capitalism expands through capital accumulation, an ever-increasing process which requires the economic subjugation of parts of the population in order to function. Building on this theory, academics in the field of political economy created the term 'economic distribution conflicts' to describe the conflicts that arise from this inherent economic inequality. This type of conflict typically occurs between parties with an economic relationship but an unequal power dynamic, such as buyers and sellers, or debtors and creditors. However, Martínez-Alier and Martin O'Connor noticed that this term focuses solely on the economy, omitting the conflicts that arise not from economic inequality but from the unequal distribution of environmental resources. In response, in 1995, they coined the term 'ecological distribution conflict'. This type of conflict occurs at commodity frontiers, which are constantly being moved and reframed due to society's unsustainable social metabolism. These conflicts might occur between extractive industries and Indigenous populations, or between polluting actors and those living on marginalised land. Its roots can still be seen in Marxian theory, as it is based on the idea that capitalism's need for expansion drives inequality and conflict. Unfair ecological distribution can be attributed to capitalism as a system of cost-shifting. Neoclassical economics usually considers these impacts as "market failures" or "externalities" that can be valued in monetary terms and internalized into the price system. Ecological economics and political ecology scholars oppose the idea of economic commensuration that could form the basis of eco-compensation mechanisms for impacted communities. Instead, they advocate for different valuation languages such as sacredness, livelihood, rights of nature, Indigenous territorial rights, archaeological values, and ecological or aesthetic value.
Social movements
Ecological distribution conflicts have given rise to many environmental justice movements around the globe. Environmental justice scholars conclude that these conflicts are a force for sustainability, and they study the dynamics that drive these conflicts towards an environmental justice success or failure. Globally, around 17% of all environmental conflicts registered in the EJAtlas report environmental justice 'successes', such as stopping an unsustainable project or redistributing resources in a more egalitarian way. Movements usually shape their repertoires of contention as protest forms and direct actions, which are influenced by national and local backgrounds. In environmental justice struggles, the biophysical characteristics of the conflict can further shape the forms of mobilization and direct action. Resistance strategies can take advantage of 'biophysical opportunity structures', in which they attempt to identify, change or disrupt the damaging ecological processes they are confronting. Finally, the 'collective action frames' of movements emerging in response to environmental conflicts become very powerful when they challenge the mainstream relationship of human societies with the environment.
These frames are often expressed through pithy protest slogans that scholars refer to as the 'vocabulary of environmental justice', which includes concepts and phrases such as 'environmental racism', 'tree plantations are not forests', 'keep the oil in the soil', 'keep the coal in the hole' and the like, resonating with the communities affected by EDCs.
Environmentalism of the poor
Some scholars make a distinction between environmentalist conflicts, which have an objective of sustainability or resource conservation, and environmental conflicts more broadly (any conflict over a natural resource). The former type of conflict gives rise to the environmentalism of the poor, in which environmental defenders protect their land from degradation by industrial economic forces. Environmentalist conflicts tend to be intermodal conflicts, in which peasant or agricultural land uses are in conflict with industrial uses (such as mining). Intramodal conflicts, in which peasants dispute amongst themselves about land use, may not be environmentalist. In this division, movements such as La Via Campesina (LVC) or the International Planning Committee for Food Sovereignty (IPC) can be considered halfway between these two approaches. In their defense of peasant agriculture and against large-scale capitalist industrial agriculture, both LVC and the IPC have fundamentally contributed to promoting agroecology as a sustainable agriculture model across the globe, adopting an intermodal approach against industrial agriculture and providing new sources of education to poor communities that could encourage informed participation in the redistribution of resources. A similar attitude has shaped the actions of the Brazilian Landless Workers' Movement (MST), which has contested the notions of productivity and the use of chemical products promoted by agribusinesses that destroy resources rich in fertility and biodiversity. Such movements often question the dominant form of valuation of resource uses (i.e. monetary values and cost–benefit analyses) and renegotiate the values deemed relevant for sustainability. Sometimes, particularly when the resistance weakens, demands for monetary compensation are made (in a framework of 'weak sustainability'). The same groups, at other times or when feeling stronger, might argue in terms of values which are not commensurate with money, such as indigenous territorial rights, irreversible ecological values, the human right to health, or the sacredness of Mother Earth, thereby redefining the very economic, ecological and social principles behind particular resource uses and implicitly defending a conception of 'strong sustainability'. Such intermodal conflicts are those that most clearly push towards broader sustainability transitions.
Conflict resolution
A distinct field of conflict resolution, called Environmental Conflict Resolution, focuses on developing collaborative methods for de-escalating and resolving environmental conflicts. As a field of practice, it centers on collaboration and consensus building among stakeholders. An analysis of such resolution processes found that the best predictor of successful resolution was sufficient consultation with all parties involved. A newer tool with some potential in this regard is the development of video games that offer players distinct options for handling conflicts over environmental resources, for instance in the fishery sector.
Critique
Some scholars critique the focus on natural resources in descriptions of environmental conflict, arguing that such approaches emphasize the commercialization of the natural environment without acknowledging the underlying value of a healthy environment.
Physical sciences
Earth science basics: General
Earth science
49168255
https://en.wikipedia.org/wiki/Planet%20Nine
Planet Nine
Planet Nine is a hypothetical ninth planet in the outer region of the Solar System. Its gravitational effects could explain the peculiar clustering of orbits for a group of extreme trans-Neptunian objects (ETNOs), bodies beyond Neptune that orbit the Sun at distances averaging more than 250 times that of the Earth, i.e. over 250 astronomical units (AU). These ETNOs tend to make their closest approaches to the Sun in one sector, and their orbits are similarly tilted. These alignments suggest that an undiscovered planet may be shepherding the orbits of the most distant known Solar System objects. Nonetheless, some astronomers question this conclusion and instead assert that the clustering of the ETNOs' orbits is due to observational biases, resulting from the difficulty of discovering and tracking these objects during much of the year. Based on earlier considerations, this hypothetical super-Earth-sized planet would have had a predicted mass of five to ten times that of the Earth and an elongated orbit with a semi-major axis of 400–800 AU. The orbit estimate was refined in 2021, resulting in a somewhat smaller semi-major axis; it was revised again shortly thereafter, and once more in 2025. Batygin and Brown suggested that Planet Nine may be the core of a giant planet that was ejected from its original orbit by Jupiter during the genesis of the Solar System. Others proposed that the planet was captured from another star, was once a rogue planet, or that it formed on a distant orbit and was pulled into an eccentric orbit by a passing star. Although sky surveys such as the Wide-field Infrared Survey Explorer (WISE) and Pan-STARRS did not detect Planet Nine, they have not ruled out the existence of a Neptune-diameter object in the outer Solar System. The ability of these past sky surveys to detect Planet Nine depended on its location and characteristics. Further surveys of the remaining regions are ongoing using NEOWISE and the 8-meter Subaru Telescope. Unless Planet Nine is observed, its existence remains purely conjectural. Several alternative hypotheses have been proposed to explain the observed clustering of trans-Neptunian objects (TNOs).
History
Following the discovery of Neptune in 1846, there was considerable speculation that another planet might exist beyond its orbit. The best-known of these theories predicted the existence of a distant planet that was influencing the orbits of Uranus and Neptune. After extensive calculations, Percival Lowell predicted the possible orbit and location of the hypothetical trans-Neptunian planet and began an extensive search for it in 1906. He called the hypothetical object Planet X, a name previously used by Gabriel Dallet. Clyde Tombaugh continued Lowell's search and in 1930 discovered Pluto, but it was soon determined to be too small to qualify as Lowell's Planet X. After Voyager 2's flyby of Neptune in 1989, the difference between Uranus' predicted and observed orbit was determined to have been due to the use of a previously inaccurate mass of Neptune. Attempts to detect planets beyond Neptune by indirect means such as orbital perturbation date to before the discovery of Pluto. Among the first was George Forbes, who postulated the existence of two trans-Neptunian planets in 1880. One would have an average distance from the Sun, or semi-major axis, of 100 AU, 100 times that of the Earth. The second would have a semi-major axis of 300 AU.
His work is considered similar to more recent Planet Nine theories in that the planets would be responsible for a clustering of orbits, in this case the clustering of the aphelion distances of periodic comets around 100–300 AU. This is similar to how the aphelion distances of Jupiter-family comets cluster near its orbit. The discovery in 2004 of Sedna, a dwarf planet with a highly peculiar orbit, led to speculation that it had encountered a massive body other than one of the known planets. Sedna's orbit is detached, with a perihelion distance of 76 AU that is too large to be due to gravitational interactions with Neptune. Several authors proposed that Sedna entered this orbit after encountering a massive body such as an unknown planet on a distant orbit, a member of the open cluster that formed with the Sun, or another star that later passed near the Solar System. The announcement in March 2014 of the discovery of a second sednoid, 2012 VP113, with a perihelion distance of 80 AU and a similar orbit led to renewed speculation that an unknown super-Earth remained in the distant Solar System. At a conference in 2012, Rodney Gomes proposed that an undetected planet was responsible for the orbits of some ETNOs with detached orbits and of the large-semi-major-axis centaurs, small Solar System bodies that cross the orbits of the giant planets. The proposed Neptune-mass planet would be in a distant, eccentric, and steeply inclined orbit. Like Planet Nine, it would cause the perihelia of objects with semi-major axes greater than 300 AU to oscillate, delivering some into planet-crossing orbits and others into detached orbits like that of Sedna. An article by Gomes, Soares, and Brasser was published in 2015, detailing their arguments. In 2014, astronomers Chad Trujillo and Scott S. Sheppard noted the similarities in the orbits of Sedna, 2012 VP113, and several other ETNOs. They proposed that an unknown planet in a circular orbit between 200 and 300 AU was perturbing their orbits. Later that year, Raúl and Carlos de la Fuente Marcos argued that two massive planets in orbital resonance were necessary to produce the similarities of so many orbits, 13 being known at the time. Using a larger sample of 39 ETNOs, they estimated that the nearer planet had a semi-major axis in the range of 300–400 AU, a relatively low eccentricity, and an inclination of nearly 14°.
Batygin and Brown hypothesis
In early 2016, the California Institute of Technology's Batygin and Brown described how the similar orbits of six ETNOs could be explained by Planet Nine and proposed a possible orbit for the planet. This hypothesis could also explain ETNOs with orbits perpendicular to those of the inner planets and others with extreme inclinations, and had been offered as an explanation of the tilt of the Sun's axis.
Orbit
Planet Nine was initially hypothesized to follow an elliptical orbit around the Sun with a substantial eccentricity and a semi-major axis of roughly 400–800 AU, about 13–26 times the distance from Neptune to the Sun. It would take the planet many thousands of years to make one full orbit around the Sun, and its orbit would be moderately inclined to the ecliptic, the plane of the Earth's orbit. The aphelion, or farthest point from the Sun, would be in the general direction of the constellation of Taurus, whereas the perihelion, the nearest point to the Sun, would be in the general direction of the southerly areas of Serpens (Caput), Ophiuchus, and Libra.
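Since several specific figures have dropped out of the paragraph above, it is worth noting that the orbital period follows directly from the semi-major axis via Kepler's third law: for a body orbiting the Sun, the period in years is the semi-major axis in AU raised to the 3/2 power. A minimal sketch, assuming the 400–800 AU range quoted earlier:

```python
def orbital_period_years(a_au: float) -> float:
    """Kepler's third law for a body orbiting the Sun:
    P^2 = a^3, with P in years and a in astronomical units."""
    return a_au ** 1.5

# Assumed range: the 400-800 AU semi-major axes from the early estimates.
for a_au in (400, 600, 800):
    print(f"a = {a_au} AU  ->  P ~ {orbital_period_years(a_au):,.0f} years")
# a = 400 AU -> ~8,000 years; a = 800 AU -> ~22,600 years
```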
Brown thinks that if Planet Nine exists, a probe could reach it in as little as 20 years by using a powered slingshot trajectory around the Sun.
Mass and radius
The planet is estimated to have 5–10 times the mass and 2–4 times the radius of the Earth. Brown thinks that if Planet Nine exists, its mass is sufficient to clear its orbit of large bodies in 4.5 billion years, the age of the Solar System, and that its gravity dominates the outer edge of the Solar System, which is sufficient to make it a planet by current definitions. Astronomer Jean-Luc Margot has also stated that Planet Nine satisfies his criteria and would qualify as a planet if and when it is detected. Later simulations by Amir Siraj and colleagues in 2025 proposed that Planet Nine's mass would instead be 4.4 ± 1.1 times that of Earth.
Internal composition
Given a hypothesized ~10 Earth masses and using a theory of exoplanet sizes in the Kepler-454 system, Esther Linder and Christoph Mordasini assumed that Planet Nine's radius would be 3.66 times that of Earth (23,300 km versus 6,378 km), and that its internal composition would be similar to that of Uranus and Neptune: Planet Nine would likely have a hydrogen–helium atmosphere averaging 47 K, with a core composed of iron and a mantle of magnesium silicate and water ice. However, Siraj et al. (2025) suggest that Planet Nine's mass and orbital characteristics would render its composition closer to that of a rocky planet like Earth.
Origin
Several possible origins for Planet Nine have been examined, including its ejection from the neighborhood of the known giant planets, capture from another star, and in situ formation. In their initial article, Batygin and Brown proposed that Planet Nine formed closer to the Sun and was ejected into a distant eccentric orbit following a close encounter with Jupiter or Saturn during the nebular epoch. Then, either the gravity of a nearby star or drag from the gaseous remnants of the solar nebula reduced the eccentricity of its orbit. This process raised its perihelion, leaving it in a very wide but stable orbit beyond the influence of the other planets. The odds of this occurring have been estimated at a few percent. If it had not been flung into the Solar System's farthest reaches, Planet Nine could have accreted more mass from the proto-planetary disk and developed into the core of a gas giant or ice giant. Instead, its growth was halted early, leaving it with a lower mass than Uranus or Neptune. Dynamical friction from a massive belt of planetesimals could also have enabled Planet Nine's capture into a stable orbit. Recent models propose that a disk of planetesimals could have formed as the gas was cleared from the outer parts of the proto-planetary disk. As Planet Nine passed through this disk, its gravity would alter the paths of the individual objects in a way that reduced Planet Nine's velocity relative to the disk. This would lower the eccentricity of Planet Nine and stabilize its orbit. If this disk had a distant inner edge, at 100–200 AU, a planet encountering Neptune would have a 20% chance of being captured in an orbit similar to that proposed for Planet Nine, with the observed clustering more likely if the inner edge is at 200 AU. Unlike the gas nebula, the planetesimal disk is likely to have been long-lived, potentially allowing a later capture. An encounter with another star could also alter the orbit of a distant planet, shifting it from a circular to an eccentric orbit.
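The 47 K atmosphere temperature quoted above fixes where such a planet would shine brightest, via Wien's displacement law; this is why the searches described later target infrared and submillimeter wavelengths. A quick check (the 47 K figure is Linder and Mordasini's estimate from the text above):

```python
WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvins

def peak_wavelength_micron(temperature_k: float) -> float:
    """Wavelength of peak blackbody emission (Wien's displacement law)."""
    return WIEN_B / temperature_k * 1e6  # convert metres to micrometres

# 47 K: Linder and Mordasini's estimated atmosphere temperature.
print(peak_wavelength_micron(47.0))  # ~62 micrometres: far infrared
```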
The in situ formation of a planet at this distance would require a very massive and extensive disk, or the outward drift of solids in a dissipating disk forming a narrow ring from which the planet accreted over a billion years. If a planet formed at such a great distance while the Sun was in its original cluster, the probability of it remaining bound to the Sun in a highly eccentric orbit is roughly 10%. However, while the Sun remained in the open cluster where it formed, any extended disk would have been subject to gravitational disruption by passing stars and to mass loss due to photoevaporation. Planet Nine could also have been captured from outside the Solar System during a close encounter between the Sun and another star. If a planet was in a distant orbit around this star, three-body interactions during the encounter could alter the planet's path, leaving it in a stable orbit around the Sun. A planet originating in a system without Jupiter-mass planets could remain in a distant eccentric orbit for a longer time, increasing its chances of capture. The wider range of possible orbits would reduce the odds of its capture in a relatively low-inclination orbit to 1–2%. Amir Siraj and Avi Loeb found that the odds of the Sun capturing Planet Nine increase by a factor of 20 if the Sun once had a distant, equal-mass binary companion. This process could also occur with rogue planets, but the likelihood of their capture is much smaller, with only 0.05–0.10% being captured in orbits similar to that proposed for Planet Nine.
Evidence
The gravitational influence of Planet Nine would explain four peculiarities of the Solar System: the clustering of the orbits of ETNOs; the high perihelia of objects like Sedna that are detached from Neptune's influence; the high inclinations of ETNOs with orbits roughly perpendicular to the orbits of the eight known planets; and high-inclination trans-Neptunian objects (TNOs) with semi-major axes less than 100 AU. Planet Nine was initially proposed to explain the clustering of orbits, via a mechanism that would also explain the high perihelia of objects like Sedna. The evolution of some of these objects into perpendicular orbits was unexpected, but was found to match objects previously observed. The orbits of some objects with perpendicular orbits were later found to evolve toward smaller semi-major axes when the other planets were included in simulations. Although other mechanisms have been offered for many of these peculiarities, the gravitational influence of Planet Nine is the only one that explains all four. However, the gravity of Planet Nine would also increase the inclinations of other objects that cross its orbit, which could leave the scattered-disk objects (bodies orbiting beyond Neptune with semi-major axes greater than 50 AU) and short-period comets with a broader inclination distribution than is observed. Previously, Planet Nine was hypothesized to be responsible for the 6° tilt of the Sun's axis relative to the orbits of the planets, but recent updates to its predicted orbit and mass limit this shift to ≈1°.
Observations: Orbital clustering of high-perihelion objects
The clustering of the orbits of TNOs with large semi-major axes was first described by Trujillo and Sheppard, who noted similarities between the orbits of Sedna and 2012 VP113. Without the presence of Planet Nine, these orbits should be distributed randomly, without preference for any direction.
Upon further analysis, Trujillo and Sheppard observed that the arguments of perihelion of 12 TNOs with large perihelia and semi-major axes were clustered near 0°, meaning that they rise through the ecliptic when they are closest to the Sun. Trujillo and Sheppard proposed that this alignment was caused by a massive unknown planet beyond Neptune via the Kozai mechanism. For objects with similar semi-major axes, the Kozai mechanism would confine their arguments of perihelion near either 0° or 180°. This confinement allows objects with eccentric and inclined orbits to avoid close approaches to the planet, because they would cross the plane of the planet's orbit at their closest and farthest points from the Sun, and cross the planet's orbit only when they are well above or below it. Trujillo and Sheppard's hypothesis about how the objects would be aligned by the Kozai mechanism has been supplanted by further analysis and evidence. Batygin and Brown, looking to refute the mechanism proposed by Trujillo and Sheppard, also examined the orbits of the TNOs with large semi-major axes. After eliminating the objects in Trujillo and Sheppard's original analysis that were unstable due to close approaches to Neptune or were affected by Neptune's mean-motion resonances, Batygin and Brown determined that the arguments of perihelion for the remaining six objects (Sedna, 474640 Alicanto, and four other ETNOs) were clustered around a value well away from both 0° and 180°. This finding did not agree with the Kozai mechanism's tendency to align orbits with arguments of perihelion at 0° or 180°. Batygin and Brown also found that the orbits of these six ETNOs, all with semi-major axes greater than 250 AU and perihelia beyond 30 AU, were aligned in space with their perihelia in roughly the same direction, resulting in a clustering of their longitudes of perihelion, the locations where they make their closest approaches to the Sun. The orbits of the six objects were also tilted with respect to the ecliptic and approximately coplanar, producing a clustering of their longitudes of ascending node, the directions where they each rise through the ecliptic. They determined that there was only a 0.007% likelihood that this combination of alignments was due to chance. These six objects had been discovered by six different surveys on six different telescopes, which made it less likely that the clumping was due to an observation bias such as pointing a telescope at a particular part of the sky. The observed clustering should be smeared out in a few hundred million years, because the locations of the perihelia and the ascending nodes change, or precess, at differing rates due to the objects' varied semi-major axes and eccentricities. This indicates that the clustering could not be due to an event in the distant past, such as a passing star, and is most likely maintained by the gravitational field of an object orbiting the Sun. Two of the six objects, one of them Alicanto, also have very similar orbits and spectra. This has led to the suggestion that they were a binary object disrupted near aphelion during an encounter with a distant object. The disruption of a binary would require a relatively close encounter, which becomes less likely at large distances from the Sun. In a later article, Trujillo and Sheppard noted a correlation between the longitude of perihelion and the argument of perihelion of the TNOs with semi-major axes greater than 150 AU.
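The flavor of the 0.007% estimate can be reproduced with a simple Monte Carlo experiment. The sketch below is an illustration of the method, not Batygin and Brown's actual calculation, and the 90° clustering criterion is an arbitrary choice: it asks how often six uniformly random longitudes of perihelion would all fall, by chance, within some arc of that width.

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_clustered(n_objects=6, arc_deg=90.0, trials=1_000_000):
    """Fraction of trials in which n uniformly random angles all fit
    inside *some* arc of the given width (checked on the full circle)."""
    angles = rng.uniform(0.0, 360.0, size=(trials, n_objects))
    angles.sort(axis=1)
    # Gaps between consecutive angles, plus the wrap-around gap.
    gaps = np.diff(angles, axis=1)
    wrap = 360.0 - (angles[:, -1] - angles[:, 0])
    largest_gap = np.maximum(gaps.max(axis=1), wrap)
    # All points fit inside an arc of width (360 - largest gap).
    return np.mean(360.0 - largest_gap <= arc_deg)

print(fraction_clustered())  # ~0.6% for these parameters
```

For these assumptions the answer is roughly 0.6%; folding in the independent clustering of the orbital poles multiplies the probabilities together, which is how a joint figure can fall to the 0.007% level.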
Those with a longitude of perihelion of 0–120° have arguments of perihelion between 280° and 360°, and those in the complementary range of longitudes of perihelion have arguments of perihelion in a correspondingly offset band. The statistical significance of this correlation was 99.99%. They suggested that the correlation arises because the orbits of these objects avoid close approaches to a massive planet by passing above or below its orbit. A 2017 article by Carlos and Raúl de la Fuente Marcos noted that the distribution of the distances to the ascending nodes of the ETNOs, and those of centaurs and comets with large semi-major axes, may be bimodal. They suggest this is due to the ETNOs avoiding close approaches to a planet with a semi-major axis of 300–400 AU. With more data (40 objects), the distribution of mutual nodal distances of the ETNOs shows a statistically significant asymmetry between the shortest mutual ascending and descending nodal distances that may not be due to observational bias but is likely the result of external perturbations.
Simulations: Observed clustering reproduced
The clustering of the orbits of ETNOs and the raising of their perihelia are reproduced in simulations that include Planet Nine. In simulations conducted by Batygin and Brown, swarms of scattered-disk objects with semi-major axes up to 550 AU that began with random orientations were sculpted into roughly collinear and coplanar groups of spatially confined orbits by a massive distant planet in a highly eccentric orbit. This left most of the objects' perihelia pointed in similar directions and the objects' orbits with similar tilts. Many of these objects entered high-perihelion orbits like Sedna's and, unexpectedly, some entered perpendicular orbits that Batygin and Brown later noticed had been previously observed. In their original analysis, Batygin and Brown found that the distribution of the orbits of the first six ETNOs was best reproduced in simulations using a planet with a semi-major axis of about 700 AU (an orbital period of roughly 18,500 years), an eccentricity of about 0.6 (giving a perihelion near 280 AU and an aphelion near 1,120 AU), an inclination of about 30° to the ecliptic, and an argument of perihelion of about 140°, with the longitude of the ascending node and the longitude of perihelion (the sum of the argument of perihelion and the longitude of the ascending node) as additional fitted parameters. These parameters for Planet Nine produce different simulated effects on TNOs. Objects with semi-major axes greater than 250 AU are strongly anti-aligned with Planet Nine, with perihelia opposite Planet Nine's perihelion. Objects with semi-major axes between 150 and 250 AU are weakly aligned with Planet Nine, with perihelia in the same direction as Planet Nine's perihelion. Little effect is found on objects with semi-major axes less than 150 AU. The simulations also revealed that objects with large semi-major axes could have stable, aligned orbits if they had lower eccentricities. These objects have yet to be observed. Other possible orbits for Planet Nine were also examined, with a range of semi-major axes, eccentricities up to 0.8, and a wide range of inclinations. These orbits yield varied results. Batygin and Brown found that the orbits of the ETNOs were more likely to have similar tilts if Planet Nine had a higher inclination, but anti-alignment also decreased. Simulations by Becker et al. showed that the ETNOs' orbits were more stable if Planet Nine had a smaller eccentricity, but that anti-alignment was more likely at higher eccentricities. Lawler et al. found that the population captured in orbital resonances with Planet Nine was smaller if it had a circular orbit, and that fewer objects reached high-inclination orbits.
Investigations by Cáceres et al. showed that the orbits of the ETNOs were better aligned if Planet Nine was in a lower-perihelion orbit, but that its perihelion would need to be higher than 90 AU. Later investigations by Batygin et al. found that higher-eccentricity orbits reduced the average tilts of the ETNOs' orbits. While there are many possible combinations of orbital parameters and masses for Planet Nine, none of the alternative simulations were better at predicting the observed alignment of the original ETNOs. The discovery of additional distant Solar System objects would allow astronomers to make more accurate predictions about the orbit of the hypothesized planet. These may also provide further support for, or refutation of, the Planet Nine hypothesis. Simulations that included the migration of the giant planets resulted in a weaker alignment of the ETNOs' orbits. The direction of alignment also switched, from more aligned to anti-aligned with increasing semi-major axis, and from anti-aligned to aligned with increasing perihelion distance. The latter would result in the sednoids' orbits being oriented opposite those of most of the other ETNOs.
Dynamics: How Planet Nine modifies the orbits of ETNOs
Planet Nine modifies the orbits of ETNOs via a combination of effects. On very long timescales, Planet Nine exerts a torque on the orbits of the ETNOs that varies with the alignment of their orbits with Planet Nine's. The resulting exchanges of angular momentum cause the perihelia to rise, placing the objects in Sedna-like orbits, and later fall, returning them to their original orbits after several hundred million years. The motion of their directions of perihelion also reverses when their eccentricities are small, keeping the objects anti-aligned or aligned. On shorter timescales, mean-motion resonances with Planet Nine provide phase protection, which stabilizes the objects' orbits by slightly altering their semi-major axes, keeping their orbits synchronized with Planet Nine's and preventing close approaches. The gravity of Neptune and the other giant planets, and the inclination of Planet Nine's orbit, weaken this protection. This results in a chaotic variation of semi-major axes as objects hop between resonances, including high-order resonances such as 27:17, on million-year timescales. The mean-motion resonances may not be necessary for the survival of ETNOs if they and Planet Nine are both on inclined orbits. The orbital poles of the objects precess around, or circle, the pole of the Solar System's Laplace plane. At large semi-major axes, the Laplace plane is warped toward the plane of Planet Nine's orbit. This causes the orbital poles of the ETNOs on average to be tilted toward one side and their longitudes of ascending node to be clustered. In 2024, Brown and Batygin completed a simulation showing that the presence of Planet Nine would, over time, increase the eccentricities of a significant subset of objects with semi-major axes above 100 AU until their perihelia dropped below 30 AU, meaning that their orbits would cross Neptune's. They also conducted a survey of Neptune-crossing objects with inclinations below 40 degrees and semi-major axes between 100 and 1000 AU, and argued that the results aligned with the presence of Planet Nine, which would produce a ratio of Neptune-crossers to objects with a perihelion beyond Neptune's orbit of 3%, compared to 0.5% in the absence of Planet Nine.
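The mean-motion resonances mentioned above are simple period commensurabilities, so the semi-major axis implied by any resonance follows from Kepler's third law. A sketch, assuming the roughly 700 AU semi-major axis for Planet Nine quoted in the simulation section, and reading "27:17" as 27 object orbits per 17 Planet Nine orbits (one plausible convention; the source does not spell it out):

```python
def resonant_sma_au(a_planet_au: float, j: int, k: int) -> float:
    """Semi-major axis of a body that completes j orbits for every k orbits
    of the planet, so its period is (k/j) times the planet's. Kepler's third
    law then gives the semi-major-axis ratio as (k/j)**(2/3)."""
    return a_planet_au * (k / j) ** (2.0 / 3.0)

# Assumption: Planet Nine at ~700 AU (the nominal 2016 value quoted above).
print(resonant_sma_au(700.0, 27, 17))  # ~514 AU, inside the simulated ETNO range
```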
Objects in perpendicular orbits with large semi-major axes
Planet Nine can deliver ETNOs into orbits roughly perpendicular to the ecliptic. Several objects with high inclinations, greater than 50°, and large semi-major axes, above 250 AU, have been observed. These orbits are produced when some low-inclination ETNOs enter a secular resonance with Planet Nine upon reaching low-eccentricity orbits. The resonance causes their eccentricities and inclinations to increase, delivering the ETNOs into perpendicular orbits with low perihelia, where they are more readily observed. The ETNOs then evolve into retrograde orbits with lower eccentricities, after which they pass through a second phase of high-eccentricity perpendicular orbits, before returning to low-eccentricity, low-inclination orbits. The secular resonance with Planet Nine involves a linear combination of the orbit's argument of perihelion and the longitudes of perihelion of the object and of Planet Nine: Δϖ − 2ω, where Δϖ is the difference between the object's and Planet Nine's longitudes of perihelion and ω is the object's argument of perihelion. Unlike the Kozai mechanism, this resonance causes objects to reach their maximum eccentricities when in nearly perpendicular orbits. In simulations conducted by Batygin and Morbidelli, this evolution was relatively common, with 38% of stable objects undergoing it at least once. The arguments of perihelion of these objects are clustered near or opposite Planet Nine's, and their longitudes of ascending node are clustered around 90° in either direction from Planet Nine's when they reach low perihelia. This is in rough agreement with observations, with the differences attributed to distant encounters with the known giant planets.
Orbits of high-inclination objects
A population of high-inclination TNOs with semi-major axes less than 100 AU may be generated by the combined effects of Planet Nine and the other giant planets. The ETNOs that enter perpendicular orbits have perihelia low enough for their orbits to intersect those of Neptune or the other giant planets. An encounter with one of these planets can lower an ETNO's semi-major axis to below 100 AU, where the object's orbit is no longer controlled by Planet Nine, leaving it in an orbit like those of the observed high-inclination TNOs. The predicted orbital distribution of the longest-lived of these objects is nonuniform. Most would have orbits with perihelia ranging from 5 AU to 35 AU and inclinations below 110°; beyond a gap with few objects would be others with inclinations near 150° and perihelia near 10 AU. It was previously proposed that these objects originated in the Oort cloud, a theoretical cloud of icy planetesimals surrounding the Sun at distances of 2,000 to 200,000 AU. However, in simulations without Planet Nine, an insufficient number are produced from the Oort cloud relative to observations. A few of the high-inclination TNOs may become retrograde Jupiter Trojans.
Oort cloud and comets
Planet Nine would alter the source regions and the inclination distribution of comets. In simulations of the migration of the giant planets described by the Nice model, fewer objects are captured in the Oort cloud when Planet Nine is included. Other objects would instead be captured in a cloud of objects dynamically controlled by Planet Nine. This Planet Nine cloud, made up of the ETNOs and the perpendicular objects, would extend over a broad range of semi-major axes and could contain a substantial population of objects. When the perihelia of objects in the Planet Nine cloud drop low enough for them to encounter the other planets, some would be scattered into orbits that enter the inner Solar System, where they could be observed as comets. If Planet Nine exists, these would make up roughly one third of the Halley-type comets.
Interactions with Planet Nine would also increase the inclinations of the scattered-disk objects that cross its orbit. This could result in more objects with moderate inclinations of 15–30° than are observed. The inclinations of the Jupiter-family comets derived from that population would likewise have a broader distribution than is observed. Recent estimates of a smaller mass and eccentricity for Planet Nine would reduce its effect on these inclinations.
2019 estimate
By February 2019, the total number of ETNOs that fit the original hypothesis, having semi-major axes of over 250 AU, had increased to fourteen objects. The orbital parameters for Planet Nine favored by Batygin and Brown after an analysis using these objects were: a semi-major axis of 400–500 AU; an orbital eccentricity of 0.15–0.3; an orbital inclination of around 20°; and a mass of about five Earth masses.
2021 estimate
In August 2021, Batygin and Brown reanalyzed the ETNO observations while accounting for observational biases; they found that observations were more likely in some directions than others. They stated that the observed orbital clustering "remains significant at a 99.6% confidence level". Combining observational biases with numerical simulations, they predicted the characteristics of Planet Nine: a semi-major axis of 300–520 AU; a perihelion of 240–385 AU; an orbital inclination of 11°–21°; and a mass of 6.2 Earth masses.
Reception
Batygin was cautious in interpreting the results of the simulation developed for his and Brown's research article, saying, "Until Planet Nine is caught on camera it does not count as being real. All we have now is an echo." In 2016, Brown put the odds for the existence of Planet Nine at about 90%. Greg Laughlin, one of the few researchers who knew in advance about the article, gave an estimate of 68.3%. Other skeptical scientists demand more data, in the form of additional KBOs to be analyzed, or final evidence through photographic confirmation. Brown, though conceding the skeptics' point, still thinks that there is enough data to mount a search for a new planet. The Planet Nine hypothesis is supported by several astronomers and academics. In January 2016, Jim Green, director of NASA's Science Mission Directorate, said, "the evidence is stronger now than it's been before". But Green also cautioned about the possibility of other explanations for the observed motion of distant ETNOs and, quoting Carl Sagan, he said, "extraordinary claims require extraordinary evidence." Massachusetts Institute of Technology professor Tom Levenson concluded that, for now, Planet Nine seems the only satisfactory explanation for everything now known about the outer regions of the Solar System. Astronomer Alessandro Morbidelli, who reviewed the research article for The Astronomical Journal, concurred, saying, "I don't see any alternative explanation to that offered by Batygin and Brown." Astronomer Renu Malhotra remains agnostic about Planet Nine, but has noted that she and her colleagues found that the orbits of ETNOs seem tilted in a way that is difficult to otherwise explain. "The amount of warp we see is just crazy," she said. "To me, it's the most intriguing evidence for Planet Nine I've run across so far." Other experts have varying degrees of skepticism. American astrophysicist Ethan Siegel, who previously speculated that planets may have been ejected from the Solar System during an early dynamical instability, is skeptical of the existence of an undiscovered planet in the Solar System.
In a 2018 article discussing a survey that did not find evidence of clustering of the ETNOs' orbits, he suggested that the previously observed clustering could have been the result of observational bias and claimed that most scientists think Planet Nine does not exist. Planetary scientist Hal Levison thinks that the chance of an ejected object ending up in the inner Oort cloud is about 2%, and speculates that many objects must have been thrown past the Oort cloud if one has entered a stable orbit. Further skepticism about the Planet Nine hypothesis arose in 2020, based on results from the Outer Solar System Origins Survey (OSSOS) and the Dark Energy Survey (DES), with OSSOS documenting over 800 trans-Neptunian objects and the DES discovering 316 new ones. Both surveys adjusted for observational bias and concluded that, among the objects observed, there was no evidence for clustering. The authors further explain that practically all of the objects' orbits can be explained by physical phenomena rather than by a ninth planet as proposed by Brown and Batygin. An author of one of the studies, Samantha Lawler, said the Planet Nine hypothesis proposed by Brown and Batygin "does not hold up to detailed observations", pointing to the much larger sample size of 800 objects compared with the original 14 and calling conclusive studies based on those objects "premature". She further suggested that these extreme orbits could instead be due to gravitational interactions with Neptune as it migrated outward early in the Solar System's history.
Alternative hypotheses
Nice Planet #5
Planet Nine has been proposed as a potential remnant of the early Solar System's evolution. According to the five-planet Nice model, the early Solar System contained five giant planets: Jupiter, Saturn, Uranus, Neptune, and a fifth, now-missing ice giant. Simulations of the Nice model suggest that gravitational interactions among these planets, coupled with interactions with a disk of planetesimals, led to the ejection of the fifth giant from the Solar System approximately 4 billion years ago. Some researchers propose that Planet Nine could be this fifth giant, lingering in a distant, eccentric orbit far beyond Neptune instead of having been entirely ejected from the Solar System. This hypothesis aligns with calculations suggesting Planet Nine's orbit would be stable over the Solar System's lifetime, supporting its survival as an outer-system object. The hypothesis that Planet Nine may be the fifth giant is bolstered by its proposed mass and orbital characteristics, which are consistent with those of an ice giant. Numerical simulations of the Nice model show that the ejection of the fifth giant often leaves a gravitational signature in the form of altered orbits for the remaining planets and small bodies. The observed clustering of certain trans-Neptunian objects (TNOs) has been cited as indirect evidence of Planet Nine's gravitational influence, possibly originating from its early interactions with the outer Solar System.
Temporary or coincidental clustering
The results of the Outer Solar System Origins Survey (OSSOS) suggest that the observed clustering is the result of a combination of observational bias and small-number statistics. OSSOS, a well-characterized survey of the outer Solar System with known biases, observed eight objects with large semi-major axes whose orbits were oriented in a wide range of directions.
After accounting for the observational biases of the survey, no evidence for the clustering of the arguments of perihelion (ω) identified by Trujillo and Sheppard was seen, and the orientations of the orbits of the objects with the largest semi-major axes were statistically consistent with being random. Pedro Bernardinelli and his colleagues also found that the orbital elements of the ETNOs found by the Dark Energy Survey showed no evidence of clustering. However, they also noted that the sky coverage and the number of objects found were insufficient to show that there was no Planet Nine. A similar result was found when these two surveys were combined with a survey by Trujillo and Sheppard. These results differed from an analysis of discovery biases in the previously observed ETNOs by Mike Brown. He found that, after observational biases were accounted for, the clustering of the longitudes of perihelion of 10 known ETNOs would be observed only 1.2% of the time if their actual distribution were uniform. When combined with the odds of the observed clustering of the arguments of perihelion, the probability was 0.025%. A later analysis of the discovery biases of the fourteen ETNOs used by Brown and Batygin determined the probability of the observed clustering of the longitudes of perihelion and the orbital pole locations to be 0.2%. Simulations of 15 known objects evolving under the influence of Planet Nine also revealed differences from observations. Cory Shankman and his colleagues included Planet Nine in a simulation of many clones (objects with similar orbits) of 15 objects with large semi-major axes and perihelia. While they observed alignment of the orbits opposite that of Planet Nine's for the objects with semi-major axes greater than 250 AU, clustering of the arguments of perihelion was not seen. Their simulations also showed that the perihelia of the ETNOs rose and fell smoothly, leaving many with perihelion distances between 50 and 70 AU, where none had been observed, and predicted that there would be many other unobserved objects. These included a large reservoir of high-inclination objects that would have been missed because most observations are made at small inclinations, and a large population of objects with perihelia so distant that they would be too faint to observe. Many of the objects were also ejected from the Solar System after encountering the other giant planets. The large unobserved populations and the loss of many objects led Shankman et al. to estimate that the mass of the original population was tens of Earth masses, requiring that a much larger mass had been ejected during the early Solar System. Shankman et al. concluded that the existence of Planet Nine is unlikely and that the currently observed alignment of the existing ETNOs is a temporary phenomenon that will disappear as more objects are detected.
Inclination instability in a massive disk
Ann-Marie Madigan and Michael McCourt postulate that an inclination instability in a distant massive belt, hypothetically termed a Zderic–Madigan (ZM) belt, is responsible for the alignment of the arguments of perihelion of the ETNOs. An inclination instability could occur in a disk of particles on high-eccentricity orbits around a central body, such as the Sun. The self-gravity of this disk would cause its spontaneous organization, increasing the inclinations of the objects and aligning their arguments of perihelion, forming the disk into a cone above or below the original plane.
This process would require an extended time and a significant disk mass, on the order of a billion years for a 1–10 Earth-mass disk. Ann-Marie Madigan argues that some already-discovered trans-Neptunian objects, such as Sedna and 2012 VP113, may be members of this disk. If this is the case, there would likely be thousands of similar objects in the region. Mike Brown considers Planet Nine a more probable explanation, noting that current surveys have not revealed a scattered disk large enough to produce an inclination instability. In Nice-model simulations of the Solar System that included the self-gravity of the planetesimal disk, an inclination instability did not occur. Instead, the simulations produced a rapid precession of the objects' orbits, and most of the objects were ejected on too short a timescale for an inclination instability to occur. Madigan and colleagues have since shown that the inclination instability would require 20 Earth masses in a disk of objects with semi-major axes of a few hundred AU. An inclination instability in such a disk could also reproduce the observed gap in the perihelion distances of the extreme TNOs, and, given sufficient time, the observed apsidal alignment following the instability. Simulations show that the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) should be able to supply strong evidence for or against the ZM belt.
Shepherding by a massive disk
Antranik Sefilian and Jihad Touma propose that a massive disk of moderately eccentric TNOs is responsible for the clustering of the longitudes of perihelion of the ETNOs. This disk would contain 10 Earth masses of TNOs with aligned orbits and eccentricities that increase with semi-major axis, ranging from zero to 0.165. The gravitational effects of the disk would offset the forward precession driven by the giant planets, so that the orbital orientations of its individual objects are maintained. The orbits of objects with high eccentricities, such as the observed ETNOs, would be stable and have roughly fixed orientations, or longitudes of perihelion, if their orbits were anti-aligned with this disk. Although Brown thinks the proposed disk could explain the observed clustering of the ETNOs, he finds it implausible that the disk could survive over the age of the Solar System. Batygin thinks there is insufficient mass in the Kuiper belt to explain the formation of the disk, and asks, "why would the protoplanetary disk end near 30 AU and restart beyond 100 AU?"
Planet in lower-eccentricity orbit
The Planet Nine hypothesis includes a set of predictions about the mass and orbit of the planet. An alternative hypothesis predicts a planet with different orbital parameters. Renu Malhotra, Kathryn Volk, and Xianyu Wang have proposed that the four detached objects with the longest orbital periods, those with the largest perihelia and semi-major axes, are in n:1 or n:2 mean-motion resonances with a hypothetical planet. Two other objects with large semi-major axes are also potentially in resonance with this planet. Their proposed planet could be on a lower-eccentricity, low-inclination orbit, with eccentricity e < 0.18 and inclination i ≈ 11°. The eccentricity is limited in this case by the requirement that close approaches between these objects and the planet be avoided. If the ETNOs are in periodic orbits of the third kind, with their stability enhanced by the libration of their arguments of perihelion, the planet could be in a higher-inclination orbit, with i ≈ 48°.
Unlike Batygin and Brown, Malhotra, Volk and Wang do not specify that most of the distant detached objects would have orbits anti-aligned with the massive planet.
Alignment due to the Kozai mechanism
Trujillo and Sheppard argued in 2014 that a massive planet in a circular orbit with an average distance between 200 and 300 AU was responsible for the clustering of the arguments of perihelion of twelve TNOs with large semi-major axes. They had identified a clustering near zero degrees of the arguments of perihelion of the orbits of these twelve TNOs, all with large perihelia and semi-major axes. After numerical simulations showed that the arguments of perihelion should circulate at varying rates, leaving them randomized after billions of years, they suggested that a massive planet in a circular orbit at a few hundred astronomical units was responsible for this clustering. This massive planet would cause the arguments of perihelion of the TNOs to librate about 0° or 180° via the Kozai mechanism, so that their orbits crossed the plane of the planet's orbit near perihelion and aphelion, their closest and farthest points from the planet. In numerical simulations that included a 2–15 Earth-mass body in a circular, low-inclination orbit between 200 and 300 AU, the arguments of perihelion of Sedna and 2012 VP113 librated around 0° for billions of years (although the lower-perihelion objects did not), and they underwent periods of libration with a Neptune-mass object in a high-inclination orbit at 1,500 AU. Another process, such as a passing star, would be required to account for the absence of objects with arguments of perihelion near 180°. These simulations demonstrated the basic idea of how a single large planet can shepherd the smaller TNOs into similar types of orbits. They were proof-of-concept simulations that did not yield a unique orbit for the planet; as the authors stated, there are many possible orbital configurations the planet could have. Thus they did not fully formulate a model that successfully incorporated all the clustering of the ETNOs with an orbit for the planet, but they were the first to notice that there was a clustering in the orbits of TNOs and that the most likely cause was an unknown, massive, distant planet. Their work is similar to the way Alexis Bouvard noticed that Uranus' motion was peculiar and suggested that gravitational forces from an unknown eighth planet were the likely cause, which led to the discovery of Neptune. Raúl and Carlos de la Fuente Marcos proposed a similar model, but with two distant planets in resonance. An analysis by Carlos and Raúl de la Fuente Marcos with Sverre J. Aarseth confirmed that the observed alignment of the arguments of perihelion could not be due to observational bias. They speculated that it was instead caused by an object with a mass between that of Mars and Saturn orbiting a few hundred AU from the Sun. Like Trujillo and Sheppard, they theorized that the TNOs are kept bunched together by a Kozai mechanism, and compared their behavior to that of Comet 96P/Machholz under the influence of Jupiter. They also struggled to explain the orbital alignment using a model with only one unknown planet, and therefore suggested that this planet is itself in resonance with a more massive world farther from the Sun.
In their article, Brown and Batygin noted that the alignment of arguments of perihelion near 0° or 180° via the Kozai mechanism requires a ratio of the semi-major axes nearly equal to one, indicating that multiple planets with orbits tuned to the data set would be required, making this explanation too unwieldy.
Primordial black hole
In 2019, Jakub Scholtz and James Unwin proposed that a primordial black hole was responsible for the clustering of the orbits of the ETNOs. Their analysis of OGLE gravitational lensing data revealed a population of planetary-mass objects in the direction of the galactic bulge more numerous than the local population of stars. They propose that instead of being free-floating planets, these objects are primordial black holes. Since their estimate of the size of this population is greater than the population of free-floating planets estimated from planetary formation models, they argue that the capture of a hypothetical primordial black hole would be more probable than the capture of a free-floating planet. This could also explain why an object responsible for perturbing the orbits of the ETNOs, if it exists, has yet to be seen. A detection method was proposed in the paper: the black hole itself would be too cold to be detected against the CMB, but its interaction with surrounding dark matter would produce gamma rays detectable by the Fermi Large Area Telescope (Fermi-LAT). Konstantin Batygin commented that while it is possible for Planet Nine to be a primordial black hole, there is currently not enough evidence to make this idea more plausible than any other alternative. Edward Witten proposed a fleet of probes accelerated by radiation pressure that could locate a Planet Nine primordial black hole; however, Thiem Hoang and Avi Loeb showed that any signal would be dominated by noise from the interstellar medium. Amir Siraj and Avi Loeb proposed a method for the Vera C. Rubin Observatory to detect flares from any low-mass black hole in the outer Solar System, including a possible Planet Nine primordial black hole.
Modified Newtonian dynamics
In 2023, it was argued that a gravity theory known as modified Newtonian dynamics (MOND), which can explain galactic rotation without invoking dark matter, can provide an alternative explanation using secular approximations. It predicts that the major axes of the KBO orbits will be aligned with the direction toward the Galactic Center and that the orbits will cluster in phase space, in agreement with observations.
Detection attempts
Visibility and location
Due to its extreme distance from the Sun, Planet Nine would reflect little sunlight, potentially evading telescope sightings. It is expected to have an apparent magnitude fainter than 22, making it at least 600 times fainter than Pluto. If Planet Nine exists and is close to perihelion, astronomers could identify it from existing images. At aphelion, the largest telescopes would be required, but if the planet is currently located in between, many observatories could spot Planet Nine. Statistically, the planet is more likely to be close to its aphelion, at a distance greater than 600 AU, because objects move more slowly when near aphelion, in accordance with Kepler's second law. A 2019 study estimated that Planet Nine, if it exists, may be smaller and closer than originally thought. This would make the hypothetical planet brighter and easier to spot, with an apparent magnitude of 21–22.
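The magnitude figures above translate into flux ratios through the standard relation that five magnitudes correspond to a factor of 100 in brightness. A short check, assuming an apparent magnitude of roughly 15 for Pluto (an approximation; Pluto's actual magnitude varies):

```python
def times_fainter(m_faint: float, m_bright: float) -> float:
    """Brightness ratio implied by a magnitude difference:
    each 5 magnitudes is a factor of 100 in flux."""
    return 10.0 ** (0.4 * (m_faint - m_bright))

print(times_fainter(22.0, 15.0))  # ~630: consistent with "at least 600 times fainter"

# Reflected sunlight dims as 1/d**4 (sunlight travels out to the planet and
# back), so doubling the distance costs a factor of 16, about 3 magnitudes.
print((800.0 / 400.0) ** 4)  # 16.0
```

This steep distance dependence is why the statistically favored near-aphelion case is so much harder to detect than the near-perihelion case.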
Observation and analysis of the orbital dynamics of Kuiper Belt objects constrain the possible orbital parameters of a Planet Nine, and at the current rate of new observations, University of Michigan professor Fred Adams believes enough data will have been gathered to pinpoint Planet Nine or rule out its existence by 2035. Searches of existing data The search of databases of stellar objects by Batygin and Brown has already excluded much of the sky along Planet Nine's predicted orbit. The remaining regions include the direction of its aphelion, where it would be too faint to be spotted by these surveys, and near the plane of the Milky Way, where it would be difficult to distinguish from the numerous stars. This search included archival data from the Catalina Sky Survey to magnitude 21–22, Pan-STARRS to magnitude 21.5, and infrared data from the Wide-field Infrared Survey Explorer (WISE) satellite. In 2021, they also searched the first three years of data from the Zwicky Transient Facility (ZTF) without identifying Planet Nine. The search of the ZTF data alone has ruled out 56% of the parameter space for possible Planet Nine positions. Because the excluded positions mostly corresponded to orbits with small semi-major axes, the expected orbit of Planet Nine was pushed slightly further away. Other researchers have been conducting searches of existing data. David Gerdes, who helped develop the camera used in the Dark Energy Survey, claims that software designed to identify distant Solar System objects such as could find Planet Nine if it was imaged as part of that survey, which covered a quarter of the southern sky. Michael Medford and Danny Goldstein, graduate students at the University of California, Berkeley, are also examining archived data using a technique that combines images taken at different times. Using a supercomputer, they will offset the images to account for the calculated motion of Planet Nine, allowing many exposures of a faint moving object to be combined to produce a brighter image. A search combining multiple images collected by WISE and NEOWISE has also been conducted without detecting Planet Nine. This search covered regions of the sky away from the galactic plane at the "W1" wavelength (the 3.4 μm wavelength used by WISE) and is estimated to be able to detect a 10-Earth-mass object out to 800–900 AU. Malena Rice and Gregory Laughlin applied a targeted shift-stacking search algorithm to analyze data from TESS sectors 18 and 19, looking for Planet Nine and candidate outer Solar System objects. Their search generated no serious evidence for the presence of a distant planet, but it produced 17 new outer Solar System body candidates located at geocentric distances in the range 80–200 AU that need follow-up observations with ground-based telescopes for confirmation. Early results from a survey with the William Herschel Telescope (WHT) aimed at recovering these distant TNO candidates have failed to confirm two of them. By 2022, a comparison between IRAS and AKARI data had yielded no Planet Nine detection. It was noted that far-infrared data over the major portion of the sky are heavily contaminated by emission from galactic nebulae, making detection of Planet Nine's thermal emission problematic close to the galactic plane or bulge. Ongoing searches Because the planet is predicted to be visible in the Northern Hemisphere, the primary search is expected to be carried out using the Subaru Telescope, which has both an aperture large enough to see faint objects and a wide field of view to shorten the search.
Two teams of astronomers, Batygin and Brown as well as Trujillo and Sheppard, are undertaking this search together, and both teams expect the search to take up to five years. Brown and Batygin initially narrowed the search for Planet Nine to roughly 2,000 square degrees of sky near Orion, a swath of space that Batygin thinks could be covered in about 20 nights by the Subaru Telescope. Subsequent refinements by Batygin and Brown have reduced the search space to 600–800 square degrees of sky. In December 2018, they spent four half-nights and three full nights observing with the Subaru Telescope. Because the hypothetical planet has proven so elusive, it has been proposed that different detection methods be used in the search for a super-Earth-mass planet, ranging from using different telescopes to deploying multiple spacecraft. In late April and early May 2020, Scott Lawrence and Zeeve Rogoszinski proposed the spacecraft approach, as multiple spacecraft would have advantages that land-based telescopes do not have. Radiation Although a distant planet such as Planet Nine would reflect little light, due to its large mass it would still be radiating the heat from its formation as it cools. At its estimated temperature of , the peak of its emissions would be at infrared wavelengths; its apparent magnitude in the V filter (540 nm wavelength) would be 21.7. This radiation signature could be detected by Earth-based submillimeter telescopes, such as ALMA, and a search could be conducted by cosmic microwave background experiments operating at millimeter wavelengths. A search of part of the sky using archived data from the Atacama Cosmology Telescope has not detected Planet Nine. Jim Green of NASA's Science Mission Directorate is optimistic that it could be observed by the James Webb Space Telescope, the successor to the Hubble Space Telescope. Citizen science The Zooniverse "Catalina Outer Solar System Survey" project, operating from August 2020 to April 2023, used archived data from the Catalina Sky Survey to search for TNOs. Attempts to predict location Measurements of Saturn's orbit by the Cassini probe Precise observations of Saturn's orbit using data from Cassini suggest that Planet Nine could not be in certain sections of its proposed orbit, because its gravity would cause a noticeable effect on Saturn's position. This data neither proves nor disproves that Planet Nine exists. An initial analysis by Fienga, Laskar, Manche, and Gastineau, using Cassini data to search for Saturn's orbital residuals (small differences between its observed orbit and the orbit predicted from the Sun and the known planets), was inconsistent with Planet Nine being located at a true anomaly, the location along its orbit relative to perihelion, of −130° to −110° or −65° to 85°. The analysis, using Batygin and Brown's orbital parameters for Planet Nine, suggests that the lack of perturbations to Saturn's orbit is best explained if Planet Nine is located at a true anomaly of . At this location, Planet Nine would be approximately from the Sun, with right ascension close to 2h and declination close to −20°, in Cetus. In contrast, if the putative planet is near aphelion, it would be located near right ascension 3.0h to 5.5h and declination −1° to 6°. A later analysis of Cassini data by astrophysicists Matthew Holman and Matthew Payne tightened the constraints on possible locations of Planet Nine. Holman and Payne developed a more efficient model that allowed them to explore a broader range of parameters than the previous analysis.
The parameters identified using this technique to analyze the Cassini data were then intersected with Batygin and Brown's dynamical constraints on Planet Nine's orbit. Holman and Payne concluded that Planet Nine is most likely to be located within 20° of RA = 40°, Dec = −15°, in an area of the sky near the constellation Cetus. William Folkner, a planetary scientist at the Jet Propulsion Laboratory (JPL), has stated that the Cassini spacecraft was not experiencing unexplained deviations in its orbit around Saturn. An undiscovered planet would affect the orbit of Saturn, not Cassini. This could produce a signature in the measurements of Cassini, but JPL has seen no unexplained signatures in Cassini data. Analysis of Pluto's orbit An analysis in 2016 of Pluto's orbit by Holman and Payne found perturbations much larger than predicted by Batygin and Brown's proposed orbit for Planet Nine. Holman and Payne suggested three possible explanations: systematic errors in the measurements of Pluto's orbit; an unmodeled mass in the Solar System, such as a small planet in the range of 60– (potentially explaining the Kuiper cliff); or a planet more massive or closer to the Sun than the planet predicted by Batygin and Brown. Orbits of nearly parabolic comets An analysis of the orbits of comets with nearly parabolic orbits identified five new comets with hyperbolic orbits that approach the nominal orbit of Planet Nine described in Batygin and Brown's initial article. If these orbits are hyperbolic due to close encounters with Planet Nine, the analysis estimates that Planet Nine is currently near aphelion, with a right ascension of 83–90° and a declination of 8–10°. Scott Sheppard, who is skeptical of this analysis, notes that many different forces influence the orbits of comets. Occultations by Jupiter trojans Malena Rice and Gregory Laughlin have proposed that a network of telescopes be built to detect occultations by Jupiter trojans. The timing of these occultations would provide precise astrometry of these objects, enabling their orbits to be monitored for variations due to the tide from Planet Nine. Possible encounter with interstellar meteor In May 2022, it was suggested that the peculiar meteor CNEOS 2014-01-08 may have entered an Earth-crossing orbit after a swing-by of Planet Nine. If that hypothesis is true, the trajectory back-tracing of CNEOS 2014-01-08 implies that Planet Nine may currently be located in the constellation Aries, at right ascension 53° and declination 9.2°. Attempts to predict the semi-major axis An analysis by Sarah Millholland and Gregory Laughlin identified a pattern of commensurabilities (ratios between the orbital periods of pairs of objects consistent with both being in resonance with another object) among the ETNOs. They identify five objects that would be near resonances with Planet Nine if it had a semi-major axis of 654 AU: Sedna (3:2), 474640 Alicanto (3:1), (4:1), (5:1), and (5:1). They identify this planet as Planet Nine but propose a different orbit with an eccentricity e ≈ 0.5, inclination i ≈ 30°, argument of perihelion ω ≈ 150°, and longitude of ascending node Ω ≈ 50° (the last differs from Brown and Batygin's value of 90°). Carlos and Raúl de la Fuente Marcos also note commensurabilities among the known ETNOs, similar to those of the Kuiper belt, where accidental commensurabilities occur due to objects in resonances with Neptune. They find that some of these objects would be in 5:3 and 3:1 resonances with a planet that had a semi-major axis of ≈700 AU.
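The commensurability argument rests on Kepler's third law, which fixes the ratio of orbital periods from the ratio of semi-major axes (P² ∝ a³). The sketch below, a minimal illustration assuming Sedna's semi-major axis is about 506 AU (a value not given in this article; the 654 AU planet and the 3:2 ratio are from the analysis described above), checks how close the periods come to that ratio:

    # Minimal check of a mean-motion commensurability via Kepler's third law.
    # Assumes Sedna's semi-major axis ~506 AU (an assumed value, not from this
    # article); 654 AU and 3:2 come from Millholland and Laughlin's analysis.

    def period_years(a_au: float) -> float:
        """Orbital period from Kepler's third law (P in years, a in AU)."""
        return a_au ** 1.5

    a_planet = 654.0   # proposed Planet Nine semi-major axis, AU
    a_sedna = 506.0    # assumed semi-major axis for Sedna, AU

    ratio = period_years(a_planet) / period_years(a_sedna)
    print(f"Period ratio planet:Sedna = {ratio:.3f}  (3:2 would be 1.500)")

The ratio comes out near 1.47, close enough to 3:2 that the chaotic transfer between neighboring resonances discussed below matters when judging such matches.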
Three objects with smaller semi-major axes near 172 AU (among them (594337) 2016 QU89) have also been proposed to be in resonance with Planet Nine. These objects would be in resonance and anti-aligned with Planet Nine if it had a semi-major axis of 315 AU, below the range proposed by Batygin and Brown. Alternatively, they could be in resonance with Planet Nine but have orbital orientations that circulate instead of being confined, if it had a semi-major axis of 505 AU. A later analysis by Elizabeth Bailey, Michael Brown, and Konstantin Batygin found that if Planet Nine is in an eccentric and inclined orbit, the capture of many of the ETNOs in higher-order resonances and their chaotic transfer between resonances prevent the identification of Planet Nine's semi-major axis using current observations. They also determined that the odds of the first six objects observed being in N/1 or N/2 period ratios with Planet Nine are less than 5% if it has an eccentric orbit. A 2025 study by Amir Siraj, Christopher F. Chyba, and Scott Tremaine, using an expanded sample of 51 ETNOs to inform 300 simulations in the Rebound program, proposed new orbital characteristics for Planet Nine: a semi-major axis of 290 ± 30 AU, an eccentricity of 0.29 ± 0.13, and an inclination of roughly 6°. The authors noted that this orbit would put Planet Nine in the field of view of the Rubin Observatory's early observations. In late 2020, it was determined that HD 106906 b, a candidate exoplanet, has an eccentric orbit that takes it outside the debris disk of its binary host stars. Its orbit appears to be similar to the predictions made for Planet Nine's semi-major axis, and it may serve as a proxy for Planet Nine that helps explain how such planetary orbits evolve, although this exoplanet is well over ten times as massive as Jupiter. Naming Planet Nine does not have an official name and will not receive one unless its existence is confirmed via imaging. Only two planets, Uranus and Neptune, have been discovered in the Solar System during recorded history. However, many minor planets, including dwarf planets such as Pluto, asteroids, and comets, have been discovered and named. Consequently, there is a well-established process for naming newly discovered Solar System objects. If Planet Nine is observed, the International Astronomical Union will certify a name, with priority usually given to a name proposed by its discoverers. It is likely to be a name chosen from Roman or Greek mythology. In their original article, Batygin and Brown simply referred to the object as "perturber", and only in later press releases did they use "Planet Nine". They have also used the names "Jehoshaphat" and "George" (a reference to William Herschel's proposed name for Uranus) for Planet Nine. Brown has stated: "We actually call it Phattie when we're just talking to each other." In 2018, Batygin also informally suggested, based on a petition on Change.org, naming the planet after singer David Bowie, and naming any potential moons of the planet after characters from Bowie's song catalogue, such as Ziggy Stardust and Major Tom. Jokes have been made connecting "Planet Nine" to Ed Wood's 1959 science-fiction horror film Plan 9 from Outer Space. In connection with the Planet Nine hypothesis, the film title has found its way into academic discourse. In 2016, an article titled Planet Nine from Outer Space, about the hypothesized planet in the outer region of the Solar System, was published in Scientific American.
Several conference talks since then have used the same wordplay, as did a lecture by Mike Brown given in 2019. Persephone, the wife of the deity Pluto, had been a popular name used in science fiction for a planet beyond Neptune, most notably in the works of Arthur C. Clarke and Larry Niven. However, it is unlikely that Planet Nine or any other conjectured planet beyond Neptune will be given the name Persephone once its existence is confirmed, as it is already the name of the asteroid 399 Persephone. In 2017, physicist Lorenzo Iorio informally suggested naming the hypothetical planet "Telisto", from the ancient Greek word "τήλιστος" for "farthest" or "most remote". Another classical mythological name, suggested by Jet Propulsion Laboratory physicist Makan Mohageg, is Chronos, after the Greek personification of time; Mohageg's method of finding Planet Nine would revolve around precision timing. In 2018, planetary scientist Alan Stern objected to the name Planet Nine, saying, "It is an effort to erase Clyde Tombaugh's legacy and it's frankly insulting", suggesting the name Planet X until its discovery. He signed a statement with 34 other scientists saying, "We further believe the use of this term [Planet Nine] should be discontinued in favor of culturally and taxonomically neutral terms for such planets, such as Planet X, Planet Next, or Giant Planet Five." According to Brown, "'Planet X' is not a generic reference to some unknown planet, but a specific prediction of Lowell's which led to the (accidental) discovery of Pluto. Our prediction is not related to this prediction."
Physical sciences
Solar System
Astronomy
68611391
https://en.wikipedia.org/wiki/Aquamarine%20%28gem%29
Aquamarine (gem)
Aquamarine is a pale-blue to light-green variety of the beryl family, with its name relating to water and the sea. The color of aquamarine can be changed by heat treatment intended to enhance its appearance (though this practice is frowned upon by collectors and jewelers). It is the birthstone of March. Aquamarine is a fairly common gemstone, making it more affordable than other gems in the beryl family. Overall, its value is determined by weight, color, cut, and clarity. It is transparent to translucent and possesses a hexagonal crystal system. Aquamarine mainly forms in granite pegmatites and hydrothermal veins, a lengthy process that can take millions of years. Aquamarine occurs in many countries around the world and is most commonly used for jewelry, decoration, and its supposed properties. Aquamarine is mainly extracted through open-pit mining; however, underground mining can also be used to access aquamarine reserves. Aquamarine is a durable gemstone, but it is recommended to store it on its own to prevent damage and scratches. Famous aquamarines include the Dom Pedro, the Roosevelt Aquamarine, the Hirsch Aquamarine, Queen Elizabeth's Tiara, Meghan Markle's ring, and the Schlumberger bow. Name and etymology The name aquamarine comes from aqua, the Latin word for "water", and marine, deriving from the Latin marinus ("of the sea"). The word aquamarine was first used in the year 1677. The word aquamarine has been used as a modifier for other minerals like aquamarine tourmaline, aquamarine emerald, aquamarine chrysolite, aquamarine sapphire, and aquamarine topaz. Physical properties Aquamarine is blue with hues of green, caused by trace amounts of iron within the crystal structure. Its color can vary from pale to vibrant, and the stone ranges from transparent to translucent. Better transparency in an aquamarine gemstone means that light passes through the crystal with less interference. Aquamarine crystallizes in the hexagonal crystal system, forming prismatic crystals with a hexagonal cross-section. These crystals can range from microscopic to enormous in size and frequently feature faces with vertical striations. The lustre of aquamarine ranges from vitreous to resinous; when cut and polished correctly, it can have a glass-like brilliance and sheen. Chemical composition Aquamarine has a chemical composition of Be3Al2Si6O18, also containing Fe2+. It belongs to the beryl family, being a beryllium aluminum silicate mineral, and is closely related to emerald, morganite, and heliodor. Aquamarine is chemically stable and resistant to most common chemicals and acids. It has a hardness of 7.5–8 on the Mohs scale of mineral hardness, which makes it a very suitable gem for everyday wear. While aquamarine often contains no inclusions, it may carry some, such as mica, hematite, biotite, rutile, pyrite, or trapped saltwater. Geological formation Aquamarine mainly forms in granite pegmatites (coarse-grained igneous rock) and hydrothermal veins. Pegmatites arise from the residual liquid left behind after granitic magma crystallizes. These residual fluids, rich in volatiles and in elements such as silicon, aluminum, and beryllium, become concentrated as the magma cools and solidifies. Aquamarine may also be formed by hydrothermal fluids, which are hot, mineral-rich solutions containing dissolved minerals and metals that move through fissures and cavities in the Earth's crust.
Fractures, faults, and veins are just a few of the geological environments with which hydrothermal systems can be associated. Beryllium is a necessary component for the formation of aquamarine, a type of beryl. Although beryllium is a relatively uncommon element in the Earth's crust, it can be found in concentrated form in some geological settings, including beryllium-rich hydrothermal systems and granite pegmatites, which contain large amounts of beryllium-bearing minerals. The dissolved elements start to precipitate out of solution and form crystals as the hydrothermal fluids cool and come into contact with the right minerals and conditions. Crystals of beryl, which include aquamarine, begin to form in pegmatite veins and in fissures or cavities in the host rock. Aquamarine crystals grow over long periods, which enables them to take on their distinctive hexagonal prismatic shape; this is a very slow process that can take millions of years. The settings in which aquamarine forms vary and may lead to variations in gem quality, size, and color. Value The value of aquamarine is determined by its weight, color, cut, and clarity. Due to its relative abundance, aquamarine is comparatively less expensive than other gemstones within the beryl group, such as emerald or bixbite (red beryl); however, it is typically more expensive than similarly colored gemstones such as blue topaz. Maxixe is a rarer, deep-blue variant of aquamarine, but its color can fade with exposure to sunlight. The color of maxixe is caused by NO3. A dark-blue maxixe color can be produced in green, pink, or yellow beryl by irradiating it with high-energy radiation (gamma rays, neutrons, or even X-rays). Naturally occurring blue-hued aquamarine specimens are more expensive than those that have undergone heat treatment to reduce yellow tones caused by ferric iron. Cut aquamarines that are over 25 carats will have a lower price per carat than smaller ones of the same quality. Overall, the quality and color will vary depending on the source of the gem. In culture Aquamarine is the birthstone for the month of March. It has historically been used as a symbol of youth and happiness due to its color, which, along with its name, has led Western culture to connect it with the ocean. Ancient tales claimed that aquamarine came from the treasure chests of mermaids, which led sailors to use the gemstone as a lucky charm against shipwreck. Additionally, ancient Romans believed the stone had healing properties, because it is almost invisible when submerged in water. The Chinese used it to make seals and showpiece dolls, and the Japanese used it to make netsuke. The Egyptians, Greeks, Hebrews, and Sumerians all believed that aquamarine stones were worn by the High Priest of the Second Temple; it was said that these stones were engraved to represent the six tribes of Israel. Greeks also engraved designs into aquamarine two thousand years ago, turning the stones into intaglios. In the modern era, aquamarine is mainly used for jewelry, decoration, and its supposed properties. It can be cut and shaped into rings, earrings, necklaces, and bracelets. Aquamarine became the state gemstone of Colorado in 1971. Occurrence Aquamarine can be found in countries including Afghanistan, China, Kenya, Pakistan, Russia, Mozambique, the United States, Brazil, Nigeria, Madagascar, Zambia, Tanzania, Sri Lanka, Malawi, India, Zimbabwe, Australia, Myanmar, and Namibia. The Brazilian state of Minas Gerais is a major source of aquamarine.
Aquamarine can mostly be found in granite pegmatites. It can also be found in veins of metamorphic rocks that were mineralized by hydrothermal activity. The largest known example is the Dom Pedro aquamarine, found in Pedra Azul, Minas Gerais, Brazil, in the late 1980s. Cut from an aquamarine crystal that originally weighed about 100 pounds, it weighs roughly 4.6 pounds and measures 10,363 carats. It resides in the National Museum of Natural History in Washington, D.C. Mining and extraction The initial stages of the aquamarine mining process involve prospecting and exploration, that is, finding prospective locations or regions with aquamarine reserves. Geological mapping, remote sensing, sampling, and other methods are used by geologists and mining firms to locate potentially aquamarine-bearing geological formations and structures. Preparation of the site is the next step, which includes removing any vegetation, leveling the land, and constructing facilities such as access roads and workspaces. It is possible to mine aquamarine using both open-pit and underground techniques, depending on the size of the operation, the features of the deposit, and environmental conditions. The most common technique for extracting aquamarine on a large scale is open-pit mining. In order to reveal the aquamarine-bearing ore, the soil, vegetation, and rock cover must be removed; the ore is then extracted using trucks, bulldozers, and excavators. Underground mining may occasionally be used to reach aquamarine reserves. This process entails digging shafts and tunnels to reach the ore bodies or veins that contain the gems. When the aquamarine deposit is deep or the surrounding rock is too hard for open-pit extraction, underground mining is used, even though it can be more difficult and expensive than open-pit mining. After extraction, the ore containing aquamarine is delivered to a processing plant. To separate the aquamarine crystals from the surrounding rock and other minerals, the ore is crushed, processed, and occasionally cleaned. The aquamarine can be concentrated and purified using a variety of methods, such as magnetic separation, froth flotation, and gravity separation. The aquamarine crystals are then sorted according to size, shape, color, and clarity following the initial processing. The gemstones are assessed and graded by gemologists and experts according to predetermined standards, such as the four C's (color, clarity, cut, and carat weight). Only the best aquamarine crystals are chosen for use in gemstone jewelry. Care and maintenance Aquamarine is classified as a durable gem; however, it may still be damaged. In storage, it is advised to keep it on its own, away from other gemstones, to prevent scratches. Warm soapy water and a soft brush are the best means of cleaning this gemstone, though ultrasonic cleaners are also relatively safe for aquamarine. Alternative uses Although aquamarine is mainly used for jewelry, aquamarine powder has proven to be a beneficial ingredient in cosmetics. It has a binding and skin-protecting function that helps shield the skin from external influences. Notable examples
Physical sciences
Silicate minerals
Earth science
51503872
https://en.wikipedia.org/wiki/Free%20neutron%20decay
Free neutron decay
When embedded in an atomic nucleus, neutrons are (usually) stable particles. Outside the nucleus, free neutrons are unstable and have a mean lifetime of about 880 seconds (a little under 15 minutes). Therefore, the half-life for this process (which differs from the mean lifetime by a factor of ln 2 ≈ 0.693) is about 610 seconds (a little over 10 minutes). The beta decay of the neutron described in this article can be notated at four slightly different levels of detail, as shown in four layers of Feynman diagrams in a section below. In the net reaction, a neutron decays into a proton, an electron, and an electron antineutrino: n → p + e− + ν̄e. At a finer level of detail, the neutron first emits a W− boson, n → p + W−; the hard-to-observe W− quickly decays into an electron and its matching antineutrino, W− → e− + ν̄e. The net subatomic reaction shown above depicts the process as it was first understood, in the first half of the 20th century. The boson (W−) vanished so quickly that it was not detected until much later. Later, beta decay was understood to occur by the emission of a weak boson (W−), sometimes called a charged weak current. Beta decay specifically involves the emission of a W− boson from one of the down quarks hidden within the neutron, thereby converting the down quark into an up quark and consequently the neutron into a proton. The following diagram gives a summary sketch of the beta decay process according to the present level of understanding. For diagrams at several levels of detail, see § Decay process, below. Energy budget For the free neutron, the decay energy for this process (based on the rest masses of the neutron, proton and electron) is about 0.782 MeV. That is the difference between the rest mass of the neutron and the sum of the rest masses of the products. That difference has to be carried away as kinetic energy. The maximal energy of the beta-decay electron (in the process wherein the neutrino receives a vanishingly small amount of kinetic energy) has been measured at about 0.782 MeV. The latter number is not well-enough measured to determine the comparatively tiny rest mass of the neutrino (which must in theory be subtracted from the maximal electron kinetic energy); furthermore, neutrino mass is constrained by many other methods. A small fraction (about 1 in 1,000) of free neutrons decay with the same products, but add an extra particle in the form of an emitted gamma ray: n → p + e− + ν̄e + γ. This gamma ray may be thought of as a sort of "internal bremsstrahlung" that arises as the emitted beta particle (electron) interacts with the charge of the proton in an electromagnetic way. In this process, some of the decay energy is carried away as photon energy. Gamma rays produced in this way are also a minor feature of beta decays of bound neutrons, that is, those within a nucleus. A very small minority of neutron decays (about four per million) are so-called "two-body (neutron) decays", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 eV of energy necessary to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it as a neutral hydrogen atom (one of the "two bodies"). In this type of free neutron decay, essentially all of the neutron decay energy is carried off by the antineutrino (the other "body"). The transformation of a free proton to a neutron (plus a positron and a neutrino) is energetically impossible, since a free neutron has a greater mass than a free proton. However, see proton decay. Decay process viewed from multiple levels Understanding of the beta decay process developed over several years, with the initial understanding of Enrico Fermi and colleagues starting at the "superficial" first level in the diagram below.
Current understanding of weak processes rests at the fourth level, at the bottom of the chart, where the nucleons (the neutron and its successor proton) are largely ignored, and attention focuses only on the interaction between two quarks and a charged boson, with the decay of the boson almost treated as an afterthought. Because the charged weak boson (W−) vanishes so quickly, it was not actually observed during the first half of the 20th century, so the diagram at level 1 omits it; even at present it is for the most part inferred from its after-effects. In summary, the four levels are: level 1, n → p + e− + ν̄e; level 2, n → p + W−, followed by W− → e− + ν̄e; level 3, a down quark inside the neutron (udd) emits the boson, d → u + W−, turning the neutron into a proton (uud), followed by W− → e− + ν̄e; and level 4, the same quark-level process, d → u + W− and W− → e− + ν̄e, considered without reference to the surrounding nucleons. Neutron lifetime puzzle While the neutron lifetime has been studied for decades, there is currently a lack of consensus on its exact value, due to different results from two experimental methods ("bottle" versus "beam"). The "neutron lifetime anomaly" was discovered after the refinement of experiments with ultracold neutrons. While the error margins of the two methods once overlapped, increasing refinement of technique, which should have resolved the issue, has failed to demonstrate convergence to a single value. The difference in mean lifetime values obtained as of 2014 was approximately 9 seconds. Further, a prediction of the value based on quantum chromodynamics, as of 2018, is still not sufficiently precise to support one method over the other. As explained by Wolchover (2018), the beam test would be incorrect if there is a decay mode that does not produce a proton. On 13 October 2021, the lifetime from the bottle method was updated to , increasing the difference to 10 seconds below the beam-method value of . On the same date, a novel third method, using data from NASA's past Lunar Prospector mission, reported a value of , though with great uncertainty. Yet another approach, similar to the beam method, has been explored at the Japan Proton Accelerator Research Complex (J-PARC), but it is currently too imprecise to bear on the analysis of the discrepancy.
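For reference, the relation between the mean lifetime τ quoted by these experiments and the half-life, and the decay energy quoted above, both follow from standard results that are not specific to either experimental method:

\[
N(t) = N_0\, e^{-t/\tau}, \qquad t_{1/2} = \tau \ln 2 \approx 0.693\,\tau,
\]

\[
Q = \left(m_n - m_p - m_e\right)c^2 \approx (939.565 - 938.272 - 0.511)\ \mathrm{MeV} \approx 0.782\ \mathrm{MeV},
\]

so a 10-second disagreement in the mean lifetime corresponds to roughly a 7-second disagreement in the half-life.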
Physical sciences
Particle physics: General
Physics
41465868
https://en.wikipedia.org/wiki/Shannon%20%28unit%29
Shannon (unit)
The shannon (symbol: Sh) is a unit of information named after Claude Shannon, the founder of information theory. IEC 80000-13 defines the shannon as the information content associated with an event when the probability of the event occurring is 1/2. It is understood as such within the realm of information theory, and is conceptually distinct from the bit, a term used in data processing and storage to denote a single instance of a binary signal. A sequence of n binary symbols (such as contained in computer memory or a binary data transmission) is properly described as consisting of n bits, but the information content of those n symbols may be more or less than n shannons depending on the a priori probability of the actual sequence of symbols. The shannon also serves as a unit of the information entropy of an event, which is defined as the expected value of the information content of the event (i.e., the probability-weighted average of the information content of all potential events). Given a number of possible outcomes, unlike information content, the entropy has an upper bound, which is reached when the possible outcomes are equiprobable. The maximum entropy of n bits is n Sh. A further quantity for which the shannon is used is channel capacity, which is generally the maximum of the expected value of the information content that can be transferred over a channel with negligible probability of error, typically in the form of an information rate. Nevertheless, the term bits of information, or simply bits, is more often heard, even in the fields of information and communication theory, rather than shannons; just saying bits can therefore be ambiguous. Using the unit shannon is an explicit reference to a quantity of information content, information entropy or channel capacity, and is not restricted to binary data, whereas bits can as well refer to the number of binary symbols involved, as is the term used in fields such as data processing. Similar units The shannon is connected through constants of proportionality to two other units of information: The hartley, a seldom-used unit, is named after Ralph Hartley, an electronics engineer interested in the capacity of communications channels. Although of a more limited nature, his early work, preceding that of Shannon, makes him recognized also as a pioneer of information theory. Just as the shannon describes the maximum possible information capacity of a binary symbol, the hartley describes the information that can be contained in a 10-ary symbol, that is, a digit value in the range 0 to 9 when the a priori probability of each value is 1/10; thus 1 Hart ≈ 3.322 Sh, the conversion factor being given by log2(10) = 1/log10(2). In mathematical expressions, the nat is a more natural unit of information, but 1 nat does not correspond to a case in which all possibilities are equiprobable, unlike with the shannon and hartley. In each case, formulae for the quantification of information capacity or entropy involve taking the logarithm of an expression involving probabilities. If base-2 logarithms are employed, the result is expressed in shannons; if base-10 (common) logarithms, the result is in hartleys; and if natural logarithms (base e), the result is in nats. For instance, the information capacity of a 16-bit sequence (achieved when all 65536 possible sequences are equally probable) is given by log(65536), thus log2(65536) = 16 Sh, log10(65536) ≈ 4.82 Hart, or ln(65536) ≈ 11.09 nat.
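A short sketch of these base conversions (the values here are computed directly, not quoted from any standard):

    import math

    # Information content of one of 65536 equiprobable 16-bit sequences,
    # expressed in shannons, hartleys, and nats by changing the log base.
    p = 1 / 65536

    shannons = -math.log2(p)    # base-2 logarithm  -> Sh
    hartleys = -math.log10(p)   # base-10 logarithm -> Hart
    nats = -math.log(p)         # natural logarithm -> nat

    print(f"{shannons:.2f} Sh, {hartleys:.2f} Hart, {nats:.2f} nat")
    # -> 16.00 Sh, 4.82 Hart, 11.09 nat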
Information measures In information theory and derivative fields such as coding theory, one cannot quantify the 'information' in a single message (sequence of symbols) out of context, but rather a reference is made to the model of a channel (such as bit error rate) or to the underlying statistics of an information source. There are thus various measures of or related to information, all of which may use the shannon as a unit. For instance, in the above example, a 16-bit channel could be said to have a channel capacity of 16 Sh, but when connected to a particular information source that only sends one of 8 possible messages, one would compute the entropy of its output as no more than 3 Sh. And if one already had been informed through a side channel in which set of 4 possible messages the message is, then one could calculate the mutual information of the new message (having 8 possible states) as no more than 2 Sh. Although there are infinite possibilities for a real number chosen between 0 and 1, so-called differential entropy can be used to quantify the information content of an analog signal, such as related to the enhancement of signal-to-noise ratio or confidence of a hypothesis test.
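The worked numbers in this passage follow from the entropy of a uniform distribution; a minimal sketch (the message counts 8 and 4 are taken from the example above):

    import math

    def uniform_entropy_sh(n: int) -> float:
        """Entropy in shannons of a uniform distribution over n outcomes."""
        return math.log2(n)

    print(uniform_entropy_sh(2 ** 16))  # 16-bit channel capacity bound: 16.0 Sh
    print(uniform_entropy_sh(8))        # source with 8 equiprobable messages: 3.0 Sh
    print(uniform_entropy_sh(4))        # uncertainty left once a side channel
                                        # narrows the message to 4 candidates: 2.0 Sh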
Physical sciences
Information
Basics and measurement
57687117
https://en.wikipedia.org/wiki/Rare-earth%20barium%20copper%20oxide
Rare-earth barium copper oxide
Rare-earth barium copper oxide (ReBCO) is a family of chemical compounds known for exhibiting high-temperature superconductivity (HTS). ReBCO superconductors have the potential to sustain stronger magnetic fields than other superconductor materials. Due to their high critical temperature and critical magnetic field, this class of materials is proposed for use in technical applications where conventional low-temperature superconductors do not suffice. This includes magnetic confinement fusion reactors such as the ARC reactor, allowing a more compact and potentially more economical construction, and superconducting magnets for future particle accelerators to come after the Large Hadron Collider, which utilizes low-temperature superconductors. Materials Any rare-earth element can be used in a ReBCO; popular choices include yttrium (YBCO), lanthanum (LBCO), samarium (Sm123), neodymium (Nd123 and Nd422), gadolinium (Gd123) and europium (Eu123), where the numbers in parentheses indicate the molar ratio of rare-earth element, barium, and copper. YBCO The most famous ReBCO is yttrium barium copper oxide, YBa2Cu3O7−x (or Y123), the first superconductor found with a critical temperature above the boiling point of liquid nitrogen. Its molar ratio is 1 to 2 to 3 for yttrium, barium, and copper, and it has a unit cell consisting of three overlapping perovskite-type subunits, with an yttrium atom at the center of the middle subunit and a barium atom at the center of each of the other two. Yttrium and barium are therefore stacked in the sequence [Ba-Y-Ba] along an axis conventionally denoted c. The resulting cell has an orthorhombic structure, unlike other superconducting cuprates, which generally have a tetragonal structure. All the corner sites of the unit cell are occupied by copper, which has two different coordinations, Cu(1) and Cu(2), with respect to oxygen, and the cell offers four possible crystallographic sites for oxygen: O(1), O(2), O(3), and O(4). History Because these materials are brittle, it was long difficult to create wires from them. After 2010, industrial manufacturers started to produce tapes, with different layers encapsulating the ReBCO material, opening the way to commercial uses. In September 2021, Commonwealth Fusion Systems (CFS) created a test magnet with ReBCO tape that handled a current of 40,000 amperes, with a magnetic field of 20 tesla at 20 K. One important innovation was to avoid insulating the tape, saving space and lowering the required voltages. Another was the size of the magnet: 10 tons, far larger than any prior experiment. The magnet assembly consisted of 16 plates, called pancakes, each hosting a spiral winding of tape on one side and cooling channels on the other. In 2023, the National High Magnetic Field Laboratory generated 32 tesla with a ReBCO superconducting magnet. A 40 T superconducting magnet is under construction.
Physical sciences
Ceramic compounds
Chemistry
61981357
https://en.wikipedia.org/wiki/Ribose
Ribose
Ribose is a simple sugar and carbohydrate with molecular formula C5H10O5 and the linear-form composition H−(C=O)−(CHOH)4−H. The naturally occurring form, d-ribose, is a component of the ribonucleotides from which RNA is built, and so this compound is necessary for the coding, decoding, regulation and expression of genes. It has a structural analog, deoxyribose, which is a similarly essential component of DNA. l-Ribose is an unnatural sugar that was first prepared by Emil Fischer and Oscar Piloty in 1891. It was not until 1909 that Phoebus Levene and Walter Jacobs recognised that d-ribose was a natural product, the enantiomer of Fischer and Piloty's product, and an essential component of nucleic acids. Fischer chose the name "ribose" as it is a partial rearrangement of the name of another sugar, arabinose, of which ribose is the epimer at the 2' carbon; both names also relate to gum arabic, from which arabinose was first isolated and from which they prepared l-ribose. Like most sugars, ribose exists as a mixture of cyclic forms in equilibrium with its linear form, and these readily interconvert, especially in aqueous solution. The name "ribose" is used in biochemistry and biology to refer to all of these forms, though more specific names for each are used when required. In its linear form, ribose can be recognised as the pentose sugar with all of its hydroxyl functional groups on the same side in its Fischer projection. d-Ribose has these hydroxyl groups on the right-hand side and is associated with the systematic name (2R,3R,4R)-2,3,4,5-tetrahydroxypentanal, whilst l-ribose has its hydroxyl groups on the left-hand side in a Fischer projection. Cyclisation of ribose occurs via hemiacetal formation due to attack on the aldehyde by the C4' hydroxyl group to produce a furanose form, or by the C5' hydroxyl group to produce a pyranose form. In each case, there are two possible geometric outcomes, named α- and β- and known as anomers, depending on the stereochemistry at the hemiacetal carbon atom (the "anomeric carbon"). At room temperature, about 76% of d-ribose is present in pyranose forms (α:β = 1:2) and 24% in furanose forms (α:β = 1:3), with only about 0.1% of the linear form present. The ribonucleosides adenosine, cytidine, guanosine, and uridine are all derivatives of β-d-ribofuranose. Metabolically important species that include phosphorylated ribose include ADP, ATP, coenzyme A, and NADH. cAMP and cGMP serve as secondary messengers in some signaling pathways and are also ribose derivatives. The ribose moiety appears in some pharmaceutical agents, including the antibiotics neomycin and paromomycin. Synthesis and sources Ribose, as its 5-phosphate ester, is typically produced from glucose by the pentose phosphate pathway. In at least some archaea, alternative pathways have been identified. Ribose can be synthesized chemically, but commercial production relies on fermentation of glucose. Using genetically modified strains of B. subtilis, 90 g/liter of ribose can be produced from 200 g of glucose. The conversion entails the intermediacy of gluconate and ribulose. Ribose has been detected in meteorites. Structure Ribose is an aldopentose (a monosaccharide containing five carbon atoms that, in its open-chain form, has an aldehyde functional group at one end). In the conventional numbering scheme for monosaccharides, the carbon atoms are numbered from C1' (in the aldehyde group) to C5'. The deoxyribose derivative found in DNA differs from ribose by having a hydrogen atom in place of the hydroxyl group at C2'.
This hydroxyl group performs a function in RNA splicing. The "d-" in the name d-ribose refers to the stereochemistry of the chiral carbon atom farthest away from the aldehyde group (C4'). In d-ribose, as in all d-sugars, this carbon atom has the same configuration as in d-glyceraldehyde. The relative abundance of the forms of ribose in solution is: β-d-ribopyranose (59%), α-d-ribopyranose (20%), β-d-ribofuranose (13%), α-d-ribofuranose (7%) and open chain (0.1%). For ribose residues in nucleosides and nucleotides, the torsion angles about the rotatable bonds influence the configuration of the respective nucleoside or nucleotide. The secondary structure of a nucleic acid is determined by the rotation of its 7 torsion angles. Having a large number of torsion angles allows for greater flexibility. In closed-ring riboses, this flexibility is reduced, because the ring cycle imposes a limit on the number of torsion angles possible in the structure. Conformers of closed-form riboses differ with regard to how the lone oxygen in the molecule is positioned relative to the nitrogenous base (also known as a nucleobase or simply a base) attached to the ribose. If a carbon is facing towards the base, then the ribose is labeled endo; if a carbon is facing away from the base, then the ribose is labeled exo. If there is an oxygen attached to the 2' carbon of a closed-cycle ribose, then the exo conformation is more stable, because it decreases the interactions of the oxygen with the base. The difference itself is quite small, but across an entire chain of RNA the slight difference amounts to a sizable impact. A ribose molecule is typically represented as a planar molecule on paper; despite this, it is typically non-planar in nature. The many constituents of a ribose molecule, even hydrogen atoms, cause steric hindrance and strain between them. To relieve this crowding and ring strain, the ring puckers, i.e. becomes non-planar. This puckering is achieved by displacing an atom from the plane, relieving the strain and yielding a more stable conformation. Puckering, otherwise known as the sugar ring conformation (here specifically of the ribose sugar), can be described by the amplitude of pucker as well as by the pseudorotation angle. The pseudorotation angle can be described as lying in either the "north (N)" or "south (S)" range. While both ranges are found in double helices, the north range is commonly associated with RNA and the A form of DNA. In contrast, the south range is associated with B-form DNA. Z-DNA contains sugars in both the north and south ranges. When only a single atom is displaced, it is referred to as an "envelope" pucker. When two atoms are displaced, it is referred to as a "twist" pucker, in reference to the zigzag orientation. In an "endo" pucker, the major displacement of atoms is on the β-face, the same side as the C4'-C5' bond and the base. In an "exo" pucker, the major displacement of atoms is on the α-face, on the opposite side of the ring. The major forms of ribose are the 3'-endo pucker (commonly adopted by RNA and A-form DNA) and the 2'-endo pucker (commonly adopted by B-form DNA). These ring puckers develop from changes in ring torsion angles; there are infinite combinations of angles, and therefore an infinite number of transposable pucker conformations, each separated by a disparate activation energy. Functions ATP is derived from ribose; it contains one ribose, three phosphate groups, and an adenine base.
ATP is created during cellular respiration from adenosine diphosphate (ATP with one less phosphate group). Signaling pathways Ribose is a building block in secondary signaling molecules such as cyclic adenosine monophosphate (cAMP), which is derived from ATP. One specific case in which cAMP is used is in cAMP-dependent signaling pathways. In cAMP signaling pathways, either a stimulative or an inhibitory hormone receptor is activated by a signal molecule. These receptors are linked to a stimulative or inhibitory regulative G-protein. When a stimulative G-protein is activated, adenylyl cyclase catalyzes the conversion of ATP into cAMP by using Mg2+ or Mn2+. cAMP, a secondary messenger, then goes on to activate protein kinase A, an enzyme that regulates cell metabolism. Protein kinase A regulates metabolic enzymes by phosphorylation, which causes a change in the cell depending on the original signal molecule. The opposite occurs when an inhibitory G-protein is activated: the G-protein inhibits adenylyl cyclase, and ATP is not converted to cAMP. Metabolism Ribose is referred to as the "molecular currency" because of its involvement in intracellular energy transfers. For example, nicotinamide adenine dinucleotide (NAD), flavin adenine dinucleotide (FAD), and nicotinamide adenine dinucleotide phosphate (NADP) all contain the d-ribofuranose moiety. They can each be derived from d-ribose after it is converted to d-ribose 5-phosphate by the enzyme ribokinase. NAD, FAD, and NADP act as electron acceptors in biochemical redox reactions in major metabolic pathways including glycolysis, the citric acid cycle, fermentation, and the electron transport chain. Nucleotide biosynthesis Nucleotides are synthesized through salvage or de novo synthesis. Nucleotide salvage uses pieces of previously made nucleotides and re-synthesizes them for future use. In de novo synthesis, amino acids, carbon dioxide, folate derivatives, and phosphoribosyl pyrophosphate (PRPP) are used to synthesize nucleotides. Both de novo synthesis and salvage require PRPP, which is synthesized from ATP and ribose 5-phosphate by an enzyme called PRPP synthetase. Modifications Modifications in nature Ribokinase catalyzes the conversion of d-ribose to d-ribose 5-phosphate. Once converted, d-ribose-5-phosphate is available for the manufacturing of the amino acids tryptophan and histidine, or for use in the pentose phosphate pathway. The absorption of d-ribose is 88–100% in the small intestine (up to 200 mg/kg·h). One important modification occurs at the C2' position of the ribose molecule. By adding an O-alkyl group, the nuclease resistance of the RNA is increased because of additional stabilizing forces; these forces are stabilizing because of the increase of intramolecular hydrogen bonding and an increase in glycosidic bond stability. The resulting increase in resistance leads to an increased half-life of siRNA and to potential therapeutic applications in cells and animals. The methylation of ribose at particular sites is correlated with a decrease in immune stimulation. Synthetic modifications Along with phosphorylation, ribofuranose molecules can exchange their oxygen with selenium and sulfur to produce similar sugars that vary only at the 4' position. These derivatives are more lipophilic than the original molecule. Increased lipophilicity makes these species more suitable for use in techniques such as PCR, RNA aptamer post-modification, antisense technology, and for phasing X-ray crystallographic data.
Similar to the 2' modifications found in nature, a synthetic modification of ribose includes the addition of fluorine at the 2' position. This fluorinated ribose acts similarly to methylated ribose, because it is capable of suppressing immune stimulation, depending on the location of the ribose in the DNA strand. The key difference between methylation and fluorination is that the latter occurs only through synthetic modification. The addition of fluorine leads to an increase in the stabilization of the glycosidic bond and an increase in intramolecular hydrogen bonds. Medical uses d-Ribose has been suggested for use in the management of congestive heart failure (as well as other forms of heart disease) and for chronic fatigue syndrome (CFS), also called myalgic encephalomyelitis (ME), in an open-label, non-blinded, non-randomized, and non-crossover subjective study. Supplemental d-ribose can bypass part of the pentose phosphate pathway, an energy-producing pathway, to produce d-ribose 5-phosphate. The enzyme glucose-6-phosphate dehydrogenase (G-6-PDH) is often in short supply in cells, but more so in diseased tissue, such as the myocardial cells of patients with cardiac disease. The supply of d-ribose in the mitochondria is directly correlated with ATP production; a decreased d-ribose supply reduces the amount of ATP being produced. Studies suggest that supplementing d-ribose following tissue ischemia (e.g. myocardial ischemia) increases myocardial ATP production, and therefore mitochondrial function. Essentially, administering supplemental d-ribose bypasses an enzymatic step in the pentose phosphate pathway by providing an alternate source of 5-phospho-d-ribose 1-pyrophosphate for ATP production. Supplemental d-ribose enhances recovery of ATP levels while also reducing cellular injury in humans and other animals. One study suggested that the use of supplemental d-ribose reduces the incidence of angina in men with diagnosed coronary artery disease. d-Ribose has been used to treat many pathological conditions, such as chronic fatigue syndrome, fibromyalgia, and myocardial dysfunction. It is also used to reduce symptoms of cramping, pain, stiffness, etc. after exercise, and to improve athletic performance.
Biology and health sciences
Carbohydrates
Biology
44362809
https://en.wikipedia.org/wiki/Symmetry%20%28geometry%29
Symmetry (geometry)
In geometry, an object has symmetry if there is an operation or transformation (such as translation, scaling, rotation or reflection) that maps the figure/object onto itself (i.e., the object has an invariance under the transform). Thus, a symmetry can be thought of as an immunity to change. For instance, a circle rotated about its center will have the same shape and size as the original circle, as all points before and after the transform would be indistinguishable. A circle is thus said to be symmetric under rotation or to have rotational symmetry. If the isometry is the reflection of a plane figure about a line, then the figure is said to have reflectional symmetry or line symmetry; it is also possible for a figure/object to have more than one line of symmetry. The types of symmetries that are possible for a geometric object depend on the set of geometric transforms available, and on what object properties should remain unchanged after a transformation. Because the composition of two transforms is also a transform and every transform has, by definition, an inverse transform that undoes it, the set of transforms under which an object is symmetric forms a mathematical group, the symmetry group of the object. Euclidean symmetries in general The most common group of transforms applied to objects is termed the Euclidean group of "isometries", which are distance-preserving transformations in space commonly referred to as two-dimensional or three-dimensional (i.e., in plane geometry or solid geometry Euclidean spaces). These isometries consist of reflections, rotations, translations, and combinations of these basic operations. Under an isometric transformation, a geometric object is said to be symmetric if, after transformation, the object is indistinguishable from the object before the transformation. A geometric object is typically symmetric only under a subset or "subgroup" of all isometries. The kinds of isometry subgroups are described below, followed by other kinds of transform groups, and by the types of object invariance that are possible in geometry. By the Cartan–Dieudonné theorem, an orthogonal transformation in n-dimensional space can be represented by the composition of at most n reflections. Reflectional symmetry Reflectional symmetry, linear symmetry, mirror symmetry, mirror-image symmetry, or bilateral symmetry is symmetry with respect to reflection. In one dimension, there is a point of symmetry about which reflection takes place; in two dimensions, there is an axis of symmetry (a.k.a., a line of symmetry), and in three dimensions there is a plane of symmetry. An object or figure for which every point has a one-to-one mapping onto another, equidistant from and on opposite sides of a common plane, is called mirror symmetric (for more, see mirror image). The axis of symmetry of a two-dimensional figure is a line such that, if a perpendicular is constructed, any two points lying on the perpendicular at equal distances from the axis of symmetry are identical. Another way to think about it is that if the shape were folded in half over the axis, the two halves would be identical mirror images of each other. For example, a square has four axes of symmetry, because there are four different ways to fold it and have the edges match each other. Another example is a circle, which has infinitely many axes of symmetry passing through its center, for the same reason. If the letter T is reflected along a vertical axis, it appears the same.
This is sometimes called vertical symmetry. Thus one can describe this phenomenon unambiguously by saying that "T has a vertical symmetry axis", or that "T has left-right symmetry". The triangles with reflection symmetry are isosceles, and the quadrilaterals with this symmetry are kites and isosceles trapezoids. For each line or plane of reflection, the symmetry group is isomorphic with Cs (see point groups in three dimensions for more), one of the three types of order two (involutions), hence algebraically isomorphic to C2. The fundamental domain is a half-plane or half-space. Point reflection and other involutive isometries Reflection symmetry can be generalized to other isometries of n-dimensional space which are involutions, such as (x1, …, xk, xk+1, …, xn) ↦ (−x1, …, −xk, xk+1, …, xn) in a certain system of Cartesian coordinates. This reflects the space along an (n−k)-dimensional affine subspace. If k = n, then such a transformation is known as a point reflection, or an inversion through a point. On the plane (n = 2), a point reflection is the same as a half-turn (180°) rotation; see below. Antipodal symmetry is an alternative name for a point reflection symmetry through the origin. Such a "reflection" preserves orientation if and only if k is an even number. This implies that for n = 3 (as well as for other odd n), a point reflection changes the orientation of the space, like a mirror-image symmetry. That explains why in physics the term P-symmetry (P stands for parity) is used for both point reflection and mirror symmetry. Since a point reflection in three dimensions changes a left-handed coordinate system into a right-handed coordinate system, symmetry under a point reflection is also called a left-right symmetry. Rotational symmetry Rotational symmetry is symmetry with respect to some or all rotations in n-dimensional Euclidean space. Rotations are direct isometries, which are isometries that preserve orientation. Therefore, a symmetry group of rotational symmetry is a subgroup of the special Euclidean group E+(n). Symmetry with respect to all rotations about all points implies translational symmetry with respect to all translations (because translations are compositions of rotations about distinct points), and the symmetry group is the whole E+(n). This does not apply for objects, because it makes space homogeneous, but it may apply for physical laws. For symmetry with respect to rotations about a point, one can take that point as the origin. These rotations form the special orthogonal group SO(n), which can be represented by the group of n × n orthogonal matrices with determinant 1. For n = 3, this is the rotation group SO(3). Phrased slightly differently, the rotation group of an object is the symmetry group within E+(n), the group of rigid motions; that is, the intersection of the full symmetry group and the group of rigid motions. For chiral objects, it is the same as the full symmetry group. Laws of physics are SO(3)-invariant if they do not distinguish different directions in space. Because of Noether's theorem, rotational symmetry of a physical system is equivalent to the angular momentum conservation law. For more, see rotational invariance. Translational symmetry Translational symmetry leaves an object invariant under a discrete or continuous group of translations. A common illustration shows four congruent footprints generated by translations along an arrow.
Rotational symmetry Rotational symmetry is symmetry with respect to some or all rotations in n-dimensional Euclidean space. Rotations are direct isometries, which are isometries that preserve orientation. Therefore, a symmetry group of rotational symmetry is a subgroup of the special Euclidean group E+(n). Symmetry with respect to all rotations about all points implies translational symmetry with respect to all translations (because translations are compositions of rotations about distinct points), and the symmetry group is the whole E+(n). This does not apply for objects because it makes space homogeneous, but it may apply for physical laws. For symmetry with respect to rotations about a point, one can take that point as origin. These rotations form the special orthogonal group SO(n), which can be represented by the group of n × n orthogonal matrices with determinant 1. For n = 3, this is the rotation group SO(3). Phrased slightly differently, the rotation group of an object is the symmetry group within E+(n), the group of rigid motions; that is, the intersection of the full symmetry group and the group of rigid motions. For chiral objects, it is the same as the full symmetry group. Laws of physics are SO(3)-invariant if they do not distinguish different directions in space. Because of Noether's theorem, rotational symmetry of a physical system is equivalent to the angular momentum conservation law. For more, see rotational invariance. Translational symmetry Translational symmetry leaves an object invariant under a discrete or continuous group of translations T_a(p) = p + a. A standard illustration is a set of four congruent footprints generated by translations along a line. If the line of footprints were to extend to infinity in both directions, then they would have a discrete translational symmetry; any translation that mapped one footprint onto another would leave the whole line unchanged. Glide reflection symmetry In 2D, a glide reflection symmetry (also called a glide plane symmetry in 3D, and a transflection in general) means that a reflection in a line or plane, combined with a translation along the line or in the plane, results in the same object (such as in the case of footprints). The composition of two glide reflections results in a translation symmetry with twice the translation vector. The symmetry group comprising glide reflections and associated translations is the frieze group p11g, and is isomorphic with the infinite cyclic group Z. Rotoreflection symmetry In 3D, a rotary reflection, rotoreflection or improper rotation is a rotation about an axis combined with reflection in a plane perpendicular to that axis. The symmetry groups associated with rotoreflections include: if the rotation angle has no common divisor with 360°, the symmetry group is not discrete. if the rotoreflection has a 2n-fold rotation angle (angle of 180°/n), the symmetry group is S2n of order 2n (not to be confused with symmetric groups, for which the same notation is used; the abstract group is C2n). A special case is n = 1, an inversion, because it does not depend on the axis and the plane. It is characterized by just the point of inversion. The group Cnh (angle of 360°/n); for odd n, this is generated by a single symmetry, and the abstract group is C2n; for even n, this is not a basic symmetry but a combination. For more, see point groups in three dimensions. Helical symmetry In 3D geometry and higher, a screw axis (or rotary translation) is a combination of a rotation and a translation along the rotation axis. Helical symmetry is the kind of symmetry seen in everyday objects such as springs, Slinky toys, drill bits, and augers. The concept of helical symmetry can be visualized as the tracing in three-dimensional space that results from rotating an object at a constant angular speed while simultaneously translating it at a constant linear speed along its axis of rotation. At any point in time, these two motions combine to give a coiling angle that helps define the properties of the traced helix. When the tracing object rotates quickly and translates slowly, the coiling angle will be close to 0°. Conversely, if the object rotates slowly and translates quickly, the coiling angle will approach 90°. Three main classes of helical symmetry can be distinguished, based on the interplay of the angle of coiling and translation symmetries along the axis: Infinite helical symmetry: If there are no distinguishing features along the length of a helix or helix-like object, the object will have infinite symmetry much like that of a circle, but with the additional requirement of translation along the long axis of the object to return it to its original appearance. A helix-like object is one that has at every point the regular angle of coiling of a helix, but which can also have a cross section of indefinitely high complexity, provided only that precisely the same cross section exists (usually after a rotation) at every point along the length of the object. Simple examples include evenly coiled springs, slinkies, drill bits, and augers.
Stated more precisely, an object has infinite helical symmetries if for any small rotation of the object around its central axis, there exists a point nearby (the translation distance) on that axis at which the object will appear exactly as it did before. It is this infinite helical symmetry that gives rise to the curious illusion of movement along the length of an auger or screw bit that is being rotated. It also provides the mechanically useful ability of such devices to move materials along their length, provided that they are combined with a force such as gravity or friction that allows the materials to resist simply rotating along with the drill or auger. n-fold helical symmetry: If the requirement that every cross section of the helical object be identical is relaxed, then additional lesser helical symmetries become possible. For example, the cross section of the helical object may change, but may still repeat itself in a regular fashion along the axis of the helical object. Consequently, objects of this type will exhibit a symmetry after a rotation by some fixed angle θ and a translation by some fixed distance, but will not in general be invariant for any rotation angle. If the angle of rotation at which the symmetry occurs divides evenly into a full circle (360°), then the result is the helical equivalent of a regular polygon. This case is called n-fold helical symmetry, where n = 360°/θ (such as the case of a double helix). This concept can be further generalized to include cases where nθ is a multiple of 360° – that is, the cycle does eventually repeat, but only after more than one full rotation of the helical object. Non-repeating helical symmetry: This is the case in which the angle of rotation θ required to observe the symmetry is irrational. The angle of rotation never repeats exactly, no matter how many times the helix is rotated. Such symmetries are created by using a non-repeating point group in two dimensions. DNA, with approximately 10.5 base pairs per turn, is an example of this type of non-repeating helical symmetry.
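The screw-motion invariance described above is easy to verify numerically. The following Python sketch (an illustration under assumed parameter values, not part of the source) samples points on a helix and checks that rotating them by any angle θ about the axis, while translating them by the matching rise, maps the helix onto itself:

```python
import numpy as np

r, c = 1.0, 0.2          # helix radius and rise per radian (assumed values)

def helix(t):
    """Point on the helix at parameter t."""
    return np.array([r * np.cos(t), r * np.sin(t), c * t])

def screw(p, theta):
    """Rotate p by theta about the z-axis and translate by the matching rise."""
    rotation = np.array([[np.cos(theta), -np.sin(theta), 0],
                         [np.sin(theta),  np.cos(theta), 0],
                         [0,              0,             1]])
    return rotation @ p + np.array([0, 0, c * theta])

# For ANY angle theta, the screw motion carries the helix onto itself:
# the point at parameter t lands exactly on the point at parameter t + theta.
for theta in (0.1, np.pi / 3, 2.5):          # arbitrary test angles
    for t in np.linspace(0.0, 20.0, 7):
        assert np.allclose(screw(helix(t), theta), helix(t + theta))
```

An object with n-fold rather than infinite helical symmetry would pass this test only for θ equal to multiples of its one fixed angle.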
Double rotation symmetry In 4D, a double rotation symmetry can be generated as the composite of two orthogonal rotations. It is similar to the 3D screw axis, which is the composite of a rotation and an orthogonal translation. Non-isometric symmetries A wider definition of geometric symmetry allows operations from a larger group than the Euclidean group of isometries. Examples of larger geometric symmetry groups are: The group of similarity transformations; i.e., affine transformations represented by a matrix A that is a scalar times an orthogonal matrix. Thus homotheties are added, and self-similarity is considered a symmetry. The group of affine transformations represented by a matrix A with determinant 1 or −1; i.e., the transformations which preserve area. This adds, e.g., oblique reflection symmetry. The group of all bijective affine transformations. The group of Möbius transformations which preserve cross-ratios. This adds, e.g., inversive reflections such as circle reflection on the plane. In Felix Klein's Erlangen program, each possible group of symmetries defines a geometry in which objects that are related by a member of the symmetry group are considered to be equivalent. For example, the Euclidean group defines Euclidean geometry, whereas the group of Möbius transformations defines projective geometry. Scale symmetry and fractals Scale symmetry means that if an object is expanded or reduced in size, the new object has the same properties as the original. This self-similarity is seen in many natural structures such as cumulus clouds, lightning, ferns and coastlines, over a wide range of scales. It is generally not found in gravitationally bound structures; for example, the legs of an elephant and those of a mouse differ in shape (so-called allometric scaling). Similarly, if a soft wax candle were enlarged to the size of a tall tree, it would immediately collapse under its own weight. A more subtle form of scale symmetry is demonstrated by fractals. As conceived by Benoît Mandelbrot, fractals are a mathematical concept in which the structure of a complex form looks similar at any degree of magnification, as is well seen in the Mandelbrot set. A coast is an example of a naturally occurring fractal, since it retains similar-appearing complexity at every level from the view of a satellite to a microscopic examination of how the water laps up against individual grains of sand. The branching of trees, which enables small twigs to stand in for full trees in dioramas, is another example. Because fractals can generate the appearance of patterns in nature, they have a beauty and familiarity not typically seen with mathematically generated functions. Fractals have also found a place in computer-generated movie effects, where their ability to create complex curves with fractal symmetries results in more realistic virtual worlds.
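The "similar at any magnification" property can be explored with a few lines of code. Below is a minimal escape-time sketch of the Mandelbrot set in Python (an illustrative aside, not from the article; the chosen window centres are arbitrary); zooming in by shrinking the sampled window reveals ever-finer structure of the same kind:

```python
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z**2 + c from z = 0; return the step at which |z|
    exceeds 2 (so c escapes), or max_iter if it never does."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

def render(center: complex, width: float, cols: int = 60, rows: int = 24) -> str:
    """Crude character rendering of a square window of the complex plane."""
    lines = []
    for i in range(rows):
        y = center.imag + width * (i / rows - 0.5)
        row = ""
        for j in range(cols):
            x = center.real + width * (j / cols - 0.5)
            row += "#" if escape_time(complex(x, y)) == 100 else " "
        lines.append(row)
    return "\n".join(lines)

print(render(center=-0.5 + 0j, width=3.0))        # the familiar overall shape
print(render(center=-0.745 + 0.11j, width=0.01))  # a deep zoom: similar structure
```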
Abstract symmetry Klein's view With every geometry, Felix Klein associated an underlying group of symmetries. The hierarchy of geometries is thus mathematically represented as a hierarchy of these groups, and a hierarchy of their invariants. For example, lengths, angles and areas are preserved with respect to the Euclidean group of symmetries, while only the incidence structure and the cross-ratio are preserved under the most general projective transformations. A concept of parallelism, which is preserved in affine geometry, is not meaningful in projective geometry. Then, by abstracting the underlying groups of symmetries from the geometries, the relationships between them can be re-established at the group level. Since the group of affine geometry is a subgroup of the group of projective geometry, any notion invariant in projective geometry is a priori meaningful in affine geometry, but not the other way round. If you add required symmetries, you have a more powerful theory but fewer concepts and theorems (which will be deeper and more general). Thurston's view William Thurston introduced a similar version of symmetries in geometry. A model geometry is a simply connected smooth manifold X together with a transitive action of a Lie group G on X with compact stabilizers. The Lie group can be thought of as the group of symmetries of the geometry. A model geometry is called maximal if G is maximal among groups acting smoothly and transitively on X with compact stabilizers, i.e. if it is the maximal group of symmetries. Sometimes this condition is included in the definition of a model geometry. A geometric structure on a manifold M is a diffeomorphism from M to X/Γ for some model geometry X, where Γ is a discrete subgroup of G acting freely on X. If a given manifold admits a geometric structure, then it admits one whose model is maximal. A 3-dimensional model geometry X is relevant to the geometrization conjecture if it is maximal and if there is at least one compact manifold with a geometric structure modelled on X. Thurston classified the 8 model geometries satisfying these conditions; they are sometimes called Thurston geometries. (There are also uncountably many model geometries without compact quotients.)
Mathematics
Other
null
57718573
https://en.wikipedia.org/wiki/Mainshock
Mainshock
In seismology, the mainshock is the largest earthquake in a sequence, sometimes preceded by one or more foreshocks, and almost always followed by many aftershocks. Foreshock A foreshock is an earthquake that occurs before a larger seismic event (the mainshock) and is related to it in both time and space. The designation of an earthquake as foreshock, mainshock or aftershock is only possible after the full sequence of events has happened. Aftershock In seismology, an aftershock is a smaller earthquake that follows a larger earthquake, in the same area as the main shock, caused as the displaced crust adjusts to the effects of the main shock. Large earthquakes can have hundreds to thousands of instrumentally detectable aftershocks, which steadily decrease in magnitude and frequency according to known empirical laws, such as Omori's law of aftershock decay. In some earthquakes the main rupture happens in two or more steps, resulting in multiple main shocks. These are known as doublet earthquakes, and in general can be distinguished from aftershocks in having similar magnitudes and nearly identical seismic waveforms.
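As a pointer to what "known laws" refers to (a standard seismological result, stated here for convenience rather than taken from this article), the modified Omori law, also called the Omori–Utsu law, gives the aftershock rate n(t) at time t after the mainshock as

```latex
n(t) = \frac{K}{(c + t)^{p}},
```

where K, c, and p are empirical constants fitted to each sequence, with p typically close to 1. The decrease of aftershock magnitudes is described by the related Gutenberg–Richter relation.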
Physical sciences
Seismology
Earth science
49242352
https://en.wikipedia.org/wiki/AlphaGo
AlphaGo
AlphaGo is a computer program that plays the board game Go. It was developed by the London-based DeepMind Technologies, an acquired subsidiary of Google. Subsequent versions of AlphaGo became increasingly powerful, including a version that competed under the name Master. After retiring from competitive play, AlphaGo Master was succeeded by an even more powerful version known as AlphaGo Zero, which was completely self-taught without learning from human games. AlphaGo Zero was then generalized into a program known as AlphaZero, which played additional games, including chess and shogi. AlphaZero has in turn been succeeded by a program known as MuZero, which learns without being taught the rules. AlphaGo and its successors use a Monte Carlo tree search algorithm to find their moves based on knowledge previously acquired by machine learning, specifically by an artificial neural network (a deep learning method) through extensive training, both from human and computer play. A neural network is trained to identify the best moves and the winning percentages of these moves. This neural network improves the strength of the tree search, resulting in stronger move selection in the next iteration. In October 2015, in a match against Fan Hui, the original AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board. In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicap. Although it lost to Lee Sedol in the fourth game, Lee resigned in the final game, giving a final score of 4 games to 1 in favour of AlphaGo. In recognition of the victory, AlphaGo was awarded an honorary 9-dan by the Korea Baduk Association. The lead-up and the challenge match with Lee Sedol were documented in a documentary film also titled AlphaGo, directed by Greg Kohs. The win by AlphaGo was chosen by Science as one of the Breakthrough of the Year runners-up on 22 December 2016. At the 2017 Future of Go Summit, the Master version of AlphaGo beat Ke Jie, the number one ranked player in the world at the time, in a three-game match, after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association. After the match between AlphaGo and Ke Jie, DeepMind retired AlphaGo, while continuing AI research in other areas. The self-taught AlphaGo Zero achieved a 100–0 victory against the early competitive version of AlphaGo, and its successor AlphaZero was perceived as the world's top player in Go by the end of the 2010s. History Go is considered much more difficult for computers to win than other games such as chess, because its strategic and aesthetic nature makes it hard to directly construct an evaluation function, and its much larger branching factor makes it prohibitively difficult to use traditional AI methods such as alpha–beta pruning, tree traversal and heuristic search. Almost two decades after IBM's computer Deep Blue beat world chess champion Garry Kasparov in the 1997 match, the strongest Go programs using artificial intelligence techniques only reached about amateur 5-dan level, and still could not beat a professional Go player without a handicap. In 2012, the software program Zen, running on a four-PC cluster, beat Masaki Takemiya (9p) twice at five- and four-stone handicaps. In 2013, Crazy Stone beat Yoshio Ishida (9p) at a four-stone handicap.
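To give a rough sense of the scale behind Go's "much larger branching factor" mentioned above (ballpark figures commonly cited in the game-AI literature, not taken from this article): with average branching factor b and typical game length d, the naive game-tree size is b^d, so

```latex
\text{chess: } b \approx 35,\ d \approx 80 \;\Rightarrow\; b^{d} \approx 10^{123},
\qquad
\text{Go: } b \approx 250,\ d \approx 150 \;\Rightarrow\; b^{d} \approx 10^{360},
```

which is why exhaustive search techniques that are already infeasible for chess are hopeless for Go without strong learned heuristics.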
According to DeepMind's David Silver, the AlphaGo research project was formed around 2014 to test how well a neural network using deep learning can compete at Go. AlphaGo represents a significant improvement over previous Go programs. In 500 games against other available Go programs, including Crazy Stone and Zen, AlphaGo running on a single computer won all but one. In a similar matchup, AlphaGo running on multiple computers won all 500 games played against other Go programs, and 77% of games played against AlphaGo running on a single computer. The distributed version in October 2015 was using 1,202 CPUs and 176 GPUs. Match against Fan Hui In October 2015, the distributed version of AlphaGo defeated the European Go champion Fan Hui, a 2-dan (out of 9 dan possible) professional, five to zero. This was the first time a computer Go program had beaten a professional human player on a full-sized board without handicap. The announcement of the news was delayed until 27 January 2016 to coincide with the publication of a paper in the journal Nature describing the algorithms used. Match against Lee Sedol AlphaGo played South Korean professional Go player Lee Sedol, ranked 9-dan, one of the best players at Go, with five games taking place at the Four Seasons Hotel in Seoul, South Korea on 9, 10, 12, 13, and 15 March 2016, which were video-streamed live. AlphaGo won four of the five games, while Lee won the fourth, making him the only human player to have beaten AlphaGo in any of its 74 official games. AlphaGo ran on Google's cloud computing with its servers located in the United States. The match used Chinese rules with a 7.5-point komi, and each side had two hours of thinking time plus three 60-second byoyomi periods. The version of AlphaGo playing against Lee used a similar amount of computing power as was used in the Fan Hui match. The Economist reported that it used 1,920 CPUs and 280 GPUs. At the time of play, Lee Sedol had the second-highest number of Go international championship victories in the world after South Korean player Lee Changho, who had kept the world championship title for 16 years. Since there is no single official method of ranking in international Go, the rankings may vary among the sources. While he was sometimes ranked first, some sources ranked Lee Sedol as the fourth-best player in the world at the time. AlphaGo was not specifically trained to face Lee, nor was it designed to compete with any specific human players. The first three games were won by AlphaGo following resignations by Lee. However, Lee beat AlphaGo in the fourth game, winning by resignation at move 180. AlphaGo then continued to achieve a fourth win, winning the fifth game by resignation. The prize was US$1 million. Since AlphaGo won four out of five and thus the series, the prize was donated to charities, including UNICEF. Lee Sedol received $150,000 for participating in all five games and an additional $20,000 for his win in Game 4. In June 2016, at a presentation held at a university in the Netherlands, Aja Huang, a member of the DeepMind team, revealed that they had patched the logical weakness that occurred during the 4th game of the match between AlphaGo and Lee, and that after move 78 (which was dubbed the "divine move" by many professionals), it would play as intended and maintain Black's advantage. Before move 78, AlphaGo was leading throughout the game, but Lee's move diverted and confused the program's computations.
Huang explained that AlphaGo's policy network, which finds the most likely move order and continuation, did not precisely guide AlphaGo to the correct continuation after move 78, since its value network did not rate Lee's 78th move as the most likely; therefore, when the move was made, AlphaGo could not make the right adjustment to the logical continuation. Sixty online games On 29 December 2016, a new account on the Tygem server named "Magister" (shown as 'Magist' at the server's Chinese version) from South Korea began to play games with professional players. It changed its account name to "Master" on 30 December, then moved to the FoxGo server on 1 January 2017. On 4 January, DeepMind confirmed that the "Magister" and the "Master" were both played by an updated version of AlphaGo, called AlphaGo Master. As of 5 January 2017, AlphaGo Master's online record was 60 wins and 0 losses, including three victories over Go's top-ranked player, Ke Jie, who had been quietly briefed in advance that Master was a version of AlphaGo. After losing to Master, Gu Li offered a bounty of 100,000 yuan (US$14,400) to the first human player who could defeat Master. Master played at the pace of 10 games per day. Many quickly suspected it to be an AI player due to little or no resting between games. Its adversaries included many world champions such as Ke Jie, Park Jeong-hwan, Yuta Iyama, Tuo Jiaxi, Mi Yuting, Shi Yue, Chen Yaoye, Li Qincheng, Gu Li, Chang Hao, Tang Weixing, Fan Tingyu, Zhou Ruiyang, Jiang Weijie, Chou Chun-hsun, Kim Ji-seok, Kang Dong-yun, Park Yeong-hun, and Won Seong-jin; national champions or world championship runners-up such as Lian Xiao, Tan Xiao, Meng Tailing, Dang Yifei, Huang Yunsong, Yang Dingxin, Gu Zihao, Shin Jinseo, Cho Han-seung, and An Sungjoon. All 60 games except one were fast-paced games with three 20- or 30-second byo-yomi periods. Master offered to extend the byo-yomi to one minute when playing with Nie Weiping in consideration of his age. After winning its 59th game, Master revealed itself in the chatroom to be controlled by Dr. Aja Huang of the DeepMind team, then changed its nationality to the United Kingdom. After these games were completed, the co-founder of DeepMind, Demis Hassabis, said in a tweet, "we're looking forward to playing some official, full-length games later [2017] in collaboration with Go organizations and experts". Go experts were impressed by the program's performance and its nonhuman play style; Ke Jie stated that "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong... I would go as far as to say not a single human has touched the edge of the truth of Go." Future of Go Summit In the Future of Go Summit held in Wuzhen in May 2017, AlphaGo Master played three games with Ke Jie, the world No.1 ranked player, as well as two games with several top Chinese professionals: one pair Go game and one against a collaborating team of five human players. Google DeepMind offered a US$1.5 million prize to the winner of the three-game match between Ke Jie and Master, while the losing side took US$300,000. Master won all three games against Ke Jie, after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association. After winning its three-game match against Ke Jie, the top-rated world Go player, AlphaGo retired. DeepMind also disbanded the team that worked on the game to focus on AI research in other areas.
After the Summit, DeepMind published 50 full-length AlphaGo vs. AlphaGo games as a gift to the Go community. AlphaGo Zero and AlphaZero AlphaGo's team published an article in the journal Nature on 19 October 2017, introducing AlphaGo Zero, a version without human data and stronger than any previous human-champion-defeating version. By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days. In a paper released on arXiv on 5 December 2017, DeepMind claimed that it generalized AlphaGo Zero's approach into a single AlphaZero algorithm, which achieved within 24 hours a superhuman level of play in the games of chess, shogi, and Go by defeating the world-champion programs Stockfish, Elmo, and a three-day version of AlphaGo Zero, respectively. Teaching tool On 11 December 2017, DeepMind released an AlphaGo teaching tool on its website to analyze winning rates of different Go openings as calculated by AlphaGo Master. The teaching tool collects 6,000 Go openings from 230,000 human games, each analyzed with 10,000,000 simulations by AlphaGo Master. Many of the openings include human move suggestions. Versions An early version of AlphaGo was tested on hardware with various numbers of CPUs and GPUs, running in asynchronous or distributed mode, with two seconds of thinking time given to each move, and Elo ratings computed for each configuration. In matches with more time per move, higher ratings are achieved. In May 2016, Google unveiled its own proprietary hardware "tensor processing units", which it stated had already been deployed in multiple internal projects at Google, including the AlphaGo match against Lee Sedol. In the Future of Go Summit in May 2017, DeepMind disclosed that the version of AlphaGo used in this Summit was AlphaGo Master, and revealed that it had measured the strength of different versions of the software. AlphaGo Lee, the version used against Lee, could give AlphaGo Fan, the version used in AlphaGo vs. Fan Hui, three stones, and AlphaGo Master was even three stones stronger. Algorithm As of 2016, AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play. It uses Monte Carlo tree search, guided by a "value network" and a "policy network", both implemented using deep neural network technology. A limited amount of game-specific feature detection pre-processing (for example, to highlight whether a move matches a nakade pattern) is applied to the input before it is sent to the neural networks. The networks are convolutional neural networks with 12 layers, trained by reinforcement learning. The system's neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves. Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play. To avoid "disrespectfully" wasting its opponent's time, the program is specifically programmed to resign if its assessment of win probability falls below a certain threshold; for the match against Lee, the resignation threshold was set to 20%.
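To illustrate how a policy network and a value network can guide Monte Carlo tree search, here is a schematic Python sketch of the PUCT-style selection rule used by AlphaGo-family programs. The names game, state.play, policy_value_fn, and the constant c_puct are illustrative assumptions; the real system is far more elaborate, and this is a sketch of the general technique rather than DeepMind's implementation:

```python
import math

class Node:
    """One game state in the search tree."""
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a); Q = W / N
        self.children = {}        # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """Pick the move maximising Q(s,a) + U(s,a), where U favours moves the
    policy network likes and that have been explored relatively little."""
    total = sum(child.visits for child in node.children.values())
    def score(child):
        u = c_puct * child.prior * math.sqrt(total) / (1 + child.visits)
        return child.q() + u
    return max(node.children.items(), key=lambda item: score(item[1]))

def search(root, game, policy_value_fn, n_playouts=800):
    """Run playouts: descend via select_child, expand a leaf with the policy
    network's priors, evaluate it with the value network, and back up."""
    for _ in range(n_playouts):
        node, path, state = root, [root], game.copy()
        while node.children:                       # selection
            move, node = select_child(node)
            state.play(move)
            path.append(node)
        priors, value = policy_value_fn(state)     # expansion + evaluation
        for move, p in priors.items():
            node.children[move] = Node(prior=p)
        for n in reversed(path):                   # backup
            n.visits += 1
            n.value_sum += value
            value = -value                         # alternate player perspective
    return max(root.children.items(), key=lambda item: item[1].visits)[0]
```

In the 2016 system the backed-up evaluation also blended the value network's output with fast rollout results, and the move actually played was chosen from the accumulated visit counts.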
Style of play Toby Manning, the match referee for AlphaGo vs. Fan Hui, has described the program's style as "conservative". AlphaGo's playing style strongly favours greater probability of winning by fewer points over lesser probability of winning by more points. Its strategy of maximising its probability of winning is distinct from what human players tend to do, which is to maximise territorial gains, and explains some of its odd-looking moves. It makes a lot of opening moves that humans have never or seldom made. It likes to use shoulder hits, especially if the opponent is overconcentrated. Responses to 2016 victory AI community AlphaGo's March 2016 victory was a major milestone in artificial intelligence research. Go had previously been regarded as a hard problem in machine learning that was expected to be out of reach for the technology of the time. Most experts thought a Go program as powerful as AlphaGo was at least five years away; some experts thought that it would take at least another decade before computers would beat Go champions. Most observers at the beginning of the 2016 matches expected Lee to beat AlphaGo. With games such as checkers (which has been solved by the Chinook computer engine), chess, and now Go won by computers, victories at popular board games can no longer serve as major milestones for artificial intelligence in the way that they used to. Deep Blue's Murray Campbell called AlphaGo's victory "the end of an era... board games are more or less done and it's time to move on." When compared with Deep Blue or Watson, AlphaGo's underlying algorithms are potentially more general-purpose and may be evidence that the scientific community is making progress towards artificial general intelligence. Some commentators believe AlphaGo's victory makes for a good opportunity for society to start preparing for the possible future impact of machines with general purpose intelligence. As noted by entrepreneur Guy Suter, AlphaGo only knows how to play Go and doesn't possess general-purpose intelligence; "[It] couldn't just wake up one morning and decide it wants to learn how to use firearms." AI researcher Stuart Russell said that AI systems such as AlphaGo have progressed quicker and become more powerful than expected, and we must therefore develop methods to ensure they "remain under human control". Some scholars, such as Stephen Hawking, warned (in May 2015, before the matches) that some future self-improving AI could gain actual general intelligence, leading to an unexpected AI takeover; other scholars disagree: AI expert Jean-Gabriel Ganascia believes that "Things like 'common sense'... may never be reproducible", and says "I don't see why we would speak about fears. On the contrary, this raises hopes in many domains such as health and space exploration." Computer scientist Richard Sutton said "I don't think people should be scared... but I do think people should be paying attention." In China, AlphaGo was a "Sputnik moment" which helped convince the Chinese government to prioritize and dramatically increase funding for artificial intelligence. In 2017, the DeepMind AlphaGo team received the inaugural IJCAI Marvin Minsky medal for Outstanding Achievements in AI. "AlphaGo is a wonderful achievement, and a perfect example of what the Minsky Medal was initiated to recognise", said Professor Michael Wooldridge, Chair of the IJCAI Awards Committee.
"What particularly impressed IJCAI was that AlphaGo achieves what it does through a brilliant combination of classic AI techniques as well as the state-of-the-art machine learning techniques that DeepMind is so closely associated with. It's a breathtaking demonstration of contemporary AI, and we are delighted to be able to recognise it with this award." Go community Go is a popular game in China, Japan and Korea, and the 2016 matches were watched by perhaps a hundred million people worldwide. Many top Go players characterized AlphaGo's unorthodox plays as seemingly-questionable moves that initially befuddled onlookers, but made sense in hindsight: "All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself." AlphaGo appeared to have unexpectedly become much stronger, even when compared with its October 2015 match where a computer had beaten a Go professional for the first time ever without the advantage of a handicap. The day after Lee's first defeat, Jeong Ahram, the lead Go correspondent for one of South Korea's biggest daily newspapers, said "Last night was very gloomy... Many people drank alcohol." The Korea Baduk Association, the organization that oversees Go professionals in South Korea, awarded AlphaGo an honorary 9-dan title for exhibiting creative skills and pushing forward the game's progress. China's Ke Jie, an 18-year-old generally recognized as the world's best Go player at the time, initially claimed that he would be able to beat AlphaGo, but declined to play against it for fear that it would "copy my style". As the matches progressed, Ke Jie went back and forth, stating that "it is highly likely that I (could) lose" after analysing the first three matches, but regaining confidence after AlphaGo displayed flaws in the fourth match. Toby Manning, the referee of AlphaGo's match against Fan Hui, and Hajin Lee, secretary general of the International Go Federation, both reason that in the future, Go players will get help from computers to learn what they have done wrong in games and improve their skills. After game two, Lee said he felt "speechless": "From the very beginning of the match, I could never manage an upper hand for one single move. It was AlphaGo's total victory." Lee apologized for his losses, stating after game three that "I misjudged the capabilities of AlphaGo and felt powerless." He emphasized that the defeat was "Lee Se-dol's defeat" and "not a defeat of mankind". Lee said his eventual loss to a machine was "inevitable" but stated that "robots will never understand the beauty of the game the same way that we humans do." Lee called his game four victory a "priceless win that I (would) not exchange for anything." AlphaGo documentary film (2016) Reception On Rotten Tomatoes the documentary has an average rating of 100% from 10 reviews. Michael Rechtshaffen of the Los Angeles Times gave the documentary a positive review and said: "It helps matters when you have a group of engaging human subjects like soft-spoken Sedol, who's as intensively contemplative as the game itself, contrasted by the spirited, personable Fan Hui, the Paris-based European champ who accepts an offer to serve as an advisor for the DeepMind team after suffering a demoralizing AI trouncing". He also mentioned that with the passion of Hauschka's Volker Bertelmann, the film's producer, this documentary shows many unexpected sequences, including strategic and philosophical components. 
(Rechtshaffen, 2017). John Defore of The Hollywood Reporter wrote that this documentary is "an involving sports-rivalry doc with an AI twist." "In the end, observers wonder if AlphaGo's odd variety of intuition might not kill Go as an intellectual pursuit but shift its course, forcing the game's scholars to consider it from new angles. So maybe it isn't time to welcome our computer overlords, and won't be for a while - maybe they'll teach us to be better thinkers before turning us into their slaves." Greg Kohs, the director of the film, said "The complexity of the game of Go, combined with the technical depth of an emerging technology like artificial intelligence seemed like it might create an insurmountable barrier for a film like this. The fact that I was so innocently unaware of Go and AlphaGo actually proved to be beneficial. It allowed me to approach the action and interviews with pure curiosity, the kind that helps make any subject matter emotionally accessible." Kohs also said that "Unlike the film's human characters – who turn their curious quest for knowledge into an epic spectacle with great existential implications, who dare to risk their reputation and pride to contest that curiosity – AI might not yet possess the ability to empathize. But it can teach us profound things about our humanness – the way we play board games, the way we think and feel and grow. It's a deep, vast premise, but my hope is, by sharing it, we can discover something within ourselves we never saw before". Hajin Lee, a former professional Go player, described the documentary as "beautifully filmed". In addition to the story itself, the feelings and atmosphere are conveyed through different scene arrangements: for example, the close-up shots of Lee Sedol as he realizes that the AlphaGo AI is intelligent, the atmospheric scene of the Korean commentator's distress and affliction following the first defeat, and the tension held inside the room. The documentary also tells its story by describing the background of the AlphaGo technology and the customs of the Korean Go community. She suggested some areas that could additionally have been covered, such as the state of top Go AIs before AlphaGo, the confidence and pride of professional Go players, and the shift in players' perception of the Go AI during and after the match: "If anything could be added, I would include information about the primitive level of top Go A.I.s before AlphaGo, and more about professional Go players' lives and pride, to provide more context for Lee Sedol's pre-match confidence, and Go players' changing perception of AlphaGo as the match advanced" (Lee, 2017). Fan Hui, a professional Go player who had played against AlphaGo, said that "DeepMind had trained AlphaGo by showing it many strong amateur games of Go to develop its understanding of how a human plays before challenging it to play versions of itself thousands of times, a novel form of reinforcement learning which had given it the ability to rival an expert human. History had been made, and centuries of received learning overturned in the process. The program was free to learn the game for itself." Technology and AI-related fields James Vincent, a reporter for The Verge, comments that "It prods and pokes viewers with unsubtle emotional cues, like a reality TV show would: 'Now, you should be nervous; now you should feel relieved'." The footage slowly builds to the moment when Lee Sedol acknowledges the true power of the AlphaGo AI.
In the first game, Lee expected his greater experience to make it easy to beat the AI, but the early game dynamics were not what he expected. After losing the first match, he became more nervous and lost confidence; he reacted to attacks by saying that he just wanted to win the match, unintentionally displaying his anger and acting in an unusual way. At one point he spends 12 minutes on one move, while AlphaGo takes only a minute and a half to respond. AlphaGo weighs each alternative evenly and consistently, showing no reaction to Lee's fighting; the game continues as if he were not there. Vincent also said that "suffice to say that humanity does land at least one blow on the machines, through Lee's so-called 'divine move'." "More likely, the forces of automation we'll face will be impersonal and incomprehensible. They'll come in the form of star ratings we can't object to, and algorithms we can't fully understand. Dealing with the problems of AI will take a perspective that looks beyond individual battles. AlphaGo is worth seeing because it raises these questions" (Vincent, 2017). "Go is an extraordinary game but it represents what we can do with AI in all kinds of other spheres," said Murray Shanahan, professor of cognitive robotics at Imperial College London and senior research scientist at DeepMind. "In just the same way there are all kinds of realms of possibility within Go that have not been discovered, we could never have imagined the potential for discovering drugs and other materials." Similar systems Facebook has also been working on its own Go-playing system, Darkforest, also based on combining machine learning and Monte Carlo tree search. Although a strong player against other computer Go programs, as of early 2016, it had not yet defeated a professional human player. Darkforest has lost to Crazy Stone and Zen, and is estimated to be of similar strength to those programs. DeepZenGo, a system developed with support from video-sharing website Dwango and the University of Tokyo, lost 2–1 in November 2016 to Go master Cho Chikun, who holds the record for the largest number of Go title wins in Japan. A 2018 paper in Nature cited AlphaGo's approach as the basis for a new means of computing potential pharmaceutical drug molecules. Systems consisting of Monte Carlo tree search guided by neural networks have since been explored for a wide array of applications. Example game AlphaGo Master (white) v. Tang Weixing (31 December 2016), AlphaGo won by resignation. White 36 was widely praised. Impacts on Go The documentary film AlphaGo raised hopes that Lee Sedol and Fan Hui would have benefitted from their experience of playing AlphaGo, but their ratings were little changed; Lee Sedol was ranked 11th in the world, and Fan Hui 545th. On 19 November 2019, Lee announced his retirement from professional play, arguing that he could never be the top overall player of Go due to the increasing dominance of AI. Lee referred to such AI as "an entity that cannot be defeated".
Technology
Artificial Intelligence
null
55984460
https://en.wikipedia.org/wiki/Halszkaraptor
Halszkaraptor
Halszkaraptor (; meaning "Halszka's seizer") is a genus of waterfowl-like dromaeosaurid dinosaurs from Mongolia that lived during the Late Cretaceous period. It contains only one known species, Halszkaraptor escuilliei. The type specimen (holotype) has been compared to the bones of extant crocodilians and aquatic birds, and displays evidence of a semiaquatic lifestyle, although some researchers have questioned a semiaquatic ecology. A phylogenetic analysis revealed it was a member of the basal subfamily Halszkaraptorinae, along with Mahakala and Hulsanpes. History of discovery The holotype specimen of Halszkaraptor likely came from the Djadochta Formation at Ukhaa Tolgod in southern Mongolia, and was illegally removed by fossil poachers in or before 2011. The fossil found its way to Japan and Great Britain, being owned by several collectors for some years until the Eldonia company of fossil dealer François Escuillié obtained it. He identified it as a new species, and in 2015 took it to the Royal Belgian Institute of Natural Sciences in Brussels, showing it to paleontologists Pascal Godefroit and Andrea Cau for further verification. After verifying its authenticity, among other means by scanning it with synchrotron radiation (a beam of X-rays) at the European Synchrotron Radiation Facility, Cau and other prominent paleontologists described the genus in a detailed study published in the journal Nature. The fossil was returned to the Mongolian authorities. The holotype, MPC D-102/109, was found in a layer of orange sandstone of the Bayn Dzak Member of the Djadochta Formation, dating from the late Campanian, about seventy-five million years old. It consists of a relatively complete skeleton with skull. As of 2017, the fossil had not been further prepared. Work by the fossil dealers had at that point generally exposed the left side of the skeleton. The synchrotron scans revealed that the bones continued into the rock and that the piece was probably not a chimaera, an artificial assembly of bones of disparate species, though the top of the snout had been restored with plaster and some elements had been reattached to the rock by glue. The skeleton is largely articulated and not compressed. It represents a subadult individual, about one year old. The type species Halszkaraptor escuilliei was named and described in 2017 by Andrea Cau, Vincent Beyrand, Dennis F. A. E. Voeten, Vincent Fernandez, Paul Tafforeau, Koen Stein, Rinchen Barsbold, Khishigjav Tsogtbaatar, Philip John Currie, and Pascal Godefroit. The generic name combines a reference to the late Polish paleontologist Halszka Osmólska, who was involved in many expeditions to Mongolia and named the closely related Hulsanpes, with Latin raptor, "robber." The specific name honours Escuillié for having made the specimen available to science. Description Halszkaraptor was about the size of a mallard duck. The head was about long, the neck , the back and the sacrum . The describing authors indicated some distinguishing traits. Some of these were autapomorphies, unique derived characters. The premaxilla, the front snout bone, forms a flattened snout, occupying 32% of the snout length. The premaxilla bears eleven teeth. The jugal bone is rod-shaped and its ascending branch occupies only a tenth of the bar behind the eye socket, not reaching the orbit. The neck is extremely elongated, representing half of the snout-sacrum length. The postzygapophyses, rear joint processes, of the neck vertebrae bear no epipophyses, additional processes on their upper rim.
The neck vertebrae have extremely reduced neural spines: on the second to fifth vertebrae these are only low ridges and subsequent neck vertebrae lack them completely. In the second to fifth neck vertebrae the normally paired postzygapophyses have fused into a single lobe-shaped process. The neural spines of the tail vertebrae are extremely shortened: at the first three tail vertebrae they are formed like low bumps and subsequent tail vertebrae lack them completely. The chevrons of the tail base are large with a pentagonal profile. The first phalanx of the third finger has 47% of the length of the third metacarpal. Furthermore, a unique combination is present of traits that in themselves are not unique. The external bony nostril is situated behind the main body of the premaxilla, the point where it connects to the front branch of the maxilla. The descending branch of the postorbital bone is rod-shaped. The number of vertebrae of the neck and back totals twenty-two. Only the seventh, eighth and ninth neck vertebrae have pleurocoels, pneumatic depressions on their sides. The transition between the tail base and the middle tail is situated at the seventh to eighth vertebra. The third finger is longer than the second finger. Skull The snout, though elongated, is transversely expanded in front, creating a spoon-shaped profile in top view. It is also flat, and its width is 180% of its height. The top profile in side view is hollow. The expanded area consists of a relatively long premaxilla. This bone is internally excavated by a system of air chambers. From a larger chamber in the rear, neurovascular channels permeate the entire bone, not just the sides as in Neovenator, but the top also. These channels probably housed electro-sensory organs. Each premaxilla bears eleven teeth, a record among the entire Dinosauria. Theropods normally have four premaxillary teeth and the previous record for this group was seven, as found in spinosaurids. In Halszkaraptor, the premaxillary teeth are very closely packed, touching each other, and are very elongated, gradually recurving. The teeth in the maxilla, estimated in number at twenty to twenty-five, are more robust, curve only at their tips, and are spaced at a larger distance. They are more transversely flattened, with an oval cross-section. The dentary of the lower jaw likewise bears an estimated twenty to twenty-five teeth. The nostrils are relatively retracted. They are also unique for a theropod in being obliquely oriented to the top in front view. Despite the length of the snout, the main opening in the side of the front skull, the antorbital fenestra, is short; shorter than high. The rear skull roof is vaulted. Postcranial skeleton The vertebral column of Halszkaraptor contains ten neck vertebrae, twelve back vertebrae and six sacral vertebrae. The preserved tail vertebrae include the first twenty caudals and a series of six from the middle tail. The neck is very elongated. It equals 290% of the skull length and 150% of the back length. This implies that it represents half of the snout-sacrum length, a value that is the highest for all known Mesozoic paravians. Within the Paraves, only some more recent birds have a proportionally longer neck. Among more basal theropods only some oviraptorosaurs approach this value; even ornithomimosaurs never surpass 40%. The length is not caused by a greater number of vertebrae, as in the Oviraptorosauria, but by an elongation of the individual vertebrae.
The sixth cervical vertebra is the longest, being four times longer than tall. The neck vertebrae generally have a simplified structure, as exemplified by the lack of rear epipophyses. Most are not pneumatised by pleurocoels, depressions in which diverticula of air sacs penetrate the bone walls. On the front neck, the neural spines, normally rectangular plates, have been reduced to a low ridge; further back they have disappeared. At the first five neck vertebrae, the postzygapophyses have no separating space in between them but are fused into a single lobe. In other basal maniraptoriforms these rear joint processes are sometimes connected by a plate, but in that case the bony shelf is notched by a postspinal fossa causing a concave profile in top view; in Halszkaraptor this groove is absent and the profile is convex. The neck ribs are short, no longer than the vertebral bodies. The back vertebrae are not pneumatised. The tail is not stiffened by long zygapophyses or chevrons as in derived eudromaeosaurs. The tail base is rather short in that the transition point to the middle tail, where the transverse processes cease to exist, is at the eighth vertebra. The transition is also very gradual in morphology. The neural spines of the front tail are already strongly reduced: only the first three vertebrae possess them and they are formed like low bumps. Classification Halszkaraptor was placed in the Dromaeosauridae in 2017. A new clade Halszkaraptorinae was coined, containing Halszkaraptor and its close relatives Hulsanpes and Mahakala. A phylogenetic analysis conducted in 2017 by Cau et al., using updated data from the Theropod Working Group, showed that Halszkaraptorinae was the basalmost known dromaeosaurid group. Halszkaraptor occupied a basal position within Halszkaraptorinae, as the sister group of a clade formed by Hulsanpes and Mahakala. Paleobiology Andrea Cau argues that Halszkaraptor had characteristics that allowed it to spend time both in water and on land, including strong hindlimbs for running and smaller flipper-like forelimbs for swimming. The short tail would have brought the centre of gravity more to the front, which is more useful for swimming than walking. The torso would have been held more vertical than is normal with theropods. To this end, there are adaptations for an improved extension of the hindlimb, in the hip joint and the thighbone. It had many sharp, backward-curving teeth in its mouth, a long neck and sensory neurons in its snout that may have allowed it to detect vibrations in water, leading scientists to believe that it hunted aquatic prey. It had to come up onto land to reproduce, because, like all dinosaurs, it needed to lay its eggs on land. A more recent analysis performed by Cau specifically points out similarities to modern-day mergansers. He stated that these birds are probably the closest ecological analogs to Halszkaraptor, as they share similar traits with this dromaeosaurid taxon, such as the long neck and a serrated snout edge used to catch small prey. While they are less active moving on land, assuming a hip-extended body posture, on the water they use a distinct swimming mode that includes forelimb-propelled locomotion. This particular behaviour has also been inferred for Halszkaraptor, and seems to support a piscivorous and aquatic lifestyle similar to that of mergansers. Other researchers have either disagreed with or merely followed Cau's interpretation.
In 2019, Brownstein argued that the features noted for Halszkaraptor do not directly support its ability to swim. He also suggested that this dinosaur may be a basal dromaeosaur with transitional features, although Cau rebutted his claims a year later. In 2021, Hone and Holtz noted that since Halszkaraptor and many modern aquatic birds are considered semi-aquatic despite having no flattened unguals, flattened unguals like those of Spinosaurus do not necessarily indicate that an animal could swim; they did not propose their own view on this dinosaur's potential ability to swim. In 2022, Fabbri and his colleagues argued against a semi-aquatic ecology for Halszkaraptor, noting that it had low bone density, a trait not observed in semi-aquatic animals. In response, Cau has pointed out on his blog that swans similarly have low bone density yet have adaptations for semi-aquatic feeding. A 2024 study by Tse, Miller, and Pittman, focusing on the skull morphology and bite forces of various dromaeosaurids, found that Halszkaraptor had a rapid bite unsuited to piscivorous feeding as previously hypothesized, and instead suggested that it was an insectivore that hunted small invertebrates, possibly in low-light conditions (at night or in murky water), since it likely had exceptional low-light vision among dromaeosaurids based on its relatively large orbit size.
Biology and health sciences
Theropods
Animals
51556830
https://en.wikipedia.org/wiki/Southern%20giraffe
Southern giraffe
The southern giraffe (Giraffa giraffa), also known as the two-horned giraffe, is a species of giraffe native to Southern Africa. However, the IUCN currently recognizes only one species of giraffe with nine subspecies. Southern giraffes have rounded or blotched spots, some with star-like extensions on a light tan background, running down to the hooves. They range across South Africa, Angola, Namibia, Botswana, Zambia, Zimbabwe, and Mozambique. Their population is approximately 44,500 to 50,000 individuals. Giraffes, treated as one species, are considered Vulnerable to extinction by the IUCN. Taxonomy and evolution Living giraffes were originally classified as one species by Carl Linnaeus in 1758, under the binomial name Cervus camelopardalis. Morten Thrane Brünnich classified the genus Giraffa in 1772. Once considered a subspecies of the conglomerate species Giraffa camelopardalis, the southern giraffe was proposed by recent studies as a separate species in a reorganised genus Giraffa, under the binomial name Giraffa giraffa. However, this taxonomic scheme has been criticized, and currently the IUCN recognizes only one species of giraffe with nine subspecies. Subspecies Two subspecies of the southern giraffe are proposed. Descriptions The Cape subspecies of the southern giraffe has dark, somewhat rounded patches "with some fine projections" on a tawny background colour. The spots extend down the legs and get smaller. The median lump of bulls is less developed. Distribution and habitat Southern giraffes live in the savannahs and woodlands of northern South Africa, Angola, southern Botswana, southern Zimbabwe, Zambia and south-western Mozambique. After local extinctions in various places, South African giraffes have been reintroduced in many parts of Southern Africa, including in Eswatini. They are common both inside and outside protected areas. Ecology and behavior Southern giraffes usually live in savannahs and woodlands where food plants are available. They are herbivorous mammals, feeding on leaves, flowers, fruits and shoots of woody plants such as Acacia. Threats Southern giraffes are not threatened, as their population is increasing.
Biology and health sciences
Giraffidae
Animals
53120575
https://en.wikipedia.org/wiki/Disodium%20helide
Disodium helide
Disodium helide (Na2He) is a compound of helium and sodium that is stable at high pressures above 113 GPa. It was first predicted using the USPEX crystal structure prediction algorithm and then synthesised in 2016. Synthesis Na2He was predicted to be thermodynamically stable above 160 GPa and dynamically stable above 100 GPa. This means it should be possible to form it at the higher pressure and then decompress it to 100 GPa, but below that it would decompose. Compared with binary compounds of other elements with helium, it was predicted to be stable at the lowest pressure of any such combination. This also means, for example, that a helium–potassium compound is predicted to require much higher pressures, of the order of terapascals. The material was synthesized by putting tiny plates of sodium in a diamond anvil cell along with helium at 1600 bar, and then compressing to 130 GPa and heating to 1,500 K with a laser. Disodium helide is predicted to be an insulator and transparent. At 200 GPa the sodium atoms have a Bader charge of +0.599, the helium charge is −0.174, and the two-electron spots are each near −0.511. This phase could be called disodium helium electride. Disodium helide melts at a high temperature near 1,500 K, much higher than the melting point of sodium. When decompressed, it can keep its form down to 113 GPa. As pressure increases, the sodium is predicted to gain more positive charge, the helium to lose negative charge, and the free electron density to increase. The energy cost is compensated by the relative shrinking of the helium atoms and of the space occupied by the electrons. Structure Disodium helide has a cubic crystal structure, resembling that of fluorite. At 300 GPa the edge of a unit cell of the crystal has . Each unit cell contains four helium atoms, at the centres of the cube faces and at the corners, and eight sodium atoms at coordinates halfway between the centre and each corner. Electron pairs (2e−) are positioned on each edge and at the centre of the unit cell. Each pair of electrons is spin paired. The presence of these isolated electrons makes this an electride. The helium atoms do not participate in any bonding; however, the electron pairs can be considered as an eight-centre two-electron bond.
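As a way to keep track of the geometry just described, here is a small Python sketch (an illustrative reconstruction from the description above, not taken from the source) that lists fractional coordinates for one fluorite-like Na2He cell and checks the resulting stoichiometry:

```python
# Fractional coordinates of one conventional cubic unit cell, reconstructed
# from the description above: He on an fcc lattice (corners + face centres),
# Na filling the eight positions halfway between centre and corners, and
# localized electron pairs (the electride feature) on the edge midpoints
# and the cell centre. Each list holds one representative per site.
helium = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]

sodium = [(x, y, z)
          for x in (0.25, 0.75)
          for y in (0.25, 0.75)
          for z in (0.25, 0.75)]

# Twelve edge midpoints shared four ways contribute 3 sites per cell;
# the cell centre adds one more, for 4 electron pairs per cell.
electron_pairs = [(0.5, 0, 0), (0, 0.5, 0), (0, 0, 0.5), (0.5, 0.5, 0.5)]

# 8 Na : 4 He per cell gives the Na2He stoichiometry, and the 4 electron
# pairs account for the 8 electrons donated by the 8 sodium atoms.
assert len(sodium) / len(helium) == 2
assert 2 * len(electron_pairs) == len(sodium)
```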
Physical sciences
Noble gas compounds
Chemistry
70161362
https://en.wikipedia.org/wiki/Freight%20train
Freight train
A freight train, also called a goods train or cargo train, is a railway train that is used to carry cargo, as opposed to passengers. Freight trains are made up of one or more locomotives which provide propulsion, along with one or more railroad cars (also known as wagons) which carry freight. A wide variety of cargoes are carried on trains, but the low friction inherent to rail transport means that freight trains are especially suited to carrying bulk and heavy loads over longer distances. History The earliest recorded use of rail transport for freight was in Babylon, circa 2,200 B.C.E. This use took the form of wagons pulled on wagonways by horses or even humans. Locomotives Freight trains are almost universally powered by locomotives. Historically, steam locomotives were predominant, but beginning in the 1920s, diesel and electric locomotives displaced steam due to their greater reliability, cleaner emissions, and lower costs. Freight cars Freight trains carry cargo in freight cars, also known as goods wagons, which are unpowered and designed to carry various types of goods. Different types of freight cars may be used by a train, such as boxcars, tank cars, hopper cars, covered hopper cars, centerbeam cars, flatcars, intermodal well cars, gondola cars, and autorack cars. As of April 2020, there were 1.6 million rail cars in North America. Operations Freight trains often operate between classification yards, which are hubs where incoming freight trains are received and then broken up, with the cars then being assembled into new trains for other destinations. In contrast to this type of operation, which is known as wagonload (or carload) freight, there are also unit trains, which exclusively carry one type of cargo. They normally operate directly between origin and destination points, such as a coal mine and a power plant, without any changes to the makeup of the freight cars in between. This allows cargo to reach its destination faster and increases utilization of freight cars, lowering operating costs. Unlike passenger trains, freight trains often do not follow fixed schedules, but are run as needed. When sharing tracks with passenger trains, freight trains are scheduled to use lines during specific times to minimize their impact on passenger train operations, especially during the morning and evening rush hours.
Technology
Rail and cable transport
null
42946389
https://en.wikipedia.org/wiki/Swift%20%28programming%20language%29
Swift (programming language)
Swift is a high-level, general-purpose, multi-paradigm, compiled programming language created by Chris Lattner in 2010 for Apple Inc. and maintained by the open-source community. Swift compiles to machine code and uses an LLVM-based compiler. Swift was first released in June 2014, and the Swift toolchain has shipped in Xcode since Xcode version 6, released in September 2014. Apple intended Swift to support many core concepts associated with Objective-C, notably dynamic dispatch, widespread late binding, extensible programming, and similar features, but in a "safer" way, making it easier to catch software bugs; Swift has features addressing some common programming errors like null pointer dereferencing and provides syntactic sugar to help avoid the pyramid of doom. Swift supports the concept of protocol extensibility, an extensibility system that can be applied to types, structs and classes, which Apple promotes as a real change in programming paradigms they term "protocol-oriented programming" (similar to traits and type classes). Swift was introduced at Apple's 2014 Worldwide Developers Conference (WWDC). It underwent an upgrade to version 1.2 during 2014 and a major upgrade to Swift 2 at WWDC 2015. It was initially a proprietary language, but version 2.2 was made open-source software under the Apache License 2.0 on December 3, 2015, for Apple's platforms and Linux. Through version 3.0 the syntax of Swift went through significant evolution, with the core team making source stability a focus in later versions. In the first quarter of 2018 Swift surpassed Objective-C in measured popularity. Swift 4.0, released in 2017, introduced several changes to some built-in classes and structures. Code written with previous versions of Swift can be updated using the migration functionality built into Xcode. Swift 5, released in March 2019, introduced a stable binary interface on Apple platforms, allowing the Swift runtime to be incorporated into Apple operating systems. It is source compatible with Swift 4. Swift 5.1 was officially released in September 2019. Swift 5.1 builds on the previous version of Swift 5 by extending the stable features of the language to compile time with the introduction of module stability. The introduction of module stability makes it possible to create and share binary frameworks that will work with future releases of Swift. Swift 5.5, officially announced by Apple at the 2021 WWDC, significantly expands language support for concurrency and asynchronous code, notably introducing a unique version of the actor model. Swift 5.9 was released in September 2023 and includes a macro system, generic parameter packs, and ownership features like the new consume operator. Swift 5.10 was released in March 2024. This version improves the language's concurrency model, allowing for full data isolation to prevent data races. It is also the last release before Swift 6. Version 5.10 is currently available for macOS, Windows, and Linux. Swift 6 was released in September 2024. History Development of Swift started in July 2010 by Chris Lattner, with the eventual collaboration of many other programmers at Apple. Swift was motivated by the need for a replacement for Apple's earlier programming language Objective-C, which had been largely unchanged since the early 1980s and lacked modern language features. Swift took language ideas "from Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list".
On June 2, 2014, the Apple Worldwide Developers Conference (WWDC) application became the first publicly released app written with Swift. A beta version of the programming language was released to registered Apple developers at the conference, but the company did not promise that the final version of Swift would be source-code compatible with the test version. Apple planned to make source code converters available if needed for the full release. The Swift Programming Language, a free 500-page manual, was also released at WWDC, and is available on the Apple Books Store and the official website. Swift reached the 1.0 milestone on September 9, 2014, with the Gold Master of Xcode 6.0 for iOS. Swift 1.1 was released on October 22, 2014, alongside the launch of Xcode 6.1. Swift 1.2 was released on April 8, 2015, along with Xcode 6.3. Swift 2.0 was announced at WWDC 2015, and was made available for publishing apps in the App Store on September 21, 2015. Swift 3.0 was released on September 13, 2016. Swift 4.0 was released on September 19, 2017. Swift 4.1 was released on March 29, 2018. Swift won first place for Most Loved Programming Language in the Stack Overflow Developer Survey 2015 and second place in 2016. On December 3, 2015, the Swift language, supporting libraries, debugger, and package manager were open-sourced under the Apache 2.0 license with a Runtime Library Exception, and Swift.org was created to host the project. The source code is hosted on GitHub, where it is easy for anyone to get the code, build it themselves, and even create pull requests to contribute code back to the project. In December 2015, IBM announced its Swift Sandbox website, which allowed developers to write Swift code in one pane and display output in another. The Swift Sandbox was deprecated in January 2018. During WWDC 2016, Apple announced an iPad-exclusive app named Swift Playgrounds, intended to teach people how to code in Swift. The app is presented in a 3D video game-like interface which provides feedback when lines of code are placed in a certain order and executed. In January 2017, Chris Lattner announced his departure from Apple for a new position with Tesla Motors, with the Swift project lead role going to team veteran Ted Kremenek. During WWDC 2019, Apple announced SwiftUI with Xcode 11, which provides a framework for declarative UI structure design across all Apple platforms. Official downloads of the SDK and toolchain for the Ubuntu distribution of Linux have been available since Swift 2.2, with more distributions, including CentOS and Amazon Linux, added since Swift 5.2.4. There is also an unofficial SDK and native toolchain package for Android. Platforms The platforms Swift supports are Apple's operating systems (Darwin, iOS, iPadOS, macOS, tvOS, watchOS), Linux, Windows, and Android. A key aspect of Swift's design is its ability to interoperate with the huge body of existing Objective-C code developed for Apple products over the previous decades, such as Cocoa and the Cocoa Touch frameworks. On Apple platforms, it links with the Objective-C runtime library, which allows C, Objective-C, C++ and Swift code to run within one program. Version history Features Swift is a general-purpose programming language that employs modern programming-language theory concepts and strives to present a simple, yet powerful syntax. Swift incorporates innovations and conventions from various programming languages, with notable inspiration from Objective-C, which it replaced as the primary development language on Apple platforms.
Swift was designed to be safe and friendly to new programmers while not sacrificing speed. By default Swift manages all memory automatically and ensures variables are always initialized before use. Array accesses are checked for out-of-bounds errors and integer operations are checked for overflow. Parameter names allow creating clear APIs. Protocols define interfaces that types may adopt, while extensions allow developers to add more functionality to existing types. Swift enables object-oriented programming with support for classes, subtyping, and method overriding. Optionals allow nil values to be handled explicitly and safely. Concurrent programs can be written using async/await syntax, and actors isolate shared mutable state in order to eliminate data races. Basic syntax Swift's syntax is similar to C-style languages. Code begins executing in the global scope by default. Alternatively, the @main attribute can be applied to a structure, class, or enumeration declaration to indicate that it contains the program's entry point. Swift's "Hello, World!" program is:

print("Hello, world!")

The print function used here is included in Swift's standard library, which is available to all programs without the need to import external modules. Statements in Swift don't have to end with a semicolon; however, semicolons are required to separate multiple statements written on the same line. Single-line comments begin with // and continue until the end of the current line. Multiline comments are contained between /* and */ characters. Constants are declared with the let keyword and variables with the var keyword. Values must be initialized before they are read. Values may infer their type based on the type of the provided initial value. If the initial value is set after the value's declaration, a type must be declared explicitly.

let highScoreThreshold = 1000 // A constant with type Int. The type was inferred based on the provided value.
var currentScore = 980 // A variable with type Int.
currentScore = 1200 // The value of variables can change over time.
let playerMessage: String // A constant with explicit type String.
if currentScore > highScoreThreshold {
    playerMessage = "You are a top player!"
} else {
    playerMessage = "Better luck next time."
}
print(playerMessage)
// Prints "You are a top player!"

Control flow in Swift is managed with if-else, guard, and switch statements, along with while and for-in loops. An if statement takes a Boolean condition and executes the body of the statement if the condition is true; otherwise the optional else body is executed. The if let syntax provides syntactic sugar for checking for the existence of an optional value and unwrapping it at the same time.

let someNumber = 42
if someNumber % 2 == 0 { // Use the remainder operator to find the remainder of someNumber divided by 2.
    print("\(someNumber) is even.")
} else {
    print("\(someNumber) is odd.")
}
// Prints "42 is even."

Functions are defined with the func keyword. Function parameters may have names which allow function calls to read like phrases. An underscore before the parameter name allows the argument label to be omitted from the call site. Tuples can be used by functions to return multiple pieces of data at once.

func constructGreeting(for name: String) -> String {
    return "Hello \(name)!"
}
let greeting = constructGreeting(for: "Craig")
print(greeting)
// Prints "Hello Craig!"

Functions, and anonymous functions known as closures, can be assigned to properties and passed around the program like any other value.

func divideByTwo(_ aNum: Int) -> Int {
    return aNum / 2
}
func multiplyByTwo(_ aNum: Int) -> Int {
    return aNum * 2
}
let mathOperation = multiplyByTwo
print(mathOperation(21))
// Prints "42"

A guard statement requires that the given condition is true before continuing past the statement; otherwise the body of the provided else clause is run. The else clause must exit control of the code block in which the guard statement appears. guard statements are useful for ensuring that certain requirements are met before continuing with program execution. In particular, they can be used to create an unwrapped version of an optional value that is guaranteed to be non-nil for the remainder of the enclosing scope.

func divide(numerator: Int?, byDenominator denominator: Int) -> Int? {
    guard denominator != 0 else {
        print("Can't divide by 0.")
        return nil
    }
    guard let numerator else {
        print("The provided numerator is nil.")
        return nil
    }
    return numerator / denominator
}
let result = divide(numerator: 3, byDenominator: 0)
print("Division result is: \(result)")
// Prints:
// "Can't divide by 0."
// "Division result is: nil."

switch statements compare a value with multiple potential values and then execute an associated code block. switch statements must be exhaustive, either by including cases for all possible values or by including a default case which is run when the provided value doesn't match any of the other cases. switch cases do not implicitly fall through, although they may explicitly do so with the fallthrough keyword. Pattern matching can be used in various ways inside switch statements. Here is an example of an integer being matched against a number of potential ranges:

let someNumber = 42
switch someNumber {
case ..<0:
    print("\(someNumber) is negative.")
case 0:
    print("\(someNumber) is 0.")
case 1...9:
    print("\(someNumber) is greater than 0, but less than 10.")
default:
    print("\(someNumber) is greater than 9.")
}
// Prints "42 is greater than 9."

for-in loops iterate over a sequence of values:

let names = ["Will", "Anna", "Bart"]
for name in names {
    print(name)
}
// Prints:
// Will
// Anna
// Bart

while loops iterate as long as the given Boolean condition evaluates to true:

// Add together all the numbers from 1 to 5.
var i = 1
var result = 0
while i <= 5 { // The loop performs its body as long as i is less than or equal to 5.
    result += i // Add i to the current result.
    i += 1 // Increment i by 1.
}
print(result)
// Prints "15"

Closure support Swift supports closures, which are self-contained blocks of functionality that can be passed around and used in code, and can also be used as anonymous functions. Here are some examples:

// The closure's type, defined by its input and output values, can be specified outside the closure:
let closure1: (Int, Int) -> Int = { arg1, arg2 in
    return arg1 + arg2
}
// …or inside it:
let closure2 = { (arg1: Int, arg2: Int) -> Int in
    return arg1 + arg2
}
// In most cases, the closure's return type can be inferred automatically by the compiler.
let closure3 = { (arg1: Int, arg2: Int) in
    return arg1 + arg2
}

Closures can be assigned to variables and constants, and can be passed into other functions or closures as parameters. Single-expression closures may drop the return keyword.
Swift also has a trailing closure syntax, which allows the closure to be written after the end of the function call instead of within the function's parameter list. Parentheses can be omitted altogether if the closure is the function's only parameter:

// This function takes a closure which receives no input parameters and returns an integer,
// evaluates it, and uses the closure's return value (an Int) as the function's return value.
func foo(closure bar: () -> Int) -> Int {
    return bar()
}
// Without trailing closure syntax:
foo(closure: { return 1 })
// With trailing closure syntax, and implicit return:
foo { 1 }

Starting from version 5.3, Swift supports multiple trailing closures:

// This function passes the return of the first closure as the parameter of the second,
// and returns the second closure's result:
func foo(bar: () -> Int, baz: (Int) -> Int) -> Int {
    return baz(bar())
}
// With no trailing closures:
foo(bar: { return 1 }, baz: { x in return x + 1 })
// With 1 trailing closure:
foo(bar: { return 1 }) { x in return x + 1 }
// With 2 trailing closures (only the first closure's argument name is omitted):
foo { return 1 } baz: { x in return x + 1 }

Swift will provide shorthand argument names for inline closures, removing the need to explicitly name all of the closure's parameters. Arguments can be referred to with the names $0, $1, $2, and so on:

let names = ["Josephine", "Steve", "Chris", "Barbara"]
// filter calls the given closure for each value in names.
// Values with a character count less than 6 are kept, the others are dropped.
let shortNames = names.filter { $0.count < 6 }
print(shortNames)
// Prints "["Steve", "Chris"]"

Closures may capture values from their surrounding scope. The closure will refer to this captured value for as long as the closure exists:

func makeMultiplier(withMultiple multiple: Int) -> (Int) -> (Int) {
    // Create and return a closure that takes in an Int and returns the input multiplied by the value of multiple.
    return { $0 * multiple }
}
let multiplier = makeMultiplier(withMultiple: 3)
print(multiplier(3)) // Prints "9"
print(multiplier(10)) // Prints "30"

String support The Swift standard library includes the Unicode-compliant String and Character types. String values can be initialized with a string literal, a sequence of characters surrounded by double quotation marks. Strings can be concatenated with the + operator:

var someString = "Hello,"
someString += " world!"

String interpolation allows for the creation of a new string from other values and expressions. Values written between parentheses preceded by a backslash will be inserted into the enclosing string literal:

var currentScore = 980
print("Your score is \(currentScore).")
// Prints "Your score is 980."

A for-in loop can be used to iterate over the characters contained in a string:

for character in "Swift" {
    print(character)
}
// S
// w
// i
// f
// t

When the Foundation framework is imported, Swift invisibly bridges the String type to NSString, the string class commonly used in Objective-C. Callable objects Access control Swift supports five access control levels for symbols: open, public, internal, fileprivate, and private. Unlike many object-oriented languages, these access controls ignore inheritance hierarchies: private indicates that a symbol is accessible only in the immediate scope, fileprivate indicates it is accessible only from within the file, internal indicates it is accessible within the containing module, public indicates it is accessible from any module, and open (only for classes and their methods) indicates that the class may be subclassed outside of the module.
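A minimal illustrative sketch of the five access levels (not from the source; the type and member names are hypothetical):

// Inside a hypothetical module, each member below has a different visibility.
open class Reporter {                        // open: usable and subclassable from other modules
    public init() {}                         // public: visible to any importing module
    internal var buffer: [String] = []       // internal (the default): visible within this module only
    fileprivate func flush() { buffer = [] } // fileprivate: visible within this source file only
    private var token = "secret"             // private: visible only within the enclosing declaration
}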
Optionals and chaining An important feature in Swift is option types, which allow references or values to operate in a manner similar to the common pattern in C, where a pointer may either refer to a specific value or to no value at all. This implies that non-optional types cannot result in a null-pointer error; the compiler can ensure this is not possible. Optional types are created with the Optional enum. To make an integer that is nullable, one would use a declaration similar to var optionalInteger: Optional<Int>. As in C#, Swift also includes syntactic sugar for this, allowing one to indicate a variable is optional by placing a question mark after the type name: var optionalInteger: Int?. Variables or constants that are marked optional either have a value of the underlying type or are nil. Optional types wrap the base type, resulting in a different instance. String and String? are fundamentally different types: the former is of type String, while the latter is an Optional that may be holding some String value. To access the value inside, assuming it is not nil, it must be unwrapped to expose the instance inside. This is performed with the ! operator:

let myValue = anOptionalInstance!.someMethod()

In this case, the ! operator unwraps anOptionalInstance to expose the instance inside, allowing the method call to be made on it. If anOptionalInstance is nil, a null-pointer error occurs, terminating the program. This is known as force unwrapping. Optionals may be safely unwrapped using optional chaining, which first tests whether the instance is nil, and then unwraps it if it is non-nil:

let myValue = anOptionalInstance?.someMethod()

In this case the runtime calls someMethod only if anOptionalInstance is not nil, suppressing the error. A ? must be placed after every optional property. If any of these properties are nil, the entire expression evaluates as nil. The origin of the term chaining comes from the more common case where several method calls/getters are chained together. For instance:

let aTenant = aBuilding.tenantList[5]
let theirLease = aTenant.leaseDetails
let leaseStart = theirLease?.startDate

can be reduced to:

let leaseStart = aBuilding.tenantList[5].leaseDetails?.startDate

Swift's use of optionals allows the compiler to use static dispatch because the unwrapping action is called on a defined instance (the wrapper), versus occurring in a runtime dispatch system. Value types In many object-oriented languages, objects are represented internally in two parts. The object is stored as a block of data placed on the heap, while the name (or "handle") to that object is represented by a pointer. Objects are passed between methods by copying the value of the pointer, allowing the same underlying data on the heap to be accessed by anyone with a copy. In contrast, basic types like integers and floating-point values are represented directly; the handle contains the data, not a pointer to it, and that data is passed directly to methods by copying. These styles of access are termed pass-by-reference in the case of objects, and pass-by-value for basic types. Both concepts have their advantages and disadvantages. Objects are useful when the data is large, like the description of a window or the contents of a document. In these cases, access to that data is provided by copying a 32- or 64-bit value, versus copying an entire data structure.
However, smaller values like integers are the same size as pointers (typically both are one word), so there is no advantage to passing a pointer versus passing the value. Swift offers built-in support for objects using either pass-by-reference or pass-by-value semantics, the former using the class declaration and the latter using struct. Structs in Swift have almost all the same features as classes: methods, protocol conformance, and the extension mechanisms. For this reason, Apple terms all data generically as instances, versus objects or values. Structs do not support inheritance, however. The programmer is free to choose which semantics are more appropriate for each data structure in the application. Larger structures like windows would be defined as classes, allowing them to be passed around as pointers. Smaller structures, like a 2D point, can be defined as structs, which will be pass-by-value and allow direct access to their internal data with no indirection or reference counting. The performance improvement inherent to the pass-by-value concept is such that Swift uses these types for almost all common data types, including Int and Double, and types normally represented by objects, like String and Array. Using value types can result in significant performance improvements in user applications as well. Array, Dictionary, and Set all utilize copy on write so that their data are copied only if and when the program attempts to change a value in them. This means that the various accessors have what is in effect a pointer to the same data storage. So while the data is physically stored as one instance in memory, at the level of the application these values are separate, and physical separation is enforced by copy on write only if needed. Extensions Extensions add new functionality to an existing type, without the need to subclass or even have access to the original source code. Extensions can add new methods, initializers, computed properties, subscripts, and protocol conformances. An example might be to add a spell checker to the base String type, which means all instances of String in the program gain the ability to spell-check. The system is also widely used as an organizational technique, allowing related code to be gathered into library-like extensions. Extensions are declared with the extension keyword.

struct Rectangle {
    let width: Double
    let height: Double
}

extension Rectangle {
    var area: Double {
        return height * width
    }
}

Protocol-oriented programming Protocols promise that a particular type implements a set of methods or properties, meaning that other instances in the system can call those methods on any instance implementing that protocol. This is often used in modern object-oriented languages as a substitute for multiple inheritance, although the feature sets are not entirely similar. In Objective-C, and most other languages implementing the protocol concept, it is up to the programmer to ensure that the required methods are implemented in each class. Swift adds the ability to add these methods using extensions, and to use generic programming (generics) to implement them. Combined, these allow protocols to be written once and support a wide variety of instances. Also, the extension mechanism can be used to add protocol conformance to an object that does not list that protocol in its definition.
For example, a protocol might be declared called Printable, which ensures that instances that conform to the protocol implement a description property and a printDetails() method requirement:

// Define a protocol named Printable
protocol Printable {
    var description: String { get } // A read-only property requirement
    func printDetails() // A method requirement
}

This protocol can now be adopted by other types:

// Adopt the Printable protocol in a class
class MyClass: Printable {
    var description: String {
        return "An instance of MyClass"
    }
    func printDetails() {
        print(description)
    }
}

Extensions can be used to add protocol conformance to types. Protocols themselves can also be extended to provide default implementations of their requirements. Adopters may define their own implementations, or they may use the default implementation:

extension Printable {
    // All Printable instances will receive this implementation, or they may define their own.
    func printDetails() {
        print(description)
    }
}

// Bool now conforms to Printable, and inherits the printDetails() implementation above.
extension Bool: Printable {
    var description: String {
        return "An instance of Bool with value: \(self)"
    }
}

In Swift, like many modern languages supporting interfaces, protocols can be used as types, which means variables and methods can be defined by protocol instead of their specific type:

func getSomethingPrintable() -> any Printable {
    return true
}

var someSortOfPrintableInstance = getSomethingPrintable()
print(someSortOfPrintableInstance.description)
// Prints "An instance of Bool with value: true"

It does not matter what the concrete type of someSortOfPrintableInstance is; the compiler will ensure that it conforms to the protocol, and thus this code is safe. This syntax also means that collections can be based on protocols, as in let printableArray: [any Printable]. Both extensions and protocols are used extensively in Swift's standard library; in Swift 5.9, approximately 1.2 percent of all symbols within the standard library were protocols, and another 12.3 percent were protocol requirements or default implementations. For instance, Swift uses extensions to add the Equatable protocol to many of its basic types, like Strings and Arrays, allowing them to be compared with the == operator. The Equatable protocol also defines this default implementation:

func !=<T : Equatable>(lhs: T, rhs: T) -> Bool

This function defines a method that works on any instance conforming to Equatable, providing a not equals operator. Any instance, class or struct, automatically gains this implementation simply by conforming to Equatable. Protocols, extensions, and generics can be combined to create sophisticated APIs. For example, constraints allow types to conditionally adopt protocols or methods based on the characteristics of the adopting type. A common use case is adding a method on collection types only when the elements contained within the collection are Equatable:

extension Array where Element: Equatable {
    // allEqual will be available only on instances of Array that contain Equatable elements.
    func allEqual() -> Bool {
        for element in self {
            if element != self.first {
                return false
            }
        }
        return true
    }
}

Concurrency Swift 5.5 introduced structured concurrency into the language. Structured concurrency uses async/await syntax similar to Kotlin, JavaScript, and Rust. An async function is defined with the async keyword after the parameter list.
When calling an async function, the await keyword must be written before the call to indicate that execution will potentially suspend while calling the function. While a function is suspended, the program may run some other concurrent function in the same program. This syntax allows programs to clearly call out potential suspension points and avoid a version of the pyramid of doom caused by the previously widespread use of closure callbacks.

func downloadText(name: String) async -> String {
    let result = // ... some asynchronous downloading code ...
    return result
}
let text = await downloadText(name: "text1")

The async let syntax allows multiple functions to run in parallel. await is again used to mark the point at which the program will suspend to wait for the completion of the async functions called earlier.

// Each of these calls to downloadText will run in parallel.
async let text1 = downloadText(name: "text1")
async let text2 = downloadText(name: "text2")
async let text3 = downloadText(name: "text3")

let textToPrint = await [text1, text2, text3] // Suspends until all three downloadText calls have returned.
print(textToPrint)

Tasks and TaskGroups can be created explicitly to spawn a dynamic number of child tasks during runtime:

let taskHandle = Task {
    await downloadText(name: "someText")
}
let result = await taskHandle.value

Swift uses the actor model to isolate mutable state, allowing different tasks to mutate shared state in a safe manner. Actors are declared with the actor keyword and are reference types, like classes. Only one task may access the mutable state of an actor at a time. Actors may access and mutate their own internal state freely, but code running in separate tasks must mark each access with the await keyword to indicate that the code may suspend until other tasks finish accessing the actor's state.

actor Directory {
    var names: [String] = []
    func add(name: String) {
        names.append(name)
    }
}
let directory = Directory()
// Code suspends until other tasks finish accessing the actor.
await directory.add(name: "Tucker")
print(await directory.names)

Libraries, runtime, development On Apple systems, Swift uses the same runtime as the extant Objective-C system, but requires iOS 7 or macOS 10.9 or higher. It also depends on Grand Central Dispatch. Swift and Objective-C code can be used in one program, and by extension, C and C++ also. Beginning in Swift 5.9, C++ code can be used directly from Swift code. In the case of Objective-C, Swift has considerable access to the object model, and can be used to subclass, extend and use Objective-C code to provide protocol support. The converse is not true: a Swift class cannot be subclassed in Objective-C. To aid development of such programs, and the re-use of extant code, Xcode 6 and higher offers a semi-automated system that builds and maintains a bridging header to expose Objective-C code to Swift. This takes the form of an additional header file that simply defines or imports all of the Objective-C symbols that are needed by the project's Swift code. At that point, Swift can refer to the types, functions, and variables declared in those imports as though they were written in Swift. Objective-C code can also use Swift code directly, by importing an automatically maintained header file with Objective-C declarations of the project's Swift symbols. For instance, an Objective-C file in a mixed project called "MyApp" could access Swift classes or functions with the code #import "MyApp-Swift.h".
Not all symbols are available through this mechanism, however—use of Swift-specific features like generic types, non-object optional types, sophisticated enums, or even Unicode identifiers may render a symbol inaccessible from Objective-C. Swift also has limited support for attributes, metadata that is read by the development environment and is not necessarily part of the compiled code. Like Objective-C, attributes use the @ syntax, but the currently available set is small. One example is the @IBOutlet attribute, which marks a given value in the code as an outlet, available for use within Interface Builder (IB). An outlet is a device that binds the value of the on-screen display to an object in code. On non-Apple systems, Swift does not depend on an Objective-C runtime or other Apple system libraries; a set of Swift "Corelib" implementations replace them. These include "swift-corelibs-foundation" to stand in for the Foundation Kit, "swift-corelibs-libdispatch" to stand in for Grand Central Dispatch, and "swift-corelibs-xctest" to stand in for the XCTest APIs from Xcode. As of 2019, with Xcode 11, Apple has also added a major new UI paradigm called SwiftUI. SwiftUI replaces the older Interface Builder paradigm with a new declarative development paradigm. Memory management Swift uses Automatic Reference Counting (ARC) to manage memory. Every instance of a class or closure maintains a reference count which keeps a running tally of the number of references the program is holding on to. When this count reaches 0, the instance is deallocated. This automatic deallocation removes the need for a garbage collector, as instances are deallocated as soon as they are no longer needed. A strong reference cycle can occur if two instances each strongly reference each other (e.g. A references B, B references A). Since neither instance's reference count can ever reach zero, neither is ever deallocated, resulting in a memory leak. Swift provides the keywords weak and unowned to prevent strong reference cycles. These keywords allow an instance to be referenced without incrementing its reference count. weak references must be optional variables, since they can change and become nil. Attempting to access an unowned value that has already been deallocated results in a runtime error. A closure within a class can also create a strong reference cycle by capturing self references. A capture list can indicate that self references should be treated as weak or unowned.

class Person {
    let name: String
    // Defined as a weak reference in order to break the reference cycle. weak references do not increment the reference count of the instance that they refer to.
    weak var home: Home?
    init(name: String) {
        self.name = name
    }
    deinit {
        print("De-initialized \(name)")
    }
}

class Home {
    let address: String
    var owner: Person?
    init(address: String, owner: Person?) {
        self.address = address
        self.owner = owner
    }
    deinit {
        print("De-initialized \(address)")
    }
}

var stacy: Person? = Person(name: "Stacy")
var house21b: Home? = Home(address: "21b Baker Street", owner: stacy)
stacy?.home = house21b // stacy and house21b now refer to each other.
stacy = nil // The reference count for stacy is now 1, because house21b is still holding a reference to it.
house21b = nil // house21b's reference count drops to 0, which in turn drops stacy's count to 0 because house21b was the last instance holding a strong reference to stacy.
// Prints:
// De-initialized 21b Baker Street
// De-initialized Stacy

Debugging A key element of the Swift system is its ability to be cleanly debugged and run within the development environment, using a read–eval–print loop (REPL), giving it interactive properties more in common with the scripting abilities of Python than traditional system programming languages. The REPL is further enhanced with playgrounds, interactive views running within the Xcode environment or Playgrounds app that respond to code or debugger changes on-the-fly. Playgrounds allow programmers to add Swift code along with markdown documentation. Programmers can step through code and add breakpoints using LLDB, either in a console or in an IDE like Xcode. Comparisons to other languages Swift is considered a C family programming language and is similar to C in various ways: Most operators in C also appear in Swift, although some operators such as + have slightly different behavior. For example, in Swift, + traps on overflow, whereas &+ is used to denote the C-like behavior of wrapping on overflow (a short illustrative sketch appears at the end of this article). Curly braces are used to group statements. Variables are assigned using an equals sign, but compared using two consecutive equals signs. A new identity operator, ===, is provided to check if two data elements refer to the same object. Control statements while, if, and switch are similar, but have extended functions, e.g., a switch that takes non-integer cases, while and if supporting pattern matching and conditionally unwrapping optionals, and for using the for...in syntax. Square brackets are used with arrays, both to declare them and to get a value at a given index in one of them. It also has similarities to Objective-C: Basic numeric types: Int, UInt, Float, Double Class methods are inherited, like instance methods; self in class methods is the class the method was called on. Similar for...in enumeration syntax. Differences from Objective-C include: Statements need not end with semicolons (;), though these must be used to allow more than one statement on one line. No header files. Uses type inference. Generic programming. Functions are first-class objects. Enumeration cases can have associated data (algebraic data types). Operators can be redefined for classes (operator overloading), and new operators can be defined. Strings fully support Unicode. Most Unicode characters can be used in either identifiers or operators. No exception handling. Swift 2 introduced a different and incompatible error-handling model. Several features of earlier C-family languages that are easy to misuse have been removed: Pointers are not exposed by default. There is no need for the programmer to keep track of and mark names for referencing or dereferencing. Assignments return no value. This prevents the common error of writing i = 0 instead of i == 0 (which raises a compile-time error). No need to use break statements in switch blocks. Individual cases do not fall through to the next case unless the fallthrough statement is used. Variables and constants are always initialized and array bounds are always checked. Integer overflows, which result in undefined behavior for signed integers in C, are trapped as a run-time error in Swift. Programmers can choose to allow overflows by using the special arithmetical operators &+, &-, &*, &/ and &%. The properties min and max are defined in Swift for all integer types and can be used to safely check for potential overflows, versus relying on constants defined for each type in external libraries.
The one-statement form of if and while, which allows for the omission of braces around the statement, is unsupported. C-style enumeration for (int i = 0; i < c; i++), which is prone to off-by-one errors, is unsupported (from Swift 3 onward). The pre- and post- increment and decrement operators (i++, --i, ...) are unsupported (from Swift 3 onward), especially since C-style for statements, one of their main uses, are also unsupported. Development and other implementations Because Swift can run on Linux, it is sometimes also used as a server-side language. Some web frameworks have already been developed, such as IBM's Kitura (now discontinued), Perfect, Vapor, and Hummingbird. An official "Server APIs" work group has also been started by Apple, with members of the Swift developer community playing a central role. A second free implementation of Swift that targets Cocoa, Microsoft's Common Language Infrastructure (.NET Framework, now .NET), and the Java and Android platforms exists as part of the Elements Compiler from RemObjects Software. Subsets of Swift have been ported to additional platforms, such as Arduino and Mac OS 9.
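As a concrete illustration of the overflow behavior noted under Comparisons to other languages, here is a minimal sketch (not from the source):

let maxValue: Int8 = Int8.max // 127, the largest value an Int8 can hold
// let overflowed = maxValue + 1 // The ordinary + operator would trap here with a run-time error.
let wrapped = maxValue &+ 1 // The overflow operator &+ wraps around, like C arithmetic.
print(wrapped) // Prints "-128"
print(Int8.min, Int8.max) // The min and max properties allow checking ranges before arithmetic.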
Technology
Programming languages
null
49271581
https://en.wikipedia.org/wiki/Discord
Discord
Discord is an instant messaging and VoIP social platform which allows communication through voice calls, video calls, text messaging, and media. Communication can be private or take place in virtual communities called "servers". A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links. Discord runs on Windows, macOS, Android, iOS, iPadOS, Linux, and in web browsers. The service has about 150 million monthly active users and 19 million weekly active servers. It is primarily used by gamers, although the share of users interested in other topics is growing. Discord was the 30th most visited website in the world, with 22.98% of its traffic coming from the United States. In March 2022, Discord employed 600 people globally. History The concept of Discord came from Jason Citron, who had founded OpenFeint, a social gaming platform for mobile games, and Stanislav Vishnevskiy, who had founded Guildwork, another social gaming platform. Citron sold OpenFeint to GREE in 2011 for $104 million, which he used to found Hammer & Chisel, a game development studio, in 2012. Their first product was Fates Forever, released in 2014, which Citron anticipated to be the first multiplayer online battle arena (MOBA) game on mobile platforms, but it did not become commercially successful. According to Citron, during the development process, he noticed how difficult it was for his team to work out tactics in games like Final Fantasy XIV and League of Legends using available voice over IP (VoIP) software. This led to the development of a chat service with a focus on user friendliness and minimal impact on performance. The name Discord was chosen because it "sounds cool and has to do with talking", was easy to say, spell, and remember, and was available for trademark and website registration. In addition, "Discord in the gaming community" was the problem they wished to solve. To develop Discord, Hammer & Chisel gained additional funding from YouWeb's 9+ incubator, which had also funded the startup of Hammer & Chisel, and from Benchmark Capital and Tencent. Discord was publicly released in May 2015 under the domain name discordapp.com. According to Citron, the company made no specific moves to target any particular audience, but some gaming-related subreddits quickly began to replace their IRC links with Discord links. Discord became widely used by esports and LAN tournament gamers. The company benefited from relationships with Twitch streamers and subreddit communities for Diablo and World of Warcraft. In January 2016, Discord raised an additional $20 million in funding, including an investment from WarnerMedia (then TimeWarner). WarnerMedia was acquired by AT&T in 2018, and WarnerMedia Investment Group was shut down in 2019, selling its equity. Microsoft announced in April 2018 that it would provide Discord support for Xbox Live users, allowing them to link their Discord and Xbox Live accounts so that they could connect with their Xbox Live friends list through Discord. In December 2018, the company announced it had raised $150 million in funding at a $2 billion valuation. The round was led by Greenoaks Capital with participation from Firstmark, Tencent, IVP, Index Ventures and Technology Opportunity Partners. Starting in June 2020, Discord announced it was shifting focus away from video gaming specifically to a more all-purpose communication and chat client for all functions, revealing its new slogan "Your place to talk", along with a revised website.
Among other planned changes were reducing the number of gaming in-jokes used within the client, improving the user onboarding experience, and increasing server capacity and reliability. The company announced it had received an additional $100 million in investments to help with these changes. In March 2021, Discord announced it had hired its first finance chief, former head of finance for Pinterest Tomasz Marcinkowski. An inside source called this one of the first steps for the company towards a potential initial public offering, though co-founder and chief executive officer Jason Citron had stated earlier in the month he was not thinking about taking the company public. Discord doubled its monthly user base to about 140 million in 2020. Also in March 2021, Bloomberg News and The Wall Street Journal reported that several companies were looking to purchase Discord, with Microsoft named as the likely lead buyer at a value estimated at around $10 billion. However, Discord ended talks with Microsoft, opting to stay independent. Instead, Discord launched another round of investment in April 2021. Among those investing into the company was Sony Interactive Entertainment; the company stated that it intended to integrate a portion of Discord's services into the PlayStation Network by 2022. In May 2021, Discord rebranded its game controller-shaped logo "Clyde" in celebration of its sixth anniversary. The company also changed the color palette of its branding and user interfaces, making it much more saturated, to be more "bold and playful". It also changed its slogan from "your place to talk" to "imagine a place", believing that it would be easier to attach to additional taglines; these changes were met with backlash and criticism from Discord users. In July 2021, Discord acquired Sentropy, an internet moderation company. Ahead of a funding round in August 2021, Discord had reported about $130 million in revenue for 2020, roughly triple that of the prior year, and had an estimated valuation of about $15 billion. According to Citron, the increased valuation was due to the shift away from "broadcast wide-open social media communication services to more small, intimate places", as well as increased usage from the COVID-19 pandemic. The service captured users who were leaving Facebook and other platforms due to privacy concerns. Citron stated that the company was still in talks with several potential buyers, including all major gaming console manufacturers. From this, the company secured an additional $500 million in further investments in September 2021. In September 2021, Google sent cease and desist notices to the developers of two of the most popular music bots used on Discord, Groovy and Rythm, which were used on an estimated 36 million servers in total. These bots allowed users to request and play songs in a voice channel, taking the songs from YouTube ad-free. Two weeks later, Discord partnered with YouTube to test a "Watch Together" feature, which allows Discord users to watch YouTube videos together. Citron posted mockup images of Discord built around the proposed Web3 principles, with integrated cryptocurrency and non-fungible token support, in November 2021, leading to criticism from its user base. Citron later stated that "We [...] want to clarify we have no plans to ship it at this time." The CNIL fined Discord €800,000 in November 2022 for being in violation of the European Union's General Data Protection Regulation (GDPR).
The violations found by CNIL were that the application would continue to run in the background after it was closed and would not disconnect the user from a voice chat, and that it allowed users to create passwords consisting of only six characters. In early 2023, Discord was used to publish classified United States documents in one of the most significant intelligence leaks in recent history. The documents, distributed on a Minecraft Discord server as photos, detailed the state of the Russo-Ukrainian War, surveillance of allied and adversarial nations, and indicated cracks in alliances with nations aligned with the United States. In August 2023, Discord cut 4% of its staff, laying off 40 employees as part of a restructuring effort. On December 5, 2023, Discord revamped its mobile app for iOS and Android devices, adding new features such as dark mode for OLED screens, voice messages, and new icons. After a five-fold increase in employees between 2020 and 2024, the company laid off 17%, or 170 employees, in January 2024. On April Fools' Day 2024, Discord accidentally broke the record for the most-viewed YouTube video in 24 hours, the result of the Discord client playing the announcement video on loop within the app itself. However, more than 1.3 billion views were removed two days later after YouTube corrected the view count, and no records were ultimately broken by the Discord Loot Boxes video. Features Discord is centered around managing communities. Communication tools include voice and video calls, persistent chat rooms, and integrations with other gamer-focused services, along with the general ability to send direct messages and create personal groups. Servers Discord communities are organized into discrete collections of channels called servers. Although they are referred to as servers on the front end, they are called "guilds" in the developer documentation, to distinguish them from actual servers. Users can create servers for free, manage their public visibility, and create voice channels, text channels, and categories to sort the channels into. Most servers have a limit of 250,000 members, but this limit can be raised if the server owner contacts Discord. Users can also create roles and assign them to server members. Roles can, among other things, determine which channels users have access to, change users' colors, and designate a server's moderation team. Previously, the largest known Discord server was Snowsgiving 2021, an official Discord-controlled server made for the 2021 winter holiday season, which reached 1 million members. In 2023, the server for Midjourney reached over 15 million members, making it the largest server on Discord. Starting in October 2017, Discord has allowed game developers and publishers to verify their servers. Verified servers, like verified accounts on social media sites, have badges to mark them as official communities. A verified server is moderated by its developers' or publishers' own moderation team. Verification was later extended in February 2018 to include esports teams and musical artists. By the end of 2017, about 450 servers were verified. In 2023, Discord paused its verification program while it performed maintenance; the program has not been reopened. Channel types Channels may be used either for voice chat and streaming or for instant messaging and file sharing, or both.
Discord launched Stage Channels in May 2021, a feature similar to Clubhouse that allows for live, moderated audio channels for talks, discussions, and other uses, which can further be gated to only invited or ticketed users. Initially, users could search for open Stage Channels relevant to their interests through a Stage Discovery tool, which was discontinued in October 2021. In August 2021, Discord launched Threads, which are temporary text channels that can be set to automatically disappear. This is meant to help foster more communication within servers. Forum Channels, which allow for longer and separate conversations, were introduced to the platform in September 2022. These channels bring an Internet forum experience to Discord. Discord launched Media Channels in June 2023. Media Channels are restricted to videos and images only. User profiles Users register for Discord with an email address and must create a username. Until mid-2023, to allow multiple users to use the same username, each user was assigned a four-digit number called a "discriminator" (colloquially a "Discord tag"), prefixed with "#", which was added to the end of their username. Users who subscribed to Discord Nitro had the ability to change this tag to any four-digit number. This system was ultimately changed to a handle-based system in May 2023, removing the discriminator from usernames. This new system mandated a change of username. Users selected their new usernames in priority order based on how early they registered for Discord, Nitro status, and ownership of partner and verified servers. Users criticized the impersonation risk that could arise if their previous username were claimed by another user. In June 2021, Discord added a feature that allows the user to add an "about me" section to their profile, as well as a custom colored banner at the top of their profile. Subscribers to Discord Nitro have the added ability to upload static or animated images as their banners instead of solid colors. Video calls and streaming Video calling and screen sharing were added in October 2017, allowing users to create private video calls with up to 10 users, later increased to 50 due to the increased popularity of video calling during the COVID-19 pandemic. In August 2019, this was expanded with live streaming channels in servers. A user can share their entire screen, or a specific application, and others in that channel can choose to watch the stream. While these features somewhat mimic the livestreaming capabilities of platforms like Twitch, the company does not plan to compete with these services, as these features were made for small groups. Digital distribution In August 2018, Discord launched a games storefront beta, allowing users to purchase a curated set of games through the service. This included a "First on Discord" featured set of games whose developers attested to Discord's help in getting launched, giving these games 90 days of exclusivity on the Discord marketplace. Discord Nitro subscribers would also gain access to a rotating set of games as part of their subscription, with the price of Nitro being bumped from $4.99 to $9.99 a month. A cheaper service called "Nitro Classic" was also released that has the same perks as Nitro but does not include free games. Following the launch of the Epic Games Store, which challenged Valve's Steam storefront by only taking a 12% cut of game revenue, Discord announced in December 2018 that it would reduce its own revenue cut to 10%.
To further support developers, starting in March 2019 Discord gave developers and publishers that ran their own servers the ability to offer their games through a dedicated store channel on their server, with Discord managing the payment processing and distribution. This can be used, for example, to give select users access to alpha and beta builds of a game in progress, as an early access alternative. Also in March 2019, Discord removed the digital storefront, instead choosing to focus on the Nitro subscription and having direct sales be done through developers' own servers. In September 2019, Discord announced that it was ending its free game service in October 2019, as it found too few people were playing the games offered. Developer tools and bots In December 2016, the company introduced its GameBridge API, which allows game developers to directly integrate with Discord within games. In December 2017, Discord added a software development kit that allows developers to integrate their games with the service, called "rich presence". This integration is commonly used to allow players to join each other's games through Discord or to display information about a player's game progression in their Discord profile. Bots are community-made tools to automate tasks. When installed by server owners, they may aid in moderation, host mini games, and perform a myriad of other automated tasks. There are around 430,000 bots active in an estimated 30% of all servers. Discord provides official bot APIs which allow custom interface elements such as dropdowns and buttons. In spring 2022, Discord released an official "app directory" where server owners can add bots to their servers from within Discord. The Verge described bots as an "important part of Discord". Unofficial extensions Although Discord disallows modifications, many unofficial extensions have been created. BetterDiscord, for example, is an open-source desktop modification that allows various plugins to be installed. These plugins augment existing functionality or add features that are not offered by Discord. One plugin, for example, allows its users to apply custom skins for free; another allows increasing the volume of a voice-call participant beyond the default. BetterDiscord has generally been well received, though PC Gamer has said it is prone to crashes and bugs. According to BetterDiscord's developers, users of the modification are not at risk of being sanctioned by Discord so long as they do not use additional modifications that violate Discord's terms of service. Infrastructure Discord is persistent group chat software based on an eventually consistent database architecture. Discord was originally built on MongoDB. The infrastructure was migrated to Apache Cassandra when the platform reached a billion messages, then later migrated to ScyllaDB when it reached a trillion messages. The desktop, web, and iOS apps use React, with React Native used on iOS/iPadOS. The Android app was originally written natively, but now shares code with the iOS app. The desktop client is built on the Electron software framework using web technologies, which allows it to be multi-platform and operate as an installed application on personal computers. The software is supported by Google Cloud Platform's infrastructure in more than thirty data centres located in thirteen regions to keep latency with clients low. In July 2020, Discord added noise suppression into its mobile app using the Krisp audio-filtering technology.
Discord's backend is written mostly in Elixir and Python, as well as Rust, Go, and C++. Monetization While the software itself comes at no cost, the developers investigated ways to monetize it, with potential options including paid customization options such as emoji or stickers. In January 2017, the first paid subscription and features were released with "Discord Nitro Classic" (originally released as "Discord Nitro"). For a monthly subscription fee of $4.99, users can get an animated avatar, use custom and/or animated emojis across all servers (non-Nitro users can only use custom emoji on the server they were added to), an increased maximum file size on file uploads (from 8 MB to 50 MB), the ability to screen share in higher resolutions, the ability to choose their own discriminator (from #0001 to #9999), and a unique profile badge. In October 2018, "Discord Nitro" was renamed "Discord Nitro Classic" with the introduction of the new "Discord Nitro", which cost $9.99 and included access to free games through the Discord game store. Monthly subscribers of Discord Nitro Classic at the time of the introduction of the Discord games store were gifted Discord Nitro lasting until January 1, 2020, and yearly subscribers of Discord Nitro Classic were gifted Discord Nitro until January 1, 2021. In October 2019, Discord ended its free game service with Nitro. In June 2019, Discord introduced Server Boosts, a way to benefit specific servers by purchasing a "boost" for them, with enough boosts granting various benefits for the users in that particular server. Each boost is a subscription costing $4.99 a month. For example, if a server maintains 2 boosts, it unlocks perks such as a higher maximum audio quality in voice channels and the ability to use an animated server icon. Users with Discord Nitro or Discord Nitro Classic have a 30% discount on server boost costs, with Nitro subscribers specifically also getting 2 free server boosts. Discord began testing digital stickers on its platform in October 2020 for users in Canada. Most stickers cost between $1.50 and $2.25 and are part of Discord's monetization strategy. Discord Nitro subscribers received a free "What's Up Wumpus" sticker pack focused on Discord's mascot, Wumpus. In May 2023, Discord made most stickers free to all users. In October 2022, the "Discord Nitro Classic" subscription tier was replaced by a $2.99 "Discord Nitro Basic", which features a subset of features from the $9.99 "Nitro" tier. Discord added Avatar Decorations and Profile Themes in October 2023. Users can purchase animated decorations for their profiles from Discord's Shop. Another way Discord makes money is through a 10% commission taken as the distribution fee on all games sold through game developers' verified servers. Reception By January 2016, Hammer & Chisel reported Discord had been used by 3 million people, with growth of 1 million per month, reaching 11 million users in July that year. By December 2016, the company reported it had 25 million users worldwide. By the end of 2017, the service had drawn nearly 90 million users, with roughly 1.5 million new users each week. With the service's third anniversary, Discord stated that it had 130 million unique registered users. The company observed that while the bulk of its servers are used for gaming-related purposes, a small number have been created by users for non-gaming activities, like stock trading, fantasy football, and other shared interest groups.
In May 2016, one year after the software's release, Tom Marks, writing for PC Gamer, described Discord as the best VoIP service available. Lifehacker has praised Discord's interface, ease of use, and platform compatibility. In 2021, Discord had at least 350 million registered users across its web and mobile platforms. It was used by 56 million people every month, sending a total of 25 billion messages per month. By June 2020, the company reported it had 100 million active users each month; the service has since grown to over 227 million monthly active users. Criticisms and controversies Cyberbullying and moderation Discord has had problems with hostile behavior and abuse within chats, with some communities of chat servers being "raided" (a large number of users joining a server) by other communities. This includes flooding chats with controversial topics related to race, religion, politics, and pornography. Discord has stated that it has plans to implement changes that would "rid the platform of the issue". Discord has a Trust and Safety department that responds to user reports; however, because Discord is centered around private communities, its effectiveness is difficult to research. A study published in New Media & Society criticized Discord's offloading of server search functions to unmoderated third-party apps, saying that it makes it easier for hateful communities to find new audiences. In January 2018, The Daily Beast reported that it found several Discord servers that were specifically engaged in distributing revenge porn and facilitating real-world harassment of the victims of these images and videos. Such actions are against Discord's terms of service, and Discord shut down the servers and banned the users identified from them. Data privacy In September 2024, the Federal Trade Commission released a report summarizing the responses of nine companies (including Discord) to orders issued by the agency under Section 6(b) of the Federal Trade Commission Act of 1914, which required them to provide information about their collection and use of user and non-user data, including data on children and teenagers. The report found that the companies' user and non-user data practices left individuals vulnerable to identity theft, stalking, unlawful discrimination, emotional distress and mental health issues, social stigma, and reputational harm. Use by extremist users and groups Discord gained popularity with the alt-right due to the pseudonymity and privacy offered by Discord's service. Analyst Keegan Hankes from the Southern Poverty Law Center stated: "It's pretty unavoidable to be a leader in this [alt-right] movement without participating in Discord." Citron stated that servers found to be engaged in illegal activities or violations of the terms of service would be shut down, but would not disclose any examples. Following the violent events that occurred during the Unite the Right rally in Charlottesville, Virginia, on August 12, 2017, it was found that Discord had been used to plan and organize the white nationalist rally. This included participation by Richard Spencer and Andrew Anglin, high-level figures in the movement. Discord responded by closing servers that supported the alt-right and far-right, and banning users who had participated. Discord's executives condemned "white supremacy" and "neo-Nazism", and said that these groups "are not welcome on Discord". Discord has worked with the Southern Poverty Law Center to identify hateful groups using Discord and ban those groups from the service. 
Since then, several neo-Nazi and alt-right servers have been shut down by Discord, including those operated by the neo-Nazi terrorist group Atomwaffen Division, the Nordic Resistance Movement, Iron March, and European Domas. In March 2019, the media collective Unicorn Riot published the contents of a Discord server used by several members of the white nationalist group Identity Evropa who were also members of the United States Armed Forces. Unicorn Riot has since published member lists and contents of several dozen servers connected to alt-right, white supremacist, and other such movements. In January 2021, two days after the U.S. Capitol attack, Discord deleted the pro-Donald Trump server The Donald, "due to its overt connection to an online forum used to incite violence, plan an armed insurrection in the United States, and spread harmful misinformation related to 2020 U.S. election fraud", while stating that there was no evidence the server was used to organize the attack on the Capitol building. The server had been used by former members of the r/The_Donald subreddit, which Reddit had deleted several months earlier. In January 2022, the British anti-disinformation organization Logically reported that Holocaust denial, neo-Nazism and other forms of hate speech were flourishing on the Discord and Telegram groups of the German website Disclose.tv. In May 2022, Payton S. Gendron was named as the suspect in a racially motivated mass shooting in Buffalo, New York, that killed ten people. It was reported that Gendron had used a private Discord server as a diary for weeks as he prepared for the attack. Approximately 30 minutes before the shooting, Gendron invited several users to view the server and read the messages, which were later published on 4chan. Discord told the press that the server was deleted by moderators shortly after the shooting. In the wake of the shooting, the New York state attorney general's office announced an investigation of Discord, among other online services, to determine whether they had taken sufficient steps to prevent such content from being broadcast on their services; Discord said it would comply. Child grooming and safety CNN has reported that Discord has had problems with sexual exploitation of children and young teenagers on its platform. In July 2018, Discord updated its terms of service to ban drawn pornography with underage subjects. Some Discord users subsequently criticized the moderation staff for selectively allowing "cub" content, or underage pornographic furry artwork, under the same guidelines. The staff held that "cub porn" was separate from lolicon and shotacon, being "allowable as long as it is tagged properly". After numerous complaints from the community, Discord amended its community guidelines in February 2019 to include "non-humanoid animals and mythological creatures as long as they appear to be underage" in its list of disallowed categories, in addition to announcing periodic transparency reports to better communicate with users. In June 2023, NBC News reported that it had identified 35 cases of adults being charged with "kidnapping, grooming, or sexual assault" that allegedly involved the platform, and an additional 165 cases of prosecution for the sharing of child sexual exploitation material on the platform. 
In March 2024, a joint investigation by The Washington Post, Wired, Der Spiegel and Recorder outlined the extensive child grooming, sexual abuse (including sextortion) and murder conducted on Discord by a group known as 764. The investigation linked 764 and its associated groups and servers to cases in Germany, the United States and Romania, dating back to April 2021. Discord's representative stated that the service had filed hundreds of reports, in addition to removing over 34,000 accounts associated with the group. Bans On January 27, 2021, Discord banned the r/WallStreetBets server during the GameStop short squeeze, citing "hateful and discriminatory content", a decision users found contentious. One day later, Discord allowed a new server to be created and began assisting with its moderation. Censorship In September 2024, according to the Russian newspaper Kommersant, the Russian regulator Roskomnadzor was planning to block the platform. Roskomnadzor demanded that the platform remove 947 posts containing illegal content and imposed a fine of 3.5 million roubles (US$37,493). On 8 October 2024, Russia officially blocked Discord. Several hours later, Turkey also blocked Discord, following a decision by the Ankara 1st Criminal Court of Peace. Discord is blocked by the Great Firewall in China, and Chinese police have been reported to find and interrogate people who make sensitive comments on the platform.
Technology
Social network and blogging
null
47490358
https://en.wikipedia.org/wiki/Solar%20phenomena
Solar phenomena
Solar phenomena are natural phenomena which occur within the atmosphere of the Sun. They take many forms, including solar wind, radio wave flux, solar flares, coronal mass ejections, coronal heating and sunspots. These phenomena are believed to be generated by a helical dynamo, located near the center of the Sun's mass, which generates strong magnetic fields, as well as a chaotic dynamo, located near the surface, which generates smaller magnetic field fluctuations. All solar fluctuations together are referred to as solar variation, producing space weather within the Sun's gravitational field. Solar activity and related events have been recorded since the eighth century BCE. Throughout history, observation technology and methodology advanced, and in the 20th century, interest in astrophysics surged and many solar telescopes were constructed. The 1931 invention of the coronagraph allowed the corona to be studied in full daylight. Sun The Sun is a star located at the center of the Solar System. It is almost perfectly spherical and consists of hot plasma and magnetic fields. It has a diameter of about 1.39 million kilometers, around 109 times that of Earth, and its mass (1.989×10³⁰ kilograms, approximately 330,000 times that of Earth) accounts for some 99.86% of the total mass of the Solar System. Chemically, about three quarters of the Sun's mass consists of hydrogen, while the rest is mostly helium. The remaining 1.69% (equal to 5,600 times the mass of Earth) consists of heavier elements, including oxygen, carbon, neon and iron. The Sun formed about 4.567 billion years ago from the gravitational collapse of a region within a large molecular cloud. Most of the matter gathered in the center, while the rest flattened into an orbiting disk that became the balance of the Solar System. The central mass became increasingly hot and dense, eventually initiating thermonuclear fusion in its core. The Sun is a G-type main-sequence star (G2V) based on spectral class, and it is informally designated as a yellow dwarf because its visible radiation is most intense in the yellow-green portion of the spectrum. It is actually white, but from the Earth's surface, it appears yellow because of atmospheric scattering of blue light. In the spectral class label, G2 indicates its surface temperature of approximately 5,770 K (in 2014 the IAU adopted the nominal value 5,772 K), and V indicates that the Sun, like most stars, is a main-sequence star, and thus generates its energy via fusing hydrogen into helium. In its core, the Sun fuses about 620 million metric tons of hydrogen each second. The Earth's mean distance from the Sun is approximately 150 million kilometers (one astronomical unit), though the distance varies as the Earth moves from perihelion in January to aphelion in July. At this average distance, light travels from the Sun to Earth in about 8 minutes, 19 seconds. The energy of this sunlight supports almost all life on Earth by photosynthesis, and drives Earth's climate and weather. As recently as the 19th century, scientists had little knowledge of the Sun's physical composition and source of energy. This understanding is still developing; a number of present-day anomalies in the Sun's behavior remain unexplained. Solar cycle Many solar phenomena change periodically over an average interval of about 11 years. This solar cycle affects solar irradiation and influences space weather, terrestrial weather, and climate. 
The solar cycle also modulates the flux of short-wavelength solar radiation, from ultraviolet to X-ray, and influences the frequency of solar flares, coronal mass ejections and other solar eruptive phenomena. Types Coronal mass ejections A coronal mass ejection (CME) is a massive burst of solar wind and magnetic fields rising above the solar corona. Near solar maxima, the Sun produces about three CMEs every day, whereas near solar minima there is about one every five days. CMEs, along with solar flares of other origin, can disrupt radio transmissions and damage satellites and electrical transmission line facilities, resulting in potentially massive and long-lasting power outages. Coronal mass ejections often appear with other forms of solar activity, most notably solar flares, but no causal relationship has been established. Most weak flares do not have CMEs; most powerful flares do. Most ejections originate from active regions on the Sun's surface, such as sunspot groupings associated with frequent flares. Other forms of solar activity frequently associated with coronal mass ejections are eruptive prominences, coronal dimming, coronal waves and Moreton waves, also called solar tsunamis. Magnetic reconnection is responsible for CMEs and solar flares. Magnetic reconnection is the name given to the rearrangement of magnetic field lines when two oppositely directed magnetic fields are brought together. This rearrangement is accompanied by a sudden release of energy stored in the original oppositely directed fields. When a CME impacts the Earth's magnetosphere, it temporarily deforms the Earth's magnetic field, changing the direction of compass needles and inducing large electrical ground currents in Earth itself; this is called a geomagnetic storm, and it is a global phenomenon. CME impacts can induce magnetic reconnection in Earth's magnetotail (the midnight side of the magnetosphere); this launches protons and electrons downward toward Earth's atmosphere, where they form the aurora. Flares A solar flare is a sudden flash of brightness observed over the Sun's surface or the solar limb, which is interpreted as an energy release of up to 6 × 10²⁵ joules (about a sixth of the Sun's total energy output each second, or 160 billion megatons of TNT equivalent, over 25,000 times more energy than was released by the impact of Comet Shoemaker–Levy 9 with Jupiter). It may be followed by a coronal mass ejection. The flare ejects clouds of electrons, ions and atoms through the corona into space. These clouds typically reach Earth a day or two after the event. Similar phenomena in other stars are known as stellar flares. Solar flares strongly influence space weather near the Earth. They can produce streams of highly energetic particles in the solar wind, known as a solar proton event. These particles can impact the Earth's magnetosphere in the form of a geomagnetic storm and present radiation hazards to spacecraft and astronauts. Solar proton events A solar proton event (SPE), or "proton storm", occurs when particles (mostly protons) emitted by the Sun become accelerated either close to the Sun during a flare or in interplanetary space by CME shocks. The events can include other nuclei such as helium ions and HZE ions. These particles cause multiple effects. They can penetrate the Earth's magnetic field and cause ionization in the ionosphere. The effect is similar to auroral events, except that protons rather than electrons are involved. 
Energetic protons are a significant radiation hazard to spacecraft and astronauts. Energetic protons can reach Earth within 30 minutes of a major flare's peak. Prominences A prominence is a large, bright, gaseous feature extending outward from the Sun's surface, often in the shape of a loop. Prominences are anchored to the Sun's surface in the photosphere and extend outwards into the corona. While the corona consists of high-temperature plasma, which does not emit much visible light, prominences contain much cooler plasma, similar in composition to that of the chromosphere. Prominence plasma is typically a hundred times cooler and denser than coronal plasma. A prominence forms over timescales of about a day and may persist for weeks or months. Some prominences break apart and form CMEs. A typical prominence extends over many thousands of kilometers; the largest on record was estimated at over 800,000 km long, roughly the solar radius. When a prominence is viewed against the Sun instead of space, it appears darker than the background. This formation is called a solar filament. It is possible for a projection to be both a filament and a prominence. Some prominences are so powerful that they eject matter at speeds ranging from 600 km/s to more than 1000 km/s. Other prominences form huge loops or arching columns of glowing gases over sunspots that can reach heights of hundreds of thousands of kilometers. Sunspots Sunspots are relatively dark areas on the Sun's radiating 'surface' (photosphere) where intense magnetic activity inhibits convection and cools the photosphere. Faculae are slightly brighter areas that form around sunspot groups as the flow of energy to the photosphere is re-established and both the normal flow and the sunspot-blocked energy elevate the radiating 'surface' temperature. Scientists began speculating on possible relationships between sunspots and solar luminosity in the 17th century. Luminosity decreases caused by sunspots (generally < −0.3%) are correlated with increases (generally < +0.05%) caused both by faculae associated with active regions and by the magnetically active 'bright network'. The net effect during periods of enhanced solar magnetic activity is increased radiant solar output because faculae are larger and persist longer than sunspots. Conversely, periods of lower solar magnetic activity and fewer sunspots (such as the Maunder Minimum) may correlate with times of lower irradiance. Sunspot activity has been measured using the Wolf number for about 300 years. This index (also known as the Zürich number) uses both the number of sunspots and the number of sunspot groups to compensate for measurement variations (see the formula below). A 2003 study found that sunspots had been more frequent since the 1940s than in the previous 1150 years. Sunspots usually appear as pairs with opposite magnetic polarity. Detailed observations reveal patterns in yearly minima and maxima and in relative location. As each cycle proceeds, the latitude of spots declines from 30–45° to around 7° after the solar maximum. This latitudinal change follows Spörer's law. For a sunspot to be visible to the human eye it must be about 50,000 km in diameter, covering about 700 millionths of the visible area. Over recent cycles, approximately 100 sunspots or compact sunspot groups are visible from Earth. Sunspots expand and contract as they move about and can travel at a few hundred meters per second when they first appear. 
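For reference, the Wolf (Zürich) number mentioned above is conventionally computed from the two counts it combines:

$$R = k\,(10g + s)$$

where $g$ is the number of sunspot groups, $s$ is the number of individual spots, and $k$ is a correction factor that compensates for differences between observers and instruments. Weighting groups ten times more heavily than individual spots is what makes the index robust against the difficulty of resolving every small spot.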
Wind The solar wind is a stream of plasma released from the Sun's upper atmosphere. It consists mostly of electrons and protons with energies usually between 1.5 and 10 keV. The stream of particles varies in density, temperature and speed over time and over solar longitude. These particles can escape the Sun's gravity because of their high energy. The solar wind is divided into the slow solar wind and the fast solar wind. The slow solar wind has a velocity of about 400 km/s, a temperature of about 2×10⁶ K and a composition that is a close match to the corona's. The fast solar wind has a typical velocity of 750 km/s, a temperature of about 8×10⁵ K and a composition that nearly matches the photosphere's. The slow solar wind is twice as dense and more variable in intensity than the fast solar wind. The slow wind also has a more complex structure, with turbulent regions and large-scale organization. Both the fast and slow solar winds can be interrupted by large, fast-moving bursts of plasma called interplanetary CMEs, or ICMEs. They cause shock waves in the thin plasma of the heliosphere, generating electromagnetic waves and accelerating particles (mostly protons and electrons) to form showers of ionizing radiation that precede the CME. Effects Space weather Space weather is the environmental condition within the Solar System, including the solar wind. It is studied especially in the region surrounding the Earth, from the magnetosphere down to the ionosphere and thermosphere. Space weather is distinct from the terrestrial weather of the troposphere and stratosphere. The term was not used until the 1990s; prior to that, such phenomena were considered part of physics or aeronomy. Solar storms Solar storms are caused by disturbances on the Sun, most often coronal clouds associated with solar flare CMEs emanating from active sunspot regions, or less often from coronal holes. The Sun can produce intense geomagnetic and proton storms capable of causing power outages, disruption or blackouts of communications (including GPS systems) and temporary or permanent disabling of satellites and other spaceborne technology. Solar storms may be hazardous to high-latitude, high-altitude aviation and to human spaceflight. Geomagnetic storms cause aurorae. The most significant known solar storm occurred in September 1859 and is known as the Carrington event. Aurora An aurora is a natural light display in the sky, especially in the high-latitude (Arctic and Antarctic) regions, in the form of a large circle around the pole. It is caused by the collision of solar wind and charged magnetospheric particles with the high-altitude atmosphere (thermosphere). Most auroras occur in a band known as the auroral zone, which is typically 3° to 6° wide in latitude and observed at 10° to 20° from the geomagnetic poles at all longitudes, but often most vividly around the spring and autumn equinoxes. The charged particles and solar wind are directed into the atmosphere by the Earth's magnetosphere. A geomagnetic storm expands the auroral zone to lower latitudes. Auroras are associated with the solar wind. The Earth's magnetic field traps its particles, many of which travel toward the poles where they are accelerated toward Earth. Collisions between these ions and the atmosphere release energy in the form of auroras appearing in large circles around the poles. Auroras are more frequent and brighter during the intense phase of the solar cycle, when CMEs increase the intensity of the solar wind. 
Geomagnetic storm A geomagnetic storm is a temporary disturbance of the Earth's magnetosphere caused by a solar wind shock wave and/or a cloud of magnetic field that interacts with the Earth's magnetic field. The increase in solar wind pressure compresses the magnetosphere, and the solar wind's magnetic field interacts with the Earth's magnetic field to transfer increased energy into the magnetosphere. Both interactions increase plasma movement through the magnetosphere (driven by increased electric fields) and increase the electric current in the magnetosphere and ionosphere. The disturbance in the interplanetary medium that drives a storm may be due to a CME or a high-speed stream (co-rotating interaction region, or CIR) of the solar wind originating from a region of weak magnetic field on the solar surface. The frequency of geomagnetic storms increases and decreases with the sunspot cycle. CME-driven storms are more common during the solar maximum of the solar cycle, while CIR-driven storms are more common during the solar minimum. Several space weather phenomena are associated with geomagnetic storms. These include solar energetic particle (SEP) events, geomagnetically induced currents (GIC), ionospheric disturbances that cause radio and radar scintillation, disruption of compass navigation and auroral displays at much lower latitudes than normal. A 1989 geomagnetic storm energized geomagnetically induced ground currents that disrupted electric power distribution throughout most of the province of Quebec and caused aurorae as far south as Texas. Sudden ionospheric disturbance A sudden ionospheric disturbance (SID) is an abnormally high ionization/plasma density in the D region of the ionosphere caused by a solar flare. The SID results in a sudden increase in radio-wave absorption that is most severe in the upper medium frequency (MF) and lower high frequency (HF) ranges, and as a result often interrupts or interferes with telecommunications systems. Geomagnetically induced currents Geomagnetically induced currents are a manifestation at ground level of space weather, which affect the normal operation of long electrical conductor systems. During space weather events, electric currents in the magnetosphere and ionosphere experience large variations, which also manifest in the Earth's magnetic field. These variations induce currents (GIC) in conductors on Earth. Electric transmission grids and buried pipelines are common examples of such conductor systems. GIC can cause problems such as increased corrosion of pipeline steel and damage to high-voltage power transformers. Carbon-14 The production of carbon-14 (radiocarbon: 14C) is related to solar activity. Carbon-14 is produced in the upper atmosphere when secondary neutrons generated by cosmic ray bombardment are captured by atmospheric nitrogen (14N), which then emits a proton, transforming into an unusual isotope of carbon with an atomic weight of 14 rather than the more common 12. Because galactic cosmic rays are partially excluded from the Solar System by the outward sweep of magnetic fields in the solar wind, increased solar activity reduces 14C production. Atmospheric 14C concentration is lower during solar maxima and higher during solar minima. By measuring the captured 14C in wood and counting tree rings, production of radiocarbon relative to recent wood can be measured and dated. A reconstruction of the past 10,000 years shows that 14C production was much higher during the mid-Holocene 7,000 years ago and decreased until 1,000 years ago. 
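In nuclear notation, the production of radiocarbon described above, and its eventual beta decay back to nitrogen, are:

$$n + {}^{14}_{7}\mathrm{N} \;\rightarrow\; {}^{14}_{6}\mathrm{C} + p, \qquad {}^{14}_{6}\mathrm{C} \;\rightarrow\; {}^{14}_{7}\mathrm{N} + e^{-} + \bar{\nu}_e \quad (t_{1/2} \approx 5{,}730\ \text{years})$$

The roughly 5,730-year half-life is what makes the tree-ring comparison described above possible on multi-millennial timescales.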
In addition to variations in solar activity, long-term trends in carbon-14 production are influenced by changes in the Earth's geomagnetic field and by changes in carbon cycling within the biosphere (particularly those associated with changes in the extent of vegetation between ice ages). Observation history Solar activity and related events have been regularly recorded since the time of the Babylonians. Early records described solar eclipses, the corona and sunspots. Soon after the invention of telescopes, in the early 1600s, astronomers began observing the Sun. Thomas Harriot was the first to observe sunspots, in 1610. Observers confirmed the less frequent sunspots and aurorae during the Maunder minimum. One of these observers was the renowned astronomer Johannes Hevelius, who recorded a number of sunspots from 1653 to 1679 in the early Maunder minimum, listed in the book Machina Coelestis (1679). Solar spectrometry began in 1817. Rudolf Wolf gathered sunspot observations as far back as the 1755–1766 cycle. He established a relative sunspot number formulation (the Wolf or Zürich sunspot number) that became the standard measure. Around 1852, Sabine, Wolf, Gautier and von Lamont independently found a link between the solar cycle and geomagnetic activity. On 2 April 1845, Fizeau and Foucault first photographed the Sun. Photography assisted in the study of solar prominences, granulation, spectroscopy and solar eclipses. On 1 September 1859, Richard C. Carrington and, separately, R. Hodgson first observed a solar flare. Carrington and Gustav Spörer discovered that the Sun exhibits differential rotation, and that the outer layer must be fluid. In 1907–08, George Ellery Hale uncovered the Sun's magnetic cycle and the magnetic nature of sunspots. Hale and his colleagues later deduced Hale's polarity laws describing its magnetic field. Bernard Lyot's 1931 invention of the coronagraph allowed the corona to be studied in full daylight. The Sun was, until the 1990s, the only star whose surface had been resolved. Other major achievements included understanding of: X-ray-emitting loops (e.g., by Yohkoh) Corona and solar wind (e.g., by SoHO) Variance of solar brightness with level of activity, and verification of this effect in other solar-type stars (e.g., by ACRIM) The intense fibril state of the magnetic fields at the visible surface of a star like the Sun (e.g., by Hinode) The presence of magnetic fields of 0.5×10⁵ to 1×10⁵ gauss at the base of the convective zone, presumably in some fibril form, inferred from the dynamics of rising azimuthal flux bundles. Low-level electron neutrino emission from the Sun's core. In the later twentieth century, satellites began observing the Sun, providing many insights. For example, modulation of solar luminosity by magnetically active regions was confirmed by satellite measurements of total solar irradiance (TSI) by the ACRIM1 experiment on the Solar Maximum Mission (launched in 1980).
Physical sciences
Solar System
Astronomy
47491846
https://en.wikipedia.org/wiki/Solar%20activity%20and%20climate
Solar activity and climate
Patterns of solar irradiance and solar variation have been a main driver of climate change over the millions to billions of years of the geologic time scale. Evidence that this is the case comes from analysis on many timescales and from many sources, including: direct observations; composites from baskets of different proxy observations; and numerical climate models. On millennial timescales, paleoclimate indicators have been compared to cosmogenic isotope abundances, as the latter are a proxy for solar activity. These have also been used on century timescales; in addition, instrumental data are increasingly available (mainly telescopic observations of sunspots and thermometer measurements of air temperature). They show that, for example, the temperature fluctuations do not match the solar activity variations, and that the commonly invoked association of the Little Ice Age with the Maunder minimum is far too simplistic: although solar variations may have played a minor role, a much bigger factor is known to be Little Ice Age volcanism. In recent decades, observations of unprecedented accuracy, sensitivity and scope (of both solar activity and terrestrial climate) have become available from spacecraft and show unequivocally that recent global warming is not caused by changes in the Sun. Geologic time Earth formed around 4.54 billion years ago by accretion from the solar nebula. Volcanic outgassing probably created the primordial atmosphere, which contained almost no oxygen and would have been toxic to humans and most modern life. Much of the Earth was molten because of frequent collisions with other bodies, which led to extreme volcanism. Over time, the planet cooled and formed a solid crust, eventually allowing liquid water to exist on the surface. Three to four billion years ago the Sun emitted only 70% of its current power. Under the present atmospheric composition, this past solar luminosity would have been insufficient to prevent water from uniformly freezing. There is nonetheless evidence that liquid water was already present in the Hadean and Archean eons, leading to what is known as the faint young Sun paradox. Hypothesized solutions to this paradox include a vastly different atmosphere, with much higher concentrations of greenhouse gases than currently exist. Over the following approximately 4 billion years, the Sun's energy output increased and the composition of the Earth's atmosphere changed. The Great Oxygenation Event around 2.4 billion years ago was the most notable alteration of the atmosphere. Over the next five billion years, the Sun's ultimate death as it becomes a very bright red giant and then a very faint white dwarf will have dramatic effects on climate, with the red giant phase alone likely ending any life on Earth. Measurement Since 1978, solar irradiance has been directly measured by satellites with very good accuracy. These measurements indicate that the Sun's total solar irradiance fluctuates by ±0.1% over the ~11 years of the solar cycle, but that its average value has been stable since the measurements started in 1978. Solar irradiance before the 1970s is estimated using proxy variables, such as tree rings, the number of sunspots, and the abundances of cosmogenic isotopes such as 10Be, all of which are calibrated to the post-1978 direct measurements. Solar activity has been on a declining trend since the 1960s, as indicated by solar cycles 19–24, in which the maximum numbers of sunspots were 201, 111, 165, 159, 121 and 82, respectively. 
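As a quick illustration of that declining trend, the cycle maxima quoted above can be fitted with a least-squares line; this sketch uses only the numbers given in the text and assumes NumPy is available:

```python
# Fit a straight line to the peak sunspot numbers of solar cycles 19-24
# (values quoted in the text) to quantify the decline in solar activity.
import numpy as np

cycles = np.array([19, 20, 21, 22, 23, 24])
maxima = np.array([201, 111, 165, 159, 121, 82])

slope, intercept = np.polyfit(cycles, maxima, 1)  # least-squares linear fit
print(f"Trend: {slope:.1f} sunspots per cycle")   # about -16 per cycle
```

The fit is purely descriptive, of course; six points say nothing about the underlying solar dynamo.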
In the three decades following 1978, the combination of solar and volcanic activity is estimated to have had a slight cooling influence. A 2010 study found that the composition of solar radiation might have changed slightly, with an increase in ultraviolet radiation and a decrease in other wavelengths. Modern era In the modern era, the Sun has operated within a sufficiently narrow band that climate has been little affected. Models indicate that the combination of solar variations and volcanic activity can explain periods of relative warmth and cold between A.D. 1000 and 1900. The Holocene Numerous paleoenvironmental reconstructions have looked for relationships between solar variability and climate. Arctic paleoclimate, in particular, has linked total solar irradiance variations and climate variability. A 2001 paper identified a ~1500-year solar cycle that was a significant influence on North Atlantic climate throughout the Holocene. Little Ice Age One historical long-term correlation between solar activity and climate change is the 1645–1715 Maunder minimum, a period of little or no sunspot activity which partially overlapped the "Little Ice Age", during which cold weather prevailed in Europe. The Little Ice Age encompassed roughly the 16th to the 19th centuries. Whether the low solar activity or other factors caused the cooling is debated. The Spörer Minimum between 1460 and 1550 has also been matched to a significant cooling period. A 2012 paper instead linked the Little Ice Age to volcanism, through an "unusual 50-year-long episode with four large sulfur-rich explosive eruptions," and claimed "large changes in solar irradiance are not required" to explain the phenomenon. A 2010 paper suggested that a new 90-year period of low solar activity would reduce global average temperatures by about 0.3 °C, which would be far from enough to offset the increased forcing from greenhouse gases. Fossil fuel era The link between recent solar activity and climate has been quantified; it is not a major driver of the warming that has occurred since early in the twentieth century. Human-induced forcings are needed to reproduce the late-20th-century warming. Some studies associate solar-cycle-driven irradiation increases with part of twentieth-century warming. Three mechanisms are proposed by which solar activity affects climate: Solar irradiance changes directly affecting the climate ("radiative forcing"). This is generally considered to be a minor effect, as the measured amplitudes of the variations are too small to have a significant effect absent some amplification process. Variations in the ultraviolet component. The UV component varies by more than the total, so if UV were for some (as yet unknown) reason to have a disproportionate effect, this might explain a larger solar signal. Effects mediated by changes in galactic cosmic rays (which are affected by the solar wind), such as changes in cloud cover. Climate models have been unable to reproduce the rapid warming observed in recent decades when they only consider variations in total solar irradiance and volcanic activity. Hegerl et al. (2007) concluded that greenhouse gas forcing had "very likely" caused most of the observed global warming since the mid-20th century. In making this conclusion, they allowed for the possibility that climate models had been underestimating the effect of solar forcing. Another line of evidence comes from looking at how temperatures at different levels in the Earth's atmosphere have changed. 
Models and observations show that greenhouse gases result in warming of the troposphere but cooling of the stratosphere. Depletion of the ozone layer by chemical refrigerants has also produced a stratospheric cooling effect. If the Sun were responsible for the observed warming, warming of the troposphere at the surface and warming at the top of the stratosphere would be expected, as the increased solar activity would replenish ozone and oxides of nitrogen. Lines of evidence The assessment of the solar activity/climate relationship involves multiple, independent lines of evidence. Sunspots Early research attempted to find a correlation between weather and sunspot activity, mostly without notable success. Later research has concentrated more on correlating solar activity with global temperature. Irradiation Accurate measurement of solar forcing is crucial to understanding possible solar impact on terrestrial climate. Accurate measurements only became available during the satellite era, starting in the late 1970s, and even that is open to some residual dispute: different teams find different values, due to different methods of cross-calibrating measurements taken by instruments with different spectral sensitivity. Scafetta and Willson argue for significant variations of solar luminosity between 1980 and 2000, but Lockwood and Fröhlich find that solar forcing declined after 1987. The 2001 Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report (TAR) concluded that the measured impact of recent solar variation is much smaller than the amplification effect due to greenhouse gases, but acknowledged that scientific understanding is poor with respect to solar variation. Estimates of long-term solar irradiance changes have decreased since the TAR. However, empirical results of detectable tropospheric changes have strengthened the evidence for solar forcing of climate change. The most likely mechanism is considered to be some combination of direct forcing by TSI changes and indirect effects of ultraviolet (UV) radiation on the stratosphere. Least certain are indirect effects induced by galactic cosmic rays. In 2002, Lean et al. stated that while "There is ... growing empirical evidence for the Sun's role in climate change on multiple time scales including the 11-year cycle", "changes in terrestrial proxies of solar activity (such as the 14C and 10Be cosmogenic isotopes and the aa geomagnetic index) can occur in the absence of long-term (i.e., secular) solar irradiance changes ... because the stochastic response increases with the cycle amplitude, not because there is an actual secular irradiance change." They conclude that because of this, "long-term climate change may appear to track the amplitude of the solar activity cycles," but that "Solar radiative forcing of climate is reduced by a factor of 5 when the background component is omitted from historical reconstructions of total solar irradiance ... This suggests that general circulation model (GCM) simulations of twentieth century warming may overestimate the role of solar irradiance variability." A 2006 review suggested that solar brightness had relatively little effect on global climate, with little likelihood of significant shifts in solar output over long periods of time. 
Lockwood and Fröhlich, 2007, found "considerable evidence for solar influence on the Earth's pre-industrial climate and the Sun may well have been a factor in post-industrial climate change in the first half of the last century", but that "over the past 20 years, all the trends in the Sun that could have had an influence on the Earth's climate have been in the opposite direction to that required to explain the observed rise in global mean temperatures." In a study that considered geomagnetic activity as a measure of known solar-terrestrial interaction, Love et al. found a statistically significant correlation between sunspots and geomagnetic activity, but not between global surface temperature and either sunspot number or geomagnetic activity. Benestad and Schmidt concluded that "the most likely contribution from solar forcing a global warming is 7 ± 1% for the 20th century and is negligible for warming since 1980." This paper disagreed with Scafetta and West, who claimed that solar variability has a significant effect on climate forcing. Based on correlations between specific climate and solar forcing reconstructions, they argued that a "realistic climate scenario is the one described by a large preindustrial secular variability (e.g., the paleoclimate temperature reconstruction by Moberg et al.) with TSI experiencing low secular variability (as the one shown by Wang et al.)". Under this scenario, they claimed, the Sun might have contributed 50% of the observed global warming since 1900. Stott et al. estimated that the residual effects of the prolonged high solar activity during the last 30 years account for between 16% and 36% of warming from 1950 to 1999. Direct measurement and time series Neither direct measurements nor proxies of solar variation correlate well with Earth's global temperature, particularly in recent decades when both quantities are best known. The oppositely directed trends highlighted by Lockwood and Fröhlich in 2007, with global mean temperatures continuing to rise while solar activity fell, have continued and become even more pronounced since then. In 2007 the difference in the trends was apparent after about 1987, and that difference has grown and accelerated in subsequent years. The updated figure shows these variations, contrasting solar cycles 14 and 24, a century apart, which are quite similar in all solar activity measures (in fact cycle 24 is slightly less active than cycle 14 on average), yet the global mean air surface temperature is more than 1 degree Celsius higher for cycle 24 than for cycle 14, showing that the rise is not associated with solar activity. The total solar irradiance (TSI) panel shows the PMOD composite of observations together with a modelled variation, from the SATIRE-T2 model, of the effect of sunspots and faculae, with the addition of a quiet-Sun variation (due to sub-resolution photospheric features and any solar radius changes) derived from correlations with cosmic ray fluxes and cosmogenic isotopes. The finding that solar activity was approximately the same in cycles 14 and 24 applies to all solar outputs that have, in the past, been proposed as a potential cause of terrestrial climate change: total solar irradiance, cosmic ray fluxes, spectral UV irradiance, solar wind speed and/or density, and the heliospheric magnetic field, its distribution of orientations, and the consequent level of geomagnetic activity. Daytime/nighttime Global average diurnal temperature range has decreased. 
Daytime temperatures have not risen as fast as nighttime temperatures. This is the opposite of the expected warming if solar energy (falling primarily or wholly during daylight, depending on energy regime) were the principal means of forcing. It is, however, the expected pattern if greenhouse gases were preventing radiative escape, which is more prevalent at night. Hemisphere and latitude The Northern Hemisphere is warming faster than the Southern Hemisphere. This is the opposite of the expected pattern if the Sun, currently closer to the Earth during austral summer, were the principal climate forcing. In particular, the Southern Hemisphere, with more ocean area and less land area, has a lower albedo ("whiteness") and absorbs more light. The Northern Hemisphere, however, has higher population, industry and emissions. Furthermore, the Arctic region is warming faster than the Antarctic and faster than northern mid-latitudes and subtropics, despite polar regions receiving less sun than lower latitudes. Altitude Solar forcing should warm Earth's atmosphere roughly evenly by altitude, with some variation by wavelength/energy regime. However, the atmosphere is warming at lower altitudes while cooling higher up. This is the expected pattern if greenhouse gases drive temperature, as on Venus. Solar variation theory A 1994 study by the US National Research Council concluded that TSI variations were the most likely cause of significant climate change in the pre-industrial era, before significant human-generated carbon dioxide entered the atmosphere. Scafetta and West correlated solar proxy data and lower tropospheric temperature for the preindustrial era, before significant anthropogenic greenhouse forcing, suggesting that TSI variations may have contributed 50% of the warming observed between 1900 and 2000 (although they conclude "our estimates about the solar effect on climate might be overestimated and should be considered as an upper limit."). If interpreted as a detection rather than an upper limit, this would contrast with global climate models predicting that solar forcing of climate through direct radiative forcing makes an insignificant contribution. In 2000, Stott and others reported on the most comprehensive model simulations of 20th-century climate to that date. Their study looked at both "natural forcing agents" (solar variations and volcanic emissions) as well as "anthropogenic forcing" (greenhouse gases and sulphate aerosols). They found that "solar effects may have contributed significantly to the warming in the first half of the century although this result is dependent on the reconstruction of total solar irradiance that is used. In the latter half of the century, we find that anthropogenic increases in greenhouse gases are largely responsible for the observed warming, balanced by some cooling due to anthropogenic sulphate aerosols, with no evidence for significant solar effects." Stott's group found that combining these factors enabled them to closely simulate global temperature changes throughout the 20th century. They predicted that continued greenhouse gas emissions would cause additional future temperature increases "at a rate similar to that observed in recent decades". In addition, the study notes "uncertainties in historical forcing"; in other words, past natural forcing may still be having a delayed warming effect, most likely due to the oceans. 
Stott's 2003 work largely revised his assessment, and found a significant solar contribution to recent warming, although still smaller (between 16% and 36%) than that of greenhouse gases. A study in 2004 concluded that solar activity affects the climate, based on sunspot activity, yet plays only a small role in the current global warming. Correlations to solar cycle length In 1991, Friis-Christensen and Lassen claimed a strong correlation of the length of the solar cycle with northern-hemisphere temperature changes. They initially used sunspot and temperature measurements from 1861 to 1989 and later extended the period using four centuries of climate records. Their reported relationship appeared to account for nearly 80 per cent of measured temperature changes over this period. The mechanism behind these claimed correlations was a matter of speculation. In a 2003 paper, Laut identified problems with some of these correlation analyses. Damon and Laut claimed that "the apparent strong correlations displayed on these graphs have been obtained by incorrect handling of the physical data", and that the graphs were still widely referred to in the literature even though their misleading character had not yet been generally recognized. Damon and Laut stated that when the graphs are corrected for filtering errors, the sensational agreement with recent global warming, which had drawn worldwide attention, totally disappears. In 2000, Lassen and Thejll updated their 1991 research and concluded that while the solar cycle accounted for about half the temperature rise since 1900, it failed to explain a rise of 0.4 °C since 1980. Benestad's 2005 review found that the solar cycle did not follow Earth's global mean surface temperature. In 2022, Chatzistergos updated the cycle-length series with recent sunspot and solar plage data, extending them to more recent periods than previous studies, and also considering the various available time series. This is important because of the many updates and corrections that have been applied to the sunspot data over the last decade. He showed that cycle lengths significantly diverge from Earth's temperatures and concluded that the strong correlation reported by Friis-Christensen and Lassen was an artefact of their analysis, owing largely to their guessed timings of upcoming extrema, their arbitrary restriction of the analysis to a specific time period, and other arbitrary choices in their methodology. Weather Solar activity may also impact regional climates, such as those of the rivers Paraná and Po. Measurements from NASA's Solar Radiation and Climate Experiment show that solar UV output is more variable than total solar irradiance. Climate modelling suggests that low solar activity may result in, for example, colder winters in the US and northern Europe and milder winters in Canada and southern Europe, with little change in global averages. More broadly, links have been suggested between solar cycles, global climate and regional events such as El Niño. Hancock and Yarger found "statistically significant relationships between the double [~21-year] sunspot cycle and the 'January thaw' phenomenon along the East Coast and between the double sunspot cycle and 'drought' (June temperature and precipitation) in the Midwest." Cloud condensation Recent research at CERN's CLOUD facility examined links between cosmic rays and cloud condensation nuclei, demonstrating the effect of high-energy particulate radiation in nucleating aerosol particles that are precursors to cloud condensation nuclei. 
Kirkby (CLOUD team leader) said, "At the moment, it [the experiment] actually says nothing about a possible cosmic-ray effect on clouds and climate." After further investigation, the team concluded that "variations in cosmic ray intensity do not appreciably affect climate through nucleation." 1983–1994 global low cloud formation data from the International Satellite Cloud Climatology Project (ISCCP) was highly correlated with galactic cosmic ray (GCR) flux; subsequent to this period, the correlation broke down. Changes of 3–4% in cloudiness and concurrent changes in cloud top temperatures correlated to the 11 and 22-year solar (sunspot) cycles, with increased GCR levels during "antiparallel" cycles. Global average cloud cover change was measured at 1.5–2%. Several GCR and cloud cover studies found positive correlation at latitudes greater than 50° and negative correlation at lower latitudes. However, not all scientists accept this correlation as statistically significant, and some who do attribute it to other solar variability (e.g. UV or total irradiance variations) rather than directly to GCR changes. Difficulties in interpreting such correlations include the fact that many aspects of solar variability change at similar times, and some climate systems have delayed responses. Historical perspective Physicist and historian Spencer R. Weart in The Discovery of Global Warming (2003) wrote:
Physical sciences
Climate change
Earth science
42964218
https://en.wikipedia.org/wiki/Balinese%20cat
Balinese cat
The Balinese is a long-haired breed of domestic cat with Siamese-style point coloration and sapphire-blue eyes. The Balinese is also known as the purebred long-haired Siamese, since it originated as a natural mutation of that breed and hence is essentially the same cat but with a medium-length silky coat and a distinctively plumed tail. As is the case with their short-haired counterparts, a genetic distinction is made between traditional or "old-style" and modern body types. In the American standard, color variants derived from the Colorpoint Shorthair are further considered a separate breed, known as the Javanese. There is no particular connection between these cats and the Indonesian islands of Bali and Java, from which they derive their names. Like their Siamese ancestors, Balinese are sociable, vocal, playful, inquisitive, and intelligent. History and development The "Balinese" is not actually from Bali or any part of Indonesia. Its history begins with the first Siamese cats imported from Thailand to the U.S. and U.K. in the mid-1800s, some of which carried the recessive long-hair gene. The Balinese breed subsequently originated from deliberate breeding efforts based on this naturally expressed genetic trait. Initially, occasional long-haired kittens in Siamese litters were considered a fault in the bloodline and sold exclusively as pets. There are records of these cats as early as the 1900s; "Long-haired Siamese" were first registered as show cats with the American Cat Fanciers' Federation in 1928. In the mid-1950s, breeders in the US began serious efforts to develop the long-haired variant as a separate breed. Considering "Long-haired Siamese" too cumbersome a name, initial breeder Helen Smith dubbed the new breed "Balinese" as a reference to the grace of Balinese dancers. A breeder named Sylvia Holland (who was also an illustrator for Walt Disney Studios) worked to further establish the breed standard in the 1960s and 1970s. She recognized only cats showing the classic Siamese points in seal, chocolate, blue, and lilac as true Balinese, refusing to accept others because they had likely originated from crosses with other breeds. The American Cat Fanciers' Association had meanwhile officially classified Siamese with the newer red and cream as well as lynx (tabby) and tortoiseshell (or "tortie") patterned points as a separate breed, the Colorpoint Shorthair, and the long-haired cats derived from these colors and patterns were subsequently likewise classified separately as "Javanese", in keeping with the Indonesian island theme. Like their Siamese ancestors, the Balinese gradually split into two separate varieties based on physical type. The traditional Siamese (also called old-style or "apple-head", now being separately developed as the Thai) was the type in vogue when the Balinese was established, and hence was used in its development; these old-style Balinese still closely resemble those from the early breeding programs. As the parent short-haired Siamese gained in popularity, however, a trend developed in favor of a more extremely elongated, slender type with a distinctively wedge-shaped head. The modern (or "contemporary") Balinese was subsequently derived directly from this newer Siamese ideal. By the mid-1980s, the old-style Balinese, like their Siamese counterparts, had disappeared from most cat shows, kept going only by a few breeders who maintained the original Balinese type. The two varieties of Balinese thus have very few if any recent ancestors in common. 
Balinese-Javanese There was discussion in the Cat Fanciers' Association about merging the two breeds into one breed with two color divisions as early as 2006. The Javanese is a cross between the Siamese, Colorpoint Shorthair, and Balinese. In 2008, breeders in the Balinese Breed Council and Javanese Breed Council voted to combine the Balinese and Javanese as one breed, declaring the Javanese a color division of the Balinese. The Cat Fanciers' Association had been the only organization to treat the Javanese as a separate breed. This does not affect the colors or description of the Balinese, since the two remain separate divisions; they are simply placed under the Balinese. The Javanese retains the same colors as before, as does the Balinese (whose colors are described below). The move has brought the Cat Fanciers' Association more in line with the other worldwide registries. The Cat Fanciers' Association made this change because its two breed councils (Balinese and Javanese) overlapped by roughly 50 to 75%, with the same members breeding and exhibiting both types. It is hoped that combining the two breeds will increase Balinese registration in the Cat Fanciers' Association by encouraging new breeders and exhibitors of Balinese to come forward and present their cats, and that more Javanese of the appropriate coat length will be shown. Combining the breeds will also help decrease the number of cats needed to maintain a healthy breeding program. Description Appearance The two types of Balinese are still analogous to their Siamese counterparts. While both are relatively slender, graceful, fine-boned cats with long legs and tails, neat oval paws, almond-shaped eyes, and large pointed ears, the traditional type is overall the more substantial, with a broader head and sturdier body. The modern type features a noticeably more wedge-shaped head with a long tapering muzzle and longer, broader ears, atop a more slender and elongated body. Coat and color The coat is medium-length (although there can be considerable variance by individual) and should be soft and silky, without the fluffy undercoat typical of most long-haired breeds. The offspring of two Balinese will have a longer coat than that of a Balinese and a Siamese. In all cases, the tail should have a definite plume, or fringe, of longer hair. Eye color ranges from pale blue through sapphire/violet; the intensity of color can change slightly with age and diet. The paw pad color can be used to identify the color point in kittens: pink pads are found in chocolate and lilac points, while dark pads are found in blue and seal points. Like all cats with the point pattern, Balinese kittens are born pure cream or white and gradually develop visible points in colder parts of their body – the face, ears, paws and tail. Their color is identifiable by the time they are four weeks old. Some cats tend to darken with age, and generally, adult Balinese cats living in warm climates have lighter coats than those in cool climates. The Cat Fanciers' Federation and most other associations worldwide accept the Balinese breed in seal, blue, chocolate, lilac, red, and cream point, as well as tortoiseshell and lynx points in all of these colors. The Cat Fanciers' Association standard continues to accept the Balinese in only the classic seal, blue, chocolate, and lilac points, with all other possible colors and patterns classed separately as Javanese. 
Temperament Balinese share the traits of the short-haired Siamese and hence are notably social and playful cats, with an intense interest in the activity around them and a tendency to vocalize often and persistently, albeit at a lower volume. Like their short-haired Siamese counterparts, they are quite clingy and can be high-maintenance in terms of attention; they are often described as dog-like. They also tend to have high energy levels and are quite active and playful. They are reputed to have the highest intelligence of all the long-haired breeds, and are also reputed to be notably acrobatic and to enjoy intimate contact with their owners. Genetics The pointed pattern is a form of partial albinism, resulting from a mutation in tyrosinase, an enzyme involved in melanin production. The mutated enzyme is heat-sensitive; it fails to work at normal body temperatures but becomes active in cooler areas of the skin. This results in dark coloration in the coolest parts of the cat's body, including the extremities and the face, which is cooled by the passage of air through the sinuses. Though crossbreeding with other breeds took place to produce the less traditional Javanese colors, the cats are considered purebred if they are registered and have at least three to four generations of Siamese or Balinese lineage. Health As a pedigree breed, the Balinese was developed from a small gene pool of Siamese carrying the long-hair mutation; the smaller the gene pool, the greater the chance of inheriting unknown health disorders. One disease known to occur in the Balinese is progressive retinal atrophy (PRA), a degeneration of the retina of the eye that may lead to weak or impaired vision.
Biology and health sciences
Cats
Animals
53147698
https://en.wikipedia.org/wiki/Metascience
Metascience
Metascience (also known as meta-research) is the use of scientific methodology to study science itself. Metascience seeks to increase the quality of scientific research while reducing inefficiency. It is also known as "research on research" and "the science of science", as it uses research methods to study how research is done and to find where improvements can be made. Metascience concerns itself with all fields of research and has been described as "a bird's eye view of science". In the words of John Ioannidis, "Science is the best thing that has happened to human beings... but we can do it better." In 1966, an early meta-research paper examined the statistical methods of 295 papers published in ten high-profile medical journals. It found that "in almost 73% of the reports read... conclusions were drawn when the justification for these conclusions was invalid." Meta-research in the following decades found many methodological flaws, inefficiencies, and poor practices in research across numerous scientific fields. Many scientific studies could not be reproduced, particularly in medicine and the soft sciences. The term "replication crisis" was coined in the early 2010s as part of a growing awareness of the problem. Measures have been implemented to address the issues revealed by metascience. These measures include the pre-registration of scientific studies and clinical trials as well as the founding of organizations such as CONSORT and the EQUATOR Network that issue guidelines for methodology and reporting. There are continuing efforts to reduce the misuse of statistics, to eliminate perverse incentives from academia, to improve the peer review process, to systematically collect data about the scholarly publication system, to combat bias in scientific literature, and to increase the overall quality and efficiency of the scientific process. As such, metascience is a major part of the methods underlying the Open Science Movement. History In 1966, an early meta-research paper examined the statistical methods of 295 papers published in ten high-profile medical journals. It found that, "in almost 73% of the reports read ... conclusions were drawn when the justification for these conclusions was invalid." A paper in 1976 called for funding for meta-research: "Because the very nature of research on research, particularly if it is prospective, requires long periods of time, we recommend that independent, highly competent groups be established with ample, long term support to conduct and support retrospective and prospective research on the nature of scientific discovery". In 2005, John Ioannidis published a paper titled "Why Most Published Research Findings Are False", which argued that a majority of papers in the medical field produce conclusions that are wrong. The paper went on to become the most downloaded paper in the Public Library of Science and is considered foundational to the field of metascience. In a related study with Jeremy Howick and Despina Koletsi, Ioannidis showed that only a minority of medical interventions are supported by 'high quality' evidence according to the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. Later meta-research identified widespread difficulty in replicating results in many scientific fields, including psychology and medicine. This problem was termed "the replication crisis". Metascience has grown as a reaction to the replication crisis and to concerns about waste in research. 
Many prominent publishers are interested in meta-research and in improving the quality of their publications. Top journals such as Science, The Lancet, and Nature provide ongoing coverage of meta-research and problems with reproducibility. In 2012, PLOS ONE launched a Reproducibility Initiative. In 2015, BioMed Central introduced a minimum-standards-of-reporting checklist to four titles. The first international conference in the broad area of meta-research was the Research Waste/EQUATOR conference held in Edinburgh in 2015; the first international conference on peer review was the Peer Review Congress held in 1989. In 2016, the journal Research Integrity and Peer Review was launched. The journal's opening editorial called for "research that will increase our understanding and suggest potential solutions to issues related to peer review, study reporting, and research and publication ethics". Fields and topics of meta-research Metascience can be categorized into five major areas of interest: Methods, Reporting, Reproducibility, Evaluation, and Incentives. These correspond, respectively, with how to perform, communicate, verify, evaluate, and reward research. Methods Metascience seeks to identify poor research practices, including biases in research, poor study design, and abuse of statistics, and to find methods to reduce these practices. Meta-research has identified numerous biases in scientific literature. Of particular note is the widespread misuse of p-values and abuse of statistical significance; a toy simulation of one such pitfall appears at the end of this passage. Scientific data science Scientific data science is the use of data science to analyse research papers. It encompasses both qualitative and quantitative methods. Research in scientific data science includes fraud detection and citation network analysis. Journalology Journalology, also known as publication science, is the scholarly study of all aspects of the academic publishing process. The field seeks to improve the quality of scholarly research by implementing evidence-based practices in academic publishing. The term "journalology" was coined by Stephen Lock, the former editor-in-chief of The BMJ. The first Peer Review Congress, held in 1989 in Chicago, Illinois, is considered a pivotal moment in the founding of journalology as a distinct field. The field of journalology has been influential in pushing for study pre-registration in science, particularly in clinical trials. Clinical-trial registration is now expected in most countries. Reporting Meta-research has identified poor practices in reporting, explaining, disseminating and popularizing research, particularly within the social and health sciences. Poor reporting makes it difficult to accurately interpret the results of scientific studies, to replicate studies, and to identify biases and conflicts of interest among the authors. Solutions include the implementation of reporting standards and greater transparency in scientific studies (including better requirements for disclosure of conflicts of interest). There is an attempt to standardize reporting of data and methodology through the creation of guidelines by reporting agencies such as CONSORT and the larger EQUATOR Network. Reproducibility The replication crisis is an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate. While the crisis has its roots in the meta-research of the mid- to late 20th century, the phrase "replication crisis" was not coined until the early 2010s as part of a growing awareness of the problem. 
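The statistical pitfall noted above under Methods, and its connection to irreproducible findings, can be illustrated with a small simulation. This is an illustrative sketch rather than an analysis from the article; the sample size, significance threshold, and number of trials are arbitrary assumptions:

```python
import random
import statistics

random.seed(42)

def one_null_experiment(n=30):
    """Simulate a two-group comparison in which no real effect exists."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Welch-style t statistic; |t| > 2 corresponds to roughly p < 0.05 here
    t = (statistics.mean(a) - statistics.mean(b)) / (
        (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    )
    return abs(t) > 2.0

trials = 1000
false_positives = sum(one_null_experiment() for _ in range(trials))
print(f"'significant' findings with no real effect: {false_positives / trials:.1%}")
# Expect roughly 5%. If only these chance "hits" are written up and published,
# the literature overstates the evidence, and replication attempts fail.
```

Selective publication of the roughly 5% of chance positives is one mechanism by which an apparently rigorous literature can accumulate irreproducible results.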
The replication crisis has been closely studied in psychology (especially social psychology) and medicine, including cancer research. Replication is an essential part of the scientific process, and the widespread failure of replication puts into question the reliability of affected fields. Moreover, replication of research (or failure to replicate) is considered less influential than original research, and is less likely to be published in many fields. This discourages the reporting of replication attempts, and even the attempts themselves. Evaluation and incentives Metascience seeks to create a scientific foundation for peer review. Meta-research evaluates peer review systems including pre-publication peer review, post-publication peer review, and open peer review. It also seeks to develop better research funding criteria. Metascience seeks to promote better research through better incentive systems. This includes studying the accuracy, effectiveness, costs, and benefits of different approaches to ranking and evaluating research and those who perform it. Critics argue that perverse incentives have created a publish-or-perish environment in academia which promotes the production of junk science, low-quality research, and false positives. According to Brian Nosek, "The problem that we face is that the incentive system is focused almost entirely on getting research published, rather than on getting research right." Proponents of reform seek to structure the incentive system to favor higher-quality results, for example by judging quality on the basis of narrative expert evaluations ("rather than [only or mainly] indices"), institutional evaluation criteria, guarantees of transparency, and professional standards. Contributorship Studies have proposed machine-readable standards and (a taxonomy of) badges for science publication management systems that home in on contributorship – who has contributed what and how much of the research labor – rather than the traditional concept of plain authorship – who was involved in any way in the creation of a publication. A study pointed out one of the problems associated with the ongoing neglect of nuanced contribution information: it found that "the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers". Assessment factors Factors other than a submission's merits can substantially influence peer reviewers' evaluations. Some such factors may nevertheless be legitimately important, such as the use of track records of the veracity of a researcher's prior publications and their alignment with public interests. Nevertheless, evaluation systems – including those of peer review – may substantially lack mechanisms and criteria that are oriented, or performing well when oriented, towards merit, real-world positive impact, progress and public usefulness, rather than towards analytical indicators such as the number of citations or altmetrics, even when such can be used as partial indicators of those ends. Rethinking the academic reward structure "to offer more formal recognition for intermediate products, such as data" could have positive impacts and reduce data withholding. Recognition of training A commentary noted that academic rankings do not consider where (in which country and institute) the respective researchers were trained. Scientometrics Scientometrics concerns itself with measuring bibliographic data in scientific publications; a small worked example of one common indicator follows. 
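As a concrete illustration of the kind of bibliographic quantity scientometrics studies, the widely used h-index can be computed from a list of citation counts. This sketch is added for illustration and is not drawn from the article:

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times give an h-index of 4: four papers
# are each cited at least four times, but not five at least five times.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```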
Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts. Studies suggest that "metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades" and have to some degree "ceased" to be good measures, leading to issues such as "overproduction, unnecessary fragmentations, overselling, predatory journals (pay and publish), clever plagiarism, and deliberate obfuscation of scientific results so as to sell and oversell". Novel tools in this area include systems to quantify how much the cited node informs the citing node; a toy sketch of such weighting appears at the end of this passage. This can be used to convert an unweighted citation network into a weighted one, which can then serve importance assessment, deriving "impact metrics for the various entities involved, like the publications, authors etc", as well as search engines and recommendation systems, among other tools. Science governance Science funding and science governance can also be explored and informed by metascience. Incentives Various interventions, such as prioritization, can be important. For instance, the concept of differential technological development refers to deliberately developing technologies – e.g. control-, safety- and policy-technologies versus risky biotechnologies – at different precautionary paces to decrease risks, mainly global catastrophic risk, by influencing the sequence in which technologies are developed. Relying only on the established forms of legislation and incentives to ensure the right outcomes may not be adequate, as these may often be too slow or inappropriate. Other incentives to govern science and related processes, including via metascience-based reforms, may include ensuring accountability to the public (in terms of e.g. accessibility of, especially publicly funded, research, or of it addressing various research topics of public interest in serious manners), increasing the qualified productive scientific workforce, improving the efficiency of science to improve problem-solving in general, and facilitating that unambiguous societal needs based on solid scientific evidence – such as those concerning human physiology – are adequately prioritized and addressed. Such interventions, incentives and intervention-designs can be subjects of metascience. Science funding and awards Scientific awards are one category of science incentives. Metascience can explore existing and hypothetical systems of science awards. For instance, it has been found that work honored by Nobel Prizes clusters in only a few scientific fields, with only 36 of 71 fields having received at least one Nobel Prize. Of the 114 (of 849) domains into which science could be divided under the study's DC2 and DC3 classification systems, five were shown to comprise over half of the Nobel Prizes awarded between 1995 and 2017 (particle physics [14%], cell biology [12.1%], atomic physics [10.9%], neuroscience [10.1%], and molecular chemistry [5.3%]). A study found that the model in which policy-makers delegate responsibility for knowledge production and appropriate funding to science – a centralized, authority-based, top-down approach – with science subsequently somehow delivering "reliable and useful knowledge to society", is too simple. Measurements show that the allocation of biomedical resources can be more strongly correlated with previous allocations and research than with the burden of disease. 
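The weighted-citation-network idea mentioned above can be sketched in a few lines. The tiny graph, edge weights, and damping factor below are illustrative assumptions, not values from any published system; the recursion is a simple PageRank-style influence score over the weighted graph:

```python
# citing paper -> list of (cited paper, weight), where each weight is meant
# to capture how much the cited node informs the citing node
edges = {
    "A": [("B", 0.7), ("C", 0.3)],
    "B": [("C", 1.0)],
    "C": [],
}

papers = list(edges)
score = {p: 1.0 / len(papers) for p in papers}
damping = 0.85

for _ in range(50):  # power iteration until the scores stabilize
    new = {p: (1 - damping) / len(papers) for p in papers}
    for citing, cited in edges.items():
        for target, weight in cited:
            new[target] += damping * score[citing] * weight
    score = new

for paper, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {s:.3f}")  # C, which informs both A and B, scores highest
```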
A study suggests that "[i]f peer review is maintained as the primary mechanism of arbitration in the competitive selection of research reports and funding, then the scientific community needs to make sure it is not arbitrary". Studies indicate there is a need to "reconsider how we measure success". Funding data Funding information from grant databases and funding acknowledgment sections can be sources of data for scientometrics studies, e.g. for investigating or recognizing the impact of funding entities on the development of science and technology. Research questions and coordination Risk governance Science communication and public use It has been argued that "science has two fundamental attributes that underpin its value as a global public good: that knowledge claims and the evidence on which they are based are made openly available to scrutiny, and that the results of scientific research are communicated promptly and efficiently". Metascientific research is exploring topics of science communication such as media coverage of science, science journalism and online communication of results by science educators and scientists. A study found that the "main incentive academics are offered for using social media is amplification" and that it should be "moving towards an institutional culture that focuses more on how these [or such] platforms can facilitate real engagement with research". Science communication may also involve the communication of societal needs, concerns and requests to scientists. Alternative metrics tools Alternative metrics tools can be used not only to help with assessment (performance and impact) and findability, but also to aggregate many of the public discussions about a scientific paper in social media such as Reddit, citations on Wikipedia, and reports about the study in the news media, which can then in turn be analyzed in metascience or provided and used by related tools. In terms of assessment and findability, altmetrics rate publications' performance or impact by the interactions they receive through social media or other online platforms, which can for example be used for sorting recent studies by measured impact, including before other studies cite them. The specific procedures of established altmetrics are not transparent, and the algorithms used cannot be customized or altered by the user in the way open-source software can. A study has described various limitations of altmetrics and points "toward avenues for continued research and development". They are also limited in their use as a primary tool for researchers to find received constructive feedback. Societal implications and applications It has been suggested that it may benefit science if "intellectual exchange—particularly regarding the societal implications and applications of science and technology—are better appreciated and incentivized in the future". Knowledge integration Primary studies "without context, comparison or summary are ultimately of limited value", and various types of research syntheses and summaries integrate primary studies. Progress in key social-ecological challenges of the global environmental agenda is "hampered by a lack of integration and synthesis of existing scientific evidence", with a "fast-increasing volume of data", compartmentalized information and generally unmet evidence synthesis challenges. According to Khalil, researchers are facing the problem of too many papers – e.g. 
in March 2014 more than 8,000 papers were submitted to arXiv – and to "keep up with the huge amount of literature, researchers use reference manager software, they make summaries and notes, and they rely on review papers to provide an overview of a particular topic". He notes that review papers are usually written only "for topics in which many papers were written already, and they can get outdated quickly", and suggests "wiki-review papers" that get continuously updated with new studies on a topic, summarize many studies' results, and suggest future research. A study suggests that if a scientific publication is cited in a Wikipedia article, this could potentially be considered an indicator of some form of impact for this publication, for example as this may, over time, indicate that the reference has contributed to a high-level summary of the given topic. Science journalism Science journalists play an important role in the scientific ecosystem and in science communication to the public, and need to "know how to use relevant information when deciding whether to trust a research finding, and whether and how to report on it", vetting the findings that get transmitted to the public. Science education Some studies investigate science education, e.g. the teaching of selected scientific controversies, the historical discovery processes behind major scientific conclusions, and common scientific misconceptions. Education can also be a topic more generally, such as how to improve the quality of scientific outputs, how to reduce the time needed before scientific work can begin, or how to enlarge and retain various scientific workforces. Science misconceptions and anti-science attitudes Many students have misconceptions about what science is and how it works. Anti-science attitudes and beliefs are also a subject of research. Hotez suggests antiscience "has emerged as a dominant and highly lethal force, and one that threatens global security", and that there is a need for "new infrastructure" that mitigates it. Evolution of sciences Scientific practice Metascience can investigate how scientific processes evolve over time. A study found that teams are growing in size, "increasing by an average of 17% per decade". It has been found that the prevalent forms of non-open-access publication, and the prices charged for many conventional journals – even for publicly funded papers – are unwarranted, unnecessary or suboptimal, and act as detrimental barriers to scientific progress. Open access can save considerable financial resources, which could be used elsewhere, and level the playing field for researchers in developing countries. There are substantial expenses for subscriptions, gaining access to specific studies, and article processing charges. Paywall: The Business of Scholarship is a documentary on such issues. Another topic is the established styles of scientific communication (e.g. long text-form studies and reviews) and the scientific publishing practices – there are concerns about a "glacial pace" of conventional publishing. The use of preprint servers to publish study drafts early is increasing, and open peer review, new tools to screen studies, and improved matching of submitted manuscripts to reviewers are among the proposals to speed up publication. Science overall and intrafield developments Studies have various kinds of metadata which can be utilized, complemented and made accessible in useful ways. 
OpenAlex is a free online index of over 200 million scientific documents that integrates and provides metadata such as sources, citations, author information, scientific fields and research topics. Its API and open-source website can be used for metascience, scientometrics and novel tools that query this semantic web of papers; a minimal query sketch appears at the end of this passage. Another project under development, Scholia, uses metadata of scientific publications for various visualizations and aggregation features, such as providing a simple user interface summarizing literature about a specific feature of the SARS-CoV-2 virus using Wikidata's "main subject" property. Subject-level resolutions Beyond metadata explicitly assigned to studies by humans, natural language processing and AI can be used to assign research publications to topics – one study investigating the impact of science awards used such techniques to associate a paper's text (not just keywords) with the linguistic content of Wikipedia's scientific topics pages ("pages are created and updated by scientists and users through crowdsourcing"), creating meaningful and plausible classifications of high-fidelity scientific topics for further analysis or navigability. Growth or stagnation of science overall Metascience research is investigating the growth of science overall, using e.g. data on the number of publications in bibliographic databases. A study found that segments with different growth rates appear related to phases of "economic (e.g., industrialization)" – money is considered a necessary input to the science system – "and/or political developments (e.g., Second World War)". It also confirmed a recent exponential growth in the volume of scientific literature and calculated an average doubling period of 17.3 years. However, others have pointed out that it is difficult to measure scientific progress in meaningful ways, partly because it is hard to accurately evaluate how important any given scientific discovery is. A variety of perspectives on the trajectories of science overall (impact, number of major discoveries, etc.) have been described in books and articles, including that science is becoming harder (per dollar or hour spent); that if science is "slowing today, it is because science has remained too focused on established fields"; that papers and patents are increasingly less likely to be "disruptive" in terms of breaking with the past, as measured by the "CD index"; and that there is a great stagnation – possibly as part of a larger trend – whereby e.g. "things haven't changed nearly as much since the 1970s" when excluding the computer and the Internet. Better understanding of potential slowdowns according to some measures could be a major opportunity to improve humanity's future. For example, emphasis on citations in the measurement of scientific productivity, information overload, reliance on a narrower set of existing knowledge (which may include narrow specialization and related contemporary practices), and risk-avoidant funding structures may have pushed research "toward incremental science and away from exploratory projects that are more likely to fail". The study that introduced the "CD index" suggests the overall number of papers has risen while the total of "highly disruptive" papers, as measured by the index, has not (notably, the 1998 discovery of the accelerating expansion of the universe has a CD index of 0). Their results also suggest scientists and inventors "may be struggling to keep up with the pace of knowledge expansion". 
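To make the programmatic access mentioned earlier in this passage concrete, the sketch below queries the public OpenAlex REST API for works matching a search term. The endpoint, parameters, and field names follow OpenAlex's public documentation as best understood here; treat them as assumptions to verify against api.openalex.org rather than guaranteed interface details:

```python
import requests

# Fetch the top five works matching a search term from the OpenAlex API.
resp = requests.get(
    "https://api.openalex.org/works",
    params={"search": "metascience", "per-page": 5},
    timeout=30,
)
resp.raise_for_status()

for work in resp.json().get("results", []):
    year = work.get("publication_year")
    title = work.get("display_name")
    cites = work.get("cited_by_count")
    print(f"{year}: {title} (cited by {cites})")
```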
Various ways of measuring the "novelty" of studies – novelty metrics – have been proposed to balance a potential anti-novelty bias, such as textual analysis, or measuring whether a study makes first-time-ever combinations of referenced journals, taking the difficulty of such combinations into account. Other approaches include proactively funding risky projects. Topic mapping Science maps could show the main interrelated topics within a certain scientific domain, their change over time, and their key actors (researchers, institutions, journals). They may help identify factors that determine the emergence of new scientific fields and the development of interdisciplinary areas, and could be relevant for science policy purposes. Theories of scientific change could guide "the exploration and interpretation of visualized intellectual structures and dynamic patterns". The maps can show the intellectual, social or conceptual structure of a research field. Beyond visual maps, expert survey-based studies and similar approaches could identify understudied or neglected societally important areas, topic-level problems (such as stigma or dogma), or potential misprioritizations. Examples include studies about policy in relation to public health, and about the social science of climate change mitigation, where it has been estimated that only 0.12% of all funding for climate-related research is spent on the latter, despite the most urgent puzzle at the current juncture being how to mitigate climate change, whereas the natural science of climate change is already well established. There are also studies that map a scientific field or a topic, such as the study of the use of research evidence in policy and practice, partly using surveys. Controversies, current debates and disagreement Some research investigates scientific controversy or controversies, and may identify currently ongoing major debates (e.g. open questions) and disagreement between scientists or studies. One study suggests the level of disagreement was highest in the social sciences and humanities (0.61%), followed by biomedical and health sciences (0.41%), life and earth sciences (0.29%), physical sciences and engineering (0.15%), and mathematics and computer science (0.06%). Such research may also show where the disagreements are, especially if they cluster, including visually, such as with cluster diagrams. Challenges of interpretation of pooled results Studies about a specific research question or research topic are often reviewed in the form of higher-level overviews in which results from various studies are integrated, compared, critically analyzed and interpreted. Examples of such works are scientific reviews and meta-analyses. These and related practices face various challenges and are a subject of metascience. Various issues with the included or available studies, such as heterogeneity of the methods used, may lead to faulty conclusions of the meta-analysis. Knowledge integration and living documents Various problems require swift integration of new and existing science-based knowledge. Especially settings where there are a large number of loosely related projects and initiatives benefit from a common ground, or "commons". Evidence synthesis can be applied to important and, notably, both relatively urgent and certain global challenges: "climate change, energy transitions, biodiversity loss, antimicrobial resistance, poverty eradication and so on". It was suggested that a better system would keep summaries of research evidence up to date via living systematic reviews – e.g. 
as living documents. While the number of scientific papers and data (or information and online knowledge) has risen substantially, the number of published academic systematic reviews has risen from "around 6,000 in 2011 to more than 45,000 in 2021". An evidence-based approach is important for progress in science, policy, medicine and other practices. For example, meta-analyses can quantify what is known and identify what is not yet known, and place "truly innovative and highly interdisciplinary ideas" into the context of established knowledge, which may enhance their impact. Factors of success and progress It has been hypothesized that a deeper understanding of factors behind successful science could "enhance prospects of science as a whole to more effectively address societal problems". Novel ideas and disruptive scholarship Two metascientists reported that "structures fostering disruptive scholarship and focusing attention on novel ideas" could be important, as in a growing scientific field citation flows disproportionately consolidate onto already well-cited papers, possibly slowing and inhibiting canonical progress. A study concluded that to enhance the impact of truly innovative and highly interdisciplinary novel ideas, they should be placed in the context of established knowledge. Mentorship, partnerships and social factors Other researchers reported that the most successful protégés – in terms of "likelihood of prizewinning, National Academy of Science (NAS) induction, or superstardom" – studied under mentors who published research for which they were conferred a prize after the protégés' mentorship. Studying original topics rather than these mentors' research topics was also positively associated with success. Highly productive partnerships are also a topic of research – e.g. "super-ties" of frequent co-authorship between two individuals who can complement each other's skills, likely also the result of other factors such as mutual trust, conviction, commitment and fun. Study of successful scientists and processes, general skills and activities The emergence or origin of ideas by successful scientists is also a topic of research, for example reviewing existing ideas on how Mendel made his discoveries – or, more generally, the process of discovery by scientists. Science is a "multifaceted process of appropriation, copying, extending, or combining ideas and inventions" [and other types of knowledge or information], and not an isolated process. There are also a few studies investigating scientists' habits, common modes of thinking, reading habits, use of information sources, digital literacy skills, and workflows. Labor advantage A study theorized that in many disciplines, larger scientific productivity or success by elite universities can be explained by their larger pool of available funded laborers. The study found that university prestige was only associated with higher productivity for faculty with group members, not for faculty publishing alone or for the group members themselves. This is presented as evidence that the outsize productivity of elite researchers results not from a more rigorous selection of talent by top universities, but from labor advantages accrued through greater access to funding and the attraction of prestige for graduate and postdoctoral researchers. Ultimate impacts Success in science (as indicated in tenure review processes) is often measured in terms of metrics like citations, not in terms of the eventual or potential impact on lives and society, which awards sometimes do recognize. 
Problems with such metrics are roughly outlined elsewhere in this article and include that reviews replace citations to primary studies. There are also proposals for changes to the academic incentive systems that would increase the recognition of societal impact in the research process. Progress studies A proposed field of "Progress Studies" could investigate how scientists (or the funders or evaluators of scientists) should act, "figuring out interventions", and study progress itself. The field was explicitly proposed in a 2019 essay and described as an applied science that prescribes action. As and for acceleration of progress A study suggests that improving the way science is done could accelerate the rate of scientific discovery and its applications, which could be useful for finding urgent solutions to humanity's problems, improving humanity's conditions, and enhancing the understanding of nature. Metascientific studies can seek to identify aspects of science that need improvement, and develop ways to improve them. If science is accepted as the fundamental engine of economic growth and social progress, this could raise "the question of what we – as a society – can do to accelerate science, and to direct science toward solving society's most important problems." However, one of the authors clarified that a one-size-fits-all approach is not thought to be the right answer – for example, in funding, DARPA models, curiosity-driven methods, allowing "a single reviewer to champion a project even if his or her peers do not agree", and various other approaches all have their uses. Nevertheless, evaluation of them can help build knowledge of what works or works best. Reforms Meta-research identifying flaws in scientific practice has inspired reforms in science. These reforms seek to address and fix problems in scientific practice which lead to low-quality or inefficient research. A 2015 study lists "fragmented" efforts in meta-research. Pre-registration The practice of registering a scientific study before it is conducted is called pre-registration. It arose as a means to address the replication crisis. Pre-registration requires the submission of a registered report, which is then accepted for publication or rejected by a journal based on theoretical justification, experimental design, and the proposed statistical analysis. Pre-registration of studies serves to prevent publication bias (e.g. not publishing negative results), reduce data dredging, and increase replicability. Reporting standards Studies showing poor consistency and quality of reporting have demonstrated the need for reporting standards and guidelines in science, which has led to the rise of organisations that produce such standards, such as CONSORT (Consolidated Standards of Reporting Trials) and the EQUATOR Network. The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network is an international initiative aimed at promoting transparent and accurate reporting of health research studies to enhance the value and reliability of medical research literature. The EQUATOR Network was established with the goals of raising awareness of the importance of good reporting of research, assisting in the development, dissemination and implementation of reporting guidelines for different types of study designs, monitoring the status of the quality of reporting of research studies in the health sciences literature, and conducting research relating to issues that impact the quality of reporting of health research studies. 
The Network acts as an "umbrella" organisation, bringing together developers of reporting guidelines, medical journal editors and peer reviewers, research funding bodies, and other key stakeholders with a mutual interest in improving the quality of research publications and of research itself. Applications Information and communications technologies Metascience is used in the creation and improvement of technical systems (ICTs) and standards of science evaluation, incentivation, communication, commissioning, funding, regulation, production, management, use and publication. Such work can be called "applied metascience" and may seek to explore ways to increase the quantity, quality and positive impact of research. One example of this is the development of alternative metrics. Study screening and feedback Various websites or tools identify inappropriate studies and/or enable feedback on them, such as PubPeer, Cochrane's Risk of Bias Tool and Retraction Watch. Medical and academic disputes are as ancient as antiquity, and a study calls for research into "constructive and obsessive criticism" and into policies to "help strengthen social media into a vibrant forum for discussion, and not merely an arena for gladiator matches". Feedback on studies can be found via altmetrics, which are often integrated at the website of the study – most often as an embedded Altmetric badge – but may often be incomplete, such as only showing social media discussions that link to the study directly but not those that link to news reports about the study. Tools used, modified, extended or investigated Tools may be developed through metaresearch, or may be used or investigated by it. Notable examples include: search engines like Google Scholar, which are used to find studies, and the notification service Google Alerts, which enables notifications for new studies matching specified search terms; scholarly communication infrastructure, including search databases; the shadow library Sci-Hub, itself a topic of metascience; personal knowledge management systems for research, knowledge and task management, such as saving information in organized ways with multi-document text editors for future use – such systems, along with e.g. web browsers (tabs, add-ons, etc.) and search software, could be described as part of "mind-machine partnerships" that could be investigated by metascience for how they could improve science; Scholia, an effort to open scholarly publication metadata and use it via Wikidata; and various software that enables common metascientific practices such as bibliometric analysis. Development According to a study, "a simple way to check how often studies have been repeated, and whether or not the original findings are confirmed" is needed due to reproducibility issues in science. A study suggests a tool for screening studies for early warning signs of research fraud. Medicine Clinical research in medicine is often of low quality, and many studies cannot be replicated. An estimated 85% of research funding is wasted. Additionally, the presence of bias affects research quality. The pharmaceutical industry exerts substantial influence on the design and execution of medical research. Conflicts of interest are common among authors of medical literature and among editors of medical journals. While almost all medical journals require their authors to disclose conflicts of interest, editors are not required to do so. Financial conflicts of interest have been linked to higher rates of positive study results. 
In antidepressant trials, pharmaceutical sponsorship is the best predictor of trial outcome. Blinding is another focus of meta-research, as error caused by poor blinding is a source of experimental bias. Blinding is not well reported in the medical literature, and widespread misunderstanding of the subject has resulted in poor implementation of blinding in clinical trials. Furthermore, failure of blinding is rarely measured or reported. Research showing the failure of blinding in antidepressant trials has led some scientists to argue that antidepressants are no better than placebo. In light of meta-research showing failures of blinding, CONSORT standards recommend that all clinical trials assess and report the quality of blinding. Studies have shown that systematic reviews of existing research evidence are used sub-optimally in planning new research or summarizing results. Cumulative meta-analyses of studies evaluating the effectiveness of medical interventions have shown that many clinical trials could have been avoided if a systematic review of existing evidence had been done prior to conducting a new trial. For example, Lau et al. analyzed 33 clinical trials (involving 36,974 patients) evaluating the effectiveness of intravenous streptokinase for acute myocardial infarction. Their cumulative meta-analysis demonstrated that 25 of the 33 trials could have been avoided if a systematic review had been conducted prior to conducting a new trial. In other words, randomizing 34,542 patients was potentially unnecessary. One study analyzed 1,523 clinical trials included in 227 meta-analyses and concluded that "less than one quarter of relevant prior studies" were cited. They also confirmed earlier findings that most clinical trial reports do not present a systematic review to justify the research or summarize the results. Many treatments used in modern medicine have been proven to be ineffective, or even harmful. A 2007 study by John Ioannidis found that it took an average of ten years for the medical community to stop referencing popular practices after their efficacy was unequivocally disproven. Psychology Metascience has revealed significant problems in psychological research. The field suffers from high bias, low reproducibility, and widespread misuse of statistics. The replication crisis affects psychology more strongly than any other field; as many as two-thirds of highly publicized findings may be impossible to replicate. Meta-research finds that 80–95% of psychological studies support their initial hypotheses, which strongly implies the existence of publication bias. The replication crisis has led to renewed efforts to re-test important findings. In response to concerns about publication bias and p-hacking, more than 140 psychology journals have adopted result-blind peer review, in which studies are pre-registered and published without regard for their outcome. An analysis of these reforms estimated that 61 percent of result-blind studies produce null results, in contrast with 5 to 20 percent in earlier research. This analysis shows that result-blind peer review substantially reduces publication bias. Psychologists routinely confuse statistical significance with practical importance, enthusiastically reporting great certainty in unimportant facts. Some psychologists have responded with an increased use of effect-size statistics, rather than sole reliance on p-values. Physics Richard Feynman noted that estimates of physical constants were closer to published values than would be expected by chance. 
This was believed to be the result of confirmation bias: results that agreed with the existing literature were more likely to be believed, and therefore published. Physicists now implement blinding to prevent this kind of bias. Computer Science Web measurement studies are essential for understanding the workings of the modern Web, particularly in the fields of security and privacy. However, these studies often require custom-built or modified crawling setups, leading to a plethora of analysis tools for similar tasks. In a paper by Nurullah Demir et al., the authors surveyed 117 recent research papers to derive best practices for Web-based measurement studies and to establish criteria for reproducibility and replicability. They found that experimental setups and other critical information for reproducing and replicating results are often missing. In a large-scale Web measurement study on 4.5 million pages with 24 different measurement setups, the authors demonstrated the impact of slight differences in experimental setups on the overall results, emphasizing the need for accurate and comprehensive documentation. Organizations and institutes There are several organizations and universities across the globe which work on meta-research – these include the Meta-Research Innovation Center Berlin, the Meta-Research Innovation Center at Stanford, the Meta-Research Center at Tilburg University, the Meta-research & Evidence Synthesis Unit of The George Institute for Global Health in India, and the Center for Open Science. Organizations that develop tools for metascience include OurResearch, the Center for Scientific Integrity and altmetrics companies. There is an annual Metascience Conference hosted by the Association for Interdisciplinary Meta-Research and Open Science (AIMOS) and a biannual conference hosted by the Centre for Open Science.
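The cumulative and conventional meta-analyses discussed in the Medicine section above pool effect estimates across studies. The sketch below shows standard fixed-effect, inverse-variance pooling together with Cochran's Q and the I² heterogeneity statistic; the effect sizes and variances are made-up numbers for illustration only, not data from any study cited here:

```python
import math

# Hypothetical per-study effect sizes and within-study variances
effects = [0.30, 0.25, 0.40, 0.10, 0.35]
variances = [0.02, 0.03, 0.05, 0.04, 0.02]

weights = [1.0 / v for v in variances]             # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))                 # standard error of the pooled effect

# Cochran's Q and I^2: the share of variability beyond chance, used to flag
# the between-study heterogeneity that can undermine a meta-analysis
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled effect = {pooled:.3f} (95% CI +/- {1.96 * se:.3f})")
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```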
Physical sciences
Science basics
Basics and measurement
51596724
https://en.wikipedia.org/wiki/Anatomical%20variation
Anatomical variation
An anatomical variation, anatomical variant, or anatomical variability is a presentation of body structure with morphological features different from those typically described in the majority of individuals. Anatomical variations are categorized into three types: morphometric (size or shape), consistency (present or absent), and spatial (proximal/distal or right/left). Variations are seen as normal in the sense that they are found consistently among different individuals, are mostly without symptoms, and are termed anatomical variations rather than abnormalities. Anatomical variations are mainly caused by genetics and may vary considerably between different populations. The rate of variation differs considerably between single organs, particularly in muscles. Knowledge of anatomical variations is important in order to distinguish them from pathological conditions. A very early paper, published in 1898, presented anatomical variations as having a wide range and significance, and before the use of X-ray technology, anatomical variations were mostly found only in cadaver studies. The use of imaging techniques has since defined many such variations. Some variations are found in different species, such as polydactyly, having more than the usual number of digits. Variants of structures Muscles Kopsch gave a detailed listing of muscle variations. These included the absence of muscles; muscles that were doubled; muscles that were divided into two or more parts; an increase or decrease in the origin or insertion of the muscle; and the joining to adjacent organs. The palmaris longus muscle in the forearm is sometimes absent, as is the plantaris muscle in the leg. The sternalis muscle is a variant that lies in front of the pectoralis major and may show up on a mammogram. Bones Usually there are five lumbar vertebrae, but sometimes there are six, and sometimes four. Joints A discoid meniscus is a rare thickened lateral meniscus in the knee joint that can sometimes be swollen and painful. Organs The lungs are subject to anatomical variation. Clinical significance Small accessory bones called ossicles may be mistaken for avulsion fractures.
Biology and health sciences
Basic anatomy
Biology
44419326
https://en.wikipedia.org/wiki/Bitter%20taste%20evolution
Bitter taste evolution
The evolution of bitter taste receptors has been one of the most dynamic evolutionary adaptations to arise in multiple species. This phenomenon has been widely studied in the field of evolutionary biology because of its role in the identification of toxins, which are often found on the leaves of inedible plants. A palate more sensitive to these bitter tastes would, theoretically, have an advantage over members of the population less sensitive to these poisonous substances, because its bearers would be much less likely to ingest toxic plants. Bitter-taste genes have been found in a host of vertebrates, including sharks and rays, and the same genes have been well characterized in several common laboratory animals such as primates and mice, as well as in humans. This ability in humans is primarily encoded by the TAS2R gene family, which contains 25 functional loci as well as 11 pseudogenes. The development of this gene family has been well characterized, with evidence that the ability evolved before the human migration out of Africa. The family continues to evolve in the present day. TAS2R The bitter taste receptor family, T2R (TAS2R), is encoded on chromosome 7 and chromosome 12. Genes on the same chromosome have shown remarkable similarity with each other, suggesting that the primary mutagenic forces in the evolution of TAS2R are duplication events. These events have occurred in at least seven primate species, including chimpanzee, human, gorilla, orangutan, rhesus macaque and baboon. The high variety among primate and rodent populations additionally suggests that, while selective constraint on these genes certainly exists, its effect is rather slight. Members of the T2R family encode G-protein-coupled receptors, which are involved in intracellular taste transduction, not only on the taste buds but also in the pancreas and gastrointestinal tract. The mechanism of transduction is shown by exposure of the endocrine and gastrointestinal cells containing the receptors to bitter compounds, most famously phenylthiocarbamide (PTC). Exposure to PTC causes an intracellular cascade, as evidenced by a large and rapid increase in intracellular calcium ions. Toxins as the primary selective force The primary selective adaptation that arises from bitter taste is the ability to detect poisonous compounds, as most poisonous compounds in nature are bitter. However, this trait is not always advantageous, as bitter compounds exist in nature that are not poisonous. Exclusive rejection of these compounds would in fact be a disadvantageous trait, as it would make it more difficult to find food. Toxic and bitter compounds do, however, occur in different diets at different frequencies. Sensitivities to bitter compounds should therefore logically follow the requirements of different diets: species that can afford to reject plants because plants make up little of their diet (carnivores) can maintain a higher sensitivity to bitter compounds than those that ingest plants exclusively. Exposure to the bitter marker quinine hydrochloride supported this, as sensitivities to bitter compounds were highest in carnivores, followed by omnivores, then grazers and browsers. This identifies toxic plants as the primary selective force for bitter taste. The phenomenon is confirmed by genetic analysis. One measure of positive selection is Ka/Ks, the ratio of non-synonymous to synonymous substitution rates; a standard formulation is given below. 
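In symbols (a standard textbook formulation, added here for clarity rather than drawn from the article's own notation):

```latex
\omega = \frac{K_a}{K_s}
       = \frac{\text{non-synonymous substitutions per non-synonymous site}}
              {\text{synonymous substitutions per synonymous site}},
\qquad
\begin{cases}
\omega > 1 & \text{positive (diversifying) selection,}\\
\omega = 1 & \text{neutral evolution,}\\
\omega < 1 & \text{purifying selection.}
\end{cases}
```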
If the rate of non-synonymous substitution is higher than the rate of synonymous substitution, then the changes produced by the non-synonymous mutations are being selected for relative to the neutral synonymous mutations. For the bitter taste gene family, TAS2R, this ratio is over one in the loci responsible for the extracellular binding domains of the receptors. This indicates that the part of the receptor responsible for binding the bitter ligands is under positive selective pressure. TAS2R development in human history The pseudogenes mentioned earlier are produced by a number of gene silencing events, the rate of which is constant throughout primate species. Several of these pseudogenes maintain a role in modulating taste response, however. By studying the silencing events in humans, it is possible to theorize about the selective pressures on humans throughout their evolutionary history. As is the case with the usual distribution of human genetic variation, the highest rate of diversity in TAS2R pseudogenes was often found in African populations. This was not the case at two pseudogene loci, TAS2R6P and TAS2R18P, where the highest diversity was found in non-African populations. This suggests that the functional versions of these genes arose before the human migration out of Africa, into areas where selective constraint did not remove non-functional versions of these gene loci. This allowed the pseudogene frequency to increase, creating genetic variance at those loci. It is an example of relaxed environmental constraint allowing silencing mutations to lead to the pseudogenization of once-important loci. The gene locus TAS2R16 also tells a story about bitter taste evolution. Varying rates of positive selection in different areas of the world give an indication of the selective pressures and events in those areas. At this locus, the 172Asn allele is the most common, especially in areas of Eurasia and in Pygmy tribes in Africa, where it is nearly fixed. This suggests that the gene has had a relaxed selective constraint in most areas of Africa in comparison to Eurasia, which has been attributed to the increased knowledge of toxic plants in the area that arose around 10,000 years ago. The increased frequency of 172Asn in Eurasia suggests that the migration out of Africa into areas with different climates and foliage rendered the knowledge of toxic plants in Africa useless, forcing the populations to rely once again on the 172Asn allele and causing higher rates of positive selection. The high rate of 172Asn in Pygmy populations is more difficult to explain. The effective population size of these isolated populations is quite small, indicating that genetic drift, explained by the founder effect, is the cause of these atypically high rates. The different environments humans have inhabited have placed different levels of selection on the population, producing a wide variety at the TAS2R loci across humanity. Relaxed constraint Neutral evolution in the bitter taste trait in humans is well documented by evolutionary biologists. In all human populations there have been high rates of synonymous and non-synonymous substitutions that cause pseudogenization. These events have produced alleles that persist to this day because of relaxed selective constraint from the environment. The genes under neutral evolution in humans are very similar to several genes in chimpanzees in both their synonymous and non-synonymous mutation rates, suggesting that the relaxed selective constraint began before the divergence of the two species. 
The cause of this relaxed constraint lay primarily in hominid lifestyle changes. Roughly two million years ago, the hominid diet shifted from primarily vegetarian to increasingly meat-based. This led to a reduction in the amount of toxic food regularly encountered by humanity's early ancestors. Additionally, the use of fire began around 800,000 years ago, which further detoxified food and led to a decreased dependence on TAS2R to detect poisonous food. Evolutionary biologists have had to theorize how, with fire being an exclusively human tool, relaxed selective constraint has been found in chimpanzees as well. Meat does account for about 15% of the chimpanzee diet, with much of the other 85% made up of ripe fruits, which very rarely contain toxins. This contrasts with other primates whose diets are composed entirely of leaves, unripe fruits, and bark, which have comparatively high levels of toxins. The difference in diets between chimpanzees and other primates accounts for the different levels of selective constraint.
Biology and health sciences
Sensory nervous system
Biology
47512577
https://en.wikipedia.org/wiki/Effects%20of%20climate%20change%20on%20agriculture
Effects of climate change on agriculture
There are numerous effects of climate change on agriculture, many of which are making it harder for agricultural activities to provide global food security. Rising temperatures and changing weather patterns often result in lower crop yields due to water scarcity caused by drought, heat waves and flooding. These effects of climate change can also increase the risk of several regions suffering simultaneous crop failures. Currently this risk is regarded as rare, but if such simultaneous crop failures did happen, they would have significant consequences for the global food supply. Many pests and plant diseases are also expected to either become more prevalent or to spread to new regions. The world's livestock are also expected to be affected by many of the same issues, from greater heat stress to animal feed shortfalls and the spread of parasites and vector-borne diseases. The increased atmospheric CO2 level from human activities (mainly the burning of fossil fuels) causes a fertilization effect. This effect offsets a small portion of the detrimental effects of climate change on agriculture. However, it comes at the expense of lower levels of essential micronutrients in the crops. Furthermore, CO2 fertilization has little effect on C4 crops like maize. On the coasts, some agricultural land is expected to be lost to sea level rise, while melting glaciers could result in less irrigation water being available. On the other hand, more arable land may become available as frozen land thaws. Other effects include erosion and changes in soil fertility and in the length of growing seasons. Also, bacteria like Salmonella and fungi that produce mycotoxins grow faster as the climate warms; their growth has negative effects on food safety, food loss and prices. There has been extensive research on the effects of climate change on individual crops, particularly on the four staple crops: corn (maize), rice, wheat and soybeans. These crops are responsible for around two-thirds of all calories consumed by humans (both directly and indirectly as animal feed). The research investigates important uncertainties, for example future population growth, which will increase global food demand for the foreseeable future. The future degree of soil erosion and groundwater depletion are further uncertainties. On the other hand, a range of improvements to agricultural yields, collectively known as the Green Revolution, has increased yields per unit of land area by between 250% and 300% since 1960, and some of that progress will likely continue. The scientific consensus is that global food security will change relatively little in the near term. Between 720 million and 811 million people were undernourished in 2021, with around 200,000 people at a catastrophic level of food insecurity. Climate change is expected to put an additional 8 to 80 million people at risk of hunger by 2050; the estimated range depends on the intensity of future warming and the effectiveness of adaptation measures. Agricultural productivity growth will likely have improved food security for hundreds of millions of people by then. Predictions that reach further into the future (to 2100 and beyond) are rare. There is some concern about the effects on food security from more extreme weather events in the future. Nevertheless, at this stage there is no expectation of a widespread global famine due to climate change within the 21st century. 
Direct effects from changing weather patterns

Observed changes in adverse weather conditions

Agriculture is sensitive to weather, and major events like heatwaves, droughts or heavy rains (also known as low and high precipitation extremes) can cause substantial losses. For example, Australia's farmers are very likely to suffer losses during El Niño weather conditions, while the 2003 European heat wave led to 13 billion euros in uninsured agricultural losses. Climate change is known to increase the frequency and severity of heatwaves, and to make precipitation less predictable and more prone to extremes, but since climate change attribution is still a relatively new field, connecting specific weather events and the shortfalls they cause to climate change over natural variability is often difficult. Exceptions include West Africa, where the climate-induced intensification of extreme weather was found to have already decreased millet yields by 10–20%, and sorghum yields by 5–15%. Similarly, it was found that climate change had intensified drought conditions in Southern Africa in 2007, which elevated food prices and caused "acute food insecurity" in the country of Lesotho. Agriculture in Southern Africa was also adversely affected by drought after climate change intensified the effects of the 2014–2016 El Niño event. In Europe, between 1950 and 2019, heat extremes became more frequent and more likely to occur consecutively, while cold extremes declined. At the same time, Northern Europe and much of Eastern Europe were found to experience extreme precipitation more often, while the Mediterranean became more affected by drought. Similarly, the severity of heatwave and drought effects on European crop production was found to have tripled over a 50-year period – from losses of 2.2% during 1964–1990 to losses of 7.3% in 1991–2015. In the summer of 2018, heat waves probably linked to climate change greatly reduced average yields in many parts of the world, especially Europe. During the month of August, further crop failures contributed to a rise in global food prices. Floods, often linked to climate change, have also had notable adverse effects on agriculture in recent years. In May 2019, floods shortened the corn planting season in the Midwestern United States, lowering the projected yield from 15 billion bushels to 14.2 billion. During the 2021 European floods, estimates pointed to severe damage to the agricultural sector of Belgium, one of the countries hardest hit by the floods, including long-term effects like soil erosion. In China, 2023 research found that extreme rainfall had cost the country about 8% of its rice output over the two preceding decades – considered comparable to the losses caused by extreme heat over the same period.

Projected effects from temperature increase

Changes in temperature and weather patterns will alter areas suitable for farming. The current prediction is that temperatures will increase and precipitation will decrease in arid and semi-arid regions (the Middle East, Africa, Australia, the Southwest United States, and Southern Europe). In addition, crop yields in tropical regions will be negatively affected by the projected moderate increase in temperature (1–2 °C) expected to occur during the first half of the century. During the second half of the century, further warming is projected to decrease crop yields in all regions, including Canada and the Northern United States.
Many staple crops are extremely sensitive to heat, and when temperatures rise past crop-specific thresholds, soybean seedlings are killed and corn pollen loses its vitality. Higher winter temperatures and more frost-free days in some regions can currently be disruptive, as they can cause a phenological mismatch between the flowering time of plants and the activity of pollinators, threatening their reproductive success. In the longer term, however, they would result in longer growing seasons. For example, a 2014 study found that maize yields in the Heilongjiang region of China increased by between 7 and 17% per decade as a result of rising temperatures. On the other hand, a 2017 meta-analysis comparing data from four different methods of estimating the effect of warming (two types of climate model, statistical regressions, and field experiments where the land around certain crops was warmed by a set amount to compare them with controls) concluded that on a global scale, warming alone has consistently negative effects on the yields of the four most important crops, suggesting that any increases would be down to precipitation changes and the CO2 fertilization effect.

Heat stress in livestock

Changes in agricultural water availability and reliability

Both droughts and floods contribute to decreases in crop yields. On average, climate change increases the overall amount of water contained in the atmosphere by about 7% per degree Celsius of warming, thus increasing precipitation. However, this increase in precipitation is not distributed evenly in space (atmospheric circulation patterns already cause different areas to receive different amounts of rainfall) or time: heavy rainfall, with the potential to cause floods, becomes more frequent. This means that under the probable mid-range climate change scenario, SSP2-4.5, precipitation events globally will become larger by 11.5%, yet the time between them will increase by an average of 5.1%. Under the highest-emission scenario, SSP5-8.5, there will be an 18.5% increase in the size of events and a 9.6% increase in the duration between them. At the same time, water losses by plants through evapotranspiration will increase almost everywhere due to higher temperatures. While the CO2 fertilization effect also reduces such losses by plants, which effect dominates depends on the area's climate. As such, the 2020–2023 Horn of Africa drought has been primarily attributed to the great increase in evapotranspiration exacerbating the effect of persistently low rainfall, which would have been more manageable in the cooler preindustrial climate. In total, this means that droughts have been occurring more frequently on average because of climate change. Africa, southern Europe, the Middle East, most of the Americas, Australia, and South and Southeast Asia are the parts of the globe where droughts are expected to become more frequent and intense in spite of the global increase in precipitation. Droughts disturb agricultural water supplies, and these effects can be aggravated by population growth and urban expansion spurring increased demand for water. The ultimate outcome is water scarcity, which results in crop failures and the loss of pasture grazing land for livestock, exacerbating pre-existing poverty in developing countries and leading to malnutrition and potentially famine. Irrigation of crops is able to reduce or even remove the effects of lower rainfall and higher temperatures on yields, partly through localized cooling. However, using water resources for irrigation has downsides and is expensive.
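The roughly 7% per degree Celsius figure quoted above follows from the Clausius–Clapeyron relation for saturation vapor pressure. A minimal sketch of that scaling, using the standard August–Roche–Magnus approximation; the 15 °C reference temperature is an illustrative assumption, not a value from the studies cited here:

```python
import math

def saturation_vapor_pressure(t_celsius: float) -> float:
    """August-Roche-Magnus approximation of saturation vapor pressure, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Fractional gain in the atmosphere's water-holding capacity per 1 degree C,
# evaluated around an illustrative surface temperature of 15 degrees C.
t = 15.0
gain = saturation_vapor_pressure(t + 1.0) / saturation_vapor_pressure(t) - 1.0
print(f"{gain:.1%}")  # about 6-7%, consistent with the figure quoted above
```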
Further, some sources of irrigation water may become less reliable. This includes irrigation driven by water runoff from glaciers during the summer, as there has already been an observed retreat of glaciers since 1850; this retreat is expected to continue, depleting the glacial ice and reducing or outright eliminating runoff. In Asia, global warming of 1.5 °C will reduce the ice mass of Asia's high mountains by about 29–43%. Approximately 2.4 billion people live in the drainage basin of the Himalayan rivers. In India alone, the river Ganges provides water for drinking and farming for more than 500 million people. In the Indus River watershed, these mountain water resources contribute up to 60% of irrigation outside of the monsoon season, and an additional 11% of total crop production. Since the effects of climate change on the water cycle are projected to substantially increase precipitation in all but the westernmost parts of the watershed, the loss of the glaciers is expected to be offset; however, agriculture in the region would become more reliant on the monsoon than ever, and hydropower generation would become less predictable and reliable.

Effects on plants caused by increasing atmospheric CO2 and methane

Elevated atmospheric carbon dioxide affects plants in a variety of ways. Elevated CO2 increases crop yields and growth through an increase in photosynthetic rate, and it also decreases water loss as a result of stomatal closing.

Higher yields due to CO2 fertilization

Reduced nutritional value of crops

Changes in atmospheric carbon dioxide may reduce the nutritional quality of some crops, with for instance wheat having less protein and less of some minerals. The nutritional quality of C3 plants (e.g. wheat, oats, rice) is especially at risk: lower levels of protein as well as minerals (for example zinc and iron) are expected. Common food crops could see a reduction of 3 to 17% in protein, iron and zinc content. This is the projected result of food grown under the atmospheric carbon dioxide levels expected in 2050. Using data from the UN Food and Agriculture Organization as well as other public sources, the authors analysed 225 different staple foods, such as wheat, rice, maize, vegetables, roots and fruits. The effect of increased levels of atmospheric carbon dioxide on the nutritional quality of plants is not limited to the above-mentioned crop categories and nutrients. A 2014 meta-analysis has shown that crops and wild plants exposed to elevated carbon dioxide levels at various latitudes have lower densities of several minerals such as magnesium, iron, zinc, and potassium. Studies using Free-Air Concentration Enrichment have also shown that increases in CO2 lead to decreased concentrations of micronutrients in crop and non-crop plants, with negative consequences for human nutrition, including decreased B vitamins in rice. This may have knock-on effects on other parts of ecosystems, as herbivores will need to eat more food to gain the same amount of protein. Empirical evidence shows that increasing levels of CO2 result in lower concentrations of many minerals in plant tissues. Doubling CO2 levels results in an 8% decline, on average, in the concentration of minerals. Declines in magnesium, calcium, potassium, iron, zinc and other minerals in crops can worsen the quality of human nutrition. Researchers report that the CO2 levels expected in the second half of the 21st century will likely reduce the levels of zinc, iron, and protein in wheat, rice, peas, and soybeans.
Some two billion people live in countries where citizens receive more than 60 per cent of their zinc or iron from these types of crops. Deficiencies of these nutrients already cause an estimated loss of 63 million life-years annually. Alongside the decrease in minerals, evidence shows that plants contain 6% more carbon, 15% less nitrogen, 9% less phosphorus, and 9% less sulfur under doubled CO2 conditions. The increase in carbon is mostly attributed to carbohydrates without a structural role in plants – the human-digestible, calorie-providing starch and simple sugars. The decrease in nitrogen translates directly into a decrease in protein content. As a result, higher CO2 levels reduce not only a plant's micronutrients, but also the quality of its macronutrient combination.

Increasing damages from surface-level ozone

Anthropogenic methane emissions contribute significantly to warming due to the high global warming potential of methane. At the same time, methane also acts as a precursor to surface ozone, which is a significant air pollutant. Its effects include lowering physiological functions and therefore the yield and quality of crops. Following the rise in methane levels, tropospheric ozone levels "increased substantially since the late 19th century", and according to a 2016 estimate, the four major crops (see the later section) experienced yield losses of 5±1.5% relative to a no-climate-change scenario due to ozone increases alone. This is nearly half of the negative effect caused by the other effects of climate change (10.9±3.2%), and it cancels out most of the CO2 fertilization effect (6.5±1.0%).

Changes in the extent and quality of agricultural land

Erosion and soil fertility

The warmer atmospheric temperatures observed over the past decades are expected to lead to a more vigorous hydrological cycle, including more extreme rainfall events. Erosion and soil degradation are therefore more likely to occur. Soil fertility would also be affected by global warming. Increased erosion in agricultural landscapes from anthropogenic factors can occur with losses of up to 22% of soil carbon in 50 years. Climate change will also cause soils to warm. In turn, this could cause the soil microbe population size to increase dramatically, by 40–150%. Warmer conditions would favour the growth of certain bacteria species, shifting the bacterial community composition. Elevated carbon dioxide would increase the growth rates of plants and soil microbes, slowing the soil carbon cycle and favouring oligotrophs, which are slower-growing and more resource-efficient than copiotrophs.

Agricultural land loss from sea level rise

A rise in sea level would result in a loss of agricultural land, particularly in areas such as South East Asia. Erosion, submergence of shorelines and salinization of the water table due to rising sea levels would mainly affect agriculture through the inundation of low-lying lands. Low-lying areas such as Bangladesh, India and Vietnam will experience major losses of rice crops if sea levels rise as expected by the end of the century. Vietnam, for example, relies heavily on its southern tip, where the Mekong Delta lies, for rice planting. A one-metre rise in sea level would cover several square kilometres of rice paddies in Vietnam. Besides simply flooding agricultural land, sea level rise can also cause saltwater intrusion into freshwater wells, particularly if they are already below sea level. Once the concentration of saltwater exceeds 2–3%, the well becomes unusable.
Notably, areas along an estimated 15% of the US coastline already have the majority of local groundwater below sea level.

Thawing of potentially arable land

Climate change may increase the amount of arable land by reducing the amount of frozen land. A 2005 study reported that the temperature in Siberia had increased by an average of three degrees Celsius since 1960 (much more than the rest of the world). However, reports about the effect of global warming on Russian agriculture indicate conflicting probable effects: while they expect a northward extension of farmable lands, they also warn of possible productivity losses and an increased risk of drought. The Arctic region is expected to benefit from increased opportunities for agriculture and forestry.

Response of insects, plant diseases and weeds

Climate change will alter pest, plant disease and weed distributions, with the potential to reduce crop yields, including those of staple crops like wheat, soybeans, and corn (maize). Warmer temperatures can increase the metabolic rate and the number of breeding cycles of insect populations. Historically, cold temperatures at night and in the winter months would kill off insects, bacteria and fungi. The warmer, wetter winters are promoting fungal plant diseases like wheat rusts (stripe and brown/leaf) and soybean rust, allowing them to travel northward. The increasing incidence of flooding and heavy rains also promotes the growth of various other plant pests and diseases.

Insect pollinators and pests

Climate change is expected to have a negative effect on many insects, greatly reducing their species distributions and thus increasing their risk of going extinct. Around 9% of agricultural production is dependent in some way on insect pollination, and some pollinator species are also adversely affected, with wild bumblebees known to be particularly vulnerable to recent warming. At the same time, insects are the most diverse animal taxa, and some species will benefit from the changes, including notable agricultural pests and disease vectors. Insects that previously had only two breeding cycles per year could gain an additional cycle if warm growing seasons extend, causing a population boom. Temperate places and higher latitudes are more likely to experience a dramatic change in insect populations: for instance, the mountain pine beetle epidemic in British Columbia, Canada killed millions of pine trees, partially because the winters were not cold enough to slow or kill the growing beetle larvae. Likewise, the potato tuber moth and the Colorado potato beetle are predicted to spread into areas currently too cold for them. Further, the effects of climate change on the water cycle often mean that both wet seasons and drought seasons will become more intense. Some insect species will breed more rapidly because they are better able to take advantage of such changes in conditions. This includes certain insect pests, such as aphids and whiteflies; similarly, locust swarms could also cause more damage as a result. A notable example was the 2019–2022 locust infestation focused on East Africa, considered the worst of its kind in many decades. The fall armyworm, Spodoptera frugiperda, is a highly invasive plant pest which can cause massive damage to crops, especially maize. In recent years, it has spread to countries in sub-Saharan Africa, and this spread is linked to climate change. It is expected that such highly invasive crop pests will spread to other parts of the planet, since they have a high capacity to adapt to different environments.
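The link between warmer growing seasons and extra insect generations can be illustrated with the standard growing-degree-day (GDD) bookkeeping used in agronomy. A minimal sketch; the base temperature, season length and per-generation requirement below are hypothetical illustration values, not figures from the studies cited above:

```python
def growing_degree_days(t_min: float, t_max: float, t_base: float = 10.0) -> float:
    """Degree-days accumulated in one day: mean temperature above a base threshold."""
    return max(0.0, (t_min + t_max) / 2.0 - t_base)

# Hypothetical 180-day season with daily minima of 12 C and maxima of 24 C.
season = [(12.0, 24.0)] * 180
baseline = sum(growing_degree_days(lo, hi) for lo, hi in season)
warmed = sum(growing_degree_days(lo + 2.0, hi + 2.0) for lo, hi in season)

# A uniform +2 C shift adds 360 degree-days here - potentially enough for an
# extra generation of a pest needing, say, ~300 degree-days per generation.
print(baseline, warmed, warmed - baseline)  # 1440.0 1800.0 360.0
```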
Invasive plant species

Weeds

A changing climate may favour the more biologically diverse weeds over the monocrops on many farms. Characteristics of weeds such as their genetic diversity, cross-breeding ability and fast growth rates put them at an advantage in changing climates, as these characteristics allow them to adapt more readily than most farms' uniform crops. Weeds also undergo the same acceleration of cycles as cultivated crops, and would likewise benefit from CO2 fertilization. Since most weeds are C3 plants, they are likely to compete even more strongly than now against C4 crops such as corn. Increased CO2 levels are also expected to increase the tolerance of weeds to herbicides, reducing herbicide efficacy. However, this may be counteracted by increased temperatures elevating herbicide effectiveness.

Plant pathogens

Currently, pathogens result in losses of 10–16% of the global harvest, and this level is likely to rise as plants are at an ever-increasing risk of exposure to pests and pathogens. Research has shown that climate change may alter the developmental stages of plant pathogens that can affect crops. This includes several pathogens associated with potato blackleg disease (e.g. Dickeya), as they grow and reproduce faster at higher temperatures. The warming is also expected to elevate food safety issues and food spoilage caused by mycotoxin-producing fungi and bacteria such as Salmonella. Climate change would cause an increase in rainfall in some areas, which would lead to an increase in atmospheric humidity and the duration of the wet seasons. Combined with higher temperatures, these conditions could favour the development of fungal diseases such as late blight, or bacterial infections such as Ralstonia solanacearum, which may also be able to spread more easily through flash flooding. Climate change has the capability of altering pathogen and host interactions, specifically the rates of pathogen infection and the resistance of the host plant. Plant diseases also carry economic costs, both from growing crops that yield less profit and from treating and managing already diseased crops. For instance, soybean rust is a vicious plant pathogen that can kill off entire fields in a matter of days, devastating farmers and costing billions in agricultural losses. Changes in weather patterns and temperature due to climate change lead to the dispersal of plant pathogens, as hosts migrate to areas with more favourable conditions. This increases crop losses due to diseases. For instance, aphids act as vectors for many potato viruses and will be able to spread further due to increased temperatures.

Effects on crop yields

Observed effects

According to the IPCC Sixth Assessment Report from 2022, there is high confidence that in and of itself, climate change to date has had primarily negative effects on both crop yields and the quality of produce, although there has been some regional variation: more negative effects have been observed for some crops in low latitudes (maize and wheat), while positive effects of climate change have been observed for some crops in high latitudes (maize, wheat, and sugar beets). For instance, during the period 1981 to 2008, global warming had negative effects on wheat yields, especially in tropical regions, with average global yields decreasing by 5.5%.
A study in 2019 tracked ~20,000 political units globally for 10 crops (maize, rice, wheat, soybean, barley, cassava, oil palm, rapeseed, sorghum and sugarcane), providing more detail on the spatial resolution and a larger number of crops than previously studied. It found that crop yields across Europe, sub-Saharan Africa and Australia had in general decreased because of climate change (compared to the baseline value of 2004–2008 average data), though exceptions are present. The effect of global climate change on the yields of different crops from climate trends ranged from −13.4% (oil palm) to 3.5% (soybean). The study also showed that effects are generally positive in Latin America; effects in Asia and Northern and Central America are mixed. While the Green Revolution had ensured the growth of overall crop production per land area by 250% to 300% since the 1960s, with around 44% of that attributed to newer crop varieties alone, it is believed this growth would have been even greater without the counteracting role of climate change on major crop yields over the same period. Between 1961 and 2021, global agricultural productivity could have been 21% greater than it actually was, had it not had to contend with climate change. Such shortfalls would have affected the food security of vulnerable populations the most: a study in 2019 showed that climate change has already increased the risk of food insecurity in many food-insecure countries. Even in developed countries such as Australia, extreme weather associated with climate change has been found to cause a wide range of cascading spillovers through supply chain disruption, in addition to its primary effect on the fruit, vegetable and livestock sectors and the rural communities reliant on them. Between 1961 and 1985, cereal production more than doubled in developing nations, largely due to the development of irrigation, fertilizer and seed varieties. Even in the absence of further scientific/technological developments, many of the existing advancements have not been evenly distributed, and their spread from the developed world to the developing world is expected to drive some improvements on its own. Further, agricultural expansion has slowed down in recent years, but this trend is widely expected to reverse in the future in order to maintain the global food supply under all but the most optimistic climate change scenarios consistent with the Paris Agreement.

Generalized yield projections

In 2007, the IPCC Fourth Assessment Report suggested that global production potential would increase up to around 3 °C of globally averaged warming, as productivity increases for cereals in high latitudes would outweigh decreases in the low latitudes, and global aggregate yields of rain-fed agriculture would increase by 5–20% in the first half of the 21st century. Warming exceeding this level would very likely see global declines in yields. Since then, subsequent reports have been more negative on the global production potential. The US National Research Council assessed the literature on the effects of climate change on crop yields in 2011 and provided central estimates for key crops. A meta-analysis in 2014 revealed a consensus that yields are expected to decrease in the second half of the century, with greater effect in tropical than temperate regions.

Effects on yields for four major crops

There is a large number of agricultural crops, but not all of them are equally important.
Most climate change assessments focus on the "four major crops" – maize (corn), rice, wheat and soybeans – which are consumed directly and indirectly, as animal feed (the main purpose of soybeans). The three cereals are collectively responsible for half of the total human calorie intake, and together with soybeans, they account for two-thirds. Different methods have been used to project the future yields of these crops, and by 2019, the consensus was that warming would lead to aggregate declines across the four. Maize and soybean yields would decrease with any warming, whereas rice and wheat production might peak at around 3 °C of warming. In 2021, a paper which used an ensemble of 21 climate models estimated that under the most intense climate change scenario used at the time, RCP8.5, global yields of these four crops would decline by between 3 and 12% around 2050 and by 11 to 25% by the year 2100. The losses were concentrated in what are currently the major agricultural producers and exporters. For instance, even by 2050, some agricultural areas of Australia, Brazil, South Africa, Southeast China, Southern Europe and the United States would suffer production losses, mostly of maize and soybeans, exceeding 25%. A similar finding – that some major "breadbaskets" would begin to see unequivocal effects of climate change, both positive and negative, before the year 2040 – had been established in another study from the same year. Since it represents the worst-case scenario of continually increasing emissions with no efforts to reduce them, RCP8.5 is often considered unrealistic, and the less intense RCP4.5 scenario (which still leads to nearly 3 °C of warming by century's end, far in excess of the Paris Agreement goals) is now usually considered a better match for the current trajectory.

Maize

Out of the four crops, maize is considered the most vulnerable to warming, with one meta-analysis concluding that every degree Celsius of global warming reduces maize yields by 7.4%. It is also a C4 carbon fixation plant, meaning that it experiences little benefit from increased CO2 levels. When the results from modelling experiments comparing the combined output of the latest earth system models and dedicated agricultural crop models were published in 2021, the most notable new finding was the substantial reduction in projected global yields of maize. While the previous generation suggested that under the low-warming scenario, maize productivity would increase by around 5% by the end of the century, the latest showed a reduction of 6% under the equivalent scenario, SSP1-2.6. Under the high-emission SSP5-8.5, there was a global decline of 24% by 2100, as opposed to the earlier suggestion of a 1% increase.

Rice

Studies indicate that on their own, temperature changes reduce global rice yields by 3.2% for every degree Celsius of global warming. Projections become more complicated once changes in precipitation, the CO2 fertilization effect and other factors need to be taken into account: for instance, climate effects on rice growth in East Asia had been a net positive so far, although 2023 research suggested that by the end of the century, China could lose up to 8% of its rice yield due to increases in extreme rainfall events alone. As of 2021, global projections of rice yields from the most advanced climate and agricultural models were less consistent than they were for wheat and maize, and less able to identify a clear statistically significant trend.

Wheat

Climate change effects on rainfed wheat will vary depending on the region and local climatic conditions.
Studies in Iran surrounding changes in temperature and rainfall are representative of several different parts of the world, since a wide range of climatic conditions is present there, ranging from temperate to hot-arid to cold semi-arid. Scenarios combining substantial temperature increases with rainfall decreases of up to 25% show that wheat grain yield losses can be significant. The losses can be as much as 45% in temperate areas and over 50% in hot-arid areas, but in cold semi-arid areas yields can increase somewhat (by about 15%). Adaptation strategies with the most promise center around dates for seed planting: late planting in November to January can have significant positive effects on yields due to the seasonality of rainfall. However, those experiments did not consider the effects of CO2 increases. Globally, temperature changes alone are expected to reduce annual wheat yields by 6% for every degree Celsius of global warming. However, other factors, such as precipitation changes and the CO2 fertilization effect, are expected to benefit wheat yields far more. In November 2021, the results from modelling experiments comparing the combined output of the latest earth system models and dedicated agricultural crop models were published. While the experiments projected a consistent decrease in future global yields of maize, particularly under greater warming, they found the opposite for wheat yields. Where the previous generation of models had suggested a 9% increase in global wheat yields by 2100 under the high-emission scenario, the updated results indicate that under the highest-warming SSP5-8.5 scenario, they would increase by 18%.

Soybeans

Studies have shown that when CO2 levels rise, soybean leaves are less nutritious; therefore plant-eating beetles have to eat more to get their required nutrients. In addition, soybeans are less capable of defending themselves against predatory insects under high CO2. Elevated CO2 diminishes the plant's production of jasmonic acid, an insect-killing poison that is excreted when the plant senses it is being attacked. Without this protection, beetles are able to eat the soybean leaves freely, resulting in a lower crop yield. This is not a problem unique to soybeans, and many plant species' defence mechanisms are impaired in a high-CO2 environment. Studies indicate that on their own, temperature changes reduce global soybean yields by 3.1% for every degree Celsius of global warming. These projections become more complicated once changes in precipitation, the CO2 fertilization effect and other factors need to be taken into account: as of 2021, global projections of soybean yields from the most advanced climate and agricultural models were less able to establish a strong trend when compared to the projections for maize and wheat.

Other crops

The effects of climate change induced by increasing greenhouse gases are likely to differ across crops and countries.

Millet and sorghum

Millet and sorghum are not as widely consumed as the four major crops, but they are crucial staples in many African countries. A paper published in 2022 found that under the highest-warming SSP5-8.5 scenario, changes in temperature and soil moisture would reduce the aggregate yields of millet, sorghum, maize and soybeans by between 9% and 32%, depending on the model. Notably, this was a less pessimistic result than in the earlier models, which the authors attributed to simulating soil moisture directly, rather than attempting to account for it indirectly by tracking precipitation changes caused by the effects of climate change on the water cycle.
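As a rough back-of-envelope illustration, the temperature-only, per-degree sensitivities quoted in the preceding subsections for the four major crops can be scaled linearly to a given warming level. This is a deliberately simplified sketch: it assumes linearity and ignores precipitation changes, CO2 fertilization and adaptation, all of which the studies above treat as major caveats:

```python
# Temperature-only global yield sensitivities (percent per degree Celsius of
# global warming), as quoted in the sections above.
SENSITIVITY_PCT_PER_DEGC = {"maize": -7.4, "wheat": -6.0, "rice": -3.2, "soybean": -3.1}

def yield_change_pct(crop: str, warming_degc: float) -> float:
    """Linear temperature-only yield change, in percent (no CO2 or rainfall effects)."""
    return SENSITIVITY_PCT_PER_DEGC[crop] * warming_degc

for crop in SENSITIVITY_PCT_PER_DEGC:
    print(f"{crop}: {yield_change_pct(crop, 2.0):+.1f}% at +2 C of warming")
```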
Lentils (besides soybeans)

Climate-change-induced drought stress in Africa will likely lead to a reduction in the nutritional quality of the common bean. This would primarily impact populations in poorer countries, who are less able to compensate by eating more food, eating more varied diets, or possibly taking supplements.

Potatoes

As well as affecting potatoes directly, climate change will also affect the distributions and populations of many potato diseases and pests. For instance, late blight is predicted to become a greater threat in some areas (e.g. in Finland) and a lesser threat in others (e.g. in the United Kingdom). Altogether, one 2003 estimate suggests that future (2040–2069) worldwide potato yields would be 18–32% lower than they were at the time, driven by declines in hotter areas like Sub-Saharan Africa, unless farmers and potato cultivars can adapt to the new environment.

Grapevines (wine production)

Effects on livestock rearing

Global food security and undernutrition

Scientific understanding of how climate change would affect global food security has evolved over time. The latest IPCC Sixth Assessment Report in 2022 suggested that by 2050, the number of people at risk of hunger will increase under all scenarios by between 8 and 80 million people, with nearly all of them in Sub-Saharan Africa, South Asia and Central America. However, this comparison was done relative to a world where no climate change had occurred, and so it does not rule out the possibility of an overall reduction in hunger risk when compared to present-day conditions. The earlier Special Report on Climate Change and Land suggested that under a relatively high emission scenario (RCP6.0), cereals may become 1–29% more expensive in 2050 depending on the socioeconomic pathway. Compared to a scenario where climate change is absent, this would put between 1 and 181 million people with low income at risk of hunger. It is difficult to project the effect of climate change on utilization (protecting food against spoilage, being healthy enough to absorb nutrients, etc.). In 2016, a modelling study suggested that by mid-century, the most intense climate change scenario would reduce per capita global food availability by 3.2%, with a 0.7% decrease in red meat consumption and a 4% decrease in fruit and vegetable consumption. According to its numbers, 529,000 people would die between 2010 and 2050 as a result, primarily in South Asia and East Asia; two-thirds of those deaths would be caused by the lack of micronutrients from reduced fruit and vegetable supply, rather than by outright starvation. Acting to slow climate change would reduce these projections by up to 71%. Food prices are also expected to become more volatile. As of 2017, around 821 million people suffered from hunger. This was equivalent to about 11% of the world population; regionally, this included 23.2% of sub-Saharan Africa, 16.5% of the Caribbean and 14.8% of South Asia. In 2021, 720 million to 811 million people were considered undernourished (of whom 200,000, 32.3 million and 112.3 million people were at "catastrophic", "emergency" and "crisis" levels of food insecurity, respectively). In 2020, research suggested that the baseline projected level of socioeconomic development (Shared Socioeconomic Pathway 2) would reduce this number to 122 million globally by 2050, even as the population grows to reach 9.2 billion.
The effect of climate change would at most increase this 2050 figure by around 80 million, and the negative effect could be reduced to 20 million by enabling easier food trade through measures such as eliminating tariffs. In 2021, a meta-analysis of 57 studies on food security was more pessimistic, suggesting that the year-2050 population at risk of hunger would be around 500 million under SSP2. Some variations of the Shared Socioeconomic Pathways with high climate change and a lack of equitable global development instead resulted in an outright increase of global hunger by up to 30% from its 2010 levels. For the earlier IPCC Fourth Assessment Report in 2007, the analysis of the four main SRES pathways had shown with medium confidence (about 50% certainty) that trends of social and economic development in three of them (A1, B1, B2) would see the number of undernourished people decline to 100–130 million people by the year 2080, while trends in A2 projected 770 million undernourished – similar to the contemporary (early 21st century) figures of ~700 million people. Once the effect of climate change implied by those scenarios was taken into account, the A1, B1 and B2 scenarios would see 100–380 million undernourished by 2080 (still a major decline in hunger from 2006 levels), and A2 would see 740–1,300 million, although there was only low (20% certainty) to medium confidence in these figures. Sub-Saharan Africa would likely overtake Asia as the world's most food-insecure region, primarily due to differing socioeconomic trends.

Effects of extreme weather and synchronized crop failures

Some scientists consider the aforementioned projections of crop yields and food security to be of limited use, because in their view, they primarily model climate change as a change in the mean climate state, and are not as well equipped to consider climatic extremes. For instance, a paper published in 2021 also attempted to calculate the number of people facing hunger in 2050 – but now on the assumption that a climate event with a 1% (i.e. once in 100 years) likelihood of occurring in the new climate (meaning it would have been effectively impossible in the present climate) were to strike that year. It estimated that such an event would increase the baseline number by 11–33% even in the low-emission scenario, and by 20–36% in the high-emission one. If such an event were to affect more vulnerable regions like South Asia, they would require triple their 2021 level of known food reserves to absorb the blow. Notably, other papers show that simulating recent historical extreme events in climate models, such as the 2003 European heatwave, typically results in smaller effects than what had been observed in the real world, indicating that the effects of future extreme events are also likely to be underestimated. The difference between climatic means and extremes may be particularly important for determining areas where agriculture may stop being viable. In 2021, a research team aimed to extend climate model projections of mean changes in temperature and the water cycle to the year 2500. They suggested that under the second-strongest warming scenario, RCP6.0, land area capable of supporting four major temperate crops (maize, potato, soybean and wheat) would become about 11% smaller by 2100 and 18.3% smaller by 2500, while for major tropical crops (cassava, rice, sweet potato, sorghum, taro, and yam), it would decline by only 2.3% around 2100, yet by around 15% by 2500.
Under the low-emission scenario RCP2.6, the changes are much smaller, with around a 3% decline in suitable land area for temperate crops by 2500 and an equivalent gain for tropical crops by then. Yet another paper from 2021 suggested that by 2100, under the high-emission SSP5-8.5, 31% and 34% of the current crop and livestock production, respectively, would leave what the authors defined as a "safe climatic space": that is, those areas (most of South Asia and the Middle East, as well as parts of sub-Saharan Africa and Central America) would experience a very rapid shift in Holdridge life zones (HLZ) and the associated weather, while also being low in social resilience. Notably, a similar fraction of global crop and livestock production would also experience a large change in HLZ, but in more developed areas, which would have better chances of adapting. In contrast, under the low-emission SSP1-2.6, 5% and 8% of crop and livestock production would leave what is defined as the safe climatic space. Also in 2021, it was suggested that the high-emission scenario would result in a 4.5-fold increase in the probability of breadbasket failures (defined as a yield loss of 10% or more) by 2030, which could then increase 25 times by 2050. This corresponds to reaching the 1.5 °C and 2 °C thresholds under that scenario: earlier research suggested that for maize, this would increase the risks of multiple simultaneous breadbasket failures (yield loss of 10% or more) from 6% under the late-20th-century climate to 40% and 54%, respectively. Some countries are particularly dependent on imports from certain exporters, meaning that a crop failure in those countries would hit them disproportionately. For instance, a ban on the export of staple crops from Russia, Thailand and the United States alone would place around 200 million people (90% from Sub-Saharan Africa) at risk of starvation. Additionally, there is the issue of synchronization – where extreme climate events happen to strike multiple important producer regions around the same time. It was estimated that if, hypothetically, every region with a synchronized growing season were to experience crop failure at the same time, it would cause losses of the four major crops of between 17% and 34%. More realistically, analysis of historic data suggested that there have already been synchronized climate events associated with yield losses of up to 20%. According to a 2016 estimate, if global maize, rice and wheat exports declined by 10%, 55 million people in 58 poor countries would lose at least 5% of their food supply. Further, two specific Rossby wave patterns are known to induce simultaneous heat extremes in either Eastern Asia, Eastern Europe and Central North America, or in Western Asia, Western Europe and Western Central North America, respectively. These heat extremes have already been shown to cause 3–4% declines in crop yields across the affected regions; yet, concerningly, climate models overestimate the effects of such historic events in North America and underestimate them elsewhere, simulating effectively no net yield loss.

Labour and economic effects

As extreme weather events become more common and more intense, floods and droughts can destroy crops and eliminate food supply, while disrupting agricultural activities and rendering workers jobless. With more costs to the farmer, some will no longer find it financially feasible to farm: for example, some farmers may choose to permanently leave drought-affected areas.
Agriculture employs the majority of the population in most low-income countries, and increased costs can result in worker layoffs or pay cuts. Other farmers will respond by raising their food prices, a cost that is directly passed on to the consumer and affects the affordability of food. Some farms do not sell their produce but instead feed a family or community; without that food, people will not have enough to eat. This results in decreased production, increased food prices, and potential starvation in parts of the world. Agriculture provides 52% of employment in India, and the Canadian Prairies supply 51% of Canadian agriculture; any changes in the production of food crops from these areas could have profound effects on the economy. Notably, one estimate suggests that warming relative to the late 20th century at a level associated with the SSP5-8.5 scenario (an even larger increase when compared to preindustrial temperatures) would cause labour capacity in Sub-Saharan Africa and Southeast Asia to decline by 30 to 50%, as the number of days when outdoor workers experience heat stress increases: up to 250 days in the worst-affected parts of these two regions and of Central and South America. This could then increase crop prices by around 5%. Similarly, the North China Plain is also expected to be highly affected, in part due to the region's extensive irrigation networks resulting in unusually moist air. In scenarios without aggressive action to stop climate change, some heatwaves could become extreme enough to cause mass mortality in outdoor labourers, although they will remain relatively uncommon (up to around once per decade starting from 2100 under the most extreme scenario). Further, the role of climate change in undernutrition and micronutrient deficiencies can be calculated as the loss of "years of full health". One estimate presented in 2016 suggests that under a scenario of strong warming and low adaptation due to high global conflict and rivalry, such losses may amount to 0.4% of the global GDP and 4% of GDP in India and the South Asian region by the year 2100.

Long-term predictions (beyond 2050)

There are fewer projections looking beyond 2050. In general, even as climate change would cause increasingly severe effects on food production, most scientists do not anticipate it resulting in mass human mortality within this century. This is in part because the studies also anticipate at least some continuation of ongoing agricultural improvements, and in part because of agricultural expansion. For instance, a 2013 paper estimated that if the high warming of the RCP8.5 scenario were not alleviated by the CO2 fertilization effect, it would reduce aggregate yields by 17% by the year 2050; yet, it anticipated that this would be mostly offset through an 11% increase in cropland area. Similarly, one of the assumptions of the Shared Socioeconomic Pathways is a significant increase in land allocated to agriculture (and a corresponding decrease in forest and "other natural land" area) in every pathway besides SSP1 (officially subtitled "Sustainability" or "Taking the Green Road"), where the inverse occurs – and which has both the lowest level of future warming and the lowest projected population growth.

Regional effects

Africa

Asia

For East and Southeast Asia, an estimate in 2007 stated that crop yields could increase by up to 20% by the mid-21st century. In Central and South Asia, projections suggested that yields might decrease by up to 30% over the same time period.
Taken together, the risk of hunger was projected to remain very high in several developing countries. Climate change affects different Asian countries in different ways. China, for example, may benefit from a moderate temperature increase accompanied by carbon fertilization, leading to a 3% gain worth US$18 billion per year; India, however, would face two-thirds of the continent's aggregate agricultural losses, because its high crop net revenue suffers from high spring temperatures. In the Indo-Gangetic plain of India, heat stress and water availability are predicted to have significant negative effects on the yield of wheat. The direct effects of increased mean and maximum temperatures are predicted to reduce wheat yields by up to 10%. The effect of reduced availability of water for irrigation is more significant, running at yield losses of up to 35%. In Bangladesh, climate change is expected to decrease livestock production through diseases, scarcity of forage, heat stress and disrupted breeding strategies.

Australia and New Zealand

Without further adaptation to climate change, the projected effects would likely be substantial. By 2030, production from agriculture and forestry was projected to decline over much of southern and eastern Australia, and over parts of eastern New Zealand. In New Zealand, initial benefits were projected close to major rivers and in western and southern areas.

Europe

For Southern Europe, it was predicted in 2007 that climate change would reduce crop productivity. In Central and Eastern Europe, forest productivity was expected to decline. In Northern Europe, the initial effect of climate change was projected to increase crop yields. The 2019 European Environment Agency report "Climate change adaptation in the agricultural sector in Europe" again confirmed this. According to this 2019 report, projections indicate that yields of non-irrigated crops like wheat, corn and sugar beet would decrease in southern Europe by up to 50% by 2050 (under a high-end emission scenario). This could result in a substantial decrease in farm income by that date. Farmland values are also projected to decrease in parts of southern Europe by more than 80% by 2100, which could result in land abandonment. Trade patterns are also expected to be affected, in turn affecting agricultural income. In addition, increased food demand worldwide could exert pressure on food prices in the coming decades. In Ukraine, where temperatures are increasing throughout the year and precipitation is predicted to increase, yields of winter wheat (wheat sown in winter) could increase by 20–40% in the north and northwestern regions by 2050, as compared to 2010.

Latin America

The major agricultural products of Latin America include livestock and grains such as maize, wheat, soybeans, and rice. Increased temperatures and altered hydrological cycles are predicted to translate to shorter growing seasons, overall reduced biomass production, and lower grain yields. Brazil, Mexico and Argentina alone contribute 70–90% of the total agricultural production in Latin America. In these and other dry regions, maize production is expected to decrease. A study summarising a number of impact studies of climate change on agriculture in Latin America indicated that wheat yields are expected to decrease in Brazil, Argentina and Uruguay. Livestock, which is the main agricultural product for parts of Argentina, Uruguay, southern Brazil, Venezuela, and Colombia, is likely to be reduced.
Variability in the degree of production decrease among different regions of Latin America is likely. For example, one 2003 study that estimated future maize production in Latin America predicted that by 2055, maize production in eastern Brazil would see moderate changes, while Venezuela would see drastic decreases. Increased rainfall variability has been one of the most devastating consequences of climate change in Central America and Mexico. From 2009 to 2019, the region saw years of heavy rainfall in between years of below-average rainfall. The spring rains of May and June have been particularly erratic, posing issues for farmers who plant their maize crops at the onset of the spring rains. Most subsistence farmers in the region have no irrigation and thus depend on the rains for their crops to grow. In Mexico, only 21% of farms are irrigated, leaving the remaining 79% dependent on rainfall. Suggested adaptation strategies to mitigate the effects of global warming on agriculture in Latin America include using plant breeding technologies and installing irrigation infrastructure.

North America

Droughts are becoming more frequent and intense in arid and semiarid western North America as temperatures rise, advancing the timing and magnitude of spring snowmelt floods and reducing river flow volume in summer. Direct effects of climate change include increased heat and water stress, altered crop phenology, and disrupted symbiotic interactions. These effects may be exacerbated by climate-driven changes in river flow, and the combined effects are likely to reduce the abundance of native trees in favour of non-native herbaceous and drought-tolerant competitors, reduce the habitat quality for many native animals, and slow litter decomposition and nutrient cycling. Climate change effects on human water demand and irrigation may intensify these effects. In Canada, notable increases are predicted for spring-sown wheat.

Adaptation

Climate change adaptation measures may reduce the risk of negative effects on agriculture from climate change. Adaptation can occur through changes in management practices, agricultural innovation, institutional changes, and climate-smart agriculture. To create a sustainable food system, these measures are considered as essential as the changes needed to reduce global warming in general. Agricultural innovation is essential to addressing the potential issues of climate change. This includes better management of soil, water-saving technology, matching crops to environments, introducing different crop varieties, crop rotations, appropriate fertilization use, and supporting community-based adaptation strategies. On a government and global level, research and investments into agricultural productivity and infrastructure must be made to get a better picture of the issues involved and the best methods to address them. Government policies and programs must provide environmentally sensitive subsidies, educational campaigns, and economic incentives, as well as funds, insurance, and safety nets for vulnerable populations. In addition, providing early warning systems and accurate weather forecasts to poor or remote areas will allow for better preparation.

Greenhouse gas emissions from agriculture
Physical sciences
Climate change
Earth science
53165654
https://en.wikipedia.org/wiki/Descriptor%20%28chemistry%29
Descriptor (chemistry)
In chemical nomenclature, a descriptor is a notational prefix placed before the systematic substance name, which describes the configuration or the stereochemistry of the molecule. Some of the listed descriptors should not be used in publications, as they no longer correspond with the recommendations of the IUPAC. Stereodescriptors are often used in combination with locants to identify a chemical structure unambiguously. The descriptors, usually placed at the beginning of the systematic name, are not taken into account in alphabetical sorting.

Configuration descriptors

cis, trans

See: cis–trans isomerism

The descriptors cis (Latin, on this side of) and trans (Latin, over, beyond) are used in various contexts for the description of chemical configurations: In organic structural chemistry, the configuration of a double bond can be described with cis and trans, in case it has a simple substitution pattern with only two residues. The position of two residues relative to one another at different points in a ring system or a larger molecule can also be described with cis and trans if the structure's configuration is rigid and does not allow simple inversion. In inorganic complex chemistry, the descriptors cis and trans are used to characterize the positional isomers in octahedral complexes with A2B4X configuration or square planar complexes with A2B2X configuration. The typographic presentation of cis and trans is italicised and in lower-case letters. The cis/trans nomenclature is not unambiguous for more highly substituted double bonds and has nowadays largely been replaced by the (E)/(Z) nomenclature.

(E), (Z)

See: E–Z notation

The descriptors (E) (from German entgegen, 'opposite') and (Z) (from German zusammen, 'together') are used to provide a distinct description of the substitution pattern of alkenes, cumulenes or other double bond systems such as oximes. The attribution of (E) or (Z) is based on the relative position of the two substituents of highest priority on each side of the double bond, with the priority determined by the CIP rules. The (E)/(Z) nomenclature can be applied to any double bond system (including those with heteroatoms), but not to substituted ring systems. The descriptors (E) and (Z) are always capitalized, set in italics, and surrounded by parentheses that are set in normal type, just like additional locants or commas.

o-, m-, p-

See: Arene substitution pattern

The abbreviations o- (short for ortho, from Greek orthós for upright, straight), m- (meta, Greek (roughly) for between) and p- (para, from Greek pará for adjoining, to the side) describe the three possible positional isomers of two substituents on a benzene ring. These are usually two independent single substituents, but in the case of fused ring systems, ortho-fusing is also mentioned, unless the substitution pattern is accounted for in the name, as in [2.2]paracyclophane. In the current systematic nomenclature, o-, m- and p- are often replaced by locants (1,2-dimethylbenzene instead of o-xylene). o-, m- and p- (written out as ortho-, meta- and para-) are written in lower-case letters and italics.

exo, endo

See: Endo–exo isomerism

exo (from Greek exō = outside) and endo (from Greek endon = inside) denote the relative configuration of bridged bicyclic compounds. The position of a substituent in the main ring relative to the shortest bridge is decisive for the assignment of exo or endo (according to IUPAC: the bridge with the highest locant digits in the bridged ring system).
The substituent to be classified is attributed the exo descriptor when facing the bridge; it is endo configured when facing away from the bridge. If two different substituents are located on the same C atom, the exo/endo assignment is based on the substituent with the higher priority according to the CIP rules.

syn, anti

If a bridged bicyclic system carries a substituent on the shortest bridge, the exo or endo descriptor cannot be used for its assignment. Such isomers are classified by the syn/anti notation. If the substituent to be assigned points towards the ring with the higher number of segments, it is syn configured (from Greek syn = together). Otherwise it is attributed the anti descriptor (Greek anti = against). If both rings possess an equal number of segments, the ring with the most significant substituent according to the CIP rules is chosen. The use of syn and anti to indicate the configuration of double bonds is nowadays obsolete, especially in the case of aldoximes and hydrazones derived from aldehydes. Here, the compounds were designated as syn configured when the aldehyde H and the O (of the oxime) or the N (of the hydrazone) were cis aligned. These compounds are now described by the (E)/(Z) nomenclature; aldoximes and hydrazones classified as syn are therefore now described as (E) configured. In the context of diastereomers, syn and anti are used to describe groups on the same or opposite sides in the zigzag projection; see Diastereomer#Syn / anti. syn and anti are always written in lower case and italics; locants (if used) are placed in front of the word and separated by hyphens.

fac, mer

The terms fac (from Latin facies, 'external face') and mer (from 'meridional') can specify the arrangement of three identical ligands around the central atom in octahedral complexes. Today, this nomenclature is considered obsolete, but it is still permissible. The prefix fac describes the situation when the three identical ligands occupy the three vertices of one triangular face of the octahedron. In the mer configuration, the three ligands span a plane in which the central atom is located. fac and mer are prefixed to the complex name in lower-case italics.

n, iso, neo, cyclo

The prefixes n (normal), iso (from Greek ísos = equal), neo (Greek néos = young, new) and cyclo (Greek kyklos = circle) are primarily used to describe the arrangement of atoms, usually of carbon atoms in a carbon skeleton. n, iso and neo are no longer used in the systematic nomenclature, but they still appear frequently in trivial names and in laboratory jargon. The prefix n describes a straight-chain carbon skeleton without branches, whereas iso describes a branched skeleton, without specifying any further details; more generally, an iso compound is one that is isomeric with the n compound (a compound in which individual atoms or atomic groups are rearranged). neo is a non-specific term for "new", usually synthetically produced substances or isomers of long-known n compounds or natural substances (for example neomenthol derived from menthol, or neoabietic acid from abietic acid). According to IUPAC, neo is only recommended in neopentane or the neopentyl residue. cyclo is a frequently used prefix for all cyclic and heterocyclic compounds. In many proper names of chemical substances, cyclo is not used as a prefix but is directly part of the name, for example in cyclohexane or cyclooctatetraene. While n, iso and neo are written in lower-case italic letters, for cyclo this is only the case in inorganic compounds.
In organic compounds, "cyclo" is frequently used as a name component, not separated by a hyphen, and is also considered in alphabetical sorting. sec-, tert- The prefixes sec and tert are used to indicate the substituent environment in a molecule. It is thus not the exact position of the substituent that is described, but only the substitution pattern of the adjacent atom (usually a carbon atom). In n-butanol the OH group is attached to a primary carbon atom, in sec-butanol to a secondary carbon, and in tert-butanol to a tertiary carbon atom. The terms sec and tert are considered obsolete and should only be used for unsubstituted sec-butoxy, sec-butyl or tert-butyl groups. There are various spellings such as "sec-butyl", "s-butyl", "sBu" or "bus", which are likewise considered obsolete. spiro In the nomenclature of organic compounds, the prefix "spiro" followed by a von Baeyer descriptor describes ring systems linked by only one common atom, the spiro atom. If several spiro atoms are present in the molecule, the prefix "spiro" is given a multiplying prefix ("dispiro", "trispiro", etc.) corresponding to the number of spiro atoms. Typically "spiro" is set in roman type. catena The term catena (Latin: "chain") is used in inorganic nomenclature to describe linear, chain-like polymers made of identical polyatomic units. One example is the catena-triphosphazenes. Related compounds in organic chemistry are the catenanes. sn The notation sn stands for stereospecific numbering, and indicates a particular way of numbering the carbon atoms in a molecule based on glycerol. Stereodescriptors of absolute configurations (R), (S) See: Cahn–Ingold–Prelog priority rules The stereochemical descriptors (R) (from Latin rectus, 'right') and (S) (from Latin sinister, 'left') are used to describe the absolute configuration of a stereocenter (usually a chiral carbon atom). For this purpose, all substituents at the stereocenter are prioritized according to the CIP rules and the substituent with the lowest priority ("D") is pointed backwards (away from the viewing direction). The stereocenter is (S)-configured if the remaining substituents describe a circle of descending priority ("A" → "B" → "C") to the left; the (R) configuration is assigned if this direction of rotation runs to the right. If a molecule contains several stereocenters, a locant must be placed before each descriptor (for example in (1R,2S)-2-amino-1-phenylpropan-1-ol, the systematic designation of norephedrine). If all stereocenters have the same configuration, the locants can be omitted in favor of an "(all-R)" or "(all-S)" spelling. Typographically, (R) and (S) are set in uppercase italics; the frequently preceding locants, the enclosing round brackets and the commas, on the other hand, are set in roman type. (r), (s) The descriptors (r) and (s) are used to describe the absolute configuration of pseudoasymmetric centers. Pseudoasymmetry occurs when four different substituents are attached to one carbon atom, two of which differ only in their absolute stereochemical configuration. Examples are meso compounds such as the tropane alkaloids; the parent compound is tropine, whose systematic name is (1R,3r,5S)-8-methyl-8-azabicyclo[3.2.1]octan-3-ol. In this structure the C3 atom (the carbon to which the hydroxyl group is attached) is pseudoasymmetric; therefore, the corresponding stereodescriptor in the systematic name is written in lowercase italics rather than uppercase italics as for regular chiral atoms.
D-, L- See: Fischer projection The stereodescriptors D- (from Latin dexter, 'right') and L- (from Latin laevus, 'left') are used to describe the configuration of α-amino acids and sugars. First, the three-dimensional molecule must be converted by a defined convention into a two-dimensional image, the Fischer projection. In it, the C atom with the highest priority according to the normal nomenclature rules is placed at the top and the rest of the carbon chain is arranged vertically underneath. The chiral C atom most remote from the group with the highest priority is used for the assignment of D- or L-: if the residue on this carbon atom (usually an OH group) points to the left, the molecule belongs to the L series; if the residue points to the right, the descriptor D- is used. The descriptors D- and L- are written as small capitals and separated by a hyphen from the rest of the name. d-, l- Sometimes the small-capital D- and L- stereodescriptors mentioned above are mistakenly confused with the obsolete italic d- and l- stereodescriptors, which are equivalent to dextrorotatory and levorotatory optical rotation, i.e. the (+)- and (−)- stereodescriptors, respectively.
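The locant-plus-descriptor notation described above is regular enough to be picked apart mechanically. As a minimal illustration (our own sketch, not part of any IUPAC tooling; the function and pattern names are invented for this example), the following Python snippet uses a regular expression to extract the stereodescriptor list from a systematic name, using the tropine example from the text:

```python
import re

# Matches a parenthesized stereodescriptor block such as "(1R,3r,5S)" at the
# start of a systematic name: each entry is a locant followed by a CIP
# descriptor (R/S for stereocenters, r/s for pseudoasymmetric centers,
# E/Z for double bonds).
DESCRIPTOR_BLOCK = re.compile(r"^\((?P<body>\d+[RSrsEZ](?:,\d+[RSrsEZ])*)\)-")

def stereocenters(name: str) -> list[tuple[int, str]]:
    """Return (locant, descriptor) pairs from a systematic name."""
    match = DESCRIPTOR_BLOCK.match(name)
    if not match:
        return []
    return [(int(entry[:-1]), entry[-1]) for entry in match.group("body").split(",")]

print(stereocenters("(1R,3r,5S)-8-methyl-8-azabicyclo[3.2.1]octan-3-ol"))
# [(1, 'R'), (3, 'r'), (5, 'S')] -- the lowercase 'r' marks the
# pseudoasymmetric C3 atom
```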
Physical sciences
Nomenclature
Chemistry
68728719
https://en.wikipedia.org/wiki/Central%20Asian%20Orogenic%20Belt
Central Asian Orogenic Belt
The Central Asian Orogenic Belt (CAOB), also called the Altaids, is one of the world's largest Phanerozoic accretionary orogens, and thus a leading natural laboratory for studying geologically recent crustal growth. The orogenic belt is bounded by the East European Craton and the North China Craton in the northwest–southeast direction, and by the Siberian Craton and the Tarim Craton in the northeast–southwest direction. It formed through ocean closures from the Neoproterozoic to the late Phanerozoic, from around 750 to 150 Ma. Like many other accretionary orogenic belts, the Central Asian Orogenic Belt consists of a large number of magmatic arcs, arc-related basins, accretionary complexes, seamounts, continental fragments and ophiolites. It is also considered a rather distinctive collisional orogenic belt, because widespread subduction-accretion complexes and arc magmatic rocks can be found in the region while collision-related foreland basins are not common. The formation history of the Central Asian Orogenic Belt is complex and highly disputed among geologists. Currently, there are two major evolutionary hypotheses that could potentially explain its geological history. One hypothesis, stated by geologist Celal Sengor, proposes that the Central Asian Orogenic Belt formed by the accretion of multiple oceanic arcs and continental crusts, while the other proposes that it was produced by accumulating subduction-accretion complexes on a single magmatic arc. The Central Asian Orogenic Belt is now one of the most researched orogenic belts in the world because of its significance for research on continental accretion and ore formation. It contains plentiful natural resources, including mineral ores, oil and gas. These rich mineral resources explain why the Central Asian Orogenic Belt is also called the Central Asian metallogenic domain, one of the largest metallogenic domains in the world. Location Like any typical accretionary orogen, the Central Asian Orogenic Belt is long and wide: it occupies roughly 30% of the land surface area of Asia and extends for approximately 2500 km in the east–west direction. It lies within the boundaries of six nations (China, Kazakhstan, Kyrgyzstan, Mongolia, Russia, and Uzbekistan), between the East European Craton and the North China Craton in the northwest–southeast direction, and between the Siberian Craton and the Tarim Craton in the northeast–southwest direction. Geology The Central Asian Orogenic Belt has a long and complicated geological history. Through mapping, geologists concluded that the belt has a southward younging direction, meaning that the rocks in the north are older than the rocks in the south. Mesozoic–Cenozoic sedimentary basins can be found in the eastern portion of the Central Asian Orogenic Belt, while volcanic-plutonic rocks formed from the Paleozoic to the Mesozoic can be found in its middle and western portions. The belt shows extensive granitoid development, as around 60% of its exposed area is made of granitoids, and most of the exposed bedrock formed between 550 Ma and 100 Ma. Main Regions of the CAOB The Central Asian Orogenic Belt has complex accretionary tectonics, which is well documented in two main areas. One of them, the "Kazakhstan Orocline", is located in the western portion of the belt, in North Xinjiang in China and Kokchetav-Balkash in Kazakhstan.
The other, the "Tuva-Mongol Orocline", is located in the eastern portion of the belt, in Inner Mongolia, Mongolia, and southern Russia. Kazakhstan Orocline The Kazakhstan orocline, located north of the Tarim and Karakum cratons and southeast of Baltica, is a bend of the Central Asian Orogenic Belt that consists of broken fragments of continents assembled in the late Paleozoic. In Precambrian time, the major terrane of the Kazakhstan orocline consisted mainly of Mesoproterozoic metamorphic rocks, which potentially had Gondwana affinity. These were then covered by sediments from the Neoproterozoic and Cambrian to Lower Ordovician. Island-arc volcanic rocks and chert formed in deep-sea environments were the dominant rock types in the Paleozoic. By the end of the Ordovician and the Silurian, the accretion of paleo-Kazakhstan was complete, meaning that materials had been added to paleo-Kazakhstan at a subduction zone. The subsequent Devonian and Carboniferous rocks deposited on paleo-Kazakhstan were mainly volcanic rocks formed at continental arcs. From the Devonian to the early Carboniferous, several unconformities formed, together with thrusting at the back of the Balkhash-Yili volcanic belt, documenting an episode of lateral accretion of the continental crust. The collision between paleo-Kazakhstan and Tarim occurred from the middle Carboniferous to the beginning of the Permian. The south-verging thrusts in the northern part of the South Tienshan consist of ophiolites, accreted high-grade metamorphic rocks, and basalts and cherts formed in deep-sea environments. These rocks were thrust upon the carbonates and turbidites of the southern continents during the Silurian to Carboniferous. In the late Paleozoic, these rocks were deformed in two phases. Some well-developed strike-slip faults can be found in Kazakhstan. Tuva-Mongolia Orocline The geology of the Tuva-Mongolia orocline can be divided into two major parts: one formed in the Precambrian, while the other consists of sedimentary rocks in the north and Paleozoic volcanic rocks in the south of the orocline. The northern portion of the orocline contains Precambrian to early Paleozoic metamorphic rocks, Neoproterozoic ophiolites, volcanic rocks formed in early Paleozoic island arcs, and some associated volcaniclastic sediments. These rocks were later covered by Devonian to Carboniferous sediments and were affected by volcanic activity during the Permian. In the southern portion of the Tuva-Mongolia orocline, the majority of rocks are early to late Paleozoic volcanic rocks with ophiolites formed during ocean closures, most notably the closure of the Palaeo-Asian Ocean, which began in the Early Carboniferous and ended in the Late Permian or Early Triassic. Volcaniclastic sediments formed during the Late Carboniferous to Permian are also common in this region. In both portions of the Tuva-Mongolia orocline, granites intruded after the mountain-building events and were in turn covered by volcanic and sedimentary rocks formed during the Jurassic to Cretaceous. Ophiolites in CAOB Ophiolites, which are uplifted and exposed fragments of oceanic crust together with pieces of the upper mantle, are considered able to provide important information on the formation and evolution of the orogenic belt.
The following table shows the locations of some of the ophiolites found in the Central Asian Orogenic Belt and the related interpretations of its evolutionary history. Geological Evolution Being an accretionary orogen, the Central Asian Orogenic Belt has a highly complicated geological evolutionary history, for which two major hypotheses have been proposed. One hypothesis posits that oceanic arcs and possibly continental blocks derived from Gondwana were added to the Siberian, Russian, and North China cratons by accretion. The other suggests that the Central Asian collage is made of accumulated Paleozoic materials derived from subduction, accretion, and deformation of a single magmatic arc. Even though the orogenic belt has been at the forefront of research on accretionary orogens, there is no consensus on its formation history. Further explanation of the two hypotheses for the geological evolution of the Central Asian Orogenic Belt is provided below. Two hypotheses of the formation of CAOB First hypothesis The first hypothesis states that the southern margin of the Siberian continent was formed by the accretion of multiple oceanic arcs, and possibly parts of continents derived from Gondwana (a supercontinent that existed from the Neoproterozoic to the Jurassic), to the Russian, Siberian, and North China cratons. This hypothesis suggests that subduction in the Central Asian Orogenic Belt started in the late Precambrian and that the orogeny culminated with the amalgamation of Tarim's passive margin with the northern accretionary system by the end of the Permian to the Middle Triassic. In this hypothesis, the Central Asian Orogenic Belt involved numerous episodes of subduction, parallel collision, accretion, amalgamation of microcontinents and bending of oroclines. Whether microcontinents derived from Gondwana were involved in the formation of the Central Asian Orogenic Belt remains debated in this hypothesis, since the original structure of the orogenic belt has been highly deformed and broken up through tectonic evolution. Second hypothesis The second hypothesis, proposed by geologist Celal Sengor in 1993, suggests that the Central Asian Orogenic Belt formed by the accumulation of Paleozoic subduction-accretion materials against a single magmatic arc. The entire process of the formation of the Central Asian Orogenic Belt is explained below and summarized in Table 2 and Figure 5. This hypothesis suggests that the Baltica craton was attached to the Siberian craton during the Ediacaran period; their positions during the Ediacaran are confirmed by paleomagnetic data. Continental rifting between Baltica and Siberia occurred from the late Ediacaran to the Cambrian (610-520 Ma). During this period, collision of microcontinents and subduction took place to the north of the Siberian craton. By the Middle Silurian (430-424 Ma), the Kipchak arc, a fragment formed by the rifting of Baltica and Siberia, had its northern end attached to the Siberian craton while its southern end remained free of the Baltica craton. Meanwhile, an accretionary complex formed during the subduction of microcontinents north of the Siberian craton, and the amount of accretionary material along the Kipchak arc decreased towards the southwest, farther from the source in Siberia.
During the Early Devonian (390-386 Ma), there was no further growth of subduction-accretion complexes at the southern end of the Kipchak arc, as indicated by the abrupt influx of a thick layer of Early Devonian clastic material and the simultaneous decrease in subduction-related magmatism. This can be explained by the collision of the Mugodzhar arc, north of Baltica, with the southern end of the Kipchak arc. On the other hand, a subduction-accretion wedge started to grow at the north of the Kipchak arc. By the Late Devonian (367-362 Ma), subduction-accretion and arc magmatism had produced a continental crust of normal thickness. During the Early Carboniferous (332-318 Ma), the Baltica craton migrated towards the Siberian craton, which led to subduction under the original southern end of the Kipchak arc. During the Late Carboniferous (318-303 Ma), Baltica and Siberia experienced right-lateral shearing; combined with compressional forces, this packed the entire Kazakhstan orocline more tightly. By the Early Permian (269-260 Ma), the Nurol basin, an area of stretched continental crust, had formed, and alkaline magmatism occurred at its basement. Finally, during the Late Permian (251-225 Ma), the shearing direction of Baltica and Siberia reversed as the Gornostaev shear zone moved to the south and east of Siberia. With this final act in the Late Permian, Sengor's hypothesis of the Central Asian Orogenic Belt's evolution is complete. It has been estimated that around 2.5 million square kilometers of juvenile material were added to Asia in around 350 million years, making the Central Asian Orogenic Belt one of the most important sites of juvenile crust formation since the end of the Proterozoic. However, some geologists have suggested that the extent of juvenile crust formed during the Paleozoic is highly overestimated, as many of the Phanerozoic granites found in the belt initially formed in the Mesoproterozoic and were reworked later. Major questions The Central Asian Orogenic Belt has been at the forefront of research since the start of the 21st century. Despite the international efforts of scientists, many questions regarding the Central Asian Orogenic Belt remain unanswered. They include: To what extent the continental crust of the CAOB is juvenile or was recycled in the Phanerozoic; Whether microcontinents derived from Gondwana were accreted to the Siberian, Kazakhstan, Tarim and North China cratons; The balance between tectonic erosion and the formation of continents. Economic significance The Central Asian Orogenic Belt is rich in natural resources, and more extensive study of the region would yield further economic benefits. Mineral ore The Central Asian Orogenic Belt is rich in mineral ores, including platinum, gold, silver and copper. Deposits of these valuable metals can be located and explored according to the tectonic settings and structures of the orogenic belt. Platinum-bearing minerals are found in the dunite, a type of ultramafic intrusive igneous rock, of the Xiadong Alaskan-type complex. The platinum usually appears as platinum-group element sulfides and sulfarsenides; it can also appear as inclusions in chromite and clinopyroxene, or as interstitial grains in the fractures of chromite. For gold, a large gold mine was found in the Nenjiang-Heihe mélange zone within the CAOB. This mine, the Yongxin gold deposit, is a fracture-controlled gold deposit with a thickness of 52 m at its largest ore body.
Pyrite, the most important gold-hosting mineral, can be found in the mine. The CAOB is also rich in world-class copper deposits. The Laoshankou iron oxide-Cu-Au deposit, located southwest of Qinghe City, Xinjiang, Northwest China, is considered one of the most important high-quality copper and gold reserves in the Central Asian Orogenic Belt; the deposit is hosted by volcanic rocks formed during the Middle Devonian. Oil and gas Since the Central Asian Orogenic Belt has a complex tectonic setting, it is associated with several kinds of energy production, and some of the richest hydrocarbon reserves in the world can be found in the region surrounding it. Within the orogenic belt, oil- and gas-bearing basins have developed, such as the Junggar, Santanghu, and Songliao basins, of which the former two are located in the south-western portion of the orogenic belt and the latter in its eastern portion. The Yinggen-Ejinaqi Basin, located in the southern portion of the Central Asian Orogenic Belt, has been suggested to have high potential for hydrocarbon reserves, although further research and analysis are required before oil and gas can be extracted commercially from this region.
Physical sciences
Geologic features
Earth science
60871168
https://en.wikipedia.org/wiki/Body%20%28biology%29
Body (biology)
A body is the physical material of an organism. The term is used only for organisms that exist as a single whole. There are organisms which change from single cells to whole organisms, for example slime molds; for them the term 'body' would mean the multicellular stage. Other uses: Plant body: plants are modular, with modules being created by meristems and the body generally consisting of both the shoot system and the root system, the body's development being influenced by its environment. Cell body: here the term may be used for cells like neurons which have long axons (nerve fibres); the cell body is the part containing the nucleus. The body of a dead person is also called a corpse or cadaver; the dead bodies of vertebrate animals and insects are sometimes called carcasses. The human body has a head, neck, torso, two arms, two legs and the genitals of the groin, which differ between males and females. The branch of biology dealing with the study of bodies and their specific structural features is called morphology. Anatomy is a branch of morphology that deals with the structure of the body at a level higher than tissue. Anatomy is closely related to histology, which studies the structure of tissues, as well as to cytology, which studies the structure and function of the individual cells from which the tissues and organs of the studied macroorganism are built. Taken together, anatomy, histology, cytology and embryology constitute morphology. The study of the functions and mechanisms of a body is physiology. Human body The names of the body parts of a woman and a man were shown in labelled figures accompanying this section in the original.
Biology and health sciences
Animal anatomy and morphology
Biology
54476844
https://en.wikipedia.org/wiki/Boolean%20algebra
Boolean algebra
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations in the same way that elementary algebra describes numerical operations. Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854). According to Huntington, the term Boolean algebra was first suggested by Henry M. Sheffer in 1913, although Charles Sanders Peirce gave the title "A Boolian Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880. Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics. History A precursor of Boolean algebra was Gottfried Wilhelm Leibniz's algebra of concepts. The usage of binary in relation to the I Ching was central to Leibniz's characteristica universalis, and it eventually laid the foundations of his algebra of concepts. Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets. Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields. In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington and others, until it reached the modern conception of an (abstract) mathematical structure. For example, the empirical observation that one can manipulate expressions in the algebra of sets by translating them into expressions in Boole's algebra is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting, and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for very-large-scale integration (VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra.
Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic. Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics. The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity. Values Whereas expressions denote mainly numbers in elementary algebra, in Boolean algebra, they denote the truth values false and true. These values are represented with the bits 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive-or) and AND (conjunction), respectively, with disjunction x ∨ y (inclusive-or) definable as x + y − xy and negation ¬x as 1 − x. In GF(2), − may be replaced by +, since they denote the same operation; however, this way of writing Boolean operations allows applying the usual arithmetic operations of integers (this may be useful when using a programming language in which GF(2) is not implemented). Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly used example of such a function. Another common example is the totality of subsets of a set E: to a subset F of E, one can define the indicator function that takes the value 1 on F, and 0 outside F. The most general example is the elements of a Boolean algebra, with all of the foregoing being instances thereof. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. Operations Basic operations While elementary algebra has four operations (addition, subtraction, multiplication, and division), Boolean algebra has only three basic operations: conjunction, disjunction, and negation, expressed with the corresponding binary operators AND (∧) and OR (∨) and the unary operator NOT (¬), collectively referred to as Boolean operators. Variables in Boolean algebra that store the logical values 0 and 1 are called Boolean variables; they are used to store either true or false values. The basic operations on Boolean variables x and y are defined as follows:
Logical operation | Operator | Notation | Alternative notations | Definition
Conjunction | AND | x ∧ y | x AND y; Kxy; x & y; x · y | x ∧ y = 1 if x = y = 1, otherwise x ∧ y = 0
Disjunction | OR | x ∨ y | x OR y; Axy; x + y | x ∨ y = 0 if x = y = 0, otherwise x ∨ y = 1
Negation | NOT | ¬x | NOT x; Nx; x̅; x′; !x | ¬x = 0 if x = 1, and ¬x = 1 if x = 0
Alternatively, the values of x ∧ y, x ∨ y, and ¬x can be expressed by tabulating their values with truth tables as follows:
x | y | x ∧ y | x ∨ y
0 | 0 | 0 | 0
1 | 0 | 0 | 1
0 | 1 | 0 | 1
1 | 1 | 1 | 1
x | ¬x
0 | 1
1 | 0
When used in expressions, the operators are applied according to precedence rules; as with elementary algebra, expressions in parentheses are evaluated first.
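For readers who want to experiment, here is a minimal Python sketch (an illustration of ours, with invented function names rather than any standard API) that implements the three basic operations on the bits 0 and 1 and reproduces the truth tables above:

```python
from itertools import product

# The three basic operations on the bits 0 and 1, using Python's
# bitwise operators on integers.
def NOT(x): return x ^ 1        # flips 0 <-> 1
def AND(x, y): return x & y
def OR(x, y): return x | y

# Reproduce the truth tables above for all four input combinations.
print("x y | AND OR")
for x, y in product((0, 1), repeat=2):
    print(f"{x} {y} |  {AND(x, y)}   {OR(x, y)}")

print("x | NOT")
for x in (0, 1):
    print(f"{x} |  {NOT(x)}")
```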
If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (where x + y uses addition and xy uses multiplication), or by the minimum/maximum functions: x ∧ y = xy = min(x, y); x ∨ y = x + y − xy = max(x, y); ¬x = 1 − x. One might consider that only negation and one of the two other operations are basic, because of the following identities that allow one to define conjunction in terms of negation and disjunction, and vice versa (De Morgan's laws): x ∧ y = ¬(¬x ∨ ¬y) and x ∨ y = ¬(¬x ∧ ¬y). Secondary operations Operations composed from the basic operations include, among others, material implication x → y = ¬x ∨ y, exclusive or x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y), and logical equivalence x ≡ y = ¬(x ⊕ y). These definitions give rise to the following truth tables giving the values of these operations for all four possible inputs.
Secondary operations. Table 1
x | y | x → y | x ⊕ y | x ≡ y
0 | 0 | 1 | 0 | 1
1 | 0 | 0 | 1 | 0
0 | 1 | 1 | 1 | 0
1 | 1 | 1 | 0 | 1
Material conditional The first operation, x → y, or Cxy, is called material implication. If x is true, then the result of expression x → y is taken to be that of y (e.g. if x is true and y is false, then x → y is also false). But if x is false, then the value of y can be ignored; however, the operation must return some Boolean value and there are only two choices. So by definition, x → y is true when x is false (relevance logic rejects this definition, by viewing an implication with a false premise as something other than either true or false). Exclusive OR (XOR) The second operation, x ⊕ y, or Jxy, is called exclusive or (often abbreviated as XOR) to distinguish it from disjunction as the inclusive kind. It excludes the possibility of both x and y being true (see the table): if both are true then the result is false. Defined in terms of arithmetic, it is addition mod 2, where 1 + 1 = 0. Logical equivalence The third operation, the complement of exclusive or, is equivalence or Boolean equality: x ≡ y, or Exy, is true just when x and y have the same value. Hence x ⊕ y, as its complement, can be understood as x ≠ y, being true just when x and y are different. Thus, the counterpart of x ⊕ y in arithmetic mod 2 is x + y, and equivalence's counterpart is x + y + 1. Laws A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old, as in the derivation of x ∨ (y ∧ z) = x ∨ (z ∧ y) from y ∧ z = z ∧ y (as treated in the section on axiomatization below). Monotone laws Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication.
In particular the following laws are common to both kinds of algebra:
Associativity of ∨: x ∨ (y ∨ z) = (x ∨ y) ∨ z
Associativity of ∧: x ∧ (y ∧ z) = (x ∧ y) ∧ z
Commutativity of ∨: x ∨ y = y ∨ x
Commutativity of ∧: x ∧ y = y ∧ x
Distributivity of ∧ over ∨: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
Identity for ∨: x ∨ 0 = x
Identity for ∧: x ∧ 1 = x
Annihilator for ∧: x ∧ 0 = 0
The following laws hold in Boolean algebra, but not in ordinary algebra:
Annihilator for ∨: x ∨ 1 = 1
Idempotence of ∨: x ∨ x = x
Idempotence of ∧: x ∧ x = x
Absorption 1: x ∧ (x ∨ y) = x
Absorption 2: x ∨ (x ∧ y) = x
Distributivity of ∨ over ∧: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)
Taking x = 2 in the third law above, idempotence of ∧, shows that it is not an ordinary algebra law, since 2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in absorption law 1, the left hand side would be 1(1 + 1) = 2, while the right hand side would be 1 (and so on). All of the laws treated thus far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged, or the output changes in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this property are said to be monotone. Thus the axioms thus far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬ as follows. Nonmonotone laws The complement operation is defined by the following two laws: Complementation 1: x ∧ ¬x = 0, and Complementation 2: x ∨ ¬x = 1. All properties of negation, including the laws below, follow from these two laws alone. In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, hence in both algebras it satisfies the double negation law (also called involution law) ¬(¬x) = x. But whereas ordinary algebra satisfies the two laws (−x)(−y) = xy and (−x) + (−y) = −(x + y), Boolean algebra satisfies De Morgan's laws: ¬x ∧ ¬y = ¬(x ∨ y) and ¬x ∨ ¬y = ¬(x ∧ y). Completeness The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws Complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms, as treated in the section on Boolean algebras below. Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras. This axiomatization is by no means the only one, or even necessarily the most natural, given that attention was not paid as to whether some of the axioms followed from others; there was simply a choice to stop when enough laws had been noticed, as treated further in the section on axiomatization below.
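Because the two-element algebra is finite, every one of the laws above can be checked mechanically by evaluating both sides on all assignments of 0 and 1 to the variables. The following Python sketch (an illustration of ours, writing the operations arithmetically as in the preceding section) does exactly that for a sample of the laws:

```python
from itertools import product

# Each law is a pair of functions that should agree on every 0/1 assignment.
# NOT, AND, OR are written arithmetically: 1 - x, x*y, x + y - x*y.
laws = {
    "absorption 1":       (lambda x, y, z: x * (x + y - x * y),
                           lambda x, y, z: x),
    "idempotence of AND": (lambda x, y, z: x * x,
                           lambda x, y, z: x),
    "De Morgan 1":        (lambda x, y, z: (1 - x) * (1 - y),
                           lambda x, y, z: 1 - (x + y - x * y)),
    "distributivity":     (lambda x, y, z: x * (y + z - y * z),
                           lambda x, y, z: (x * y) + (x * z) - (x * y) * (x * z)),
}

for name, (lhs, rhs) in laws.items():
    holds = all(lhs(*v) == rhs(*v) for v in product((0, 1), repeat=3))
    print(f"{name}: {'holds' if holds else 'FAILS'}")
```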
Alternatively, the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1 (the property the brute-force check above tests). All these definitions of Boolean algebra can be shown to be equivalent. Duality principle Principle: if {X, R} is a partially ordered set, then {X, R⁻¹}, with the inverse relation, is also a partially ordered set. There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed to α and β, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences. But suppose 0 and 1 were renamed 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However, it would not be identical to our original Boolean algebra, because now ∨ behaves the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that the notation has been changed, despite the fact that 0s and 1s are still being used. But if, in addition to interchanging the names of the values, the names of the two binary operations are also interchanged, now there is no trace of what was done. The end product is completely indistinguishable from what was started with. The columns for x ∧ y and x ∨ y in the truth tables have changed places, but that switch is immaterial. When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, the members of each pair are called dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The duality principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged. One change that did not need to be made as part of this interchange was complement: complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t. The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be the principle (or square) of quaternality. Diagrammatic representations Venn diagrams A Venn diagram can be used as a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x correspond respectively to the values 1 (true) and 0 (false) for variable x. The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention). The three Venn diagrams in the figure below represent respectively conjunction x ∧ y, disjunction x ∨ y, and complement ¬x.
For conjunction, the region inside both circles is shaded to indicate that x ∧ y is 1 when both variables are 1. The other regions are left unshaded to indicate that x ∧ y is 0 for the other three combinations. The second diagram represents disjunction x ∨ y by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle. While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation. Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram, because interchanging x and y would have the effect of reflecting the diagram horizontally and any failure of commutativity would then appear as a failure of symmetry. Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨. To see the first absorption law, x ∧ (x ∨ y) = x, start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x ∨ (x ∧ y) = x, start with the left diagram for x ∧ y and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle. The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle. To visualize the first De Morgan's law, ¬x ∧ ¬y = ¬(x ∨ y), start with the middle diagram for x ∨ y and complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded that region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes. The second De Morgan's law, ¬x ∨ ¬y = ¬(x ∧ y), works the same way with the two diagrams interchanged. The first complement law, x ∧ ¬x = 0, says that the interior and exterior of the x circle have no overlap. The second complement law, x ∨ ¬x = 1, says that everything is either inside or outside the x circle. Digital logic gates Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation. The shapes associated with the gates for conjunction (AND-gates), disjunction (OR-gates), and complement (inverters) are as follows: The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage on the lead. For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground", while 1 is represented by a voltage close to the supply voltage; active-low reverses this.
The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports. Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port. The duality principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter however leaves the operation unchanged. More generally, one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the "odd-bit-out" can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y. Boolean algebras The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion. Concrete Boolean algebras A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X. (Historically X itself was required to be nonempty as well to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations since the degenerate algebra satisfies every equation. However, this exclusion conflicts with the preferred purely equational definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations; 0 ≠ 1 does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and let X be empty.) Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable. Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set. It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide. Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers.
This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite. Intersection behaves like union with "finite" and "cofinite" interchanged. This example is countably infinite because there are only countably many finite sets of integers. Example 4. For a less trivial example of the point made by example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set obtained as the union of the empty set of regions, and X obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again, there are finitely many subsets of an infinite set forming a concrete Boolean algebra, with example 2 arising as the case n = 0 of no curves. Subsets as bit vectors A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0,1,2,...,31}, with 0 and 31 indexing the low and high order bits respectively. For a smaller example, if X = {a, b, c} where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b,c}, {a}, {a,c}, {a,b}, and {a,b,c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111. Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0,1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0,1] either black or white independently; the black points then form an arbitrary subset of [0,1]). From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1100 ∧ 1010 = 1000, 1100 ∨ 1010 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively. Prototypical Boolean algebra The set {0,1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set. This is called the prototypical Boolean algebra, justified by the following observation. The laws satisfied by all nondegenerate concrete Boolean algebras coincide with those satisfied by the prototypical Boolean algebra. This observation is proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one, since it is concrete. Conversely, any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position, because there is only one empty bit vector.
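The identification of subsets with bit vectors is easy to demonstrate in code. The sketch below (an illustration; the helper names encode and decode are ours) represents subsets of X = {a, b, c} as 3-bit integers and shows that the machine's bitwise operators realize intersection, union, and complement relative to X:

```python
X = ("a", "b", "c")          # bit positions, left to right
FULL = 0b111                 # the bit vector of X itself

def encode(subset):
    """Subset of X -> 3-bit integer, e.g. {'a', 'c'} -> 0b101."""
    return sum(1 << (len(X) - 1 - i) for i, e in enumerate(X) if e in subset)

def decode(bits):
    """3-bit integer -> subset of X."""
    return {e for i, e in enumerate(X) if bits & (1 << (len(X) - 1 - i))}

F, G = {"a", "c"}, {"a", "b"}
print(decode(encode(F) & encode(G)))   # {'a'}            -- intersection
print(decode(encode(F) | encode(G)))   # {'a','b','c'}    -- union
print(decode(FULL & ~encode(F)))       # {'b'}            -- complement rel. X
```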
The final goal of the next section can be understood as eliminating "concrete" from the above observation. That goal is reached via the stronger observation that, up to isomorphism, all Boolean algebras are concrete. Boolean algebras: the definition The Boolean algebras so far have all been concrete, consisting of bit vectors or equivalently of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra. Instead of showing that the Boolean laws are satisfied, we can postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition. A Boolean algebra is any set with binary operations ∧ and ∨ and a unary operation ¬ thereon satisfying the Boolean laws. For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field etc. characteristic of modern or abstract algebra. Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition. A Boolean algebra is a complemented distributive lattice. The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition. Representable Boolean algebras Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. Let n be a square-free positive integer, one not divisible by the square of any integer greater than 1, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x) can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions. However, if each divisor of n is represented by the set of its prime factors, this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism. This example is an instance of the following notion. A Boolean algebra is called representable when it is isomorphic to a concrete Boolean algebra. The next question is answered positively as follows. Every Boolean algebra is representable. That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This result depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice.
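The divisor example above is small enough to verify exhaustively. The following sketch (illustrative code of ours, not drawn from any reference) takes n = 30, defines meet, join, and complement as gcd, lcm, and division into n, and confirms a sample of the Boolean laws over all divisors:

```python
from itertools import product
from math import gcd

n = 30  # square-free, so its divisors form a Boolean algebra
divisors = [d for d in range(1, n + 1) if n % d == 0]  # [1,2,3,5,6,10,15,30]

def meet(x, y): return gcd(x, y)            # plays the role of AND
def join(x, y): return x * y // gcd(x, y)   # lcm, plays the role of OR
def comp(x):    return n // x               # plays the role of NOT

# Complementation laws: 1 acts as bottom (0) and n as top (1).
assert all(meet(x, comp(x)) == 1 and join(x, comp(x)) == n for x in divisors)
# A De Morgan law, checked on every pair of divisors.
assert all(comp(meet(x, y)) == join(comp(x), comp(y))
           for x, y in product(divisors, repeat=2))
print("Boolean laws verified for the divisors of", n)
```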
This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability. The laws satisfied by all Boolean algebras coincide with those satisfied by the prototypical Boolean algebra. It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here; for example, a relation algebra is a Boolean algebra with additional structure, but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras. Axiomatizing Boolean algebra The above definition of an abstract Boolean algebra as a set together with operations satisfying "the" Boolean laws raises the question of what those laws are. A simplistic answer is "all Boolean laws", which can be defined as all equations that hold for the Boolean algebra of 0 and 1. However, since there are infinitely many such laws, this is not a satisfactory answer in practice, leading to the question of whether it suffices to require only finitely many laws to hold. In the case of Boolean algebras, the answer is "yes": the finitely many equations listed above are sufficient. Thus, Boolean algebra is said to be finitely axiomatizable or finitely based. Moreover, the number of equations needed can be further reduced. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law; one suffices), and the two complement laws. In fact, this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice. By introducing additional laws not listed above, it becomes possible to shorten the list of needed equations yet further; for instance, with the vertical bar representing the Sheffer stroke (NAND) operation, the single axiom ((a|b)|c)|(a|((a|c)|a)) = c is sufficient to completely axiomatize Boolean algebra. It is also possible to find longer single axioms using more conventional operations; see Minimal axioms for Boolean algebra. Propositional logic Propositional logic is a logical system that is intimately connected to Boolean algebra. Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra. Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, ... become propositional variables (or atoms) P, Q, ...; Boolean terms such as x ∨ y become propositional formulas P ∨ Q; 0 becomes false or ⊥, and 1 becomes true or ⊤. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, ... as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions. The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula.
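Over the two-element Boolean algebra, a truth assignment can be modeled directly as a mapping from variable names to bits, and a formula as a function of such an assignment; a formula is then a tautology when it evaluates to 1 under every assignment. The sketch below (our illustration, with invented helper names) checks the tautology P → P this way:

```python
from itertools import product

def is_tautology(formula, variables):
    """True if `formula` evaluates to 1 under every 0/1 truth assignment."""
    return all(
        formula(dict(zip(variables, bits))) == 1
        for bits in product((0, 1), repeat=len(variables))
    )

def implies(x, y):
    # material implication: x -> y is NOT x OR y
    return max(1 - x, y)

# P -> P is a tautology; P -> Q is not.
print(is_tautology(lambda v: implies(v["P"], v["P"]), ["P"]))       # True
print(is_tautology(lambda v: implies(v["P"], v["Q"]), ["P", "Q"]))  # False
```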
In classical semantics, only the two-element Boolean algebra is used, while in Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two element Boolean algebra). These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra. Conversely, every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ ∨ ¬Ψ) ∧ (¬Φ ∨ Ψ) and (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ). If → is in the language, these last tautologies can also be written as (Φ → Ψ) ∧ (Ψ → Φ), or as two separate theorems Φ → Ψ and Ψ → Φ; if ≡ is available, then the single tautology Φ ≡ Ψ can be used. Applications One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural language. Whereas the proposition "if x = 3, then x + 1 = 4" depends on the meanings of such symbols as + and 1, the proposition "if x = 3, then x = 3" does not; it is true merely by virtue of its structure, and remains true whether "x = 3" is replaced by "x = 4" or "the moon is made of green cheese." The generic or abstract form of this tautology is "if P, then P," or in the language of Boolean algebra, P → P. Replacing P by x = 3 or any other proposition is called instantiation of P by that proposition. The result of instantiating P in an abstract proposition is called an instance of the proposition. Thus, x = 3 → x = 3 is a tautology by virtue of being an instance of the abstract tautology P → P. All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4. Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional variables by abstract propositions, such as instantiating Q by Q → P in P → (Q → P) to yield the instance P → ((Q → P) → P). (The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables within the language of propositional calculus, since ordinary propositional variables can be considered within the language to denote arbitrary propositions. The metavariables themselves are outside the reach of instantiation, not being part of the language of propositional calculus but rather part of the same language for talking about it that this sentence is written in, where there is a need to be able to distinguish propositional variables and their instantiations as being distinct syntactic entities.) Deductive systems for propositional logic An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. 
Deductive systems for propositional logic
An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions, each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem.
Sequent calculus
Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is sequent calculus, which has two sorts, propositions as in ordinary propositional calculus, and pairs of lists of propositions called sequents, such as A ∨ B, A ∧ C ⊢ A, B ∧ C. The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ, A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with an additional proposition A appended after it. The antecedent is interpreted as the conjunction of its propositions, the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the antecedent. Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean algebra, the former is a binary relation which either holds or does not hold. In this sense, entailment is an external form of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x ∨ y = y. This ability to mix external implication ⊢ and internal implication → in the one logic is among the essential differences between sequent calculus and propositional calculus.
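Reading ⊢ as ≤ suggests a brute-force check of sequents over the two-element Boolean algebra: a sequent holds when, under every truth assignment, the conjunction of its antecedent is ≤ the disjunction of its succedent, where x ≤ y means x ∨ y = y. A minimal sketch follows (the function name sequent_holds is an illustrative assumption, not standard terminology).

from itertools import product
from functools import reduce

def sequent_holds(antecedent, succedent, nvars):
    # Gamma |- Delta holds iff, for every truth assignment over {0, 1},
    # AND(Gamma) <= OR(Delta), where x <= y means x | y == y.
    for v in product((0, 1), repeat=nvars):
        conj = reduce(lambda acc, f: acc & f(*v), antecedent, 1)
        disj = reduce(lambda acc, f: acc | f(*v), succedent, 0)
        if (conj | disj) != disj:   # i.e. not (conj <= disj)
            return False
    return True

# A, A -> B |- B  (modus ponens as a valid sequent; a -> b is (1 - a) | b)
A = lambda a, b: a
A_implies_B = lambda a, b: (1 - a) | b
B = lambda a, b: b
print(sequent_holds([A, A_implies_B], [B], 2))  # True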
Applications
Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics.
Computers
In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits. Today, all modern general-purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in ferromagnetic storage devices, as holes in punched cards or paper tape, and so on. (Some early computers used decimal circuits or mechanisms instead of two-valued logic circuits.) Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively 0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice, the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per wire, high and low. Computers use two-value Boolean circuits for the above reasons. The most common computer architectures use ordered sequences of Boolean values, called bits, of 32 or 64 values, e.g. 01101000110101100101010101001011. When programming in machine code, assembly language, and certain other programming languages, programmers work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts represents Boolean 0, and a reference voltage (often +5 V, +3.3 V, or +1.8 V) represents Boolean 1. Such languages support both numeric operations and logical operations. In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second.
Two-valued logic
Other areas where two values is a good choice are the law and mathematics. In everyday relaxed conversation, nuanced or complex answers such as "maybe" or "only on the weekend" are acceptable. In more focused situations such as a court of law or theorem-based mathematics, however, it is deemed advantageous to frame questions so as to admit a simple yes-or-no answer (is the defendant guilty or not guilty, is the proposition true or false) and to disallow any other answer. However restrictive this might prove in practice for the respondent, the principle of the simple yes–no question has become a central feature of both judicial and mathematical logic, making two-valued logic deserving of organization and study in its own right. A central concept of set theory is membership. An organization may permit multiple degrees of membership, such as novice, associate, and full. With sets, however, an element is either in or out. The candidates for membership in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as each wire is either high or low. Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory. Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the unit interval [0,1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true.
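A minimal sketch of these fuzzy connectives, assuming values drawn from [0, 1] and with disjunction derived from De Morgan's law as x ∨ y = 1 − (1 − x)(1 − y); the function names are illustrative:

def f_not(x):
    # Fuzzy negation: NOT x = 1 - x
    return 1 - x

def f_and(x, y):
    # Fuzzy conjunction: AND as multiplication
    return x * y

def f_or(x, y):
    # Fuzzy disjunction via De Morgan: x OR y = NOT(NOT x AND NOT y)
    return 1 - (1 - x) * (1 - y)

# On the endpoints 0 and 1 these agree with two-valued Boolean logic:
assert f_or(0, 1) == 1 and f_and(1, 1) == 1 and f_not(0) == 1
# Between the endpoints they behave like independent probabilities:
print(f_or(0.5, 0.5))  # 0.75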
Boolean operations
The original application for Boolean operations was mathematical logic, where it combines the truth values, true or false, of individual formulas.
Natural language
Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). The phrase "but not" is synonymous with "and not". When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, the meanings of these words often coincide with those of their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity, for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such as love me or leave me or fish or cut bait tend to be asymmetric via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea, which usually means the same as your choices are coffee or tea (alternatives). Double negation, as in "I don't not like milk", rarely means literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility. "Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P", the converse is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them.
Digital logic
Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them over {0, 1}. When a vector of n identical binary gates is used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2^n elements.
Naive set theory
Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier, this behavior exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the disjunction of two bit vectors and so on.
Video cards
The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics, which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how the source region should be combined with the destination, typically with the help of a third region called the mask. Modern video cards offer all 256 ternary operations for this purpose, with the choice of operation being a one-byte (8-bit) parameter.
The constants SRC = 0xaa or 10101010, DST = 0xcc or 11001100, and MSK = 0xf0 or 11110000 allow Boolean operations such as (SRC ^ DST) & MSK (meaning XOR the source and destination and then AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time: 0x60 in the (SRC ^ DST) & MSK example, 0x66 if just SRC ^ DST, etc. At run time the video card interprets the byte as the raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and which takes time completely independent of the complexity of the expression.
Modeling and CAD
Solid modeling systems for computer aided design offer a variety of methods for building objects from other objects, combination by Boolean operations being one of them. In this method the space in which objects exist is understood as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a complex shape from simple shapes simply as the union of the latter. Another use is in sculpting understood as removal of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on physical materials can be simulated on the computer with the Boolean operation x ∧ ¬y, which in set theory is the set difference x − y: remove the elements of y from those of x. Thus given two shapes, one to be machined and the other the material to be removed, the result of machining the former to remove the latter is described simply as their set difference.
Boolean searches
Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set". The following examples use a syntax supported by Google. Doublequotes are used to combine whitespace-separated words into a single search term. Whitespace is used to specify logical AND, as it is the default operator for joining search terms:
"Search term 1" "Search term 2"
The OR keyword is used for logical OR:
"Search term 1" OR "Search term 2"
A prefixed minus sign is used for logical NOT:
"Search term 1" −"Search term 2"
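The set-theoretic reading of such queries is straightforward to simulate. Below is a minimal sketch assuming a toy inverted index that maps each term to the set of pages containing it (the index contents and the function name pages are hypothetical): AND becomes intersection, OR becomes union, and the prefixed minus becomes set difference.

# Toy inverted index: term -> set of page IDs containing that term
index = {
    "boolean": {1, 2, 3, 5},
    "algebra": {2, 3, 4},
    "logic":   {1, 3, 5},
}

def pages(term):
    # The set of pages matching a single search term.
    return index.get(term, set())

# "boolean" "algebra"     -> AND, i.e. set intersection
print(pages("boolean") & pages("algebra"))   # {2, 3}
# "boolean" OR "algebra"  -> set union
print(pages("boolean") | pages("algebra"))   # {1, 2, 3, 4, 5}
# "boolean" -"logic"      -> set difference
print(pages("boolean") - pages("logic"))     # {2}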
COVID-19
Coronavirus disease 2019 (COVID-19) is a contagious disease caused by the coronavirus SARS-CoV-2. In January 2020 the disease spread worldwide, resulting in the COVID-19 pandemic. The symptoms of COVID-19 can vary but often include fever, fatigue, cough, breathing difficulties, loss of smell, and loss of taste. Symptoms may begin one to fourteen days after exposure to the virus. At least a third of people who are infected do not develop noticeable symptoms. Of those who develop symptoms noticeable enough to be classified as patients, most (81%) develop mild to moderate symptoms (up to mild pneumonia), while 14% develop severe symptoms (dyspnea, hypoxia, or more than 50% lung involvement on imaging), and 5% develop critical symptoms (respiratory failure, shock, or multiorgan dysfunction). Older people have a higher risk of developing severe symptoms. Some complications result in death. Some people continue to experience a range of effects (long COVID) for months or years after infection, and damage to organs has been observed. Multi-year studies on the long-term effects are ongoing. COVID-19 transmission occurs when infectious particles are breathed in or come into contact with the eyes, nose, or mouth. The risk is highest when people are in close proximity, but small airborne particles containing the virus can remain suspended in the air and travel over longer distances, particularly indoors. Transmission can also occur when people touch their eyes, nose or mouth after touching surfaces or objects that have been contaminated by the virus. People remain contagious for up to 20 days and can spread the virus even if they do not develop symptoms. Testing methods for COVID-19 to detect the virus's nucleic acid include real-time reverse transcription polymerase chain reaction (RT-PCR), transcription-mediated amplification, and reverse transcription loop-mediated isothermal amplification (RT-LAMP) from a nasopharyngeal swab. Several COVID-19 vaccines have been approved and distributed in various countries, many of which have initiated mass vaccination campaigns. Other preventive measures include physical or social distancing, quarantining, ventilation of indoor spaces, use of face masks or coverings in public, covering coughs and sneezes, hand washing, and keeping unwashed hands away from the face. While drugs have been developed to inhibit the virus, the primary treatment is still symptomatic, managing the disease through supportive care, isolation, and experimental measures. The first known case was identified in Wuhan, China, in December 2019. Most scientists believe the SARS-CoV-2 virus entered human populations through natural zoonosis, similar to the SARS-CoV-1 and MERS-CoV outbreaks, and consistent with other pandemics in human history. Social and environmental factors including climate change, natural ecosystem destruction and wildlife trade increased the likelihood of such zoonotic spillover.
Nomenclature
During the initial outbreak in Wuhan, the virus and disease were commonly referred to as "coronavirus" and "Wuhan coronavirus", with the disease sometimes called "Wuhan pneumonia". In the past, many diseases have been named after geographical locations, such as the Spanish flu, Middle East respiratory syndrome, and Zika virus.
In January 2020, the World Health Organization (WHO) recommended 2019-nCoV and 2019-nCoV acute respiratory disease as interim names for the virus and disease per 2015 guidance and international guidelines against using geographical locations or groups of people in disease and virus names to prevent social stigma. The official names COVID-19 and SARS-CoV-2 were issued by the WHO on 11 February 2020, with COVID-19 being shorthand for "coronavirus disease 2019". The WHO additionally uses "the COVID-19 virus" and "the virus responsible for COVID-19" in public communications.
Symptoms and signs
Complications
Complications may include pneumonia, acute respiratory distress syndrome (ARDS), multi-organ failure, septic shock, and death. Cardiovascular complications may include heart failure, arrhythmias (including atrial fibrillation), heart inflammation, thrombosis, particularly venous thromboembolism, and endothelial cell injury and dysfunction. Approximately 20–30% of people who present with COVID-19 have elevated liver enzymes, reflecting liver injury. Neurologic manifestations include seizure, stroke, encephalitis, and Guillain–Barré syndrome (which includes loss of motor functions). Following the infection, children may develop paediatric multisystem inflammatory syndrome, which has symptoms similar to Kawasaki disease and can be fatal. In very rare cases, acute encephalopathy can occur, and it can be considered in those who have been diagnosed with COVID-19 and have an altered mental status. According to the US Centers for Disease Control and Prevention, pregnant women are at increased risk of becoming seriously ill from COVID-19. This is because pregnant women with COVID-19 appear to be more likely to develop respiratory and obstetric complications that can lead to miscarriage, premature delivery and intrauterine growth restriction. Fungal infections such as aspergillosis, candidiasis, cryptococcosis and mucormycosis have been recorded in people recovering from COVID-19.
Cause
COVID-19 is caused by infection with a strain of coronavirus known as "severe acute respiratory syndrome coronavirus 2" (SARS-CoV-2).
Transmission
Virology
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a novel severe acute respiratory syndrome coronavirus. It was first isolated from three people with pneumonia connected to the cluster of acute respiratory illness cases in Wuhan. All structural features of the novel SARS-CoV-2 virus particle occur in related coronaviruses in nature, particularly in Rhinolophus sinicus (Chinese horseshoe bats). Outside the human body, the virus is destroyed by household soap, which bursts its protective bubble. Hospital disinfectants, alcohols, heat, povidone-iodine, and ultraviolet-C (UV-C) irradiation are also effective disinfection methods for surfaces. SARS-CoV-2 is closely related to the original SARS-CoV. It is thought to have an animal (zoonotic) origin. Genetic analysis has revealed that the coronavirus genetically clusters with the genus Betacoronavirus, in subgenus Sarbecovirus (lineage B) together with two bat-derived strains. It is 96% identical at the whole genome level to other bat coronavirus samples (BatCov RaTG13). The structural proteins of SARS-CoV-2 include membrane glycoprotein (M), envelope protein (E), nucleocapsid protein (N), and the spike protein (S).
The M protein of SARS-CoV-2 is about 98% similar to the M protein of bat SARS-CoV, maintains around 98% homology with pangolin SARS-CoV, and has 90% homology with the M protein of SARS-CoV; whereas, the similarity is only around 38% with the M protein of MERS-CoV.
SARS-CoV-2 variants
The many thousands of SARS-CoV-2 variants are grouped into either clades or lineages. The WHO, in collaboration with partners, expert networks, national authorities, institutions and researchers, has established nomenclature systems for naming and tracking SARS-CoV-2 genetic lineages by GISAID, Nextstrain and Pango. The expert group convened by the WHO recommended the labelling of variants using letters of the Greek alphabet, for example, Alpha, Beta, Delta, and Gamma, giving the justification that they "will be easier and more practical to be discussed by non-scientific audiences". Nextstrain divides the variants into five clades (19A, 19B, 20A, 20B, and 20C), while GISAID divides them into seven (L, O, V, S, G, GH, and GR). The Pango tool groups variants into lineages, with many circulating lineages being classed under the B.1 lineage. Several notable variants of SARS-CoV-2 emerged throughout 2020. Cluster 5 emerged among minks and mink farmers in Denmark. After strict quarantines and the slaughter of all the country's mink, the cluster was assessed to no longer be circulating among humans in Denmark as of 1 February 2021. There are five dominant variants of SARS-CoV-2 spreading among global populations: the Alpha variant (B.1.1.7, formerly called the UK variant), first found in London and Kent; the Beta variant (B.1.351, formerly called the South Africa variant); the Gamma variant (P.1, formerly called the Brazil variant); the Delta variant (B.1.617.2, formerly called the India variant); and the Omicron variant (B.1.1.529), which had spread to 57 countries as of 7 December. On December 19, 2023, the WHO declared that another distinctive variant, JN.1, had emerged as a "variant of interest". Though the WHO expected an increase in cases globally, particularly for countries entering winter, the overall global health risk was considered low.
Pathophysiology
The SARS-CoV-2 virus can infect a wide range of cells and systems of the body. COVID-19 is most known for affecting the upper respiratory tract (sinuses, nose, and throat) and the lower respiratory tract (windpipe and lungs). The lungs are the organs most affected by COVID-19 because the virus accesses host cells via the receptor for the enzyme angiotensin-converting enzyme 2 (ACE2), which is most abundant on the surface of type II alveolar cells of the lungs. The virus uses a special surface glycoprotein called a "spike" to connect to the ACE2 receptor and enter the host cell.
Respiratory tract
Following viral entry, COVID-19 infects the ciliated epithelium of the nasopharynx and upper airways. Autopsies of people who died of COVID-19 have found diffuse alveolar damage and lymphocyte-containing inflammatory infiltrates within the lung. On CT scans of COVID-19-infected lungs, white patches containing fluid, known as ground-glass opacities (GGO) or simply ground glass, were observed. This tended to correlate with the clear jelly liquid found in lung autopsies of people who died of COVID-19. One possibility addressed in medical research is that hyaluronic acid (HA) could be the leading factor for this observation of the clear jelly liquid found in the lungs, in what could be a hyaluronic storm, in conjunction with a cytokine storm.
Nervous system
One common symptom, loss of smell, results from infection of the support cells of the olfactory epithelium, with subsequent damage to the olfactory neurons. The involvement of both the central and peripheral nervous system in COVID-19 has been reported in many medical publications. It is clear that many people with COVID-19 exhibit neurological or mental health issues. The virus is not detected in the central nervous system (CNS) of the majority of people with COVID-19 who also have neurological issues. However, SARS-CoV-2 has been detected at low levels in the brains of those who have died from COVID-19, but these results need to be confirmed. While virus has been detected in cerebrospinal fluid of autopsies, the exact mechanism by which it invades the CNS remains unclear and may first involve invasion of peripheral nerves given the low levels of ACE2 in the brain. The virus may also enter the bloodstream from the lungs and cross the blood–brain barrier to gain access to the CNS, possibly within an infected white blood cell. Research conducted when Alpha was the dominant variant has suggested COVID-19 may cause brain damage. Later research showed that all variants studied (including Omicron) killed brain cells, but the exact cells killed varied by variant. It is unknown if such damage is temporary or permanent. Observed individuals infected with COVID-19 (most with mild cases) lost an additional 0.2% to 2% of brain tissue in regions of the brain connected to the sense of smell compared with uninfected individuals, and the overall effect on the brain was equivalent on average to at least one extra year of normal ageing; infected individuals also scored lower on several cognitive tests. All effects were more pronounced among older ages.
Gastrointestinal tract
The virus also affects gastrointestinal organs, as ACE2 is abundantly expressed in the glandular cells of gastric, duodenal and rectal epithelium as well as endothelial cells and enterocytes of the small intestine.
Cardiovascular system
The virus can cause acute myocardial injury and chronic damage to the cardiovascular system. An acute cardiac injury was found in 12% of infected people admitted to the hospital in Wuhan, China, and is more frequent in severe disease. Rates of cardiovascular symptoms are high, owing to the systemic inflammatory response and immune system disorders during disease progression, but acute myocardial injuries may also be related to ACE2 receptors in the heart. ACE2 receptors are highly expressed in the heart and are involved in heart function. A high incidence of thrombosis and venous thromboembolism occurs in people transferred to intensive care units with COVID-19 infections, and may be related to poor prognosis. Blood vessel dysfunction and clot formation (as suggested by high D-dimer levels caused by blood clots) may have a significant role in mortality; incidents of clots leading to pulmonary embolisms and ischaemic events (strokes) within the brain have been found as complications leading to death in people infected with COVID-19. Infection may initiate a chain of vasoconstrictive responses within the body, including pulmonary vasoconstriction, a possible mechanism in which oxygenation decreases during pneumonia. Furthermore, damage of arterioles and capillaries was found in brain tissue samples of people who died from COVID-19. COVID-19 may also cause substantial structural changes to blood cells, sometimes persisting for months after hospital discharge.
A low level of blood lymphocytes may result from the virus acting through ACE2-related entry into lymphocytes.
Kidneys
Another common cause of death is complications related to the kidneys. Early reports show that up to 30% of people hospitalised with COVID-19, both in China and in New York, have experienced some injury to their kidneys, including some persons with no previous kidney problems.
Immunopathology
Although SARS-CoV-2 has a tropism for ACE2-expressing epithelial cells of the respiratory tract, people with severe COVID-19 have symptoms of systemic hyperinflammation. Clinical laboratory findings of elevated IL-2, IL-6 and IL-7, as well as the following, suggest an underlying immunopathology:
Granulocyte-macrophage colony-stimulating factor (GM-CSF)
Interferon gamma-induced protein 10 (IP-10)
Monocyte chemoattractant protein 1 (MCP-1)
Macrophage inflammatory protein 1-alpha (MIP-1 alpha)
Tumour necrosis factor (TNF-α), indicative of cytokine release syndrome (CRS)
Interferon alpha plays a complex, Janus-faced role in the pathogenesis of COVID-19. Although it promotes the elimination of virus-infected cells, it also upregulates the expression of ACE2, thereby facilitating the entry of the SARS-CoV-2 virus into cells and its replication. A competition of negative feedback loops (via protective effects of interferon alpha) and positive feedback loops (via upregulation of ACE2) is assumed to determine the fate of people with COVID-19. Additionally, people with COVID-19 and acute respiratory distress syndrome (ARDS) have classical serum biomarkers of CRS, including elevated C-reactive protein (CRP), lactate dehydrogenase (LDH), D-dimer, and ferritin. Systemic inflammation results in vasodilation, allowing inflammatory lymphocytic and monocytic infiltration of the lung and the heart. In particular, pathogenic GM-CSF-secreting T cells were shown to correlate with the recruitment of inflammatory IL-6-secreting monocytes and severe lung pathology in people with COVID-19. Lymphocytic infiltrates have also been reported at autopsy.
Viral and host factors
Virus proteins
Multiple viral and host factors affect the pathogenesis of the virus. The S-protein, otherwise known as the spike protein, is the viral component that attaches to the host receptor via the ACE2 receptors. It includes two subunits: S1 and S2. S1 determines the virus-host range and cellular tropism via the receptor-binding domain. S2 mediates the membrane fusion of the virus to its potential cell host via HR1 and HR2, which are heptad repeat regions. Studies have shown that the S1 domain induced IgG and IgA antibodies at much higher levels. Spike protein expression is the focus of many effective COVID-19 vaccines. The M protein is the viral protein responsible for the transmembrane transport of nutrients. It is the cause of the bud release and the formation of the viral envelope. The N and E proteins are accessory proteins that interfere with the host's immune response.
Host factors
Human angiotensin converting enzyme 2 (hACE2) is the host factor that the SARS-CoV-2 virus targets, causing COVID-19. Theoretically, the usage of angiotensin receptor blockers (ARB) and ACE inhibitors upregulating ACE2 expression might increase morbidity with COVID-19, though animal data suggest some potential protective effect of ARB; however, no clinical studies have proven susceptibility or outcomes. Until further data are available, guidelines and recommendations for people with hypertension remain in place.
The effect of the virus on ACE2 cell surfaces leads to leukocytic infiltration, increased blood vessel permeability, alveolar wall permeability, as well as decreased secretion of lung surfactants. These effects cause the majority of the respiratory symptoms. However, the aggravation of local inflammation causes a cytokine storm, eventually leading to a systemic inflammatory response syndrome. Among healthy adults not exposed to SARS-CoV-2, about 35% have CD4+ T cells that recognise the SARS-CoV-2 S protein (particularly the S2 subunit) and about 50% react to other proteins of the virus, suggesting cross-reactivity from previous common colds caused by other coronaviruses. It is unknown whether different persons use similar antibody genes in response to COVID-19.
Host cytokine response
The severity of the inflammation can be attributed to the severity of what is known as the cytokine storm. Levels of interleukin 1B, interferon-gamma, interferon-inducible protein 10, and monocyte chemoattractant protein 1 were all associated with COVID-19 disease severity. Treatment has been proposed to combat the cytokine storm, as it remains one of the leading causes of morbidity and mortality in COVID-19 disease. A cytokine storm is due to an acute hyperinflammatory response that is responsible for clinical illness in an array of diseases, but in COVID-19 it is related to worse prognosis and increased fatality. The storm causes acute respiratory distress syndrome, blood clotting events such as strokes, myocardial infarction, encephalitis, acute kidney injury, and vasculitis. The production of IL-1, IL-2, IL-6, TNF-alpha, and interferon-gamma, all crucial components of normal immune responses, inadvertently becomes the cause of a cytokine storm. The cells of the central nervous system, the microglia, neurons, and astrocytes, are also involved in the release of pro-inflammatory cytokines affecting the nervous system, and effects of cytokine storms toward the CNS are not uncommon.
Pregnancy response
There are many unknowns for pregnant women during the COVID-19 pandemic. Given that they are prone to have complications and severe disease infection with other types of coronaviruses, they have been identified as a vulnerable group and advised to take supplementary preventive measures. Physiological responses to pregnancy can include:
Immunological: The immunological response to COVID-19, like that to other viruses, depends on a working immune system. It adapts during pregnancy to allow the development of the foetus, whose genetic load is only partially shared with the mother, leading to a different immunological reaction to infections during the course of pregnancy.
Respiratory: Many factors can make pregnant women more vulnerable to severe respiratory infections. One of them is the total reduction of the lungs' capacity and inability to clear secretions.
Coagulation: During pregnancy, there are higher levels of circulating coagulation factors, and the pathogenesis of SARS-CoV-2 infection can be implicated. Thromboembolic events with associated mortality are a risk for pregnant women.
However, from the evidence base, it is difficult to conclude whether pregnant women are at increased risk of grave consequences of this virus. In addition to the above, other clinical studies have shown that SARS-CoV-2 can affect the period of pregnancy in different ways. On the one hand, there is little evidence of its impact up to 12 weeks gestation.
On the other hand, COVID-19 infection may cause increased rates of unfavourable outcomes in the course of the pregnancy. Some examples of these could be foetal growth restriction, preterm birth, and perinatal mortality, which refers to foetal death past 22 or 28 completed weeks of pregnancy as well as death among live-born children up to seven completed days of life. For preterm birth, a 2023 review indicates that there appears to be a correlation with COVID-19. Unvaccinated women in later stages of pregnancy with COVID-19 are more likely than other people to need very intensive care. Babies born to mothers with COVID-19 are more likely to have breathing problems. Pregnant women are strongly encouraged to get vaccinated.
Diagnosis
COVID-19 can provisionally be diagnosed on the basis of symptoms and confirmed using reverse transcription polymerase chain reaction (RT-PCR) or other nucleic acid testing of infected secretions. Along with laboratory testing, chest CT scans may be helpful to diagnose COVID-19 in individuals with a high clinical suspicion of infection. Detection of a past infection is possible with serological tests, which detect antibodies produced by the body in response to the infection.
Viral testing
The standard methods of testing for presence of SARS-CoV-2 are nucleic acid tests, which detect the presence of viral RNA fragments. As these tests detect RNA but not infectious virus, their "ability to determine duration of infectivity of patients is limited". The test is typically done on respiratory samples obtained by a nasopharyngeal swab; however, a nasal swab or sputum sample may also be used. Results are generally available within hours. The WHO has published several testing protocols for the disease. Several laboratories and companies have developed serological tests, which detect antibodies produced by the body in response to infection. Some have been evaluated by Public Health England and approved for use in the UK. The University of Oxford's CEBM has pointed to mounting evidence that "a good proportion of 'new' mild cases and people re-testing positives after quarantine or discharge from hospital are not infectious, but are simply clearing harmless virus particles which their immune system has efficiently dealt with" and has called for "an international effort to standardize and periodically calibrate testing". In September 2020, the UK government issued "guidance for procedures to be implemented in laboratories to provide assurance of positive SARS-CoV-2 RNA results during periods of low prevalence, when there is a reduction in the predictive value of positive test results".
Imaging
Chest CT scans may be helpful to diagnose COVID-19 in individuals with a high clinical suspicion of infection but are not recommended for routine screening. Bilateral multilobar ground-glass opacities with a peripheral, asymmetric, and posterior distribution are common in early infection. Subpleural dominance, crazy paving (lobular septal thickening with variable alveolar filling), and consolidation may appear as the disease progresses. Characteristic imaging features on chest radiographs and computed tomography (CT) of people who are symptomatic include asymmetric peripheral ground-glass opacities without pleural effusions. Many groups have created COVID-19 datasets that include imagery, such as the Italian Radiological Society, which has compiled an international online database of imaging findings for confirmed cases.
Due to overlap with other infections such as adenovirus, imaging without confirmation by rRT-PCR is of limited specificity in identifying COVID-19. A large study in China compared chest CT results to PCR and demonstrated that though imaging is less specific for the infection, it is faster and more sensitive.
Coding
In late 2019, the WHO assigned emergency ICD-10 disease codes U07.1 for deaths from lab-confirmed SARS-CoV-2 infection and U07.2 for deaths from clinically or epidemiologically diagnosed COVID-19 without lab-confirmed SARS-CoV-2 infection.
Pathology
The main pathological findings at autopsy are:
Macroscopy: pericarditis, lung consolidation and pulmonary oedema
Lung findings:
Minor serous exudation, minor fibrin exudation
Pulmonary oedema, pneumocyte hyperplasia, large atypical pneumocytes, interstitial inflammation with lymphocytic infiltration and multinucleated giant cell formation
Diffuse alveolar damage (DAD) with diffuse alveolar exudates. DAD is the cause of acute respiratory distress syndrome (ARDS) and severe hypoxaemia.
Organisation of exudates in alveolar cavities and pulmonary interstitial fibrosis
Plasmocytosis in bronchoalveolar lavage (BAL)
Blood and vessels: disseminated intravascular coagulation (DIC); leukoerythroblastic reaction, endotheliitis, hemophagocytosis
Heart: cardiac muscle cell necrosis
Liver: microvesicular steatosis
Nose: shedding of olfactory epithelium
Brain: infarction
Kidneys: acute tubular damage
Spleen: white pulp depletion
Prevention
Preventive measures to reduce the chances of infection include getting vaccinated, staying at home, wearing a mask in public, avoiding crowded places, keeping distance from others, ventilating indoor spaces, managing potential exposure durations, washing hands with soap and water often and for at least twenty seconds, practising good respiratory hygiene, and avoiding touching the eyes, nose, or mouth with unwashed hands. Those diagnosed with COVID-19 or who believe they may be infected are advised by the CDC to stay home except to get medical care, call ahead before visiting a healthcare provider, wear a face mask before entering the healthcare provider's office and when in any room or vehicle with another person, cover coughs and sneezes with a tissue, regularly wash hands with soap and water and avoid sharing personal household items. The first COVID-19 vaccine was granted regulatory approval on 2 December 2020 by the UK medicines regulator MHRA. It was evaluated for emergency use authorisation (EUA) status by the US FDA, and in several other countries. Initially, the US National Institutes of Health guidelines did not recommend any medication for prevention of COVID-19, before or after exposure to the SARS-CoV-2 virus, outside the setting of a clinical trial. Without a vaccine, other prophylactic measures, or effective treatments, a key part of managing COVID-19 is trying to decrease and delay the epidemic peak, known as "flattening the curve". This is done by slowing the infection rate to decrease the risk of health services being overwhelmed, allowing for better treatment of active cases, and delaying additional cases until effective treatments or a vaccine become available.
Vaccine
Face masks and respiratory hygiene
Indoor ventilation and avoiding crowded indoor spaces
The CDC states that avoiding crowded indoor spaces reduces the risk of COVID-19 infection. When indoors, increasing the rate of air change, decreasing recirculation of air and increasing the use of outdoor air can reduce transmission.
The WHO recommends ventilation and air filtration in public spaces to help clear out infectious aerosols. Exhaled respiratory particles can build up within enclosed spaces with inadequate ventilation. The risk of COVID-19 infection increases especially in spaces where people engage in physical exertion or raise their voice (e.g., exercising, shouting, singing), as this increases exhalation of respiratory droplets. Prolonged exposure to these conditions, typically more than 15 minutes, leads to higher risk of infection. Displacement ventilation with large natural inlets can move stale air directly to the exhaust in laminar flow while significantly reducing the concentration of droplets and particles. Passive ventilation reduces energy consumption and maintenance costs but may lack controllability and heat recovery. Displacement ventilation can also be achieved mechanically, with higher energy and maintenance costs. The use of large ducts and openings helps to prevent mixing in closed environments. Recirculation and mixing should be avoided because recirculation prevents dilution of harmful particles and redistributes possibly contaminated air, and mixing increases the concentration and range of infectious particles and keeps larger particles in the air.
Hand-washing and hygiene
Thorough hand hygiene after any cough or sneeze is required. The WHO also recommends that individuals wash hands often with soap and water for at least twenty seconds, especially after going to the toilet or when hands are visibly dirty, before eating and after blowing one's nose. When soap and water are not available, the CDC recommends using an alcohol-based hand sanitiser with at least 60% alcohol. For areas where commercial hand sanitisers are not readily available, the WHO provides two formulations for local production. In these formulations, the antimicrobial activity arises from ethanol or isopropanol. Hydrogen peroxide is used to help eliminate bacterial spores in the alcohol; it is "not an active substance for hand antisepsis". Glycerol is added as a humectant.
Social distancing
Social distancing (also known as physical distancing) includes infection control actions intended to slow the spread of the disease by minimising close contact between individuals. Methods include quarantines; travel restrictions; and the closing of schools, workplaces, stadiums, theatres, or shopping centres. Individuals may apply social distancing methods by staying at home, limiting travel, avoiding crowded areas, using no-contact greetings, and physically distancing themselves from others. In 2020, outbreaks occurred in prisons due to crowding and an inability to enforce adequate social distancing. In the United States, the prisoner population is ageing and many of them are at high risk for poor outcomes from COVID-19 due to high rates of coexisting heart and lung disease, and poor access to high-quality healthcare.
Surface cleaning
After being expelled from the body, coronaviruses can survive on surfaces for hours to days. If a person touches a contaminated surface, they may deposit the virus at the eyes, nose, or mouth, where it can enter the body and cause infection. Evidence indicates that contact with infected surfaces is not the main driver of COVID-19, leading to recommendations for optimised disinfection procedures to avoid issues such as the increase of antimicrobial resistance through the use of inappropriate cleaning products and processes.
Deep cleaning and other surface sanitation has been criticised as hygiene theatre, giving a false sense of security against something primarily spread through the air. The amount of time that the virus can survive depends significantly on the type of surface, the temperature, and the humidity. Coronaviruses die very quickly when exposed to the UV light in sunlight. Like other enveloped viruses, SARS-CoV-2 survives longest when the temperature is at room temperature or lower, and when the relative humidity is low (<50%). On many surfaces, including glass, some types of plastic, stainless steel, and skin, the virus can remain infective for several days indoors at room temperature, or even about a week under ideal conditions. On some surfaces, including cotton fabric and copper, the virus usually dies after a few hours. The virus dies faster on porous surfaces than on non-porous surfaces due to capillary action within pores and faster aerosol droplet evaporation. However, of the many surfaces tested, two with the longest survival times are N95 respirator masks and surgical masks, both of which are considered porous surfaces. The CDC says that in most situations, cleaning surfaces with soap or detergent, not disinfecting, is enough to reduce the risk of transmission. The CDC recommends that if a COVID-19 case is suspected or confirmed at a facility such as an office or day care, all areas such as offices, bathrooms, common areas, and shared electronic equipment like tablets, touch screens, keyboards, remote controls, and ATMs used by the ill persons should be disinfected. Surfaces may be decontaminated with the following:
62–71% ethanol
50–100% isopropanol
0.1% sodium hypochlorite
0.5% hydrogen peroxide
0.2–7.5% povidone-iodine
50–200 ppm hypochlorous acid
Other solutions, such as benzalkonium chloride and chlorhexidine gluconate, are less effective. Ultraviolet germicidal irradiation may also be used, although popular devices require exposure and may deteriorate some materials over time. A datasheet listing the substances authorised for disinfection in the food industry (including suspension or surface tested, kind of surface, use dilution, disinfectant and inoculum volumes) can be seen in the supplementary material of a 2021 Foods article.
Self-isolation
Self-isolation at home has been recommended for those diagnosed with COVID-19 and those who suspect they have been infected. Health agencies have issued detailed instructions for proper self-isolation. Many governments have mandated or recommended self-quarantine for entire populations. The strongest self-quarantine instructions have been issued to those in high-risk groups. Those who may have been exposed to someone with COVID-19 and those who have recently travelled to a country or region with widespread transmission have been advised to self-quarantine for 14 days from the time of last possible exposure.
International travel-related control measures
A 2021 Cochrane rapid review found that, based upon low-certainty evidence, international travel-related control measures such as restricting cross-border travel may help to contain the spread of COVID-19. Additionally, symptom/exposure-based screening measures at borders may miss many positive cases. While test-based border screening measures may be more effective, they could also miss many positive cases if only conducted upon arrival without follow-up.
The review concluded that a minimum 10-day quarantine may be beneficial in preventing the spread of COVID-19 and may be more effective if combined with an additional control measure like border screening.
Treatment
Prognosis and risk factors
The severity of COVID-19 varies. The disease may take a mild course with few or no symptoms, resembling other common upper respiratory diseases such as the common cold. In 3–4% of cases (7.4% for those over age 65) symptoms are severe enough to cause hospitalisation. Mild cases typically recover within two weeks, while those with severe or critical disease may take three to six weeks to recover. Among those who have died, the time from symptom onset to death has ranged from two to eight weeks. The Italian Istituto Superiore di Sanità reported that the median time between the onset of symptoms and death was twelve days, with seven of those spent hospitalised. However, people transferred to an ICU had a median time of ten days between hospitalisation and death. Abnormal sodium levels during hospitalisation with COVID-19 are associated with poor prognoses: high sodium with a greater risk of death, and low sodium with an increased chance of needing ventilator support. Prolonged prothrombin time and elevated C-reactive protein levels on admission to the hospital are associated with a severe course of COVID-19 and with a transfer to the ICU. Some early studies suggest 10% to 20% of people with COVID-19 will experience symptoms lasting longer than a month. A majority of those who were admitted to hospital with severe disease report long-term problems, including fatigue and shortness of breath. On 30 October 2020, WHO chief Tedros Adhanom warned that "to a significant number of people, the COVID virus poses a range of serious long-term effects". He has described the vast spectrum of COVID-19 symptoms that fluctuate over time as "really concerning". They range from fatigue, a cough and shortness of breath, to inflammation and injury of major organs including the lungs and heart, and also neurological and psychological effects. Symptoms often overlap and can affect any system in the body. Infected people have reported cyclical bouts of fatigue, headaches, months of complete exhaustion, mood swings, and other symptoms. Tedros therefore concluded that a strategy of achieving herd immunity by infection, rather than vaccination, is "morally unconscionable and unfeasible". In terms of hospital readmissions, about 9% of 106,000 individuals had to return for hospital treatment within two months of discharge; the average time to readmission was eight days since the first hospital visit. Several risk factors have been identified as causes of multiple admissions to a hospital facility. Among these are advanced age (above 65 years of age) and presence of a chronic condition such as diabetes, COPD, heart failure or chronic kidney disease. According to scientific reviews, smokers are more likely to require intensive care or die compared to non-smokers. Acting on the same ACE2 pulmonary receptors affected by smoking, air pollution has been correlated with the disease. Short-term and chronic exposure to air pollution seems to enhance morbidity and mortality from COVID-19. Pre-existing heart and lung diseases, and also obesity, especially in conjunction with fatty liver disease, contribute to an increased health risk of COVID-19. It is also assumed that those who are immunocompromised are at higher risk of getting severely sick from SARS-CoV-2.
One research study that looked into the COVID-19 infections in hospitalised kidney transplant recipients found a mortality rate of 11%. Men with untreated hypogonadism were 2.4 times more likely than men with eugonadism to be hospitalised if they contracted COVID-19; hypogonadal men treated with testosterone were less likely to be hospitalised for COVID-19 than men who were not treated for hypogonadism.
Genetic risk factors
Genetics plays an important role in the ability to fight off COVID-19. For instance, those who do not produce detectable type I interferons or produce auto-antibodies against them may get much sicker from COVID-19. Genetic screening is able to detect interferon effector genes. Some genetic variants are risk factors in specific populations. For instance, an allele of the DOCK2 gene (dedicator of cytokinesis 2 gene) is a common risk factor in Asian populations but much less common in Europe. The mutation leads to lower expression of DOCK2, especially in younger people with severe COVID-19 infections. In fact, many other genes and genetic variants have been found that determine the outcome of SARS-CoV-2 infections.
Children
While very young children have experienced lower rates of infection, older children have a rate of infection that is similar to the population as a whole. Children are likely to have milder symptoms and are at lower risk of severe disease than adults. The CDC reports that in the US roughly a third of hospitalised children were admitted to the ICU, while a European multinational study of hospitalised children from June 2020 found that about 8% of children admitted to a hospital needed intensive care. Four of the 582 children (0.7%) in the European study died, but the actual mortality rate may be "substantially lower" since milder cases that did not seek medical help were not included in the study.
Long-term effects
Around 10% to 30% of non-hospitalised people with COVID-19 go on to develop long COVID. For those who do need hospitalisation, the incidence of long-term effects is over 50%. Long COVID is an often severe multisystem disease with a large set of symptoms. There are likely various, possibly coinciding, causes. Organ damage from the acute infection can explain a part of the symptoms, but long COVID is also observed in people where organ damage seems to be absent. By a variety of mechanisms, the lungs are the organs most affected in COVID-19. In people requiring hospital admission, up to 98% of CT scans performed show lung abnormalities after 28 days of illness, even if they had clinically improved. People with advanced age, severe disease, prolonged ICU stays, or who smoke are more likely to have long-lasting effects, including pulmonary fibrosis. Overall, approximately one-third of those investigated after four weeks will have findings of pulmonary fibrosis or reduced lung function as measured by DLCO, even in asymptomatic people, but with the suggestion of continuing improvement with the passing of more time. After severe disease, lung function can take anywhere from three months to a year or more to return to previous levels. The risks of cognitive deficit, dementia, psychotic disorders, and epilepsy or seizures persist at an increased level two years after infection.
Immunity
The immune response by humans to the SARS-CoV-2 virus occurs as a combination of cell-mediated immunity and antibody production, just as with most other infections.
B cells interact with T cells and begin dividing before selection into the plasma cell, partly on the basis of their affinity for antigen. Since SARS-CoV-2 has been in the human population only since December 2019, it remains unknown if the immunity is long-lasting in people who recover from the disease. The presence of neutralising antibodies in blood strongly correlates with protection from infection, but the level of neutralising antibody declines with time. Those with asymptomatic or mild disease had undetectable levels of neutralising antibody two months after infection. In another study, the level of neutralising antibodies fell four-fold one to four months after the onset of symptoms. However, the lack of antibodies in the blood does not mean antibodies will not be rapidly produced upon re-exposure to SARS-CoV-2. Memory B cells specific for the spike and nucleocapsid proteins of SARS-CoV-2 last for at least six months after the appearance of symptoms. As of August 2021, reinfection with COVID-19 was possible but uncommon. The first case of reinfection was documented in August 2020. A systematic review found 17 cases of confirmed reinfection in medical literature as of May 2021. With the Omicron variant, as of 2022, reinfections have become common, although it is unclear how common. COVID-19 reinfections are thought to likely be less severe than primary infections, especially if one was previously infected by the same variant.
Mortality
Several measures are commonly used to quantify mortality. These numbers vary by region and over time and are influenced by the volume of testing, healthcare system quality, treatment options, time since the initial outbreak, and population characteristics such as age, sex, and overall health. The mortality rate reflects the number of deaths within a specific demographic group divided by the population of that demographic group. Consequently, the mortality rate reflects the prevalence as well as the severity of the disease within a given population. Mortality rates are highly correlated to age, with relatively low rates for young people and relatively high rates among the elderly. In fact, one relevant factor in mortality rates is the age structure of the countries' populations. For example, the case fatality rate for COVID-19 is lower in India than in the US, since India's younger population represents a larger percentage than in the US.
Case fatality rate
The case fatality rate (CFR) reflects the number of deaths divided by the number of diagnosed cases within a given time interval. Based on Johns Hopkins University statistics, a global death-to-case ratio can be calculated from cumulative deaths and confirmed cases; the number varies by region.
Infection fatality rate
A key metric in gauging the severity of COVID-19 is the infection fatality rate (IFR), also referred to as the infection fatality ratio or infection fatality risk. This metric is calculated by dividing the total number of deaths from the disease by the total number of infected individuals; hence, in contrast to the CFR, the IFR incorporates asymptomatic and undiagnosed infections as well as reported cases.
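Writing D for cumulative deaths from the disease, C for diagnosed (reported) cases, and I for all infections, including asymptomatic and undiagnosed ones, the two measures just defined can be summarised as follows:

\[
\mathrm{CFR} = \frac{D}{C}, \qquad \mathrm{IFR} = \frac{D}{I}.
\]

Since I ≥ C while both ratios share the numerator D, the IFR for a given population can never exceed the CFR.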
That study also found that most of these differences in IFR reflected corresponding differences in the age composition of the population and age-specific infection rates; in particular, the metaregression estimate of IFR is very low for children and younger adults (e.g., 0.002% at age 10 and 0.01% at age 25) but increases progressively to 0.4% at age 55, 1.4% at age 65, 4.6% at age 75, and 15% at age 85. These results were also highlighted in a December 2020 report issued by the WHO. An analysis of those IFR estimates indicates that COVID‑19 is hazardous not only for the elderly but also for middle-aged adults, for whom the infection fatality rate is two orders of magnitude greater than the annualised risk of a fatal automobile accident, and far more dangerous than seasonal influenza.
Earlier estimates of IFR
At an early stage of the pandemic, the World Health Organization reported estimates of IFR between 0.3% and 1%. On 2 July, the WHO's chief scientist reported that the average IFR estimate presented at a two-day WHO expert forum was about 0.6%. In August, the WHO found that studies incorporating data from broad serology testing in Europe showed IFR estimates converging at approximately 0.5–1%. Firm lower limits of IFRs have been established in a number of locations, such as New York City and Bergamo in Italy, since the IFR cannot be less than the population fatality rate. (After sufficient time, however, people can get reinfected.) As of 10 July, in New York City, with a population of 8.4 million, 23,377 individuals (18,758 confirmed and 4,619 probable) had died with COVID‑19 (0.3% of the population). Antibody testing in New York City suggested IFRs of ≈0.9% and ≈1.4%. In Bergamo province, 0.6% of the population had died. In September 2020, the U.S. Centers for Disease Control and Prevention (CDC) reported preliminary estimates of age-specific IFRs for public health planning purposes.
Sex differences
COVID‑19 case fatality rates are higher among men than women in most countries. However, in a few countries, such as India, Nepal, Vietnam, and Slovenia, case fatality rates are higher in women than in men. Globally, men are more likely to be admitted to the ICU and more likely to die. One meta-analysis found that globally, men were more likely to get COVID‑19 than women; there were approximately 55 men and 45 women per 100 infections (CI: 51.43–56.58). The Chinese Center for Disease Control and Prevention reported that the death rate was 2.8% for men and 1.7% for women. Later reviews in June 2020 indicated that there is no significant difference in susceptibility or in CFR between genders. One review acknowledges the different mortality rates in Chinese men, suggesting that they may be attributable to lifestyle choices such as smoking and drinking alcohol rather than genetic factors. Smoking, which in some countries such as China is mainly a male activity, contributes significantly to the higher case fatality rates among men. Sex-based immunological differences, the lower prevalence of smoking in women, and men developing comorbid conditions such as hypertension at a younger age than women could all have contributed to the higher mortality in men. In Europe as of February 2020, 57% of the infected people were men, and 72% of those who died with COVID‑19 were men. As of April 2020, the US government was not tracking sex-related data on COVID‑19 infections. Research has shown that viral illnesses like Ebola, HIV, influenza, and SARS affect men and women differently.
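To make the distinction between the fatality metrics defined under Mortality above concrete, here is a minimal sketch applying the two formulas; every count in it is a hypothetical placeholder, not real surveillance data, and only the definitions (deaths over diagnosed cases for the CFR, deaths over all infections for the IFR) come from the text above.

```python
# Illustrative sketch of the CFR and IFR definitions described above.
# All counts are hypothetical placeholders, not data from any outbreak.

def case_fatality_rate(deaths: int, diagnosed_cases: int) -> float:
    """CFR: deaths divided by *diagnosed* cases in a given interval."""
    return deaths / diagnosed_cases

def infection_fatality_rate(deaths: int, total_infections: int) -> float:
    """IFR: deaths divided by *all* infections, including asymptomatic
    and undiagnosed ones, so total_infections >= diagnosed_cases."""
    return deaths / total_infections

deaths = 1_000                  # hypothetical
diagnosed = 50_000              # hypothetical confirmed cases
estimated_infections = 200_000  # hypothetical, e.g. from serosurveys

print(f"CFR = {case_fatality_rate(deaths, diagnosed):.2%}")                  # 2.00%
print(f"IFR = {infection_fatality_rate(deaths, estimated_infections):.2%}")  # 0.50%
```

Because the IFR's denominator includes undetected infections, it can never exceed the CFR computed for the same population and period.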
Ethnic differences
In the US, a greater proportion of deaths due to COVID‑19 have occurred among African Americans and other minority groups. Structural factors that prevent them from practising social distancing include their concentration in crowded substandard housing and in "essential" occupations such as retail grocery workers, public transit employees, health-care workers, and custodial staff. Higher rates of lacking health insurance and of underlying conditions such as diabetes, hypertension, and heart disease also increase their risk of death. Similar issues affect Native American and Latino communities. The Dominican Republic offers a clear example of both gender and ethnic inequality: in this Latin American territory, there is great inequality and precariousness that especially affects Dominican women, particularly those of Haitian descent. According to a US health policy non-profit, 34% of American Indian and Alaska Native (AIAN) non-elderly adults are at risk of serious illness, compared with 21% of white non-elderly adults. The source attributes this to disproportionately high rates of many health conditions that may put them at higher risk, as well as living conditions such as lack of access to clean water. Leaders have called for efforts to research and address the disparities. In the UK, a greater proportion of deaths due to COVID‑19 have occurred in those of Black, Asian, and other ethnic minority backgrounds. DNA analysis has associated more severe impacts on patients, including a higher incidence of hospitalisation and greater vulnerability to the disease, with genetic variants at chromosomal region 3, features associated with European Neanderthal heritage. Carriers of that structure are at greater risk of developing a more severe form of the disease. The findings are from Professor Svante Pääbo and researchers he leads at the Max Planck Institute for Evolutionary Anthropology and the Karolinska Institutet. This admixture of modern human and Neanderthal genes is estimated to have occurred roughly between 50,000 and 60,000 years ago in Southern Europe.
Comorbidities
Biological factors (immune response) and general behaviour (habits) can strongly influence the consequences of COVID‑19. Most of those who die of COVID‑19 have pre-existing (underlying) conditions, including hypertension, diabetes mellitus, and cardiovascular disease. According to March data from the United States, 89% of those hospitalised had preexisting conditions. The Italian Istituto Superiore di Sanità reported that, of the 8.8% of deaths for which medical charts were available, 96.1% of people had at least one comorbidity, with the average person having 3.4 diseases. According to this report, the most common comorbidities are hypertension (66% of deaths), type 2 diabetes (29.8% of deaths), ischaemic heart disease (27.6% of deaths), atrial fibrillation (23.1% of deaths), and chronic renal failure (20.2% of deaths). The most critical respiratory comorbidities, according to the US Centers for Disease Control and Prevention (CDC), are moderate or severe asthma, pre-existing COPD, pulmonary fibrosis, and cystic fibrosis. Evidence from meta-analyses of several smaller studies also suggests that smoking can be associated with worse outcomes. When someone with existing respiratory problems is infected with COVID‑19, they might be at greater risk for severe symptoms.
COVID‑19 also poses a greater risk to people who misuse opioids and amphetamines, insofar as their drug use may have caused lung damage. In August 2020, the CDC issued a caution that tuberculosis (TB) infections could increase the risk of severe illness or death. The WHO recommended that people with respiratory symptoms be screened for both diseases, as testing positive for COVID‑19 could not rule out co-infections. Some projections have estimated that reduced TB detection due to the pandemic could result in 6.3 million additional TB cases and 1.4 million TB-related deaths by 2025.
History
The virus is thought to be of natural animal origin, most likely through spillover infection. A joint study conducted in early 2021 by the People's Republic of China and the World Health Organization indicated that the virus descended from a coronavirus that infects wild bats, and likely spread to humans through an intermediary wildlife host. There are several theories about where the index case originated, and investigations into the origin of the pandemic are ongoing. According to articles published in July 2022 in Science, virus transmission into humans occurred through two spillover events in November 2019 and was likely due to the live wildlife trade at the Huanan wet market in the city of Wuhan (Hubei, China). Doubts about the conclusions have mostly centered on the precise site of spillover. Earlier phylogenetic estimates suggested that SARS-CoV-2 arose in October or November 2019. A phylogenetic analysis suggested that the virus may have been circulating in Guangdong before Wuhan. Most scientists believe the virus spilled into human populations through natural zoonosis, similar to the SARS-CoV-1 and MERS-CoV outbreaks, and consistent with other pandemics in human history. According to the Intergovernmental Panel on Climate Change, several social and environmental factors, including climate change, natural ecosystem destruction, and wildlife trade, increased the likelihood of such zoonotic spillover. One study conducted with the support of the European Union found that climate change increased the likelihood of the pandemic by influencing the distribution of bat species. Available evidence suggests that the SARS-CoV-2 virus was originally harboured by bats and spread to humans multiple times from infected wild animals at the Huanan Seafood Market in Wuhan in December 2019. A minority of scientists and some members of the U.S. intelligence community believe the virus may have been unintentionally leaked from a laboratory such as the Wuhan Institute of Virology. The US intelligence community has mixed views on the issue, but overall agrees with the scientific consensus that the virus was not developed as a biological weapon and is unlikely to have been genetically engineered. There is no evidence SARS-CoV-2 existed in any laboratory prior to the pandemic. The first confirmed human infections were in Wuhan. A study of the first 41 cases of confirmed COVID‑19, published in January 2020 in The Lancet, reported the earliest date of onset of symptoms as 1 December 2019. Official publications from the WHO reported the earliest onset of symptoms as 8 December 2019. Human-to-human transmission was confirmed by the WHO and Chinese authorities by 20 January 2020. According to official Chinese sources, these early cases were mostly linked to the Huanan Seafood Wholesale Market, which also sold live animals.
In May 2020, George Gao, the director of the Chinese Center for Disease Control and Prevention, said animal samples collected from the seafood market had tested negative for the virus, indicating that the market was the site of an early superspreading event, but that it was not the site of the initial outbreak. Traces of the virus have been found in wastewater samples that were collected in Milan and Turin, Italy, on 18 December 2019. By December 2019, the spread of infection was almost entirely driven by human-to-human transmission. The number of COVID‑19 cases in Hubei gradually increased, reaching sixty by 20 December and at least 266 by 31 December. On 24 December, Wuhan Central Hospital sent a bronchoalveolar lavage (BAL) fluid sample from an unresolved clinical case to the sequencing company Vision Medicals. On 27 and 28 December, Vision Medicals informed Wuhan Central Hospital and the Chinese CDC of the results of the test, which showed a new coronavirus. A pneumonia cluster of unknown cause was observed on 26 December and treated by the doctor Zhang Jixian in Hubei Provincial Hospital, who informed the Wuhan Jianghan CDC on 27 December. On 30 December, a test report addressed to Wuhan Central Hospital from the company CapitalBio Medlab stated an erroneous positive result for SARS, causing a group of doctors at Wuhan Central Hospital to alert their colleagues and relevant hospital authorities of the result. The Wuhan Municipal Health Commission issued a notice to various medical institutions on "the treatment of pneumonia of unknown cause" that same evening. Eight of these doctors, including Li Wenliang (punished on 3 January), were later admonished by the police for spreading false rumours, and another, Ai Fen, was reprimanded by her superiors for raising the alarm. The Wuhan Municipal Health Commission made the first public announcement of a pneumonia outbreak of unknown cause on 31 December, confirming 27 cases, enough to trigger an investigation. During the early stages of the outbreak, the number of cases doubled approximately every seven and a half days. In early and mid-January 2020, the virus spread to other Chinese provinces, helped by the Chinese New Year migration and by Wuhan being a transport hub and major rail interchange. On 20 January, China reported nearly 140 new cases in one day, including two people in Beijing and one in Shenzhen. Later official data show that 6,174 people had already developed symptoms by then, and more may have been infected. A report in The Lancet on 24 January indicated human transmission, strongly recommended personal protective equipment for health workers, and said testing for the virus was essential due to its "pandemic potential". On 30 January, the WHO declared COVID‑19 a Public Health Emergency of International Concern. By this time, the outbreak had grown by a factor of 100 to 200. Italy had its first confirmed cases on 31 January 2020, in two tourists from China. Italy overtook China as the country with the most deaths on 19 March 2020. By 26 March, the United States had overtaken China and Italy as the country with the highest number of confirmed cases in the world. Research on coronavirus genomes indicates that the majority of COVID‑19 cases in New York came from European travellers, rather than directly from China or any other Asian country. Retesting of prior samples found a person in France who had the virus on 27 December 2019 and a person in the United States who died from the disease on 6 February 2020.
RT-PCR testing of untreated wastewater samples from Brazil and Italy has suggested detection of SARS-CoV-2 as early as November and December 2019, respectively, but the methods of such sewage studies have not been optimised, many have not been peer-reviewed, details are often missing, and there is a risk of false positives due to contamination or if only one gene target is detected. A September 2020 review article said, "The possibility that the COVID‑19 infection had already spread to Europe at the end of last year is now indicated by abundant, even if partially circumstantial, evidence", including pneumonia case numbers and radiology in France and Italy in November and December. , Reuters reported that it had estimated the worldwide total number of deaths due to COVID‑19 to have exceeded five million. The Public Health Emergency of International Concern for COVID‑19 ended on May 5, 2023. By this time, everyday life in most countries had returned to how it was before the pandemic.
Misinformation
After the initial outbreak of COVID‑19, misinformation and disinformation regarding the origin, scale, prevention, treatment, and other aspects of the disease rapidly spread online. In September 2020, the US Centers for Disease Control and Prevention (CDC) published preliminary estimates of the risk of death by age group in the United States, but those estimates were widely misreported and misunderstood.
Other species
Humans appear to be capable of spreading the virus to some other animals, a type of disease transmission referred to as zooanthroponosis. Some pets, especially cats and ferrets, can catch this virus from infected humans. Symptoms in cats include respiratory signs (such as a cough) and digestive symptoms. Cats can spread the virus to other cats and may be able to spread the virus to humans, but cat-to-human transmission of SARS-CoV-2 has not been proven. Compared to cats, dogs are less susceptible to this infection. Behaviours which increase the risk of transmission include kissing, licking, and petting the animal. The virus does not appear to be able to infect pigs, ducks, or chickens at all. Mice, rats, and rabbits, if they can be infected at all, are unlikely to be involved in spreading the virus. Tigers and lions in zoos have become infected as a result of contact with infected humans. As expected, monkeys and great ape species such as orangutans can also be infected with the COVID‑19 virus. Minks, which are in the same family as ferrets, have been infected. Minks may be asymptomatic and can also spread the virus to humans. Multiple countries have identified infected animals in mink farms. Denmark, a major producer of mink pelts, ordered the slaughter of all minks over fears of viral mutations, following an outbreak referred to as Cluster 5. A vaccine for mink and other animals is being researched.
Research
International research on vaccines and medicines for COVID‑19 is underway by government organisations, academic groups, and industry researchers. The CDC has classified the virus as requiring a BSL-3 grade laboratory. There has been a great deal of COVID‑19 research, involving accelerated research processes and publishing shortcuts to meet the global demand. , hundreds of clinical trials have been undertaken, with research happening on every continent except Antarctica. , more than 200 possible treatments have been studied in humans.
Transmission and prevention research
Modelling research has been conducted with several objectives, including prediction of the dynamics of transmission, diagnosis and prognosis of infection, estimation of the impact of interventions, and allocation of resources. Modelling studies are mostly based on compartmental models in epidemiology, estimating the number of infected people over time under given conditions. Several other types of models have been developed and used during the COVID‑19 pandemic, including computational fluid dynamics models to study the flow physics of COVID‑19, retrofits of crowd movement models to study occupant exposure, mobility-data-based models to investigate transmission, and macroeconomic models to assess the economic impact of the pandemic.
Treatment-related research
Repurposed antiviral drugs make up most of the research into COVID‑19 treatments. Other candidates in trials include vasodilators, corticosteroids, immune therapies, lipoic acid, bevacizumab, and recombinant angiotensin-converting enzyme 2. In March 2020, the World Health Organization (WHO) initiated the Solidarity trial to assess the treatment effects of some promising drugs:
An experimental drug called remdesivir
Anti-malarial drugs chloroquine and hydroxychloroquine
Two anti-HIV drugs, lopinavir/ritonavir and interferon-beta
More than 300 active clinical trials were underway as of April 2020. Research on the antimalarial drugs hydroxychloroquine and chloroquine showed that they were ineffective at best, and that they may reduce the antiviral activity of remdesivir. , France, Italy, and Belgium had banned the use of hydroxychloroquine as a COVID‑19 treatment. In June, initial results from the randomised RECOVERY Trial in the United Kingdom showed that dexamethasone reduced mortality by one third for people who were critically ill on ventilators and by one fifth for those receiving supplemental oxygen. Because this is a well-tested and widely available treatment, it was welcomed by the WHO, which was in the process of updating its treatment guidelines to include dexamethasone and other steroids. Based on those preliminary results, dexamethasone treatment was recommended by the NIH for people with COVID‑19 who are mechanically ventilated or who require supplemental oxygen, but not for people with COVID‑19 who do not require supplemental oxygen. In September 2020, the WHO released updated guidance on using corticosteroids for COVID‑19. The WHO recommends systemic corticosteroids rather than no systemic corticosteroids for the treatment of people with severe and critical COVID‑19 (strong recommendation, based on moderate-certainty evidence). The WHO suggests not using corticosteroids in the treatment of people with non-severe COVID‑19 (conditional recommendation, based on low-certainty evidence). The updated guidance was based on a meta-analysis of clinical trials of people critically ill with COVID‑19. In September 2020, the European Medicines Agency (EMA) endorsed the use of dexamethasone in adults and adolescents from twelve years of age and weighing at least who require supplemental oxygen therapy. Dexamethasone can be taken by mouth or given as an injection or infusion (drip) into a vein. In November 2020, the US Food and Drug Administration (FDA) issued an emergency use authorisation for the investigational monoclonal antibody therapy bamlanivimab for the treatment of mild-to-moderate COVID‑19.
Bamlanivimab is authorised for people with positive results of direct SARS-CoV-2 viral testing who are twelve years of age and older weighing at least , and who are at high risk of progressing to severe COVID‑19 or hospitalisation. This includes those who are 65 years of age or older, or who have chronic medical conditions. In February 2021, the FDA issued an emergency use authorisation (EUA) for bamlanivimab and etesevimab administered together for the treatment of mild to moderate COVID‑19 in people twelve years of age or older weighing at least who test positive for SARS‑CoV‑2 and who are at high risk of progressing to severe COVID‑19. The authorised use includes treatment for those who are 65 years of age or older or who have certain chronic medical conditions. In April 2021, the FDA revoked the emergency use authorisation that allowed the investigational monoclonal antibody therapy bamlanivimab, when administered alone, to be used for the treatment of mild-to-moderate COVID‑19 in adults and certain paediatric patients.
Cytokine storm
A cytokine storm can be a complication in the later stages of severe COVID‑19. A cytokine storm is a potentially deadly immune reaction in which a large amount of pro-inflammatory cytokines and chemokines are released too quickly. A cytokine storm can lead to ARDS and multiple organ failure. Data collected from Jin Yin-tan Hospital in Wuhan, China, indicate that people who had more severe responses to COVID‑19 had greater amounts of pro-inflammatory cytokines and chemokines in their system than people who had milder responses. These high levels of pro-inflammatory cytokines and chemokines indicate the presence of a cytokine storm. Tocilizumab was included in treatment guidelines by China's National Health Commission after a small study was completed. It is undergoing a Phase II non-randomised trial at the national level in Italy after showing positive results in people with severe disease. Combined with a serum ferritin blood test to identify a cytokine storm (also called cytokine storm syndrome, not to be confused with cytokine release syndrome), it is meant to counter such developments, which are thought to be the cause of death in some affected people. The interleukin-6 receptor (IL-6R) antagonist was approved by the FDA to undergo a Phase III clinical trial assessing its effectiveness for COVID‑19, based on retrospective case studies from 2017 of its use for the treatment of steroid-refractory cytokine release syndrome induced by a different cause, CAR T cell therapy. There is no randomised, controlled evidence that tocilizumab is an efficacious treatment for CRS. Prophylactic tocilizumab has been shown to increase serum IL-6 levels by saturating the IL-6R, driving IL-6 across the blood–brain barrier and exacerbating neurotoxicity, while having no effect on the incidence of CRS. Lenzilumab, an anti-GM-CSF monoclonal antibody, is protective in murine models of CAR T cell-induced CRS and neurotoxicity, and is a viable therapeutic option due to the observed increase of pathogenic GM-CSF-secreting T cells in hospitalised patients with COVID‑19.
Passive antibodies
Transferring purified and concentrated antibodies produced by the immune systems of those who have recovered from COVID‑19 to people who need them is being investigated as a non-vaccine method of passive immunisation. Viral neutralisation is the anticipated mechanism of action by which passive antibody therapy can mediate defence against SARS-CoV-2.
The spike protein of SARS-CoV-2 is the primary target for neutralising antibodies. As of 8 August 2020, eight neutralising antibodies targeting the spike protein of SARS-CoV-2 had entered clinical studies. It has been proposed that selection of broadly neutralising antibodies against SARS-CoV-2 and SARS-CoV might be useful for treating not only COVID‑19 but also future SARS-related CoV infections. Other mechanisms, however, such as antibody-dependent cellular cytotoxicity or phagocytosis, may be possible. Other forms of passive antibody therapy, for example using manufactured monoclonal antibodies, are in development. The use of passive antibodies to treat people with active COVID‑19 is also being studied. This involves the production of convalescent serum, which consists of the liquid portion of the blood from people who have recovered from the infection and contains antibodies specific to this virus, which is then administered to patients with active disease. This strategy was tried for SARS with inconclusive results. An updated Cochrane review in May 2023 found high-certainty evidence that, for the treatment of people with moderate to severe COVID‑19, convalescent plasma did not reduce mortality or bring about symptom improvement. There continues to be uncertainty about the safety of convalescent plasma administration to people with COVID‑19, and the differing outcomes measured in different studies limit their use in determining efficacy.
Bioethics
Since the outbreak of the COVID‑19 pandemic, scholars have explored the bioethics, normative economics, and political theories of healthcare policies related to the public health crisis. Academics have pointed to the moral distress of healthcare workers, the ethics of distributing scarce healthcare resources such as ventilators, and the global justice of vaccine diplomacy. The socio-economic inequalities between genders, races, groups with disabilities, communities, regions, countries, and continents have also drawn attention in academia and the general public.
Biology and health sciences
Viral diseases
Health
47562547
https://en.wikipedia.org/wiki/MicroLED
MicroLED
MicroLED, also known as micro-LED, mLED, or μLED, is an emerging flat-panel display technology consisting of arrays of microscopic LEDs forming the individual pixel elements. Inorganic semiconductor microLED (μLED) technology was first invented in 2000 by the research group of Hongxing Jiang and Jingyu Lin of Texas Tech University (TTU) while they were at Kansas State University (KSU). The first high-resolution and video-capable InGaN microLED microdisplay in VGA format was realized in 2009 by Jiang, Lin, and their colleagues at Texas Tech University and III-N Technology, Inc. via active driving of a microLED array by a complementary metal-oxide-semiconductor (CMOS) IC. Compared to widespread LCD technology, microLED displays offer better contrast, faster response times, and greatly reduced energy requirements, while also offering pixel-level light control and a high contrast ratio. The inorganic nature of microLEDs gives them a longer lifetime than OLEDs and allows them to display brighter images with minimal risk of screen burn-in. The sub-nanosecond response time of μLEDs gives them a large advantage over other display technologies for 3D/AR/VR displays, since these devices need more frames per second and fast response times to minimise ghosting. MicroLEDs are capable of high-speed modulation and have been proposed for chip-to-chip interconnect applications. , Sony, Samsung, and Konka started to sell microLED video walls. LG, Tianma, PlayNitride, TCL/CSoT, Jasper Display, Jade Bird Display, Plessey Semiconductors Ltd, and Ostendo Technologies, Inc. have demonstrated prototypes. Sony already sells microLED displays as a replacement for conventional cinema screens. BOE, Epistar, and Leyard have plans for microLED mass production. MicroLED can be made flexible and transparent, just like OLEDs. According to a report by Market Research Future, the microLED display market will reach around USD 24.3 billion by 2027. Custom Market Insights reported that the microLED display market is expected to reach around USD 182.7 billion by 2032.
Research
Following the first report of electrical-injection microLEDs based on indium gallium nitride (InGaN) semiconductors in 2000 by the research group of Hongxing Jiang and Jingyu Lin, several groups quickly engaged in pursuing this concept, and many related potential applications have been identified. Various on-chip connection schemes for microLED pixel arrays have been employed by AC LED Lighting, LLC (a company founded by Jiang and Lin), allowing for the development of single-chip high-voltage DC/AC LEDs, which address the compatibility issue between high-voltage electrical infrastructure and the low-voltage operating nature of LEDs, and of high-brightness self-emissive microdisplays. The microLED array has also been explored as a light source for optogenetic applications and for visible light communications. Early InGaN-based microLED arrays and microdisplays were primarily passively driven. The first actively driven, video-capable, self-emissive InGaN microLED microdisplay in VGA format ( pixels, each 12 μm in size with 15 μm between them), possessing low voltage requirements, was patented and realized in 2009 by Jiang, Lin, and their colleagues at Texas Tech and III-N Technology, Inc. (a company founded by Jiang and Lin) via integration of a microLED array with a CMOS integrated circuit (IC), and the work was also published in the following years.
The first microLED products were demonstrated by Sony in 2012. These displays, however, were very expensive. There are several methods to manufacture microLED displays. In the flip-chip method, the LEDs are manufactured on a conventional sapphire substrate, while the transistor array and solder bumps are deposited on silicon wafers using conventional manufacturing and metallization processes. Mass transfer is used to pick and place several thousand LEDs from one wafer to another at the same time, and the LEDs are bonded to the silicon substrate using reflow ovens. The flip-chip method is used for microdisplays used in virtual reality headsets. Another microLED manufacturing method involves bonding the LEDs to an IC layer on a silicon substrate and then removing the LED bonding material using conventional semiconductor manufacturing techniques. The current bottleneck in the manufacturing process is the need to individually test every LED and replace faulty ones using an excimer laser lift-off apparatus, which uses a laser to weaken the bond between the LED and its substrate. Faulty-LED replacement must be performed using high-accuracy pick-and-place machines, and the test-and-repair process takes several hours. The mass transfer process alone can take 18 days for a smartphone screen with a glass substrate. Special LED manufacturing techniques can be used to increase yield and reduce the number of faulty LEDs that need to be replaced. Each LED can be as small as 5 μm across. LED epitaxy techniques need to be improved to increase LED yields. Excimer lasers are used for several steps: laser lift-off to separate LEDs from their sapphire substrate and to remove faulty LEDs, manufacturing the LTPS-TFT backplane, and laser cutting of the finished LEDs. Special mass transfer techniques using elastomer stamps are also being researched. Other companies are exploring the possibility of packaging three LEDs, one red, one green, and one blue, into a single package to reduce mass transfer costs. Quantum dots are being researched as a way to shrink the size of microLED pixels, while other companies are exploring the use of phosphors and quantum dots to eliminate the need for different-colored LEDs. Sensors can be embedded in microLED displays. Over 130 companies are involved in microLED research and development. MicroLED light panels are also being made, and are an alternative to conventional OLED and LED light panels. Digital pulse-width modulation is well suited to driving microLED displays: microLEDs experience a color shift as the current magnitude changes, and analog schemes change the current to change brightness, whereas with a digital pulse only one current value is used for the on state, so no color shift occurs as brightness changes. Current microLED display offerings by Samsung and Sony consist of "cabinets" that can be tiled to create a large display of any size, with the display's resolution increasing with size. They also contain mechanisms to protect the display against water and dust. Each cabinet is diagonally with a resolution of .
Commercialization
MicroLEDs have already demonstrated performance advantages over LCD and OLED displays, including higher brightness, lower latency, higher contrast ratio, greater color saturation, intrinsic self-illumination, better efficiency, and longer lifetime. Compared with OLED displays and LCDs, microLED displays stand out for their combination of high performance, durability, and energy efficiency.
Ultrahigh brightness is particularly relevant for applications in augmented-reality displays that must compete with sunlight in outdoor environments. Glo and Jasper Display Corporation demonstrated the world's first RGB microLED microdisplay, measuring diagonally, at SID Display Week 2017. Glo transferred their microLEDs to the Jasper Display backplane. Sony launched a "Crystal LED Display" in 2012 with resolution, as a demonstration product. Sony announced its CLEDIS (Crystal LED Integrated Structure) brand, which used surface-mounted LEDs for large display production. , Sony offers CLEDIS in , and displays. On 12 September 2019, Sony announced Crystal LED availability to consumers, ranging from 1080p to 16K displays. Samsung demonstrated a microLED display called The Wall at CES 2018. In July 2018, Samsung announced plans to bring a 4K microLED TV to the consumer market in 2019. At CES 2019, Samsung demonstrated a 4K microLED display and a 6K microLED display. On June 12 at InfoComm 2019, Samsung announced the global launch of The Wall Luxury microLED display, configurable from in 2K to in 8K. On October 4, 2019, Samsung announced that shipments of The Wall Luxury microLED display had begun. In March 2018, Bloomberg reported that Apple had about 300 engineers devoted to in-house development of microLED screens. At IFA 2018 in August, LG Display demonstrated a microLED display. At SID's Display Week 2019 in May, Tianma and PlayNitride demonstrated their co-developed microLED display with over 60% transparency. China Star Optoelectronics Technology (CSoT) demonstrated a transparent microLED display with around 45% transparency, also co-developed with PlayNitride. Plessey Semiconductors Ltd demonstrated a monolithic monochrome blue GaN-on-silicon wafer bonded to a Jasper Display CMOS backplane active-matrix microLED display with an 8 μm pixel pitch. At SID's Display Week 2019 in May, Jade Bird Display demonstrated their 720p and 1080p microLED microdisplays with 5 μm and 2.5 μm pitch respectively, achieving luminance in the millions of candelas per square metre. In 2021, Jade Bird Display and Vuzix entered a joint manufacturing agreement to make microLED-based projectors for smart glasses and augmented reality glasses. At Touch Taiwan 2019 on September 4, 2019, AU Optronics demonstrated a microLED display and indicated that microLED was 12 years from mass commercialization. At IFA 2019 on September 13, 2019, TCL Corporation demonstrated their Cinema Wall, featuring a 4K microLED display with a maximum brightness of 1,500 cd/m2 and a contrast ratio, produced by their subsidiary China Star Optoelectronics Technology (CSoT). As of 2024, Samsung has already launched microLED display products including The Wall. Samsung's microLED display technology transfers micrometer-scale LEDs into LED modules, resulting in what resembles wall tiles composed of mass-transferred clusters of almost microscopic lights. Samsung also debuted its Transparent MicroLED display at CES 2024. LG also debuted its microLED display, LG MAGNIT, at CES 2024. In terms of microLED microdisplays, Jade Bird Display launched its 0.13" series of microLED displays, which have an active area of 0.13 in (3.3 mm) diagonal and a resolution of 640×480, for AR and VR display products.
Apple reportedly invested billions of dollars in development of microLED displays in the years leading up to 2024, intending to transition its products to the technology beginning with the Apple Watch Ultra, before ultimately abandoning the effort after deciding it was unviable. However, the company is reportedly still "eyeing microLED for other projects down the road".
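To illustrate why the digital pulse-width-modulation driving scheme described earlier avoids the color shift of analog current control, here is a minimal, hypothetical Python sketch. The frame-slot count, function name, and timing granularity are invented for illustration and do not correspond to any actual microLED driver IC.

```python
# Minimal sketch of digital PWM brightness control, assuming a fixed
# "on" drive current so the LED's color point does not shift (the
# motivation described in the text). All values are illustrative only.

FRAME_SLOTS = 256  # one 8-bit brightness frame = 256 equal time slots

def pwm_waveform(brightness: int) -> list[int]:
    """Return one frame of on/off states (1 = fixed current, 0 = off).

    Average light output scales with brightness/256, while the drive
    current during every 'on' slot is identical, avoiding the color
    shift that varying the current (analog dimming) would cause.
    """
    if not 0 <= brightness < FRAME_SLOTS:
        raise ValueError("brightness must be in 0..255")
    return [1 if slot < brightness else 0 for slot in range(FRAME_SLOTS)]

for level in (0, 64, 128, 255):
    frame = pwm_waveform(level)
    duty = sum(frame) / FRAME_SLOTS
    print(f"brightness {level:3d} -> duty cycle {duty:.1%}")
```

The key design point is that brightness is encoded entirely in the time domain (the duty cycle), so the emitter always operates at the one current for which its color is calibrated.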
Technology
Media and communication: Basics
null
51664225
https://en.wikipedia.org/wiki/Antigone%20%28bird%29
Antigone (bird)
Antigone is a genus of large birds in the crane family. The species in this genus were formerly placed in the genus Grus.
Taxonomy
The genus name was applied by Carl Linnaeus to the sarus crane (under its old name Grus major Indica), apparently conflating the Greek princess Antigone of Troy, who was turned into a stork, with Gerana, who was turned into a crane. A molecular phylogenetic study published in 2010 found that the genus Grus was polyphyletic. In the subsequent rearrangement, four species were placed in the resurrected genus Antigone. The genus had initially been erected in 1853 by the German naturalist Ludwig Reichenbach. The type species is the sarus crane (Antigone antigone).
Species
The genus includes four species:
Biology and health sciences
Gruiformes
Animals
47577681
https://en.wikipedia.org/wiki/Be%20star
Be star
Be stars are a heterogeneous set of stars with B spectral types and emission lines. Under a narrower definition, sometimes referred to as the classical Be star, it is a non-supergiant B star whose spectrum has, or had at some time, one or more Balmer emission lines.
Definition and classification
Many stars have B-type spectra and show hydrogen emission lines, including many supergiants, Herbig Ae/Be stars, mass-transferring binary systems, and B[e] stars. It is preferred to restrict usage of the term Be star to non-supergiant stars showing one or more Balmer series lines in emission. These are sometimes referred to as classical Be stars. The emission lines may be present only at certain times. Although the Be-type spectrum is most strongly produced in class B stars, it is also detected in O and A shell stars, and these are sometimes included under the "Be star" banner. Be stars are primarily considered to be main sequence stars, but a number of subgiants and giant stars are also included.
Discovery
The first star recognized as a Be star was Gamma Cassiopeiae, observed in 1866 by Angelo Secchi; it was the first star ever observed with emission lines. Many other bright stars were found to show similar spectra, although many of these are no longer considered to be classical Be stars. The brightest is Achernar, although it was not recognised as a Be star until 1976.
Model
With the understanding of the processes of emission line formation gained in the early 20th century, it became clear that these lines in Be stars must come from circumstellar material ejected from the star, aided by the star's rapid rotation. All the observational characteristics of Be stars can now be explained with a gaseous disk formed of material ejected from the star. The infrared excess and the polarization result from the scattering of stellar light in the disk, while the line emission is formed by re-processing stellar ultraviolet light in the gaseous disc.
Shell stars
Some Be stars exhibit spectral features that are interpreted as a detached "shell" of gas surrounding the star, or more accurately a disc or ring. These shell features are thought to be caused when the disc of gas present around many Be stars is aligned edge-on to us, so that it creates very narrow absorption lines in the spectrum.
Variability
Be stars are often visually and spectroscopically variable. Be stars can be classified as Gamma Cassiopeiae variables when a transient or variable disk is observed. Be stars that show variability without clear indication of the mechanism are listed simply as BE in the General Catalogue of Variable Stars. Some of these are thought to be pulsating stars and are sometimes called Lambda Eridani variables.
Physical sciences
Stellar astronomy
Astronomy
51711705
https://en.wikipedia.org/wiki/Approximate%20measures
Approximate measures
Approximate measures are units of volumetric measurement which are not defined by a government or government-sanctioned organization, or which were previously defined and are now repealed, yet which remain in use. It may be that all English-unit-derived capacity measurements descend from one original approximate measurement: the mouthful, consisting of about ounce, called the ro in ancient Egypt (their smallest recognized unit of capacity). The mouthful was still a unit of liquid measure in Elizabethan times. (The principal Egyptian standards, from small to large, were the ro, hin, hekat, and khar.) Because of the lack of official definitions, many of these units do not have a consistent value.
United Kingdom
glass-tumbler
breakfast-cup
tea-cup
wine-glass
table-spoon
dessert-spoon
tea-spoon
black-jack
demijohn (dame-jeanne)
goblet
pitcher
gyllot (about equal to 1/2 gill)
noggin (1/4 pint)
nipperkin (measure for liquor, containing no more than 1/2 pint)
tumblerful (10 fl oz or 2 gills or 2 teacupsful)
apothecaries' approximate measures
teacupful = about 4 fl oz
wineglassful = about 2 fl oz
tablespoonful = about 1/2 fl oz
dessertspoonful = about 2 fl dr
teaspoonful = about 1 fl dr
drop = about minim
teacupful (5 fl oz, or 1 gill ibid)
wineglassful (2-1/2 fl oz or 1/2 gill or 1/2 teacupful or 1/4 tumblerful)
dessertspoonful (1/4 fl oz or 2 fl dr and equal to 2 teaspoonsful or 1/2 tablespoonful)
teaspoonful (1/8 fl oz or 1 fl dr and also equal to 1/2 dessertspoonful or 1/4 tablespoonful)
United States
The vagueness of how these measures have been defined, redefined, and undefined over the years, both through written and oral history, is best exemplified by the large number of sources that need to be read and cross-referenced in order to paint even a reasonably accurate picture. So far, the list includes the United States Pharmacopoeia, the U.S. FDA, NIST, A Manual of Weights, Measures, and Specific Gravity, State Board Questions and Answers, MediCalc, MacKenzie's Ten Thousand Receipts, Approximate Practical Equivalents, When is a Cup not a Cup?, Cook's Info, knitting-and.com, and Modern American Drinks. Dashes, pinches, and smidgens are all traditionally very small amounts well under a teaspoon, but they are not more uniformly defined. In the early 2000s some companies began selling measuring spoons that defined a dash as teaspoon, a pinch as teaspoon, and a smidgen as teaspoon. Based on these spoons, there are two smidgens in a pinch and two pinches in a dash. However, the 1954 Angostura “Professional Mixing Guide” states that “a dash” is 1/6 of a teaspoon, or 1/48 of an ounce, and Victor Bergeron (a.k.a. Trader Vic, the famous saloonkeeper) said that for bitters it was teaspoon, but fl oz for all other liquids.
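The factor-of-two relationships quoted above for the early-2000s measuring spoons can be captured in a few lines of Python. Note that the absolute teaspoon fractions used here (1/8, 1/16, 1/32) are an assumption based on common measuring-spoon sets, since the exact figures are elided in the text above; only the two-smidgens-per-pinch and two-pinches-per-dash ratios are taken from it.

```python
from fractions import Fraction

# Approximate measures as fractions of a teaspoon. The absolute values
# are an ASSUMPTION (common spoon-set convention); sources disagree,
# e.g. Angostura's 1954 guide gives a dash as 1/6 teaspoon instead.
TEASPOONS = {
    "dash":    Fraction(1, 8),
    "pinch":   Fraction(1, 16),
    "smidgen": Fraction(1, 32),
}

# These ratios are the ones stated in the text above.
assert TEASPOONS["pinch"] == 2 * TEASPOONS["smidgen"]
assert TEASPOONS["dash"] == 2 * TEASPOONS["pinch"]

for unit, tsp in TEASPOONS.items():
    print(f"1 {unit} = {tsp} teaspoon")
```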
Physical sciences
Measurement systems
Basics and measurement
51721949
https://en.wikipedia.org/wiki/Argon%20compounds
Argon compounds
Argon compounds, the chemical compounds that contain the element argon, are rarely encountered due to the inertness of the argon atom. However, compounds of argon have been detected in inert gas matrix isolation, cold gases, and plasmas, and molecular ions containing argon have been made and also detected in space. One solid interstitial compound of argon, Ar1C60, is stable at room temperature. Ar1C60 was discovered by the CSIRO. Argon ionises at 15.76 eV, which is higher than hydrogen, but lower than helium, neon, or fluorine. Molecules containing argon can be van der Waals molecules held together very weakly by London dispersion forces. Ionic molecules can be bound by charge-induced dipole interactions. With gold atoms there can be some covalent interaction. Several boron–argon bonds with significant covalent interactions have also been reported. Experimental methods used to study argon compounds have included inert gas matrices, infrared spectroscopy to study stretching and bending movements, microwave spectroscopy and far infrared to study rotation, and also visible and ultraviolet spectroscopy to study different electronic configurations, including excimers. Mass spectrometry is used to study ions. Computational methods have been used to theoretically compute molecular parameters and to predict new stable molecules. Computational ab initio methods used have included CCSD(T), MP2 (Møller–Plesset perturbation theory of the second order), CIS, and CISD. For heavy atoms, effective core potentials are used to model the inner electrons, so that their contributions do not have to be individually computed. More powerful computers since the 1990s have made this kind of in silico study much more popular, being much less risky and simpler than an actual experiment. This article is mostly based on experimental or observational results. The argon fluoride laser is important in the photolithography of silicon chips. These lasers produce a strong ultraviolet emission at 192 nm.
Argonium
Argonium (ArH+) is an ion combining a proton and an argon atom. It is found in interstellar space in diffuse atomic hydrogen gas where the fraction of molecular hydrogen H2 is in the range of 0.0001 to 0.001. Argonium is formed when H2+ reacts with Ar atoms: Ar + H2+ → ArH+ + H, and it is also produced from Ar+ ions, generated by cosmic rays and X-rays from neutral argon: Ar+ + H2 → ArH+ + H + 1.49 eV. When ArH+ encounters an electron, dissociative recombination can occur, but it is extremely slow for lower-energy electrons, allowing ArH+ to survive for a much longer time than many other similar protonated cations: ArH+ + e− → ArH* → Ar + H. Artificial ArH+ made from terrestrial Ar contains mostly the isotope 40Ar rather than the cosmically abundant 36Ar. Artificially, it is made by an electric discharge through an argon–hydrogen mixture.
Natural occurrence
In the Crab Nebula, ArH+ occurs in several spots, revealed by emission lines. The strongest emission is in the Southern Filament. This is also the place with the strongest concentration of Ar+ and Ar2+ ions. The column density of ArH+ in the Crab Nebula is between 10¹² and 10¹³ atoms per square centimeter. Possibly the energy required to excite the ions so that they can emit comes from collisions with electrons or hydrogen molecules. Towards the Milky Way centre, the column density of ArH+ is around .
Cluster argon cations
The diargon cation has a binding energy of 1.29 eV. The triargon cation is linear, but has one Ar−Ar bond shorter than the other. Bond lengths are 2.47 and 2.73 ångströms.
The dissociation energy to Ar and Ar2+ is 0.2 eV. In line with the molecule's asymmetry, the charge is calculated as +0.10, +0.58, and +0.32 on the argon atoms, so that it greatly resembles an Ar2+ ion bound to a neutral Ar atom. Larger charged argon clusters are also detectable in mass spectrometry. The tetraargon cation is also linear. icosahedral clusters have an core, whereas is dioctahedral with an core. The linear core has +0.1 charge on the outer atoms and +0.4 charge on each of the inner atoms. For larger charged argon clusters, the charge is not distributed over more than four atoms. Instead, the neutral outer atoms are attracted by induced electric polarization. The charged argon clusters absorb radiation from the near infrared, through the visible, to the ultraviolet. The charge core, , or is called a chromophore. Its spectrum is modified by the first shell of neutral atoms attached. Larger clusters have the same spectrum as the smaller ones. When photons are absorbed in the chromophore, it is initially electronically excited, but then energy is transferred to the whole cluster in the form of vibration. Excess energy is removed by outer atoms evaporating from the cluster one at a time. The process of destroying a cluster by light is called photofragmentation. Negatively charged argon clusters are thermodynamically unstable and therefore cannot exist; argon has a negative electron affinity.
Argon monohydride
Neutral argon hydride, also known as argon monohydride (ArH), was the first noble gas hydride to be discovered. J. W. C. Johns discovered an emission line of ArH at 767 nm and announced the find in 1970. The molecule was synthesized using X-ray irradiation of mixtures of argon with hydrogen-rich molecules such as H2, H2O, CH4, and CH3OH. The X-ray-excited argon atoms are in the 4p state. Argon monohydride is unstable in its ground state, 4s, as a neutral inert gas atom and a hydrogen atom repel each other at normal intermolecular distances. When a higher-energy-level ArH* emits a photon and reaches the ground state, the atoms are too close to each other, and they repel and break up. A van der Waals molecule can nevertheless exist, with a long bond. Moreover, excited ArH* can form stable Rydberg molecules, also known as excimers. These Rydberg molecules can be considered as a protonated argon core surrounded by an electron in one of many possible higher energy states. Formation: Ar + hν → Ar*; Ar* + H2 → ArH* + H. Instead of dihydrogen, other hydrogen-containing molecules can also have a hydrogen atom abstracted by excited argon, but some molecules bind hydrogen too strongly for the reaction to proceed. For example, acetylene will not form ArH this way. In the van der Waals molecule of ArH, the bond length is calculated to be about 3.6 Å and the dissociation energy is calculated to be 0.404 kJ/mol (33.8 cm−1). The bond length in ArH* is calculated as 1.302 Å. The spectrum of argon monohydride, both ArH* and ArD*, has been studied. The lowest bound state is termed A2Σ+ or 5s. Another low-lying state is known as 4p, made up of the C2Σ+ and B2Π states. Each transition to or from higher-level states corresponds to a band.
Known bands are 3p → 5s, 4p → 5s, 5p → 5s (band origin ), 6p → 5s (band origin ), 3dσ → 4p, 3dπ → 4p (6900 cm−1), 3dδ → 4p (8200–8800 cm−1), 4dσ → 4p (), 6s → 4p (7400–7950 cm−1), 7s → 4p (predicted at , but obscured), 8s → 4p (), 5dπ → 4p (), 5p → 6s (band origin 3681.171 cm−1), 4f → 5s ( and band origin for ArD and ArH), 4f → 3dπ (7548.76 and 7626.58 cm−1), 4f → 3dδ (6038.47 and 6026.57 cm−1), and 4f → 3dσ (4351.44 cm−1 for ArD). The transitions going to 5s, 3dπ → 5s and 5dπ → 5s, are strongly predissociated, blurring out the lines. In the UV spectrum a continuous band exists from 200 to 400 nm. This band is due to two different higher states: B2Π → A2Σ+ radiates over 210–450 nm, and E2Π → A2Σ+ lies between 180 and 320 nm. There is also a band in the near infrared from 760 to 780 nm. Other ways to make ArH include a Penning-type discharge tube or other electric discharges. Yet another way is to create a beam of ArH+ (argonium) ions and then neutralize them in laser-energized caesium vapour. By using a beam, the lifetimes of the different energy states can be observed by measuring the profile of electromagnetic energy emitted at different wavelengths. The E2Π state of ArH has a radiative lifetime of 40 ns. For ArD the lifetime is 61 ns. The B2Π state has a lifetime of 16.6 ns in ArH and 17 ns in ArD.
Argon polyhydrides
The argon dihydrogen cation has been predicted to exist and to be detectable in the interstellar medium. However, it has not been detected . It is predicted to be linear, in the form Ar−H−H. The H−H distance is 0.94 Å. The dissociation barrier is only 2 kcal/mol (8 kJ/mol), and the ion readily loses a hydrogen atom to yield ArH+. The force constant of the ArH bond in this ion is 1.895 mdyne/Å2 (). The argon trihydrogen cation has been observed in the laboratory. ArH2D+, and have also been observed. The argon trihydrogen cation is planar in shape, with an argon atom off the vertex of a triangle of hydrogen atoms.
Argoxonium
The argoxonium ion ArOH+ is predicted to have a bent molecular geometry in the 11A′ state. 3Σ− is a triplet state 0.12 eV higher in energy, and 3A″ is a triplet state 0.18 eV higher. The Ar−O bond is predicted to be 1.684 Å long and to have a force constant of 2.988 mdyne/Å2 ().
ArNH+
ArNH+ is a possible ionic molecule to detect in the lab and in space, as the atoms that compose it are common. ArNH+ is predicted to be more weakly bound than ArOH+, with a force constant in the Ar−N bond of 1.866 mdyne/Å2 (). The angle at the nitrogen atom is predicted to be 97.116°. The Ar−N length should be 1.836 Å and the N−H bond length 1.046 Å.
Argon dinitrogen cation
The argon dinitrogen linear cationic complex has also been detected in the lab: Ar + → Ar+ + N2. The dissociation yields Ar+, as this is a higher-energy state. The binding energy is 1.19 eV. The molecule is linear. The distance between the two nitrogen atoms is 1.1 Å. This distance is similar to that of neutral N2 rather than that of the ion. The distance between one nitrogen atom and the argon atom is 2.2 Å. The vibrational band origin for the nitrogen bond in (V = 0 → 1) is at 2272.2564 cm−1, compared with N2+ at 2175 and N2 at 2330 cm−1. In the process of photodissociation, it is three times more likely to yield Ar+ + N2 than Ar + . has been produced in a supersonic jet expansion of gas and detected by Fourier transform microwave spectroscopy. The molecule is linear, with the atoms in the order Ar−H−N−N. The Ar−H distance is 1.864 Å. There is a stronger bond between hydrogen and argon than in ArHCO+.
The molecule is made by the following reaction: ArH+ + N2 → .
Bis(dinitrogen) argon cation
The argon ion can bond two molecules of dinitrogen (N2) to yield an ionic complex with a linear shape and the structure N=N−Ar+−N=N. The N=N bond length is 1.1014 Å, and the nitrogen-to-argon bond length is 2.3602 Å. 1.7 eV of energy is required to break this apart into N2 and . The band origin of an infrared band due to antisymmetric vibration of the N=N bonds is at 2288.7272 cm−1. Compared to N2, it is redshifted by 41.99 cm−1. The ground state rotational constant of the molecule is . is produced by a supersonic expansion of a 10:1 mixture of argon with nitrogen through a nozzle, which is impacted by an electron beam.
ArN2O+
ArN2O+ absorbs photons in four violet–ultraviolet wavelength bands, leading to breakup of the molecule. The bands are 445–420, 415–390, 390–370, and 342 nm.
ArHCO+
ArHCO+ has been produced in a supersonic-jet expansion of gas and detected by Fabry–Perot-type Fourier transform microwave spectroscopy. The molecule is made by the reaction ArH+ + CO → ArHCO+.
ArnBO+
BO+ forms four complexes with argon: ArBO+; two isomers of Ar2BO+ (one with equidistant Ar−B bonds and another with a short and a long bond); and Ar3BO+. These ions were formed by firing a green laser at a boron target in a gaseous mixture of helium, argon, and nitrous oxide.
The carbon dioxide–argon ion can be excited to form *, where the positive charge is moved from the carbon dioxide part to the argon. This molecule may occur in the upper atmosphere. Experimentally, the molecule is made from low-pressure argon gas with 0.1% carbon dioxide, irradiated by a 150 V electron beam. Argon is ionized and can transfer the charge to a carbon dioxide molecule. The dissociation energy of is 0.26 eV. + CO2 → Ar + (yields 0.435 eV).
van der Waals molecules
Neutral argon atoms bind very weakly to other neutral atoms or molecules to form van der Waals molecules. These can be made by expanding argon under high pressure mixed with the atoms of another element. The expansion happens through a tiny hole into a vacuum, and results in cooling to temperatures a few degrees above absolute zero. At higher temperatures the atoms will be too energetic to stay together by way of the weak London dispersion forces. The atoms that are to combine with argon can be produced by evaporation with a laser or alternatively by an electric discharge. The known molecules include AgAr, Ag2Ar, NaAr, KAr, MgAr, CaAr, SrAr, ZnAr, CdAr, HgAr, SiAr, InAr, CAr, GeAr, SnAr, and BAr. SiAr was made from silicon atoms derived from Si(CH3)4. In addition to the very weakly bound van der Waals molecules, electronically excited molecules with the same formula exist. As a formula these can be written ArX*, with the "*" indicating an excited state. Here the atoms are much more strongly bound, with a covalent bond. They can be modeled as an ArX+ core surrounded by a higher-energy shell with one electron. This outer electron can change energy by exchanging photons and so can fluoresce. The widely used argon fluoride laser makes use of the ArF* excimer to produce strong ultraviolet radiation at 192 nm. The argon chloride laser using ArCl* produces even shorter-wavelength ultraviolet at 175 nm, but is too feeble for application. The argon chloride in this laser comes from argon and chlorine molecules.
Argon clusters
Cooled argon gas can form clusters of atoms. Diargon, also known as the argon dimer, has a binding energy of 0.012 eV, but the Ar13 and Ar19 clusters have a sublimation energy (per atom) of 0.06 eV.
For liquid argon, which could be written as Ar∞, the energy increases to 0.08 eV. Clusters of up to several hundred argon atoms have been detected. These argon clusters are icosahedral in shape, consisting of shells of atoms arranged around a central atom. For clusters with more than 800 atoms the structure changes to resemble a tiny crystal with a face-centered cubic (fcc) structure, as in solid argon. It is surface energy that maintains the icosahedral shape, but for larger clusters internal pressure attracts the atoms into an fcc arrangement. Neutral argon clusters are transparent to visible light. Diatomic van der Waals molecules
{| class="wikitable"
! Molecule
! Binding energy, ground Σ state (cm−1)
! Binding energy, excited Π state (cm−1)
! Ground state bond length (Å)
! Excited state bond length (Å)
! CAS number
|-
| ArH || || || || || 30736-04-0
|-
| ArHe || || || || || 12254-69-2
|-
| LiAr || 42.5 || 925 || 4.89 || 2.48 ||
|-
| BAr || || || || || 149358-32-7
|-
| ArNe || || || || || 12301-65-4
|-
| NaAr || 40 || 560 || || || 56633-38-6
|-
| MgAr || 44 || 246 || || || 72052-59-6
|-
| AlAr || || || || || 143752-09-4
|-
| SiAr || || || || ||
|-
| ArCl || || || || || 54635-29-9
|-
| Ar2 || || || || || 12595-59-4
|-
| KAr || 42 || 373 || || || 12446-47-8
|-
| CaAr || 62 || 134 || || || 72052-60-9
|-
| SrAr || 68 || 136 || || ||
|-
| NiAr || || || || || 401838-48-0
|-
| ZnAr || 96 || 706 || || || 72052-61-0
|-
| GaAr || || || || || 149690-22-2
|-
| GeAr || || || || ||
|-
| KrAr || || || || || 51184-77-1
|-
| AgAr || 90 || 1200 || || ||
|-
| CdAr || 106 || 544 || || || 72052-62-1
|-
| InAr || || || || || 146021-90-1
|-
| SnAr || || || || ||
|-
| ArXe || || || || || 58206-67-0
|-
| AuAr || || || || || 195245-92-2
|-
| HgAr || 131 || 446 || || || 87193-95-1
|}
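Converting the table's binding energies from cm−1 to kJ/mol makes the contrast between the weakly bound van der Waals ground states and the much more strongly bound excited Π states easier to see. A minimal sketch using values taken from the table above; the conversion factor is the standard one.

<syntaxhighlight lang="python">
# Compare ground- and excited-state binding energies from the table,
# converting cm^-1 to kJ/mol (1 cm^-1 = 0.0119627 kJ/mol).
CM1_TO_KJMOL = 0.0119627

pairs = {  # molecule: (ground Sigma state, excited Pi state), both in cm^-1
    "LiAr": (42.5, 925),
    "NaAr": (40, 560),
    "ZnAr": (96, 706),
    "AgAr": (90, 1200),
}
for mol, (ground, excited) in pairs.items():
    print(f"{mol}: ground {ground * CM1_TO_KJMOL:.2f} kJ/mol, "
          f"excited {excited * CM1_TO_KJMOL:.2f} kJ/mol "
          f"({excited / ground:.0f}x stronger)")
</syntaxhighlight>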
ArO* is also formed when dioxygen trapped in an argon matrix is subjected to vacuum ultraviolet. It can be detected by its luminescence: O2 + hν → O2+ + e−; O2+ + e− → 2O*; O* + Ar → ArO*. Light emitted by ArO* has two main bands, one at 2.215 eV and a weaker one at 2.195 eV. Argon sulfide, ArS*, luminesces in the near infrared at 1.62 eV. ArS is made from UV-irradiated OCS in an argon matrix. The excited states last for 7.4 and 3.5 μs for the spectrum peak and band respectively. Triatomic van der Waals molecules Cluster molecules containing dichlorine and more than one argon atom can be made by forcing a 95:5 mixture of helium and argon with a trace of chlorine through a nozzle. ArCl2 exists in a T shape. Ar2Cl2 has a distorted-tetrahedron shape, with the two argon atoms 4.1 Å from each other and their axis 3.9 Å from the Cl2. The van der Waals bond energy is 447 cm−1. Ar3Cl2 also exists, with a van der Waals bond energy of 776 cm−1. The linear Ar·Br2 molecule has a continuous spectrum for the bromine molecule's X → B transitions. The spectrum of bromine is blue-shifted and spread out when it binds an argon atom. ArI2 shows a spectrum that adds satellite bands to the higher vibrational bands of I2. The ArI2 molecule has two different isomers: one shape is linear, and the other is T-shaped. The dynamics of ArI2 are complex, and breakup occurs through different routes in the two isomers: the T-shaped isomer undergoes intramolecular vibrational relaxation, whereas the linear one breaks apart directly. Diiodine clusters, I2Arn, have been made. The ArClF cluster has a linear shape, with the argon atom closest to the chlorine atom. Linear ArBrCl can also rearrange to ArClBr, or to a T-shaped isomer. Multiple argon atoms can "solvate" a water molecule, forming a monolayer around the H2O. Ar12·H2O is particularly stable, having an icosahedral shape. Molecules from Ar·H2O to Ar14·H2O have been studied. ArBH was produced from boron monohydride (BH), which in turn was created from diborane by an ultraviolet 193 nm laser. The BH–argon mixture was expanded through a 0.2 mm diameter nozzle into a vacuum. The gas mixture cools, and Ar and BH combine to yield ArBH. A band spectrum combining the A1Π ← X1Σ+ electronic transition with vibration and rotation can be observed. The BH has singlet spin, and this is the first known van der Waals complex with a singlet-spin pair of atoms. For this molecule the rotational constant is 0.133 cm−1, the dissociation energy is 92 cm−1, and the distance from the argon to the boron atom is 3.70 Å. ArAlH is also known to exist, as is MgAr2. Polyatomic van der Waals molecules Some linear polyatomic molecules can form T-shaped van der Waals complexes with argon. These include NCCN, carbon dioxide, nitrous oxide, acetylene, carbon oxysulfide, and ClCN. Others attach the argon atom at one end and remain linear, including HCN. Other polyatomic van der Waals compounds of argon include those of fluorobenzene, the formyl radical (ArHCO), 7-azaindole, glyoxal, sodium chloride (ArNaCl), ArHCl, and cyclopentanone.
{| class="wikitable"
! Molecule
! Name
! Ground state binding energy (cm−1)
! Closest position or atom to argon
! Ground state Ar bond length (Å)
! Bond angle from atom (degrees)
! Bond stretch force or frequency
! Dipole moment (D)
! CAS number
! References
|-
| (CH3)2F2Si·Ar || Difluorodimethylsilane – argon || || || || || || || ||
|-
| CH2F2·Ar || Difluoromethane – argon || || F || 3.485 || 58.6 || || || ||
|-
| CF3CN || Trifluoromethyl cyanide – argon || || C1 || 3.73 || 77 || || || 947504-98-5 ||
|-
| CF2HCH3·Ar || 1,1-Difluoroethane – argon || || F || || || || || – ||
|-
| CH2FCH2F·Ar || 1,2-Difluoroethane – argon || 181 || F || 3.576 || 61 || || || 264131-14-8 ||
|-
| CH3CHO·Ar || Acetaldehyde – argon || 161 || C-1 || 3.567 || 76.34 || || || 158885-13-3 ||
|-
| C2H4O·Ar || Oxirane – argon || 200 || O || 3.606 (CM) || 72.34 || || || ||
|-
| ArBF3 || Boron trifluoride – argon || || B || 3.325 || on axis, ArBF ≈ 90.5° || 0.030 mdyn/Å || 0.176 || ||
|-
| ArC6H6 || Benzene – argon || || on sixfold axis || 3.53 from plane || || || 0.12 || ||
|-
| ArPF3 || Argon – phosphorus trifluoride complex || || P || 3.953 from centre of mass || 70.3° on PF2 face || || || ||
|-
| Ar–NCCN || Argon – cyanogen van der Waals complex || || centre of molecule || 3.58 || 90°, T shape || 30 cm−1 || 0.0979 || ||
|-
| DCCDAr || Argon – deuterated acetylene || || centre of molecule || 3.25 || 90°, T shape || 0.0008 mdyn/Å / 8.7 cm−1 || || ||
|-
| SO3Ar || Sulfur trioxide – argon || || S || 3.350 || on axis, 90° from S–O bond || 0.059 mdyn/Å / 61 cm−1 || || ||
|-
| Ar·HCCH || Acetylene – argon || || || || T shape || || || ||
|-
| OCS·Ar || || || || || || || || ||
|-
| CH3OH·Ar || || || || || || || || ||
|-
| CH3Cl·Ar || || || || || || || || ||
|-
| || Pyridine – argon || || || || || || || ||
|-
| || Pyrrole – argon || || || || || || || ||
|}
Aqueous argon Argon dissolved in water causes the pH to rise to 8.0, apparently by reducing the number of oxygen atoms available to bind protons. With ice, argon forms a clathrate hydrate. Up to 0.6 GPa, the clathrate has a cubic structure. Between 0.7 and 1.1 GPa it has a tetragonal structure. Between 1.1 and 6.0 GPa the structure is body-centred orthorhombic. Over 6.1 GPa, the clathrate converts into solid argon and ice VII. At atmospheric pressure the clathrate is stable below 147 K. At 295 K the argon pressure from the clathrate is 108 MPa.
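The pressure-dependent phases of the argon clathrate hydrate just described can be encoded as a simple lookup, which makes the stability ranges easy to query. A minimal sketch assuming the boundaries given in the text; behaviour exactly at a boundary, and in the 0.6–0.7 GPa gap, is not specified there.

<syntaxhighlight lang="python">
# Argon clathrate hydrate phases as a function of pressure, following the
# ranges quoted above (all in GPa).
def argon_clathrate_phase(p_gpa: float) -> str:
    if p_gpa <= 0.6:
        return "cubic"
    if 0.7 <= p_gpa <= 1.1:
        return "tetragonal"
    if 1.1 < p_gpa <= 6.0:
        return "body-centred orthorhombic"
    if p_gpa > 6.1:
        return "decomposed: solid argon + ice VII"
    return "not specified in source"  # gaps between the quoted ranges

for p in (0.3, 0.9, 3.0, 7.0):
    print(p, "GPa ->", argon_clathrate_phase(p))
</syntaxhighlight>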
Argon fluorohydride Argon fluorohydride was an important discovery in the rejuvenation of the study of noble gas chemistry. HArF is stable in solid form at temperatures below 17 K. It is prepared by photolysis of hydrogen fluoride in a solid argon matrix. HArArF would have such a low barrier to decomposition that it will likely never be observed. However, HBeArF is predicted to be more stable than HArF. Uranium compounds CUO in a solid argon matrix can bind one or a few argon atoms to yield CUO·Ar, CUO·Ar3 or CUO·Ar4. CUO itself is made by evaporating uranium atoms into carbon monoxide. Uranium acts as a strong Lewis acid in CUO, forming bonds with energies of about 3.2 kcal/mol (13.4 kJ/mol) to argon, while the argon acts as a Lewis base: its electron density is donated into an empty 6d orbital on the uranium atom. The spectrum of CUO is changed by argon, so that the U−O stretch frequency changes from 872.2 to 804.3 cm−1 and the U−C stretch frequency from 1047.3 to 852.5 cm−1. The significant change in the spectrum occurs because CUO changes from a singlet state (in the gas phase or in solid neon) to a triplet state on complexing with argon or another noble gas. The argon–uranium bond length is 3.16 Å. This is shorter than the sum of the atomic radii of U and Ar (3.25 Å), but considerably longer than a normal covalent bond to uranium; for example, U−Cl in UCl6 is 2.49 Å. When xenon is included in the solid argon matrix at up to a few percent, additional van der Waals molecules are formed: CUO·Ar3Xe, CUO·Ar2Xe2, CUO·ArXe3 and CUO·Xe4. Similarly, krypton can substitute for argon in CUO·Ar3Kr, CUO·Ar2Kr2, CUO·ArKr3 and CUO·Kr4. The shape of these molecules is roughly octahedral, with a uranium centre and the noble gas atoms around the equator. The uranium dioxide cation, UO2+, can bind up to five noble gas atoms in a ring around its linear O=U=O core. These molecules are produced when uranium metal is laser-ablated into dioxygen. This produces UO, UO2, UO3, U+ and, importantly, UO2+, which is then condensed into a noble gas matrix, either a pure element or a mixture. Heavier noble gas atoms tend to displace the lighter ones. The ionic molecules produced this way, containing varying combinations of argon, krypton and xenon atoms, are identified by a shift in the U=O antisymmetric stretching frequency. Neutral UO2 condensed in solid argon is converted from one electronic state to another by the argon atom ligands. In argon the electron configuration is 5f2(δφ), whereas in neon it is 5f17s1 (the state 3H4g compared to 3Φ2u). This is because the argon atoms have a larger antibonding interaction with the 7s1 electron, forcing it into a different subshell. The argon-coordinated compound has a stretching frequency of 776 cm−1, compared to 914.8 cm−1 in neon. The argon uranium dioxide molecule is likely UO2Ar5. Beryllium oxide When beryllium atoms react with oxygen in a solid argon matrix (or beryllia is evaporated into the matrix), ArBeO is formed, observable by its infrared spectrum. The beryllia molecule is strongly polarised, and the argon atom is attracted to the beryllium atom. The bond strength of Ar−Be is calculated to be 6.7 kcal/mol (28 kJ/mol). The Ar−Be bond length is predicted to be 2.042 Å. The cyclic Be2O2 molecule can bind two argon atoms, or one argon along with another noble gas atom. Analogously, beryllium reacting with hydrogen sulfide and trapped in an argon matrix at 4 K forms ArBeS, with a binding energy calculated to be 12.8 kcal/mol (54 kJ/mol). ArBeO2CO (an argon adduct of beryllium carbonate) has been prepared (along with Ne, Kr and Xe adducts). The cyclic beryllium sulfite molecule can also coordinate an argon atom onto the beryllium atom in a solid neon or argon matrix.
Carbonyl compounds Group 6 elements can form reactive pentacarbonyls that can react with argon. These were actually the first argon compounds discovered, in 1975, known before the discovery of HArF, but they are usually overlooked. Tungsten normally forms a hexacarbonyl, but when subjected to ultraviolet radiation it breaks into a reactive pentacarbonyl. When this is condensed into a noble gas matrix, its infrared and UV spectra vary considerably depending on the noble gas used. This is because the noble gas binds to the vacant position on the tungsten atom. Similar results occur with molybdenum and chromium. Argon is only very weakly bound to tungsten in ArW(CO)5. The Ar−W bond length is predicted to be 2.852 Å. The same substance is produced for a brief time in supercritical argon at 21 °C. For ArCr(CO)5 the band maximum is at 533 nm (compared to 624 nm in neon and 518 nm in krypton). For 18-electron complexes, by contrast, the shift in spectrum between different matrices is much smaller, only around 5 nm. This clearly indicates the formation of a molecule using atoms from the matrix. Other carbonyls and complexed carbonyls have also been reported to bond to argon. These include Ru(CO)2(PMe3)2Ar, Ru(CO)2(dmpe)2Ar and η6-C6H6Cr(CO)2Ar. Evidence also exists for ArHMn(CO)4, ArCH3Mn(CO)4 and fac-(η2-dfepe)Cr(CO)3Ar. Other noble gas complexes have been studied by photolysis of carbonyls dissolved in liquid rare gas, possibly under pressure. These Kr or Xe complexes decay on the time scale of seconds, but argon does not seem to have been studied this way. The advantage of liquid noble gases is that the medium is completely transparent to the infrared radiation needed to study the bond vibrations of the solute. Attempts have been made to study carbonyl–argon adducts in the gas phase, but the interaction appears to be too weak to observe a spectrum. In the gas phase, the absorption lines are broadened into bands by the free rotation that occurs in a gas. Argon adducts in liquids or gases are unstable, as the molecules easily react with other photolysis products or dimerize, eliminating the argon. Coinage metal monohalides The argon coinage metal monohalides were the first noble gas–metal halides discovered, made by passing metal monohalide molecules through an argon jet. They were first made in Vancouver in 2000. ArMX with M = Cu, Ag or Au and X = F, Cl or Br have been prepared. The molecules are linear. In ArAuCl the Ar−Au bond is 2.47 Å, the stretching frequency is 198 cm−1 and the dissociation energy is 47 kJ/mol. ArAgBr has also been made. ArAgF has a dissociation energy of 21 kJ/mol. The Ar−Ag bond length in these molecules is 2.6 Å. ArAgCl is isoelectronic with AgCl2−, which is better known. The Ar−Cu bond length in these molecules is 2.25 Å. Transition metal oxides In a solid argon matrix, VO2 forms VO2Ar2 and VO4 forms VO4·Ar, with binding energies calculated to be 12.8 and 5.0 kcal/mol (53 and 21 kJ/mol) respectively. Scandium in the form of ScO+ coordinates five argon atoms to yield ScO+(Ar)5; these argon atoms can be substituted by numbers of krypton or xenon atoms to yield still more mixed noble gas molecules. With yttrium, YO+ binds six argon atoms, and these too can be substituted by varying numbers of krypton or xenon atoms. Among the transition metal monoxides, ScO, TiO and VO do not form a molecule with one argon atom, but CrO, MnO, FeO, CoO and NiO can each coordinate one argon atom in a solid argon matrix.
The metal monoxide molecules can be produced by laser ablation of the metal trioxide, followed by condensation on solid argon. ArCrO absorbs at 846.3 cm−1, ArMnO at 833.1, ArFeO at 872.8, ArCoO at 846.2, Ar58NiO at 825.7 and Ar60NiO at 822.8 cm−1. All these molecules are linear. There are also claims of argon forming coordination molecules in NbO2Ar2, NbO4Ar, TaO4Ar, VO2Ar2, VO4Ar, Rh(η2-O2)Ar2, Rh(η2-O2)2Ar2 and Rh(η2-O2)2(η1-OO)Ar. Tungsten trioxide, WO3, and the tungsten dioxide mono-superoxide (η2-O2)WO2 can both coordinate argon in an argon matrix. The argon can be replaced by xenon or molecular oxygen to make xenon-coordinated compounds or superoxides. For WO3Ar the binding energy is 9.4 kcal/mol and for (η2-O2)WO2 it is 8.1 kcal/mol. Other transition metal compounds ArNiN2 binds argon with an energy of 11.52 kcal/mol. The bending frequency of the NiN2 unit changes from 310.7 to 358.7 cm−1 when argon attaches to the nickel atom. Other ions Some other binary ions observed that contain argon include BaAr2+, VAr+, CrAr+, FeAr+, CoAr+ and NiAr+. Gold and silver cluster ions can also bind argon; the known ions have a triangular metallic core with argon bound at the vertices. ArF+ is also known to be formed in the reaction F2+ + Ar → ArF+ + F, in Ar+ + F2 → ArF+ + F, and in a similar reaction of a sulfur fluoride cation with argon. The ions can be produced by ultraviolet light at 79.1 nm or shorter. The ionisation energy of fluorine is higher than that of argon, so breakup occurs thus: ArF+ → Ar+ + F. The millimeter wave spectrum of ArF+ between 119.0232 and 505.3155 GHz has been measured to calculate the molecular constants B0 and D0, the latter being 28.718 kHz. There is a possibility that a solid salt of ArF+ could be prepared with a suitably oxidation-resistant fluoroanion. Excited or ionized argon atoms can react with molecular iodine gas to yield ArI+. Argon plasma is used as an ionisation source and carrier gas in inductively coupled plasma mass spectrometry (ICP-MS). This plasma reacts with samples to produce monatomic ions, but also forms argon oxide (ArO+) and argon nitride (ArN+) cations, which can cause isobaric interference with the detection and measurement of iron-56 (56Fe) and iron-54 (54Fe), respectively, in mass spectrometry. Platinum present in stainless steel can form platinum argide (PtAr+), which interferes with the detection of uranium-234, an isotope used as a tracer in aquifers. Argon chloride cations can interfere with the detection of arsenic, as Ar35Cl+ has a mass-to-charge ratio almost identical to that of arsenic's one stable isotope, 75As. In these circumstances ArO+ may be removed by reaction with NH3. Alternatively, electrothermal vaporization or the use of helium gas can avoid these interference problems. Argon can also form an anion with chlorine, ArCl−, though this is not a problem for mass spectrometry applications, as only cations are detected.
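The isobaric interferences just described can be made concrete with exact isotope masses: the argide ion and the analyte isotope differ by only a few hundredths of a mass unit, which sets the mass resolution (m/Δm) a spectrometer would need to separate them. A sketch using standard isotope masses from general reference tables, not values from the source.

<syntaxhighlight lang="python">
# Mass resolution needed to separate argide interferences from analyte
# isotopes in ICP-MS. Isotope masses in unified atomic mass units (u).
M = {"Ar40": 39.9624, "O16": 15.9949, "N14": 14.0031,
     "Cl35": 34.9689, "Fe56": 55.9349, "Fe54": 53.9396, "As75": 74.9216}

cases = [  # (components of the interfering argide, analyte isotope)
    (("Ar40", "O16"), "Fe56"),   # ArO+ on iron-56
    (("Ar40", "N14"), "Fe54"),   # ArN+ on iron-54
    (("Ar40", "Cl35"), "As75"),  # ArCl+ on arsenic-75
]
for parts, analyte in cases:
    m_argide = sum(M[p] for p in parts)
    dm = abs(m_argide - M[analyte])
    print(f"{'+'.join(parts)} = {m_argide:.4f} u vs {analyte} = "
          f"{M[analyte]:.4f} u; dm = {dm:.4f} u -> m/dm = {M[analyte]/dm:,.0f}")
</syntaxhighlight>

The ArCl+/75As pair comes out needing roughly three times the resolution of the ArO+/56Fe pair, which is why arsenic detection is singled out above as especially problematic.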
The argon borynium ion, BAr+, is produced when BBr+ at energies between 9 and 11 eV reacts with argon atoms; 90% of the positive charge is on the argon atom. ArC+ ions can be formed when argon ions impact carbon monoxide with energies between 21 and 60 eV; however, more C+ ions are formed, and at the higher energies the O+ yield increases. ArN+ can form when argon ions impact dinitrogen with energies between 8.2 and 41.2 eV, peaking around 35 eV; however, far more N2+ and N+ are produced. ArXe+ is held together with a strength of 1445 cm−1 in the X electronic state, but 1013 cm−1 in the B excited state. Metal–argon cations are called "argides". The argide ions produced during mass spectrometry have higher intensity when the binding energy of the ion is higher. Transition elements have higher binding and ion flux intensity than main group elements. Argides can be formed in the plasma either by excited argon atoms reacting with an atom of another element, or by an argon atom binding with another ion: Ar* + M → ArM+ + e−; M+ + Ar → ArM+. Doubly charged cations, called superelectrophiles, are capable of reacting with argon. The ions produced include doubly charged ArCF-, ArCH- and ArBF-type species, containing bonds between argon and carbon or boron. Doubly ionised acetylene, HCCH2+, reacts inefficiently with argon to yield HCCAr2+; this product competes with the formation of Ar+ and argonium. A doubly charged silicon fluoride cation likewise reacts with argon to yield an ArSiF-type dication.
{| class="wikitable"
! Ion
! Bond length (Å)
! Dissociation energy (kJ/mol)
! Excited state bond length (Å)
! Excited state dissociation energy
|-
| ArH+ || || 3.4 eV || ||
|-
| LiAr+ || 2.343 || 0.30 eV || ||
|-
| BeAr+ || || 4100 cm−1 || ||
|-
| BAr+ || 2.590 || 210 || ||
|-
| ArC+ || || || ||
|-
| ArN+ || 3.5 || 2.16 eV || ||
|-
| ArO+ || || || ||
|-
| ArF+ || 1.637 || 194 || ||
|-
| NaAr+ || || 19.3 || ||
|-
| MgAr+ || 2.88 || 1200 cm−1 || ||
|-
| AlAr+ || || 982 cm−1 || ||
|-
| SiAr+ || || || ||
|-
| ArP+ || || || ||
|-
| ArS+ || || || ||
|-
| ArCl+ || || || ||
|-
| || || || ||
|-
| CaAr+ || || 700 cm−1 || ||
|-
| ScAr+ || || || ||
|-
| TiAr+ || || 0.31 eV || ||
|-
| VAr+ || 2.65 || 37, D0 = 2974 cm−1 || ||
|-
| CrAr+ || || 28, D0 = 2340 cm−1 || ||
|-
| MnAr+ || || 0.149 eV || ||
|-
| FeAr+ || || 0.11 eV || ||
|-
| CoAr+ || 2.385 || 49, D0 = 4111 cm−1 || ||
|-
| NiAr+ || || 53, D0 = 4572 cm−1 || ||
|-
| CuAr+ || || 0.53 eV || ||
|-
| ZnAr+ || 2.72 || 0.25 eV, D0 = 2706 cm−1 || ||
|-
| GaAr+ || || || ||
|-
| AsAr+ || || || ||
|-
| RbAr+ || || || ||
|-
| SrAr+ || || 800 || ||
|-
| ZrAr+ || 2.72 || D0 = 2706 cm−1 || 3.050 || 1179 cm−1
|-
| NbAr+ || 2.677 || 37, D0 = 3106 cm−1 || ||
|-
| AgAr+ || || || ||
|-
| InAr+ || || || ||
|-
| ArI+ || || || ||
|-
| BaAr+ || || 600 cm−1 || ||
|}
Polyatomic cations Metal ions can also bind more than one argon atom, forming a kind of argon metal cluster. Metal ions of different sizes at the centre of a cluster fit different geometries of argon atoms around the ion. Argides with multiple argon atoms have been detected in mass spectrometry. These can have variable numbers of argon atoms attached, but there are magic numbers where the complex more commonly has a particular number, either four or six argon atoms. These can be studied by time-of-flight mass spectrometry and by the photodissociation spectrum; other study methods include Coulomb explosion analysis. Argon tagging is a technique whereby argon atoms are weakly bound to a molecule under study. It results in a much lower temperature of the tagged molecules, with sharper infrared absorption lines. The argon-tagged molecules can be disrupted by photons of a particular wavelength. Lithium ions add argon atoms to form clusters with more than a hundred argon atoms. The clusters Li+Ar4 and Li+Ar6 are particularly stable and common. Calculations show that the small clusters are all quite symmetrical: Li+Ar2 is linear, Li+Ar3 is flat and triangular with D3h symmetry, Li+Ar4 is tetrahedral, and Li+Ar5 could be a square pyramid or a trigonal bipyramid. Li+Ar6 is an octahedron with Li at the centre. Li+Ar7 and slightly larger clusters have a core octahedron of argon atoms, with one or more triangular faces capped by further argon atoms. This extra bonding is much weaker, which explains the greater scarcity of such clusters. Sodium forms clusters with argon atoms with peaks at 8, 10, 16, 20, 23, 25 and 29 atoms, and also at the icosahedral numbers of 47, 50, 57, 60, 63, 77, 80, 116 and 147 argon atoms. This includes the square antiprism (8 atoms) and the capped square antiprism (10 atoms).
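Several of the cluster sizes mentioned in this section (13-atom icosahedra, 55-atom and roughly 147-atom shells) correspond to the closed-shell "magic numbers" of Mackay icosahedra, which follow a standard formula. A sketch for orientation only; whether a reported magic number counts the central metal ion or only the surrounding argon atoms varies between studies.

<syntaxhighlight lang="python">
# Closed-shell sizes ("magic numbers") of Mackay icosahedral clusters:
# N(k) = (10k^3 + 15k^2 + 11k + 3) / 3 atoms after k complete shells.
def mackay(k: int) -> int:
    return (10 * k**3 + 15 * k**2 + 11 * k + 3) // 3  # always divisible by 3

for k in range(1, 5):
    print(f"{k} shell(s): {mackay(k)} atoms")
# -> 13, 55, 147, 309; compare the enhanced stability reported above for
#    copper clusters near 55 and ~147 argon atoms and sodium clusters at 147.
</syntaxhighlight>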
In Ti+Ar1−n clusters the argon atoms induce a mixing of the ground electronic state, 3d24s1, with 3d34s0. When a plasma of titanium in expanding argon gas is made with a laser, clusters from Ti+Ar up to Ti+Ar50 are formed, but Ti+Ar6 is much more common than all the others. In this cluster the six argon atoms are arranged in an octahedron around the central titanium ion. For Ti+Ar2, DFT calculations predict a linear shape; Ti+Ar3 is not even flat and has one short and two longer Ti−Ar bonds. Ti+Ar4 is a distorted tetrahedron with one longer Ti−Ar bond. Ti+Ar5 is an asymmetrical trigonal bipyramid with one shorter bond. For clusters with seven or more argon atoms, the structure contains a Ti+Ar6 octahedron with triangular faces capped by further argon atoms. Cu+Ar2 is predicted to be linear. Cu+Ar3 is predicted to be planar and T-shaped, with an Ar−Cu−Ar angle of 93°. Cu+Ar4 is predicted to be rhombic planar (not square or tetrahedral). For alkali and alkaline earth metals the M+Ar4 cluster is tetrahedral. Cu+Ar5 is predicted to have a rhombic pyramid shape. Cu+Ar6 has a flattened octahedral shape. Cu+Ar7 is much less stable, with the seventh argon atom outside an inner shell of six argon atoms; this is called capped octahedral. A complete second shell of argon atoms yields Cu+Ar34. Above this number a structural change takes place to an icosahedral arrangement, with Cu+Ar55 and Cu+Ar146 having more stability. With a strontium ion, Sr+, from two to eight argon atoms can form clusters. Sr+Ar2 has a triangular shape with C2v symmetry. Sr+Ar3 has a trigonal pyramid shape with C3v symmetry. Sr+Ar4 has two trigonal pyramids sharing a face, with strontium at the common apex; it has C2v symmetry. Sr+Ar6 has a pentagonal pyramid of argon atoms with the strontium atom below the base. Niobium tetraargide, Nb+Ar4, probably has the argon atoms arranged in a square around the niobium, as does vanadium tetraargide, V+Ar4. The hexaargides Co+Ar6 and Rh+Ar6 likely have an octahedral argon arrangement. The indium monocation forms clusters with multiple argon atoms, with magic numbers at 12, 18, 22, 25, 28, 45, 54 and 70 argon atoms, numbers characteristic of icosahedral shapes. By firing a UV laser at copper metal in an argon–carbon monoxide mixture, argon-tagged copper carbonyl cations are formed. These ions can be studied by observing which wavelengths of infrared radiation cause the molecules to break up. The molecular ions CuCO+Ar, Cu(CO)2+Ar, Cu(CO)3+Ar and Cu(CO)4+Ar are disrupted to lose argon at infrared wavenumbers of 2216, 2221, 2205 and 2194 cm−1 respectively. The argon binding energies are respectively 16.3, 1.01, 0.97 and 0.23 kcal/mol. The infrared absorption peak for Cu(CO)3+Ar is 2205 cm−1, compared to 2199 cm−1 for Cu(CO)3+. For Cu(CO)4+Ar the peak is at 2198 cm−1, compared to 2193 cm−1 for Cu(CO)4+. For Cu(CO)2+Ar the peak is at 2221 cm−1, compared to 2218.3 cm−1 for the argon-free ion, and for CuCO+Ar the peak is at 2216 cm−1, considerably different from the 2240.6 cm−1 of CuCO+. The computationally predicted shapes of these molecular ions are linear for CuCO+Ar, slightly bent T-shaped for Cu(CO)2+Ar, and, for Cu(CO)3+Ar, a trigonal pyramid with argon at the top and a flat, star-like copper tricarbonyl forming the base. Ions studied by argon tagging include the hydrated proton, H+(H2O)nAr with n = 2 to 5, hydrated 18-crown-6 ether alkali metal ions, hydrated alkali metal ions, transition metal acetylene complexes, protonated ethylene, and IrO4+. Argon methyl cations (or methyliumargon), ArnCH3+, are known for n = 1 to 8.
CH3+ has a Y shape, and when argon atoms are added they go above and below the plane of the Y. If more argon atoms are added, they line up with the hydrogen atoms. ΔH0 for ArCH3+ is 11 kcal/mol, and for Ar2CH3+ it is 13.5 kcal/mol (relative to 2Ar + CH3+). Boroxyl ring cationic complexes with argon, [ArB3O4]+, [ArB3O5]+, [ArB4O6]+ and [ArB5O7]+, were prepared via laser vaporization at cryogenic temperatures and investigated by gas phase infrared spectroscopy. They were the first large stable gas phase complexes featuring strong dative bonding between argon and boron. Dications Dications with argon are known for the coinage metals. Known dications include CuArn2+ and AgArn2+ for n = 1–8, with a peak occurrence of CuAr42+ and AgAr42+, and AuArn2+ for n = 3–7. In addition to those with four argon atoms, the clusters with six argon atoms also occur at enhanced concentration. The stability of the ions with two positive charges is unexpected, as the ionization energy of argon is lower than the second ionization energy of the metal atom; the second positive charge on the metal atom should therefore move to the argon, ionizing it, and the resulting highly repulsive molecule should undergo a Coulomb explosion. However, these molecules appear to be kinetically stable: to transfer the charge to an argon atom, they have to pass through a higher-energy state. The clusters with four argon atoms are expected to be square planar, and those with six to be octahedral, distorted by the Jahn–Teller effect. Polyatomic anions Examples of anions containing strong bonds with noble gases are extremely rare: the generally nucleophilic nature of anions results in their inability to bind to noble gases, which have negative electron affinity. However, the 2017 discovery of "superelectrophilic anions", gas phase fragmentation products of closo-dodecaborates, led to the observation of stable anionic compounds containing a boron–noble gas bond with a significant degree of covalent interaction. The most reactive superelectrophilic anion, [B12(CN)11]−, a fragmentation product of the cyanated cluster [B12(CN)12]2−, was reported to bind argon spontaneously at room temperature. Solid compounds Armand Gautier noticed that rock contained argon (and also nitrogen) that was liberated when the rock was dissolved in acid; however, the question of how the argon was combined in the rock was ignored by the scientific community. Fullerene solvates Solid buckminsterfullerene has small spaces between the C60 balls. Under 200 MPa of pressure and 200 °C of heat for 12 hours, argon can be intercalated into the solid to form crystalline Ar1C60. Once this cools down, it is stable at standard conditions for months. Argon atoms occupy octahedral interstitial sites. The crystalline lattice size is almost unchanged at room temperature, but is slightly larger than pure C60 below 265 K. However, argon does stop the buckyballs spinning below 250 K, a lower temperature than in pure C60. Solid C70 fullerene will also absorb argon under a pressure of 200 MPa and at a temperature of 200 °C. C70·Ar has argon in octahedral sites and has the rock salt structure, with cubic crystals in which the lattice parameter is 15.001 Å. This compares to the pure C70 lattice parameter of 14.964 Å, so the argon forces the crystals to expand slightly. The C70 ellipsoidal balls rotate freely in the solid; they are not locked into position by extra argon atoms filling the holes. Argon gradually escapes over a couple of days when the solid is stored at standard conditions, so C70·Ar is less stable than C60·Ar.
This is likely due to the shape and internal rotation allowing channels through which Ar atoms can move. When fullerenes are dissolved and crystallized from toluene, solids may form with toluene included as part of the crystal. However, if this crystallization is performed under a high-pressure argon atmosphere, toluene is not included, being replaced by argon. The argon is then removed from the resulting crystal by heating, to produce unsolvated solid fullerene. Clathrate Argon forms a clathrate with hydroquinone, (HOC6H4OH)3•Ar. When crystallised from benzene under a pressure of 20 atmospheres of argon, a well-defined structure containing argon results. An argon–phenol clathrate, 4C6H5OH•Ar, is also known. It has a binding energy of 40 kJ/mol. Other substituted phenols can also crystallise with argon. The argon–water clathrate is described in the Aqueous argon section above. Argon difluoride Argon difluoride, ArF2, is predicted to be stable at pressures over 57 GPa. It should be an electrical insulator. Ne2Ar and Ar2Ne At around 4 K there are two phases where neon and argon are mixed as a solid: Ne2Ar and Ar2Ne. With Kr, solid argon instead forms a disorganized mixture. ArH4 Under high pressure, stoichiometric solids are formed with hydrogen and with oxygen: Ar(H2)2 and Ar(O2)3. Ar(H2)2 crystallises in the hexagonal C14 MgZn2 Laves phase. It is stable to at least 200 GPa, but is predicted to change at 250 GPa to an AlB2 structure. At even higher pressures the hydrogen molecules should break up, followed by metallization. ArO and ArO6 Oxygen and argon under pressure at room temperature form several different alloys with different crystal structures. Argon atoms and oxygen molecules are similar in size, so a greater range of miscibility occurs than in other gas mixtures. Solid argon can dissolve up to 5% oxygen without changing structure. Below 50% oxygen, a hexagonal close-packed phase exists, stable from about 3 GPa to 8.5 GPa; its representative formula is ArO. With more oxygen, between 5.5 and 7 GPa, a cubic Pm3n structure exists, but under higher pressure it changes to an I2d space group form. Above 8.5 GPa these alloys separate into solid argon and ε-oxygen. The cubic structure has a unit cell edge of 5.7828 Å at 6.9 GPa. The representative formula is Ar(O2)3. ArHe2 Using density functional theory, ArHe2 is predicted to exist with the MgCu2 Laves phase structure at high pressures below 13.8 GPa. Above 13.8 GPa it transforms to an AlB2 structure. Ar-TON Under pressure, argon inserts into the zeolite TON. Argon has an atomic radius of 1.8 Å, so it can insert into pores if they are big enough. Each unit cell of the TON zeolite can contain up to 5 atoms of argon, compared to 12 of neon. Argon-infused TON zeolite (Ar-TON) is more compressible than Ne-TON, as the unoccupied pores become elliptical under increased pressure. When Ar-TON is brought to atmospheric pressure, the argon desorbs only slowly, so that some remains in the solid without external pressure for a day. Nickel argide At 140 GPa and 1500 K, nickel and argon form an alloy, NiAr. NiAr is stable at room temperature and at pressures as low as 99 GPa. It has a face-centred cubic (fcc) structure. The compound is metallic. Each nickel atom loses 0.2 electrons to an argon atom, which is thereby an oxidant. This contrasts with Ni3Xe, in which nickel is the oxidant. The volume of the NiAr compound is 5% less than that of the separate elements at these pressures.
If this compound exists in the core of the Earth, it could explain why only half of the argon-40 that should have been produced by the radioactive decay driving geothermal heating seems to exist on the Earth. Organoargon chemistry Organoargon chemistry describes the synthesis and properties of chemical compounds containing a carbon–argon chemical bond. Very few such compounds are known. The reaction of acetylene dications with argon, achieved in 2008, produced an argon–carbon bonded dication; this reaction is unique to argon among the noble gases. The compound FArCCH has been theoretically studied and is predicted to be stable. FArCCF might also be stable enough to synthesise and detect, but probably not FArCCArF. Calculations in 2015 suggest that FArCCH and FArCH3 are stable, but not FArCN. One such species should be kinetically stable, as is also expected of the krypton and xenon (but not helium) analogues. HArC4H (for which the krypton analogue is known) and HArC6H have also been predicted to be stable. FArCO+ and ClArCO+ should be metastable and might be possible to characterise under cryogenic conditions. Calculations suggest that HArCCF and HCCArF should be stable, and that HNgCCF molecules should be more stable than HNgCCH (Ng = Ar, Kr, Xe); the corresponding krypton species have been experimentally produced, but not the argon species, despite an experimental attempt. HCCNgCN and HCCNgNC (Ng = Ar, Kr, Xe) are likewise computed to be stable, but experimental searches for them have failed.
Physical sciences
Noble gas compounds
Chemistry
63074081
https://en.wikipedia.org/wiki/Climate%20change%20in%20the%20Middle%20East%20and%20North%20Africa
Climate change in the Middle East and North Africa
Climate change in the Middle East and North Africa (MENA) refers to changes in the climate of the MENA region and the subsequent response, adaptation and mitigation strategies of countries in the region. In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas (GHG) emissions, despite making up only 6% of the global population. These emissions are mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the extensive oil and natural gas reserves found within the region. The Middle East is one of the regions most vulnerable to climate change. The impacts include increases in drought conditions, aridity, heatwaves and sea level rise. Sharp global temperature and sea level changes, shifting precipitation patterns and an increased frequency of extreme weather events are some of the main impacts of climate change, as identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts due to its arid and semi-arid environment, facing climatic challenges such as low rainfall, high temperatures and dry soil. The climatic conditions that foster such challenges for MENA are projected by the IPCC to worsen throughout the 21st century. If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100. Climate change is expected to put significant strain on already scarce water and agricultural resources within the MENA region, threatening the national security and political stability of all included countries. Over 60 percent of the region's population lives in areas of high or very high water stress, compared to the global average of 35 percent. This has prompted some MENA countries to engage with the issue of climate change on an international level through environmental accords such as the Paris Agreement. Law and policy are also being established at a national level among MENA countries, with a focus on the development of renewable energies. Greenhouse gas emissions As of January 2021, the UNICEF website groups the following set of 20 countries as belonging to the MENA region: Algeria, Bahrain, Djibouti, Egypt, Iran (Islamic Republic of), Iraq, Jordan, Kuwait, Lebanon, Libya, Morocco, Oman, Qatar, Saudi Arabia, State of Palestine, Sudan, Syrian Arab Republic, Tunisia, United Arab Emirates and Yemen. Others include Israel as well. Greenhouse gas emissions produced by humans have been identified by the IPCC and the vast majority of climate scientists as the primary driver of climate change. In the past three decades, the MENA region has more than tripled its greenhouse gas emissions and currently emits above the global average per person, with most of the top ten countries by carbon dioxide emissions per person being found in the Middle East. These high emission levels can be primarily attributed to Saudi Arabia and Iran, which are the 9th and 7th largest emitters of CO2 in the world respectively, accounting for 40% of the region's emissions in 2018. MENA countries rely heavily on fossil fuels for the generation of electricity, sourcing 97% of their energy from oil, natural gas, and coal (the last in Turkey). Fossil fuel extraction, production and export is also a significant component of many economies within the MENA region, which possesses 60% of the world's oil reserves and 45% of known natural gas reserves. Reducing gas flaring would help reduce these emissions.
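The claim that the region emits above the global per-person average follows directly from the two shares quoted above; a back-of-the-envelope check, using only the figures given in this article.

<syntaxhighlight lang="python">
# Per-capita emissions relative to the global average, from the shares above:
# 8.7% of global GHG emissions produced by 6% of the global population.
emission_share = 0.087
population_share = 0.06
print(f"MENA per-capita emissions ~{emission_share / population_share:.2f}x "
      "the global average")  # -> ~1.45x
</syntaxhighlight>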
The failure of the Iranian subsidy reform plan during the 2010s left Iran as the world's largest subsidizer of fossil fuels in 2018. Unlike other countries, which successfully removed subsidies by acting gradually, the Iranian government attempted to reduce gasoline subsidies suddenly at the end of the decade, sparking riots. Impacts on the natural environment Temperature and weather changes Heat extremes The IPCC projects average global temperatures to rise more than 1.5 degrees by the end of the 21st century. MENA has been identified as a hotspot for future temperature changes due to its arid environmental conditions. Whilst projected rates of warming during winter months are low, the region is expected to experience extreme temperature increases during summer. Temperature rises are expected to be further amplified by reductions in rainfall and the associated depletion of soil moisture, limiting evaporative cooling. As a result, heat extremes are expected to increase significantly in both frequency and intensity across the MENA region. According to studies published by the Max Planck Institute for Chemistry, the number of very hot days in the region doubled between the 1970s and the time when the report was published (2016). The study further projects that heatwaves will occur for 80 days of the year by 2050 and 118 days of the year by 2100. Considering also the increased sandstorms associated with longer drought periods, even a 2 degree temperature rise would make large parts of the region uninhabitable and force people to migrate. Limiting the temperature rise to 1.5 degrees would significantly reduce risks for the region. The average maximum temperature during the hottest days of the past 30 years has been 43 degrees Celsius. Dutch atmospheric chemist Johannes Lelieveld has projected that temperature maxima could reach almost 50 degrees Celsius under current climate scenarios established by the IPCC. He further projects that average summer temperatures will increase by up to 7% across the MENA region, and by up to 10% in highly urbanised areas. Extreme heat has been identified as a serious threat to human health, heightening an individual's susceptibility to exhaustion, heart attack and mortality. Climate scientist Ali Ahmadalipour has projected heat-related mortality rates within the MENA region to be up to 20 times higher than current rates by the end of the century. Water resources The Middle East and North Africa currently face extreme water scarcity, with twelve of the 17 most water-stressed countries in the world located in the region. The World Bank defines an area as water-stressed when per-person water supplies fall below 1,700 cubic metres per year. The water supply across the MENA region averages 1,274 cubic metres per person, with some countries having access to only 50 cubic metres per person. The agricultural sector within the MENA region is heavily dependent on irrigation systems due to the arid climate, with 85% of fresh water resources being used for agricultural purposes. The IPCC indicates that the global distribution of rainfall is currently shifting in response to increasing greenhouse gas emissions, with increases in high-latitude and mid-latitude wet regions and decreases in equatorial dry regions such as MENA. These shifting precipitation patterns have already placed significant strain on MENA agriculture, with the frequency and severity of droughts rising significantly in the past decade.
A recent NASA study suggests that the 1998–2012 drought in the Middle East was the worst to occur in the past 900 years. Climate scientist Colin Kelley suggests that climate change was a significant contributor to the increased severity of the most recent drought in the region. He claims that such a drought is three times more likely to occur due to human influence on the climate, and that the drought contributed to the beginning of the Syrian civil war. Along with its environmental impacts, the increase in drought periods affects agricultural incomes, diminishes public health and weakens political stability in the MENA region. Syria experienced its most severe drought on record from 2007 to 2010, when restricted water supply degraded agricultural resources and increased economic pressures. American environmental scientist Peter Gleick also asserts that heightened social vulnerability and conflict over scarce water supplies during this period catalysed the onset of the Syrian war. However, in 2017, a study led by sociologist and political ecologist Jan Selby disputed these claims, reporting that there is no solid evidence that climate change was associated with the drought, nor that the drought contributed to the conflict in Syria. In 2019, Konstantin Ash and Nick Obradovich published research indicating that extreme drought was one of the leading factors in the onset of the Syrian war. Increasing water insecurity as a result of climate change is set to exacerbate existing food insecurities in the countries affected. A study published by the World Food Programme predicts a 30% decline in crop yields by 2050 as a result of increasing droughts. North African countries are highly vulnerable to reduced precipitation, as 88% of the region's crops are not irrigated and rely on consistent rainfall. The consequences of these reduced harvests fall most strongly on rural regions and communities that rely heavily on agriculture as a source of income. Sea level rise Alexandria is one of the cities most vulnerable to sea level rise. Across the MENA region, 60 million people inhabited coastal areas in 2010, a population predicted by the World Bank to grow to 100 million by 2030. As a result, the population of the MENA region is expected to be significantly affected by the sea level rise occurring due to climate change. One consequence of rising sea levels is the loss of coastal wetlands, a natural resource responsible for ecosystem services such as storm buffering, water quality maintenance and carbon sequestration. A study conducted by the World Bank predicts that the MENA region would lose over 90% of its coastal and freshwater wetlands if a one-metre sea level rise were to occur. In North Africa, Egypt is expected to be the country most affected by changes in sea level. A third of the Nile Delta and large parts of Alexandria, Egypt's second-largest city, lie below the mean global sea level. These areas have been drained for agricultural purposes and have undergone urban development, with inundation and flooding prevented by sea walls and dams. However, failures in these structures, storm surges and extreme weather events could lead to the inundation of these areas in the future if sea levels continue to rise. Agricultural areas in Egypt are particularly at risk: a one-metre rise in sea level would submerge 12–15% of the nation's total agricultural land. This is estimated to displace 6.7 million people in Egypt and affect millions more who rely on agriculture for income.
A more moderate 50 cm increase in sea level has been projected to displace 2 million people and generate US$35 billion of damages. Mitigation and adaptation The severe impacts of climate change on the region have made climate change mitigation and adaptation important issues there. Regional cooperation is considered one of the main conditions for effective mitigation and adaptation. Renewable energy The MENA region possesses high potential for developing renewable energy technologies due to the high levels of wind and sunshine associated with its climate. The International Renewable Energy Agency (IRENA) has identified over half of all land in GCC states as suitable for the deployment of solar and wind technologies. IRENA has also identified North African countries as having greater potential for wind and solar energy generation than all other regions of the continent. Sourcing energy from renewable technologies instead of fossil fuels could significantly reduce energy-related GHG emissions, which presently account for 85% of total emissions within the MENA region. Renewable energy generation also involves significantly less water usage than the processes associated with fossil fuel extraction and its conversion into usable energy, so it has the potential to improve water quality and availability within the region. Renewable energy presently accounts for 1% of the total primary energy supply across the MENA region. At the 2016 UN Climate Change Conference in Marrakech, Morocco (COP22), Morocco, Tunisia, Yemen, Lebanon and the State of Palestine, along with 43 other countries, committed to deriving all energy from renewable resources by 2050. Ouarzazate Solar Power Station The Ouarzazate Solar Power Station is a solar power complex located in the Drâa-Tafilalet region of Morocco, and is currently the largest concentrated solar power plant in the world. The complex consists of four separate power plants that utilise concentrated solar power and photovoltaic solar technology. The project, costing US$2.67 billion, is expected to provide 1.1 million Moroccans with clean energy and reduce the country's carbon emissions by 700,000 tonnes every year. The total energy capacity of the solar complex was expected to reach 2000 megawatts by the end of 2020. Policies and legislation Paris Agreement Eleven countries from the MENA region attended the 21st Conference of the Parties of the UNFCCC, where countries negotiated the Paris Agreement, an agreement within the United Nations framework concerning greenhouse gas emissions mitigation. Eritrea, Iran, Iraq, Libya and Yemen are the only countries in the world which have not ratified the agreement. Morocco has set its nationally determined contribution at a 17–42% reduction in emissions and has set a target of 52% renewable energy in its total installed electricity production capacity by 2050. The share of renewable energy reached 28% in 2018, and the country is currently recognised by the United Nations as being on track to achieve its renewable energy targets. The UAE, despite ratifying the agreement, has set no reduction in emissions in its nationally determined contribution; the United Nations has identified its NDC target as "critically insufficient". MENA Climate Action Plan In 2016 the World Bank put forth the MENA Climate Action Plan, a series of financial commitments centred around the redistribution of finance to the MENA region.
The World Bank deemed the plan's core focus to be ensuring food and water security, increasing resilience to climate change impacts and improving investment in renewable energy sources. One of the Action Plan's major commitments was to allocate 18–30% of MENA finance towards climate-related initiatives, which currently stands at $1.5 billion annually. The World Bank has also outlined a significant increase in funding directed towards adaptation initiatives such as water conservation and recycling, the introduction of desalination facilities, and investment in carbon sequestration technologies. By country Algeria Egypt Egypt's Nile Delta is impacted by saltwater intrusion caused by sea level rise, with major implications for the country. Agriculture and food security in Egypt will be disrupted by climate change through increased drought, higher temperatures, extreme weather events, plant diseases and pests, with major infrastructure changes required to adapt. Water security in Egypt will also be disrupted. Iran Iraq Israel According to the Ministry of Environmental Protection of Israel: "While Israel is a relatively small contributor to climate change due to its size and population, it is sensitive to the potential impacts of the phenomenon, due to its location. Thus, it is making an effort to reduce greenhouse gas emissions while simultaneously doing whatever possible to reduce the expected damage that will result if climate change is not halted." The impacts of climate change are already felt in Israel. The temperature rose by 1.4 degrees between 1950 and 2017. The number of hot days has increased and the number of cold days has decreased. Precipitation rates have fallen. These trends are projected to continue. By the year 2050, the number of days per year with a maximum temperature above 30 degrees in the coastal area is projected to increase by 20 in a scenario with climate change mitigation, and by 40 in a "business as usual" scenario. Israel ratified the Paris Agreement in 2016. The country is part of 3 initiatives on mitigation and adaptation and 16 other actions taken by non-governmental organisations. According to Israel's intended nationally determined contribution, the main mitigation target is to reduce per capita greenhouse gas emissions to 8.8 tCO2e by 2025 and to 7.7 tCO2e by 2030. Total emissions should be 81.65 MtCO2e in 2030. In the business-as-usual scenario the emissions would be 105.5 MtCO2e by 2030, or 10.0 tCO2e per capita. To reach this target, the government of Israel wants to reduce the consumption of electricity by 17% relative to the business-as-usual scenario, produce 17% of electricity from renewables, and shift 20% of transportation from cars to public transport by 2030. In an effort to comply with GHG emission reductions, Israel formed a committee to evaluate the country's potential for reducing emissions by the year 2030. Its findings confirmed that Israel's power sector generates approximately half of the country's total GHG emissions; the second-largest emitter is the transport sector, which produces approximately 19% of total emissions. Jordan Kuwait Morocco Sudan Syria Tunisia Turkey United Arab Emirates
Physical sciences
Climate change
Earth science
73165062
https://en.wikipedia.org/wiki/Lithophane
Lithophane
A lithophane is a thin plaque of translucent material, normally porcelain, which has been moulded to varying thickness, such that when lit from behind the different thicknesses show as different shades, forming an image. Only when lit from behind does the image display properly. They were invented in the 19th century and became very popular, typically for lampshades, nightlights, or to be hung on windows. They could also be given stands, to be placed in front of a light source. The longest side of a lithophane is typically between . The images tended to be artistically unadventurous, mostly repeating designs from prints, or from paintings via reproductive prints. A large number were rather sentimental domestic genre scenes, though there were also portraits, landscapes and religious subjects. The technique naturally produced images only in grisaille, tones of grey, but later ones were often painted in translucent paint, such as that used for watercolours, to give colour images. The name comes from Greek: lithos means "stone", and phainein means "to cause to appear". Invented in France in the 1820s, they rapidly became popular and were produced in various countries. But Germany soon became the main producer, remaining so for the rest of the century. The largest producer was the Prussian Königliche Porzellan-Manufaktur (KPM) in Berlin, leading to "Berlin transparencies" becoming a common term for them in English. The Plauesche Porzellanmanufaktur in Plaue, Thuringia, Germany, was another large manufacturer, which continued to make them into the second half of the 20th century. Their peak of production was perhaps from about 1840 to 1870. By the end of the 19th century lithophanes had largely fallen from fashion, but in recent decades they have had something of a revival, using, in addition to porcelain, glass and plastic, and, with 3D printing, sometimes paper. Technique To make a porcelain lithophane, a wax plaque was placed on a glass backing and carved, so that by lighting from behind the developing image could be seen in a similar fashion to the final lithophane. A cast of the wax was then taken in plaster of Paris, which became the reusable mould for the porcelain. This was generally left unglazed, as biscuit porcelain. As lithophanes came to be produced in larger numbers, more durable metal moulds were often used. As the porcelain is in places only about thick, wastage in firing was high, up to about 60%. History There were precedents in Chinese porcelain, in a technique known as an hua, meaning "secret" or "hidden" decoration. But this seems to have been produced by scratching or engraving the unfired porcelain body, and was mostly used for floral decoration or text inscriptions, often Buddhist, rather than the images of the Western tradition. It was also mostly used on closed vessel shapes such as vases and teapots, suggesting that a backlit view was not intended. The European technique was invented by the French diplomat Baron Paul de Bourgoing (1791–1864), who patented it in 1827. His friend Baron Alexis du Tremblay had a pottery on his estate at Rubelles, and the earliest examples were made there. As de Bourgoing did not feel it appropriate, as a diplomat, for his name to be used in commerce, the lithophanes were marked "AdT" (for du Tremblay's name). Other factories quickly adopted the technique, many under licence from de Bourgoing. Meissen porcelain made them from 1829, and had made tens of thousands by 1850.
Apart from Berlin and Plaue, mentioned above and perhaps the largest manufacturers, they were also made by Volkstedt, St Petersburg and Royal Copenhagen. There was an English patent, under licence from Bourgoing, granted in 1828, to a Robert Griffith Jones, who then gave sub-licences to English factories including Mintons, Copelands (later part of Spode) and Grainger's Factory in Worcester, later merged into Royal Worcester. By the end of the century the fashion was largely over, but lithophanes were made to commemorate the Coronation of Edward VII in 1902. By the middle of the 20th century, the technique was used in Japan, mostly for gaudy teasets for American soldiers after World War II, with the lithophaned face of a geisha at the bottom of the cups. Modern lithophanes Porcelain lithophanes are still made in limited numbers, by both studio potters and large manufacturers such as Bernardaud and Wedgwood. Similar effects can be achieved in moulded coloured glass, but these should probably not be called lithophanes. The term has revived in use for images created by digitally-controlled cutting ("CNC"), a subtractive process, or by 3D printing, an additive one. Many companies now offer to make one-off images, or the equipment to make them. Solutions are offered to add colour to these. Collections Most museums with a collection of 19th-century porcelain have examples of lithophanes, though only a small number are likely to be on display. The largest collection belongs to the Blair Museum of Lithophanes, now at the Schedel Arboretum and Gardens in Elmore, Ohio. Gallery
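The digitally produced lithophanes described under Modern lithophanes above all rest on the same mapping as the carved-wax originals: dark parts of the image become thick (blocking light), light parts thin. A minimal illustrative sketch of that conversion using the Pillow imaging library; the 0.6–3.0 mm thickness range, the grid size and the input file name are arbitrary assumptions, not values from any particular product or source.

<syntaxhighlight lang="python">
# Convert a grayscale image into a lithophane thickness map:
# dark pixels are rendered thick (block backlight), light pixels thin.
from PIL import Image  # Pillow

T_MIN_MM, T_MAX_MM = 0.6, 3.0  # assumed printable thickness range

img = Image.open("portrait.png").convert("L")  # hypothetical input file
img = img.resize((100, 100))                   # coarse grid for illustration

w, h = img.size
pixels = img.load()
thickness = [[T_MIN_MM + (1 - pixels[x, y] / 255) * (T_MAX_MM - T_MIN_MM)
              for x in range(w)] for y in range(h)]

# thickness[y][x] is the local plaque thickness in mm; a slicer or CAD
# script would extrude this height field into the printable plaque.
print(f"thickness range: {min(map(min, thickness)):.2f}-"
      f"{max(map(max, thickness)):.2f} mm")
</syntaxhighlight>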
Technology
Materials
null
51726398
https://en.wikipedia.org/wiki/Detrital%20zircon%20geochronology
Detrital zircon geochronology
Detrital zircon geochronology is the science of analyzing the age of zircons deposited within a specific sedimentary unit by examining their inherent radioisotopes, most commonly the uranium–lead ratio. Zircon is a common accessory or trace mineral constituent of most granites and felsic igneous rocks. Due to its hardness, durability and chemical inertness, zircon persists in sedimentary deposits and is a common constituent of most sands. Zircons contain trace amounts of uranium and thorium and can be dated using several modern analytical techniques. Detrital zircon geochronology has become increasingly popular in geological studies since the 2000s, mainly due to advances in radiometric dating techniques. Detrital zircon age data can be used to constrain the maximum depositional age, determine provenance, and reconstruct the tectonic setting on a regional scale. Detrital zircon Origin Detrital zircons are part of the sediment derived from weathering and erosion of pre-existing rocks. Since zircons are dense and highly resistant at Earth's surface, many are transported, deposited and preserved as detrital zircon grains in sedimentary rocks. Properties Detrital zircons usually retain properties similar to those of their parent igneous rocks, such as age, rough size and mineral chemistry. However, the composition of detrital zircons is not entirely controlled by the crystallization of the zircon mineral; many are modified by later processes in the sedimentary cycle. Depending on the degree of physical sorting, mechanical abrasion and dissolution, a detrital zircon grain may lose some of its inherent features and gain over-printed properties, such as a rounded shape and smaller size. On a larger scale, two or more populations of detrital zircons from different origins may be deposited within the same sedimentary basin. This gives rise to a natural complexity in associating detrital zircon populations with their sources. Zircon is a strong tool for uranium–lead age determination because of its inherent properties: Zircon contains a high amount of uranium, commonly 100–1000 ppm, enough for instrumental measurement. Zircon incorporates only a low amount of lead during crystallization, at parts-per-trillion levels; thus, lead found in zircon can be assumed to be daughter nuclei from parent uranium. Zircon crystals grow between 600 and 1100 °C, while lead is retained within the crystal structure below 800 °C (see Closure temperature). So once zircon has cooled below 800 °C it retains all the lead from radioactive decay, and the U–Pb age can be treated as the age of crystallization, provided the mineral has not undergone high-temperature metamorphism after formation. Zircon commonly crystallizes in felsic igneous rocks, with greater than 60% silica (SiO2) content. These rocks are generally less dense and more buoyant; they sit high in the Earth's continental crust and have good preservation potential. Zircon is physically and chemically resistant, so it is more likely to be preserved through the sedimentary cycle. Zircon contains other elements which give supplementary information, such as hafnium (Hf) and the uranium/thorium (U/Th) ratio.
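The U–Pb ages used throughout this article come from the standard decay equations: t = ln(1 + 206Pb/238U)/λ238 for the 206Pb/238U age, and the 207Pb/206Pb age from solving (207Pb/206Pb) = (1/137.88)(e^(λ235·t) − 1)/(e^(λ238·t) − 1), which is transcendental in t. A sketch using the accepted decay constants; the sample ratios are invented purely for illustration.

<syntaxhighlight lang="python">
import math

L238 = 1.55125e-10  # decay constant of 238U, per year
L235 = 9.8485e-10   # decay constant of 235U, per year
U_RATIO = 137.88    # present-day 238U/235U

def age_206_238(pb206_u238: float) -> float:
    """206Pb/238U age in years: t = ln(1 + daughter/parent) / lambda."""
    return math.log(1 + pb206_u238) / L238

def age_207_206(pb207_pb206: float) -> float:
    """207Pb/206Pb age in years, found by bisection."""
    f = lambda t: ((math.exp(L235 * t) - 1)
                   / (U_RATIO * (math.exp(L238 * t) - 1))) - pb207_pb206
    lo, hi = 1.0, 5.0e9  # search window: 1 year to 5 Ga
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Illustrative (invented) concordant ratios:
print(f"206Pb/238U = 0.18   -> {age_206_238(0.18) / 1e9:.2f} Ga")
print(f"207Pb/206Pb = 0.0750 -> {age_207_206(0.0750) / 1e9:.2f} Ga")
</syntaxhighlight>

Both invented ratios return about 1.07 Ga, i.e. a concordant analysis; discordant grains give different answers from the two systems, which is the basis of the filtering discussed later.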
Examples include: Mature quartz arenite within the Valmy Formation yields older and more diverse ages given by well-rounded detrital zircons, which may correlate to multiple sedimentary reworking events. On the contrary, the Harmony Formation in the same region has younger and more homogeneous ages given by euhedral detrital zircons. These two formations illustrate the possibility of relating sedimentary maturity to resulting zircon ages, meaning that rounded and well-sorted sedimentary rocks (e.g. siltstone and mudstone) may have older and more diverse ages. Turbidites in the Harts Pass Formation contain homogeneous detrital zircon ages. On the other hand, the fluvial Winthrop Formation in another stratum of the same basin has various detrital zircon age populations. Comparing the vertical detrital zircon distribution within these two formations, one can expect a narrower age population of detrital zircons from rocks which are rapidly deposited, such as turbidites. Rocks that are gradually deposited (e.g. marine mudstone), however, have a greater chance and more time to incorporate zircon sediments from different localities. Detrital zircon extraction After rock samples are collected, they are cleaned, chipped, crushed and milled through standardized procedures. Then, detrital zircons are separated from the fine rock powder in three different ways, namely gravity separation using water, magnetic separation, and gravity separation using heavy liquid. In the process, grains are also sieved according to their size. The commonly used grain size for detrital zircon provenance analysis is 63–125 μm, which is equivalent to fine sand grain size. Types of detrital zircon analysis There are two main types of detrital zircon analysis: qualitative analysis and quantitative analysis. The biggest advantage of qualitative analysis is being able to uncover all possible origins of the sedimentary unit, whereas quantitative analysis should allow meaningful comparison of proportions in the sample. Qualitative analysis The qualitative approach examines all the available detrital zircons individually, regardless of their abundance among all grains. This approach is usually conducted with high-precision thermal ionization mass spectrometry (TIMS) and sometimes secondary ion mass spectrometry (SIMS). Optical examination and classification of detrital zircon grains are commonly included in qualitative studies through back-scattered electron (BSE) or cathodoluminescence (CL) imagery, although the relationship between the age and the optical classification of detrital zircon grains is not always reliable. Quantitative analysis The quantitative approach requires a large number of grain analyses within a sample rock in order to represent the overall detrital zircon population statistically (i.e. the total number of analyses should achieve an appropriate level of confidence). Because of the large sample size, secondary ion mass spectrometry (SIMS) and laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS) are used instead of thermal ionization mass spectrometry (TIMS). In this case, BSE and CL imagery are applied to select the best spot on a zircon grain for acquiring a reliable age. Methods Different methods in detrital zircon analysis yield different results. Generally, researchers include the methods/analytical instruments they used within their studies. There are generally three categories: the instrument(s) used for zircon analysis, their calibration standards, and the instrument(s) used for zircon imagery.
Details are listed in Table 1. Detrital zircon data Depending on the detrital zircon study, different variables should be included for analysis. There are two main types of data: analyzed zircon data (quantifiable data and imagery/descriptive data), and sample data (describing the rock from which the zircon grains were extracted). Details are listed in Table 2. Filtering detrital zircon data All data acquired first-hand should be cleansed before use to avoid error, normally by computer. By U-Pb age discordance Before detrital zircon ages are applied, they should be evaluated and screened accordingly. In most cases, data are compared with the U-Pb concordia graphically. For a large dataset, however, data with high U-Pb age discordance (>10–30%) are filtered out numerically. The acceptable discordance level is often adjusted with the age of the detrital zircon, since older populations have experienced higher chances of alteration and show higher discordance. (See Uranium–lead dating.) By choosing the best age Because of the intrinsic uncertainties within the three derived U-Pb ages (207Pb/235U, 206Pb/238U and 207Pb/206Pb), ages at ~1.4 Ga have the poorest resolution. An overall consensus for ages with higher accuracy is to adopt: 207Pb/206Pb for ages older than 0.8–1.0 Ga 206Pb/238U for ages younger than 0.8–1.0 Ga By data clustering Given the possibility of concordant yet incorrect detrital zircon U-Pb ages associated with lead loss or the inclusion of older components, some scientists apply data selection through clustering and comparing the ages. Three or more data points overlapping within ±2σ uncertainty would be classified as a valid age population from a particular source origin. By age uncertainty (±σ) There is no set limit for age uncertainty, and the cut-off value varies with different precision requirements. Although excluding data with large age uncertainties would enhance the overall zircon grain age accuracy, over-elimination may lower overall research reliability (a decrease in the size of the database). The best practice is to filter accordingly, i.e. setting the cut-off error to eliminate a reasonable portion of the dataset (say <5% of the total ages available). By applied analytical methods Depending on the required analytical accuracy, researchers may filter data by their analytical instruments. Generally, researchers use only the data from sensitive high-resolution ion microprobe (SHRIMP), laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS) and thermal ionization mass spectrometry (TIMS) because of their high precision (1–2%, 1–2% and 0.1% respectively) in spot analysis. An older analytical technique, lead-lead evaporation, is no longer used since it cannot determine the U-Pb concordance of the age data. By spot nature Apart from analytical methods, researchers may isolate core or rim ages for analysis. Normally, core ages are used as crystallization ages, as cores are the first-generated and least disturbed parts of zircon grains. On the other hand, rim ages can be used to track peak metamorphism, as rims are the first parts in contact with certain temperature and pressure conditions. Researchers may utilize these different spot natures to reconstruct the geological history of a basin. Application of detrital zircon ages Maximum depositional age Some of the most important information obtainable from detrital zircon ages is the maximum depositional age of the referring sedimentary unit.
The sedimentary unit cannot be older than the youngest age of the analyzed detrital zircons, because the zircons must have existed before the rock formed. This provides useful age information for rock strata where fossils are unavailable, such as terrestrial successions during Precambrian or pre-Devonian times. In practice, the maximum depositional age is averaged from a cluster of the youngest age data or from the peak in age probability, because the single youngest U-Pb age within a sample is almost always anomalously young once its uncertainty is considered. Tectonic studies Using detrital zircon age abundance On a global scale, detrital zircon age abundance can be used as a tool to infer significant tectonic events in the past. In Earth's history, the abundance of magmatic ages peaks during periods of supercontinent assembly. This is because a supercontinent provides a major crustal envelope that selectively preserves felsic magmatic rocks, which result from partial melting. Thus, many detrital zircons originate from these igneous provinces, resulting in similar age peak records. For instance, the peaks at about 0.6–0.7 Ga and 2.7 Ga (Figure 6) may correlate with the break-up of Rodinia and the supercontinent Kenorland, respectively. Using the difference between detrital zircon crystallisation ages and their corresponding maximum depositional age Apart from detrital zircon age abundance, the difference between detrital zircon crystallisation ages (CA) and the corresponding maximum depositional age (DA) can be plotted as a cumulative distribution function to correlate with particular tectonic regimes in the past. The effect of different tectonic settings on the difference between CA and DA is illustrated in Figure 7 and summarized in Table 3.
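To make the age and filtering steps above concrete, the sketch below computes a 206Pb/238U age from a measured isotope ratio using the standard decay equation, checks discordance against a 207Pb/206Pb age, and picks the "best age" by the 0.8–1.0 Ga convention described earlier. This is a minimal illustration: the example ratio, the assumed 207Pb/206Pb age, the 10% discordance cut-off and the 900 Ma crossover are all hypothetical demonstration values, not figures from any particular study.

```python
import math

# Decay constants (per year), the Jaffey et al. (1971) values in common use
LAMBDA_238 = 1.55125e-10  # 238U -> 206Pb
LAMBDA_235 = 9.8485e-10   # 235U -> 207Pb

def age_206_238(ratio_206_238):
    """Age (Ma) from a measured radiogenic 206Pb/238U ratio: t = ln(1 + R)/lambda."""
    return math.log(1.0 + ratio_206_238) / LAMBDA_238 / 1e6

def age_207_235(ratio_207_235):
    """Age (Ma) from a measured radiogenic 207Pb/235U ratio."""
    return math.log(1.0 + ratio_207_235) / LAMBDA_235 / 1e6

def discordance_percent(t_206_238_ma, t_207_206_ma):
    """Discordance relative to the 207Pb/206Pb age, in percent."""
    return (1.0 - t_206_238_ma / t_207_206_ma) * 100.0

def best_age(t_206_238_ma, t_207_206_ma, crossover_ma=900.0):
    """Adopt 206Pb/238U for young grains, 207Pb/206Pb for old grains.
    The 900 Ma crossover is an assumed value inside the 0.8-1.0 Ga range."""
    return t_207_206_ma if t_207_206_ma > crossover_ma else t_206_238_ma

# Hypothetical grain: a measured ratio plus an independently derived
# 207Pb/206Pb age (its inversion requires a numerical solve in practice).
t68 = age_206_238(0.30)   # ~1691 Ma for this assumed ratio
t76 = 1850.0              # assumed 207Pb/206Pb age, Ma
d = discordance_percent(t68, t76)
if abs(d) <= 10.0:        # assumed cut-off from the 10-30% range above
    print(f"accepted, best age = {best_age(t68, t76):.0f} Ma")
else:
    print(f"rejected, discordance = {d:.1f}%")
```

In a real workflow the 207Pb/206Pb age is obtained by numerically inverting the combined decay equations, and analytical uncertainties are propagated through every step before any cluster-based maximum depositional age is computed.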
Physical sciences
Geochronology
Earth science
54561126
https://en.wikipedia.org/wiki/Basic%20lead%20phosphite
Basic lead phosphite
Basic lead phosphite is an inorganic compound with the proposed composition Pb3O(OH)2(HPO3). The compound contains the phosphite anion, which provides the reducing properties associated with the application of this material. It is widely used as a stabilizer for chlorine-containing polymers, especially polyvinylchloride. Other lead phosphites are known, including normal lead phosphite, PbHPO3, although the basic salt is especially effective.
Physical sciences
Phosphoric oxyanions
Chemistry
64469899
https://en.wikipedia.org/wiki/Uroplatinae
Uroplatinae
Uroplatinae is a subfamily of geckos in the family Gekkonidae. At least 28 genera have been found to cluster together in a clade. In the past this was a monotypic subfamily that included only Uroplatus. The following genera are included:
Biology and health sciences
Lizards and other Squamata
Animals
47640103
https://en.wikipedia.org/wiki/HCL%20color%20space
HCL color space
HCL (Hue-Chroma-Luminance) or LCh refers to any of the many cylindrical color space models that are designed to accord with human perception of color through these three parameters. LCh has been adopted by information visualization practitioners to present data without the bias implicit in using varying saturation. They are, in general, designed to have characteristics of both cylindrical translations of the RGB color space, such as HSL and HSV, and the L*a*b* color space. Some conflicting definitions of the terms are: A name for a cylindrical transformation of CIELuv (CIELChuv) employed by Ihaka (2003) and adopted by Zeileis et al. (2009, 2020). This name appears to be the one most commonly used in information visualization. Ihaka, Zeileis, and co-authors also provide software implementations and web pages to promote its use. A name for cylindrical CIELab (CIELChab), employed by chroma.js. "HCL" designed in 2005 by Sarifuddin and Missaoui, which is a transformation of whatever type of RGB color space is in use. HCT, with tone as a synonym for luminance, used within Material Design for its color system, with value ranges of 0–360°, 0–120+ and 0–100%, respectively. Its hue and chroma come from CAM16, whereas tone is actually L* from CIELab. Derivation Color-making attributes HCL concerns the following attributes of color appearance: Hue The "attribute of a visual sensation according to which an area appears to be similar to one of the perceived colors: red, yellow, green, and blue, or to a combination of two of them". Lightness, value The "brightness relative to the brightness of a similarly illuminated white". Luminance (Y or Lv,Ω) The radiance weighted by the effect of each wavelength on a typical human observer, measured in SI units in candela per square meter (cd/m2). Often the term luminance is used for the relative luminance, Y/Yn, where Yn is the luminance of the reference white point. Colorfulness The "attribute of a visual sensation according to which the perceived color of an area appears to be more or less chromatic". The HSL and HSV color spaces are more intuitive translations of the RGB color space, because they provide a single hue number. However, their luminance variation does not match the way humans perceive color. Perceptually uniform color spaces outperform RGB in cases such as high-noise environments. CIE color spaces CIE-based LCh color spaces are transformations of the two chroma values (ab or uv) into polar coordinates. The source color spaces are still very well-regarded for their uniformity, and the transformation does not cause degradation in this respect. See the respective articles for how the underlying coordinates are derived. Sarifuddin 2005 Sarifuddin, noting CIELAB's lack of blue hue consistency, a common complaint among its users, decided to create a new color space by combining some features of the existing models. According to the Stack Overflow user Tatarize, what Sarifuddin proposes as "HCL" is algorithmically similar to HSL. While pointing out advantages in computational efficiency, they argue that Sarifuddin's work does not represent a significant improvement over the CIELAB color space, and show a failure to reproduce the paper's claims. They also propose what they consider to be an improved version of Sarifuddin's algorithm. Other color appearance models In general, any color appearance model with a lightness and two chroma components can also be transformed into an HCL-type color space by converting the chroma components into polar coordinates.
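As a concrete illustration of the polar transformation just described, the sketch below converts CIELAB coordinates to CIELChab: chroma is the Euclidean norm of a* and b*, and hue is their polar angle. This is the standard textbook conversion; the sample L*a*b* values are arbitrary.

```python
import math

def lab_to_lch(L, a, b):
    """Convert CIELAB (L*, a*, b*) to cylindrical CIELCh(ab).

    Lightness is carried over unchanged; chroma is the radial distance
    in the a*-b* plane and hue is the polar angle of that plane.
    """
    C = math.hypot(a, b)                        # chroma = sqrt(a^2 + b^2)
    h = math.degrees(math.atan2(b, a)) % 360.0  # hue angle in [0, 360)
    return L, C, h

def lch_to_lab(L, C, h):
    """Inverse transform back to rectangular CIELAB."""
    hr = math.radians(h)
    return L, C * math.cos(hr), C * math.sin(hr)

# Arbitrary example color
L, C, h = lab_to_lch(52.0, 40.0, -30.0)
print(f"L*={L:.1f}  C*={C:.1f}  h={h:.1f} deg")  # C*=50.0, h=323.1
```

The same two functions applied to u* and v* instead of a* and b* give CIELChuv; this is precisely the sense in which these LCh spaces are "cylindrical".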
Implementations CIELCh has been implemented in a wide range of ways: as programmatic code for generating color swatches in statistics tools, as standalone tools for designing and testing swatches, or as libraries that allow other programs to use the color space. Some implementations include: Statistical tools: d3.js: Data Driven Documents JavaScript library (CIELChab) Swatch designs: The colorspace package for the R and Python programming languages, also with pre-made sets of swatches in hclwizard Fabio Crameri's scientific colour maps, a set of pre-made swatches Library: The aforementioned colorspace library (CIELChuv) ac-colors JavaScript library (CIELChab and CIELChuv) chroma.js JavaScript library (CIELChab) colorio for Python Most other color space libraries handle at least one of CIELUV or CIELAB
Physical sciences
Basics
Physics
65970498
https://en.wikipedia.org/wiki/Marine%20coastal%20ecosystem
Marine coastal ecosystem
A marine coastal ecosystem is a marine ecosystem which occurs where the land meets the ocean. Worldwide there are about 620,000 km of coastline. Coastal habitats extend to the margins of the continental shelves, occupying about 7 percent of the ocean surface area. Marine coastal ecosystems include many very different types of marine habitats, each with their own characteristics and species composition. They are characterized by high levels of biodiversity and productivity. For example, estuaries are areas where freshwater rivers meet the saltwater of the ocean, creating an environment that is home to a wide variety of species, including fish, shellfish, and birds. Salt marshes are coastal wetlands which thrive on low-energy shorelines in temperate and high-latitude areas, populated with salt-tolerant plants such as cordgrass and marsh elder that provide important nursery areas for many species of fish and shellfish. Mangrove forests survive in the intertidal zones of tropical or subtropical coasts, populated by salt-tolerant trees that protect habitat for many marine species, including crabs, shrimp, and fish. Further examples are coral reefs and seagrass meadows, which are both found in warm, shallow coastal waters. Coral reefs thrive in nutrient-poor waters on high-energy shorelines that are agitated by waves. They are underwater ecosystems made up of colonies of tiny animals called coral polyps. These polyps secrete hard calcium carbonate skeletons that build up over time, creating complex and diverse underwater structures. These structures function as some of the most biodiverse ecosystems on the planet, providing habitat and food for a huge range of marine organisms. Seagrass meadows can be adjacent to coral reefs. These meadows are underwater grasslands populated by marine flowering plants that provide nursery habitats and food sources for many fish species, crabs and sea turtles, as well as dugongs. In slightly deeper waters are kelp forests, underwater ecosystems found in cold, nutrient-rich waters, primarily in temperate regions. These are dominated by large brown algae called kelp, a type of seaweed that grows several meters tall, creating dense and complex underwater forests. Kelp forests provide important habitats for many fish species, sea otters and sea urchins. Directly and indirectly, marine coastal ecosystems provide vast arrays of ecosystem services for humans, such as cycling nutrients and elements, and purifying water by filtering pollutants. They sequester carbon as a cushion against climate change. They protect coasts by reducing the impacts of storms, reducing coastal erosion and moderating extreme events. They provide essential nurseries and fishing grounds for commercial fisheries. They provide recreational services and support tourism. These ecosystems are vulnerable to various anthropogenic and natural disturbances, such as pollution, overfishing, and coastal development, which have significant impacts on their ecological functioning and the services they provide. Climate change is impacting coastal ecosystems with sea level rise, ocean acidification, and increased storm frequency and intensity. When marine coastal ecosystems are damaged or destroyed, there can be serious consequences for the marine species that depend on them, as well as for the overall health of the ocean ecosystem. Some conservation efforts are underway to protect and restore marine coastal ecosystems, such as establishing marine protected areas and developing sustainable fishing practices.
Overview The Earth has approximately 620,000 km of coastline. Coastal habitats extend to the margins of the continental shelves, occupying about 7 percent by area of the Earth's oceans. These coastal seas are highly productive systems, providing an array of ecosystem services to humankind, such as processing of nutrient effluents from land and climate regulation. However, coastal ecosystems are threatened by human-induced pressures such as climate change and eutrophication. In the coastal zone, the fluxes and transformations of nutrients and carbon sustaining coastal ecosystem functions and services are strongly regulated by benthic (that is, occurring at the seafloor) biological and chemical processes. Coastal systems also contribute to the regulation of climate and nutrient cycles, by efficiently processing anthropogenic emissions from land before they reach the ocean. The high value of these ecosystem services is obvious considering that a large proportion of the world population lives close to the coast. Currently, coastal seas around the world are undergoing major ecological changes driven by human-induced pressures, such as climate change, anthropogenic nutrient inputs, overfishing and the spread of invasive species. In many cases, the changes alter underlying ecological functions to such an extent that new states are achieved and baselines are shifted. In 2015, the United Nations established 17 Sustainable Development Goals with the aim of achieving certain targets by 2030. Their mission statement for their 14th goal, Life below water, is to "conserve and sustainably use the oceans, seas and marine resources for sustainable development". The United Nations has also declared 2021–2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems is not receiving appropriate attention. Coastal habitats Intertidal zone Intertidal zones are the areas that are visible and exposed to air during low tide and covered up by saltwater during high tide. There are four physical divisions of the intertidal zone, each having its own distinct characteristics and wildlife. These divisions are the spray zone, high intertidal zone, middle intertidal zone, and low intertidal zone. The spray zone is a damp area that is usually reached only by ocean spray and is submerged only during unusually high tides or storms. The high intertidal zone is submerged at high tide but remains dry for long periods between high tides. Due to the large variance of conditions possible in this region, it is inhabited by resilient wildlife that can withstand these changes, such as barnacles, marine snails, mussels and hermit crabs. Tides flow over the middle intertidal zone twice a day, and this zone has a larger variety of wildlife. The low intertidal zone is submerged nearly all the time except during the lowest tides, and life is more abundant here due to the protection that the water gives. Estuaries Estuaries occur where there is a noticeable change in salinity between saltwater and freshwater sources. This is typically found where rivers meet the ocean or sea. The wildlife found within estuaries is unique, as the water in these areas is brackish, a mix of freshwater flowing to the ocean and salty seawater. Other types of estuaries also exist and have similar characteristics to traditional brackish estuaries. The Great Lakes are a prime example. There, river water mixes with lake water and creates freshwater estuaries.
Estuaries are extremely productive ecosystems that many humans and animal species rely on for various activities. This is reflected in the fact that 22 of the 32 largest cities in the world are located on estuaries, which provide many environmental and economic benefits, such as crucial habitat for many species, and serve as economic hubs for many coastal communities. Estuaries also provide essential ecosystem services such as water filtration, habitat protection, erosion control, gas regulation and nutrient cycling, and they even provide education, recreation and tourism opportunities. Lagoons Lagoons are areas that are separated from larger water bodies by natural barriers such as coral reefs or sandbars. There are two types of lagoons: coastal and oceanic/atoll lagoons. A coastal lagoon is, as defined above, simply a body of water that is separated from the ocean by a barrier. An atoll lagoon is a circular coral reef or several coral islands that surround a lagoon. Atoll lagoons are often much deeper than coastal lagoons. Most lagoons are very shallow, meaning that they are greatly affected by changes in precipitation, evaporation and wind. This means that salinity and temperature vary widely in lagoons, and that they can have water that ranges from fresh to hypersaline. Lagoons can be found on coasts all over the world, on every continent except Antarctica, and are extremely diverse habitats, home to a wide array of species including birds, fish, crabs, plankton and more. Lagoons are also important to the economy, as they provide a wide array of ecosystem services in addition to being the home of so many different species. Some of these services include fisheries, nutrient cycling, flood protection, water filtration, and even human tradition. Reefs Coral reefs Coral reefs are one of the most well-known marine ecosystems in the world, with the largest being the Great Barrier Reef. These reefs are composed of large coral colonies of a variety of species living together. The corals form multiple symbiotic relationships with the organisms around them. Coral reefs are heavily affected by global warming and are among the most vulnerable marine ecosystems. Increasingly frequent and intense marine heatwaves put coral reefs at risk of severe decline and loss of their important structures. Bivalve reefs Bivalve reefs provide coastal protection through erosion control and shoreline stabilization, and modify the physical landscape by ecosystem engineering, thereby providing habitat for species by facilitative interactions with other habitats such as tidal flat benthic communities, seagrasses and marshes. Vegetated Vegetated coastal ecosystems occur throughout the world, as illustrated in the diagram on the right. Seagrass beds are found from cold polar waters to the tropics. Mangrove forests are confined to tropical and sub-tropical areas, while tidal marshes are found in all regions, but most commonly in temperate areas. Combined, these ecosystems cover about 50 million hectares and provide a diverse array of ecosystem services such as fishery production, coastline protection, pollution buffering, as well as high rates of carbon sequestration. Rapid loss of vegetated coastal ecosystems through land-use change has occurred for centuries, and has accelerated in recent decades.
Causes of habitat conversion vary globally and include conversion to aquaculture, agriculture, forest over-exploitation, industrial use, upstream dams, dredging, eutrophication of overlying waters, urban development, and conversion to open water due to accelerated sea-level rise and subsidence. Vegetated coastal ecosystems typically reside over organic-rich sediments that may be several meters deep and effectively lock up carbon due to low-oxygen conditions and other factors that inhibit decomposition at depth. These carbon stocks can exceed those of terrestrial ecosystems, including forests, by several times. When coastal habitats are degraded or converted to other land uses, the sediment carbon is destabilised or exposed to oxygen, and subsequent increased microbial activity releases large amounts of greenhouse gases to the atmosphere or water column. The potential economic impacts that come from releasing stored coastal blue carbon to the atmosphere are felt worldwide. Economic impacts of greenhouse gas emissions in general stem from associated increases in droughts, sea level, and the frequency of extreme weather events. Coastal wetlands Coastal wetlands are among the most productive ecosystems on Earth and generate vital services that benefit human societies around the world. Sediment stabilization by wetlands such as salt marshes and mangroves serves to protect coastal communities from storm waves, flooding, and land erosion. Coastal wetlands also reduce pollution from human waste, remove excess nutrients from the water column, trap pollutants, and sequester carbon. Further, near-shore wetlands act as both essential nursery habitats and feeding grounds for game fish, supporting a diverse group of economically important species. Mangrove forests Mangroves are trees or shrubs that grow in low-oxygen soil near coastlines in tropical or subtropical latitudes. They are an extremely productive and complex ecosystem that connects the land and sea. Mangroves consist of species that are not necessarily related to each other and are often grouped for the characteristics they share rather than genetic similarity. Because of their proximity to the coast, they have all developed adaptations such as salt excretion and root aeration to live in salty, oxygen-depleted water. Mangroves can often be recognized by their dense tangle of roots that act to protect the coast by reducing erosion from storm surges, currents, waves, and tides. The mangrove ecosystem is also an important source of food for many species, as well as being excellent at sequestering carbon dioxide from the atmosphere, with global mangrove carbon sequestration estimated at 34 million metric tons per year. Salt marshes Salt marshes are a transition from the ocean to the land, where fresh and saltwater mix. The soil in these marshes is often made up of mud and a layer of organic material called peat. Peat is characterized as waterlogged and root-filled decomposing plant matter that often causes low oxygen levels (hypoxia). These hypoxic conditions cause the growth of bacteria that give salt marshes the sulfurous smell they are often known for. Salt marshes exist around the world and are needed for healthy ecosystems and a healthy economy. They are extremely productive ecosystems and they provide essential services for more than 75 percent of fishery species and protect shorelines from erosion and flooding. Salt marshes can be generally divided into the high marsh, low marsh, and the upland border.
The low marsh is closer to the ocean and is flooded at nearly every tide except low tide. The high marsh is located between the low marsh and the upland border and is usually flooded only when higher-than-usual tides are present. The upland border is the freshwater edge of the marsh and is usually located at elevations slightly higher than the high marsh. This region is usually flooded only under extreme weather conditions and experiences much less waterlogging and salt stress than other areas of the marsh. Seagrass meadows Seagrasses form dense underwater meadows which are among the most productive ecosystems in the world. They provide habitats and food for a diversity of marine life comparable to coral reefs. This includes invertebrates like shrimp and crabs, fish like cod and flatfish, marine mammals and birds. They provide refuges for endangered species such as seahorses, turtles, and dugongs. They function as nursery habitats for shrimps, scallops and many commercial fish species. Seagrass meadows provide coastal storm protection by the way their leaves absorb energy from waves as they hit the coast. They keep coastal waters healthy by absorbing bacteria and nutrients, and slow the speed of climate change by sequestering carbon dioxide into the sediment of the ocean floor. Seagrasses evolved from marine algae which colonized land and became land plants, and then returned to the ocean about 100 million years ago. However, today seagrass meadows are being damaged by human activities such as pollution from land runoff, fishing boats that drag dredges or trawls across the meadows uprooting the grass, and overfishing which unbalances the ecosystem. Seagrass meadows are currently being destroyed at a rate of about two football fields every hour. Kelp forests Kelp forests occur worldwide throughout temperate and polar coastal oceans. In 2007, kelp forests were also discovered in tropical waters near Ecuador. Physically formed by brown macroalgae, kelp forests provide a unique habitat for marine organisms and are a source for understanding many ecological processes. Over the last century, they have been the focus of extensive research, particularly in trophic ecology, and continue to provoke important ideas that are relevant beyond this unique ecosystem. For example, kelp forests can influence coastal oceanographic patterns and provide many ecosystem services. However, the influence of humans has often contributed to kelp forest degradation. Of particular concern are the effects of overfishing nearshore ecosystems, which can release herbivores from their normal population regulation and result in the overgrazing of kelp and other algae. This can rapidly result in transitions to barren landscapes where relatively few species persist. Due to the combined effects of overfishing and climate change, kelp forests have already all but disappeared in some especially vulnerable places, such as Tasmania's east coast and the coast of Northern California. The implementation of marine protected areas is one management strategy useful for addressing such issues, since it may limit the impacts of fishing and buffer the ecosystem from additive effects of other environmental stressors. Coastal ecology Coastal food webs Coastal waters include the waters in estuaries and over continental shelves. They occupy about 8 percent of the total ocean area and account for about half of all the ocean productivity. The key nutrients determining eutrophication are nitrogen in coastal waters and phosphorus in lakes.
Both are found in high concentrations in guano (seabird feces), which acts as a fertilizer for the surrounding ocean or an adjacent lake. Uric acid is the dominant nitrogen compound, and during its mineralization different nitrogen forms are produced. Ecosystems, even those with seemingly distinct borders, rarely function independently of other adjacent systems. Ecologists are increasingly recognizing the important effects that cross-ecosystem transport of energy and nutrients have on plant and animal populations and communities. A well-known example of this is how seabirds concentrate marine-derived nutrients on breeding islands in the form of feces (guano), which contains ~15–20% nitrogen (N) as well as 10% phosphorus. These nutrients dramatically alter terrestrial ecosystem functioning and dynamics and can support increased primary and secondary productivity. However, although many studies have demonstrated nitrogen enrichment of terrestrial components due to guano deposition across various taxonomic groups, only a few have studied the feedback to marine ecosystems, and most of these studies were restricted to temperate regions and high-nutrient waters. In the tropics, coral reefs can be found adjacent to islands with large populations of breeding seabirds, and could be potentially affected by local nutrient enrichment due to the transport of seabird-derived nutrients in surrounding waters. Studies on the influence of guano on tropical marine ecosystems suggest nitrogen from guano enriches seawater and reef primary producers. Reef-building corals have essential nitrogen needs and, thriving in nutrient-poor tropical waters where nitrogen is a major limiting nutrient for primary productivity, they have developed specific adaptations for conserving this element. Their establishment and maintenance are partly due to their symbiosis with unicellular dinoflagellates, Symbiodinium spp. (zooxanthellae), that can take up and retain dissolved inorganic nitrogen (ammonium and nitrate) from the surrounding waters. These zooxanthellae can also recycle the animal wastes and subsequently transfer them back to the coral host as amino acids, ammonium or urea. Corals are also able to ingest nitrogen-rich sediment particles and plankton. Coastal eutrophication and excess nutrient supply can have strong impacts on corals, leading to a decrease in skeletal growth. Coastal predators Food web theory predicts that current global declines in marine predators could generate unwanted consequences for many marine ecosystems. In coastal plant communities, such as kelp forests, seagrass meadows, mangrove forests and salt marshes, several studies have documented the far-reaching effects of changing predator populations. Across coastal ecosystems, the loss of marine predators appears to negatively affect coastal plant communities and the ecosystem services they provide. The green world hypothesis predicts that loss of predator control on herbivores could result in runaway consumption that would eventually denude a landscape or seascape of vegetation. Since the inception of the green world hypothesis, ecologists have tried to understand the prevalence of indirect and alternating effects of predators on lower trophic levels (trophic cascades), and their overall impact on ecosystems. Multiple lines of evidence now suggest that top predators are key to the persistence of some ecosystems. With an estimated habitat loss greater than 50 percent, coastal plant communities are among the world’s most endangered ecosystems.
As bleak as this number is, the predators that patrol coastal systems have fared far worse. Several predatory taxa, including species of marine mammals, elasmobranchs, and seabirds, have declined by 90 to 100 percent compared to historical populations. Predator declines pre-date habitat declines, suggesting alterations to predator populations may be a major driver of change for coastal systems. There is little doubt that the collapse of marine predator populations results from overharvesting by humans. Localized declines and extinctions of coastal predators by humans began over 40,000 years ago with subsistence harvesting. However, for most large-bodied marine predators (toothed whales, large pelagic fish, seabirds, pinnipeds, and otters), the beginning of their sharp global declines occurred over the last century, coinciding with the expansion of coastal human populations and advances in industrial fishing. Following global declines in marine predators, evidence of trophic cascades in coastal ecosystems started to emerge, with the disturbing realisation that they affected more than just populations of lower trophic levels. Understanding the importance of predators in coastal plant communities has been bolstered by their documented ability to influence ecosystem services. Multiple examples have shown that changes to the strength or direction of predator effects on lower trophic levels can influence coastal erosion, carbon sequestration, and ecosystem resilience. The idea that the extirpation of predators can have far-reaching effects on the persistence of coastal plants and their ecosystem services has become a major motivation for their conservation in coastal systems. Seascape ecology Seascape ecology is the marine and coastal version of landscape ecology. It is currently emerging as an interdisciplinary and spatially explicit ecological science with relevance to marine management, biodiversity conservation, and restoration. Seascapes are complex ocean spaces, shaped by dynamic and interconnected patterns and processes operating across a range of spatial and temporal scales. Rapid advances in geospatial technologies and the proliferation of sensors, both above and below the ocean surface, have revealed intricate and scientifically intriguing ecological patterns and processes, some of which are the result of human activities. Despite progress in collecting, mapping, and sharing ocean data, the gap between technological advances and the ability to generate ecological insights for marine management and conservation practice remains substantial. For instance, fundamental gaps exist in the understanding of multidimensional spatial structure in the sea, and the implications for planetary health and human wellbeing. Deeper understanding of the multi-scale linkages between ecological structure, function, and change will better support the design of whole-system strategies for biodiversity preservation and reduce uncertainty around the consequences of human activity. For example, in the design and evaluation of marine protected areas (MPAs) and habitat restoration, it is important to understand the influence of spatial context, configuration, and connectivity, and to consider effects of scale. Interactions between ecosystems The diagram on the right shows the principal interactions between mangroves, seagrass, and coral reefs.
Coral reefs, seagrasses, and mangroves buffer habitats further inland from storms and wave damage, as well as participating in a tri-system exchange of mobile fish and invertebrates. Mangroves and seagrasses are critical in regulating sediment, freshwater, and nutrient flows to coral reefs. The diagram immediately below shows locations where mangroves, coral reefs, and seagrass beds exist within one km of each other. Buffered intersection between the three systems provides relative co-occurrence rates on a global scale. Regions where systems strongly intersect include Central America (Belize), the Caribbean, the Red Sea, the Coral Triangle (particularly Malaysia), Madagascar, and the Great Barrier Reef. The diagram at the right graphically illustrates the ecosystem service synergies between mangroves, seagrasses, and coral reefs. The ecosystem services provided by intact reefs, seagrasses, and mangroves are both highly valuable and mutually enhancing. Coastal protection (storm/wave attenuation) maintains the structure of adjacent ecosystems, and associated ecosystem services, in an offshore-to-onshore direction. Fisheries are characterized by migratory species, and therefore protecting fisheries in one ecosystem increases fish biomass in others. Tourism benefits from coastal protection and healthy fisheries from multiple ecosystems. In the diagram, within-ecosystem connections are not drawn, in order to better emphasise synergies between systems. Network ecology To compound things, removal of biomass from the ocean occurs simultaneously with multiple other stressors associated with climate change that compromise the capacity of these socio-ecological systems to respond to perturbations. Besides sea surface temperature, climate change also affects many other physical–chemical characteristics of marine coastal waters (stratification, acidification, ventilation) as well as the wind regimes that control surface water productivity along the productive coastal upwelling ecosystems. Changes in the productivity of the oceans are reflected in changes in plankton biomass. Plankton contributes approximately half of the global primary production, supports marine food webs, influences biogeochemical processes in the ocean, and strongly affects commercial fisheries. Indeed, an overall decrease in marine plankton productivity is expected over global scales. Long-term increases and decreases in plankton productivity have already occurred over the past two decades along extensive regions of the Humboldt upwelling ecosystem off Chile, and are expected to propagate up the pelagic and benthic food webs. Network ecology has advanced understanding of ecosystems by providing a powerful framework to analyse biological communities. Previous studies used this framework to assess food web robustness against species extinctions, defined as the fraction of initial species that remain present in the ecosystem after a primary extinction. These studies showed the importance for food web persistence of highly connected species (independent of trophic position), basal species, and highly connected species that, at the same time, trophically support other highly connected species. Most of these studies used a static approach, which stems from network theory and analyzes the impacts of structural changes on food webs represented by nodes (species) and links (interactions) that connect nodes, but ignores interaction strengths and population dynamics of interacting species (a minimal sketch of this static analysis is shown below).
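The following sketch illustrates the static, topology-only robustness analysis described above: species are nodes, feeding links are edges, and robustness is measured as the fraction of species still present after a primary extinction and its cascading secondary extinctions. The tiny five-species web and the choice of which node to remove are invented purely for illustration.

```python
# Minimal static food-web robustness sketch (hypothetical five-species web).
# Each species lists its prey; species with no prey are basal (e.g. primary
# producers) and are assumed to persist on their own.
web = {
    "kelp": [],                      # basal
    "phytoplankton": [],             # basal
    "urchin": ["kelp"],
    "small_fish": ["phytoplankton"],
    "otter": ["urchin", "small_fish"],
}

def surviving(web, removed):
    """Species that remain after `removed` go extinct: a consumer survives
    only if at least one of its prey survives (computed iteratively)."""
    alive = {s for s in web if s not in removed}
    changed = True
    while changed:
        changed = False
        for s in list(alive):
            prey = web[s]
            if prey and not any(p in alive for p in prey):
                alive.discard(s)     # secondary extinction
                changed = True
    return alive

def robustness(web, removed):
    """Fraction of the initial species pool still present afterwards."""
    return len(surviving(web, removed)) / len(web)

# A primary extinction of a basal resource cascades upward; losing only
# the top predator does not (in this purely structural view).
print(robustness(web, {"kelp"}))   # urchin also lost -> 0.6
print(robustness(web, {"otter"}))  # top predator only -> 0.8
```

A dynamic analysis, by contrast, would attach biomasses and interaction strengths to these links and integrate the population dynamics through time, as discussed next.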
Other studies used a dynamic approach, which considers not only the structure and intensity of interactions in a food web, but also the changes in species biomasses through time and the indirect effects that these changes have on other species. Coastal biogeochemistry Globally, eutrophication is one of the major environmental problems in coastal ecosystems. Over the last century, the annual riverine inputs of nitrogen and phosphorus to the oceans have increased from 19 to 37 megatonnes of nitrogen and from 2 to 4 megatonnes of phosphorus. Regionally, these increases were even more substantial, as observed in the United States, Europe and China. In the Baltic Sea, nitrogen and phosphorus loads increased by roughly a factor of three and six, respectively. The riverine nitrogen flux to coastal waters of China has increased by an order of magnitude within thirty years, while phosphorus export tripled between 1970 and 2000. Efforts to mitigate eutrophication through nutrient load reductions are hampered by the effects of climate change. Changes in precipitation increase the runoff of N, P and carbon (C) from land, which, together with warming and increased dissolution, alter the coupled marine nutrient and carbon cycles. In contrast to the open ocean, where biogeochemical cycling is largely dominated by pelagic processes driven primarily by ocean circulation, in the coastal zone pelagic and benthic processes interact strongly and are driven by a complex and dynamic physical environment. Eutrophication in coastal areas leads to shifts toward rapidly growing opportunistic algae, and generally to a decline in benthic macrovegetation because of decreased light penetration, substrate change and more reducing sediments. Increased production and warming waters have caused expanding hypoxia at the seafloor, with a consequent loss of benthic fauna. Hypoxic systems tend to lose many long-lived higher organisms, and biogeochemical cycles typically become dominated by benthic bacterial processes and rapid pelagic turnover. However, if hypoxia does not occur, benthic fauna tends to increase in biomass with eutrophication. Changes in benthic biota have far-reaching impacts on biogeochemical cycles in the coastal zone and beyond. In the illuminated zone, benthic microphytes and macrophytes mediate biogeochemical fluxes through primary production, nutrient storage and sediment stabilization, and act as a habitat and food source for a variety of animals, as shown in the diagram on the left above. Benthic animals contribute to biogeochemical transformations and fluxes between water and sediments both directly, through their metabolism, and indirectly, by physically reworking the sediments and their porewaters and stimulating bacterial processes. Grazing on pelagic organic matter and biodeposition of feces and pseudofeces by suspension-feeding fauna increase organic matter sedimentation rates. In addition, nutrients and carbon are retained in biomass and transformed from organic to inorganic forms through metabolic processes. Bioturbation, including sediment reworking and burrow ventilation activities (bioirrigation), redistributes particles and solutes within the sediment and enhances sediment-water fluxes of solutes. Bioturbation can also enhance resuspension of particles, a phenomenon termed "bioresuspension". Together, all these processes affect physical and chemical conditions at the sediment-water interface, and strongly influence organic matter degradation.
When up-scaled to the ecosystem level, such modified conditions can significantly alter the functioning of coastal ecosystems and, ultimately, the role of the coastal zone in filtering and transforming nutrients and carbon. Artisanal fisheries Artisanal fisheries use simple fishing gears and small vessels. Their activities tend to be confined to coastal areas. In general, top-down and bottom-up forces determine ecosystem functioning and dynamics. Fisheries as a top-down force can shorten and destabilise food webs, while effects driven by climate change can alter the bottom-up forces of primary productivity. Direct human impacts and the full suite of drivers of global change are the main cause of species extinctions in Anthropocene ecosystems, with detrimental consequences for ecosystem functioning and its services to human societies. The world fisheries crisis is among those consequences; it cuts across fishing strategies, oceanic regions and species, and includes countries that have little regulation as well as those that have implemented rights-based co-management strategies to reduce overharvesting. Chile has been one of the countries implementing Territorial Use Rights in Fisheries (TURFs) over an unprecedented geographic scale to manage the diverse coastal benthic resources using a co-management strategy. These TURFs are used by artisanal fisheries. Over 60 coastal benthic species are actively harvested by these artisanal fisheries, with species extracted from intertidal and shallow subtidal habitats. The Chilean TURF system brought significant improvements in the sustainability of this complex socio-ecological system, helping to rebuild benthic fish stocks, improving fishers’ perception of sustainability and increasing compliance, as well as showing positive ancillary effects on the conservation of biodiversity. However, the situation of most artisanal fisheries is still far from sustainable, and many fish stocks and coastal ecosystems show signs of overexploitation and ecosystem degradation, a consequence of the low levels of cooperation and low enforcement of TURF regulations, which leads to high levels of free-riding and illegal fishing. It is imperative to improve understanding of the effects of these multi-species artisanal fisheries, which simultaneously harvest species at all trophic levels, from kelp primary producers to top carnivores. Remote sensing Coastal zones are among the most populated areas on the planet. As the population continues to increase, economic development must expand to support human welfare. However, this development may damage the ability of the coastal environment to continue supporting human welfare for current and future generations. The management of complex coastal and marine social-ecological systems requires tools that provide frameworks with the capability of responding to current and emergent issues. Remote data collection technologies include satellite-based remote sensing, aerial remote sensing, unmanned aerial vehicles, unmanned surface vehicles, unmanned underwater vehicles, and static sensors. Frameworks have been developed that attempt to address and integrate these complex issues, such as the Millennium Ecosystem Assessment framework, which links drivers, ecosystem services, and human welfare. However, obtaining the environmental data that is necessary to use such frameworks is difficult, especially in countries where access to reliable data and their dissemination are limited, non-existent or even thwarted.
Traditional techniques of point sampling and observation in the environment do deliver high information content, but they are expensive and often do not provide adequate spatial and temporal coverage, while remote sensing can provide cost-effective solutions, as well as data for locations where there is no or only limited information. Coastal observing systems are typically nationally funded and built around national priorities. As a result, there are presently significant differences between countries in terms of sustainability, observing capacity and technologies, as well as methods and research priorities. Ocean observing systems in coastal areas need to move toward integrated, multidisciplinary and multiscale systems, where heterogeneity can be exploited to deliver fit-for-purpose answers. Essential elements of such distributed observation systems are the use of machine-to-machine communication, data fusion, and processing that applies recent technological developments from the Internet of Things (IoT) toward a common cyberinfrastructure. It has been argued that the standardisation that IoT brings to wireless sensing will revolutionise areas like this. Coastal areas are the most dynamic and productive parts of the oceans, which makes them a significant source of human resources and services. Coastal waters are in immediate contact with human populations and exposed to anthropogenic disturbances, placing these resources and services under threat. These concerns explain why, in several coastal regions, a rapidly increasing number of observing systems have been implemented in the last decade. Expansion of coherent and sustained coastal observations has been fragmented and driven by national and regional policies, and is often undertaken through short-term research projects. This results in significant differences between countries both in terms of sustainability and in observing technologies, methods and research priorities. Unlike the open ocean, where challenges are rather well-defined and stakeholders are fewer and well-identified, coastal processes are complex, acting on several spatial and temporal scales, with numerous and diversified users and stakeholders, often with conflicting interests. To adapt to such complexity, coastal ocean observing systems must be integrated, multidisciplinary and multiscale systems of systems. Regime shifts Marine ecosystems worldwide are affected by increasing natural and anthropogenic pressures and consequently undergo significant changes at unprecedented rates; some of these changes can be interpreted as regime shifts. Affected by these changes, ecosystems can reorganise and still maintain the same function, structure, and identity. However, under some circumstances, the ecosystem may undergo changes that modify the system’s structure and function, and this process can be described as a shift to a new regime. Usually, a regime shift is triggered by large-scale climate-induced variations, intense fishing exploitation or both. Criteria used to define regime shifts vary, and the changes that have to occur in order to consider that a system has undergone a regime shift are not well-defined. Normally, regime shifts are defined as high-amplitude, low-frequency and often abrupt changes in species abundance and community composition that are observed at multiple trophic levels (TLs).
These changes are expected to occur on a large spatial scale and take place concurrently with physical changes in the climate system. Regime shifts have been described in several marine ecosystems, including the Northern Benguela, the North Sea, and the Baltic Sea. In large upwelling ecosystems, it is common to observe decadal fluctuations in species abundance and their replacements. These fluctuations might be irreversible and might be an indicator of a new regime, as was the case in the Northern Benguela ecosystem. However, changes in upwelling systems might instead be interpreted as fluctuations within the limits of natural variability for an ecosystem, and not as an indicator of a regime shift. The Portuguese continental shelf ecosystem (PCSE) constitutes the northernmost part of the Canary Current Upwelling System and is characterised by seasonal upwelling that occurs during the spring and summer as a result of steady northerly winds. It has recently experienced changes in the abundance of coastal pelagic species such as sardine, chub mackerel, horse mackerel, blue jack mackerel and anchovy. Moreover, in recent decades, an increase in higher trophic level species has been documented. The causes underlying changes in the pelagic community are not clear, but it has been suggested that they result from a complex interplay between environmental variability, species interactions and fishing pressure. There is evidence that changes in the intensity of the Iberian coastal upwelling (resulting from the strengthening or weakening of northerly winds) have occurred in recent decades. However, the character of these changes is contradictory: some authors observed intensification of upwelling-favourable winds, while others documented their weakening. A 2019 review of upwelling rate and intensity along the Portuguese coast documented a successive weakening of the upwelling since 1950 that lasted until the mid/late 1970s in the north-west and south-west, and until 1994 on the south coast. An increase in the upwelling index over the period 1985–2009 was documented in all studied regions, with additional upwelling intensification observed in the south. A continuous increase in water temperature, ranging from 0.1 to 0.2 °C per decade, has also been documented. Threats and decline Many marine fauna utilise coastal habitats as critical nursery areas, for shelter and feeding, yet these habitats are increasingly at risk from agriculture, aquaculture, industry and urban expansion. Indeed, these systems are subject to what may be called "a triple whammy" of increasing industrialisation and urbanisation, an increased loss of biological and physical resources (fish, water, energy, space), and a decreased resilience to the consequences of a warming climate and sea level rise. This has given rise to the complete loss, modification or disconnection of natural coastal ecosystems globally. For example, almost 10% of the entire Great Barrier Reef coastline in Australia (2,300 km) has been replaced with urban infrastructure (e.g., rock seawalls, jetties, marinas), causing massive loss and fragmentation of sensitive coastal ecosystems. Global loss of seagrass reached around 7% of seagrass area per year by the end of the twentieth century. A global analysis of tidal wetlands (mangroves, tidal flats, and tidal marshes) published in 2022 estimated substantial global losses from 1999 to 2019; however, this study also estimated that these losses were largely offset by the establishment of new tidal wetlands that were not present in 1999.
Approximately three-quarters of the net decrease between 1999 and 2019 occurred in Asia (74.1%), with 68.6% concentrated in three countries: Indonesia (36%), China (20.6%), and Myanmar (12%). Of these global tidal wetland losses and gains, 39% of losses and 14% of gains were attributed to direct human activities. Approximately 40% of global mangrove area has been lost since the 1950s, with more than 9,736 km2 of the world's mangroves degraded in the 20-year period between 1996 and 2016. Saltmarshes are drained when coastal land is claimed for agriculture, and deforestation is an increasing threat to shoreline vegetation (such as mangroves) when coastal land is appropriated for urban and industrial development, both of which may result in the degradation of blue carbon storages and increasing greenhouse gas emissions. These accumulating pressures and impacts on coastal ecosystems are neither isolated nor independent; rather, they are synergistic, with feedbacks and interactions that cause individual effects to be greater than their sums. In the year before the ecosystem restoration Decade commences, there is a critical knowledge deficit inhibiting an appreciation of the complexity of coastal ecosystems, which hampers the development of responses to mitigate continuing impacts, not to mention uncertainty about projected losses of coastal systems under some of the worst-case future climate change scenarios. Restoration The United Nations has declared 2021–2030 the UN Decade on Ecosystem Restoration. This call to action has the purpose of recognising the need to massively accelerate global restoration of degraded ecosystems, to fight the climate heating crisis, enhance food security, provide clean water and protect biodiversity on the planet. The scale of restoration will be key. For example, the Bonn Challenge has the goal of restoring 350 million hectares, an area about the size of India, of degraded terrestrial ecosystems by 2030. However, international support for restoration of blue coastal ecosystems, which provide an impressive array of benefits to people, has lagged. The diagram on the right shows the current state of modified and impacted coastal ecosystems and the expected state following the decade of restoration. Also shown is the uncertainty in the success of past restoration efforts, the current state of altered systems, climate variability, and restoration actions that are available now or on the horizon. This could mean that delivering the Decade on Ecosystem Restoration for coastal systems needs to be viewed as a means of getting things going, with benefits that might take longer than a decade to materialise. Only the Global Mangrove Alliance comes close to the Bonn Challenge, with the aim of increasing the global area of mangroves by 20% by 2030. However, mangrove scientists have reservations about this target, voicing concerns that it is unrealistic and may prompt inappropriate practices in attempting to reach it. Conservation and connectivity There has recently been a perceptual shift away from habitat representation as the sole or primary focus of conservation prioritisation, towards consideration of ecological processes that shape the distribution and abundance of biodiversity features. In marine ecosystems, connectivity processes are paramount, and designing systems of marine protected areas that maintain connectivity between habitat patches has long been considered an objective of conservation planning.
Two forms of connectivity are critical to structuring coral reef fish populations: dispersal of larvae in the pelagic environment, and post-settlement migration by individuals across the seascape. While a growing literature has described approaches for considering larval connectivity in conservation prioritisation, relatively little attention has been directed towards developing and applying methods for considering post-settlement connectivity. Seascape connectivity (connectedness among different habitats in a seascape, as opposed to connectivity among patches of the same habitat type) is essential for species that utilise more than one habitat, either during diurnal movements or at different stages in their life history. Mangroves, seagrass beds, and lagoon reefs provide nursery areas for many commercially and ecologically important fish species that subsequently make ontogenetic shifts to adult populations on coral reefs. These back-reef habitats are often overlooked for conservation or management in favour of coral reefs that support greater adult biomass, yet they can be equally at risk, if not more so, from habitat degradation and loss. Even where juveniles are not targeted by fishers, they can be vulnerable to habitat degradation, for example from sedimentation caused by poor land-use practices. There is clear empirical evidence that proximity to nursery habitats can enhance the effectiveness (i.e. increase the abundance, density, or biomass of fish species) of marine protected areas on coral reefs. For example, at study sites across the western Pacific, the abundance of harvested fish species was significantly greater on protected reefs close to mangroves, but not on protected reefs isolated from mangroves. The functional role of herbivorous fish species that perform ontogenetic migrations may also enhance the resilience of coral reefs close to mangroves. Despite this evidence, and widespread calls to account for connectivity among habitats in the design of spatial management, there remain few examples where seascape connectivity is explicitly considered in spatial conservation prioritisation (the analytical process of identifying priority areas for conservation or management actions).
Physical sciences
Oceanography
Earth science
77591639
https://en.wikipedia.org/wiki/1
1
1 (one, unit, unity) is a number, numeral, and glyph. It is the first and smallest positive integer of the infinite sequence of natural numbers. This fundamental property has led to its unique uses in other fields, ranging from science to sports, where it commonly denotes the first, leading, or top thing in a group. 1 is the unit of counting or measurement, a determiner for singular nouns, and a gender-neutral pronoun. Historically, the representation of 1 evolved from ancient Sumerian and Babylonian symbols to the modern Arabic numeral. In mathematics, 1 is the multiplicative identity, meaning that any number multiplied by 1 equals the same number. 1 is by convention not considered a prime number. In digital technology, 1 represents the "on" state in binary code, the foundation of computing. Philosophically, 1 symbolizes the ultimate reality or source of existence in various traditions.

In mathematics

The number 1 is the first natural number after 0. Each natural number, including 1, is constructed by succession, that is, by adding 1 to the previous natural number. The number 1 is the multiplicative identity of the integers, real numbers, and complex numbers, that is, any number n multiplied by 1 remains unchanged (1 × n = n × 1 = n). As a result, the square (1² = 1), square root (√1 = 1), and any other power of 1 is always equal to 1 itself. 1 is its own factorial (1! = 1), and 0! is also 1. These are a special case of the empty product. Although 1 meets the naïve definition of a prime number, being evenly divisible only by 1 and itself (also 1), by modern convention it is regarded as neither a prime nor a composite number.

Different mathematical constructions of the natural numbers represent 1 in various ways. In Giuseppe Peano's original formulation of the Peano axioms, a set of postulates to define the natural numbers in a precise and logical way, 1 was treated as the starting point of the sequence of natural numbers. Peano later revised his axioms to begin the sequence with 0. In the von Neumann cardinal assignment of natural numbers, where each number is defined as a set that contains all numbers before it, 1 is represented as the singleton {0}, a set containing only the element 0. The unary numeral system, as used in tallying, is an example of a "base-1" number system, since only one mark – the tally itself – is needed. While this is the simplest way to represent the natural numbers, base-1 is rarely used as a practical base for counting due to its difficult readability.

In many mathematical and engineering problems, numeric values are typically normalized to fall within the unit interval [0,1], where 1 represents the maximum possible value. For example, by definition 1 is the probability of an event that is absolutely or almost certain to occur. Likewise, vectors are often normalized into unit vectors (i.e., vectors of magnitude one), because these often have more desirable properties. Functions are often normalized by the condition that they have integral one, maximum value one, or square integral one, depending on the application.

1 is the value of Legendre's constant, introduced in 1808 by Adrien-Marie Legendre to express the asymptotic behavior of the prime-counting function. The Weil conjecture on Tamagawa numbers states that the Tamagawa number τ(G), a geometrical measure of a connected linear algebraic group over a global number field, is 1 for all simply connected groups (those that are path-connected with no 'holes'). 1 is the most common leading digit in many sets of real-world numerical data. This is a consequence of Benford's law, which states that the probability of a specific leading digit d is log₁₀(1 + 1/d). The tendency for real-world numbers to grow exponentially or logarithmically biases the distribution towards smaller leading digits, with 1 occurring approximately 30% of the time.
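As a quick illustration of the law just stated, the following minimal Python sketch (our own worked example, not drawn from the article's sources) tabulates the Benford probabilities and confirms that the digit 1 leads about 30% of the time:

    import math

    # Benford's law: P(d) = log10(1 + 1/d) for leading digits d = 1..9.
    probs = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

    print(round(probs[1], 3))             # 0.301, i.e. ~30% for the digit 1
    print(round(sum(probs.values()), 6))  # 1.0, the nine probabilities sum to one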
As a word

One originates from the Old English word an, derived from a Germanic root going back to the Proto-Indo-European root *oi-no- (meaning "one, unique"). Linguistically, one is a cardinal number used for counting and expressing the number of items in a collection of things. One is most commonly a determiner used with singular countable nouns, as in one day at a time. The determiner has two senses: numerical one (I have one apple) and singulative one (one day I'll do it). One is also a gender-neutral pronoun used to refer to an unspecified person or to people in general, as in one should take care of oneself. Words that derive their meaning from one include alone, which signifies all one in the sense of being by oneself, none meaning not one, once denoting one time, and atone meaning to become at one with someone. Combining alone with only (implying one-like) leads to lonely, conveying a sense of solitude. Other common numeral prefixes for the number 1 include uni- (e.g., unicycle, universe, unicorn), sol- (e.g., solo dance), derived from Latin, or mono- (e.g., monorail, monogamy, monopoly), derived from Greek.

Symbols and representation

History

Among the earliest known records of a numeral system is the Sumerian decimal-sexagesimal system on clay tablets dating from the first half of the third millennium BCE. The Archaic Sumerian numerals for 1 and 60 both consisted of horizontal semi-circular symbols. Later, the older Sumerian curviform numerals were replaced with cuneiform symbols, with 1 and 60 both represented by the same symbol. The Sumerian cuneiform system is a direct ancestor to the Eblaite and Assyro-Babylonian Semitic cuneiform decimal systems. Surviving Babylonian documents date mostly from the Old Babylonian and Seleucid eras. The Babylonian cuneiform script notation for numbers used the same symbol for 1 and 60 as in the Sumerian system.

The most commonly used glyph in the modern Western world to represent the number 1 is the Arabic numeral, a vertical line, often with a serif at the top and sometimes a short horizontal line at the bottom. It can be traced back to the Brahmic script of ancient India, represented by Ashoka as a simple vertical line in his Edicts of Ashoka in c. 250 BCE. This script's numeral shapes were transmitted to Europe via the Maghreb and Al-Andalus during the Middle Ages. The Arabic numeral, and other glyphs used to represent the number one (e.g., the Roman numeral I, the Chinese numeral 一), are logograms. These symbols directly represent the concept of 'one' without breaking it down into phonetic components.

Modern typefaces

In modern typefaces, the shape of the character for the digit 1 is typically typeset as a lining figure with an ascender, such that the digit is the same height and width as a capital letter. However, in typefaces with text figures (also known as old-style numerals or non-lining figures), the glyph usually is of x-height and designed to follow the rhythm of the lowercase letters. In old-style typefaces (e.g., Hoefler Text), the glyph for the numeral 1 resembles a small-caps version of I, featuring parallel serifs at the top and bottom, while the capital I retains a full-height form. This is a relic of the Roman numeral system, where I represents 1. Many older typewriters do not have a dedicated key for the numeral 1, requiring the use of the lowercase letter l or the uppercase I as a substitute. The lowercase "j" can be considered a swash variant of a lowercase Roman numeral "i", often employed for the final i of a "lower-case" Roman numeral. It is also possible to find historic examples of the use of j or J as a substitute for the Arabic numeral 1. In German, the serif at the top may be extended into a long upstroke as long as the vertical line. This variation can lead to confusion with the glyph used for seven in other countries, and so, to provide a visual distinction between the two, the digit 7 may be written with a horizontal stroke through the vertical line.

In other fields

In digital technology, data is represented by binary code, i.e., a base-2 numeral system with numbers represented by a sequence of 1s and 0s. Digitised data is represented in physical devices, such as computers, as pulses of electricity through switching devices such as transistors or logic gates, where "1" represents the value for "on". As such, the numerical value of true is equal to 1 in many programming languages. In lambda calculus and computability theory, natural numbers are represented by Church encoding as functions, where the Church numeral for 1 is a function that applies its argument exactly once: 1 ≡ λf.λx.f x.
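The Church-numeral idea translates directly into any language with first-class functions. The following minimal Python sketch is our own illustration (the helper to_int is a hypothetical name, not from the article); it shows 1 as "apply f once" and converts it back to an ordinary integer:

    # Church numerals: a number n is the function that applies f to x exactly n times.
    zero = lambda f: lambda x: x         # f applied zero times
    one = lambda f: lambda x: f(x)       # f applied exactly once: the numeral 1
    two = lambda f: lambda x: f(f(x))    # f applied twice

    def to_int(n):
        """Recover an ordinary integer by counting applications of a successor."""
        return n(lambda k: k + 1)(0)

    print(to_int(one))  # 1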
In physics, selected physical constants are set to 1 in natural unit systems in order to simplify the form of equations; for example, in Planck units the speed of light equals 1. Dimensionless quantities are also known as 'quantities of dimension one'. In quantum mechanics, the normalization condition for wavefunctions requires the integral of a wavefunction's squared modulus to be equal to 1. In chemistry, hydrogen, the first element of the periodic table and the most abundant element in the known universe, has an atomic number of 1. Group 1 of the periodic table consists of hydrogen and the alkali metals.

In philosophy, the number 1 is commonly regarded as a symbol of unity, often representing God or the universe in monotheistic traditions. The Pythagoreans considered the numbers to be plural and therefore did not classify 1 itself as a number, but as the origin of all numbers. In their number philosophy, where odd numbers were considered male and even numbers female, 1 was considered neutral, capable of transforming even numbers to odd and vice versa by addition. The Neopythagorean philosopher Nicomachus of Gerasa's number treatise, as recovered by Boethius in the Latin translation Introduction to Arithmetic, affirmed that one is not a number, but the source of number. In the philosophy of Plotinus (and that of other neoplatonists), 'The One' is the ultimate reality and source of all existence. Philo of Alexandria (20 BC – AD 50) regarded the number one as God's number, and the basis for all numbers.
Mathematics
Basics
null
60987806
https://en.wikipedia.org/wiki/Stretch%20sensor
Stretch sensor
A stretch sensor is a sensor which can be used to measure deformation and stretching forces such as tension or bending. Stretch sensors are usually made from a material that is itself soft and stretchable. Most stretch sensors fall into one of three categories. The first type consists of an electrical conductor whose electrical resistance changes (usually increases) substantially when the sensor is deformed. The second type consists of a capacitor whose capacitance changes under deformation. Known properties of the sensor can then be used to deduce the deformation from the resistance or capacitance (a sketch of this conversion is given at the end of this entry). Both the rheostatic and capacitive types often take the form of a cord, tape, or mesh. The third type of sensor uses high-performance piezoelectric systems in soft, flexible or stretchable formats, exploiting the capability of piezoelectric materials to interconvert mechanical and electrical forms of energy.

Applications

Wearable stretch sensors can be used for tasks such as measuring body posture or movement. In 2018, the New Zealand-based company StretchSense began making a motion capture glove (data glove) using stretch sensors. Unlike gloves that use inertial or optical sensors, stretchable sensors do not suffer from drift or occlusion. Stretch sensors can also be used in robotics, particularly in soft robots. They are also increasingly used in medical settings, for example for analysing and measuring the dielectric properties of human skin.
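For the resistive type, the conversion from measured resistance to deformation is often approximated with a linear gauge-factor model. The following Python sketch is purely illustrative: the unstretched resistance R0 and the gauge factor GF are hypothetical calibration values, not figures from this entry, and real sensors may be nonlinear.

    # Linear model for a resistive stretch sensor: dR/R0 = GF * strain,
    # so strain = (R - R0) / (R0 * GF).

    def strain_from_resistance(r_ohms, r0_ohms=350.0, gauge_factor=2.0):
        """Return engineering strain (dimensionless) from a measured resistance."""
        return (r_ohms - r0_ohms) / (r0_ohms * gauge_factor)

    print(strain_from_resistance(364.0))  # 0.02, i.e. a 2% elongation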
Technology
Components
null
56164172
https://en.wikipedia.org/wiki/Anthias%20anthias
Anthias anthias
Anthias anthias, the swallowtail sea perch or marine goldfish, is a species of marine ray-finned fish from the family Anthiadidae. It is native to the eastern Atlantic Ocean and the Mediterranean Sea, where it is associated with reefs. It is found in the aquarium trade.

Description

Anthias anthias has a rather deep body, with a standard length equivalent to 2.5 times its depth. The dorsal fin has 10 spines, with the third spine being especially long, and 15 soft rays. The anal fin has 3 spines and 7 soft rays. The pectoral fins are longer than the pelvic fins. The caudal fin has asymmetrical, pointed lobes, the lower lobe being longer than the upper. It has a complete lateral line with 36–39 large scales. The colour varies from pink through to red, with 3 yellow lines on the sides of the head. Frequently these fish show brown blotches along the back. The pelvic fins are yellow in colour, but when breeding those of the males turn red. They can attain a standard length of , but are more normally around .

Distribution

Anthias anthias is found in the eastern Atlantic Ocean and Mediterranean Sea. In the eastern Atlantic it occurs from Portugal south to Angola and northern Namibia. It also occurs around the Azores, Madeira, the Canary Islands, the Cape Verde Islands and the islands in the Gulf of Guinea. It is widespread in the Mediterranean and has been recorded in the Canakkale Strait off Gallipoli, but not in the Black Sea.

Habitat and biology

Anthias anthias occurs from in depth and lives among rocks and corals, hiding in caves during the day. It emerges at night to feed on zooplankton, small crustaceans and smaller fishes. This species is a protogynous hermaphrodite: all individuals hatch as females. Each time a male dies, one of the larger females changes her sex and becomes male. The majority are female throughout their lives, and even a large school of these fish will contain only a few males. It takes around two weeks for the female to change sex, and this involves not just a change in the gonads but also in colour, size and shape. If there are too many males in a social group, then some of the males can reverse the sex change and revert to being females. They have been observed to feed co-operatively: some fish feed while others herd the prey, such as shrimp, with the roles then reversed to allow all the fish to feed.

Species description and taxonomy

Anthias anthias was first formally described in 1758 as Labrus anthius by Carolus Linnaeus in Volume 1 of the 10th edition of the Systema Naturae, with the type locality given as southern Europe. When Marcus Elieser Bloch created the genus Anthias, he used Anthias sacer as the type species, but this is regarded as a synonym of Linnaeus's Labrus anthias, so this species is the type species of its genus. The name anthias is Greek for a fish, probably the gilt-head bream.

Utilisation

Anthias anthias is used in the marine aquarium trade.
Biology and health sciences
Acanthomorpha
Animals
51807646
https://en.wikipedia.org/wiki/Peninsula
Peninsula
A peninsula is a landform that extends from a mainland and is surrounded by water on most sides. Peninsulas exist on every continent. The largest peninsula in the world is the Arabian Peninsula.

Etymology

The word peninsula derives from the Latin paeninsula, from paene ("almost") and insula ("island"). The word entered English in the 16th century.

Definitions

A peninsula is generally defined as a piece of land surrounded on most sides by water. A peninsula may be bordered by more than one body of water, and the body of water does not have to be an ocean or a sea. A piece of land on a very tight river bend, or one between two rivers, is sometimes said to form a peninsula, for example the New Barbadoes Neck in New Jersey, United States. A peninsula may be connected to the mainland via an isthmus, as in the case of the Isthmus of Corinth, which connects the Peloponnese peninsula to the Greek mainland.

Formation and types

Peninsulas can be formed by continental drift, glacial erosion, glacial meltwater, glacial deposition, marine sediment, marine transgressions, volcanoes, divergent boundaries or river sedimentation. More than one factor may play a part in the formation of a peninsula. For example, in the case of Florida, continental drift, marine sediment, and marine transgressions all contributed to its shape.

Glaciers

In the case of formation from glaciers (e.g., the Antarctic Peninsula or Cape Cod), peninsulas can be created by glacial erosion, meltwater or deposition. Where erosion formed the peninsula, both softer and harder rocks were present; because a glacier erodes the softer rock more readily, it carved out a basin, leaving the harder rock as higher ground. This may create peninsulas, as occurred for example in the Keweenaw Peninsula. In the case of formation from meltwater, melting glaciers deposit sediment and form moraines, which act as dams for the meltwater. This may create bodies of water that surround the land, forming peninsulas. Where deposition formed the peninsula, the peninsula is composed of sedimentary rock created from a large deposit of glacial drift. The hill of drift becomes a peninsula if it formed near water while remaining connected to the mainland, as happened during the formation of Cape Cod about 23,000 years ago.

Others

In the case of formation from volcanoes, when a volcano erupts magma near water it may form a peninsula (e.g., the Alaskan Peninsula). Peninsulas formed from volcanoes are especially common when the volcano erupts near shallow water. Marine sediment may form peninsulas through the creation of limestone. A rift peninsula may form as a result of a divergent boundary in plate tectonics (e.g., the Arabian Peninsula), while a convergent boundary may also form peninsulas (e.g., Gibraltar or the Indian subcontinent). Peninsulas can also form due to sedimentation in rivers: when a river carrying sediment flows into an ocean, the sediment is deposited, forming a delta peninsula. Marine transgressions (changes in sea level) may form peninsulas, but may also affect existing peninsulas. For example, the water level may change, causing a peninsula to become an island during high water levels. Similarly, wet weather causing higher water levels makes peninsulas appear smaller, while dry weather makes them appear larger. Sea level rise from global warming will permanently reduce the size of some peninsulas over time.

Uses

Peninsulas are noted for their use as shelter by humans and Neanderthals. The landform is advantageous because it gives hunting access to both land and sea animals. Peninsulas can also serve as markers of a nation's borders.

List of the largest peninsulas in the world
Physical sciences
Oceanic and coastal landforms
null
71840293
https://en.wikipedia.org/wiki/Height%20above%20mean%20sea%20level
Height above mean sea level
Height above mean sea level is a measure of a location's vertical distance (height, elevation or altitude) in reference to a vertical datum based on a historic mean sea level. In geodesy, it is formalized as orthometric height. The zero level varies between countries due to different reference points and historic measurement periods. Climate change and other forces can cause sea levels and elevations to vary over time.

Uses

Elevation or altitude above sea level is a standard measurement for:

Geographic locations such as towns, mountains and other landmarks.
The top of buildings and other structures.
Mining infrastructure, particularly underground.
Flying objects such as airplanes or helicopters below a transition altitude defined by local regulations.

Units and abbreviations

Elevation or altitude is generally expressed as "metres above mean sea level" in the metric system, or "feet above mean sea level" in United States customary and imperial units. Common abbreviations in English are:

AMSL – above mean sea level
ASL – above sea level
FAMSL – feet above mean sea level
FASL – feet above sea level
MAMSL – metres above mean sea level
MASL – metres above sea level
MSL – mean sea level

For elevations or altitudes, often just the abbreviation MSL is used, e.g., Mount Everest (8849 m MSL), or the reference to sea level is omitted completely, e.g., Mount Everest (8849 m).

Methods of measurement

Altimetry is the measurement of altitude or elevation above sea level. Common techniques are:

Surveying, especially levelling.
Global Navigation Satellite System (such as GPS), where a receiver determines a location from pseudoranges to multiple satellites. A geoid is needed to convert the 3D position to sea-level elevation.
Pressure altimeter, measuring atmospheric pressure, which decreases as altitude increases. Since atmospheric pressure also varies with the weather, a recent local measurement of the pressure at a known altitude is needed to calibrate the altimeter (a worked sketch of the pressure-to-altitude conversion is given at the end of this section).
Stereoscopy in aerial photography.
Aerial lidar and satellite laser altimetry.
Aerial or satellite radar altimetry.

Accurate measurement of historical mean sea levels is complex. Land mass subsidence (as occurs naturally in some regions) can give the appearance of rising sea levels. Conversely, markings on land masses that are uplifted (due to geological processes) can suggest a relative lowering of mean sea level.
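As a rough illustration of how a pressure altimeter works, the following Python sketch applies the international barometric formula for the standard atmosphere. The constants are textbook standard-atmosphere values rather than figures from this article, and a real altimeter would first be calibrated against a recent local reference pressure, as noted above.

    # Pressure altitude via the international barometric formula:
    #   h = (T0 / L) * (1 - (p / p0) ** (R * L / (g * M)))

    def pressure_altitude_m(p_hpa, p0_hpa=1013.25):
        """Approximate height above mean sea level (m) from static pressure (hPa)."""
        T0 = 288.15   # standard sea-level temperature, K
        L = 0.0065    # temperature lapse rate, K/m
        k = 0.1903    # R*L/(g*M) for dry air, dimensionless
        return (T0 / L) * (1.0 - (p_hpa / p0_hpa) ** k)

    print(round(pressure_altitude_m(899.0)))  # roughly 1000 m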
Physical sciences
Metric
Basics and measurement
59580357
https://en.wikipedia.org/wiki/Yoga%20as%20exercise
Yoga as exercise
Yoga as exercise is a physical activity consisting mainly of postures, often connected by flowing sequences, sometimes accompanied by breathing exercises, and frequently ending with relaxation lying down or meditation. Yoga in this form has become familiar across the world, especially in the US and Europe. It is derived from medieval Haṭha yoga, which made use of similar postures, but it is generally simply called "yoga". Academic research has given yoga as exercise a variety of names, including modern postural yoga and transnational anglophone yoga.

Posture is described in the Yoga Sutras II.29 as the third of the eight limbs, the ashtanga, of yoga. Sutra II.46 defines it as that which is steady and comfortable, but no further elaboration or list of postures is given. Postures were not central in any of the older traditions of yoga; posture practice was revived in the 1920s by yoga gurus including Yogendra and Kuvalayananda, who emphasised its health benefits. The flowing sequences of Surya Namaskar (Salute to the Sun) were pioneered by the Rajah of Aundh, Bhawanrao Shrinivasrao Pant Pratinidhi, in the 1920s. It and many standing poses used in gymnastics were incorporated into yoga by the yoga teacher Krishnamacharya in Mysore from the 1930s to the 1950s. Several of his students went on to found influential schools of yoga: Pattabhi Jois created Ashtanga Vinyasa Yoga, which in turn led to Power Yoga; B. K. S. Iyengar created Iyengar Yoga, and defined a modern set of yoga postures in his 1966 book Light on Yoga; and Indra Devi taught yoga as exercise to many celebrities in Hollywood. Other major schools founded in the 20th century include Bikram Yoga and Sivananda Yoga. Yoga as exercise spread across America and Europe, and then the rest of the world.

Haṭha yoga's non-postural practices, such as its purifications, are much reduced or absent in yoga as exercise. The term "hatha yoga" is also in use with a different meaning: a gentle unbranded yoga practice, independent of the major schools, often mainly for women. Practices vary from wholly secular, for exercise and relaxation, through to undoubtedly spiritual, whether in traditions like Sivananda Yoga or in personal rituals. Yoga as exercise's relationship to Hinduism is complex and contested; some Christians have rejected it on the grounds that it is covertly Hindu, while the "Take Back Yoga" campaign insisted that it was necessarily connected to Hinduism. Scholars have identified multiple trends in the changing nature of yoga since the end of the 19th century. Yoga as exercise has developed into a worldwide multi-billion dollar business, involving classes, certification of teachers, clothing such as yoga pants, books, videos, equipment including yoga mats, and yoga tourism.

History

Yoga's origins

The Sanskrit noun योग (yoga), cognate with English "yoke", is derived from the root yuj, "to attach, join, harness, yoke". Its ancient spiritual and philosophical goal was to unite the human spirit with the divine. The branch of yoga that makes use of physical postures is Haṭha yoga. The Sanskrit word हठ haṭha means "force", alluding to its use of physical techniques.

Haṭha yoga

Haṭha yoga flourished among secretive ascetic groups such as the Nath yogins in South Asia from c. 1100 to c. 1900. Instruction was directly from guru to individual pupil, in a long-term relationship. It was associated with religions, especially Hinduism but also Jainism and Buddhism. Its objectives were to manipulate vital fluids to enable absorption and ultimately liberation.
It consisted of practices including purifications, postures (asanas), locks, the directed gaze, seals, and rhythmic breathing. These were claimed to provide supernatural powers including healing, destruction of poisons, invisibility, and shape-shifting. Yogins wore little or no clothing; their bodies were sometimes smeared with cremation ash as a reminder of their forthcoming deaths. Equipment, too, was scanty; sometimes yogins used a tiger or deer skin as a rug to meditate on. Haṭha yoga made use of a small number of asanas, mainly seated; in particular, there were very few standing poses before 1900. They were practised slowly, often holding a position for long periods. The practice of asanas was a minor preparatory aspect of spiritual work. Yogins followed a strict vegetarian diet, excluding stimulants such as tea, coffee or alcohol. Their yoga was taught without payment; gurus were supported by gifts, and the philosophy was anti-consumerist.

Early influences

According to one theory, the system of physical education practised in the 19th-century Young Men's Christian Association, adapted by ex-military gymnasts for the schooling system in colonial British India, became the default form of mass drill, and this influenced the "modernized hatha yoga". According to the yoga scholar Suzanne Newcombe, modern yoga in India is a blend of Western gymnastics with postures from Haṭha yoga in India in the 20th century. From the 1850s onwards, there developed in India a culture of physical exercise to counter the colonial stereotype of the supposed "degeneracy" of Indians compared to the British, a belief reinforced by then-current ideas of Lamarckism and eugenics. This culture was taken up from the 1880s to the early 20th century by Indian nationalists such as Tiruka, who taught exercises and unarmed combat techniques under the guise of yoga. The German bodybuilder Eugen Sandow was acclaimed on his 1905 visit to India, at which time he was already a "cultural hero" in the country. The anthropologist Joseph Alter suggests that Sandow was the person who had the most influence on modern yoga. The first handbook of asanas in English, and the first to be illustrated with photographs, was Seetharaman Sundaram's 1928 Yogic Physical Culture.

Introduction to the West

Yoga was introduced to the Western world by the spiritual leader Vivekananda's 1893 visit to the World Parliament of Religions in Chicago, and his 1896 book Raja Yoga. However, he rejected Haṭha yoga and its "entirely" physical practices such as asanas as difficult and ineffective for spiritual growth, out of a widely shared distaste for India's wandering yogins. Yoga asanas were brought to America by the yoga teacher Yogendra. He founded a branch of The Yoga Institute in New York state in 1919, starting to make Haṭha yoga acceptable, seeking scientific evidence for its health benefits, and writing books such as his 1928 Yoga Asanas Simplified and his 1931 Yoga Personal Hygiene. The flowing sequences of the salute to the sun, Surya Namaskar, now accepted as yoga and containing popular asanas such as Uttanasana and the upward and downward dog poses, were popularized by the Rajah of Aundh, Bhawanrao Shrinivasrao Pant Pratinidhi, in the 1920s. In 1924, the yoga teacher Kuvalayananda founded the Kaivalyadhama Health and Yoga Research Center in Maharashtra, combining asanas with gymnastics, and like Yogendra seeking a scientific and medical basis for yogic practices.
In 1925, Kuvalayananda's rival Paramahansa Yogananda, having moved from India to America, set up the Self-Realization Fellowship in Los Angeles, and taught yoga, including asanas, breathing, chanting and meditation, to "tens of thousands of Americans". In 1923, Yogananda's younger brother, Bishnu Charan Ghosh, founded the Ghosh College of Yoga and Physical Culture in Calcutta. Tirumalai Krishnamacharya (1888–1989), "the father of modern yoga", claimed to have spent seven years with one of the few masters of Haṭha yoga then living, Ramamohana Brahmachari, at Lake Manasarovar in Tibet, from 1912 to 1918. He studied under Kuvalayananda in the 1930s, and then in his yogashala in the Jaganmohan Palace in Mysore created "a marriage of Haṭha yoga, wrestling exercises, and modern Western gymnastic movement, and unlike anything seen before in the yoga tradition." The Maharajah of Mysore, Krishna Raja Wadiyar IV, was a leading advocate of physical culture in India, and a neighbouring hall of his palace was used to teach Surya Namaskar classes, then considered to be gymnastic exercises. Krishnamacharya adapted these sequences of exercises into his flowing vinyasa style of yoga. The yoga scholar Mark Singleton noted that gymnastic systems like Niels Bukh's were popular in physical culture in India at that time, and that they contained many postures similar to Krishnamacharya's new asanas.

Among Krishnamacharya's pupils were people who became influential yoga teachers themselves: the Russian Eugenie V. Peterson, known as Indra Devi (from 1937), who moved to Hollywood, taught yoga to celebrities, and wrote the bestselling book Forever Young, Forever Healthy; Pattabhi Jois (from 1927), who in 1948 founded the flowing style Ashtanga Vinyasa Yoga, whose Mysore style makes use of repetitions of Surya Namaskar, and which in turn led to Power Yoga; and B. K. S. Iyengar (from 1933), his brother-in-law, who founded Iyengar Yoga, with its first centre in Britain. Together they made yoga popular as exercise and brought it to the Western world. Iyengar's 1966 book Light on Yoga popularised yoga asanas worldwide with what the scholar-practitioner Norman Sjoman calls its "clear no-nonsense descriptions and the obvious refinement of the illustrations", though the degree of precision it calls for is missing from earlier yoga texts.

Other Indian schools of yoga took up the new style of asanas, but continued to emphasize Haṭha yoga's spiritual goals and practices to varying extents. The Divine Life Society was founded by Sivananda Saraswati of Rishikesh in 1936. His many disciples include Swami Vishnudevananda, who founded the International Sivananda Yoga Vedanta Centres, starting in 1959; Swami Satyananda of the Bihar School of Yoga, a major centre of Haṭha yoga teacher training, founded in 1963; and Swami Satchidananda of Integral Yoga, founded in 1966. Vishnudevananda published his Complete Illustrated Book of Yoga in 1960, with a list of asanas that substantially overlaps with Iyengar's, sometimes with different names for the same poses. Jois's asana names almost exactly match Iyengar's.

Worldwide commodity

Three changes around the 1960s allowed yoga as exercise to become a worldwide commodity. First, people were for the first time able to travel freely around the world: consumers could go to the East; Indians could migrate to Europe and America; and business people and religious leaders could go where they liked to sell their wares.
Secondly, people across the Western world became disillusioned with organised religion, and started to look for alternatives. And thirdly, yoga became an uncontroversial form of exercise suitable for mass consumption, unlike the more religious or meditational forms of modern yoga such as Siddha Yoga or Transcendental Meditation. This involved the dropping of many traditional requirements on the practice of yoga, such as giving alms, being celibate, studying the Hindu scriptures, and retreating from society.

From the 1970s, yoga as exercise spread across many countries of the world, changing as it did so, and becoming "an integral part of (primarily) urban cultures worldwide", to the extent that the word yoga in the Western world now means the practice of asanas, typically in a class. For example, Iyengar Yoga reached South Africa in 1979 with the opening of its institute at Pietermaritzburg; its Association of South East & East Asia was founded in 2009. The spread of yoga in America was assisted by the television show Lilias, Yoga and You, hosted by Lilias Folan; it ran from 1970 to 1999. In Australia, by 2005 some 12% of the population practised yoga in a class or at home. As a valuable business, yoga has in turn been used in advertising, sometimes for yoga-related products, sometimes for other goods and services.

The market for yoga grew, argues the scholar of religion Andrea Jain, with the creation of an "endless" variety of second-generation yoga brands, saleable products "constructed and marketed for immediate consumption", based on earlier developments. For example, in 1997 John Friend, once a financial analyst, who had intensively studied both the postural Iyengar Yoga and the non-postural Siddha Yoga, founded Anusara Yoga. Friend likened the choice of his yoga over other brands to choosing "a fine restaurant" over "a fast-food joint." The New York Times Magazine headed its piece on him "The Yoga Mogul", while the historian of yoga Stefanie Syman argued that Friend had "very self-consciously" created his own yoga community. For example, Friend published his own teacher training manual, held workshops, conferences, and festivals, marketed his own brand of yoga mats and water bottles, and prescribed ethical guidelines. When Friend did not live up to the brand's high standards, he apologised publicly and took steps to protect the brand, in 2012 stepping back from running it and appointing a CEO.

Jain states that yoga is becoming "part of the pop culture around the world". Alter writes that it illustrates "transnational transmutation and the blurring of consumerism, holistic health, and embodied mysticism—as well as good old-fashioned Orientalism." Singleton argues that the commodity is the yoga body itself, its "spiritual possibility" signified by the "lucent skin of the yoga model", a beautiful image endlessly sold back to the yoga-practising public "as an irresistible commodity of the holistic, perfectible self". In 2008, the United States Department of Health and Human Services labelled September as National Yoga Month. From 2015, at the suggestion of India's Prime Minister, Narendra Modi, an annual International Day of Yoga has been held on 21 June.

Transformation

The anthropologist Sarah Strauss contrasts the goal of classical yoga, the isolation of the self or kaivalya, with the modern goals of good health, reduced stress, and physical flexibility. Sjoman notes that many of the asanas in Iyengar's Light on Yoga can be traced to his teacher, Krishnamacharya, "but not beyond him".
Singleton states that yoga used as exercise is not "the outcome of a direct and unbroken lineage of haṭha yoga", but that it would be "going too far to say that modern postural yoga has no relationship to asana practice within the Indian tradition." Contemporary yoga practice is the result of "radical innovation and experimentation" upon its Indian heritage. Jain writes that equating yoga as exercise with hatha yoga "does not account for the historical sources": asanas "only became prominent in modern yoga in the early twentieth century as a result of the dialogical exchanges between Indian reformers and nationalists and Americans and Europeans interested in health and fitness". In short, Jain writes, "modern yoga systems ... bear little resemblance to the yoga systems that preceded them. This is because [both] ... are specific to their own social contexts."

The historian Jared Farmer writes that twelve trends have characterised yoga's progression from the 1890s onwards: from peripheral to central in society; from India to global; from male to "predominantly" female; from spiritual to "mostly" secular; from sectarian to universal; from mendicant to consumerist; from meditational to postural; from being understood intellectually to experientially; from embodying esoteric knowledge to being accessible to all; from being taught orally to hands-on instruction; from presenting poses in text to using photographs; and from being "contorted social pariahs" to "lithe social winners". The trend away from authority is continued in post-lineage yoga, which is practised outside any major school or guru's lineage.

Practices

Asanas

Yoga as exercise consists largely, but not exclusively, of the practice of asanas. The numbers of asanas described (not just named) in some major Haṭha yoga and modern texts are shown in the table; all the Haṭha yoga text dates are approximate. Asanas can be classified in different ways, which may overlap: for example, by the position of the head and feet (standing, sitting, reclining, inverted), by whether balancing is required, or by the effect on the spine (forward bend, backbend, twist), giving a set of asana types agreed by most authors. The yoga guru Dharma Mittra uses his own categories, such as "Floor & Supine Poses". Yogapedia and Yoga Journal add "Hip-opening"; the yoga teacher Darren Rhodes, Yogapedia and Yoga Journal also add "Core strength."

Styles

The number of schools and styles of yoga in the Western world has continued to grow rapidly. By 2012, there were at least 19 widespread styles from Ashtanga Yoga to Viniyoga. These emphasise different aspects, including aerobic exercise, precision in the asanas, and spirituality in the Haṭha yoga tradition. These aspects can be illustrated by schools with distinctive styles. For example, Bikram Yoga has an aerobic exercise style with heated rooms and a fixed pattern of 2 breathing exercises and 24 asanas. Iyengar Yoga emphasises correct alignment in the postures, working slowly, if necessary with props, and ending with relaxation. Sivananda Yoga focuses more on spiritual practice, with 12 basic poses, chanting in Sanskrit, pranayama breathing exercises, meditation, and relaxation in each class, and importance is placed on a vegetarian diet. Jivamukti Yoga uses a flowing vinyasa style of asanas accompanied by music, chanting, and the reading of scriptures. Kundalini yoga emphasises the awakening of kundalini energy through meditation, pranayama, chanting, and suitable asanas.
Alongside the yoga brands, many teachers, for example in England, offer an unbranded "hatha yoga", often mainly to women, creating their own combinations of poses. These may be in flowing sequences (vinyasas), and new variants of poses are often created. The gender imbalance has sometimes been marked; in Britain in the 1970s, women formed between 70 and 90 percent of most yoga classes, as well as most of the yoga teachers. The tradition begun by Krishnamacharya survives at the Krishnamacharya Yoga Mandiram in Chennai; his son T. K. V. Desikachar and his grandson Kausthub Desikachar continued to teach in small groups, coordinating asana movements with the breath and personalising the teaching according to the needs of individual students.

Sessions

Yoga sessions vary widely depending on the school and style, and according to how advanced the class is. As with any exercise class, sessions usually start slowly with gentle warm-up exercises, move on to more vigorous exercises, and slow down again towards the end. A beginners' class can begin with simple poses like Sukhasana, some rounds of Surya Namaskar, and then a combination of standing poses such as Trikonasana, sitting poses like Dandasana, and balancing poses like Navasana; it may end with some reclining and inverted poses like Setu Bandha Sarvangasana and Viparita Karani, a reclining twist, and finally Savasana for relaxation and, in some styles, also for a guided meditation. A typical session in most styles lasts from an hour to an hour and a half, whereas in Mysore style yoga the class is scheduled in a three-hour time window during which the students practise on their own at their own speed, following individualised instruction by the teacher.

Hybrids

The evolution of yoga as exercise is not confined to the creation of new asanas and linking vinyasa sequences. A wide variety of hybrid activities are being explored, combining yoga with martial arts, with acrobatics (aerial yoga), with barre work (as in ballet preparation), on horseback, with dogs, with goats, with ring-tailed lemurs, with weights, and on paddleboards.

Purposes

Exercise

The energy cost of exercise is measured in units of metabolic equivalent of task (MET). Less than 3 METs counts as light exercise; 3 to 6 METs is moderate; 6 or over is vigorous. American College of Sports Medicine and American Heart Association guidelines count periods of at least 10 minutes of moderate MET level activity towards their recommended daily amounts of exercise. For healthy adults aged 18 to 65, the guidelines recommend moderate exercise for 30 minutes five days a week, or vigorous aerobic exercise for 20 minutes three days a week. Treated as a form of exercise, a complete yoga session with asanas and pranayama provides 3.3 ± 1.6 METs, on average a moderate workout. Surya Namaskar ranged from a light 2.9 to a vigorous 7.4 METs; the average for a session of yoga practice without Surya Namaskar was a light 2.9 ± 0.8 METs.
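To make the MET figures concrete, the conventional conversion of 1 MET ≈ 1 kcal per kilogram of body weight per hour can be applied. The following Python sketch is our own worked example using that standard convention, not a calculation from the sources cited here:

    # Energy expenditure: kcal ≈ MET × body mass (kg) × duration (hours).

    def kcal_burned(met, weight_kg, minutes):
        """Approximate energy used for an activity at a given MET level."""
        return met * weight_kg * (minutes / 60.0)

    # A 70 kg practitioner in a one-hour session averaging 3.3 METs:
    print(round(kcal_burned(3.3, 70, 60)))  # ~231 kcal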
Physical or Hindu

Since the mid-20th century, yoga has been used, especially in the Western world, as physical exercise for fitness and suppleness, rather than for what the historian of American yoga, Stefanie Syman, calls any "overtly Hindu" purpose. In 2010, this ambiguity triggered what the New York Times called "a surprisingly fierce debate in the gentle world of yoga". Some saffronising Indian-Americans campaigned to "Take Back Yoga" by informing Americans and other Westerners about the connection between yoga and Hinduism. The campaign was criticised by the New Age author Deepak Chopra, but supported by the president of the Southern Baptist Theological Seminary, R. Albert Mohler Jr. Jain notes that yoga is not necessarily Hindu, as it can also be Jain or Buddhist; nor is it homogeneous or static, so she is critical of both what she calls the "Christian yogaphobic position" and the "Hindu origins position." Farmer writes that Syman identifies a Protestant streak in yoga as exercise, "with its emphasis on working the body. This effortful yoga is, she says, paradoxically, both 'an indulgence and a penance'."

Authorities differ on whether yoga is purely exercise. For example, in 2012, New York state decided that yoga was exempt from state sales tax as it did not constitute "true exercise", whereas in 2014 the District of Columbia was clear that yoga premises were subject to the local sales tax on premises "the purpose of which is physical exercise." Similar debates have taken place in a Muslim context; for example, restrictions on yoga have been lifted in Saudi Arabia, while in Malaysia, Kuala Lumpur permits yoga classes provided they do not include chanting or meditation. The yoga teacher and author Mira Mehta, asked by Yoga Magazine in 2010 whether she preferred her pupils to commit to a spiritual path before they start yoga, replied "Certainly not. A person's spiritual life is his or her own affair. People come to yoga for all sorts of reasons. High on the list is health and the desire to become de-stressed." Kimberley J. Pingatore, studying attitudes among American yoga practitioners, found that they did not view the categories of religious, spiritual, and secular as alternatives.

However, Haṭha yoga's "ecstatic ... transcendent ... possibly subversive" elements remain in yoga used as exercise. The yoga teacher and author Jessamyn Stanley writes that modern Western society "does not respect the esoteric or spiritual at all", making people skeptical about any alignment of yoga as practised in the West with "chakras or spirituality". Stanley states that it is possible to start a practice without considering such matters, and that styles such as Bikram do not mention them, but that a deepening yoga practice will bring "an overall evolution of the self." Syman suggests that part of the attraction of Bikram and Ashtanga Yoga was that under the sweat, the commitment, the schedule, the physical demands and even the verbal abuse was a hard-won ecstasy, "a deep feeling of vitality, a feeling of pure energy, an unbowed posture, and mental acuity."

That context has led to a division of opinion among Christians: some, like Alexandra Davis of the Evangelical Alliance, assert that yoga is acceptable as long as practitioners are aware of modern yoga's origins; others, like Paul Gosbee, state that yoga's purpose is to "open up chakras" and release kundalini or "serpent power", which in Gosbee's view is "from Satan", making "Christian yoga ... a contradiction." Church halls are sometimes used for yoga, though in 2015 a yoga group was banned from a church hall in Bristol by the local parochial church council, which stated that yoga represented "alternative spiritualities."

In a secular context, the journalists Nell Frizzell and Reni Eddo-Lodge have debated (in The Guardian) whether Western yoga classes represent "cultural appropriation". In Frizzell's view, yoga has become a new entity, a long way from the Yoga Sutras of Patanjali, and while some practitioners are culturally insensitive, others treat it with more respect.
Eddo-Lodge agrees that Western yoga is far from Patanjali, but argues that the changes cannot be undone, whether people use it "as a holier-than-thou tool, as a tactic to balance out excessive drug use, or practised similarly to its origins with the spirituality that comes with it." Jain argues, however, that charges of appropriation "from 'the East' to 'the West'" fail to take account of the fact that yoga is evolving in a shared multinational process; it is not something that is being stolen from one place by another.

Health

Yoga as exercise has been popularized in the Western world by claims about its health benefits. The history of such claims was reviewed by William J. Broad in his 2012 book The Science of Yoga; he states that the claims that yoga was scientific began as Hindu nationalist posturing. Among the early exponents was Kuvalayananda, who attempted to demonstrate scientifically, in his purpose-built 1924 laboratory at Kaivalyadhama, that Sarvangasana (shoulderstand) specifically rehabilitated the endocrine glands (the organs that secrete hormones). He found no evidence to support such a claim, for this or any other asana.

The impact of yoga as exercise on physical and mental health has been a topic of systematic studies (evaluating primary research), although a 2014 report found that, despite its common practice and possible health benefits, it remained "extremely understudied." A systematic review of six studies found that Iyengar yoga is effective, at least in the short term, for both neck pain and low back pain. A review of six studies found benefits for depression, but noted that the studies' methods imposed limitations, while a clinical practice guideline from the American Cancer Society stated that yoga may reduce anxiety and stress in people with cancer. A 2015 systematic review called for more rigour in clinical trials of the effect of yoga on mood and measures of stress. The practice of asanas has been claimed to improve flexibility, strength, and balance; to alleviate stress and anxiety; and to reduce the symptoms of lower back pain. A review of five studies noted that three psychological mechanisms (positive affect, mindfulness, self-compassion) and four biological mechanisms (the posterior hypothalamus, interleukin-6, C-reactive protein and cortisol) that might act on stress had been examined empirically, whereas many other potential mechanisms remained to be studied; four of the mechanisms (positive affect, self-compassion, inhibition of the posterior hypothalamus and salivary cortisol) were found to mediate the potential stress-lowering effects of yoga. A 2017 review found moderate-quality evidence that yoga reduces back pain. For people with cancer, yoga may help relieve fatigue, improve psychological outcomes, and support sleep quality and life attitudes, although results varied across the reviews published in 2017. A 2015 systematic review noted that yoga may be effective in alleviating symptoms of prenatal depression. There is evidence that the practice of asanas improves birth outcomes and physical health, reduces anxiety and worry in older adults, and improves quality-of-life measures in the elderly, while also reducing hypertension.

Secular religion

From its origins in the 1920s, yoga used as exercise has had a "spiritual" aspect which is not necessarily neo-Hindu; its assimilation with Harmonial Gymnastics is an example. Jain calls yoga as exercise "a sacred fitness regimen set apart from day-to-day life."
The yoga therapist Ann Swanson writes that "scientific principles and evidence have demystified [yoga, but] ... surprisingly, this made my transformative experiences feel even more magical." Yoga practice sessions have, notes the yoga scholar Elizabeth De Michelis, a highly specific three-part structure that matches Arnold van Gennep's 1908 definition of the basic structure of a ritual:

1. a separation phase (detaching from the world outside);
2. a transition or liminal state; and
3. an incorporation or postliminal state.

For the separation phase, the yoga session begins by going into a neutral and if possible a secluded practice hall; worries, responsibilities, ego and shoes are all left outside; and the yoga teacher is treated with deference. The actual yoga practice forms the transition state, combining practical instructions with theory, made more or less explicit. The practitioner learns "to feel and to perceive in novel ways, most of all inwardly"; to "become silent and receptive" to help to get away from the "ego-dominated rationality of modern Western life." The final relaxation forms the incorporation phase; the practitioner relaxes in Savasana, just as dictated by the Hatha Yoga Pradipika 1.32. The posture offers "an exercise in sense withdrawal and mental quietening, and thus ... a first step towards meditative practice," a cleansing and healing process, and even a symbolic death and moment of self-renewal. Iyengar writes that savasana puts the practitioner in "that precise state [where] the body, the breath, the mind and the brain move toward the real self (Atma)" so as to merge into the Infinite, thus explaining the modern yoga healing ritual in terms of the Hindu Vishishtadvaita: an explanation that, De Michelis notes, practitioners are free to follow if they wish.

The yoga scholar Elliott Goldberg notes that some practitioners of yoga as exercise "inhabit their body as a means of accessing the spiritual... they use their asana practice as a vehicle for transcendence." He cites the yoga teacher Vanda Scaravelli's 1991 Awakening the Spine as an instance of such transcendence: "We learn to elongate and extend, rather than to pull and push... [and so] an unexpected opening follows, an opening from within us, giving life to the spine, as though the body had to reverse and awaken into another dimension." In mindful yoga, the practice of asanas is combined with pranayama and meditation, using the breath and sometimes Buddhist Vipassana meditation techniques to bring the attention to the body and the emotions, thus quietening the mind.

Competition

The idea of competitive yoga has been called an oxymoron by some people in the yoga community, such as the yoga teacher Maja Sidebaeck, but the fiercely contested Bishnu Charan Ghosh Cup, founded by Bikram Choudhury in 2003, is now held annually in Los Angeles.

Business

By the 21st century, yoga as exercise had become a flourishing, professionally marketed business. A 2016 Ipsos study reported that 36.7 million Americans practise yoga, making the business of classes, clothing and equipment worth $16 billion in America, compared to $10 billion in 2012, and $80 billion worldwide. Some 72 percent of practitioners were women. By 2010, Yoga Journal, founded in 1975, had some 350,000 subscribers and over 1,300,000 readers.

Clothing and equipment

Fashion has entered the world of yoga, with brands such as Lorna Jane and Lululemon offering their own ranges of women's yoga clothing.
Sales of goods such as yoga mats are increasing rapidly, and were projected to rise to $14 billion by 2020 in North America, where the key vendors in 2016 were Barefoot Yoga, Gaiam, Jade Yoga, and Manduka, according to Technavio. Sales of athleisure clothing such as yoga pants were worth $35 billion in 2014, forming 17% of American clothing sales. A wide variety of instructional videos are available, some free, for yoga practice at beginner and advanced levels; by 2018, over 6,000 commercially produced titles were on sale. Over 1,000 books have been published on yoga poses. Yoga has reached high fashion, too: in 2011, the fashion house Gucci, noting the "halo of chic" around yoga-practising celebrities such as Madonna and Sting, produced a yoga mat costing $850 and a matching carry case in leather for $350. In India, participants typically wear loose-fitting clothes for yoga classes, while serious practitioners in yoga ashrams practise an arduous combination of exercise, meditation, selfless service, vegetarian diet and celibacy, making yoga a way of life.

Holidays and training

Yoga holidays (vacations) are offered in "idyllic" places around the world, including in Croatia, England, France, Greece, Iceland, Indonesia, India, Italy, Montenegro, Morocco, Portugal, Romania, Spain, Sri Lanka, Thailand, and Turkey. In 2018, prices were up to £1,295 (about $1,500) for 6 days. Teacher training, as of 2017, could cost between $2,000 and $5,000, and it can take up to 3 years to obtain a teaching certificate. Yoga training courses, as of 2017, were still unregulated in the UK; the British Wheel of Yoga has been appointed the activity's official governing body by Sport England, but it lacks the power to compel training organisations, and many people take short unaccredited courses rather than one of the nine courses so far accredited.

Copyright claims

Bikram Yoga has become a global brand, and its founder, Bikram Choudhury, spent some ten years from 2002 attempting to establish copyright on the sequence of 26 postures used in Bikram Yoga, with some initial success. However, in 2012, a US federal court ruled that Bikram Yoga could not be copyrighted. In 2015, after further legal action, a US court of appeals ruled that the yoga sequence and breathing exercises were not eligible for copyright protection.

In culture

Literature

Yoga has found its way into works as varied as autobiography, chick lit, and documentary film. The actress Mariel Hemingway's 2002 autobiography Finding My Balance: A Memoir with Yoga describes how she used yoga to recover balance in her life after a dysfunctional upbringing: among other things, her grandfather, the novelist Ernest Hemingway, killed himself shortly before she was born. Each chapter is titled after an asana, the first being "Mountain Pose, or Tadasana", the posture of standing in balance. The teacher of yoga and mindful meditation Anne Cushman's 2009 novel Enlightenment for Idiots tells the story of a woman nearing the age of thirty whose life as a nanny and yogini hopeful is not working out as expected, and who is sure that a visit to the ashrams of India will sort out her life. Instead, she finds that nothing in India is quite what it seems on the surface. The Yoga Journal review notes that underneath the chick-lit "fun romp", the book is a serious "call to enlightenment and an introduction to yoga philosophy." Kate Churchill's 2009 film Enlighten Up!
follows an unemployed journalist for six months as, on the filmmaker's invitation, he travels the globe – New York, Boulder, California, Hawaii, India – to practise under yoga masters including Jois, Norman Allen, and Iyengar. The critic Roger Ebert found it interesting and peaceful, if "not terribly eventful, but I suppose we wouldn't want a yoga thriller". He commented: "I'm glad I saw it. I enjoyed all the people I met during Nick's six-month quest. Most seemed cheerful and outgoing, and exuded good health. They smiled a lot. They weren't creepy true believers obsessed with converting everyone." Research Yoga is becoming a subject of academic inquiry; many of the researchers are "scholar practitioners" who do yoga themselves. Medknow (part of Wolters Kluwer), with Swami Vivekananda Yoga Anusandhana Samsthana university, publishes the peer-reviewed open access medical journal International Journal of Yoga. An increasing number of papers are being published on the possible medical benefits of yoga, such as on stress and low back pain. The School of Oriental and African Studies in London has created a Centre of Yoga Studies; it hosted the five-year Hatha Yoga Project which traced the history of physical yoga, and it teaches a master's degree in yoga and meditation. Academics have given yoga as exercise a variety of names, including "modern postural yoga" reflecting its emphasis on asanas (postures) and "transnational anglophone yoga" denoting its growth in the English-speaking world, especially America.
Biology and health sciences
Physical fitness
Health
53365898
https://en.wikipedia.org/wiki/Earliest%20known%20life%20forms
Earliest known life forms
The earliest known life forms on Earth may be as old as 4.1 billion years (4.1 Ga) according to biologically fractionated graphite inside a single zircon grain in the Jack Hills range of Australia. The earliest evidence of life found in a stratigraphic unit, not just a single mineral grain, is the 3.7 Ga metasedimentary rocks containing graphite from the Isua Supracrustal Belt in Greenland. The earliest direct evidence of life on Earth consists of stromatolite fossils, which have been found in 3.480-billion-year-old geyserite uncovered in the Dresser Formation of the Pilbara Craton of Western Australia. Various microfossils of microorganisms have been found in 3.4 Ga rocks, including 3.465-billion-year-old Apex chert rocks from the same Australian craton region, and in 3.42 Ga hydrothermal vent precipitates from Barberton, South Africa. Much later in the geologic record, likely starting around 1.73 Ga, preserved molecular compounds of biologic origin are indicative of aerobic life. Therefore, life on Earth arose at least 3.5 billion years ago, and possibly as early as 4.1 billion years ago, not long after the oceans formed 4.5 billion years ago and after the formation of the Earth 4.54 billion years ago. Biospheres Earth is the only place in the universe known to harbor life, where it exists in multiple environments. The origin of life on Earth was at least 3.5 billion years ago, possibly as early as 3.8-4.1 billion years ago. Since its emergence, life has persisted in several geological environments. The Earth's biosphere extends deep below the seafloor and high into the atmosphere, and includes soil, hydrothermal vents, and rock. Further, the biosphere has been found to extend below the ice of Antarctica and includes the deepest parts of the ocean. In July 2020, marine biologists reported that aerobic microorganisms (mainly) in "quasi-suspended animation" were found in organically poor sediment below the seafloor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"). Microbes have been found in the Atacama Desert in Chile, one of the driest places on Earth, and in deep-sea hydrothermal vent environments which can reach temperatures over 400°C. Microbial communities can also survive in cold permafrost conditions down to -25°C. Under certain test conditions, life forms have been observed to survive in the vacuum of outer space. More recently, studies conducted on the International Space Station found that bacteria could survive in outer space. In February 2023, findings of a "dark microbiome" of unfamiliar microorganisms (so-called microbial dark matter) in the Atacama Desert in Chile, a Mars-like region of planet Earth, were reported. Geochemical evidence The age of Earth is about 4.54 billion years; the earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago according to the stromatolite record. Some computer models suggest life began as early as 4.5 billion years ago. The oldest evidence of life is indirect, in the form of isotopic fractionation. Microorganisms will preferentially use the lighter isotope of an element to build biomass, as it takes less energy to break its bonds for metabolic processes. Biologic material will therefore often have a composition that is enriched in lighter isotopes compared to the surrounding rock in which it is found.
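This enrichment is conventionally reported in delta notation, comparing the isotope ratio of a sample against that of a standard; the standard formulation (given here for orientation, not taken from any specific study cited in this article) is:

```latex
\delta^{13}\mathrm{C} = \left( \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1 \right) \times 1000
```

The factor of 1000 expresses the result in parts per thousand (per mil, ‰); more negative values indicate enrichment in the lighter isotope, the direction in which biological processes tend to push.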
Carbon isotopes, expressed in this per mil notation as δ13C, are frequently used to detect carbon fixation by organisms and to assess whether purported evidence of early life has a biological origin. Typically, life will preferentially metabolize the isotopically light 12C isotope rather than the heavier 13C isotope, and biologic material can record this fractionation of carbon. The oldest, disputed, geochemical evidence of life is isotopically light graphite inside a single zircon grain from the Jack Hills in Western Australia. The graphite showed a δ13C signature consistent with biogenic carbon on Earth. Other early evidence of life is found in rocks from both the Akilia Sequence and the Isua Supracrustal Belt (ISB) in Greenland. These 3.7 Ga metasedimentary rocks also contain graphite or graphite inclusions with carbon isotope signatures that suggest biological fractionation. The primary issue with isotopic evidence of life is that abiotic processes can fractionate isotopes and produce signatures similar to those of biotic processes. Reassessment of the Akilia graphite shows that metamorphism, Fischer-Tropsch mechanisms in hydrothermal environments, and volcanic processes may be responsible for its enrichment in lighter carbon isotopes. The ISB rocks that contain the graphite may have experienced a change in composition from hot fluids, i.e. metasomatism, so the graphite may have been formed by abiotic chemical reactions. However, after further spectral analysis, the ISB graphite has become more generally accepted as biologic in origin. Metasedimentary rocks from the 3.5 Ga Dresser Formation, which experienced less metamorphism than the sequences in Greenland, contain better preserved geochemical evidence. Carbon isotopes, as well as sulfur isotopes found in barite, which are fractionated by microbial metabolisms during sulfate reduction, are consistent with biological processes. However, the Dresser Formation was deposited in an active volcanic and hydrothermal environment, and abiotic processes could still be responsible for these fractionations. Many of these findings are, however, supplemented by direct evidence, typically the presence of microfossils. Fossil evidence Fossils are direct evidence of life. In the search for the earliest life, fossils are often supplemented by geochemical evidence. The fossil record does not extend as far back as the geochemical record because metamorphic processes erase fossils from geologic units. Stromatolites Stromatolites are laminated sedimentary structures created by photosynthetic organisms as they establish a microbial mat on a sediment surface. An important distinction for biogenicity is their convex-up structures and wavy laminations, which are typical of microbial communities that build preferentially toward the sun. A disputed report of stromatolites comes from the 3.7 Ga Isua metasediments, which show convex-up, conical, and domical morphologies. Further mineralogical analysis disagrees with the initial findings of internal convex-up laminae, a critical criterion for stromatolite identification, suggesting that the structures may be deformation features (i.e. boudins) caused by extensional tectonics in the Isua Supracrustal Belt. The earliest direct evidence of life consists of stromatolites found in 3.48 billion-year-old chert in the Dresser Formation of the Pilbara Craton in Western Australia.
Several features in these fossils are difficult to explain with abiotic processes, for example the thickening of laminae over flexure crests, which is expected where microbial mats grow toward greater sunlight. Sulfur isotopes from barite veins in the stromatolites also favor a biologic origin. However, while most scientists accept their biogenicity, abiotic explanations for these fossils cannot be fully discarded because of their hydrothermal depositional environment and debated geochemical evidence. Most Archean stromatolites older than 3.0 Ga are found in Australia or South Africa. Stratiform stromatolites from the Pilbara Craton have been identified in the 3.47 Ga Mount Ada Basalt. Barberton, South Africa hosts stratiform stromatolites in the 3.46 Ga Hooggenoeg, 3.42 Ga Kromberg, and 3.33 Ga Mendon Formations of the Onverwacht Group. The 3.43 Ga Strelley Pool Formation in Western Australia hosts stromatolites that show vertical and horizontal variations, which may record microbial communities responding to transient environmental conditions. Thus, it is likely that anoxygenic or oxygenic photosynthesis has been occurring since at least the deposition of the 3.43 Ga Strelley Pool Formation. Microfossils Claims of the earliest life based on fossilized microorganisms (microfossils) come from hydrothermal vent precipitates from an ancient sea-bed in the Nuvvuagittuq Belt of Quebec, Canada. These may be as old as 4.28 billion years, which would make them the oldest evidence of life on Earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.41 billion years ago. These findings may be better explained by abiotic processes: for example, silica-rich waters, "chemical gardens", circulating hydrothermal fluids, and volcanic ejecta can produce morphologies similar to those presented in Nuvvuagittuq. The 3.48 Ga Dresser Formation hosts microfossils of prokaryotic filaments in silica veins, the earliest fossil evidence of life on Earth, but their origins may be volcanic. 3.465-billion-year-old Australian Apex chert rocks may once have contained microorganisms, although the validity of these findings has been contested. Putative filamentous microfossils, possibly of methanogens and/or methanotrophs that lived about 3.42 billion years ago in "a paleo-subseafloor hydrothermal vein system of the Barberton greenstone belt", have been identified in South Africa. A diverse set of microfossil morphologies has been found in the 3.43 Ga Strelley Pool Formation, including spheroid, lenticular, and film-like microstructures. Their biogenicity is strengthened by their observed chemical preservation. The early lithification of these structures allowed important chemical tracers, such as the carbon-to-nitrogen ratio, to be retained at levels higher than is typical in older, metamorphosed rock units. Molecular biomarkers Biomarkers are compounds of biologic origin found in the geologic record that can be linked to past life. Although they are not preserved until the late Archean, they are important indicators of early photosynthetic life. Lipids are particularly useful biomarkers because they can survive for long periods of geologic time and can be used to reconstruct past environments. Fossilized lipids were reported from 2.7 Ga laminated shales from the Pilbara Craton and the 2.67 Ga Kaapvaal Craton in South Africa. However, the age of these biomarkers and whether their deposition was synchronous with their host rocks were debated, and further work showed that the lipids were contaminants.
The oldest "clearly indigenous" biomarkers are from the 1.64 Ga Barney Creek Formation in the McArthur Basin in Northern Australia, but hydrocarbons from the 1.73 Ga Wollogorang Formation in the same basin have also been detected. Other indigenous biomarkers can be dated to the Mesoproterozoic era (1.6-1.0 Ga). The 1.4 Ga Hongshuizhuang Formation in the North China Craton contains hydrocarbons in shales that were likely sourced from prokaryotes. Biomarkers were found in siltstones from the 1.38 Ga Roper Group of the McArthur Basin. Hydrocarbons possibly derived from bacteria and algae were reported in 1.37 Ga Xiamaling Formation of the NCC. The 1.1 Ga Atar/El Mreïti Group in the Taoudeni Basin, Mauritania show indigenous biomarkers in black shales. Genomic evidence By comparing the genomes of modern organisms (in the domains Bacteria and Archaea), it is evident that there was a last universal common ancestor (LUCA). LUCA is not thought to be the first life on Earth, but rather the only type of organism of its time to still have living descendants. In 2016, M. C. Weiss and colleagues proposed a minimal set of genes that each occurred in at least two groups of Bacteria and two groups of Archaea. They argued that such a distribution of genes would be unlikely to arise by horizontal gene transfer, and so any such genes must have derived from the LUCA. A molecular clock model suggests that the LUCA may have lived 4.477—4.519 billion years ago, within the Hadean eon. RNA replicators Model Hadean-like geothermal microenvironments were demonstrated to have the potential to support the synthesis and replication of RNA and thus possibly the evolution of primitive life. Porous rock systems, comprising heated air-water interfaces, were shown to facilitate ribozyme catalyzed RNA replication of sense and antisense strands and then subsequent strand-dissociation. This enabled combined synthesis, release and folding of active ribozymes. Further work on early life Extraterrestrial origin for early life While current geochemical evidence dates the origin of life to possibly as early as 4.1 Ga, and fossil evidence shows life at 3.5 Ga, some researchers speculate that life may have started nearly 4.5 billion years ago. According to biologist Stephen Blair Hedges, "If life arose relatively quickly on Earth ... then it could be common in the universe." The possibility that terrestrial life forms may have been seeded from outer space has been considered. In January 2018, a study found that 4.5 billion-year-old meteorites found on Earth contained liquid water along with prebiotic complex organic substances that may be ingredients for life. Early life on land As for life on land, in 2019 scientists reported the discovery of a fossilized fungus, named Ourasphaira giraldae, in the Canadian Arctic, that may have grown on land a billion years ago, well before plants are thought to have been living on land. The earliest life on land may have been bacteria 3.22 billion years ago. Evidence of microbial life on land may have been found in 3.48 billion-year-old geyserite in the Pilbara Craton of Western Australia. Gallery
Biology and health sciences
Biology basics
Biology
73248112
https://en.wikipedia.org/wiki/Large%20language%20model
Large language model
A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text. The largest and most capable LLMs are generative pretrained transformers (GPTs). Modern models can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on. History Before 2017, there were a few language models that were large as compared to the capacities then available. In the 1990s, the IBM alignment models pioneered statistical language modelling. A smoothed n-gram model in 2001, trained on 0.3 billion words, achieved state-of-the-art perplexity at the time. In the 2000s, as Internet use became prevalent, some researchers constructed Internet-scale language datasets ("web as corpus"), upon which they trained statistical language models. By 2009, statistical language models dominated over symbolic language models in most language processing tasks, as they can usefully ingest large datasets. After neural networks became dominant in image processing around 2012, they were applied to language modelling as well. Google converted its translation service to Neural Machine Translation in 2016. Because this was before transformers existed, it was done by seq2seq deep LSTM networks. At the 2017 NeurIPS conference, Google researchers introduced the transformer architecture in their landmark paper "Attention Is All You Need". This paper's goal was to improve upon 2014 seq2seq technology, and it was based mainly on the attention mechanism developed by Bahdanau et al. in 2014. The following year, in 2018, BERT was introduced and quickly became "ubiquitous". Though the original transformer has both encoder and decoder blocks, BERT is an encoder-only model. Academic and research usage of BERT began to decline in 2023, following rapid improvements in the abilities of decoder-only models (such as GPT) to solve tasks via prompting. Although decoder-only GPT-1 was introduced in 2018, it was GPT-2 in 2019 that caught widespread attention, because OpenAI at first deemed it too powerful to release publicly out of fear of malicious use. GPT-3 in 2020 went a step further and is available only via API, with no offering of downloading the model to execute locally. But it was the 2022 consumer-facing browser-based ChatGPT that captured the imaginations of the general population and caused some media hype and online buzz. The 2023 GPT-4 was praised for its increased accuracy and as a "holy grail" for its multimodal capabilities. OpenAI did not reveal the high-level architecture or the number of parameters of GPT-4. The release of ChatGPT led to an uptick in LLM usage across several research subfields of computer science, including robotics, software engineering, and societal impact work. Competing language models have for the most part been attempting to equal the GPT series, at least in terms of number of parameters. Since 2022, source-available models have been gaining popularity, especially at first with BLOOM and LLaMA, though both have restrictions on the field of use. Mistral AI's models Mistral 7B and Mixtral 8x7b have the more permissive Apache License.
The instruction-fine-tuned variant of the Llama 3 70-billion-parameter model is, as of its release in 2024, the most powerful open LLM according to the LMSYS Chatbot Arena Leaderboard, being more powerful than GPT-3.5 but not as powerful as GPT-4. Since 2023, many LLMs have been trained to be multimodal, having the ability to also process or generate other types of data, such as images or audio. These LLMs are also called large multimodal models (LMMs). As of 2024, the largest and most capable models are all based on the transformer architecture. Some recent implementations are based on other architectures, such as recurrent neural network variants and Mamba (a state space model). Dataset preprocessing Tokenization Because machine learning algorithms process numbers rather than text, the text must be converted to numbers. In the first step, a vocabulary is decided upon, then integer indices are arbitrarily but uniquely assigned to each vocabulary entry, and finally, an embedding is associated with the integer index. Algorithms include byte-pair encoding (BPE) and WordPiece. There are also special tokens serving as control characters, such as [MASK] for a masked-out token (as used in BERT), and [UNK] ("unknown") for characters not appearing in the vocabulary. Also, some special symbols are used to denote special text formatting. For example, "Ġ" denotes a preceding whitespace in RoBERTa and GPT, and "##" denotes continuation of a preceding word in BERT. For example, the BPE tokenizer used by GPT-3 (Legacy) would split the text tokenizer: texts -> series of numerical "tokens" into a sequence of sub-word tokens, keeping frequent words whole and breaking rarer words into pieces. Tokenization also compresses the datasets. Because LLMs generally require input to be an array that is not jagged, the shorter texts must be "padded" until they match the length of the longest one. How many tokens are needed per word, on average, depends on the language of the dataset. BPE As an example, consider a tokenizer based on byte-pair encoding. In the first step, all unique characters (including blanks and punctuation marks) are treated as an initial set of n-grams (i.e. an initial set of uni-grams). Successively, the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. All occurrences of adjacent pairs of (previously merged) n-grams that most frequently occur together are then again merged into even lengthier n-grams, until a vocabulary of prescribed size is obtained (in the case of GPT-3, the size is 50,257). After a tokenizer is trained, any text can be tokenized by it, as long as the text does not contain characters not appearing in the initial set of uni-grams. Problems A token vocabulary based on the frequencies extracted from mainly English corpora uses as few tokens as possible for an average English word. An average word in another language encoded by such an English-optimized tokenizer is, however, split into a suboptimal number of tokens. The GPT-2 tokenizer can use up to 15 times more tokens per word for some languages, for example for the Shan language from Myanmar. Even more widespread languages such as Portuguese and German have "a premium of 50%" compared to English. Greedy tokenization also causes subtle problems with text completion. Dataset cleaning In the context of training LLMs, datasets are typically cleaned by removing low-quality, duplicated, or toxic data. Cleaned datasets can increase training efficiency and lead to improved downstream performance. A trained LLM can be used to clean datasets for training a further LLM.
With the increasing proportion of LLM-generated content on the web, data cleaning in the future may include filtering out such content. LLM-generated content can pose a problem if the content is similar to human text (making filtering difficult) but of lower quality (degrading performance of models trained on it). Synthetic data Training the largest language models may require more linguistic data than is naturally available, or the naturally occurring data may be of insufficient quality. In these cases, synthetic data might be used. Microsoft's Phi series of LLMs is trained on textbook-like data generated by another LLM. Training and architecture Reinforcement learning from human feedback (RLHF) Reinforcement learning from human feedback (RLHF), through algorithms such as proximal policy optimization, is used to further fine-tune a model based on a dataset of human preferences. Instruction tuning Using "self-instruct" approaches, LLMs have been able to bootstrap correct responses, replacing any naive responses, starting from human-generated corrections of a few cases. For example, in the instruction "Write an essay about the main themes represented in Hamlet," an initial naive completion might be "If you submit the essay after March 17, your grade will be reduced by 10% for each day of delay," based on the frequency of this textual sequence in the corpus. Mixture of experts The largest LLMs may be too expensive to train and use directly. For such models, mixture of experts (MoE) can be applied, a line of research pursued by Google researchers since 2017 to train models reaching up to 1 trillion parameters. Prompt engineering, attention mechanism, and context window Most results previously achievable only by (costly) fine-tuning can be achieved through prompt engineering, although limited to the scope of a single conversation (more precisely, limited to the scope of a context window). In order to find out which tokens are relevant to each other within the scope of the context window, the attention mechanism calculates "soft" weights for each token, more precisely for its embedding, by using multiple attention heads, each with its own "relevance" for calculating its own soft weights. For example, the small (i.e. 117M-parameter) GPT-2 model had twelve attention heads and a context window of only 1k tokens. Its medium version has 345M parameters and contains 24 layers, each with 12 attention heads; for training with gradient descent, a batch size of 512 was used. The largest models, such as Google's Gemini 1.5, presented in February 2024, can have a context window of up to 1 million tokens (a context window of 10 million was also "successfully tested"). Other models with large context windows include Anthropic's Claude 2.1, with a context window of up to 200k tokens. Note that this maximum refers to the number of input tokens and that the maximum number of output tokens differs from the input and is often smaller. For example, the GPT-4 Turbo model has a maximum output of 4096 tokens. The length of a conversation that the model can take into account when generating its next answer is likewise limited by the size of the context window. If a conversation, for example with ChatGPT, is longer than the context window, only the parts inside the context window are taken into account when generating the next answer, or the model must apply some algorithm to summarize the more distant parts of the conversation.
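As a concrete illustration of the "soft" attention weights described above, here is a minimal sketch of scaled dot-product attention. It is illustrative only: the embeddings are random stand-ins, there is a single head, and the learned query/key/value projections of a real transformer are omitted. It also makes visible why long contexts are costly: the weight matrix has one entry per pair of tokens.

```python
# Minimal single-head attention sketch (illustrative, not a real LLM layer).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)    # (n, n): one relevance score per token pair,
    weights = softmax(scores, axis=-1) # so cost grows quadratically with context n
    return weights @ V, weights        # each output is a weighted mix of values

n, d = 8, 16                           # 8 tokens, 16-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(n, d))            # stand-in token embeddings
out, w = attention(x, x, x)            # self-attention over the sequence
print(w.shape)                         # (8, 8); each row sums to 1
```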
The shortcomings of making a context window larger include higher computational cost and possibly a diluted focus on local context, while making it smaller can cause a model to miss an important long-range dependency. Balancing them is a matter of experimentation and domain-specific considerations. Given a segment from its training dataset, a model may be pre-trained either to predict how the segment continues or to fill in what is missing from it. Pre-training can be either autoregressive (predicting how the segment continues, the way GPTs do it): for example, given a segment "I like to eat", the model predicts "ice cream" or "sushi"; or "masked" (filling in the parts missing from the segment, the way BERT does it): for example, given a segment "I like to [__] [__] cream", the model predicts that "eat" and "ice" are missing. Models may be trained on auxiliary tasks which test their understanding of the data distribution, such as Next Sentence Prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus. During training, a regularization loss is also used to stabilize training; however, regularization loss is usually not used during testing and evaluation. Infrastructure Substantial infrastructure is necessary for training the largest models. Training cost The qualifier "large" in "large language model" is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as "large". As time goes on, what was previously considered "large" may evolve. GPT-1 of 2018 is usually considered the first LLM, even though it has only 0.117 billion parameters. The tendency towards larger models is visible in the list of large language models. Advances in software and hardware have reduced the cost substantially since 2020, such that in 2023 the computational cost of training a 12-billion-parameter LLM was 72,300 A100-GPU-hours, while in 2020 the cost of training a 1.5-billion-parameter LLM (which was two orders of magnitude smaller than the state of the art in 2020) was between $80,000 and $1,600,000. Since 2020, large sums have been invested in increasingly large models. For example, training of GPT-2 (i.e. a 1.5-billion-parameter model) in 2019 cost $50,000, while training of PaLM (i.e. a 540-billion-parameter model) in 2022 cost $8 million, and Megatron-Turing NLG 530B (in 2021) cost around $11 million. For a transformer-based LLM, training cost is much higher than inference cost: it costs 6 FLOPs per parameter to train on one token, whereas it costs 1 to 2 FLOPs per parameter to infer on one token. Tool use There are certain tasks that, in principle, cannot be solved by any LLM, at least not without the use of external tools or additional software. An example of such a task is responding to the user's input '354 * 139 = ', provided that the LLM has not already encountered a continuation of this calculation in its training corpus. In such cases, the LLM needs to resort to running program code that calculates the result, which can then be included in its response. Another example is "What is the time now? It is ", where a separate program interpreter would need to execute code to get the system time on the computer, so that the LLM can include it in its reply. This basic strategy can be made more sophisticated with multiple attempts at generated programs and other sampling strategies. Generally, in order to get an LLM to use tools, one must fine-tune it for tool use.
If the number of tools is finite, then fine-tuning may be done just once. If the number of tools can grow arbitrarily, as with online API services, then the LLM can be fine-tuned to be able to read API documentation and call API correctly. A simpler form of tool use is retrieval-augmented generation: the augmentation of an LLM with document retrieval. Given a query, a document retriever is called to retrieve the most relevant documents. This is usually done by encoding the query and the documents into vectors, then finding the documents with vectors (usually stored in a vector database) most similar to the vector of the query. The LLM then generates an output based on both the query and context included from the retrieved documents. Agency An LLM is typically not an autonomous agent by itself, as it lacks the ability to interact with dynamic environments, recall past behaviors, and plan future actions, but can be transformed into one by integrating modules like profiling, memory, planning, and action. The ReAct pattern, a portmanteau of "Reason + Act", constructs an agent out of an LLM, using the LLM as a planner. The LLM is prompted to "think out loud". Specifically, the language model is prompted with a textual description of the environment, a goal, a list of possible actions, and a record of the actions and observations so far. It generates one or more thoughts before generating an action, which is then executed in the environment. The linguistic description of the environment given to the LLM planner can even be the LaTeX code of a paper describing the environment. In the DEPS ("Describe, Explain, Plan and Select") method, an LLM is first connected to the visual world via image descriptions, then it is prompted to produce plans for complex tasks and behaviors based on its pretrained knowledge and environmental feedback it receives. The Reflexion method constructs an agent that learns over multiple episodes. At the end of each episode, the LLM is given the record of the episode, and prompted to think up "lessons learned", which would help it perform better at a subsequent episode. These "lessons learned" are given to the agent in the subsequent episodes. Monte Carlo tree search can use an LLM as rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as world model. For open-ended exploration, an LLM can be used to score observations for their "interestingness", which can be used as a reward signal to guide a normal (non-LLM) reinforcement learning agent. Alternatively, it can propose increasingly difficult tasks for curriculum learning. Instead of outputting individual actions, an LLM planner can also construct "skills", or functions for complex action sequences. The skills can be stored and later invoked, allowing increasing levels of abstraction in planning. LLM-powered agents can keep a long-term memory of its previous contexts, and the memory can be retrieved in the same way as Retrieval Augmented Generation. Multiple such agents can interact socially. Compression Typically, LLMs are trained with single- or half-precision floating point numbers (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have 100 billion parameters, requiring 200 gigabytes to load, which places them outside the range of most consumer electronics. 
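The 200-gigabyte figure above follows from simple arithmetic; a minimal sketch (the byte widths per number format are standard, and the 100-billion-parameter model is the example used above):

```python
# Back-of-the-envelope memory needed just to hold a model's weights.
BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1, "int4": 0.5}

def weight_memory_gb(n_params, fmt):
    return n_params * BYTES_PER_PARAM[fmt] / 1e9

for fmt in BYTES_PER_PARAM:
    print(fmt, weight_memory_gb(100e9, fmt), "GB")
# float16: 100e9 params * 2 bytes = 200 GB, as stated above.
# Quantizing the same model to int4 would cut this to 50 GB
# (weights only; activations and KV cache need additional memory).
```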
Post-training quantization aims to decrease the space requirement by lowering the precision of the parameters of a trained model, while preserving most of its performance. The simplest form of quantization simply truncates all numbers to a given number of bits. It can be improved by using a different quantization codebook per layer. Further improvement can be achieved by applying different precisions to different parameters, with higher precision for particularly important parameters ("outlier weights"). While quantized models are typically frozen, and only pre-quantized models are fine-tuned, quantized models can still be fine-tuned. Multimodality Multimodality means "having several modalities", and a "modality" refers to a type of input or output, such as video, image, audio, text, proprioception, etc. There have been many AI models trained specifically to ingest one modality and output another modality, such as AlexNet for image to label, visual question answering for image-text to text, and speech recognition for speech to text. A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM and a trained image encoder, and make a small multilayer perceptron such that, for any image, the post-processed vector has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability. Flamingo demonstrated the effectiveness of the tokenization method, fine-tuning a pair of pretrained language model and image encoder to perform better on visual question answering than models trained from scratch. The Google PaLM model was fine-tuned into a multimodal model, PaLM-E, using the tokenization method, and applied to robotic control. LLaMA models have also been turned multimodal using the tokenization method, to allow image inputs and video inputs. GPT-4 can use both text and image as inputs (although the vision component was not released to the public until GPT-4V); Google DeepMind's Gemini is also multimodal. Mistral introduced its own multimodal Pixtral 12B model in September 2024. Properties Scaling laws The performance of an LLM after pretraining largely depends on the: cost of pretraining C (the total amount of compute used), size of the artificial neural network itself, such as the number of parameters N (i.e. the amount of neurons in its layers, the amount of weights between them, and biases), and size of its pretraining dataset, i.e. the number of tokens in the corpus, D. "Scaling laws" are empirical statistical laws that predict LLM performance based on such factors. One particular scaling law ("Chinchilla scaling") for an LLM autoregressively trained for one epoch, with a log-log learning rate schedule, states that C = C_0 N D and L = E + A / N^{\alpha} + B / D^{\beta}, where the variables are: C, the cost of training the model, in FLOPs; N, the number of parameters in the model; D, the number of tokens in the training set; and L, the average negative log-likelihood loss per token (nats/token) achieved by the trained LLM on the test dataset. The statistical hyper-parameters are C_0 = 6 (meaning that it costs 6 FLOPs per parameter to train on one token), alpha = 0.34, beta = 0.28, A = 406.4, B = 410.7, and E = 1.69. Note that training cost is much higher than inference cost, where it costs 1 to 2 FLOPs per parameter to infer on one token.
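As a sketch of how such a law is used, the following evaluates the formula just stated with the fitted Chinchilla constants (treat the constants as indicative, not definitive):

```python
# Chinchilla scaling law L(N, D) = E + A/N**alpha + B/D**beta, with the
# fitted constants reported for Chinchilla; C ~= 6*N*D as stated above.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(N, D):
    """Predicted test loss (nats/token) for N parameters and D training tokens."""
    return E + A / N**alpha + B / D**beta

def train_flops(N, D):
    return 6 * N * D

# Chinchilla itself: 70 billion parameters trained on 1.4 trillion tokens.
print(loss(70e9, 1.4e12))         # ~1.94 nats/token
print(train_flops(70e9, 1.4e12))  # ~5.9e23 FLOPs
```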
Emergent abilities Performance of bigger models on various tasks, when plotted on a log-log scale, appears as a linear extrapolation of performance achieved by smaller models. However, this linearity may be punctuated by "break(s)" in the scaling law, where the slope of the line changes abruptly, and where larger models acquire "emergent abilities". These abilities arise from the complex interaction of the model's components and are not explicitly programmed or designed. Furthermore, recent research has demonstrated that AI systems, including large language models, can employ heuristic reasoning akin to human cognition. They balance between exhaustive logical processing and the use of cognitive shortcuts (heuristics), adapting their reasoning strategies to optimize the trade-off between accuracy and effort. This behavior aligns with principles of resource-rational human cognition, as discussed in classical theories of bounded rationality and dual-process theory. The most intriguing among the emergent abilities is in-context learning from example demonstrations. In-context learning is involved in tasks such as arithmetic, decoding the International Phonetic Alphabet, unscrambling a word's letters, disambiguating a word in context, converting spatial words, interpreting cardinal directions (for example, replying "northeast" upon [0, 0, 1; 0, 0, 0; 0, 0, 0]), and naming color terms represented in text. Another example is chain-of-thought prompting: model outputs are improved by chain-of-thought prompting only when model size exceeds 62B, while smaller models perform better when prompted to answer immediately, without chain of thought. Further examples are identifying offensive content in paragraphs of Hinglish (a combination of Hindi and English), and generating a similar English equivalent of Kiswahili proverbs. Schaeffer et al. argue that the emergent abilities are not unpredictably acquired, but predictably acquired according to a smooth scaling law. The authors considered a toy statistical model of an LLM solving multiple-choice questions, and showed that this statistical model, modified to account for other types of tasks, applies to these tasks as well. Let N be the number of parameters and y the performance of the model: in such a model, per-token accuracy improves smoothly with N, but metrics that only credit an exactly correct multi-token answer convert this smooth improvement into an apparently abrupt jump. Interpretation Large language models by themselves are black boxes, and it is not clear how they can perform linguistic tasks. There are several methods for understanding how LLMs work. Mechanistic interpretability aims to reverse-engineer LLMs by discovering symbolic algorithms that approximate the inference performed by an LLM. One example is Othello-GPT, where a small transformer is trained to predict legal Othello moves. It was found that there is a linear representation of the Othello board, and that modifying the representation changes the predicted legal Othello moves in the correct way. In another example, a small transformer is trained on Karel programs. Similar to the Othello-GPT example, there is a linear representation of Karel program semantics, and modifying the representation changes output in the correct way. The model also generates correct programs that are on average shorter than those in the training set. In another example, the authors trained small transformers on modular arithmetic addition. The resulting models were reverse-engineered, and it turned out they used the discrete Fourier transform. Understanding and intelligence NLP researchers were evenly split when asked, in a 2022 survey, whether (untuned) LLMs "could (ever) understand natural language in some nontrivial sense".
Proponents of "LLM understanding" believe that some LLM abilities, such as mathematical reasoning, imply an ability to "understand" certain concepts. A Microsoft team argued in 2023 that GPT-4 "can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more" and that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system": "Can one reasonably say that a system that passes exams for software engineering candidates is not really intelligent?" Ilya Sutskever argues that predicting the next word sometimes involves reasoning and deep insights, for example if the LLM has to predict the name of the criminal in an unknown detective novel after processing the entire story leading up to the revelation. Some researchers characterize LLMs as "alien intelligence". For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding." In contrast, some proponents of the "LLMs lack understanding" school believe that existing LLMs are "simply remixing and recombining existing writing", a phenomenon known as stochastic parrot, or they point to the deficits existing LLMs continue to have in prediction skills, reasoning skills, agency, and explainability. For example, GPT-4 has natural deficits in planning and in real-time learning. Generative LLMs have been observed to confidently assert claims of fact which do not seem to be justified by their training data, a phenomenon which has been termed "hallucination". Specifically, hallucinations in the context of LLMs correspond to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input. Neuroscientist Terrence Sejnowski has argued that "The diverging opinions of experts on the intelligence of LLMs suggests that our old ideas based on natural intelligence are inadequate". The matter of LLM's exhibiting intelligence or understanding has two main aspects – the first is how to model thought and language in a computer system, and the second is how to enable the computer system to generate human like language. These aspects of language as a model of cognition have been developed in the field of cognitive linguistics. American linguist George Lakoff presented Neural Theory of Language (NTL) as a computational basis for using language as a model of learning tasks and understanding. The NTL Model outlines how specific neural structures of the human brain shape the nature of thought and language and in turn what are the computational properties of such neural systems that can be applied to model thought and language in a computer system. After a framework for modeling language in a computer systems was established, the focus shifted to establishing frameworks for computer systems to generate language with acceptable grammar. 
In his 2014 book The Language Myth: Why Language Is Not An Instinct, British cognitive linguist and digital communication technologist Vyvyan Evans mapped out the role of probabilistic context-free grammar (PCFG) in enabling NLP to model cognitive patterns and generate human-like language. Evaluation Perplexity The canonical measure of the performance of an LLM is its perplexity on a given text corpus. Perplexity measures how well a model predicts the contents of a dataset; the higher the likelihood the model assigns to the dataset, the lower the perplexity. In mathematical terms, perplexity is the exponential of the average negative log-likelihood per token: Perplexity = exp( -(1/N) * sum_{i=1}^{N} log Pr(token_i | context for token_i) ). Here, N is the number of tokens in the text corpus, and "context for token i" depends on the specific type of LLM. If the LLM is autoregressive, then "context for token i" is the segment of text appearing before token i. If the LLM is masked, then "context for token i" is the segment of text surrounding token i. Because language models may overfit to their training data, models are usually evaluated by their perplexity on a test set. This evaluation is potentially problematic for larger models which, as they are trained on increasingly large corpora of text, are increasingly likely to inadvertently include portions of any given test set. BPW, BPC, and BPT In information theory, the concept of entropy is intricately linked to perplexity, a relationship notably established by Claude Shannon. This relationship is mathematically expressed as Entropy = log2(Perplexity). Entropy, in this context, is commonly quantified in terms of bits per word (BPW) or bits per character (BPC), which hinges on whether the language model utilizes word-based or character-based tokenization. Notably, in the case of larger language models that predominantly employ sub-word tokenization, bits per token (BPT) emerges as a seemingly more appropriate measure. However, due to the variance in tokenization methods across different large language models (LLMs), BPT does not serve as a reliable metric for comparative analysis among diverse models. To convert BPT into BPW, one can multiply it by the average number of tokens per word. In the evaluation and comparison of language models, cross-entropy is generally the preferred metric over entropy. The underlying principle is that a lower BPW is indicative of a model's enhanced capability for compression. This, in turn, reflects the model's proficiency in making accurate predictions. Task-specific datasets and benchmarks A large number of testing datasets and benchmarks have also been developed to evaluate the capabilities of language models on more specific downstream tasks. Tests may be designed to evaluate a variety of capabilities, including general knowledge, commonsense reasoning, and mathematical problem-solving. One broad category of evaluation dataset is question answering datasets, consisting of pairs of questions and correct answers, for example, ("Have the San Jose Sharks won the Stanley Cup?", "No"). A question answering task is considered "open book" if the model's prompt includes text from which the expected answer can be derived (for example, the previous question could be adjoined with some text which includes the sentence "The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016."). Otherwise, the task is considered "closed book", and the model must draw on knowledge retained during training.
Some examples of commonly used question answering datasets include TruthfulQA, Web Questions, TriviaQA, and SQuAD. Evaluation datasets may also take the form of text completion, having the model select the most likely word or sentence to complete a prompt, for example: "Alice was friends with Bob. Alice went to visit her friend, ". Some composite benchmarks have also been developed which combine a diversity of different evaluation datasets and tasks. Examples include GLUE, SuperGLUE, MMLU, BIG-bench, and HELM. OpenAI has released tools for running composite benchmarks, but noted that the eval results are sensitive to the prompting method. Some public datasets contain questions that are mislabeled, ambiguous, unanswerable, or otherwise of low-quality, which can be cleaned to give more reliable benchmark scores. It was previously standard to report results on a heldout portion of an evaluation dataset after doing supervised fine-tuning on the remainder. It is now more common to evaluate a pre-trained model directly through prompting techniques, though researchers vary in the details of how they formulate prompts for particular tasks, particularly with respect to how many examples of solved tasks are adjoined to the prompt (i.e. the value of n in n-shot prompting). Adversarially constructed evaluations Because of the rapid pace of improvement of large language models, evaluation benchmarks have suffered from short lifespans, with state of the art models quickly "saturating" existing benchmarks, exceeding the performance of human annotators, leading to efforts to replace or augment the benchmark with more challenging tasks. In addition, there are cases of "shortcut learning" wherein AIs sometimes "cheat" on multiple-choice tests by using statistical correlations in superficial test question wording in order to guess the correct responses, without necessarily understanding the actual question being asked. Some datasets have been constructed adversarially, focusing on particular problems on which extant language models seem to have unusually poor performance compared to humans. One example is the TruthfulQA dataset, a question answering dataset consisting of 817 questions which language models are susceptible to answering incorrectly by mimicking falsehoods to which they were repeatedly exposed during training. For example, an LLM may answer "No" to the question "Can you teach an old dog new tricks?" because of its exposure to the English idiom you can't teach an old dog new tricks, even though this is not literally true. Another example of an adversarial evaluation dataset is Swag and its successor, HellaSwag, collections of problems in which one of multiple options must be selected to complete a text passage. The incorrect completions were generated by sampling from a language model and filtering with a set of classifiers. The resulting problems are trivial for humans but at the time the datasets were created state of the art language models had poor accuracy on them. For example: We see a fitness center sign. We then see a man talking to the camera and sitting and laying on a exercise ball. The man... a) demonstrates how to increase efficient exercise work by running up and down balls. b) moves all his arms and legs and builds up a lot of muscle. c) then plays the ball and we see a graphics and hedge trimming demonstration. d) performs sit ups while on the ball and talking. BERT selects b) as the most likely completion, though the correct answer is d). 
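Returning to the perplexity and bits-per-token measures defined earlier, both are straightforward to compute once a model's per-token log-likelihoods are available. The following minimal sketch uses made-up numbers, and the 1.3 tokens-per-word factor is an assumption for illustration only:

```python
# Perplexity and bits-per-token from per-token log-likelihoods (toy numbers).
import math

# log Pr(token_i | context_i) for a hypothetical 5-token test text:
log_probs = [-2.1, -0.4, -3.0, -1.2, -0.7]

avg_nll = -sum(log_probs) / len(log_probs)  # average negative log-likelihood (nats)
perplexity = math.exp(avg_nll)              # lower is better
bpt = avg_nll / math.log(2)                 # nats -> bits per token
bpw = bpt * 1.3                             # x assumed average tokens per word

print(round(perplexity, 2), round(bpt, 2), round(bpw, 2))  # 4.39 2.14 2.78
```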
Wider impact In 2023, Nature Biomedical Engineering wrote that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time." Goldman Sachs suggested in 2023 that generative language AI could increase global GDP by 7% in the next ten years, and could expose to automation 300 million jobs globally. Memorization and copyright Memorization is an emergent behavior in LLMs in which long strings of text are occasionally output verbatim from training data, contrary to typical behavior of traditional artificial neural nets. Evaluations of controlled LLM output measure the amount memorized from training data (focused on GPT-2-series models) as variously over 1% for exact duplicates or up to about 7%. A 2023 study showed that when ChatGPT 3.5 turbo was prompted to repeat the same word indefinitely, after a few hundreds of repetitions, it would start outputting excerpts from its training data. Security Some commenters expressed concern over accidental or deliberate creation of misinformation, or other forms of misuse. For example, the availability of large language models could reduce the skill-level required to commit bioterrorism; biosecurity researcher Kevin Esvelt has suggested that LLM creators should exclude from their training data papers on creating or enhancing pathogens. The potential presence of "sleeper agents" within LLM models is another emerging security concern. These are hidden functionalities built into the model that remain dormant until triggered by a specific event or condition. Upon activation, the LLM deviates from its expected behavior to make insecure actions. LLM applications accessible to the public, like ChatGPT or Claude, typically incorporate safety measures designed to filter out harmful content. However, implementing these controls effectively has proven challenging. For instance, a 2023 study proposed a method for circumventing LLM safety systems. Similarly, Yongge Wang illustrated in 2024 how a potential criminal could potentially bypass ChatGPT 4o's safety controls to obtain information on establishing a drug trafficking operation. Algorithmic bias While LLMs have shown remarkable capabilities in generating human-like text, they are susceptible to inheriting and amplifying biases present in their training data. This can manifest in skewed representations or unfair treatment of different demographics, such as those based on race, gender, language, and cultural groups. Since English data is overrepresented in current large language models' training data, it may also downplay non-English views. Stereotyping AI models can reinforce a wide range of stereotypes, including those based on gender, ethnicity, age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways. Notably, gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. Large language models often assign roles and characteristics based on traditional gender norms. For example, it might associate nurses or secretaries predominantly with women and engineers or CEOs with men. 
Political bias Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.
Technology
Computer science
null
47690389
https://en.wikipedia.org/wiki/Hook
Hook
A hook is a tool consisting of a length of material, typically metal, that contains a portion that is curved/bent back or has a deeply grooved indentation, which serves to grab, latch or in any way attach itself onto another object. The hook's design allows traction forces to be relayed through the curved/indented portion to and from the proximal end of the hook, which is either a straight shaft (known as the hook's shank) or a ring (sometimes called the hook's "eye") for attachment to a thread, rope or chain, providing a reversible attachment between two objects. In many cases, the distal end of the hook is sharply pointed to enable penetration into the target material, providing a firmer anchorage. Some hooks, particularly fish hooks, also have a barb, a backwards-pointed projection near the pointed end that functions as a secondary "mini-hook" to catch and trap surrounding material, ensuring that the hook point cannot be easily pulled back out once embedded in the target. Variations Bagging hook, a large sickle or reaping hook used for harvesting grain Bondage hook, used in sexual bondage play Cabin hook, a hooked bar that engages into an eye screw, used on doors Cap hook, hat ornament of the 15th and 16th centuries Cargo hook, different types of hook systems for helicopters Crochet hook, used for crocheting thread or yarn Drapery hook, for hanging drapery Dress hook, fashion accessory Ear hook, to attach earrings Fish hook, used to catch fish Flesh-hook, used in cooking meat Grappling hook, a hook attached to a rope, designed to be thrown and snagged on a target Hook and chain coupler, mechanical part for the coupling for railway vehicles Hook (hand tool), also known as longshoreman's hook and bale hook, a tool used for securing and moving loads Hook-and-eye closure, a clothing fastener Hook-and-loop fastener, a type of textile fastener Hook hand, also called prosthesis, an artificial hand replacement made from a hook Lifting hook, for grabbing and lifting loads Mail hook, for grabbing mail bags without stopping a train Meat hook, for hanging up meat or carcasses of animals in butcheries and meat industry Prosthetic hook or transradial prosthesis, part of a prosthetic arm for amputees Purse hook, used to keep a woman's purse from touching the floor Shepherd's hook, a staff used in herding sheep or other animals Siege hook, an Ancient Roman weapon used to pull stones from a wall during a siege Tailhook, used by aircraft to snag cables in order to slow down more quickly
Technology
Rigid components
null
57939963
https://en.wikipedia.org/wiki/Random%20column%20packing
Random column packing
Random column packing is the practice of packing a distillation column with randomly oriented packing material in order to maximize the surface area over which reactants can interact, while minimizing the complexity of construction of such columns. Random column packing is an alternative to structured column packing. Packed columns Packed columns utilizing filter media for chemical exchange are the most common devices used in the chemical industry for optimizing contact between reactants. Packed columns are used in a range of industries to allow intimate contact between two immiscible or partly immiscible fluids, which can be liquid/gas or liquid/liquid. The fluids are passed through the column in countercurrent flow. In the column it is important to maintain effective mass transfer, so it is essential to select a packing which will provide a large surface area for mass transfer. History Random packing was used as early as 1820. Originally the packing material consisted of glass spheres; in 1850 they were replaced by more porous pumice stone and pieces of coke. Applications Random packed columns are used in a variety of applications, including distillation, stripping, carbon dioxide scrubbing, and liquid–liquid extraction. Types Raschig ring The Raschig ring is a piece of tube, invented circa 1914, that is used in large numbers in a packed column. Raschig rings are usually made of ceramic or metal, and they provide a large surface area within the column, allowing for interaction between liquid and gas vapors. Lessing ring Lessing rings are a type of random packing similar to the Raschig ring, invented in the early 20th century by the German-born British chemist Rudolf Lessing (1878-1964) of the Mond Nickel Company. Originally wrapped from steel strips according to his 1919 patent, they are now made of ceramic. Lessing rings have partitions inside, which increase the surface area and enhance mass transfer efficiency. Lessing rings have a high density and excellent heat and acid resistance. They withstand corrosion and are used in regenerative oxide systems and transfer systems. Pall ring Pall rings are the most common form of random packing. They are similar to Lessing rings and were developed from the Raschig ring. Pall rings have similar cylindrical dimensions but have rows of windows, which improve performance by increasing the surface area. They are suited for low-pressure-drop and high-capacity applications. They have a degree of randomness and a relatively high liquid hold-up, promoting high absorption, especially when the rate of reaction is slow. The cross structure of the Pall ring makes it mechanically robust and suitable for use in deep packed beds. Białecki ring The Białecki ring was patented in 1974 by Zbigniew Białecki, a Polish chemical engineer from Kraków. Białecki rings are an improved version of Raschig rings. The rings may be injection-moulded from plastics or press-formed from metal sheet without welding. The specific surface area of the packing ranges between 60 and 440 m2/m3. Dixon ring Dixon rings have a similar design to Lessing rings. They are made of stainless steel mesh, which after pre-wetting gives Dixon rings a low pressure drop and a very large surface area, and hence a high rate of mass transfer. Dixon rings also have a large liquid hold-up. They are used for laboratory distillation and scrubbing applications.
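As a rough illustration of how ring geometry translates into the specific surface areas quoted above, consider a hypothetical 25 mm Raschig ring; the dimensions and the bulk packing count below are assumed for illustration, not taken from the article:

```python
# Rough geometric estimate of the specific surface area of a bed of
# Raschig rings (illustrative; all figures below are assumptions).
import math

d_out, h, t = 0.025, 0.025, 0.0025   # outer diameter, height, wall thickness (m)
d_in = d_out - 2 * t                 # inner diameter
rings_per_m3 = 50_000                # assumed bulk packing count for a 25 mm ring

area_per_ring = (math.pi * d_out * h                      # outer wall
                 + math.pi * d_in * h                     # inner wall
                 + 2 * math.pi / 4 * (d_out**2 - d_in**2))  # the two rims

print(round(area_per_ring * rings_per_m3))  # ~194 m^2 per m^3 of packed bed
```

The result falls within the 60-440 m2/m3 range cited above; smaller rings pack more pieces per cubic metre and so give a higher specific surface area.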
Physical sciences
Phase separations
Chemistry
64514823
https://en.wikipedia.org/wiki/South%20Pole%20Wall
South Pole Wall
The South Pole Wall (SPW) is a massive cosmic structure formed by a giant wall of galaxies (a galaxy filament) that extends across at least 1.37 billion light-years of space, the nearest part of which lies about half a billion light-years from Earth. As projected on the sky, the structure is dense in five known places, including one very near the south celestial pole, and is, according to the international team of astronomers that discovered it, "...the largest contiguous feature in the local volume and comparable to the Sloan Great Wall at half the distance ...". Its discovery was announced in July 2020 by Daniel Pomarède of Paris-Saclay University and R. Brent Tully and colleagues of the University of Hawaiʻi. Pomarède explained, "One might wonder how such a large and not-so distant structure remained unnoticed. This is due to its location in a region of the sky that has not been completely surveyed, and where direct observations are hindered by foreground patches of galactic dust and clouds. We have found it thanks to its gravitational influence, imprinted in the velocities of a sample of galaxies".
Size
The wall measures over 1.37 billion light-years in length and spans a large zone of the sky about 500 million light-years away. The structure lies, at least to a small extent, behind the Milky Way galaxy's Zone of Avoidance (or Zone of Galactic Obscuration). The filament curves from the constellation Perseus in the northern sky to Telescopium in the far south, in between skirting slightly over the south celestial pole itself. It is so large that it measurably affects the local expansion of the universe. According to astronomer Tully, "We wonder if the South Pole Wall is much bigger than what we see. What we have mapped stretches across the full domain of the region we have surveyed. We are early explorers of the cosmos, extending our maps into unknown territory." According to the astronomers who discovered it, "We will not be certain of its full extent, nor whether it is unusual, until we map the universe on a significantly grander scale."
Physical sciences
Notable patches of universe
Astronomy
61044116
https://en.wikipedia.org/wiki/Upper%20mantle
Upper mantle
The upper mantle of Earth is a very thick layer of rock inside the planet, which begins just beneath the crust (at about 10 km under the oceans and about 35 km under the continents) and ends at the top of the lower mantle at a depth of 670 km. Temperatures range from approximately 200 °C at the upper boundary with the crust to approximately 900 °C at the boundary with the lower mantle. Upper mantle material that has come up onto the surface comprises about 55% olivine, 35% pyroxene, and 5 to 10% of calcium oxide and aluminum oxide minerals such as plagioclase, spinel, or garnet, depending upon depth.
Seismic structure
The density profile through Earth is determined by the velocity of seismic waves. Density increases progressively in each layer, largely due to compression of the rock at increased depths. Abrupt changes in density occur where the material composition changes. The upper mantle begins just beneath the crust and ends at the top of the lower mantle. The upper mantle causes the tectonic plates to move. Crust and mantle are distinguished by composition, while the lithosphere and asthenosphere are defined by a change in mechanical properties. The top of the mantle is defined by a sudden increase in the speed of seismic waves, which Andrija Mohorovičić first noted in 1909; this boundary is now referred to as the Mohorovičić discontinuity or "Moho". The Moho defines the base of the crust and lies between about 10 and 70 km below the surface of the Earth. Oceanic crust is thinner than continental crust and is generally less than 10 km thick. Continental crust is about 35 km thick, but the large crustal root under the Tibetan Plateau is approximately 70 km thick. The thickness of the upper mantle is about 640 km. The entire mantle is about 2,900 km thick, which means the upper mantle is only about 20% of the total mantle thickness. The boundary between the upper and lower mantle is the 670 km discontinuity. Earthquakes at shallow depths result from stick-slip faulting; at greater depths, however, the hot, high-pressure conditions inhibit further seismicity, and the mantle is viscous and incapable of brittle faulting. However, in subduction zones, earthquakes are observed down to about 670 km.
Lehmann discontinuity
The Lehmann discontinuity is an abrupt increase of P-wave and S-wave velocities at a depth of about 220 km. (Note that this is a different "Lehmann discontinuity" than the one between the Earth's inner and outer cores.)
Transition zone
The transition zone is located between the upper mantle and the lower mantle, at depths between 410 km and 670 km. This is thought to occur as a result of the rearrangement of grains in olivine to form a denser crystal structure as a result of the increase in pressure with increasing depth. Below a depth of about 670 km, due to pressure changes, ringwoodite minerals change into two new denser phases, bridgmanite and periclase. This can be seen using body waves from earthquakes, which are converted, reflected, or refracted at the boundary, and predicted from mineral physics, as the phase changes are temperature and density-dependent and hence depth-dependent.
410 km discontinuity
A single peak is seen in all seismological data at 410 km, which is predicted by the single transition from α- to β-Mg2SiO4 (olivine to wadsleyite). From the Clapeyron slope, this discontinuity is expected to be shallower in cold regions, such as subducting slabs, and deeper in warmer regions, such as mantle plumes.
670 km discontinuity
This is the most complex discontinuity and marks the boundary between the upper and lower mantle.
It appears in PP precursors (a wave that reflects off the discontinuity once) only in certain regions, but is always apparent in SS precursors. It is seen as single and double reflections in receiver functions for P-to-S conversions over a broad range of depths (640–720 km, or 397–447 mi). The Clapeyron slope predicts a deeper discontinuity in colder regions and a shallower discontinuity in hotter regions (a worked example appears at the end of this article). This discontinuity is generally linked to the transition from ringwoodite to bridgmanite and periclase. This reaction is thermodynamically endothermic and creates a viscosity jump; both characteristics cause this phase transition to play an important role in geodynamical models.
Other discontinuities
There is another major phase transition predicted at 520 km for the transition of olivine (β to γ) and garnet in the pyrolite mantle. This one has only sporadically been observed in seismological data. Other non-global phase transitions have been suggested at a range of depths.
Temperature and pressure
Temperatures range from approximately 200 °C at the upper boundary with the crust to approximately 4,000 °C at the core-mantle boundary. The highest temperature of the upper mantle is about 900 °C. Although this high temperature far exceeds the melting points of the mantle rocks at the surface, the mantle is almost exclusively solid. The enormous lithostatic pressure exerted on the mantle prevents melting, because the temperature at which melting begins (the solidus) increases with pressure. Pressure increases with depth, since the material beneath has to support the weight of all the material above it. The entire mantle is thought to deform like a fluid on long timescales, with permanent plastic deformation. The pressure at the base of the upper mantle is about 24 GPa, compared to about 136 GPa at the bottom of the mantle. Estimates for the viscosity of the upper mantle range between 10¹⁹ and 10²⁴ Pa·s, depending on depth, temperature, composition, state of stress, and numerous other factors. The upper mantle can only flow very slowly. However, when large forces are applied to the uppermost mantle, it can become weaker, and this effect is thought to be important in allowing the formation of tectonic plate boundaries. Although there is a tendency to larger viscosity at greater depth, this relation is far from linear and shows layers with dramatically decreased viscosity, in particular in the upper mantle and at the boundary with the core.
Movement
Because of the temperature difference between the Earth's surface and outer core, and the ability of the crystalline rocks at high pressure and temperature to undergo slow, creeping, viscous-like deformation over millions of years, there is a convective material circulation in the mantle. Hot material upwells, while cooler (and heavier) material sinks downward. Downward motion of material occurs at convergent plate boundaries called subduction zones. Locations on the surface that lie over plumes are predicted to have high elevation (because of the buoyancy of the hotter, less-dense plume beneath) and to exhibit hot-spot volcanism.
Mineral composition
The seismic data are not sufficient to determine the composition of the mantle. Observations of rocks exposed on the surface and other evidence reveal that the upper mantle consists of the mafic minerals olivine and pyroxene, and has a density of about 3.33 g/cm3. Upper mantle material that has come up onto the surface comprises about 55% olivine and 35% pyroxene, and 5 to 10% of calcium oxide and aluminum oxide.
The upper mantle is dominantly peridotite, composed primarily of variable proportions of the minerals olivine, clinopyroxene, orthopyroxene, and an aluminous phase. The aluminous phase is plagioclase in the uppermost mantle, then spinel, and then garnet below about 100 km. Gradually through the upper mantle, pyroxenes become less stable and transform into majoritic garnet. Experiments on olivines and pyroxenes show that these minerals change structure as pressure increases at greater depth, which explains why the density curves are not perfectly smooth. When there is a conversion to a denser mineral structure, the seismic velocity rises abruptly and creates a discontinuity. At the top of the transition zone, olivine undergoes isochemical phase transitions to wadsleyite and ringwoodite. Unlike nominally anhydrous olivine, these high-pressure olivine polymorphs have a large capacity to store water in their crystal structure. This has led to the hypothesis that the transition zone may host a large quantity of water. In Earth's interior, olivine occurs in the upper mantle at depths less than about 410 km, and ringwoodite is inferred within the transition zone from about 520 km depth. Seismic discontinuities at about 410 km, 520 km, and 670 km depth have been attributed to phase changes involving olivine and its polymorphs. At the base of the transition zone, ringwoodite decomposes into bridgmanite (formerly called magnesium silicate perovskite) and ferropericlase. Garnet also becomes unstable at or slightly below the base of the transition zone. Kimberlites erupt explosively from the Earth's interior and sometimes carry rock fragments. Some of these xenolithic fragments are diamonds, which can only come from the higher pressures below the crust; the rocks that accompany them are ultramafic nodules and peridotite.
Chemical composition
The chemical composition of the upper mantle appears to be very similar to that of the crust. One difference is that rocks and minerals of the mantle tend to have more magnesium and less silicon and aluminum than the crust. The four most abundant elements in the upper mantle are oxygen, magnesium, silicon, and iron.
Exploration
Exploration of the mantle is generally conducted at the seabed rather than on land because of the oceanic crust's relative thinness as compared to the significantly thicker continental crust. The first attempt at mantle exploration, known as Project Mohole, was abandoned in 1966 after repeated failures and cost overruns; its deepest penetration was approximately 180 m below the seafloor. In 2005 an oceanic borehole reached 1,416 m below the seafloor from the ocean drilling vessel JOIDES Resolution. On 5 March 2007, a team of scientists on board the RRS James Cook embarked on a voyage to an area of the Atlantic seafloor where the mantle lies exposed without any crust covering, midway between the Cape Verde Islands and the Caribbean Sea. The exposed site lies several kilometres beneath the ocean surface and covers thousands of square kilometers. The Chikyu Hakken mission attempted to use the Japanese vessel Chikyū to drill up to 7,000 m below the seabed. On 27 April 2012, Chikyū drilled to a depth of 7,740 m below sea level, setting a new world record for deep-sea drilling. This record has since been surpassed by the ill-fated Deepwater Horizon mobile offshore drilling unit, operating on the Tiber prospect in the Mississippi Canyon Field, United States Gulf of Mexico, when it achieved a world record for total length for a vertical drilling string of 10,062 m (33,011 ft). The previous record was held by the U.S.
vessel Glomar Challenger, which in 1978 drilled to 7,049.5 meters (23,130 feet) below sea level in the Mariana Trench. On 6 September 2012, the scientific deep-sea drilling vessel Chikyū set a new world record by drilling down and obtaining rock samples from deeper than 2,111 m below the seafloor off the Shimokita Peninsula of Japan in the northwest Pacific Ocean. A novel method of exploring the uppermost few hundred kilometers of the Earth was proposed in 2005, consisting of a small, dense, heat-generating probe that melts its way down through the crust and mantle while its position and progress are tracked by acoustic signals generated in the rocks. The probe consists of an outer sphere of tungsten with a cobalt-60 interior acting as a radioactive heat source; it was calculated that such a probe would take about half a year to reach the oceanic Moho. Exploration can also be aided through computer simulations of the evolution of the mantle. In 2009, a supercomputer application provided new insight into the distribution of mineral deposits, especially isotopes of iron, from when the mantle developed 4.5 billion years ago. In 2023, JOIDES Resolution recovered cores of what appeared to be rock from the upper mantle after drilling only a few hundred meters into the Atlantis Massif. The borehole reached a maximum depth of 1,268 meters and recovered 886 meters of rock samples consisting primarily of peridotite. There is debate over the extent to which the samples represent the upper mantle, with some arguing that the effects of seawater on the samples situate them as examples of deep lower crust. However, the samples offer a much closer analogue to mantle rock than magmatic xenoliths, as the sampled rock never melted into magma or recrystallized.
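The Clapeyron-slope argument used above for the 410 km and 670 km discontinuities can be made quantitative with a short worked example: a phase boundary with slope γ = dP/dT is displaced vertically by a temperature anomaly ΔT according to Δz ≈ γΔT/(ρg). The numbers below are illustrative textbook values (assumptions, not figures from this article):

    \Delta z \approx \frac{\gamma \, \Delta T}{\rho g}, \qquad
    \gamma \approx +3\ \text{MPa/K},\quad \Delta T \approx -700\ \text{K},\quad
    \rho \approx 3500\ \text{kg/m}^3,\quad g \approx 9.8\ \text{m/s}^2

    \Delta z \approx \frac{(3\times 10^{6})\,(-700)}{3500 \times 9.8}\ \text{m}
    \approx -6\times 10^{4}\ \text{m} \approx -60\ \text{km}

A cold subducting slab therefore lifts the olivine-to-wadsleyite boundary by a few tens of kilometres, matching the "shallower in cold regions" behaviour stated above; because the 670 km transition has a negative Clapeyron slope, the same temperature anomaly deflects that boundary downward instead.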
Physical sciences
Tectonics
Earth science
73291755
https://en.wikipedia.org/wiki/Generative%20artificial%20intelligence
Generative artificial intelligence
Generative artificial intelligence (generative AI, GenAI, or GAI) is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts. Improvements in transformer-based deep neural networks, particularly large language models (LLMs), enabled an AI boom of generative AI systems in the early 2020s. These include chatbots such as ChatGPT, Copilot, Gemini, and LLaMA; text-to-image systems such as Stable Diffusion, Midjourney, and DALL-E; and text-to-video AI generators such as Sora. Companies such as OpenAI, Anthropic, Microsoft, Google, and Baidu, as well as numerous smaller firms, have developed generative AI models. Generative AI has uses across a wide range of industries, including software development, healthcare, finance, entertainment, customer service, sales and marketing, art, writing, fashion, and product design. However, concerns have been raised about the potential misuse of generative AI, such as cybercrime, the use of fake news or deepfakes to deceive or manipulate people, and the mass replacement of human jobs. Intellectual property law concerns also exist around generative models that are trained on and emulate copyrighted works of art.
History
Early history
Since its inception, researchers in the field have raised philosophical and ethical arguments about the nature of the human mind and the consequences of creating artificial beings with human-like intelligence; these issues have previously been explored by myth, fiction and philosophy since antiquity. The concept of automated art dates back at least to the automata of ancient Greek civilization, where inventors such as Daedalus and Hero of Alexandria were described as having designed machines capable of writing text, generating sounds, and playing music. The tradition of creative automatons has flourished throughout history, exemplified by Maillardet's automaton created in the early 1800s. Markov chains have long been used to model natural languages since their development by Russian mathematician Andrey Markov in the early 20th century. Markov published his first paper on the topic in 1906, and analyzed the pattern of vowels and consonants in the novel Eugene Onegin using Markov chains. Once a Markov chain is learned on a text corpus, it can then be used as a probabilistic text generator.
Academic artificial intelligence
The academic discipline of artificial intelligence was established at a research workshop held at Dartmouth College in 1956 and has experienced several waves of advancement and optimism in the decades since. Artificial intelligence research began in the 1950s with works like Computing Machinery and Intelligence (1950) and the 1956 Dartmouth Summer Research Project on AI. Since the 1950s, artists and researchers have used artificial intelligence to create artistic works. By the early 1970s, Harold Cohen was creating and exhibiting generative AI works created by AARON, the computer program Cohen created to generate paintings. The terms generative AI planning or generative planning were used in the 1980s and 1990s to refer to AI planning systems, especially computer-aided process planning, used to generate sequences of actions to reach a specified goal (a toy illustration follows below).
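As a toy illustration of generative planning in this state-space-search sense, the following Python sketch generates a sequence of actions that transforms a start state into a specified goal state. The domain, state encoding, and action names are invented for illustration; they are not from any system mentioned in this article.

    # Minimal generative planner: breadth-first state-space search that
    # "generates" an action sequence reaching a specified goal.
    from collections import deque

    def plan(start, goal, actions):
        """Return a list of action names leading from start to goal, or None."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, steps = frontier.popleft()
            if state == goal:
                return steps
            for name, apply_fn in actions:
                nxt = apply_fn(state)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
        return None

    # Invented toy domain: a robot arm moving between 3 positions with a
    # gripper. State: (position, holding). Actions move or toggle the gripper.
    actions = [
        ("move-left",  lambda s: (s[0] - 1, s[1]) if s[0] > 0 else None),
        ("move-right", lambda s: (s[0] + 1, s[1]) if s[0] < 2 else None),
        ("grasp",      lambda s: (s[0], True) if not s[1] else None),
        ("release",    lambda s: (s[0], False) if s[1] else None),
    ]

    print(plan(start=(0, False), goal=(2, True), actions=actions))
    # ['move-right', 'move-right', 'grasp'] -- a shortest plan to the goal

Breadth-first search guarantees a shortest plan in this toy setting; the production systems described below layered constraint satisfaction and domain knowledge on top of the same basic idea.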
Generative AI planning systems used symbolic AI methods such as state-space search and constraint satisfaction and were a "relatively mature" technology by the early 1990s. They were used to generate crisis action plans for military use, process plans for manufacturing, and decision plans such as in prototype autonomous spacecraft.
Generative neural nets (2014-2019)
Since its inception, the field of machine learning has used both discriminative models and generative models to model and predict data. Beginning in the late 2000s, the emergence of deep learning drove progress and research in image classification, speech recognition, natural language processing and other tasks. Neural networks in this era were typically trained as discriminative models, due to the difficulty of generative modeling. In 2014, advancements such as the variational autoencoder and generative adversarial network produced the first practical deep neural networks capable of learning generative models, as opposed to discriminative ones, for complex data such as images. These deep generative models were the first to output not only class labels for images but also entire images. In 2017, the Transformer network enabled advancements in generative models compared to older long short-term memory (LSTM) models, leading to the first generative pre-trained transformer (GPT), known as GPT-1, in 2018. This was followed in 2019 by GPT-2, which demonstrated the ability to generalize unsupervised to many different tasks as a foundation model. The new generative models introduced during this period allowed for large neural networks to be trained using unsupervised learning or semi-supervised learning, rather than the supervised learning typical of discriminative models. Unsupervised learning removed the need for humans to manually label data, allowing for larger networks to be trained.
Generative AI boom (2020-)
In March 2020, 15.ai, a free web application created by an anonymous MIT researcher, demonstrated that convincing character voices could be generated using minimal training data. The platform is credited as the first mainstream service to popularize AI voice cloning (audio deepfakes) in memes and content creation, influencing subsequent developments in voice AI technology. In 2021, the emergence of DALL-E, a transformer-based pixel generative model, marked an advance in AI-generated imagery. This was followed by the releases of Midjourney and Stable Diffusion in 2022, which further democratized access to high-quality artificial intelligence art creation from natural language prompts. These systems demonstrated unprecedented capabilities in generating photorealistic images, artwork, and designs based on text descriptions, leading to widespread adoption among artists, designers, and the general public. In late 2022, the public release of ChatGPT revolutionized the accessibility and application of generative AI for general-purpose text-based tasks. The system's ability to engage in natural conversations, generate creative content, assist with coding, and perform various analytical tasks captured global attention and sparked widespread discussion about AI's potential impact on work, education, and creativity. In March 2023, GPT-4's release represented another jump in generative AI capabilities. A team from Microsoft Research controversially argued that it "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."
However, this assessment was contested by other scholars, who maintained that generative AI remained "still far from reaching the benchmark of 'general human intelligence'" as of 2023. Later in 2023, Meta released ImageBind, an AI model combining multiple modalities including text, images, video, thermal data, 3D data, audio, and motion, paving the way for more immersive generative AI applications. In December 2023, Google unveiled Gemini, a multimodal AI model available in four versions: Ultra, Pro, Flash, and Nano. The company integrated Gemini Pro into its Bard chatbot and announced plans for "Bard Advanced" powered by the larger Gemini Ultra model. In February 2024, Google unified Bard and Duet AI under the Gemini brand, launching a mobile app on Android and integrating the service into the Google app on iOS. In March 2024, Anthropic released the Claude 3 family of large language models, including Claude 3 Haiku, Sonnet, and Opus. The models demonstrated significant improvements in capabilities across various benchmarks, with Claude 3 Opus notably outperforming leading models from OpenAI and Google. In June 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated improved performance compared to the larger Claude 3 Opus, particularly in areas such as coding, multistep workflows, and image analysis. According to a survey by SAS and Coleman Parkes Research, China has emerged as a global leader in generative AI adoption, with 83% of Chinese respondents using the technology, exceeding both the global average of 54% and the U.S. rate of 65%. This leadership is further evidenced by China's intellectual property developments in the field, with a UN report revealing that Chinese entities filed over 38,000 generative AI patents from 2014 to 2023, substantially surpassing the United States in patent applications.
Modalities
A generative AI system is constructed by applying unsupervised machine learning (invoking for instance neural network architectures such as generative adversarial networks (GANs), variational autoencoders (VAEs), or transformers) or self-supervised machine learning trained on a dataset. The capabilities of a generative AI system depend on the modality or type of the data set used. Generative AI can be either unimodal or multimodal; unimodal systems take only one type of input, whereas multimodal systems can take more than one type of input. For example, one version of OpenAI's GPT-4 accepts both text and image inputs.
Text
Generative AI systems trained on words or word tokens include GPT-3, GPT-4, GPT-4o, LaMDA, LLaMA, BLOOM, Gemini and others (see List of large language models). They are capable of natural language processing, machine translation, and natural language generation and can be used as foundation models for other tasks. Data sets include BookCorpus, Wikipedia, and others (see List of text corpora).
Code
In addition to natural language text, large language models can be trained on programming language text, allowing them to generate source code for new computer programs. Examples include OpenAI Codex and the VS Code fork Cursor.
Images
Producing high-quality visual art is a prominent application of generative AI. Generative AI systems trained on sets of images with text captions include Imagen, DALL-E, Midjourney, Adobe Firefly, FLUX.1, Stable Diffusion and others (see Artificial intelligence art, Generative art, and Synthetic media). They are commonly used for text-to-image generation and neural style transfer.
Datasets include LAION-5B and others (see List of datasets in computer vision and image processing).
Audio
Generative AI can also be trained extensively on audio clips to produce natural-sounding speech synthesis and text-to-speech capabilities. An early pioneer in this field was 15.ai, launched in March 2020, which demonstrated the ability to clone character voices using as little as 15 seconds of training data. The website gained widespread attention for its ability to generate emotionally expressive speech for various fictional characters, though it was later taken offline in 2022 due to copyright concerns. Commercial alternatives subsequently emerged, including ElevenLabs' context-aware synthesis tools and Meta Platforms' Voicebox. Generative AI systems such as MusicLM and MusicGen can also be trained on the audio waveforms of recorded music along with text annotations, in order to generate new musical samples based on text descriptions such as "a calming violin melody backed by a distorted guitar riff".
Music
Audio deepfakes of lyrics have been generated, like the song Savages, which used AI to mimic rapper Jay-Z's vocals. Music artists' instrumentals and lyrics are copyrighted, but their voices are not yet protected from generative AI, raising a debate about whether artists should get royalties from audio deepfakes. Many AI music generators have been created that can produce music from a text phrase, genre options, and looped libraries of bars and riffs.
Video
Generative AI trained on annotated video can generate temporally coherent, detailed and photorealistic video clips. Examples include Sora by OpenAI, Gen-1 and Gen-2 by Runway, and Make-A-Video by Meta Platforms.
Actions
Generative AI can also be trained on the motions of a robotic system to generate new trajectories for motion planning or navigation. For example, UniPi from Google Research uses prompts like "pick up blue bowl" or "wipe plate with yellow sponge" to control movements of a robot arm. Multimodal "vision-language-action" models such as Google's RT-2 can perform rudimentary reasoning in response to user prompts and visual input, such as picking up a toy dinosaur when given the prompt "pick up the extinct animal" at a table filled with toy animals and other objects.
3D modeling
Artificially intelligent computer-aided design (CAD) can use text-to-3D, image-to-3D, and video-to-3D to automate 3D modeling. AI-based CAD libraries could also be developed using linked open data of schematics and diagrams. AI CAD assistants are used as tools to help streamline workflow.
Software and hardware
Generative AI models are used to power chatbot products such as ChatGPT, programming tools such as GitHub Copilot, text-to-image products such as Midjourney, and text-to-video products such as Runway Gen-2. Generative AI features have been integrated into a variety of existing commercially available products such as Microsoft Office (Microsoft Copilot), Google Photos, and the Adobe Suite (Adobe Firefly). Many generative AI models are also available as open-source software, including Stable Diffusion and the LLaMA language model. Smaller generative AI models with up to a few billion parameters can run on smartphones, embedded devices, and personal computers. For example, LLaMA-7B (a version with 7 billion parameters) can run on a Raspberry Pi 4, and one version of Stable Diffusion can run on an iPhone 11. Larger models with tens of billions of parameters can run on laptop or desktop computers.
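A back-of-the-envelope sketch of why parameter counts map onto classes of hardware: the dominant memory cost of holding a model is parameters times bytes per parameter, which quantization reduces. The byte widths below are standard; treating weight memory as the only constraint is a simplification, since activations and caches also need space.

    # Rough memory footprint of model weights at different numeric precisions.
    # Weights only, as a first-order sanity check (an illustrative sketch).
    def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
        return n_params * bits_per_param / 8 / 1e9

    for n_params, label in [(7e9, "7B"), (70e9, "70B")]:
        for bits, fmt in [(16, "fp16"), (8, "int8"), (4, "4-bit")]:
            print(f"{label} {fmt}: {weight_memory_gb(n_params, bits):.1f} GB")
    # 7B at 4-bit is ~3.5 GB, small enough for a single-board computer or a
    # phone, while 70B at fp16 (~140 GB) needs one or more datacenter GPUs.

This is why the compression techniques favored by communities such as r/LocalLLaMA matter: halving the bits per parameter halves the memory a consumer graphics card must supply.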
To achieve an acceptable speed, models of this size may require accelerators such as the GPU chips produced by NVIDIA and AMD or the Neural Engine included in Apple silicon products. For example, the 65-billion-parameter version of LLaMA can be configured to run on a desktop PC. The advantages of running generative AI locally include protection of privacy and intellectual property, and avoidance of rate limiting and censorship. The subreddit r/LocalLLaMA in particular focuses on using consumer-grade gaming graphics cards through such techniques as compression. That forum is one of only two sources Andrej Karpathy trusts for language model benchmarks. Yann LeCun has advocated open-source models for their value to vertical applications and for improving AI safety. Language models with hundreds of billions of parameters, such as GPT-4 or PaLM, typically run on datacenter computers equipped with arrays of GPUs (such as NVIDIA's H100) or AI accelerator chips (such as Google's TPU). These very large models are typically accessed as cloud services over the Internet. In 2022, the United States' new export controls on advanced computing and semiconductors imposed restrictions on exports to China of the GPU and AI accelerator chips used for generative AI. Chips such as the NVIDIA A800 and the Biren Technology BR104 were developed to meet the requirements of the sanctions. There is free software on the market capable of recognizing text generated by generative artificial intelligence (such as GPTZero), as well as images, audio or video coming from it. Potential mitigation strategies for detecting generative AI content include digital watermarking, content authentication, information retrieval, and machine learning classifier models. Despite claims of accuracy, both free and paid AI text detectors have frequently produced false positives, mistakenly accusing students of submitting AI-generated work.
Law and regulation
In the United States, a group of companies including OpenAI, Alphabet, and Meta signed a voluntary agreement with the Biden administration in July 2023 to watermark AI-generated content. In October 2023, Executive Order 14110 applied the Defense Production Act to require all US companies to report information to the federal government when training certain high-impact AI models. In the European Union, the proposed Artificial Intelligence Act includes requirements to disclose copyrighted material used to train generative AI systems, and to label any AI-generated output as such. In China, the Interim Measures for the Management of Generative AI Services introduced by the Cyberspace Administration of China regulate any public-facing generative AI. They include requirements to watermark generated images or videos, regulations on training data and label quality, restrictions on personal data collection, and a guideline that generative AI must "adhere to socialist core values".
Copyright
Training with copyrighted content
Generative AI systems such as ChatGPT and Midjourney are trained on large, publicly available datasets that include copyrighted works. AI developers have argued that such training is protected under fair use, while copyright holders have argued that it infringes their rights. Proponents of fair use training have argued that it is a transformative use and does not involve making copies of copyrighted works available to the public.
Critics have argued that image generators such as Midjourney can create nearly identical copies of some copyrighted images, and that generative AI programs compete with the content they are trained on. As of 2024, several lawsuits related to the use of copyrighted material in training are ongoing. Getty Images has sued Stability AI over the use of its images to train Stable Diffusion. Both the Authors Guild and The New York Times have sued Microsoft and OpenAI over the use of their works to train ChatGPT.
Copyright of AI-generated content
A separate question is whether AI-generated works can qualify for copyright protection. The United States Copyright Office has ruled that works created by artificial intelligence without any human input cannot be copyrighted, because they lack human authorship. However, the office has also begun taking public input to determine if these rules need to be refined for generative AI.
Concerns
The development of generative AI has raised concerns from governments, businesses, and individuals, resulting in protests, legal actions, calls to pause AI experiments, and actions by multiple governments. In a July 2023 briefing of the United Nations Security Council, Secretary-General António Guterres stated "Generative AI has enormous potential for good and evil at scale", that AI may "turbocharge global development" and contribute between $10 and $15 trillion to the global economy by 2030, but that its malicious use "could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale".
Job losses
From the early days of the development of AI, there have been arguments put forward by ELIZA creator Joseph Weizenbaum and others about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculations and qualitative, value-based judgements. In April 2023, it was reported that image generation AI had resulted in the loss of 70% of the jobs for video game illustrators in China. In July 2023, developments in generative AI contributed to the 2023 Hollywood labor disputes. Fran Drescher, president of the Screen Actors Guild, declared that "artificial intelligence poses an existential threat to creative professions" during the 2023 SAG-AFTRA strike. Voice generation AI has been seen as a potential challenge to the voice acting sector. The intersection of AI and employment concerns among underrepresented groups globally remains a critical facet. While AI promises efficiency enhancements and skill acquisition, concerns about job displacement and biased recruiting processes persist among these groups, as outlined in surveys by Fast Company. To leverage AI for a more equitable society, proactive steps encompass mitigating biases, advocating transparency, respecting privacy and consent, and embracing diverse teams and ethical considerations. Strategies involve redirecting policy emphasis on regulation, inclusive design, and education's potential for personalized teaching to maximize benefits while minimizing harms.
Racial and gender bias
Generative AI models can reflect and amplify any cultural bias present in the underlying data. For example, a language model might assume that doctors and judges are male, and that secretaries or nurses are female, if those biases are common in the training data.
Similarly, an image model prompted with the text "a photo of a CEO" might disproportionately generate images of white male CEOs, if trained on a racially biased data set. A number of methods for mitigating bias have been attempted, such as altering input prompts and reweighting training data.
Deepfakes
Deepfakes (a portmanteau of "deep learning" and "fake") are AI-generated media that take a person in an existing image or video and replace them with someone else's likeness using artificial neural networks. Deepfakes have garnered widespread attention and concern for their uses in deepfake celebrity pornographic videos, revenge porn, fake news, hoaxes, health disinformation, financial fraud, and covert foreign election interference. This has elicited responses from both industry and government to detect and limit their use. In July 2023, the fact-checking company Logically found that the popular generative AI models Midjourney, DALL-E 2 and Stable Diffusion would produce plausible disinformation images when prompted to do so, such as images of electoral fraud in the United States and Muslim women supporting India's Hindu nationalist Bharatiya Janata Party. In April 2024, a paper proposed using blockchain (distributed ledger technology) to promote "transparency, verifiability, and decentralization in AI development and usage".
Audio deepfakes
Instances of users abusing software to generate controversial statements in the vocal style of celebrities, public officials, and other famous individuals have raised ethical concerns over voice generation AI. In response, companies such as ElevenLabs have stated that they would work on mitigating potential abuse through safeguards and identity verification. Concerns and fandoms have spawned from AI-generated music. The same software used to clone voices has been used on famous musicians' voices to create songs that mimic their voices, gaining both tremendous popularity and criticism. Similar techniques have also been used to create improved-quality or full-length versions of songs that have been leaked or have yet to be released. Generative AI has also been used to create new digital artist personalities, some of which have received enough attention to earn record deals at major labels. The developers of these virtual artists have also faced criticism for their personified programs, including backlash for "dehumanizing" an artform, and for creating artists that make unrealistic or immoral appeals to their audiences.
Cybercrime
Generative AI's ability to create realistic fake content has been exploited in numerous types of cybercrime, including phishing scams. Deepfake video and audio have been used to create disinformation and fraud. In 2020, former Google click-fraud czar Shuman Ghosemajumder argued that once deepfake videos become perfectly realistic, they would stop appearing remarkable to viewers, potentially leading to uncritical acceptance of false information. Additionally, large language models and other forms of text-generation AI have been used to create fake reviews on e-commerce websites to boost ratings. Cybercriminals have created large language models focused on fraud, including WormGPT and FraudGPT. A 2023 study showed that generative AI can be vulnerable to jailbreaks, reverse psychology and prompt injection attacks, enabling attackers to obtain help with harmful requests, such as for crafting social engineering and phishing attacks.
Additionally, other researchers have demonstrated that open-source models can be fine-tuned to remove their safety restrictions at low cost.
Reliance on industry giants
Training frontier AI models requires an enormous amount of computing power. Usually only Big Tech companies have the financial resources to make such investments. Smaller start-ups such as Cohere and OpenAI end up buying access to data centers from Google and Microsoft respectively.
Energy and environment
Scientists and journalists have expressed concerns about the environmental impact of the development and deployment of generative models: high CO2 emissions, large amounts of freshwater used for data centers, and high amounts of electricity usage. There is also concern that these impacts may increase as these models are incorporated into widely used search engines such as Google Search and Bing; as chatbots and other applications become more popular; and as models need to be retrained. Proposed mitigation strategies include factoring potential environmental costs prior to model development or data collection, increasing efficiency of data centers to reduce electricity/energy usage, building more efficient machine learning models, minimizing the number of times that models need to be retrained, developing a government-directed framework for auditing the environmental impact of these models, regulating for transparency of these models, regulating their energy and water usage, encouraging researchers to publish data on their models' carbon footprint, and increasing the number of subject matter experts who understand both machine learning and climate science.
Content quality
The New York Times defines slop as analogous to spam: "shoddy or unwanted A.I. content in social media, art, books and ... in search results." Journalists have expressed concerns about the scale of low-quality generated content with respect to social media content moderation, the monetary incentives from social media companies to spread such content, false political messaging, spamming of scientific research paper submissions, increased time and effort to find higher quality or desired content on the Internet, the indexing of generated content by search engines, and on journalism itself. A paper published by researchers at Amazon Web Services AI Labs found that over 57% of sentences from a sample of over 6 billion sentences from Common Crawl, a snapshot of web pages, were machine translated. Many of these automated translations were seen as lower quality, especially for sentences that were translated across at least three languages. Many lower-resource languages (e.g. Wolof, Xhosa) were translated across more languages than higher-resource languages (e.g. English, French). In September 2024, Robyn Speer, the author of wordfreq, an open source database that calculated word frequencies based on text from the Internet, announced that she had stopped updating the data for several reasons: high costs for obtaining data from Reddit and Twitter, excessive focus on generative AI compared to other methods in the natural language processing community, and that "generative AI has polluted the data". The adoption of generative AI tools has led to an explosion of AI-generated content across multiple domains. A study from University College London estimated that in 2023, more than 60,000 scholarly articles—over 1% of all publications—were likely written with LLM assistance.
According to Stanford University's Institute for Human-Centered AI, approximately 17.5% of newly published computer science papers and 16.9% of peer review text now incorporate content generated by LLMs. Visual content follows a similar trend. Since the launch of DALL-E 2 in 2022, it is estimated that an average of 34 million images have been created daily. As of August 2023, more than 15 billion images had been generated using text-to-image algorithms, with 80% of these created by models based on Stable Diffusion. If AI-generated content is included in new data crawls from the Internet for additional training of AI models, defects in the resulting models may occur. Training an AI model exclusively on the output of another AI model produces a lower-quality model. Repeating this process, where each new model is trained on the previous model's output, leads to progressive degradation and eventually results in a "model collapse" after multiple iterations (a toy simulation of this effect appears at the end of this article). Tests have been conducted with pattern recognition of handwritten letters and with pictures of human faces. As a consequence, data collected from genuine human interactions with systems may become increasingly valuable in the presence of LLM-generated content in data crawled from the Internet. On the other hand, synthetic data is often used as an alternative to data produced by real-world events. Such data can be deployed to validate mathematical models and to train machine learning models while preserving user privacy, including for structured data. The approach is not limited to text generation; image generation has been employed to train computer vision models.
Misuse in journalism
In January 2023, Futurism.com broke the story that CNET had been using an undisclosed internal AI tool to write at least 77 of its stories; after the news broke, CNET posted corrections to 41 of the stories. In April 2023, the German tabloid Die Aktuelle published a fake AI-generated interview with former racing driver Michael Schumacher, who had not made any public appearances since 2013 after sustaining a brain injury in a skiing accident. The story included two possible disclosures: the cover included the line "deceptively real", and the interview included an acknowledgment at the end that it was AI-generated. The editor-in-chief was fired shortly thereafter amid the controversy. Other outlets that have published articles whose content and/or byline have been confirmed or suspected to be created by generative AI models – often with false content, errors, and/or non-disclosure of generative AI use – include:
NewsBreak
outlets owned by Arena Group: Sports Illustrated, TheStreet, Men's Journal
B&H Photo
outlets owned by Gannett: The Columbus Dispatch, Reviewed, USA Today
MSN
News Corp
outlets owned by G/O Media: Gizmodo, Jalopnik, A.V. Club
The Irish Times
outlets owned by Red Ventures: Bankrate
BuzzFeed
Newsweek
Hoodline
outlets owned by Outside Inc.: Yoga Journal, Backpacker, Clean Eating
Hollywood Life
Us Weekly
The Los Angeles Times
Cody Enterprise
Cosmos
outlets owned by McClatchy: Miami Herald, Sacramento Bee, Tacoma News Tribune, The Rock Hill Herald, The Modesto Bee, Fort Worth Star-Telegram, Merced Sun-Star, Ledger-Enquirer, The Kansas City Star, Raleigh News & Observer
outlets owned by Ziff Davis: PC Magazine, Mashable, AskMen
outlets owned by Hearst: Good Housekeeping
outlets owned by IAC Inc.: People, Parents, Food & Wine, InStyle, Real Simple, Travel + Leisure, Better Homes & Gardens, Southern Living
outlets owned by Street Media: LA Weekly, The Village Voice, Riverfront Times
Apple Intelligence
In May 2024, Futurism noted that a content management system video by AdVon Commerce, which had used generative AI to produce articles for many of the aforementioned outlets, appeared to show that it "had produced tens of thousands of articles for more than 150 publishers." News broadcasters in Kuwait, Greece, South Korea, India, China and Taiwan have presented news with anchors based on generative AI models, prompting concerns about job losses for human anchors and audience trust in news that has historically been influenced by parasocial relationships with broadcasters, content creators or social media influencers. Algorithmically generated anchors have also been used by allies of ISIS for their broadcasts. In 2023, Google reportedly pitched a tool to news outlets that claimed to "produce news stories" based on input data provided, such as "details of current events". Some news company executives who viewed the pitch described it as "[taking] for granted the effort that went into producing accurate and artful news stories." In February 2024, Google launched a program to pay small publishers to write three articles per day using a beta generative AI model. The program does not require the knowledge or consent of the websites that the publishers are using as sources, nor does it require the published articles to be labeled as being created or assisted by these models. Many defunct news sites (The Hairpin, The Frisky, Apple Daily, Ashland Daily Tidings, Clayton County Register, Southwest Journal) and blogs (The Unofficial Apple Weblog, iLounge) have undergone cybersquatting, with articles created by generative AI. United States Senators Richard Blumenthal and Amy Klobuchar have expressed concern that generative AI could have a harmful impact on local news. In July 2023, OpenAI partnered with the American Journalism Project to fund local news outlets for experimenting with generative AI, with Axios noting the possibility of generative AI companies creating a dependency for these news outlets. Meta AI, a chatbot based on Llama 3 which summarizes news stories, was noted by The Washington Post to copy sentences from those stories without direct attribution and to potentially further decrease the traffic of online news outlets. In response to potential pitfalls around the use and misuse of generative AI in journalism and worries about declining audience trust, outlets around the world, including publications such as Wired, Associated Press, The Quint, Rappler and The Guardian, have published guidelines around how they plan to use and not use AI and generative AI in their work. In June 2024, the Reuters Institute published its Digital News Report for 2024. In a survey of people in America and Europe, the Reuters Institute reports that 52% and 47% respectively are uncomfortable with news produced by "mostly AI with some human oversight", and 23% and 15% respectively report being comfortable. 42% of Americans and 33% of Europeans reported that they were comfortable with news produced by "mainly human with some help from AI". The results of global surveys reported that people were more uncomfortable with news topics including politics (46%), crime (43%), and local news (37%) produced by AI than other news topics.
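As a toy illustration of the model-collapse phenomenon described in the content-quality section above, the following Python sketch repeatedly fits a simple generative model (a one-dimensional Gaussian) to samples drawn from the previous generation's model. The setup is deliberately minimal and illustrative; real model collapse involves far richer models and data.

    # Toy model collapse: re-fit a Gaussian to its own samples, generation
    # after generation, with no fresh human data in the loop.
    import random
    import statistics

    random.seed(0)
    n = 50
    data = [random.gauss(0.0, 1.0) for _ in range(n)]  # stand-in for human data

    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    for generation in range(1, 101):
        synthetic = [random.gauss(mu, sigma) for _ in range(n)]  # model output
        mu, sigma = statistics.fmean(synthetic), statistics.stdev(synthetic)
        if generation % 20 == 0:
            print(f"generation {generation}: mean={mu:+.3f}, stdev={sigma:.3f}")
    # With no fresh data, the fitted spread performs a random walk that is
    # eventually absorbed near zero, so the tails of the original
    # distribution are lost; mixing in real data each round prevents this.

The mechanism is that each finite sample mis-estimates the spread slightly, and with nothing to correct it the errors compound, which is the same dynamic the degradation tests with handwritten letters and faces demonstrate at scale.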
Technology
Artificial intelligence concepts
53409225
https://en.wikipedia.org/wiki/Copulation%20%28zoology%29
Copulation (zoology)
In zoology, copulation is animal sexual behavior in which a male introduces sperm into the female's body, especially directly into her reproductive tract. This is an aspect of mating. Many aquatic animals use external fertilization, whereas internal fertilization may have developed from a need to maintain gametes in a liquid medium in the Late Ordovician epoch. Internal fertilization in many vertebrates (such as all reptiles, some fish, and most birds) occurs via cloacal copulation, known as a cloacal kiss (see also hemipenis), while most mammals copulate vaginally, and many basal vertebrates reproduce sexually with external fertilization.
In spiders and insects
Spiders are often confused with insects, but they are not insects; they are arachnids. Spiders have separate male and female sexes. Before mating and copulation, the male spider spins a small web and ejaculates onto it. He then stores the sperm in reservoirs on his large pedipalps, from which he transfers sperm to the female's genitals. The females can store sperm indefinitely. In primitive insects, the male deposits spermatozoa on the substrate, sometimes stored within a special structure; courtship involves inducing the female to take up the sperm package into her genital opening, but there is no actual copulation. In groups whose reproduction is similar to that of spiders, such as dragonflies, males extrude sperm into secondary copulatory structures removed from their genital opening, which are then used to inseminate the female; in dragonflies, this is a set of modified sternites on the second abdominal segment. In advanced groups of insects, the male uses its aedeagus, a structure formed from the terminal segments of the abdomen, to deposit sperm directly (though sometimes in a capsule called a spermatophore) into the female's reproductive tract.
In mammals
Sexual behavior can be classified into behavioral states associated with reward motivation ("wanting"), reward consummation, also known as pleasure ("liking"), and satiety ("inhibition"). These behavioral states are regulated in mammals by reward-based sexual learning, fluctuations in various neurochemicals (i.e., dopamine − sexual desire or "wanting"; norepinephrine − sexual arousal; oxytocin and melanocortins − sexual attraction), and gonadal hormone cycles, and are further influenced by sex pheromones and motor reflexes (i.e., lordosis behaviour) in some mammals. These behavioral states correlate with the phases of the human sexual response cycle: motivation − excitement; consummation − plateau and orgasm; satiety − refraction. Sexual learning (a form of associative learning) occurs when an animal starts to associate bodily features, personality, contextual cues, and other stimuli with genitally induced sexual pleasure. Once formed, these associations in turn impinge upon both sexual wanting and sexual liking. In most female mammals, the act of copulation is controlled by several innate neurobiological processes, including the motor sexual reflex of lordosis. In males, the act of copulation is more complex, because some learning is necessary, but the innate processes (retrocontrol of penile intromission in the vagina, rhythmic movement of the pelvis, detection of female pheromones) are specific to copulation. These innate processes direct heterosexual copulation. Female lordosis behaviour became secondary in Hominidae and is non-functional in humans.
Mammals usually copulate in a dorso-ventral posture, although some primate species copulate in a ventro-ventral posture. Most mammals possess a vomeronasal organ that is involved in pheromone detection, including detection of sex pheromones. Although humans do not possess this organ, adult humans appear to be sensitive to certain mammalian pheromones, which putative pheromone receptor proteins in the olfactory epithelium are capable of detecting. While sex pheromones clearly play a role in modifying sexual behavior in some mammals, the capacity for general pheromone detection and the involvement of pheromones in human sexual behavior have not yet been determined. The duration of copulation varies significantly between mammal species, and may be correlated with body mass, lasting longer in large mammals than in small mammals. The duration of copulation may also be correlated with the length of the baculum in mammals. Male mammals ejaculate semen through the penis into the female reproductive tract during copulation. Ejaculation usually occurs after only one intromission in humans, canids, and ungulates, but occurs after multiple intromissions in most mammal species. Copulation can induce ovulation in mammal species that do not ovulate spontaneously.
Biology and health sciences
Ethology
Biology
54691944
https://en.wikipedia.org/wiki/Tunnel%20construction
Tunnel construction
Tunnels are dug in types of materials varying from soft clay to hard rock. The method of tunnel construction depends on such factors as the ground conditions, the ground water conditions, the length and diameter of the tunnel drive, the depth of the tunnel, the logistics of supporting the tunnel excavation, the final use and shape of the tunnel, and appropriate risk management. Tunnel construction is a subset of underground construction. There are three basic types of tunnel construction in common use:
Cut-and-cover tunnels, constructed in a shallow trench and then covered over.
Bored tunnels, constructed in situ without removing the ground above. They are usually of circular or horseshoe cross-section, and some concepts of underground mining apply. Modern techniques include shotcrete, used in the New Austrian tunnelling method, and the use of a tunnel boring machine (TBM) or tunnelling shield. Tunnels are still also constructed by securing the excavation with pit props and shoring, after which the tunnel is steined (lined with stone or brick) or timber supports are set; techniques known from barrel vaults are helpful here.
Immersed tube tunnels, sunk into a body of water and laid on or buried just under its bed.
History
Cost
As of 2017, experience shows that city subway TBM tunnels cost approximately 500 million EUR per kilometer. In Switzerland, a kilometer of motorway tunnel was roughly calculated at 300 million CHF, at the time 200 million EUR. As of 2015, the undersea tunnel between Denmark and Germany was planned at 425 million per kilometer (see the cost sketch at the end of this article).
Cut-and-cover
Cut-and-cover is a simple method of construction for shallow tunnels where a trench is excavated and roofed over with an overhead support system strong enough to carry the load of what is to be built above the tunnel. Two basic forms of cut-and-cover tunnelling are available:
Bottom-up method: A trench is excavated, with ground support as necessary, and the tunnel is constructed in it. The tunnel may be of in situ concrete, precast concrete, precast arches, or corrugated steel arches; in early days brickwork was used. The trench is then carefully back-filled and the surface is reinstated.
Top-down method: Side support walls and capping beams are constructed from ground level by such methods as slurry walling or contiguous bored piling. Then a shallow excavation allows making the tunnel roof of precast beams or in situ concrete. The surface is then reinstated except for access openings. This allows early reinstatement of roadways, services and other surface features. Excavation then takes place under the permanent tunnel roof, and the base slab is constructed.
Shallow tunnels are often of the cut-and-cover type (if under water, of the immersed-tube type), while deep tunnels are excavated, often using a tunnelling shield. For intermediate levels, both methods are possible. Large cut-and-cover boxes are often used for underground metro stations, such as Canary Wharf tube station in London. This construction form generally has two levels, which allows economical arrangements for the ticket hall, station platforms, passenger access and emergency egress, ventilation and smoke control, staff rooms, and equipment rooms. The interior of Canary Wharf station has been likened to an underground cathedral, owing to the sheer size of the excavation. This contrasts with many traditional stations on London Underground, where bored tunnels were used for stations and passenger access. Nevertheless, the original parts of the London Underground network, the Metropolitan and District Railways, were constructed using cut-and-cover.
Boring machines Tunnel boring machines (TBMs) and associated back-up systems are used to highly automate the entire tunnelling process, reducing tunnelling costs. In certain predominantly urban applications, tunnel boring is viewed as a quick and cost-effective alternative to laying surface rails and roads: expensive compulsory purchase of buildings and land, with potentially lengthy planning inquiries, is eliminated. Disadvantages of TBMs arise from their usually large size – the difficulty of transporting the large TBM to the site of tunnel construction, or (alternatively) the high cost of assembling the TBM on-site, often within the confines of the tunnel being constructed. There are a variety of TBM designs that can operate in a variety of conditions, from hard rock to soft water-bearing ground. Some types of TBMs, the bentonite-slurry and earth-pressure-balance machines, have pressurised compartments at the front end, allowing them to be used in difficult conditions below the water table. The compartment pressurizes the ground ahead of the TBM cutter head to balance the water pressure. The operators work at normal air pressure behind the pressurised compartment, but may occasionally have to enter that compartment to renew or repair the cutters. This requires special precautions, such as local ground treatment or halting the TBM at a position free from water. Despite these difficulties, TBMs are now preferred over the older method of tunnelling in compressed air, with an air lock/decompression chamber some way back from the TBM, which required operators to work at high pressure and go through decompression procedures at the end of their shifts, much like deep-sea divers. In February 2010, Aker Wirth delivered a TBM to Switzerland for the expansion of the Linth–Limmern Power Stations, located south of Linthal in the canton of Glarus. Larger machines followed: the four TBMs used for excavating the Gotthard Base Tunnel in Switzerland were exceeded in diameter by the TBM built to bore the Green Heart Tunnel (Dutch: Tunnel Groene Hart) as part of the HSL-Zuid in the Netherlands, which was in turn superseded by the machines for the Madrid M30 ring road in Spain and the Chong Ming tunnels in Shanghai, China. All of these machines were built at least partly by Herrenknecht. The largest TBM by head diameter built to date is the Tuen Mun–Chek Lap Kok TBM, a 17.6 m diameter machine built by Herrenknecht for the Tuen Mun–Chek Lap Kok Link in Hong Kong.
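To first order, the support pressure that a slurry or earth-pressure-balance machine must hold at the face is the hydrostatic water pressure at tunnel depth. The following sketch assumes fresh water and ignores earth loads and safety margins, so it understates real design pressures:

```python
RHO_WATER = 1000.0  # kg/m^3, fresh water (assumed)
G = 9.81            # m/s^2

def face_water_pressure_kpa(depth_below_water_table_m: float) -> float:
    """Hydrostatic pressure the pressurised compartment must balance."""
    return RHO_WATER * G * depth_below_water_table_m / 1000.0

# A face 30 m below the water table needs roughly 3 bar of support:
print(f"{face_water_pressure_kpa(30.0):.0f} kPa")  # ~294 kPa
```

This is also why human entry to the cutter-head compartment requires precautions comparable to diving work: at 30 m of water head the compartment sits at roughly 3 bar above atmospheric pressure.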
Clay-kicking Clay-kicking is a specialised method, developed in the United Kingdom, of digging tunnels in strong clay-based soil structures. Unlike earlier manual methods using mattocks, which relied on hard ground, clay-kicking was relatively silent and well suited to soft clay-based ground. The clay-kicker lies on a plank at a 45-degree angle to the working face and drives a tool with a cup-like rounded end into the face with the feet. Turning the tool manually, the kicker extracts a section of soil, which is then placed on the waste-extraction system. Used in Victorian civil engineering, the method found favour in the renewal of Britain's ancient sewerage systems, since a small tunnel system could be created without removing the property or infrastructure above. During the First World War, the system was used by Royal Engineer tunnelling companies to place mines beneath the German lines; the method was virtually silent and so was not susceptible to detection by listening. Shafts A temporary access shaft is sometimes necessary during the excavation of a tunnel. Shafts are usually circular and go straight down until they reach the level at which the tunnel is to be built. A shaft normally has concrete walls and is usually built to be permanent. Once the access shafts are complete, TBMs are lowered to the bottom and excavation can start. Shafts are the main entrance into and exit from the tunnel until the project is completed. If a tunnel is going to be long, multiple shafts at various locations may be bored so that the entrance to the tunnel is closer to the unexcavated area. Once construction is complete, construction access shafts are often used as ventilation shafts and may also be used as emergency exits. Sprayed concrete techniques The New Austrian tunnelling method (NATM) was developed in the 1960s and is the best known of a number of engineering practices that use calculated and empirical measurements to provide safe support to the tunnel lining. The main idea of this method is to use the geological stress of the surrounding rock mass to stabilize the tunnel, by allowing a measured relaxation and stress reassignment into the surrounding rock to prevent full loads becoming imposed on the supports. Based on geotechnical measurements, an optimal cross-section is computed. The excavation is protected by a layer of sprayed concrete, commonly referred to as shotcrete; other support measures can include steel arches, rock bolts and mesh. Technological developments in sprayed-concrete technology have resulted in steel and polypropylene fibres being added to the concrete mix to improve lining strength. This creates a natural load-bearing ring, which minimizes the rock's deformation. Thanks to continuous monitoring, the NATM is flexible even when unexpected changes in the geomechanical consistency of the rock occur during the tunnelling work, and the measured rock properties determine the appropriate means of strengthening the tunnel. In recent decades, soft-ground excavations up to  have also become usual. Pipe jacking In pipe jacking, hydraulic jacks are used to push specially made pipes through the ground behind a TBM or shield. This method is commonly used to create tunnels under existing structures, such as roads or railways. Tunnels constructed by pipe jacking are normally small-diameter bores, with a maximum size of around . Box jacking Box jacking is similar to pipe jacking, but instead of jacking tubes, a box-shaped tunnel is used. Jacked boxes can be of much larger span than a pipe jack, with the span of some box jacks in excess of . A cutting head is normally used at the front of the box being jacked, and spoil removal is normally by excavator from within the box. Recent developments of the jacked arch and jacked deck have enabled longer and larger structures to be installed to close accuracy, an example being the 126 m long, 20 m clear-span underpass below the high-speed rail lines at Cliffsend in Kent, UK.
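A rough way to see why pipe-jacked drives are limited in length and diameter is to estimate the total jacking force as face resistance plus skin friction along the pipe. The pressure and friction values below are illustrative assumptions only; real designs use measured ground parameters, lubrication and intermediate jacking stations:

```python
import math

def jacking_force_kn(outer_diameter_m: float, drive_length_m: float,
                     face_pressure_kpa: float = 150.0,
                     skin_friction_kpa: float = 5.0) -> float:
    """Face resistance plus skin friction over the pipe's outside surface."""
    face_area = math.pi * outer_diameter_m ** 2 / 4.0
    skin_area = math.pi * outer_diameter_m * drive_length_m
    return face_pressure_kpa * face_area + skin_friction_kpa * skin_area

# A 2 m pipe jacked 100 m under the assumed pressures:
print(f"{jacking_force_kn(2.0, 100.0):.0f} kN")  # ~3613 kN
```

Because the friction term grows linearly with drive length, doubling the drive roughly doubles the required thrust, which the pipes and the jacking pit must be able to resist.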
Underwater tunnels There are also several approaches to underwater tunnels, the two most common being bored tunnels and immersed tubes; examples include the Bjørvika Tunnel and Marmaray. Submerged floating tunnels are a novel approach under consideration; however, no such tunnels have been constructed to date. Land tunnels A newer kind of tunnel, the land tunnel, is used to reduce the environmental impact of motorways or railways. These are not underground tunnels but are built at ground level. The urban area next to the tunnel can be raised with earth or buildings (for instance, parking facilities) to improve the integration of the tunnel into the immediate area. A good early example of such a land tunnel is the A2 motorway tunnel at Leidsche Rijn, near the Dutch city of Utrecht. Temporary way During construction of a tunnel, it is often convenient to install a temporary railway, particularly to remove excavated spoil. Such a railway is often narrow gauge, so that it can be laid as double track, allowing empty and loaded trains to operate at the same time. The temporary way is replaced by the permanent way at completion, which explains the term "perway". Enlargement The vehicles or traffic using a tunnel can outgrow it, requiring replacement or enlargement. The 1832 double-track, mile-long tunnel from Edge Hill to Lime Street in Liverpool was almost totally removed, apart from a 50-metre section at Edge Hill and a section nearer to Lime Street, as four tracks were required; the tunnel was dug out into a very deep four-track cutting, with short tunnels in places along the cutting, and train services were not interrupted as the work progressed. There are other occurrences of tunnels being replaced by open cuts, for example the Auburn Tunnel. The Farnworth Tunnel in England was enlarged using a tunnel boring machine (TBM) in 2015, and the Rhyndaston Tunnel was enlarged using a borrowed TBM so that it could take ISO containers. Open building pit An open building pit consists of a horizontal and a vertical boundary that keeps groundwater and soil out of the pit. There are several potential alternatives and combinations for (horizontal and vertical) building-pit boundaries. The most important difference from cut-and-cover is that the open building pit is simply backfilled after tunnel construction; no roof is placed.
Technology
Transport infrastructure
null
71886336
https://en.wikipedia.org/wiki/Nucleon%20magnetic%20moment
Nucleon magnetic moment
The nucleon magnetic moments are the intrinsic magnetic dipole moments of the proton and neutron, symbols μp and μn. The nucleus of an atom comprises protons and neutrons, both nucleons that behave as small magnets. Their magnetic strengths are measured by their magnetic moments. The nucleons interact with normal matter through either the nuclear force or their magnetic moments, with the charged proton also interacting by the Coulomb force. The proton's magnetic moment was directly measured in 1933 by Otto Stern's team at the University of Hamburg. While the neutron was determined to have a magnetic moment by indirect methods in the mid-1930s, Luis Alvarez and Felix Bloch made the first accurate, direct measurement of the neutron's magnetic moment in 1940. The proton's magnetic moment is exploited to make measurements of molecules by proton nuclear magnetic resonance. The neutron's magnetic moment is exploited to probe the atomic structure of materials using scattering methods and to manipulate the properties of neutron beams in particle accelerators. The existence of the neutron's magnetic moment and the large value of the proton's magnetic moment indicate that nucleons are not elementary particles. For an elementary particle to have an intrinsic magnetic moment, it must have both spin and electric charge. The nucleons have spin ħ/2, but the neutron has no net charge. Their magnetic moments were puzzling and defied a valid explanation until the quark model for hadron particles was developed in the 1960s. The nucleons are composed of three quarks, and the magnetic moments of these elementary particles combine to give the nucleons their magnetic moments. Description The CODATA recommended value for the magnetic moment of the proton is μp = 2.79284734 μN. The best available measurement for the magnetic moment of the neutron is μn = −1.91304273 μN. Here, μN is the nuclear magneton, a standard unit for the magnetic moments of nuclear components, and μB is the Bohr magneton, the corresponding unit at the electron scale; both are physical constants. In SI units, these values are μp = 1.41060679×10⁻²⁶ J/T and μn = −9.6623651×10⁻²⁷ J/T. A magnetic moment is a vector quantity, and the direction of the nucleon's magnetic moment is determined by its spin. The torque on the neutron that results from an external magnetic field is towards aligning the neutron's spin vector opposite to the magnetic field vector. The nuclear magneton is the spin magnetic moment of a Dirac particle, a charged, spin-1/2 elementary particle with the proton's mass mp, in which anomalous corrections are ignored. The nuclear magneton is μN = eħ/(2mp), where e is the elementary charge and ħ is the reduced Planck constant. The magnetic moment of such a particle is parallel to its spin. Since the neutron has no charge, it should have no magnetic moment by the analogous expression. The non-zero magnetic moment of the neutron thus indicates that it is not an elementary particle. The sign of the neutron's magnetic moment is that of a negatively charged particle. Similarly, the fact that the magnetic moment of the proton is not nearly equal to 1 μN indicates that it too is not an elementary particle. Protons and neutrons are composed of quarks, and the magnetic moments of the quarks can be used to compute the magnetic moments of the nucleons. Although the nucleons interact with normal matter through magnetic forces, the magnetic interactions are many orders of magnitude weaker than the nuclear interactions. The influence of the neutron's magnetic moment is therefore only apparent for low-energy, or slow, neutrons. Because the value of the magnetic moment is inversely proportional to particle mass, the nuclear magneton is about 1/2000 as large as the Bohr magneton. The magnetic moment of the electron is therefore about 1000 times larger than that of the nucleons. The magnetic moments of the antiproton and antineutron have the same magnitudes as those of the proton and neutron, but with opposite sign.
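The defining relation μN = eħ/(2mp) can be checked numerically from the fundamental constants. A minimal sketch using the values shipped with scipy; the printed figures are rounded:

```python
from scipy.constants import e, hbar, m_p, m_e

# Nuclear magneton, as defined in the text: mu_N = e*hbar / (2*m_p)
mu_N = e * hbar / (2.0 * m_p)
print(f"mu_N = {mu_N:.6e} J/T")  # ~5.050784e-27 J/T

# The Bohr magneton uses the electron mass instead, so it is larger
# by the proton-to-electron mass ratio (~1836), i.e. roughly 2000x.
mu_B = e * hbar / (2.0 * m_e)
print(f"mu_B / mu_N = {mu_B / mu_N:.1f}")  # ~1836.2
```

Multiplying mu_N by the dimensionless moments quoted above recovers the SI values, e.g. 2.7928 × 5.0508×10⁻²⁷ ≈ 1.4106×10⁻²⁶ J/T for the proton.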
Measurement Proton The magnetic moment of the proton was discovered in 1933 by Otto Stern, Otto Robert Frisch and Immanuel Estermann at the University of Hamburg. The proton's magnetic moment was determined by measuring the deflection of a beam of molecular hydrogen by a magnetic field. Stern won the Nobel Prize in Physics in 1943 for this discovery. Neutron The neutron was discovered in 1932, and since it had no charge, it was assumed to have no magnetic moment. Indirect evidence suggested that the neutron had a non-zero magnetic moment, however, until direct measurements of the neutron's magnetic moment in 1940 resolved the issue. Values for the magnetic moment of the neutron were independently determined by R. Bacher at the University of Michigan at Ann Arbor (1933) and by I. Y. Tamm and S. A. Altshuler in the Soviet Union (1934) from studies of the hyperfine structure of atomic spectra. Although Tamm and Altshuler's estimate had the correct sign and order of magnitude, the result was met with skepticism. By 1934, groups led by Stern, now at the Carnegie Institute of Technology in Pittsburgh, and by I. I. Rabi at Columbia University in New York had independently measured the magnetic moments of the proton and deuteron. The measured values for these particles were only in rough agreement between the groups, but the Rabi group confirmed the earlier Stern measurements that the magnetic moment of the proton was unexpectedly large. Since a deuteron is composed of a proton and a neutron with aligned spins, the neutron's magnetic moment could be inferred by subtracting the proton's magnetic moment from the deuteron's. The resulting value was not zero and had a sign opposite to that of the proton. By the late 1930s, accurate values for the magnetic moment of the neutron had been deduced by the Rabi group using measurements employing newly developed nuclear magnetic resonance techniques. The value of the neutron's magnetic moment was first directly measured by L. Alvarez and F. Bloch at the University of California at Berkeley in 1940. Using an extension of the magnetic resonance methods developed by Rabi, Alvarez and Bloch determined the magnetic moment of the neutron to be about −1.93 μN. By directly measuring the magnetic moment of free neutrons, or individual neutrons free of the nucleus, Alvarez and Bloch resolved all doubts and ambiguities about this anomalous property of neutrons.
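The inference described above is simple arithmetic once the deuteron and proton moments are in hand. A sketch with modern values (in nuclear magnetons); the 1930s inputs were less precise but led to the same qualitative conclusion:

```python
mu_p = 2.79285   # proton magnetic moment, in units of mu_N
mu_d = 0.85744   # deuteron magnetic moment (measured), in units of mu_N

# Aligned spins: mu_d ~ mu_p + mu_n, so the neutron moment follows by subtraction.
mu_n_inferred = mu_d - mu_p
print(f"mu_n ~ {mu_n_inferred:+.3f} mu_N")  # ~ -1.935 mu_N

# The directly measured value is -1.91304 mu_N; the ~1% mismatch reflects
# small non-additive effects within the deuteron.
```

The result is non-zero and opposite in sign to the proton's moment, exactly as the early experimenters found.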
Unexpected consequences The large value of the proton's magnetic moment and the inferred negative value of the neutron's magnetic moment were unexpected and could not be explained. The unexpected values for the magnetic moments of the nucleons would remain a puzzle until the quark model was developed in the 1960s. The refinement and evolution of the Rabi measurements led to the discovery in 1939 that the deuteron also possessed an electric quadrupole moment. This electrical property of the deuteron had been interfering with the measurements by the Rabi group. The discovery meant that the physical shape of the deuteron was not symmetric, which provided valuable insight into the nature of the nuclear force binding nucleons. Rabi was awarded the Nobel Prize in 1944 for his resonance method for recording the magnetic properties of atomic nuclei. Nucleon gyromagnetic ratios The magnetic moment of a nucleon is sometimes expressed in terms of its g-factor, a dimensionless scalar. The convention defining the g-factor for composite particles, such as the neutron or proton, is μ = (g μN/ħ) I, where μ is the intrinsic magnetic moment, I is the spin angular momentum, and g is the effective g-factor. While the g-factor is dimensionless, for composite particles it is defined relative to the nuclear magneton. For the neutron, I is ħ/2, so the neutron's g-factor is gn = −3.82608545, while the proton's g-factor is gp = 5.5856946893. The gyromagnetic ratio, symbol γ, of a particle or system is the ratio of its magnetic moment to its spin angular momentum, or γ = μ/I. For nucleons, the ratio is conventionally written in terms of the proton mass and charge, by the formula γ = g μN/ħ = g e/(2mp). The neutron's gyromagnetic ratio is γn = −1.832×10⁸ rad⋅s⁻¹⋅T⁻¹; the proton's gyromagnetic ratio is γp = 2.675×10⁸ rad⋅s⁻¹⋅T⁻¹. The gyromagnetic ratio is also the ratio between the observed angular frequency of Larmor precession and the strength of the magnetic field in nuclear magnetic resonance applications, such as in MRI imaging. For this reason, the quantity γ/2π, called "gamma bar" and expressed in the unit MHz/T, is often given; the quantities γn/2π = −29.165 MHz/T and γp/2π = 42.577 MHz/T are therefore convenient. Physical significance Larmor precession When a nucleon is put into a magnetic field produced by an external source, it is subject to a torque tending to orient its magnetic moment parallel to the field (in the case of the neutron, its spin is antiparallel to the field). As with any magnet, this torque is proportional to the product of the magnetic moment and the external magnetic field strength. Since the nucleons have spin angular momentum, this torque will cause them to precess with a well-defined frequency, called the Larmor frequency. It is this phenomenon that enables the measurement of nuclear properties through nuclear magnetic resonance. The Larmor frequency can be determined from the product of the gyromagnetic ratio with the magnetic field strength. Since for the neutron the sign of γn is negative, the neutron's spin angular momentum precesses counterclockwise about the direction of the external magnetic field. Proton nuclear magnetic resonance Nuclear magnetic resonance employing the magnetic moments of protons is used for nuclear magnetic resonance (NMR) spectroscopy. Since hydrogen-1 nuclei are within the molecules of many substances, NMR can determine the structure of those molecules.
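The Larmor relation quoted above, f = |γ|B/(2π), turns the gamma-bar values into resonance frequencies directly. A minimal sketch for the proton:

```python
GAMMA_BAR_PROTON_MHZ_PER_T = 42.577  # gamma/2pi for the proton, in MHz/T

def proton_larmor_mhz(field_tesla: float) -> float:
    """Proton Larmor (NMR) frequency in a given magnetic field."""
    return GAMMA_BAR_PROTON_MHZ_PER_T * field_tesla

print(f"{proton_larmor_mhz(1.5):.1f} MHz")  # ~63.9 MHz, a common clinical MRI field
print(f"{proton_larmor_mhz(3.0):.1f} MHz")  # ~127.7 MHz
```

The same calculation with the neutron's gamma bar (≈ −29.165 MHz/T) gives the precession frequency of free neutrons, with the negative sign encoding the opposite sense of precession noted above.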
Determination of neutron spin The interaction of the neutron's magnetic moment with an external magnetic field was exploited to determine the spin of the neutron. In 1949, D. Hughes and M. Burgy measured neutrons reflected from a ferromagnetic mirror and found that the angular distribution of the reflections was consistent with spin 1/2. In 1954, J. Sherwood, T. Stephenson, and S. Bernstein employed neutrons in a Stern–Gerlach experiment that used a magnetic field to separate the neutron spin states. They recorded two such spin states, consistent with a spin-1/2 particle. Until these measurements, the possibility that the neutron was a spin-3/2 particle could not have been ruled out. Neutrons used to probe material properties Since neutrons are neutral particles, they do not have to overcome Coulomb repulsion as they approach charged targets, unlike protons and alpha particles, and they can deeply penetrate matter. The magnetic moment of the neutron has therefore been exploited to probe the properties of matter using scattering or diffraction techniques. These methods provide information that is complementary to X-ray spectroscopy. In particular, the magnetic moment of the neutron is used to determine the magnetic properties of materials at length scales of 1–100 Å using cold or thermal neutrons. B. Brockhouse and C. Shull won the Nobel Prize in Physics in 1994 for developing these scattering techniques. Control of neutron beams by magnetism As neutrons carry no electric charge, neutron beams cannot be controlled by the conventional electromagnetic methods employed in particle accelerators. The magnetic moment of the neutron allows some control of neutrons using magnetic fields, however, including the formation of polarized neutron beams. One technique employs the fact that cold neutrons will reflect from some magnetic materials with great efficiency when scattered at small grazing angles. The reflection preferentially selects particular spin states, thus polarizing the neutrons. Neutron magnetic mirrors and guides use this total internal reflection phenomenon to control beams of slow neutrons. Nuclear magnetic moments Since an atomic nucleus consists of a bound state of protons and neutrons, the magnetic moments of the nucleons contribute to the nuclear magnetic moment, or the magnetic moment of the nucleus as a whole. The nuclear magnetic moment also includes contributions from the orbital motion of the charged protons. The deuteron, consisting of a proton and a neutron, provides the simplest example of a nuclear magnetic moment. The sum of the proton and neutron magnetic moments gives 0.879 μN, which is within 3% of the measured value, 0.857 μN. In this calculation, the spins of the nucleons are aligned, but their magnetic moments partially offset each other because of the neutron's negative magnetic moment. Nature of the nucleon magnetic moments A magnetic dipole moment can be generated by two possible mechanisms. One way is by a small loop of electric current, called an "Ampèrian" magnetic dipole. Another way is by a pair of magnetic monopoles of opposite magnetic charge, bound together in some way, called a "Gilbertian" magnetic dipole. Elementary magnetic monopoles remain hypothetical and unobserved, however. Throughout the 1930s and 1940s it was not readily apparent which of these two mechanisms caused the nucleon intrinsic magnetic moments. In 1930, Enrico Fermi showed that the magnetic moments of nuclei (including the proton) are Ampèrian. The two kinds of magnetic moments experience different forces in a magnetic field. Based on Fermi's arguments, the intrinsic magnetic moments of elementary particles, including the nucleons, have been shown to be Ampèrian. The arguments are based on basic electromagnetism, elementary quantum mechanics, and the hyperfine structure of atomic s-state energy levels. In the case of the neutron, the theoretical possibilities were resolved by laboratory measurements of the scattering of slow neutrons from ferromagnetic materials in 1951.
Anomalous magnetic moments and meson physics The anomalous values for the magnetic moments of the nucleons presented a theoretical quandary for the 30 years from the time of their discovery in the early 1930s to the development of the quark model in the 1960s. Considerable theoretical effort was expended in trying to understand the origins of these magnetic moments, but the failures of these theories were glaring. Much of the theoretical focus was on developing a nuclear-force equivalent to the remarkably successful theory explaining the small anomalous magnetic moment of the electron. The problem of the origins of the magnetic moments of nucleons was recognized as early as 1935. G. C. Wick suggested that the magnetic moments could be caused by the quantum-mechanical fluctuations of these particles in accordance with Fermi's 1934 theory of beta decay. By this theory, a neutron is partly, regularly and briefly, dissociated into a proton, an electron, and a neutrino as a natural consequence of beta decay. By this idea, the magnetic moment of the neutron was caused by the fleeting existence of the large magnetic moment of the electron in the course of these quantum-mechanical fluctuations, with the value of the magnetic moment determined by the length of time the virtual electron was in existence. The theory proved to be untenable, however, when H. Bethe and R. Bacher showed that it predicted values for the magnetic moment that were either much too small or much too large, depending on speculative assumptions. Similar considerations for the electron proved to be much more successful. In quantum electrodynamics (QED), the anomalous magnetic moment of a particle stems from the small contributions of quantum-mechanical fluctuations to the magnetic moment of that particle. The g-factor for a "Dirac" magnetic moment is predicted to be g = −2 for a negatively charged, spin-1/2 particle. For particles such as the electron, this "classical" result differs from the observed value by around 0.1%; the difference from the classical value is the anomalous magnetic moment. The g-factor of the electron is measured to be −2.00231930436. QED is the theory of the mediation of the electromagnetic force by photons. The physical picture is that the effective magnetic moment of the electron results from the contributions of the "bare" electron, which is the Dirac particle, and the cloud of "virtual", short-lived electron–positron pairs and photons that surround this particle as a consequence of QED. The effects of these quantum-mechanical fluctuations can be computed theoretically using Feynman diagrams with loops. The one-loop contribution to the anomalous magnetic moment of the electron, corresponding to the first-order and largest correction in QED, is found by calculating the vertex function. This correction, the famous result a = α/(2π) where α is the fine-structure constant, was worked out by J. Schwinger in 1948. Computed to fourth order, the QED prediction for the electron's anomalous magnetic moment agrees with the experimentally measured value to more than 10 significant figures, making the magnetic moment of the electron one of the most accurately verified predictions in the history of physics.
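Schwinger's one-loop result can be evaluated in a couple of lines, and it already lands within a few parts in 10⁶ of the measured g-factor magnitude. A sketch using scipy's value of the fine-structure constant:

```python
from math import pi
from scipy.constants import fine_structure as alpha  # ~1/137.036

# One-loop QED anomaly: a_e = alpha / (2*pi)
a_e = alpha / (2.0 * pi)
g_magnitude_one_loop = 2.0 * (1.0 + a_e)

print(f"a_e (one loop)   = {a_e:.6f}")                   # ~0.001161
print(f"|g_e| (one loop) = {g_magnitude_one_loop:.6f}")  # ~2.002323 vs measured 2.002319...
```

Higher-order loop diagrams supply the remaining digits, which is the fourth-order agreement described above.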
Compared to the electron, the anomalous magnetic moments of the nucleons are enormous. The g-factor for the proton is 5.6, and the chargeless neutron, which should have no magnetic moment at all, has a g-factor of −3.8. Note, however, that the anomalous magnetic moments of the nucleons, that is, their magnetic moments with the expected Dirac-particle magnetic moments subtracted, are roughly equal but of opposite sign: μp − 1.00 μN = +1.79 μN, but μn − 0.00 μN = −1.91 μN. The Yukawa interaction for nucleons was discovered in the mid-1930s, and this nuclear force is mediated by pion mesons. In parallel with the theory for the electron, the hypothesis was that higher-order loops involving nucleons and pions might generate the anomalous magnetic moments of the nucleons. The physical picture was that the effective magnetic moment of the neutron arose from the combined contributions of the "bare" neutron, which is zero, and the cloud of "virtual" pions and photons that surround this particle as a consequence of the nuclear and electromagnetic forces. In the analogous first-order Feynman diagram, the role of the virtual particles is played by pions. As noted by A. Pais, "between late 1948 and the middle of 1949 at least six papers appeared reporting on second-order calculations of nucleon moments". These theories were also, as noted by Pais, "a flop": they gave results that grossly disagreed with observation. Nevertheless, serious efforts continued along these lines for the next couple of decades, to little success. These theoretical approaches were incorrect because the nucleons are composite particles, with their magnetic moments arising from their elementary components, quarks. Quark model of nucleon magnetic moments In the quark model for hadrons, the neutron is composed of one up quark (charge +2/3 e) and two down quarks (charge −1/3 e), while the proton is composed of one down quark (charge −1/3 e) and two up quarks (charge +2/3 e). The magnetic moment of the nucleons can be modeled as a sum of the magnetic moments of the constituent quarks, although this simple model belies the complexities of the Standard Model of particle physics. The calculation assumes that the quarks behave like pointlike Dirac particles, each having its own magnetic moment, as computed using an expression similar to the one above for the nuclear magneton: μq = eq ħ/(2mq), where the q-subscripted variables refer to the quark's magnetic moment, charge, or mass. Simplistically, the magnetic moment of a nucleon can be viewed as resulting from the vector sum of the three quark magnetic moments, plus the orbital magnetic moments caused by the movement of the three charged quarks within it. In one of the early successes of the Standard Model (SU(6) theory), in 1964 M. Beg, B. Lee, and A. Pais theoretically calculated the ratio of the proton to neutron magnetic moments to be −3/2, which agrees with the experimental value to within 3%. The measured value of this ratio is −1.45989806. A contradiction of the quantum-mechanical basis of this calculation with the Pauli exclusion principle led to the discovery of the color charge for quarks by O. Greenberg in 1964. From the nonrelativistic quantum-mechanical wave function for baryons composed of three quarks, a straightforward calculation gives fairly accurate estimates for the magnetic moments of neutrons, protons, and other baryons. For a neutron, the magnetic moment is given by μn = 4/3 μd − 1/3 μu, where μd and μu are the magnetic moments of the down and up quarks, respectively. This result combines the intrinsic magnetic moments of the quarks with their orbital magnetic moments and assumes that the three quarks are in a particular, dominant quantum state. The results of this calculation are encouraging, but the masses of the up and down quarks were assumed to be 1/3 the mass of a nucleon.
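As a rough numerical check of the additive quark-model formula above: assuming pointlike constituent quarks of mass mp/3, the charge-to-mass ratios give μu = +2 μN and μd = −1 μN, and the nucleon moments follow:

```python
# Constituent-quark moments in units of mu_N, under the assumed mass m_q = m_p/3:
mu_u = (+2/3) / (1/3)  # = +2.0
mu_d = (-1/3) / (1/3)  # = -1.0

mu_p = (4 * mu_u - mu_d) / 3  # +3.00 vs measured +2.793
mu_n = (4 * mu_d - mu_u) / 3  # -2.00 vs measured -1.913

print(mu_p, mu_n)   # 3.0 -2.0
print(mu_p / mu_n)  # -1.5, vs measured -1.4599
```

The ratio −3/2 is the Beg–Lee–Pais result quoted above, accurate to about 3% even though the individual moments are off by several percent.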
The masses of the quarks are actually only about 1% that of a nucleon. The discrepancy stems from the complexity of the Standard Model for nucleons, where most of their mass originates in the gluon fields, virtual particles, and their associated energy that are essential aspects of the strong force. Furthermore, the complex system of quarks and gluons that constitute a nucleon requires a relativistic treatment. Nucleon magnetic moments have been successfully computed from first principles, requiring significant computing resources.
Physical sciences
Physical constants
Physics
47734015
https://en.wikipedia.org/wiki/Parakaryon
Parakaryon
Parakaryon myojinensis, also known as the Myojin parakaryote, is a highly unusual species of single-celled organism known only from a single specimen, described in 2012. It has features of both prokaryotes and eukaryotes but is apparently distinct from either group, making it unique among organisms discovered thus far. It is the sole species in the genus Parakaryon. Etymology The generic name Parakaryon comes from Greek παρά (pará, "beside", "beyond", "near") and κάρυον (káryon, "nut", "kernel", "nucleus"), and reflects its distinction from eukaryotes and prokaryotes. The specific name myojinensis reflects the locality where the only sample was collected: from the bristle of a scale worm collected from hydrothermal vents at Myōjin Knoll (明神海丘) in the Pacific Ocean, near Aogashima island, southeast of the Japanese archipelago. The authors explain the full binomial as "next to (eu)karyote from Myojin". Structure Parakaryon myojinensis has some structural features unique to eukaryotes, some features unique to prokaryotes, and some features different from both. Interpretations Genuine species or artifact Yamaguchi et al. proposed in their 2012 paper that there were three reasons why the specimen they named P. myojinensis was not simply a result of parasitic or predatory bacteria living within another prokaryote host, which they acknowledged is known from several examples: "It is difficult to imagine that multiple bacteria of different species attacked a host at the same time." They referred to Figure 2d of their paper, showing the isolated forms of the inclusions: one large helix with three turns (volume 2.3 μm³) and two much smaller pieces (volumes 0.2 and 0.1 μm³). "Secondly, because the cytoplasms of the host and the endosymbionts show orderly and electron-dense cellular structures, no digestion in either host or endosymbionts appears to have occurred." "Lastly, if Parakaryon myojinensis originated due to a current interaction between predators and hosts, then there must be dense populations of predators and hosts, because predators need to find hosts quickly for survival once they are released from the previous host." In 2016, Yamaguchi et al. detailed the discovery of helical bacteria on polychaetes collected from the same location, which they named "Myojin spiral bacteria". In 2020, Yamaguchi and two others published a new short paper on their studies of the microbiota of polychaetes from Myojin Knoll. The authors stated, "Among them, we often observed bacteria that contained intracellular bacteria on ultrathin sections." They studied one such specimen and concluded that the "host" bacterium was dead and its cell wall broken. The smaller bacteria could have been feeding on the larger bacterium, but they also suggested, "The association of the bacteria with dead bacteria could also have been artificially caused by the centrifugation steps used for the preparation of specimens for electron microscopy." In this paper, all five mentions of P. myojinensis treated it as a valid taxon, with no implication that it is an artifact. Evolutionary significance It is not clear whether P. myojinensis can or should be classified as a eukaryote or a prokaryote, the two categories to which all other cellular life belongs. Adding to the difficulties of classification, only one instance of this organism has been discovered to date, and so scientists have been unable to study it further.
Its discoverers suggested that additional specimens would be needed for culturing and DNA sequencing to place the organism in a phylogenetic context. British evolutionary biochemist Nick Lane hypothesized in a 2015 book that the existence of P. myojinensis could be the first known example of symbiogenesis outside eukaryotes, which could offer clues to the requirements for the development of complex life in general.
Biology and health sciences
Other organisms: General
Plants
53426369
https://en.wikipedia.org/wiki/Scotoplanes%20globosa
Scotoplanes globosa
Scotoplanes globosa, commonly known as the sea pig, is a species of sea cucumber that lives in the deep sea. It was first described by Hjalmar Théel, a Swedish scientist. Scotoplanes globosa, along with numerous other sea cucumbers, was discovered by Théel during the expedition of HMS Challenger between 1873 and 1876, and was officially described in 1882, six to nine years after its first sighting. Scotoplanes globosa is most closely related to the genus Peniagone. Ecology Congregations of smaller Scotoplanes globosa are often observed on the ocean floor in groups of 10 to 30, though groups of as many as 600 individuals have been observed in a single congregation. A congregation of Scotoplanes globosa is called a "trawl". These groups often appear to all be facing in one direction, into the ocean current; it is believed that this behaviour aids S. globosa in detecting the richest feeding sites. Scotoplanes globosa has also been observed to host multiple deep-sea parasites, such as the small gastropods Stilapex and Crinolamia and various parasitic crustaceans. These parasites typically bore small holes into the body wall of S. globosa. Scotoplanes globosa is also often accompanied by a symbiotic lithodid crab, Neolithodes diomedeae; it is believed that approximately 22% of individuals are attended by at least one of these crabs. One theory is that these crabs latch onto S. globosa, gaining access to nutrients and transport, while the host gains protection from parasites. At this time, scientists are unsure whether the relationship between S. globosa and N. diomedeae is mutualistic or commensal. Anatomy Scotoplanes globosa is typically 2 to 15 cm in length and appears translucent white. It is bilaterally symmetrical and covered in tube-like feet, which are used in locomotion and possibly respiration. The tube-like structures found on top of the animal are also feet, as opposed to antennae; scientists are still unsure whether these upper tube feet are used in locomotion or as sensory accessories. Sea pigs are quite buoyant and are easily displaced by strong currents. Both males and females were found to contain only one gonad, with evidence that gametogenesis occurred. Locomotion Scotoplanes globosa has a soft, round body with five to seven pairs of long, tube-like limbs extending from it, which it uses for locomotion. They "walk" along the ocean floor using muscle contractions to push fluid in and out of the tube-feet cavities. Scotoplanes is the only genus of holothurians that has been observed to "walk" in this manner. Distribution and habitat Scotoplanes globosa is found in almost all deep-sea regions of the world, specifically on the abyssal plain. They are commonly found off the coast of San Diego, as well as in the Arctic, Atlantic, Pacific, and Indian Oceans. Scotoplanes globosa typically lives at depths of over 1,000 m (3,280 ft) and has been found in the deepest locations in the ocean, including the Kermadec Trench at a depth of 6,659 m (21,850 ft) and the Philippine Trench at a depth of 9,997 m (32,800 ft), the latter by the Galathea expedition in the 1950s. As noted above, individuals normally face into the current, which is thought to help them find fresher, higher-quality food.
Diet Scotoplanes globosa is a deposit feeder, eating detritus that has sunk to the ocean floor. S. globosa has been observed to strongly prefer consuming fresh, recently fallen sediments (deposited approximately within the last 100 days) on the surface of the ocean floor, as opposed to older sediments; these freshly fallen sediments are more nutrient-rich. Scotoplanes globosa captures food with the mucus-covered tentacles that surround its mouth. S. globosa is also known to congregate around the carcasses of whales that have fallen to the seafloor. Lundsten et al. (2010) determined that S. globosa finds deep-sea whale carcasses, as well as other nutrient-rich food sources, by smell; the extremely nutrient-rich whale carcasses also attract other deep-sea creatures in large numbers.
Biology and health sciences
Echinoderms
Animals
59623952
https://en.wikipedia.org/wiki/Lower%20mantle
Lower mantle
The lower mantle, historically also known as the mesosphere, represents approximately 56% of Earth's total volume and is the region from 660 to 2,900 km below Earth's surface, between the transition zone and the outer core. The preliminary reference Earth model (PREM) separates the lower mantle into three sections: the uppermost (660–770 km), the mid-lower mantle (770–2,700 km), and the D″ layer (2,700–2,900 km). Pressure and temperature in the lower mantle range from 24 to 127 GPa and from about 1,900 to 2,600 K. It has been proposed that the composition of the lower mantle is pyrolitic, containing three major phases: bridgmanite, ferropericlase, and calcium-silicate perovskite. The high pressure in the lower mantle has been shown to induce a spin transition of iron-bearing bridgmanite and ferropericlase, which may affect both mantle plume dynamics and lower mantle chemistry. The upper boundary is defined by the sharp increase in seismic wave velocities and density at a depth of 660 km. At this depth, ringwoodite ((Mg,Fe)2SiO4) decomposes into Mg–Si perovskite and magnesiowüstite; this reaction marks the boundary between the upper mantle and lower mantle. This boundary is estimated from seismic data and high-pressure laboratory experiments. The base of the mesosphere includes the D″ zone, which lies just above the mantle–core boundary, at approximately 2,700–2,900 km; the base of the lower mantle proper is thus at about 2,700 km. Physical properties The lower mantle was initially labelled as the D-layer in Bullen's spherically symmetric model of the Earth. The PREM seismic model of the Earth's interior separated the D-layer into three distinctive layers defined by discontinuities in seismic wave velocities: at 660–770 km, a discontinuity in compression-wave velocity (6–11%) followed by a steep gradient is indicative of the transformation of the mineral ringwoodite to bridgmanite and ferropericlase and of the transition from the transition zone to the lower mantle; at 770–2,700 km, a gradual increase in velocity is indicative of the adiabatic compression of the mineral phases in the lower mantle; and at 2,700–2,900 km, the D″ layer is considered the transition from the lower mantle to the outer core. The temperature of the lower mantle ranges from about 1,900 K at the topmost layer to about 2,600 K at its base. Models of the temperature of the lower mantle approximate convection as the primary heat-transport contribution, while conduction and radiative heat transfer are considered negligible. As a result, the lower mantle's temperature gradient as a function of depth is approximately adiabatic. Calculations of the geothermal gradient find that it decreases from the uppermost lower mantle towards the base.
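The PREM subdivisions quoted above can be expressed as a simple depth lookup. A minimal sketch; the depths are the rounded layer boundaries from the text:

```python
PREM_LOWER_MANTLE_LAYERS = [
    (660, 770, "uppermost lower mantle (ringwoodite breakdown)"),
    (770, 2700, "mid-lower mantle (adiabatic compression)"),
    (2700, 2900, "D'' layer (transition to the outer core)"),
]

def prem_layer(depth_km: float) -> str:
    """Name the PREM lower-mantle layer containing a given depth."""
    for top_km, base_km, name in PREM_LOWER_MANTLE_LAYERS:
        if top_km <= depth_km < base_km:
            return name
    return "outside the lower mantle"

print(prem_layer(1500.0))  # mid-lower mantle (adiabatic compression)
```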
Composition The lower mantle is mainly composed of three components: bridgmanite, ferropericlase, and calcium-silicate perovskite (CaSiO3-perovskite). The proportion of each component has historically been a subject of discussion, with the bulk composition suggested to be either pyrolitic or chondritic. The pyrolitic model is derived from petrological composition trends in upper-mantle peridotite, suggesting homogeneity between the upper and lower mantle with a Mg/Si ratio of 1.27; it implies that the lower mantle is composed of 75% bridgmanite, 17% ferropericlase, and 8% CaSiO3-perovskite by volume. The chondritic model suggests that Earth's lower mantle accreted with the composition of chondritic meteorites, with a Mg/Si ratio of approximately 1, implying that bridgmanite and CaSiO3-perovskite are the major components. Laboratory multi-anvil compression experiments on pyrolite simulated the conditions of the adiabatic geotherm and measured the density using in situ X-ray diffraction. It was shown that the density profile along the geotherm is in agreement with the PREM model. First-principles calculations of the density and velocity profile across the lower-mantle geotherm for varying proportions of bridgmanite and ferropericlase found a match to the PREM model at an 8:2 proportion; this proportion is consistent with a pyrolitic bulk composition of the lower mantle. Furthermore, shear-wave velocity calculations for pyrolitic lower-mantle compositions considering minor elements resulted in a match with the PREM shear-velocity profile within 1%. On the other hand, Brillouin spectroscopic studies at relevant pressures and temperatures revealed that a lower mantle composed of more than 93% bridgmanite has shear-wave velocities corresponding to the measured seismic velocities; this suggested composition is consistent with a chondritic lower mantle. Thus, the bulk composition of the lower mantle is currently a subject of discussion. Spin transition zone The electronic environment of the two iron-bearing minerals in the lower mantle (bridgmanite and ferropericlase) transitions from a high-spin (HS) to a low-spin (LS) state. Fe2+ in ferropericlase undergoes the transition between 50 and 90 GPa. Bridgmanite contains both Fe3+ and Fe2+ in its structure; the Fe2+ occupies the A site and transitions to a LS state at 120 GPa. Fe3+ occupies both the A and B sites; the B-site Fe3+ undergoes the HS-to-LS transition at 30–70 GPa, while the A-site Fe3+ exchanges with the B-site Al3+ cation and becomes LS. This spin transition of the iron cation results in an increase of the partition coefficient between ferropericlase and bridgmanite to 10–14, depleting bridgmanite and enriching ferropericlase in Fe2+. The HS-to-LS transition is reported to affect the physical properties of the iron-bearing minerals; for example, the density and incompressibility of ferropericlase were reported to increase from the HS to the LS state. The effects of the spin transition on the transport properties and rheology of the lower mantle are currently being investigated and discussed using numerical simulations. History The term mesosphere (not to be confused with the mesosphere, a layer of the atmosphere) is derived from "mesospheric shell", coined by Reginald Aldworth Daly, a Harvard University geology professor. In the pre-plate-tectonics era, Daly (1940) inferred that the outer Earth consisted of three spherical layers: the lithosphere (including the crust), the asthenosphere, and the mesospheric shell. Daly gave hypothetical depth ranges for the lithosphere–asthenosphere boundary and for the top of the mesospheric shell (the base of the asthenosphere), from which the thickness of the asthenosphere was inferred. According to Daly, the base of the solid-Earth mesosphere could extend to the base of the mantle (and thus to the top of the core). A derivative term, mesoplates, was introduced as a heuristic, based on a combination of "mesosphere" and "plate", for postulated reference frames in which mantle hotspots exist.
Physical sciences
Tectonics
Earth science
53437703
https://en.wikipedia.org/wiki/Iron%28III%29%20citrate
Iron(III) citrate
Ferric citrate or iron(III) citrate describes any of several complexes formed upon binding any of the several conjugate bases derived from citric acid with ferric ions. Most of these complexes are orange or red-brown. They contain two or more Fe(III) centers. Ferric citrates contribute to the metabolism of iron by some organisms. Citrates, which are released by plant roots and by some microorganisms, can solubilize iron compounds in the soil. For example, ferric hydroxide reacts with citrates to form soluble complexes. This solubilization provides a pathway for the absorption of ferric ions by various organisms. Ferric citrate is used in medicine to regulate the blood levels of iron in patients with chronic kidney disease on dialysis. It acts by forming an insoluble compound with phosphate present in the diet, thus minimizing phosphate uptake by the digestive system. Structure Citrate forms a variety of coordination complexes with ferric ions; some may be oligomers or polymers. Thus, ferric citrate is not a single well-defined compound but a family of compounds, many with similar formulas, and these various forms can coexist in equilibrium. At physiological pH, ferric citrate forms an insoluble red polymer. Under other conditions it forms dinuclear anionic complexes, and in the presence of excess citrate anions the iron forms more highly charged anionic citrate complexes. Photoreduction The ferric ion in ferric citrate (as in many iron(III) carboxylates) is reduced by exposure to light, especially blue and ultraviolet, to the ferrous ion, with concomitant oxidation of the carboxyl group adjacent to the hydroxyl, yielding carbon dioxide and acetonedicarboxylate: 2 Fe3+ + R2C(OH)–CO2− → 2 Fe2+ + R2C=O + CO2 + H+, where –R represents the group –CH2CO2−. This reaction plays an important role in plant metabolism: iron is carried up from the roots as ferric citrate dissolved in the sap and photoreduced in the leaves to iron(II), which can be transported into the cells.
Physical sciences
Citrates
Chemistry
77669791
https://en.wikipedia.org/wiki/Boranes
Boranes
A borane is a compound with the formula BH3, although examples include multi-boron derivatives. A large family of boron hydride clusters is also known. In addition to some applications in organic chemistry, the boranes have attracted much attention because they exhibit structures and bonding that differ strongly from the patterns seen in hydrocarbons. Hybrids of boranes and hydrocarbons, the carboranes, are also a well-developed class of compounds. History The development of the chemistry of boranes led to innovations in synthetic methods as well as in structure and bonding. First, new synthetic techniques were required to handle diborane and many of its derivatives, which are both pyrophoric and volatile; Alfred Stock invented the glass vacuum line for this purpose. The structure of diborane was correctly predicted in 1943, many years after its discovery. Interest in boranes increased during World War II due to the potential of uranium borohydride for enrichment of the uranium isotopes, and as a source of hydrogen for inflating weather balloons. In the US, a team led by Schlesinger developed the basic chemistry of the anionic boron hydrides and the related aluminium hydrides. Schlesinger's work laid the foundation for a host of boron hydride reagents for organic synthesis, most of which were developed by his student Herbert C. Brown. Borane-based reagents are now widely used in organic synthesis. Brown was awarded the Nobel Prize in Chemistry in 1979 for this work. Synthesis Most boranes are prepared directly or indirectly from diborane. Diborane reacts with alkenes to give alkylboranes, a process known as hydroboration; with a terminal alkene, for example, B2H6 + 6 CH2=CHR → 2 B(CH2CH2R)3. Alkyl- and arylboranes can also be produced by alkylation of chloroboranes and boronic esters. Classes of boranes Binary boron hydrides The parent boranes are binary boron hydrides, starting with borane (BH3) and its dimer diborane (B2H6). Pyrolysis of these species leads to higher boranes, such as tetraborane and pentaborane; these two are early members of the boron hydride clusters. Primary and secondary boranes This family of boron hydrides includes mono- and dialkylboranes. The simplest members readily engage in redistribution reactions. With bulky substituents, primary and secondary boranes are more readily isolable and even useful; examples include thexylborane and 9-BBN. Almost all primary and secondary boranes are dimeric, with bridging hydrides. Tertiary boranes Most work focuses on trialkyl- and triarylboranes. These are all monomers (in contrast to the corresponding trialkyl- and triarylaluminium compounds), and their BC3 cores are planar. Well-known examples are trimethylboron, triethylboron, and triphenylboron. Many tertiary boranes are produced by hydroboration. Reactivity of boranes The lowest borane, BH3, exists only transiently, dimerizing instantly to form diborane, B2H6. Its adducts borane–tetrahydrofuran and borane–dimethylsulfide are useful in hydroboration reactions.
Physical sciences
Hydrogen compounds
Chemistry
71930047
https://en.wikipedia.org/wiki/Calcarisporiellales
Calcarisporiellales
Calcarisporiellaceae is a family of fungi within the subkingdom Mucoromyceta. It is the only family in the order Calcarisporiellales, class Calcarisporiellomycetes, subphylum Calcarisporiellomycotina and phylum Calcarisporiellomycota. It contains two known genera, Calcarisporiella and Echinochlamydosporium, each with a single species. General description They have a branched thallus with septate hyphae (hyphae divided by septa). The vegetative hyphae are hyaline (glassy in appearance), smooth and thin-walled, and cultures have no distinctive smell. The sporangiophores (stalks bearing the sporangia), if present, are simple, hyaline and smooth, arising from undifferentiated hyphae. The sporangia are unispored (single-spored) and ellipsoid, with or without a small columella. Spores are uninucleate (having a single nucleus), hyaline, smooth, thin-walled, ovoid to ellipsoid, with a rounded base. Chlamydospores, if present, are one-celled, elongate to globose, thick-walled and spiny, and are borne laterally on short hyphae. The sexual cycle is not known. They are saprotrophic in soil and non-nematophagous (not carnivorous), and can be found in soils. History Calcarisporiella was originally published in 1974 and was initially thought to be an anamorphic member of the Pezizomycotina, but later phylogenetic analysis of rDNA found that it was separate from the Endogonales and Mucorales clades. A new genus, Echinochlamydosporium, was described in 2011 and placed in the family Mortierellaceae. Then in 2018, after molecular analyses, Echinochlamydosporium was transferred, together with Calcarisporiella, to the new family Calcarisporiellaceae. The newly described Calcarisporiellomycota (comprising Calcarisporiella thermophila and Echinochlamydosporium variabile) represented a deep lineage with strongest affinities to Mucoromycota or Mortierellomycota. Evolution and systematics The Calcarisporiellaceae are a monophyletic group containing two species. According to a 2018 phylogenetic analysis, they are the sister taxon of the phylum Mucoromycota. Along with Mortierellomycota and Glomeromycota, they compose the fungal subkingdom Mucoromyceta. The classification is:
Phylum Calcarisporiellomycota
 Subphylum Calcarisporiellomycotina
  Class Calcarisporiellomycetes
   Order Calcarisporiellales
    Family Calcarisporiellaceae
     Calcarisporiella: Calcarisporiella thermophila
     Echinochlamydosporium: Echinochlamydosporium variabile
Biology and health sciences
Basics
Plants
56272178
https://en.wikipedia.org/wiki/Carbon%20budget
Carbon budget
A carbon budget is a concept used in climate policy to help set emissions-reduction targets in a fair and effective way. It sets out the "maximum amount of cumulative net global anthropogenic carbon dioxide (CO2) emissions that would result in limiting global warming to a given level". It can be expressed relative to the pre-industrial period (the year 1750), in which case it is the total carbon budget, or from a recent specified date onwards, in which case it is the remaining carbon budget. A carbon budget that will keep global warming below a specified temperature limit is also called an emissions budget, an emissions quota, or allowable emissions. Apart from limiting the global temperature increase, another objective of such an emissions budget can be to limit sea-level rise. Scientists combine estimates of various contributing factors to calculate the carbon budget. The estimates take into account the available scientific evidence as well as value judgments or choices. Global carbon budgets can be further sub-divided into national emissions budgets, which can help countries set their own emission goals. Emissions budgets indicate a finite amount of carbon dioxide that can be emitted over time before resulting in dangerous levels of global warming. The change in global temperature is independent of the source of these emissions and is largely independent of their timing. To translate global carbon budgets to the country level, a set of value judgments has to be made on how to distribute the remaining carbon budget over all the different countries. This should take into account aspects of equity and fairness between countries as well as other methodological choices. There are many differences between nations, such as population size, level of industrialisation, historic emissions, and mitigation capabilities. For this reason, scientists have attempted to allocate global carbon budgets among countries using various principles of equity. Definition The IPCC Sixth Assessment Report defines the carbon budget through the following two concepts: "An assessment of carbon cycle sources and sinks on a global level, through the synthesis of evidence for fossil fuel and cement emissions, emissions and removals associated with land use and land-use change, ocean and natural land sources and sinks of carbon dioxide (CO2), and the resulting change in atmospheric CO2 concentration. This is referred to as the global carbon budget."; or "The maximum amount of cumulative net global anthropogenic CO2 emissions that would result in limiting global warming to a given level with a given probability, taking into account the effect of other anthropogenic climate forcers. This is referred to as the total carbon budget when expressed starting from the pre-industrial period, and as the remaining carbon budget when expressed from a recent specified date." Global carbon budgets can be further divided into national emissions budgets, so that countries can set specific climate mitigation goals. An emissions budget may be distinguished from an emissions target: an emissions target may be set internationally or nationally in accordance with objectives other than a specific global temperature, and is commonly applied to the annual emissions in a single year.
Estimations Recent and currently remaining carbon budget Several organisations provide annual updates to the remaining carbon budget, including the Global Carbon Project, the Mercator Research Institute on Global Commons and Climate Change (MCC) and the CONSTRAIN project. In March 2022, before formal publication of the "Global Carbon Budget 2021" preprint, scientists reported, based on Carbon Monitor (CM) data, that after the record-level declines of 2020 caused by the COVID-19 pandemic, global emissions rebounded sharply by 4.8% in 2021, indicating that at the current trajectory the carbon budget for a two-thirds likelihood of limiting warming to 1.5 °C would be used up within 9.5 years. In April 2022, the now reviewed and officially published Global Carbon Budget 2021 concluded that fossil emissions rebounded from pandemic levels by around +4.8% relative to 2020 emissions, returning to 2019 levels. It identifies three major issues for improving the reliable accuracy of monitoring; shows that China and India surpassed 2019 levels (by 5.7% and 3.2%) while the EU and the US stayed beneath 2019 levels (by 5.3% and 4.5%); quantifies various changes and trends; for the first time provides models' estimates that are linked to the official country GHG inventory reporting; and suggests that the remaining carbon budget as of 1 January 2022 for a 50% likelihood of limiting global warming to 1.5 °C (albeit with a temporary exceedance to be expected) is 120 GtC (420 Gt CO2), or 11 years of emissions at 2021 levels. This does not mean that 11 years likely remain for cutting emissions; rather, it means that if emissions stayed constant at 2021 levels instead of increasing, 11 years of such emissions would exhaust the budget, in the hypothetical scenario that all emissions then suddenly ceased in the twelfth year. (The 50% likelihood may be regarded as a kind of minimum requirement, as lower likelihoods would make the 1.5 °C goal "unlikely".) Moreover, other trackers show, or highlight, different amounts of carbon budget left: the MCC, for example, as of May 2022 showed "7 years 1 month left", and different likelihoods have different carbon budgets – an 83% likelihood would mean 6.6 ± 0.1 years left (ending in 2028) according to CM data. In October 2023, a group of researchers updated the carbon budget to include the CO2 emitted during 2020–2022 and new findings about the role of the reduced presence of polluting particles in the atmosphere. They found that, starting from January 2023, 250 Gt CO2 – about six years of emissions at the current level – can be emitted for a 50% chance of staying below 1.5 °C; reaching this target would require humanity to reach zero CO2 emissions by the year 2034. For a 50% chance of staying below 2 °C, humanity can emit 1,220 Gt CO2, or about 30 years of emissions at the current level.
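The "11 years of 2021 emissions" arithmetic quoted above is a simple division of budget by emission rate. A sketch; the annual-emissions figure is an assumption consistent with the numbers in the text, not an official statistic:

```python
remaining_budget_gtco2 = 420.0   # for a 50% chance of staying below 1.5 °C (from 2022)
annual_emissions_gtco2 = 37.0    # assumed global CO2 emissions, roughly at the 2021 level

print(f"~{remaining_budget_gtco2 / annual_emissions_gtco2:.1f} years left "
      f"at constant emissions")  # ~11.4 years

# Unit note: 1 GtC corresponds to 44/12 ~ 3.67 Gt CO2 (mass ratio of CO2 to carbon),
# which is how the 120 GtC figure above maps to roughly 420-440 Gt CO2.
print(f"120 GtC = {120 * 44 / 12:.0f} Gt CO2")
```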
Scientists assess the size of remaining carbon budgets using estimates of: past warming caused by human activities; the amount of warming per cumulative unit of CO2 emissions (also known as the transient climate response to cumulative emissions of carbon dioxide, or TCRE); the amount of warming that could still occur once all emissions of CO2 are halted (known as the zero emissions commitment); and the impact of Earth system feedbacks that would otherwise not be covered. The estimates vary according to the global temperature target that is chosen, the probability of staying below that target, and the emission of other non-CO2 greenhouse gases (GHGs). This approach was first applied in the 2018 Special Report on Global Warming of 1.5 °C by the IPCC, and was also used in its 2021 Working Group I contribution to the Sixth Assessment Report. Carbon budget estimates depend on the likelihood or probability of avoiding a temperature limit, and on the warming projected to be caused by non-CO2 emissions. These estimates assume that non-CO2 emissions are also reduced in line with deep decarbonisation scenarios that reach global net zero emissions. Carbon budget estimates thus depend on how successful society is in reducing non-CO2 emissions together with carbon dioxide emissions. Scientists have estimated that remaining carbon budgets can be 220 GtCO2 higher or lower depending on how successfully non-CO2 emissions are reduced.
National emissions budgets
Carbon budgets apply to the global level. To translate these global carbon budgets to the country level, a set of value judgments has to be made on how to distribute the total and remaining carbon budget. In light of the many differences between nations, including but not limited to population, level of industrialisation, national emissions histories, and mitigation capabilities, scientists have attempted to allocate global carbon budgets among countries using methods that follow various principles of equity. Allocating national emissions budgets is comparable to sharing the effort to reduce global emissions, and is underpinned by assumptions about state-level responsibility for climate change. Many authors have conducted quantitative analyses that allocate emissions budgets, often simultaneously addressing disparities in historical GHG emissions between nations. One guiding principle used to allocate global emissions budgets to nations is the principle of "common but differentiated responsibilities and respective capabilities" included in the United Nations Framework Convention on Climate Change (UNFCCC). This principle is not defined in further detail in the UNFCCC but is broadly understood to recognize nations' different cumulative historical contributions to global emissions as well as their different development stages. From this perspective, those countries with greater emissions during a set time period (for example, from the pre-industrial era to the present) are the most responsible for addressing excess emissions, as are countries that are richer. Thus, their national emissions budgets have to be smaller than those of countries that have polluted less in the past, or are poorer. The concept of national historical responsibility for climate change has prevailed in the literature since the early 1990s and has been part of the key international agreements on climate change (the UNFCCC, the Kyoto Protocol and the Paris Agreement).
Consequently, those countries with the highest cumulative historical emissions have the most responsibility to take the strongest actions and to help developing countries mitigate their emissions and adapt to climate change. This principle is recognized in international treaties and has been part of the diplomatic strategies of developing countries, which argue that they need larger emissions budgets to reduce inequity and achieve sustainable development. Another common equity principle for calculating national emissions budgets is the "egalitarian" principle. This principle stipulates that individuals should have equal rights, and that emissions budgets should therefore be distributed proportionally according to state populations. Some scientists have accordingly argued for the use of national per-capita emissions in national emissions budget calculations. This principle may be favoured by nations with larger or rapidly growing populations, but it raises the question of whether individuals can have a right to pollute. A third equity principle that has been employed in national budget calculations considers national sovereignty. The "sovereignty" principle highlights the equal right of nations to pollute. The grandfathering method for calculating national emissions budgets uses this principle. Grandfathering allocates these budgets proportionally according to emissions in a particular base year, and has been used under international regimes such as the Kyoto Protocol and the early phase of the European Union Emissions Trading Scheme (EU ETS). This principle is often favoured by developed countries, as it allocates larger emissions budgets to them. However, recent publications argue that grandfathering is unsupported as an equity principle, as it creates "cascading biases" against poorer states and is not a "standard of equity". Other scholars have highlighted that "to treat states as the owners of emission rights has morally problematic consequences".
Pathways to stay within carbon budget
The steps that can be taken to stay within one's carbon budget are explained within the concept of climate change mitigation.
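To make the contrast between the egalitarian and grandfathering principles discussed above concrete, here is a toy allocation in Python; the two countries, their populations, base-year emissions, and the global budget are all invented for illustration:

# Invented illustrative data: population (millions) and base-year emissions (GtCO2/yr).
countries = {
    "Populandia": {"population": 1200, "base_emissions": 3.0},
    "Richland": {"population": 300, "base_emissions": 5.0},
}
global_budget = 500.0  # hypothetical remaining global budget, GtCO2

total_pop = sum(c["population"] for c in countries.values())
total_emis = sum(c["base_emissions"] for c in countries.values())

for name, c in countries.items():
    egalitarian = global_budget * c["population"] / total_pop        # per-capita shares
    grandfathered = global_budget * c["base_emissions"] / total_emis  # base-year shares
    print(f"{name}: egalitarian {egalitarian:.0f} GtCO2, grandfathering {grandfathered:.0f} GtCO2")

Grandfathering hands the historically high-emitting country the larger share, while the egalitarian rule favours the populous one, which is why the choice of principle is contested.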
Physical sciences
Climate change
Earth science
47774240
https://en.wikipedia.org/wiki/Homo%20naledi
Homo naledi
Homo naledi is an extinct species of archaic human discovered in 2013 in the Rising Star Cave system, Gauteng province, South Africa (see Cradle of Humankind), dating to the Middle Pleistocene 335,000–236,000 years ago. The initial discovery comprises 1,550 specimens of bone, representing 737 different skeletal elements, and at least 15 different individuals. Despite this exceptionally high number of specimens, their classification with other Homo species remains unclear. Along with similarities to contemporary Homo, they share several characteristics with the ancestral Australopithecus as well as early Homo (mosaic evolution), most notably a small cranial capacity of 465–610 cm3 (28.4–37.2 cu in), compared with 1,270–1,330 cm3 (78–81 cu in) in modern humans. They are estimated to have averaged in height and in weight, yielding a small relative brain size (encephalization quotient) of 4.5. H. naledi brain anatomy seems to have been similar to that of contemporary Homo, which could indicate comparable cognitive complexity. The persistence of small-brained humans for so long in the midst of bigger-brained contemporaries revises the previous conception that a larger brain would necessarily lead to an evolutionary advantage, and their mosaic anatomy greatly expands the known range of variation for the genus. H. naledi anatomy indicates that, although they were capable of long-distance travel with a humanlike stride and gait, they were more arboreal than other Homo, better adapted to climbing and suspensory behaviour in trees than to endurance running. Tooth anatomy suggests consumption of gritty foods covered in particulates such as dust or dirt. Although they have not been associated with stone tools or any indication of material culture, they appear to have been dexterous enough to produce and handle tools, and therefore may have manufactured the Early or Middle Stone Age industries found in excavations near their fossils, since no other human species in the vicinity at that time has been discovered. It has also been controversially postulated that these individuals were buried deliberately by being carried into and placed in the chamber. Some researchers suggest that H. naledi may also have carved crosshatched rock signs in a passage to what could be a burial chamber, but many paleontologists question this theory.
Discovery
On 13 September 2013, while exploring the Rising Star Cave system in the Cradle of Humankind, South Africa, cavers Rick Hunter and Steven Tucker found hominin fossils at the bottom of the Dinaledi Chamber. On 24 September, they returned to the chamber and took photographs that they showed to South African palaeoanthropologists Pedro Boshoff and Lee Rogers Berger on 1 October. Berger assembled an excavation team that included Hunter and Tucker, the so-called "Underground Astronauts". The chamber had been entered at least once before, by cavers in the early 1990s. They rearranged some bones and may have caused further damage, although much of the floor in the chamber had not been walked on prior to 2013. The site lies about from the main entrance, at the bottom of a vertical drop, and the long main passage is only at its narrowest. In total, more than 1,550 pieces of bone belonging to at least fifteen individuals (nine immature and six adult) have been recovered from the clay-rich sediments. Berger and colleagues published the findings in 2015.
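The encephalization quotient cited in the lead is brain mass relative to the brain mass expected for an animal of the same body mass. The article does not state which scaling underlies the 4.5 figure; purely as an illustration, one common formulation (Jerison's) is

\mathrm{EQ} = \frac{E}{0.12\,P^{2/3}}

where E is brain mass and P is body mass, both in grams, so an EQ above 1 indicates a brain larger than expected for the animal's body size.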
The fossils represent 737 anatomical elements – including portions of the skull, jaw, ribs, teeth, limbs, and inner ear bones – from old, adult, young, and infantile individuals. There are also some articulated or near-articulated elements, including the skull with the jaw bone, and nearly complete hands and feet. With individuals of both sexes across several age demographics, it became the richest assemblage of associated fossil hominins discovered in Africa. Aside from the Sima de los Huesos collection and later Neanderthal and modern human samples, the excavation site has the most comprehensive representation of skeletal elements across the lifespan, and from multiple individuals, in the hominin fossil record to date. The holotype specimen, DH1, comprises a male partial calvaria (top of the skull), partial maxilla, and nearly complete jawbone. The paratypes, DH2 through DH5, all comprise partial calvariae. Because the remains came from Rising Star Cave, in 2015 Berger and colleagues named the species Homo naledi, with the specific name meaning "star" in the Sotho language. The remains of at least three additional individuals (two adults and a child) were reported in the Lesedi Chamber of the cave by John Hawks and colleagues in 2017.
Classification
In 2017, the Dinaledi remains were dated to 335,000–236,000 years ago in the Middle Pleistocene, using electron spin resonance (ESR) and uranium–thorium (U-Th) dating on three teeth, and U-Th and paleomagnetic dating of the sediments they were deposited in. The fossils had previously been thought to date to 1–2 million years ago, because no similarly small-brained hominins had been known from such a recent date in Africa. The smaller-brained Homo floresiensis of Indonesia lived on an isolated island and, apparently, became extinct shortly after the arrival of modern humans. The ability of such a small-brained hominin to have survived for so long in the midst of bigger-brained Homo greatly revises previous conceptions of human evolution and the notion that a larger brain would necessarily lead to an evolutionary advantage. Their mosaic anatomy also greatly expands the range of variation for the genus. H. naledi is hypothesised to have branched off very early from contemporaneous Homo. It is unclear whether they branched off at approximately the time of H. habilis, H. rudolfensis, and A. sediba, are a sister taxon to H. erectus and the contemporaneous large-brained Homo, or are a sister taxon to the descendants of H. heidelbergensis (modern humans and Neanderthals). This would mean that they branched off from contemporary Homo before 900,000 years ago at the latest, and possibly as early as the Pliocene. It is also possible their ancestors speciated after an interbreeding event between Homo and late australopithecines. Comparison of skull features reveals that H. naledi has the closest affinities to H. erectus. It is unclear whether these H. naledi were an isolated population in the Cradle of Humankind, or ranged across Africa. If the latter, then several gracile hominin fossils from African sites that traditionally have been classified as late H. erectus might represent H. naledi specimens. Although an earlier study placed H. naledi as a late offshoot from H. erectus in the phylogenetic tree, a more recent study places it as an early offshoot.
Anatomy
Skull
Two male H.
naledi skulls from the Dinaledi chamber had cranial volumes of approximately , and two female skulls . A male H. naledi skull from the Lesedi chamber had a cranial volume of . The Dinaledi specimens are more similar to the cranial capacity of australopithecines. For comparison, H. erectus averaged approximately , and modern humans for males and females respectively. The Lesedi specimen is more within the range of H. habilis and H. e. georgicus. The encephalization quotient of H. naledi was estimated at 3.75, which is the same as that of the pygmy H. floresiensis, but notably smaller than that of all other Homo. Contemporary Homo were all above 6, H. e. georgicus at 3.55, and A. africanus at 3.81. It is unclear whether H. naledi inherited its small brain size from the last common Homo ancestor, or whether it evolved secondarily and more recently. The skull morphology is more similar to Homo, with a slenderer shape, the presence of temporal and occipital lobes of the brain, and reduced post-orbital constriction, with the skull not becoming narrower behind the eye sockets. The frontal lobe morphology is more or less the same in all Homo brains regardless of size, and differs from that of Australopithecus, a characteristic that has been implicated in the production of tools, the development of language, and sociality. Similarly to modern humans (but not to fossil hominins, including South African australopithecines, H. erectus, and Neanderthals), the permanent second molar of H. naledi erupted comparatively late in life, emerging alongside the premolars instead of before them, a characteristic that indicates a slower maturation unusually comparable to that of modern humans. The tooth formation rate of the front teeth is also most similar to modern humans. The overall size and shape of the molars most closely resemble those of three unidentified Homo specimens from the local Swartkrans site and the East African Koobi Fora site, and are similar in size (but not shape) to those of Pleistocene H. sapiens. The necks of the molars are proportionally similar to those of A. afarensis and Paranthropus. Unlike modern humans and contemporary Homo, H. naledi lacks several accessory dental features, and has a high frequency of individuals presenting main cusps, namely the metacone (midline on the tongue side) and hypocone (to the right on the lip side) on the second and third molars, and a Y-shaped hypoconulid (a ridge on the lip side toward the cheek) on all three molars. The premolars of H. naledi are characterised by a well-developed P3 and P4 metaconid, a strongly developed P3 mesial marginal ridge, a larger P3 than P4, and tall crowns, distinguishing them from the premolars of other Homo species. Nonetheless, H. naledi also has many dental similarities with contemporary Homo. The anvil (a middle ear bone) more resembles those of chimps, gorillas, and Paranthropus than Homo. Like H. habilis and H. erectus, H. naledi has a well-developed brow ridge with a fissure stretching across just above the ridge and, like H. erectus, a pronounced occipital bun. H. naledi has some facial similarities with H. rudolfensis.
Build
The H. naledi specimens are estimated to have, on average, stood approximately and weighed . This body mass is intermediate between what is typically seen in Australopithecus and Homo species. Like other Homo, female and male H. naledi were likely similar in size, with males on average only about 20% larger than females. A juvenile specimen, DH7, is skeletally consistent with a growth rate similar to the faster ape-like trajectories of MH1 (A.
sediba) and Turkana boy (H. ergaster). Because dental development is so similar to that of modern humans, a slower maturation rate is not completely out of the question. Using the faster growth rate, DH7 would have died at 8–11 years old, but using the slower growth rate, DH7 would have died at 11–15 years old. Concerning the spine, only the tenth and eleventh thoracic vertebrae (in the chest region) are preserved, presumably from a single individual; they are proportionally similar to those of contemporary Homo, although they are the smallest recorded of any hominin. The two transverse processes of the vertebrae, which jut out diagonally, are most similar to those of Neanderthals. The neural canals within are proportionally large, similar to those of modern humans, Neanderthals, and H. e. georgicus. The eleventh rib is straight like that of A. afarensis, and the twelfth rib is robust in cross-section like that of Neanderthals. Like in Neanderthals, the twelfth rib appears to have supported strong intercostal muscles above, and a strong quadratus lumborum muscle below. Unlike in Neanderthals, there was weak attachment to the diaphragm. Overall, this H. naledi specimen appears to have been small-bodied compared with other Homo species, although it is unclear whether this single specimen is representative of the species. The shoulders are more similar to those of australopithecines, with the shoulder blade situated higher on the back and farther from the midline, short clavicles, and little or no humeral torsion. Elevated shoulder and clavicle bones indicate a narrow chest. The pelvis and legs have features reminiscent of Australopithecus, including anteroposteriorly compressed (from front to back) femoral necks, mediolaterally compressed (from left to right) tibiae, and a somewhat circular fibular neck, which indicate a wide abdomen. This combination would preclude efficient endurance running in H. naledi, unlike in H. erectus and its descendants. Instead, H. naledi appears to have been more arboreal.
Limbs
The metacarpal bone of the thumb, which is used in holding and manipulating large objects, was well developed and had strong crests to support its opponens pollicis muscle, used in precision-pinch gripping, and its thenar muscles. This is more similar to other Homo than to Australopithecus. H. naledi appears to have had strong flexor pollicis longus muscles like modern humans, with humanlike palm and finger pads, which are important for forceful gripping between the thumb and fingers. Unlike in other Homo, the H. naledi thumb metacarpal joint is comparatively small relative to the thumb's length, and the thumb phalangeal joint is flattened. The distal thumb phalanx bone is robust, and proportionally more similar to those of H. habilis and P. robustus. The metacarpals of the other fingers share adaptations with modern humans and Neanderthals for cupping and manipulating objects, and the wrist joint is broadly similar to those of modern humans and Neanderthals. Conversely, the proximal phalanges are curved and almost identical to those of A. afarensis and H. habilis, which is interpreted as an adaptation for climbing and suspensory behaviour. Such curvature is more pronounced in adults than in juveniles, suggesting that adults climbed as much as or more than juveniles, and that this behaviour was common. The fingers are proportionally longer than those of any other fossil hominin, other than the arboreal Ardipithecus ramidus and a modern human specimen from Qafzeh cave, Israel, which is consistent with climbing behaviour. H.
naledi was a biped and stood upright. Like other Homo, they had strong insertions for the gluteus muscles, a well-defined linea aspera (a ridge running down the back of the femur), thick patellae, long tibiae, and gracile fibulae. These indicate that they were capable of long-distance travel. The H. naledi foot was similar to that of modern humans and other Homo, with adaptations for bipedalism and a humanlike gait. The heel bone has a low orientation, comparable to those of non-human great apes, and the ankle bone has a low declination, which possibly indicates that the foot would have been subtly stiffer during the stance phase of walking, before the foot pushed off the ground.
Pathology
The adult right mandible U.W. 101-1142 has a bony lesion, suggestive of a benign tumour. The individual would have experienced some swelling and localised discomfort, but the tumour's position near the medial pterygoid muscle (likely causing discomfort at the jaw hinge) may have impeded function of the muscle and changed the elevation of the right side of the jaw. Dental defects in H. naledi specimens during the 1.6–2.8 and 4.3–7.6 months of development were most likely caused by seasonal stressors. This may have been due to extreme summer and winter temperatures causing food scarcity. Minimum winter temperatures of the area average about , and can drop below freezing. Staying warm would have been difficult for an infant of the small-bodied H. naledi, and winters likely increased susceptibility to respiratory diseases. Environmental stressors are consistent with present-day flu seasons in South Africa peaking during winter, and paediatric diarrhoea hospitalisations being most frequent at the height of the rainy season in summer. Local hominins were likely preyed upon by large carnivores, such as lions, leopards, and hyaenas. There seems to be a distinct paucity of large carnivore remains from the northern end of the Cradle of Humankind, where Rising Star Cave is located, possibly because carnivores preferred the Blaaubank River to the south, which may have offered better hunting grounds with a greater abundance of large prey items. Alternatively, because many more sites are known in the south than in the north, carnivore spatial patterns may not be well represented by the fossil record (preservation bias).
Culture
Food
Dental chipping and wear indicate the habitual consumption of small hard objects, such as dirt and dust, and cup-shaped wear on the back teeth may have stemmed from gritty particles. These could have originated from unwashed roots and tubers. Alternatively, aridity could have stirred up particulates onto food items, coating food in dust. It is possible that they commonly ate larger hard items, such as seeds and nuts, but that these were processed into smaller pieces before consumption. H. naledi occupied a seemingly unique ecological niche distinct from previous South African hominins, including Australopithecus and Paranthropus. The teeth of all three species indicate that they needed to exert high shearing force to chew through perhaps plant or muscle fibres. The teeth of other Homo cannot produce such high forces, perhaps due to the use of food processing techniques such as cooking.
Technology
H. naledi could have produced Early Stone Age (Acheulean and possibly the earlier Oldowan) or Middle Stone Age industries, because they have the same adaptations of the hand as other human species that are implicated in tool production. H.
naledi is the only identified human species to have existed during the early Middle Stone Age of the Highveld region, South Africa, possibly indicating that this species manufactured and maintained this tradition at least during this time period. In this scenario, such industries and stone cutting techniques would have evolved independently several times among different Homo species and populations, or were transported over long distances by the inventors or their apprentices and taught.
Possible burials
Since the first publication of results from the Dinaledi Chamber, there has been scholarly debate on whether the fossils excavated from the cave provide evidence of H. naledi engaging in intentional burial activity. If proven true, the Dinaledi Chamber would be the oldest known hominin burial, predating the c. 78,000-year-old H. sapiens burial from Panga ya Saidi cave in Kenya by some 160,000 years. However, a lack of proof regarding the taphonomic, stratigraphic, and mineralogical claims made by the excavators has caused significant academic backlash. In 2015, excavating archaeologists Paul Dirks, Lee Berger, and their colleagues concluded that the bodies had to have been deliberately carried and placed into the chamber by people, because they appear to have been intact when they were first deposited in the chamber. They found no evidence of trauma from being dropped into the chamber, nor evidence of predation. Furthermore, the chamber is inaccessible to large predators, appears to be an isolated system, and has never been flooded. There is no hidden shaft through which people could have accidentally fallen in, and there is no evidence of some catastrophe that killed all the individuals inside the chamber. The excavating team stated that it is possible that the bodies were dropped down a chute and fell slowly due to the narrowness and irregularity of the path down. Thus, they concluded that, since natural forces were apparently not at play, the bodies must have been deliberately buried. Since the cave is unlit, those burying them would have required artificial light to navigate the cave. The archaeologists have reported finding evidence of fire which may support this claim, yet they had not published it as of July 2024. The excavators claim that the site was used repeatedly for burials, since the bodies were not all deposited at the same time. In 2016, paleoanthropologist Aurore Val countered that discounting natural forces for depositing the bodies is unjustified. She identified evidence of damage done by beetles, beetle larvae, and snails, which facilitate decomposition. Since the chamber does not present ideal conditions for snails and does not contain snail shells, she argued that decomposition began before deposition in the chamber, potentially discounting the excavators' claims of intentional burial. Invertebrate damage to the fossils was later confirmed by a 2021 analysis of a fragmentary skull, although this analysis also concludes that it is likely that "some" hominin agency was involved in the deposition of the bone fragments. In 2017, Dirks, Berger, and colleagues reaffirmed that there is no evidence of water flow into the cave and that it is more likely that the bodies were deliberately deposited into the chamber. They theorized that it is possible the H. naledi bones were deposited by contemporary Homo, such as the ancestors of modern humans, rather than by other H. naledi, but that the cultural behavior of burial practices is not impossible for H. naledi.
They proposed that placement in the chamber may have been done to remove decaying bodies from a settlement, to prevent scavengers, or as a consequence of social bonding and grief. During ongoing excavations in 2018, researchers began to hypothesize that Homo naledi engaged in burial practices. Also in 2018, anthropologist Charles Egeland and colleagues echoed Val's arguments and stated that there is insufficient evidence to conclude that such an early hominid species had developed a concept of an afterlife, as is often associated with burials. They said that the preservation of the Dinaledi individuals is similar to that of baboon carcasses that accumulate in larger caves, either through the natural death of cave-dwelling baboons or through leopards dragging carcasses into caves. After papers with new findings were rejected by an unspecified journal, Berger et al. published three papers in 2023 as unreviewed preprints, alongside a Netflix documentary titled Unknown: Cave of Bones. Critics argued that Berger et al. exploited eLife's new preprint publication model to garner attention. Even though the three 2023 articles have not undergone peer review, reviewer statements were published alongside them. The reviewer statements for all three articles were highly critical. One of the papers suggested that H. naledi buried their dead near carvings on the cave walls. The carvings include geometrical shapes and a symbol composed of two cross-hatched equals signs. Other paleoanthropologists, such as Michael Petraglia, criticized the causative link between the H. naledi fossils and the incisions, pointing out that, without dating, correlation is not causation. In June 2024, paleoanthropologist Kimberly K. Foecke and colleagues published a paper that found "deep structural issues with data analysis, visualization, and interpretation in addition to mischaracterization and mis-application of statistical methods in assessing data. We believe that the preprint represents an example of where data analysis has been heavily influenced by a presupposed narrative." A primary concern is that Berger's original geochemical soil analyses purported to show a difference between the soil directly surrounding the fossils and soil further away, evidence that the ground had been dug up: i.e., that the bodies were buried. Foecke et al. reanalyzed the soil and were unable to replicate Berger's findings, thus undermining a key piece of evidence in the burial hypothesis.
Gallery
Biology and health sciences
Homo
Biology
53475230
https://en.wikipedia.org/wiki/Home%20energy%20storage
Home energy storage
Home energy storage devices store electricity locally, for later consumption. Usually, energy is stored in lithium-ion batteries, controlled by intelligent software that handles charging and discharging cycles. Companies are also developing smaller flow battery technology for home use. As local energy storage technologies for home use, they are smaller relatives of battery-based grid energy storage and support the concept of distributed generation. When paired with on-site generation, they can virtually eliminate blackouts in an off-the-grid lifestyle. The stored energy commonly originates from on-site solar photovoltaic panels, generated during daylight hours, with the stored electricity consumed after sundown, when domestic energy demand peaks in homes unoccupied during the day. Small wind turbines are less common but still available for home use as a complement or alternative to solar panels.
Market trends
Automotive companies
There has been a trend of automotive companies cooperating with other leaders in the energy industry to develop home energy storage solutions. This is likely because much of the research and development that goes into powerful batteries has the potential to benefit both the automotive and residential industries. Manufacturers such as BMW, in its partnership with Solarwatt, and Nissan, in conjunction with Eaton, are strong examples of this trend. Additionally, BYD and Tesla market own-brand home energy storage devices to their customers. Although initially high costs attracted scrutiny, the home energy storage market is seeing an increase in revenue as prices continue to fall.
Tariffs
The units can also be programmed to exploit a differential tariff, which provides lower-priced energy during hours of low demand (seven hours from 12:30 am in the case of Britain's Economy 7 tariff), for consumption when prices are higher. Smart tariffs, stemming from the increasing prevalence of smart meters, will increasingly be paired with home energy storage devices to exploit low off-peak prices and avoid higher-priced energy at times of peak demand.
Advantages
Overcoming grid losses
Transmission of electrical power from power stations to population centres is inherently inefficient, due to transmission losses in electrical grids, particularly within power-hungry dense conurbations where power stations are harder to site. By allowing a greater proportion of on-site generated electricity to be consumed on-site, rather than exported to the energy grid, home energy storage devices can reduce the inefficiencies of grid transport.
Energy grid support
Home energy storage devices, when connected to a server via the internet, can theoretically be ordered to provide very short-term services to the energy grid:
Reduced peak-hour demand stress: provision of short-term demand response during periods of peak demand, reducing the need to inefficiently stand up short-term generation assets like diesel generators.
Frequency correction: the provision of ultra-short-term corrections to keep mains frequency within the tolerances required by regulators (e.g., 50 Hz or 60 Hz ± n%).
Reduced reliance on fossil fuels
Due to the above efficiencies, and their ability to boost the amount of solar energy consumed on-site, the devices reduce the amount of power generated using fossil fuels, namely natural gas, coal, oil and diesel.
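The tariff behaviour described above reduces to a simple scheduling rule: charge from the grid during the off-peak window, discharge to meet household load at other times. A minimal sketch in Python; the off-peak window follows the Economy 7 description above, while the capacity figure and decision thresholds are invented:

# Minimal sketch; capacity and thresholds are assumptions.
OFF_PEAK_HOURS = {0, 1, 2, 3, 4, 5, 6}  # roughly seven hours from 12:30 am (Economy 7)
CAPACITY_KWH = 10.0

def battery_action(hour, charge_kwh, load_kw):
    """Decide what the battery should do for a given hour of the day."""
    if hour in OFF_PEAK_HOURS and charge_kwh < CAPACITY_KWH:
        return "charge from grid at the cheap rate"
    if load_kw > 0 and charge_kwh > 0:
        return "discharge to supply household load"
    return "idle"

# Example: 7 pm, half-full battery, 2 kW of household demand.
print(battery_action(19, 5.0, 2.0))  # -> discharge to supply household load

A real controller would also forecast load and solar output, but the core arbitrage logic is just this time-of-day rule.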
Disadvantages
Environmental impact of batteries
Lithium-ion batteries, a popular choice due to their relatively high cycle life and lack of memory effect, are difficult to recycle. Lead-acid batteries are relatively easier to recycle and, due to the high resale value of the lead, 99% of those sold in the US get recycled. However, they have much shorter useful lives than a lithium-ion battery of similar capacity, due to a lower cycle life, which narrows the environmental-impact gap. In addition, lead is a toxic heavy metal and the sulfuric acid in the electrolyte has a high environmental impact.
Second life for electric vehicle batteries
To offset the environmental impact of batteries, some manufacturers extend the useful life of used batteries taken from electric vehicles at the point where the cells no longer hold sufficient charge. Though considered end-of-life for electric vehicles, the batteries function satisfactorily in home energy storage devices. Manufacturers supporting this include Nissan, BMW and Powervault.
Salt water batteries
Home energy storage devices can be paired with salt water batteries, which have a lower environmental impact due to their lack of toxic heavy metals and ease of recyclability. Saltwater batteries are no longer produced on a commercial level after the bankruptcy of Aquion Energy in March 2017.
Grid defection
With an increasing number of consumers choosing solar panels that feed energy solely to their home, together with home batteries, grid defection has continued to grow. As the number of people off grid increases, the cost of the grid will be spread across fewer consumers, making "the incentive to go off-grid only grow". This is seen as an increasingly large disadvantage of home energy storage, as it could lead to the abandonment of the large infrastructure network created to maintain grids, price inflation for those on grid, and a hindrance to the energy transition.
Other forms of storage
Storing energy in batteries is far from the only option. Multiple forms of energy storage exist, such as flywheels, hydroelectric storage, and thermal energy storage.
Pico hydro (hydroelectric)
Using a pumped-storage system of cisterns for energy storage and small generators, pico hydro generation may also be effective for "closed loop" home energy generation systems.
Thermal energy storage
A storage heater or heat bank (Australia) is an electrical heater which stores thermal energy during the evening, or at night when electricity is available at lower cost, and releases the heat during the day as required. Accumulators, like a hot water storage tank, are another type of storage heater but specifically store hot water for later use. Some systems may be portable or partially portable for easier transportation to another location, or use during transportation or travel.
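The grid defection dynamic described above is essentially fixed-cost arithmetic: the same network costs divided among fewer grid-connected customers raise each remaining customer's bill, which in turn strengthens the incentive to defect. A toy illustration in Python, with all figures invented:

# Invented figures for illustration only.
fixed_grid_cost = 1_000_000_000.0  # annual network cost to recover (currency units)
customers = 1_000_000              # grid-connected customers today

for defection in (0.0, 0.10, 0.25):
    remaining = customers * (1 - defection)
    print(f"{defection:.0%} defection: {fixed_grid_cost / remaining:,.0f} per remaining customer")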
Technology
Energy storage
null
47785595
https://en.wikipedia.org/wiki/Adzuki%20bean
Adzuki bean
Vigna angularis, also known as the adzuki bean, azuki bean, aduki bean, red bean, or red mung bean, is an annual vine widely cultivated throughout East Asia for its small (approximately long) bean. The cultivars most familiar in East Asia have a uniform red color, but there are white, black, gray, and variously mottled varieties. Scientists presume Vigna angularis var. nipponensis is the progenitor.
Origin and diversity
Speciation and domestication
The wild ancestor of cultivated adzuki bean is probably Vigna angularis var. nipponensis, which is distributed across East Asia. Speciation between Vigna angularis var. nipponensis and Vigna angularis var. angularis occurred around years ago. Archaeologists estimate it was domesticated around 3000 BC. However, adzuki beans, as well as soybeans, dating from 3000 BC to 2000 BC appear to still be largely within the wild size range. Enlarged seeds occurred during the later Bronze Age or Iron Age, periods with plough use. Domestication of adzuki beans resulted in a trade-off between yield and seed size. Cultivated adzuki beans have fewer but longer pods, fewer but larger seeds, a shorter stature, and also a smaller overall seed yield than wild forms. The exact place of domestication is not known; multiple domestication origins in East Asia have been suggested. Seed remains of adzuki beans discovered in the Central Highlands of Japan were dated to c. 6,000–4,000 BP, and represent to date the oldest evidence for its cultivation, supporting an origin in Japan. Evidence suggests that "wild azuki bean have been domesticated and cultivated in Japan for over 10,000 years".
Breeding
In Japan, the adzuki bean was one of the first crops subjected to scientific plant breeding. Important breeding traits are yield, pureness of the bean colour, and the maturing time. Separate cultivars with smaller seeds and higher biomass are bred for fodder production and as green manure. Locally adapted cultivars are available in China, Japan, Korea, and Taiwan. More than 300 cultivars/landraces/breeding lines are registered in Japan. Large germplasm collections of adzuki bean are held in China, at the Institute of Crop Germplasm Resources (CAAS), Beijing, with more than 3,700 accessions, and in Japan, at the Tokachi Agricultural Experiment Station, Hokkaido, with about 2,500 accessions.
Weed forms
Weed forms of adzuki bean frequently occur in Japan. The wide spread of weed forms is due to adaptation to human-disturbed habitats, escapes of old cultivars, and natural establishment from derivatives of hybrids between cultivars and wild forms. In contrast to wild forms, the weed forms of adzuki bean are used as a substitute for the cultivated form and consumed as sweet beans, especially if cultivated adzuki beans are attacked by pests. However, in cultivated gardens the weed form is recognized as contamination and lowers the seed quality of adzuki cultivars.
Names
The name adzuki is a transliteration of the Japanese name as it was spelled according to historical kana orthography. The names azuki and aduki reflect the modern pronunciation (hiragana: ). All are meant to represent the same Modern Japanese pronunciation, azuki. Japanese also has a Chinese loanword, , which means "small bean", its counterpart being the soybean. It is common to write it in kanji but pronounce it as azuki, an example of jukujikun. In China, the corresponding name () is still used in botanical or agricultural parlance.
In everyday Chinese, the more common terms are () and (), both meaning "red bean", because almost all Chinese cultivars are uniformly red. In English the beans are often described as "red beans" in the context of Chinese cuisine, such as with red bean paste. In Korean, adzuki beans are called (), which contrasts with (, "bean") rather than being considered a type of it. ("beans") without qualifiers usually means soybeans. In Vietnamese it is called (literally: red bean). In some parts of India, the beans are referred to as "red chori". In Marathi, it is known as (), literally meaning 'red cowpea'. In Iraq its name is (), meaning "red cowpeas".
Cultivation
Area and yield
The adzuki bean is mainly cultivated in China (), Japan (), South Korea (), and Taiwan () (data published 2006). The bean is also grown commercially in the US, South America, India, New Zealand, Congo, and Angola. In Japan, the adzuki bean is the second most important legume after the soy bean. In 1998, the annual crop yield was around . In 2006, Japan consumed about /year. Japan is the largest importer of adzuki beans. The imports come from China, Korea, Colombia, Taiwan, the US, Thailand, and Canada.
Ecological requirements
The optimal temperature range for adzuki bean growth is between . The crop is not frost-hardy and needs soil temperatures above ( optimal) for germination. Hot temperatures stimulate vegetative growth and are therefore less favorable for seed production. The adzuki bean is usually not irrigated. Annual rainfall ranges from in areas where the bean is grown. The plant can withstand drought, but a severe reduction in yield is expected. Cultivation of the adzuki bean is possible on preferably well-drained soils with pH 5–7.5. Fertilizer application differs widely depending on expected yield but is generally similar to soybean. Due to nodulation with rhizobia, nitrogen fixation of up to is possible.
Production
The seeds are sown in depth, in rows apart and within the row. Seeds are rarely sown by broadcasting. The amount of seed ranges between . Growth of the crop is slow; therefore, weed control is crucial, mainly between germination and flowering. Cultivation systems differ largely among different countries. In China adzuki bean is often grown in intercrops with maize, sorghum and millet, while in Japan the bean is grown in crop rotations. The beans should not be harvested as long as the moisture content of the seed is higher than 16%.
Pests and diseases
Fungal and bacterial diseases of the adzuki bean are powdery mildew, brown stem rot, and bacterial blight. Furthermore, pests such as the adzuki pod worm, Japanese butterbur borer, and cutworm attack the crop. The bean weevil is an important storage pest.
Botany
The description of the adzuki bean can vary between authors, because there are both wild and cultivated forms of the plant. The adzuki bean is an annual, rarely biennial, bushy erect or twining herb usually between high. There exist climbing or prostrate forms of the plant. The stem is normally green and sparsely pilose.
Roots
The adzuki bean has a taproot type of root system that can reach a depth of from the point of seed germination.
Leaves
The leaves of the adzuki bean are trifoliate, pinnate and arranged alternately along the stem on a long petiole. Leaflets are ovate and about long and wide.
Flowers
Adzuki flowers are papilionaceous and bright yellow. The inflorescence is an axillary false raceme consisting of six to ten (two to twenty) flowers.
Fruits
Adzuki pods are smooth, cylindrical and thin-walled. The colour of the pods is green, turning white to grey as they mature. The size is between , with 2 to 14 seeds per pod. Pod shattering during seed ripening and harvesting can be a problem under certain conditions.
Seeds
The seeds are smooth and subcylindric, with a length of , width of , and thickness of . The thousand kernel weight is between 50 and 200 g. There are many different seed colours, from maroon to blue-black mottled with straw.
Physiology
The emergence of the seedlings is hypogeal and takes 7–20 days. Compared to other pulses, the growth of the plant is slow. Normally the adzuki plant reaches maturity between 80 and 120 days, depending on the cultivar and the environmental conditions. Flowering lasts 30–40 days. Commonly the plant self-pollinates, but cross-pollination also occurs.
Culinary uses
In East Asian cuisine, the adzuki bean is commonly sweetened before eating. In particular, it is often boiled with sugar, producing red bean paste, a very common ingredient in all of these cuisines. It is common to add flavoring to the bean paste, such as chestnut. Red bean paste is used in many Chinese dishes, such as tangyuan, zongzi, mooncakes, baozi, and red bean ice. It serves as a filling in Japanese sweets such as anpan, dorayaki, imagawayaki, manjū, monaka, anmitsu, taiyaki, and daifuku. A more liquid version, using adzuki beans boiled with sugar and a pinch of salt, produces a sweet dish called hong dou tang. Some East Asian cultures enjoy red bean paste as a filling or topping for various kinds of waffles, pastries, baked buns, or biscuits. Adzuki beans are commonly eaten sprouted, or boiled in a hot, tea-like drink. Traditionally in Japan, rice with adzuki beans (赤飯; sekihan) is cooked for auspicious occasions. Adzuki beans are used in amanattō and in ice cream, either as whole beans or as paste.
Nutritional information
Cooked adzuki beans are 66% water, 25% carbohydrates, including 7% dietary fiber, 8% protein, and contain negligible fat (table). In a 100-gram reference amount, cooked beans provide of food energy, a moderate to high content (10% or more of the Daily Value, DV) of the B vitamin folate (30% DV), and several dietary minerals (11% to 27% DV, table).
Gallery
Biology and health sciences
Pulses
Plants
64603416
https://en.wikipedia.org/wiki/W%C4%93t%C4%81
Wētā
Wētā (also spelled weta in English) is the common name for a group of about 100 insect species in the families Anostostomatidae and Rhaphidophoridae endemic to New Zealand. They are giant flightless crickets, and some are among the heaviest insects in the world. Generally nocturnal, most small species are carnivores and scavengers, while the larger species are herbivorous. Although some endemic birds (and tuatara) likely prey on them, wētā are disproportionately preyed upon by introduced mammals, and some species are now critically endangered.
Name
Wētā is a loanword, from the Māori-language word wētā, which refers to this whole group of large insects; some types of wētā have a specific Māori name. In New Zealand English, it is spelled either "weta" or "wētā", although the form with macrons is increasingly common in formal writing, as the Māori word weta (without macrons) instead means "filth or excrement". Words of Māori origin in New Zealand English are both singular and plural.
General characteristics
Many wētā are large by insect standards, and some species are among the largest and heaviest in the world. Their physical appearance is like that of a katydid, long-horned grasshopper, or cricket, but the hind legs are enlarged and usually very spiny. Many are wingless. Because they can cope with variations in temperature, wētā are found in a variety of environments, including alpine areas, forests, grasslands, caves, shrublands and urban gardens. They are nocturnal, and all New Zealand species are flightless but closely related to winged species in Australia. Different species have different diets. Most wētā are predators or omnivores preying on other invertebrates, but the tree and giant wētā eat mostly lichens, leaves, flowers, seed-heads, and fruit. Male giant wētā (Deinacrida spp.) are smaller than females, and they show scramble competition for mates. Tree wētā (Hemideina spp.) males have larger heads than females and a polygynandrous mating system with harem formation and male-male competition for mates. Ground wētā (Hemiandrus spp.) males provide nuptial food gifts when mating, and females of some species provide maternal care. Wētā eggs are laid in soil over the autumn and winter months and hatch the following spring. A wētā takes between one and two years to reach adulthood, and over this time will shed its skin around ten times as it grows. Wētā can bite with powerful mandibles. Tree wētā bites are painful but not particularly common. Tree wētā lift their hind legs in a defence display to look large and spiky, but they tend to retreat if given the chance. Tree wētā raise their hind legs into the air in warning to foes, and then bring them down to stridulate. Pegs or ridges on the side of the abdomen are struck by a patch of fine pegs on the inner surface of the hind legs (femora), and this action makes a distinctive sound. These actions are also used in defence of a gallery by competing males. The female wētā looks as if she has a stinger, but it is an ovipositor, which enables her to lay eggs inside rotting or mossy wood or soil. Some species of Hemiandrus have very short ovipositors, related perhaps to their burrowing into soil and laying their eggs in a special chamber at the end of the burrow.
Taxonomy and evolution
Fossilised orthopterans have been found in Russia, China, South Africa, Australia, and New Zealand, but the relationships are open to different interpretations by scientists. Most wētā of both families are found in the Southern Hemisphere.
Wētā were probably present in ancient Gondwana before Zealandia separated from it. Rhaphidophoridae dispersed over sea to colonise the Chatham Islands (Rēkohu) and the Auckland (Motu Maha), Snares (Tini Heke), Bounty (Moutere Hauiri) and Campbell (Motu Ihupuku) Islands. The present species might have resulted from a recent radiation, which conflicts with earlier ideas about dispersal of wētā forebears around the Southern Hemisphere (Wallis et al. 2000). Giant, tree, ground, and tusked wētā are all members of the family Anostostomatidae (formerly in the Stenopelmatidae, but recently separated). Cave wētā are better referred to as tokoriro, since they are members of the family Rhaphidophoridae, called cave crickets or camel crickets elsewhere, in a different ensiferan superfamily. As of 2014, there were 19 genera of tokoriro in New Zealand, and their taxonomy is under review. Seven new species of South Island cave wētā were named and described in 2019, including Pleioplectron rodmorrisi.
Species
Giant wētā
The 11 species of giant wētā (Deinacrida spp.) are endemic to New Zealand and legally protected. Giant wētā (wētā punga in Māori) are large by insect standards. They are heavy herbivorous Orthoptera with a body length of up to , excluding their long legs and antennae, and weigh about 20–30 g. A captive giant wētā (Deinacrida heteracantha) filled with eggs reached a record 70 g, making it one of the heaviest documented insects in the world and heavier than a sparrow. The largest species of giant wētā is the Little Barrier Island wētā, also known as the wētāpunga. Giant wētā tend to be less social and more passive than tree wētā (Hemideina spp.). They are classified in the genus Deinacrida, which is Greek for "terrible grasshopper". They are found primarily on small islands off the coast of the main islands, or at high elevation on New Zealand's South Island (e.g. the alpine scree wētā D. connectens), and are sometimes considered examples of island gigantism.
Tree wētā
Tree wētā (Hemideina) are commonly encountered in suburban settings in New Zealand's North Island. They are up to 40 mm long and most commonly live in holes in trees formed by beetle and moth larvae, or where rot has set in after a twig has broken off. The hole, called a gallery, is maintained by the wētā, and any growth of the bark surrounding the opening is chewed away. They readily occupy a preformed gallery in a piece of wood (a "wētā motel") and can be kept in a suburban garden as pets. A gallery might house a harem of up to 10 adult females and one male. Tree wētā are nocturnal. Their diet consists of plants and small insects. The males have much larger jaws than the females, though both sexes will stridulate and bite when threatened. The seven species of tree wētā (pūtangatanga in Māori) are:
The Auckland tree wētā Hemideina thoracica, found throughout the North Island apart from the Wellington-Wairarapa region; within this range are seven chromosome races.
The Wellington tree wētā Hemideina crassidens, which occupies Wellington, the Wairarapa, the northern parts of the South Island, and the West Coast.
Hemideina trewicki, found in Hawke's Bay.
H. femorata, found in Marlborough and Canterbury.
The rare H. ricta, which occurs on Banks Peninsula.
The West Coast bush wētā H. broughi, which largely overlaps with the Wellington tree wētā in Nelson and the northern portion of the West Coast.
H. maori, the mountain stone wētā, which lives above the tree line in the Southern Alps.
The North Island species each have a distinctive set of chromosomes (karyotype). Where the territories of species overlap, as with the related species H. femorata and H. ricta on Banks Peninsula, they may interbreed, although the offspring are sterile.
Tusked wētā
Tusked wētā are characterised by long, curved tusks projecting forward from the male's mandibles. The tusks are used in male-to-male combat, not for biting. Female tusked wētā look similar to ground wētā. Tusked wētā are mainly carnivorous, eating worms and insects. There are three known species in two different subfamilies: the Northland tusked wētā Anisoura nicobarica (originally described as a ground wētā, Hemiandrus monstrosus), in the subfamily Deinacridinae; the Mercury Islands tusked wētā Motuweta isolata; and the most recently discovered, the Raukumara tusked wētā Motuweta riparia. Motuweta is in the same subfamily as the ground wētā, Anostostomatinae. The Northland tusked wētā lives in tree holes, similar to tree wētā. The Mercury Islands or Middle Island tusked wētā was discovered in 1970. It is a ground-dwelling wētā, entombing itself in shallow burrows during the day, and is critically endangered: a Department of Conservation breeding programme has established new colonies on other islands in the Mercury group. The Raukumara tusked wētā was discovered in 1996, in the Raukumara Range near the Bay of Plenty. It has the unusual habit of diving into streams and hiding underwater for up to three minutes if threatened.
Ground wētā
Ground wētā are classified into the two genera Hemiandrus and Anderus. The species in these two genera are each more closely related to winged Australian species than they are to each other. About 30 species of ground wētā occur in New Zealand, and several similar (undescribed) species are found in Australia. They are also very like the Californian Cnemotettix, a similarity perhaps due to their very similar habits and habitat. Nineteen Hemiandrus species have been described from New Zealand, and other distinct populations require further study. They hide in burrows in the ground during the day, and those that live in open ground (e.g., H. focalis, H. maia) conceal their exit holes with a specially made perforated door. During the night, ground wētā hunt invertebrate prey and eat fruit. Most female ground wētā have long ovipositors (e.g. H. maculifrons), but some have short ovipositors and maternal care (e.g. H. maia, H. pallitarsis).
Cave wētā
The 60 species of cave wētā, or tokoriro, are only very distant relatives of the other types of wētā, being classified in several genera of the subfamily Macropathinae in the family Rhaphidophoridae. They have extra-long antennae, and may have long, slender legs and a passive demeanour. Although they have no hearing organs on their front legs like species of Hemideina and Deinacrida, some (such as Talitropsis) are very sensitive to ground vibrations, sensed through pads on their feet. Specialised hairs on the cerci and organs on the antennae are also sensitive to low-frequency vibrations in the air. Although some do live in caves, most species (e.g. Talitropsis sedilloti) live in the forest among leaf litter, logs, under bark (e.g. Isoplectron), inside tree holes (e.g. Neonetus sp.) and amongst rocks in the mountains (e.g. Pharmacus). Cave-dwelling species may be active within the confines of their caves during the daytime, and individuals close to cave entrances venture outside at night.
Conservation
Although wētā had native predators in the form of birds (especially the weka and kiwi), reptiles, and bats before the arrival of humans, introduced species such as cats, hedgehogs, rats (including kiore) and mustelids have caused a sharp increase in the rate of predation. Wētā are also vulnerable to habitat destruction caused by humans, and to modification of their habitat caused by introduced browsers. New Zealand's Department of Conservation considers 16 of the more than 100 species to be at risk. Programmes to prevent extinctions have been implemented since the 1970s. Some especially endangered species are tracked by radio beacons.
In popular culture
New Zealanders Peter Jackson, Richard Taylor, and Jamie Selkirk founded the visual effects company Weta Digital (now known as Wētā FX), naming it after the insect. One of Jackson's films, King Kong, includes among the Skull Island fauna oversized versions of giant wētā, referred to with the scientific name "Deinacrida rex" or "Wētā-rex".
Biology and health sciences
Orthoptera
Animals