| id | url | title | text | topic | section | sublist |
|---|---|---|---|---|---|---|
550104 | https://en.wikipedia.org/wiki/Bowen%27s%20reaction%20series | Bowen's reaction series | Within the field of geology, Bowen's reaction series is the work of the Canadian petrologist Norman L. Bowen, who summarized, based on experiments and observations of natural rocks, the sequence of crystallization of common silicate minerals from typical basaltic magma undergoing fractional crystallization (i.e. crystallization wherein early-formed crystals are removed from the magma by crystal settling, leaving behind a liquid of slightly different composition). Bowen's reaction series is able to explain why certain types of minerals tend to be found together while others are almost never associated with one another. He experimented in the early 1900s with powdered rock material that was heated until it melted and then allowed to cool to a target temperature whereupon he observed the types of minerals that formed in the rocks produced. He repeated this process with progressively cooler temperatures and the results he obtained led him to formulate his reaction series which is still accepted today as the idealized progression of minerals produced by cooling basaltic magma that undergoes fractional crystallization. Based upon Bowen's work, one can infer from the minerals present in a rock the relative conditions under which the material had formed.
Description
The series is divided into two branches, the continuous (felsic minerals: the feldspars) and the discontinuous (mafic minerals). The minerals at the top of the accompanying illustration are the first to crystallize, so the temperature gradient can be read from high to low, with the high-temperature minerals at the top and the low-temperature ones at the bottom. The branch on the right of the illustration is the continuous one (a continuous solid solution of felsic minerals) and yields progressively more sodium-rich plagioclase as temperature falls. In the discontinuous series, mafic minerals such as olivine crystallize first, at higher temperature, as the magma cools. However, if the early crystals are not precipitated (settled) out, the bulk composition of the system does not change, and as the magma cools further the olivine reacts with the melt and is converted to pyroxene.
Since the surface of the Earth is a low-temperature environment compared to the zones of rock formation, the chart also reflects the relative stability of minerals, with the ones at the bottom being most stable and the ones at the top being quickest to weather, a sequence known as the Goldich dissolution series. This is because minerals are most stable under temperature and pressure conditions closest to those under which they formed.
| Physical sciences | Petrology | Earth science |
550137 | https://en.wikipedia.org/wiki/Elliptic%20operator | Elliptic operator | In the theory of partial differential equations, elliptic operators are differential operators that generalize the Laplace operator. They are defined by the condition that the coefficients of the highest-order derivatives be positive, which implies the key property that the principal symbol is invertible, or equivalently that there are no real characteristic directions.
Elliptic operators are typical of potential theory, and they appear frequently in electrostatics and continuum mechanics. Elliptic regularity implies that their solutions tend to be smooth functions (if the coefficients in the operator are smooth). Steady-state solutions to hyperbolic and parabolic equations generally solve elliptic equations.
Definitions
Let $L$ be a linear differential operator of order $m$ on a domain $\Omega$ in $\mathbb{R}^n$ given by
$$Lu = \sum_{|\alpha| \le m} a_\alpha(x)\,\partial^\alpha u,$$
where $\alpha = (\alpha_1, \ldots, \alpha_n)$ denotes a multi-index and $\partial^\alpha = \partial_1^{\alpha_1}\cdots\partial_n^{\alpha_n}$ denotes the partial derivative of order $|\alpha| = \alpha_1 + \cdots + \alpha_n$ in $x$.
Then $L$ is called elliptic if for every $x$ in $\Omega$ and every non-zero $\xi$ in $\mathbb{R}^n$,
$$\sum_{|\alpha| = m} a_\alpha(x)\,\xi^\alpha \neq 0,$$
where $\xi^\alpha = \xi_1^{\alpha_1}\cdots\xi_n^{\alpha_n}$.
In many applications, this condition is not strong enough, and instead a uniform ellipticity condition may be imposed for operators of order $m = 2k$:
$$(-1)^k \sum_{|\alpha| = 2k} a_\alpha(x)\,\xi^\alpha \ge C\,|\xi|^{2k},$$
where $C$ is a positive constant. Note that ellipticity only depends on the highest-order terms.
A nonlinear operator
$$L(u) = F\!\left(x, u, \left(\partial^\alpha u\right)_{|\alpha| \le m}\right)$$
is elliptic if its linearization is; i.e. the first-order Taylor expansion with respect to $u$ and its derivatives about any point is a linear elliptic operator.
Example 1. The negative of the Laplacian in $\mathbb{R}^d$, given by
$$-\Delta u = -\sum_{i=1}^d \partial_i^2 u,$$
is a uniformly elliptic operator. The Laplace operator occurs frequently in electrostatics. If $\rho$ is the charge density within some region $\Omega$, the potential $\Phi$ must satisfy the equation (in Gaussian units)
$$-\Delta \Phi = 4\pi\rho.$$
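As a concrete illustration of how such an equation is treated numerically, the sketch below (not part of the article; the grid size, right-hand side, and iteration count are arbitrary choices) solves $-\Delta u = f$ on the unit square with zero Dirichlet boundary data using the classical five-point stencil:

```python
import numpy as np

# Minimal illustrative sketch (assumptions: unit square, zero Dirichlet data,
# uniform grid): solve -Laplace(u) = f with the five-point stencil and Jacobi
# iteration. With f = 2*pi^2*sin(pi x)*sin(pi y), the exact solution is
# u = sin(pi x)*sin(pi y), so the result can be checked directly.
n = 50                         # interior grid points per side
h = 1.0 / (n + 1)              # grid spacing
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)

u = np.zeros((n + 2, n + 2))   # includes the zero boundary values
for _ in range(20000):         # plain Jacobi; slow but easy to follow
    # Jacobi update: u_ij = (h^2 f_ij + sum of the 4 neighbours) / 4
    u[1:-1, 1:-1] = (h**2 * f
                     + u[:-2, 1:-1] + u[2:, 1:-1]
                     + u[1:-1, :-2] + u[1:-1, 2:]) / 4.0

exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
# Error is small, dominated by the O(h^2) discretization error.
print("max error:", np.abs(u[1:-1, 1:-1] - exact).max())
```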
Example 2. Given a matrix-valued function $A(x)$ which is uniformly positive definite for every $x$, having components $a^{ij}$, the operator
$$Lu = -\partial_i\!\left(a^{ij}(x)\,\partial_j u\right) + b^i(x)\,\partial_i u + c(x)\,u$$
(summation over repeated indices implied) is elliptic. This is the most general form of a second-order divergence form linear elliptic differential operator. The Laplace operator is obtained by taking $A = I$. These operators also occur in electrostatics in polarized media.
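For the divergence-form case, uniform ellipticity amounts to a uniform lower bound on the eigenvalues of $A(x)$. The short sketch below (an invented illustration, not from the article; the particular $A(x)$ is an arbitrary example) checks this numerically on a sample grid:

```python
import numpy as np

# Illustrative check (invented example): a divergence-form second-order
# operator is uniformly elliptic iff the smallest eigenvalue of A(x) is
# bounded below by some C > 0 over the whole domain. We sample A on a grid
# and report the worst-case smallest eigenvalue as an estimate of C.
def A(x, y):
    """Example coefficient matrix: well-conditioned, symmetric, x-dependent."""
    return np.array([[2.0 + np.sin(x), 0.5],
                     [0.5, 2.0 + np.cos(y)]])

pts = np.linspace(0.0, 1.0, 25)
min_eig = min(np.linalg.eigvalsh(A(x, y)).min() for x in pts for y in pts)

# min_eig > 0 uniformly in x is exactly the uniform ellipticity constant C.
print("worst-case smallest eigenvalue C ~", round(min_eig, 4))
```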
Example 3. For $p$ a non-negative number, the $p$-Laplacian is a nonlinear elliptic operator defined by
$$L(u) = \sum_{i=1}^d \partial_i\!\left(|\nabla u|^{p-2}\,\partial_i u\right).$$
A similar nonlinear operator occurs in glacier mechanics. The Cauchy stress tensor of ice, according to Glen's flow law, is given by
$$\tau_{ij} = B\left(\varepsilon_{kl}\varepsilon_{kl}\right)^{-\frac{1}{3}}\varepsilon_{ij}$$
for some constant $B$, where $\varepsilon_{ij} = \tfrac{1}{2}\left(\partial_i u_j + \partial_j u_i\right)$ is the strain-rate tensor of the velocity field $u$. The velocity of an ice sheet in steady state will then solve the nonlinear elliptic system
$$\sum_{j=1}^3 \partial_j \tau_{ij} + \rho\, g_i - \partial_i p = Q_i,$$
where $\rho$ is the ice density, $g$ is the gravitational acceleration vector, $p$ is the pressure and $Q$ is a forcing term.
Elliptic regularity theorems
Let $L$ be an elliptic operator of order $2k$ with coefficients having $2k$ continuous derivatives. The Dirichlet problem for $L$ is to find a function $u$, given a function $f$ and some appropriate boundary values, such that $Lu = f$ and such that $u$ has the appropriate boundary values and normal derivatives. The existence theory for elliptic operators, using Gårding's inequality, the Lax–Milgram lemma and the Fredholm alternative, gives sufficient conditions for a weak solution $u$ to exist in the Sobolev space $H^k$.
For example, for a second-order elliptic operator as in Example 2, consider the boundary value problem
$$(1)\qquad Lu = f \ \text{in } U, \qquad u = 0 \ \text{on } \partial U.$$
There is a number $\gamma > 0$ such that for each $\mu > \gamma$ and each $f \in L^2(U)$, there exists a unique solution $u \in H_0^1(U)$ of the boundary value problem $Lu + \mu u = f$ in $U$, $u = 0$ on $\partial U$; this is based on the Lax–Milgram lemma.
Either (a) for any $f \in L^2(U)$, (1) has a unique solution, or (b) $Lu = 0$ has a nonzero solution $u \in H_0^1(U)$; this is based on the properties of compact operators and the Fredholm alternative.
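The coercivity needed for the Lax–Milgram lemma comes from Gårding's inequality. As a sketch (standard in textbook treatments such as Evans, rather than spelled out in this article), the weak formulation of (1) uses the bilinear form
$$B[u,v] = \int_U \left( a^{ij}\,\partial_i u\,\partial_j v + b^i\,(\partial_i u)\, v + c\,u\,v \right) dx, \qquad u, v \in H_0^1(U),$$
and Gårding's inequality supplies constants $\beta > 0$, $\gamma \ge 0$ with
$$B[u,u] \ge \beta\,\|u\|_{H_0^1(U)}^2 - \gamma\,\|u\|_{L^2(U)}^2,$$
so that for $\mu > \gamma$ the shifted form $B[u,v] + \mu\,(u,v)_{L^2(U)}$ is coercive and the Lax–Milgram lemma applies.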
This situation is ultimately unsatisfactory, as the weak solution u might not have enough derivatives for the expression Lu to be well-defined in the classical sense.
The elliptic regularity theorem guarantees that, provided f is square-integrable, u will in fact have 2k square-integrable weak derivatives. In particular, if f is infinitely-often differentiable, then so is u.
For L as in Example 2,
Interior regularity: If $m$ is a natural number, the coefficients of $L$ have $m+1$ continuous derivatives, $f \in H^m(U)$ (2), and $u \in H_0^1(U)$ is a weak solution to (1), then for any open set $V$ in $U$ with compact closure,
$$(3)\qquad \|u\|_{H^{m+2}(V)} \le C\left(\|f\|_{H^m(U)} + \|u\|_{L^2(U)}\right),$$
where $C$ depends on $U$, $V$, $L$ and $m$. This also holds if $m$ is infinity, by the Sobolev embedding theorem.
Boundary regularity: (2) together with the assumption that $\partial U$ is $C^{m+2}$ indicates that (3) still holds after replacing $V$ with $U$, i.e. $\|u\|_{H^{m+2}(U)} \le C\left(\|f\|_{H^m(U)} + \|u\|_{L^2(U)}\right)$; this also holds if $m$ is infinity.
Any differential operator exhibiting this property is called a hypoelliptic operator; thus, every elliptic operator is hypoelliptic. The property also means that every fundamental solution of an elliptic operator is infinitely differentiable in any neighborhood not containing 0.
As an application, suppose a function $f$ satisfies the Cauchy–Riemann equations. Since the Cauchy–Riemann equations form an elliptic operator, it follows that $f$ is smooth.
Properties
For $L$ as in Example 2 on $U$, an open domain with $C^1$ boundary, there is a number $\gamma > 0$ such that for each $\mu > \gamma$, the operator $L + \mu$ satisfies the assumptions of the Lax–Milgram lemma.
Invertibility: For each $\mu > \gamma$, the operator $L + \mu$ admits a compact inverse on $L^2(U)$.
Eigenvalues and eigenvectors: If $A$ is symmetric and $b^i$, $c$ are zero, then (1) the eigenvalues of $L$ are real, positive, countable and unbounded, and (2) there is an orthonormal basis of $L^2(U)$ composed of eigenvectors of $L$. (See Spectral theorem.)
Generates a semigroup on $L^2(U)$: $-L$ generates a strongly continuous semigroup $\{S(t)\}_{t \ge 0}$ of bounded linear operators on $L^2(U)$, such that $u(t) = S(t)u_0$ solves the abstract evolution equation $u' = -Lu$, $u(0) = u_0$, in the norm of $L^2(U)$ for every initial datum $u_0$, by the Hille–Yosida theorem.
General definition
Let $P$ be a (possibly nonlinear) differential operator between vector bundles of any rank. Take its principal symbol $\sigma_\xi(P)$ with respect to a one-form $\xi$. (Basically, what we are doing is replacing the highest-order covariant derivatives $\nabla$ by vector fields $\xi$.)
We say $P$ is weakly elliptic if $\sigma_\xi(P)$ is a linear isomorphism for every non-zero $\xi$.
We say $P$ is (uniformly) strongly elliptic if for some constant $c > 0$,
$$\left([\sigma_\xi(P)](v), v\right) \ge c\,\|v\|^2$$
for all $\|\xi\| = 1$ and all $v$.
The definition of ellipticity in the previous part of the article is strong ellipticity. Here $(\cdot,\cdot)$ is an inner product. Notice that the $\xi$ are covector fields or one-forms, but the $v$ are elements of the vector bundle upon which $P$ acts.
The quintessential example of a (strongly) elliptic operator is the Laplacian (or its negative, depending upon convention). It is not hard to see that $P$ needs to be of even order for strong ellipticity to even be an option: otherwise, just consider plugging in both $\xi$ and its negative. On the other hand, a weakly elliptic first-order operator, such as the Dirac operator, can square to become a strongly elliptic operator, such as the Laplacian. The composition of weakly elliptic operators is weakly elliptic.
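To illustrate the Dirac remark with symbols (a standard computation, sketched here under the usual Clifford-algebra convention in which the principal symbol of the Dirac operator $D$ is Clifford multiplication, $\sigma_\xi(D) = i\,c(\xi)$ with $c(\xi)^2 = -\|\xi\|^2\,\mathrm{id}$):
$$\sigma_\xi(D^2) = \sigma_\xi(D)^2 = \left(i\,c(\xi)\right)^2 = \|\xi\|^2\,\mathrm{id},$$
so $\sigma_\xi(D)$ is invertible for every non-zero $\xi$ (weak ellipticity), while $D$ itself, being of odd order, cannot be strongly elliptic; its square has the symbol of a Laplacian and is strongly elliptic.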
Weak ellipticity is nevertheless strong enough for the Fredholm alternative, Schauder estimates, and the Atiyah–Singer index theorem. On the other hand, we need strong ellipticity for the maximum principle, and to guarantee that the eigenvalues are discrete, and their only limit point is infinity.
| Mathematics | Differential equations | null |
550334 | https://en.wikipedia.org/wiki/Riftia | Riftia | Riftia pachyptila, commonly known as the giant tube worm and less commonly known as the giant beardworm, is a marine invertebrate in the phylum Annelida (formerly grouped in phylum Pogonophora and Vestimentifera) related to tube worms commonly found in the intertidal and pelagic zones. R. pachyptila lives on the floor of the Pacific Ocean near hydrothermal vents. The vents provide a natural ambient temperature in their environment ranging from 2 to 30 °C, and this organism can tolerate extremely high hydrogen sulfide levels. These worms can reach a length of up to 3 m, and their tubular bodies have a diameter of about 4 cm.
Its common name "giant tube worm" is, however, also applied to the largest living species of shipworm, Kuphus polythalamius, which despite the name "worm", is a bivalve mollusc rather than an annelid.
Discovery
R. pachyptila was discovered in 1977 on an expedition of the American bathyscaphe DSV Alvin to the Galápagos Rift led by geologist Jack Corliss. The discovery was unexpected, as the team was studying hydrothermal vents and no biologists were included in the expedition. Many of the species found living near hydrothermal vents during this expedition had never been seen before.
At the time, the presence of thermal springs near the midoceanic ridges was known. Further research uncovered aquatic life in the area, despite the high temperature (around 350–380 °C).
Many samples were collected, including bivalves, polychaetes, large crabs, and R. pachyptila. It was the first time that species was observed.
Development
R. pachyptila develops from a free-swimming, pelagic, nonsymbiotic trochophore larva, which enters juvenile (metatrochophore) development, becoming sessile, and subsequently acquiring symbiotic bacteria. The symbiotic bacteria, on which adult worms depend for sustenance, are not present in the gametes, but are acquired from the environment through the skin in a process akin to an infection. The digestive tract transiently connects from a mouth at the tip of the ventral medial process to a foregut, midgut, hindgut, and anus and was previously thought to have been the method by which the bacteria are introduced into adults. After symbionts are established in the midgut, they undergo substantial remodelling and enlargement to become the trophosome, while the remainder of the digestive tract has not been detected in adult specimens.
Body structure
Separating the vermiform body from its white chitinous tube reveals a small difference from the classic three subdivisions typical of phylum Pogonophora: the prosoma, the mesosoma, and the metasoma.
The first body region is the vascularized branchial plume, which is bright red due to the presence of hemoglobins that contain up to 144 globin chains (each presumably including associated heme structures). These tube worm hemoglobins are remarkable for carrying oxygen in the presence of sulfide, without being inhibited by this molecule, as hemoglobins in most other species are. The plume provides essential nutrients to the bacteria living inside the trophosome. If the animal perceives a threat or is touched, it retracts the plume and the tube is closed by the obturaculum, a particular operculum that protects and isolates the animal from the external environment.
The second body region is the vestimentum, formed by muscle bands, having a winged shape; it presents the two genital openings at its end. The heart, an extended portion of the dorsal vessel, is enclosed within the vestimentum.
The middle part, the trunk or third body region, is full of vascularized solid tissue and includes the body wall, gonads, and coelomic cavity. Here also is located the trophosome, a spongy tissue where billions of symbiotic, thioautotrophic bacteria and sulfur granules are found. Since the mouth, digestive system, and anus are missing, the survival of R. pachyptila depends on this mutualistic symbiosis. This process, known as chemosynthesis, was recognized within the trophosome by Colleen Cavanaugh.
The soluble hemoglobins, present in the tentacles, can bind O2 and H2S, which are necessary for the chemosynthetic bacteria. These compounds are absorbed by the bacteria via the capillaries. During chemosynthesis, the mitochondrial enzyme rhodanese catalyzes the disproportionation of the thiosulfate anion S2O3²⁻ to sulfur S and sulfite SO3²⁻. R. pachyptila's bloodstream is responsible for absorption of O2 and nutrients such as carbohydrates.
Nitrate and nitrite are toxic, but are required for biosynthetic processes. The chemosynthetic bacteria within the trophosome convert nitrate to ammonium ions, which then are available for the production of amino acids in the bacteria, which are in turn released to the tube worm. To transport nitrate to the bacteria, R. pachyptila concentrates nitrate in its blood to a concentration 100 times that of the surrounding water. The exact mechanism of R. pachyptila's ability to withstand and concentrate nitrate is still unknown.
The posterior part, the fourth body region, is the opisthosome, which anchors the animal to the tube and is used for the storage of waste from bacterial reactions.
Symbiosis
The discovery of bacterial invertebrate chemoautotrophic symbiosis, particularly in the vestimentiferan tubeworm R. pachyptila and then in vesicomyid clams and mytilid mussels, revealed the chemoautotrophic potential of the hydrothermal vent tube worm. Scientists discovered a remarkable source of nutrition that helps to sustain the conspicuous biomass of invertebrates at vents. Many studies focusing on this type of symbiosis revealed the presence of chemoautotrophic, endosymbiotic, sulfur-oxidizing bacteria mainly in R. pachyptila, which inhabits extreme environments and is adapted to the particular composition of the mixed volcanic and sea waters. This special environment is filled with inorganic metabolites, essentially carbon, nitrogen, oxygen, and sulfur. In its adult phase, R. pachyptila lacks a digestive system. To provide for its energetic needs, it takes up those dissolved inorganic nutrients (sulfide, carbon dioxide, oxygen, nitrogen) through its plume and transports them through a vascular system to the trophosome, which is suspended in paired coelomic cavities and is where the intracellular symbiotic bacteria are found. The trophosome is a soft tissue that runs through almost the whole length of the tube's coelom. It retains a large number of bacteria, on the order of 10⁹ bacteria per gram of fresh weight. Bacteria in the trophosome are retained inside bacteriocytes, thereby having no contact with the external environment. Thus, they rely on R. pachyptila for the assimilation of nutrients needed for the array of metabolic reactions they employ and for the excretion of waste products of carbon fixation pathways. At the same time, the tube worm depends completely on the microorganisms for the byproducts of their carbon fixation cycles that are needed for its growth.
Initial evidence for a chemoautotrophic symbiosis in R. pachyptila came from microscopic and biochemical analyses showing Gram-negative bacteria packed within a highly vascularized organ in the tubeworm trunk called the trophosome. Additional analyses involving stable isotope, enzymatic, and physiological characterizations confirmed that the endosymbionts of R. pachyptila oxidize reduced-sulfur compounds to synthesize ATP for use in autotrophic carbon fixation through the Calvin cycle. The host tubeworm enables the uptake and transport of the substrates required for thioautotrophy, which are HS⁻, O2, and CO2, receiving back a portion of the organic matter synthesized by the symbiont population. Given the adult tubeworm's inability to feed on particulate matter and its entire dependency on its symbionts for nutrition, the bacterial population is thus the primary source of carbon acquisition for the symbiosis. The discovery of bacterial–invertebrate chemoautotrophic symbioses, initially in vestimentiferan tubeworms and then in vesicomyid clams and mytilid mussels, pointed to an even more remarkable source of nutrition sustaining the invertebrates at vents.
Endosymbiosis with chemoautotrophic bacteria
A wide range of bacterial diversity is associated with symbiotic relationships with R. pachyptila. Many bacteria belong to the phylum Campylobacterota (formerly class Epsilonproteobacteria), as supported by the 2016 discovery of the new species Sulfurovum riftiae, belonging to the phylum Campylobacterota, family Helicobacteraceae, isolated from R. pachyptila collected from the East Pacific Rise. Other symbionts belong to the classes Delta-, Alpha- and Gammaproteobacteria. Candidatus Endoriftia persephone (Gammaproteobacteria) is a facultative R. pachyptila symbiont and has been shown to be a mixotroph, exploiting both the Calvin–Benson cycle and the reverse TCA cycle (with an unusual ATP citrate lyase) according to the availability of carbon resources and whether it is free-living in the environment or inside a eukaryotic host. The bacteria apparently prefer a heterotrophic lifestyle when carbon sources are available.
Evidence based on 16S rRNA analysis affirms that R. pachyptila chemoautotrophic bacteria belong to two different clades: Gammaproteobacteria and Campylobacterota (e.g. Sulfurovum riftiae) that get energy from the oxidation of inorganic sulfur compounds such as hydrogen sulfide (H2S, HS⁻, S²⁻) to synthesize ATP for carbon fixation via the Calvin cycle. Unfortunately, most of these bacteria are still uncultivable. The symbiosis works as follows: R. pachyptila provides nutrients such as HS⁻, O2, and CO2 to the bacteria, and in turn receives organic matter from them. Thus, for lack of a digestive system, R. pachyptila depends entirely on its bacterial symbionts to survive.
In the first step of sulfide oxidation, reduced sulfur (HS⁻) passes from the external environment into R. pachyptila's blood, where, together with O2, it is bound by hemoglobin, forming the complex Hb-O2-HS⁻, and is then transported to the trophosome, where the bacterial symbionts reside. Here, HS⁻ is oxidized to elemental sulfur (S⁰) or to sulfite (SO3²⁻).
In the second step, the symbionts oxidize sulfite by the "APS pathway" to obtain ATP. In this biochemical pathway, AMP reacts with sulfite in the presence of the enzyme APS reductase, giving APS (adenosine 5'-phosphosulfate). Then, APS reacts with pyrophosphate (PPi) in the presence of the enzyme ATP sulfurylase, giving ATP (substrate-level phosphorylation) and sulfate (SO4²⁻) as end products. In formulas:
AMP + SO3²⁻ → APS (APS reductase)
APS + PPi → ATP + SO4²⁻ (ATP sulfurylase)
The electrons released during the entire sulfide-oxidation process enter an electron transport chain, yielding a proton gradient that produces ATP (oxidative phosphorylation). Thus, ATP generated from oxidative phosphorylation and ATP produced by substrate-level phosphorylation become available for CO2 fixation in the Calvin cycle, whose presence has been demonstrated by the detection of two key enzymes of this pathway: phosphoribulokinase and RuBisCO.
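For orientation, combining the two steps above with the electron transport chain gives the net thiotrophic reaction (a standard charge- and mass-balanced textbook summary, not written out explicitly in the text):
$$\mathrm{HS^- + 2\,O_2 \longrightarrow SO_4^{2-} + H^+},$$
with the liberated energy conserved as ATP for the Calvin cycle.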
To support this unusual metabolism, R. pachyptila has to absorb all the substances necessary for both sulfide-oxidation and carbon fixation, that is: HS−, O2 and CO2 and other fundamental bacterial nutrients such as N and P. This means that the tubeworm must be able to access both oxic and anoxic areas.
Oxidation of reduced sulfur compounds requires the presence of oxidized reagents such as oxygen and nitrate. Hydrothermal vents are characterized by severe hypoxia. In hypoxic conditions, sulfur-storing organisms start producing hydrogen sulfide, so the production of H2S in anaerobic conditions is common among thiotrophic symbioses. H2S can be damaging for some physiological processes, as it inhibits the activity of cytochrome c oxidase, consequently impairing oxidative phosphorylation. In R. pachyptila, the production of hydrogen sulfide starts after 24 h of hypoxia. In order to avoid physiological damage, some animals, including Riftia pachyptila, are able to bind H2S to haemoglobin in the blood and eventually expel it into the surrounding environment.
Carbon fixation and organic carbon assimilation
Unlike metazoans, which respire carbon dioxide as a waste product, the R. pachyptila–symbiont association has a demand for a net uptake of CO2 instead, as do cnidarian–symbiont associations.
Ambient deep-sea water contains an abundant amount of inorganic carbon in the form of bicarbonate, HCO3⁻, but it is actually the chargeless form of inorganic carbon, CO2, that diffuses easily across membranes. The low partial pressure of CO2 in the deep-sea environment is due to the alkaline pH of seawater and the high solubility of CO2, yet the pCO2 of the blood of R. pachyptila may be as much as two orders of magnitude greater than the pCO2 of deep-sea water.
Elevated CO2 partial pressures occur in the vicinity of vent fluids, owing to the enriched inorganic carbon content of vent fluids and their lower pH. CO2 uptake in the worm is enhanced by the higher pH of its blood (7.3–7.4), which favors the bicarbonate ion and thus maintains a steep gradient across which CO2 diffuses into the vascular blood of the plume. The facilitation of CO2 uptake by high environmental pCO2 was first inferred from measurements of elevated blood and coelomic fluid pCO2 in tubeworms, and was subsequently demonstrated through incubations of intact animals under various pCO2 conditions.
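The pH effect can be sketched with the first carbonate equilibrium (standard carbonate chemistry; the pK1 value below is an approximate seawater figure, not taken from this article):
$$\mathrm{CO_2 + H_2O \rightleftharpoons HCO_3^- + H^+}, \qquad \mathrm{pH} = \mathrm{p}K_1 + \log_{10}\frac{[\mathrm{HCO_3^-}]}{[\mathrm{CO_2}]}.$$
With blood pH 7.3–7.4 and pK1 ≈ 6, the ratio [HCO3⁻]/[CO2] is roughly 20–25, so CO2 entering the plume is largely converted to bicarbonate, which sustains the inward diffusion gradient for CO2.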
Once CO2 is fixed by the symbionts, it must be assimilated by the host tissues. The supply of fixed carbon to the host is transported as organic molecules from the trophosome in the hemolymph, but the relative importance of translocation versus symbiont digestion is not yet known. Pulse-chase studies with labeled inorganic carbon showed that within 15 min the label first appears in symbiont-free host tissues, indicating a significant release of organic carbon immediately after fixation. After 24 h, labeled carbon is clearly evident in the epidermal tissues of the body wall. The results of the pulse-chase autoradiographic experiments were also consistent with ultrastructural evidence for digestion of symbionts in the peripheral regions of the trophosome lobules.
Sulfide acquisition
In deep-sea hydrothermal vents, sulfide and oxygen are present in different areas. Indeed, the reducing fluid of hydrothermal vents is rich in sulfide but poor in oxygen, whereas sea water is richer in dissolved oxygen. Moreover, sulfide is immediately oxidized by dissolved oxygen to partly or totally oxidized sulfur compounds such as thiosulfate (S2O3²⁻) and ultimately sulfate (SO4²⁻), respectively less usable, or no longer usable, for microbial oxidation metabolism. This makes the substrates less available for microbial activity, so free-living bacteria must compete with this abiotic oxidation for their nutrients. To avoid this problem, several microbes have evolved to form symbioses with eukaryotic hosts. In fact, R. pachyptila is able to span the oxic and anoxic areas to obtain both sulfide and oxygen, thanks to its hemoglobin, which can bind sulfide reversibly and independently of oxygen at functional binding sites determined to be zinc ions embedded in the A2 chains of the hemoglobins, and then transport it to the trophosome, where bacterial metabolism can occur. It has also been suggested that cysteine residues are involved in this process.
Symbiont acquisition
The acquisition of a symbiont by a host can occur in these ways:
Environmental transfer (symbiont acquired from a free-living population in the environment)
Vertical transfer (parents transfer symbiont to offspring via eggs)
Horizontal transfer (hosts that share the same environment)
Evidence suggests that R. pachyptila acquires its symbionts from its environment. In fact, 16S rRNA gene analysis showed that vestimentiferan tubeworms belonging to three different genera (Riftia, Oasisia, and Tevnia) share the same bacterial symbiont phylotype.
This indicates that R. pachyptila takes its symbionts from a free-living bacterial population in the environment. Other studies also support this thesis, because 16S rRNA belonging to the symbiont was not found when R. pachyptila eggs were analyzed, showing that the bacterial symbiont is not transmitted by vertical transfer.
Further support for environmental transfer comes from several studies conducted in the late 1990s. PCR was used to detect and identify an R. pachyptila symbiont gene whose sequence was very similar to that of the fliC gene, which encodes the primary protein subunits (flagellin) required for flagellum synthesis. The analysis showed that the R. pachyptila symbiont has at least one gene needed for flagellum synthesis. Hence, the question arose as to the purpose of the flagellum. Flagellar motility would be useless for a bacterial symbiont transmitted vertically, but if the symbiont came from the external environment, then a flagellum would be essential for reaching the host organism and colonizing it. Indeed, several symbionts use this method to colonize eukaryotic hosts.
Thus, these results confirm the environmental transfer of R. pachyptila symbiont.
Reproduction
R. pachyptila is a dioecious vestimentiferan. Individuals of this species are sessile and are found clustered together around deep-sea hydrothermal vents of the East Pacific Rise and the Galapagos Rift. The size of a patch of individuals surrounding a vent is within the scale of tens of metres.
The male's spermatozoa are thread-shaped and are composed of three distinct regions: the acrosome (6 μm), the nucleus (26 μm), and the tail (98 μm). Thus, a single spermatozoon is about 130 μm long overall, with a diameter of 0.7 μm that narrows near the tail, reaching 0.2 μm. The sperm is arranged into an agglomeration of around 340–350 individual spermatozoa that creates a torch-like shape. The cup part is made up of the acrosomes and nuclei, while the handle is made up of the tails. The spermatozoa in the package are held together by fibrils. Fibrils also coat the package itself to ensure cohesion.
The large ovaries of females run within the gonocoel along the entire length of the trunk and are ventral to the trophosome. Eggs at different maturation stages can be found in the middle area of the ovaries, and depending on their developmental stage, are referred to as: oogonia, oocytes, and follicular cells. When the oocytes mature, they acquire protein and lipid yolk granules.
Males release their sperm into sea water. While the released agglomerations of spermatozoa, referred to as spermatozeugmata, do not remain intact for more than 30 seconds in laboratory conditions, they may maintain integrity for longer periods of time in specific hydrothermal vent conditions. Usually, the spermatozeugmata swim into the female's tube. Movement of the cluster is conferred by the collective action of each spermatozoon moving independently. Reproduction has also been observed involving only a single spermatozoon reaching the female's tube. Generally, fertilization in R. pachyptila is considered internal. However, some argue that, as the sperm is released into sea water and only afterwards reaches the eggs in the oviducts, it should be defined as internal-external.
R. pachyptila is completely dependent on the production of volcanic gases and the presence of sulfide-oxidizing bacteria. Therefore, its metapopulation distribution is profoundly linked to the volcanic and tectonic activity that creates active hydrothermal vent sites with a patchy and ephemeral distribution. The distance between active sites along a rift or adjacent segments can be very large, reaching hundreds of kilometres, which raises the question of larval dispersal. R. pachyptila is capable of larval dispersal across distances of 100 to 200 km, and cultured larvae have been shown to be viable for 38 days. Though dispersal is considered effective, the genetic variability observed in the R. pachyptila metapopulation is low compared with other vent species. This may be due to high rates of extinction and colonization events, as R. pachyptila is one of the first species to colonize a new active site.
The endosymbionts of R. pachyptila are not passed to the fertilized eggs during spawning, but are acquired later, during the larval stage of the vestimentiferan worm. R. pachyptila planktonic larvae that are transported by sea-bottom currents until they reach active hydrothermal vent sites are referred to as trophocores. The trophocore stage lacks endosymbionts, which are acquired once the larvae settle in a suitable environment and substrate. Free-living bacteria found in the water column are ingested randomly and enter the worm through a ciliated opening of the branchial plume. This opening is connected to the trophosome through a duct that passes through the brain. Once the bacteria are in the gut, those that are beneficial to the individual, namely sulfide-oxidizing strains, are phagocytized by epithelial cells found in the midgut and then retained. Bacteria that do not represent possible endosymbionts are digested. This raises questions as to how R. pachyptila manages to discern between essential and nonessential bacterial strains. The worm's ability to recognise a beneficial strain, as well as preferential host-specific infection by the bacteria, have both been suggested as drivers of this phenomenon.
Growth rate and age
R. pachyptila has the fastest growth rate of any known marine invertebrate. These organisms have been known to colonize a new site, grow to sexual maturity, and increase in length to 4.9 feet (1.5 m) in less than two years.
Because of the peculiar environment in which R. pachyptila thrives, this species differs greatly from other deep-sea species that do not inhabit hydrothermal vent sites: the activity of diagnostic enzymes for glycolysis, the citric acid cycle, and electron transport in the tissues of R. pachyptila is very similar to the activity of these enzymes in the tissues of shallow-living animals. This contrasts with the fact that deep-sea species usually show very low metabolic rates, which in turn suggests that low water temperature and high pressure in the deep sea do not necessarily limit the metabolic rate of animals, and that hydrothermal vent sites display characteristics completely different from the surrounding environment, thereby shaping the physiology and biological interactions of the organisms living in these sites.
| Biology and health sciences | Lophotrochozoa | Animals |
550379 | https://en.wikipedia.org/wiki/Teikei | Teikei | Teikei is a system of community-supported agriculture in Japan, where consumers purchase food directly from farmers. Teikei is closely associated with small-scale, local, organic farming, and volunteer-based, non-profit partnerships between producers and consumers. Millions of Japanese consumers participate in teikei. It is widely cited as the origin of community-supported agriculture around the world.
While there is some disagreement as to the "first" teikei group, the concept can be traced back to the mid-1960s, when a group of Japanese women banded together to purchase fresh milk. A general movement towards consumer-farmer partnerships in Japan in the late 1960s and early 1970s was driven by environmental issues and distrust of the quality of food in the conventional food system.
One of the founding teikei groups, the Japan Organic Agriculture Association (JOAA), founded in 1971, describes teikei as "an idea to create an alternative distribution system, not depending on the conventional market. Though the forms of teikei vary, it is basically a direct distribution system. To carry it out, the producer(s) and the consumer(s) have talks and contact to deepen their mutual understanding: both of them provide labor and capital to support their own delivery system.... Teikei is not only a practical idea but also a dynamic philosophy to make people think of a better way of life either as a producer or as a consumer through their interaction."
Teikei in Japanese means "cooperation", "joint business", or "link-up". In reference to CSA, it is commonly associated with the slogan "food with the farmer's face on it".
Teikei abroad
While there is no evidence that teikei was the inspiration for community-supported agriculture (CSA) in the United States, there have been a few examples of CSA programs that have followed the Japanese model closely. The system is valued for its ability to make small-scale agriculture more economically viable and to give consumers more control over the food they consume. Although the CSA movement was born out of anthroposophical and biodynamic farming initiatives, CSA has also benefited from the booming organic farming movement in the United States, which experienced rapid growth in the late '90s and early '00s, leading to the establishment of organic produce in mainstream retail sales.
| Technology | Agriculture, labor and economy | null |
550622 | https://en.wikipedia.org/wiki/Tetraquark | Tetraquark | In particle physics, a tetraquark is an exotic meson composed of four valence quarks. A tetraquark state has long been suspected to be allowed by quantum chromodynamics, the modern theory of strong interactions. A tetraquark state is an example of an exotic hadron that lies outside the conventional quark model classification. A number of different types of tetraquark have been observed.
History and discoveries
Several tetraquark candidates have been reported by particle physics experiments in the 21st century. The quark contents of these states are almost all qq̄QQ̄, where q represents a light (up, down or strange) quark, Q represents a heavy (charm or bottom) quark, and antiquarks are denoted with an overline. The existence and stability of tetraquark states with the qqQ̄Q̄ (or q̄q̄QQ) configuration have been discussed by theoretical physicists for a long time; however, these are yet to be reported by experiments.
Timeline
In 2003, a particle temporarily called X(3872), observed by the Belle experiment in Japan, was proposed to be a tetraquark candidate, as originally theorized. The name X is a temporary name, indicating that there are still some questions about its properties to be tested. The number following is the mass of the particle in MeV/c².
In 2004, the DsJ(2632) state seen in Fermilab's SELEX was suggested as a possible tetraquark candidate.
In 2007, Belle announced the observation of the Z(4430) state, a tetraquark candidate. There are also indications that the Y(4660), also discovered by Belle in 2007, could be a tetraquark state.
In 2009, Fermilab announced that they have discovered a particle temporarily called Y(4140), which may also be a tetraquark.
In 2010, two physicists from DESY and a physicist from Quaid-i-Azam University re-analyzed former experimental data and announced that, in connection with the Υ(5S) meson (a form of bottomonium), a well-defined tetraquark resonance exists.
In June 2013, the BES III experiment in China and the Belle experiment in Japan independently reported on Zc(3900), the first confirmed four-quark state.
In 2014, the Large Hadron Collider experiment LHCb confirmed the existence of the Z(4430) state with a significance of over 13.9 σ.
In February 2016, the DØ experiment reported evidence of a narrow tetraquark candidate, named X(5568), decaying to Bs0π±.
In December 2017, DØ also reported observing the X(5568) using a different final state.
However, it was not observed in searches by the LHCb, CMS, CDF, or ATLAS experiments.
In June 2016, LHCb announced the discovery of three additional tetraquark candidates, called X(4274), X(4500) and X(4700).
In 2020, LHCb announced the discovery of a fully charm tetraquark: X(6900). In 2022, ATLAS also observed X(6900), and in 2023, CMS reported an observation of three such states, X(6600), X(6900), and X(7300).
In 2021, LHCb announced the discovery of four additional tetraquarks, including cu.
In 2022, LHCb announced the discovery of cu and cd.
| Physical sciences | Bosons | Physics |
551061 | https://en.wikipedia.org/wiki/Allotropes%20of%20carbon | Allotropes of carbon | Carbon is capable of forming many allotropes (structurally different forms of the same element) due to its valency (tetravalent). Well-known forms of carbon include diamond and graphite. In recent decades, many more allotropes have been discovered and researched, including ball shapes such as buckminsterfullerene and sheets such as graphene. Larger-scale structures of carbon include nanotubes, nanobuds and nanoribbons. Other unusual forms of carbon exist at very high temperatures or extreme pressures. Around 500 hypothetical 3‑periodic allotropes of carbon are known at the present time, according to the Samara Carbon Allotrope Database (SACADA).
Atomic and diatomic carbon
Under certain conditions, carbon can be found in its atomic form. It can be formed by vaporizing graphite, by passing large electric currents to form a carbon arc under very low pressure. It is extremely reactive, but it is an intermediate product used in the creation of carbenes.
Diatomic carbon can also be found under certain conditions. It is often detected via spectroscopy in extraterrestrial bodies, including comets and certain stars.
Diamond
Diamond is a well-known allotrope of carbon. The hardness, extremely high refractive index, and high dispersion of light make diamond useful for industrial applications and for jewelry. Diamond is the hardest known natural mineral. This makes it an excellent abrasive and makes it hold polish and luster extremely well. No known naturally occurring substance can cut or scratch diamond, except another diamond. In diamond form, carbon is one of the costliest elements.
The crystal structure of diamond is a face-centered cubic lattice having eight atoms per unit cell to form a diamond cubic structure. Each carbon atom is covalently bonded to four other carbons in a tetrahedral geometry. These tetrahedrons together form a 3-dimensional network of six-membered carbon rings in the chair conformation, allowing for zero bond angle strain. The bonding occurs through sp3 hybridized orbitals to give a C-C bond length of 154 pm. This network of unstrained covalent bonds makes diamond extremely strong. Diamond is thermodynamically less stable than graphite at pressures below .
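To make this geometry concrete, the following sketch (an illustration added here, not from the article; the lattice constant a = 356.7 pm is the standard literature value for diamond) constructs the eight-atom conventional cell and recovers the 154 pm bond length:

```python
import numpy as np

# Illustrative sketch: the diamond cubic structure is an fcc lattice with a
# two-atom basis at (0,0,0) and (1/4,1/4,1/4). The lattice constant
# a = 356.7 pm is the standard literature value (an assumption here,
# not stated in the text above).
a = 356.7  # pm

fcc = np.array([[0, 0, 0], [0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]])
basis = np.array([[0, 0, 0], [0.25, 0.25, 0.25]])

# Eight atoms of the conventional cell, in fractional coordinates.
atoms = np.array([f + b for f in fcc for b in basis])

# Nearest-neighbour distance: from (0,0,0) to (1/4,1/4,1/4) is a*sqrt(3)/4.
bond = a * np.sqrt(3) / 4
print(f"{len(atoms)} atoms per conventional cell")   # -> 8
print(f"C-C bond length = {bond:.1f} pm")            # -> 154.4 pm
```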
The dominant industrial use of diamond is cutting, drilling (drill bits), grinding (diamond edged cutters), and polishing. Most uses of diamonds in these technologies do not require large diamonds, and most diamonds that are not gem-quality can find an industrial use. Diamonds are embedded in drill tips and saw blades or ground into a powder for use in grinding and polishing applications (due to its extraordinary hardness). Specialized applications include use in laboratories as containment for high pressure experiments (see diamond anvil), high-performance bearings, and specialized windows of technical apparatuses.
The market for industrial-grade diamonds operates much differently from its gem-grade counterpart. Industrial diamonds are valued mostly for their hardness and heat conductivity, making many of the gemological characteristics of diamond, including clarity and color, mostly irrelevant. This helps explain why 80% of mined diamonds (equal to about 100 million carats or 20 tonnes annually) are unsuitable for use as gemstones; known as bort, they are destined for industrial use. In addition to mined diamonds, synthetic diamonds found industrial applications almost immediately after their invention in the 1950s; another 400 million carats (80 tonnes) of synthetic diamonds are produced annually for industrial use, which is nearly four times the mass of natural diamonds mined over the same period.
With the continuing advances being made in the production of synthetic diamond, future applications are beginning to become feasible. Garnering much excitement is the possible use of diamond as a semiconductor suitable to build microchips from, or the use of diamond as a heat sink in electronics. Significant research efforts in Japan, Europe, and the United States are under way to capitalize on the potential offered by diamond's unique material properties, combined with increased quality and quantity of supply starting to become available from synthetic diamond manufacturers.
Graphite
Graphite, named by Abraham Gottlob Werner in 1789, from the Greek γράφειν (graphein, "to draw/write", for its use in pencils), is one of the most common allotropes of carbon. Unlike diamond, graphite is an electrical conductor. Thus, it can be used in, for instance, electrical arc lamp electrodes. Likewise, under standard conditions, graphite is the most stable form of carbon. Therefore, it is used in thermochemistry as the standard state for defining the heat of formation of carbon compounds.
Graphite conducts electricity, due to delocalization of the pi bond electrons above and below the planes of the carbon atoms. These electrons are free to move, so are able to conduct electricity. However, the electricity is only conducted along the plane of the layers. In diamond, all four outer electrons of each carbon atom are 'localized' between the atoms in covalent bonding. The movement of electrons is restricted and diamond does not conduct an electric current. In graphite, each carbon atom uses only 3 of its 4 outer energy level electrons in covalently bonding to three other carbon atoms in a plane. Each carbon atom contributes one electron to a delocalized system of electrons that is also a part of the chemical bonding. The delocalized electrons are free to move throughout the plane. For this reason, graphite conducts electricity along the planes of carbon atoms, but does not conduct electricity in a direction at right angles to the plane.
Graphite powder is used as a dry lubricant. Although it might be thought that this industrially important property is due entirely to the loose interlamellar coupling between sheets in the structure, in fact in a vacuum environment (such as in technologies for use in space), graphite was found to be a very poor lubricant. This fact led to the discovery that graphite's lubricity is due to adsorbed air and water between the layers, unlike other layered dry lubricants such as molybdenum disulfide. Recent studies suggest that an effect called superlubricity can also account for this effect.
When a large number of crystallographic defects (physical) bind these planes together, graphite loses its lubrication properties and becomes pyrolytic carbon, a useful material in blood-contacting implants such as prosthetic heart valves.
Graphite is the most stable allotrope of carbon. Contrary to popular belief, high-purity graphite does not readily burn, even at elevated temperatures. For this reason, it is used in nuclear reactors and for high-temperature crucibles for melting metals. At very high temperatures and pressures (roughly 2000 °C and 5 GPa), it can be transformed into diamond.
Natural and crystalline graphites are not often used in pure form as structural materials due to their shear-planes, brittleness and inconsistent mechanical properties.
In its pure glassy (isotropic) synthetic forms, pyrolytic graphite and carbon fiber graphite are extremely strong, heat-resistant (to 3000 °C) materials, used in reentry shields for missile nosecones, solid rocket engines, high temperature reactors, brake shoes and electric motor brushes.
Intumescent or expandable graphites are used in fire seals, fitted around the perimeter of a fire door. During a fire the graphite intumesces (expands and chars) to resist fire penetration and prevent the spread of fumes. A typical start expansion temperature (SET) is between 150 and 300 °C.
Graphite's specific gravity is 2.3, which makes it less dense than diamond.
Graphite is slightly more reactive than diamond. This is because the reactants are able to penetrate between the hexagonal layers of carbon atoms in graphite. It is unaffected by ordinary solvents, dilute acids, or fused alkalis. However, chromic acid oxidizes it to carbon dioxide.
Graphene
A single layer of graphite is called graphene and has extraordinary electrical, thermal, and physical properties. It can be produced by epitaxy on an insulating or conducting substrate or by mechanical exfoliation (repeated peeling) from graphite. Its applications may include replacing silicon in high-performance electronic devices. With two layers stacked, bilayer graphene results with different properties.
Lonsdaleite (hexagonal diamond)
Lonsdaleite is an allotrope sometimes called "hexagonal diamond", formed from graphite present in meteorites upon their impact on the earth. The great heat and pressure of the impact transforms the graphite into a denser form similar to diamond but retaining graphite's hexagonal crystal lattice. "Hexagonal diamond" has also been synthesized in the laboratory, by compressing and heating graphite either in a static press or using explosives. It can also be produced by the thermal decomposition of a polymer, poly(hydridocarbyne), at atmospheric pressure, under inert gas atmosphere (e.g. argon, nitrogen), starting at temperature .
Graphenylene
Graphenylene is a single layer carbon material with biphenylene-like subunits as basis in its hexagonal lattice structure. It is also known as biphenylene-carbon.
Carbophene
Carbophene is a 2 dimensional covalent organic framework. 4-6 carbophene has been synthesized from 1-3-5 trihydroxybenzene. It consists of 4-carbon and 6-carbon rings in 1:1 ratio. The angles between the three σ-bonds of the orbitals are approximately 120°, 90°, and 150°.
AA'-graphite
AA'-graphite is an allotrope of carbon similar to graphite, but where the layers are positioned differently to each other as compared to the order in graphite.
Diamane
Diamane is a 2D form of diamond. It can be made via high pressures, but without that pressure, the material reverts to graphene. Another technique is to add hydrogen atoms, but those bonds are weak. Using fluorine (xenon-difluoride) instead brings the layers closer together, strengthening the bonds. This is called f-diamane.
Amorphous carbon
Amorphous carbon is the name used for carbon that does not have any crystalline structure. As with all glassy materials, some short-range order can be observed, but there is no long-range pattern of atomic positions. While entirely amorphous carbon can be produced, most amorphous carbon contains microscopic crystals of graphite-like, or even diamond-like carbon.
Coal and soot or carbon black are informally called amorphous carbon. However, they are products of pyrolysis (the process of decomposing a substance by the action of heat), which does not produce true amorphous carbon under normal conditions.
Nanocarbons
Buckminsterfullerenes
The buckminsterfullerenes, or usually just fullerenes or buckyballs for short, were discovered in 1985 by a team of scientists from Rice University and the University of Sussex, three of whom were awarded the 1996 Nobel Prize in Chemistry. They are named for the resemblance to the geodesic structures devised by Richard Buckminster "Bucky" Fuller. Fullerenes are positively curved molecules of varying sizes composed entirely of carbon, which take the form of a hollow sphere, ellipsoid, or tube (the C60 version has the same form as a traditional stitched soccer ball).
As of the early twenty-first century, the chemical and physical properties of fullerenes are still under heavy study, in both pure and applied research labs. In April 2003, fullerenes were under study for potential medicinal use — binding specific antibiotics to the structure to target resistant bacteria and even target certain cancer cells such as melanoma.
Carbon nanotubes
Carbon nanotubes, also called buckytubes, are cylindrical carbon molecules with novel properties that make them potentially useful in a wide variety of applications (e.g., nano-electronics, optics, materials applications, etc.). They exhibit extraordinary strength, unique electrical properties, and are efficient conductors of heat. Non-carbon nanotubes have also been synthesized.
Carbon nanotubes are members of the fullerene structural family, which also includes buckyballs. Whereas buckyballs are spherical in shape, a nanotube is cylindrical, with at least one end typically capped with a hemisphere of the buckyball structure. Their name is derived from their size, since the diameter of a nanotube is on the order of a few nanometers (approximately 50,000 times smaller than the width of a human hair), while they can be up to several centimeters in length. There are two main types of nanotubes: single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs).
Carbon nanobuds
Carbon nanobuds are a newly discovered allotrope of carbon in which fullerene-like "buds" are covalently attached to the outer sidewalls of carbon nanotubes. This hybrid material has useful properties of both fullerenes and carbon nanotubes. For instance, they have been found to be exceptionally good field emitters.
Schwarzites
Schwarzites are negatively curved carbon surfaces originally proposed by decorating triply periodic minimal surfaces with carbon atoms. The geometric topology of the structure is determined by the presence of ring defects, such as heptagons and octagons, added to graphene's hexagonal lattice.
(Negative curvature bends surfaces outwards like a saddle rather than bending inwards like a sphere.)
Recent work has proposed zeolite-templated carbons (ZTCs) may be schwarzites. The name, ZTC, derives from their origin inside the pores of zeolites, crystalline silicon dioxide minerals. A vapor of carbon-containing molecules is injected into the zeolite, where the carbon gathers on the pores' walls, creating the negative curve. Dissolving the zeolite leaves the carbon. A team generated structures by decorating the pores of a zeolite with carbon through a Monte Carlo method. Some of the resulting models resemble schwarzite-like structures.
Glassy carbon
Glassy carbon or vitreous carbon is a class of non-graphitizing carbon widely used as an electrode material in electrochemistry, as well as for high-temperature crucibles and as a component of some prosthetic devices.
It was first produced by Bernard Redfern in the mid-1950s at the laboratories of The Carborundum Company, Manchester, UK. He had set out to develop a polymer matrix to mirror a diamond structure and discovered a resole (phenolic) resin that would, with special preparation, set without a catalyst. Using this resin, the first glassy carbon was produced.
The preparation of glassy carbon involves subjecting the organic precursors to a series of heat treatments at temperatures up to 3000 °C. Unlike many non-graphitizing carbons, they are impermeable to gases and are chemically extremely inert, especially those prepared at very high temperatures. It has been demonstrated that the rates of oxidation of certain glassy carbons in oxygen, carbon dioxide or water vapor are lower than those of any other carbon. They are also highly resistant to attack by acids. Thus, while normal graphite is reduced to a powder by a mixture of concentrated sulfuric and nitric acids at room temperature, glassy carbon is unaffected by such treatment, even after several months.
Carbon nanofoam
Carbon nanofoam is the fifth known allotrope of carbon, discovered in 1997 by Andrei V. Rode and co-workers at the Australian National University in Canberra. It consists of a low-density cluster-assembly of carbon atoms strung together in a loose three-dimensional web.
Each cluster is about 6 nanometers wide and consists of about 4000 carbon atoms linked in graphite-like sheets that are given negative curvature by the inclusion of heptagons among the regular hexagonal pattern. This is the opposite of what happens in the case of buckminsterfullerenes, in which carbon sheets are given positive curvature by the inclusion of pentagons.
The large-scale structure of carbon nanofoam is similar to that of an aerogel, but with 1% of the density of previously produced carbon aerogels – only a few times the density of air at sea level. Unlike carbon aerogels, carbon nanofoam is a poor electrical conductor.
Carbide-derived carbon
Carbide-derived carbon (CDC) is a family of carbon materials with different surface geometries and carbon ordering that are produced via selective removal of metals from metal carbide precursors, such as TiC, SiC, , , etc. This synthesis is accomplished using chlorine treatment, hydrothermal synthesis, or high-temperature selective metal desorption under vacuum. Depending on the synthesis method, carbide precursor, and reaction parameters, multiple carbon allotropes can be achieved, including endohedral particles composed of predominantly amorphous carbon, carbon nanotubes, epitaxial graphene, nanocrystalline diamond, onion-like carbon, and graphitic ribbons, barrels, and horns. These structures exhibit high porosity and specific surface areas, with highly tunable pore diameters, making them promising materials for supercapacitor-based energy storage, water filtration and capacitive desalinization, catalyst support, and cytokine removal.
Other metastable carbon phases, some diamondlike, have been produced from reactions of SiC or CH3SiCl3 with CF4.
Linear acetylenic carbon
A one-dimensional carbon polymer with the structure —(C≡C)n—. Its structure is relatively similar to that of amorphous carbon.
Cyclocarbons
Cyclo[18]carbon (C18) was synthesized in 2019.
Other possible allotropes
Many other allotropes have been hypothesized but have yet to be synthesized.
bcc-carbon: At ultrahigh pressures of above 1000 GPa, diamond is predicted to transform into a body-centered cubic structure. This phase has importance in astrophysics and deep interiors of planets like Uranus and Neptune. Various structures have been proposed. Superdense and superhard material resembling this phase was synthesized and published in 1979 and reported to have the Im space group with eight atoms per primitive unit cell (16 atoms per conventional unit cell). Claims were made that the so-called C structure had been synthesized, having eight-carbon cubes similar to cubane in the Imm space group, with eight atoms per primitive unit cell, or 16 atoms per conventional unit cell (also called supercubane, see illustration to the right). But a paper in 1988 claimed that a better theory was that the structure was the same as that of an allotrope of silicon called Si-III or γ-silicon, the so-called BC8 structure with space group Ia and 8 atoms per primitive unit cell (16 atoms per conventional unit cell). In 2008 it was reported that the cubane-like structure had been identified. A paper in 2012 considered four proposed structures: the supercubane structure, the BC8 structure, a structure with clusters of four carbon atoms in tetrahedra in space group I3m having four atoms per primitive unit cell (eight per conventional unit cell), and a structure the authors called "carbon sodalite". They found in favor of this carbon sodalite structure, with a calculated density of 2.927 g/cm³, shown in the upper left of the illustration under the abstract. This structure has just six atoms per primitive unit cell (twelve per conventional unit cell). The carbon atoms are in the same locations as the silicon and aluminum atoms of the mineral sodalite. The space group, I3m, is the same as the fully expanded form of sodalite would have if sodalite had just silicon or just aluminum.
bct-carbon: Body-centered tetragonal carbon was proposed by theorists in 2010.
Chaoite is a mineral believed to have been formed in meteorite impacts. It has been described as slightly harder than graphite with a reflection color of grey to white. However, the existence of carbyne phases is disputed – see the article on chaoite for details.
D-carbon: D-carbon was proposed by theorists in 2018. D-carbon is an orthorhombic sp3 carbon allotrope (6 atoms per cell). Total-energy calculations demonstrate that D-carbon is energetically more favorable than the previously proposed T6 structure (with 6 atoms per cell) as well as many others.
Haeckelites: Ordered arrangements of pentagons, hexagons, and heptagons which can either be flat or tubular.
The Laves graph or K4 crystal is a theoretically predicted three-dimensional crystalline metastable carbon structure in which each carbon atom is bonded to three others, at 120° angles (like graphite), but where the bond planes of adjacent layers lie at an angle of 70.5°, rather than coinciding.
M-carbon: Monoclinic C-centered carbon is thought to have been first created in 1963 by compressing graphite at room temperature. Its structure was theorized in 2006, then in 2009 it was related to those experimental observations. Many structural candidates, including bct-carbon, were proposed to be equally compatible with experimental data available at the time, until in 2012 it was shown theoretically that this structure is kinetically the most likely to form from graphite. High-resolution data appeared shortly after, demonstrating that among all structure candidates only M-carbon is compatible with experiment.
Metallic carbon: Theoretical studies have shown that there are regions in the phase diagram, at extremely high pressures, where carbon has metallic character. Laser shock experiments and theory indicate that above 600 GPa liquid carbon is metallic.
Novamene: A combination of both hexagonal diamond and sp2 hexagons as in graphene.
Phagraphene: Graphene-like allotrope with distorted Dirac cones.
Prismane C8 is a theoretically predicted metastable carbon allotrope comprising an atomic cluster of eight carbon atoms, with the shape of an elongated triangular bipyramid—a six-atom triangular prism with two more atoms above and below its bases.
Protomene: A hexagonal crystal structure with a fully relaxed primitive cell involving 48 atoms. Out of these, 12 atoms have the potential to switch hybridization between sp2 and sp3, forming dimers.
Q-carbon: Ferromagnetic carbon was discovered in 2015.
T-carbon: Every carbon atom in diamond is replaced with a carbon tetrahedron (hence 'T-carbon'). This was proposed by theorists in 1985.
There is evidence that white dwarf stars have a core of crystallized carbon and oxygen nuclei. The largest of these found in the universe so far, BPM 37093, is located in the constellation Centaurus. A news release from the Harvard-Smithsonian Center for Astrophysics described the stellar core as a diamond, and it was nicknamed Lucy after the Beatles' song "Lucy in the Sky With Diamonds"; however, it is more likely an exotic form of carbon.
Penta-graphene is a predicted carbon allotrope that utilizes the Cairo pentagonal tiling.
U carbon is predicted to consist of corrugated layers tiled with six- or 12-atom rings, linked by covalent bonds. Notably, it is predicted to be harder than steel, as conductive as stainless steel, highly reflective, and ferromagnetic, behaving as a permanent magnet at temperatures up to 125 °C.
Zayedene: A combination of linear sp carbon chains and sp3 bulk carbon. The structure of these crystalline carbon allotropes consists of sp chains inserted in cylindrical cavities periodically arranged in hexagonal diamond (lonsdaleite).
Variability of carbon
The system of carbon allotropes spans an astounding range of extremes, considering that they are all merely structural formations of the same element.
Between diamond and graphite:
Diamond crystallizes in the cubic system but graphite crystallizes in the hexagonal system.
Diamond is clear and transparent, but graphite is black and opaque.
Diamond is the hardest mineral known (10 on the Mohs scale), but graphite is one of the softest (1–2 on Mohs scale).
Diamond is the ultimate abrasive, but graphite is soft and is a very good lubricant.
Diamond is an excellent electrical insulator, but graphite is an excellent conductor.
Diamond is an excellent thermal conductor, but some forms of graphite are used for thermal insulation (for example heat shields and firebreaks).
At standard temperature and pressure, graphite is the thermodynamically stable form. Thus diamonds do not exist forever. The conversion from diamond to graphite, however, has a very high activation energy and is therefore extremely slow.
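To get a feel for how slow this conversion is, the sketch below evaluates a simple Arrhenius rate k = A·exp(−Ea/RT). The activation energy and attempt frequency are hypothetical round numbers chosen only for illustration, not measured values from this article.
```python
# Illustrative only: order-of-magnitude Arrhenius estimate of why the
# diamond-to-graphite conversion is negligibly slow at room temperature.
import math

R = 8.314    # gas constant, J/(mol K)
Ea = 5.0e5   # hypothetical activation energy, J/mol (~500 kJ/mol)
A = 1e13     # hypothetical attempt frequency, 1/s

def rate(T):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

print(f"k at  298 K: {rate(298.0):.3e} 1/s")   # effectively zero
print(f"k at 2000 K: {rate(2000.0):.3e} 1/s")  # graphitization becomes fast
```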
Despite the hardness of diamonds, the chemical bonds that hold the carbon atoms in diamonds together are actually weaker than those that hold together graphite. The difference is that in diamond, the bonds form an inflexible three-dimensional lattice. In graphite, the atoms are tightly bonded into sheets, but the sheets can slide easily over each other, making graphite soft.
| Physical sciences | Group 14 | Chemistry |
551217 | https://en.wikipedia.org/wiki/Mudflat | Mudflat | Mudflats or mud flats, also known as tidal flats or, in Ireland, slob or slobs, are coastal wetlands that form in intertidal areas where sediments have been deposited by tides or rivers. A global analysis published in 2019 suggested that tidal flat ecosystems are as extensive globally as mangroves. They are found in sheltered areas such as bays, bayous, lagoons, and estuaries; they are also seen in freshwater lakes and salty lakes (or inland seas) alike, where many rivers and creeks end. Mudflats may be viewed geologically as exposed layers of bay mud, resulting from deposition of estuarine silts, clays and aquatic animal detritus. Most of the sediment within a mudflat is within the intertidal zone, and thus the flat is submerged and exposed approximately twice daily.
A recent global remote sensing analysis estimated that approximately 50% of the global extent of tidal flats occurs within eight countries (Indonesia, China, Australia, United States, Canada, India, Brazil, and Myanmar) and that 44% of the world's tidal flats occur within Asia. A 2022 analysis of tidal wetland losses and gains estimated that global tidal flats experienced losses between 1999 and 2019 which were largely offset by global gains over the same time period.
In the past tidal flats were considered unhealthy, economically unimportant areas and were often dredged and developed into agricultural land. Some mudflats can be extremely treacherous to walk on. For example, the mudflats surrounding Anchorage, Alaska, are made of fine glacial silt which does not easily release its water and, although seemingly solid, can quickly turn quicksand-like when disturbed by footsteps. Four people are known to have become stuck up to their waists and drowned when the tide came in, and many others are rescued from the Anchorage mudflats each year.
In places on the Baltic Sea coast of Germany, mudflats are exposed not by tidal action but by wind driving water away from the shallows toward the sea. This kind of wind-affected mudflat is called Windwatt in German.
Ecology
Tidal flats, along with intertidal salt marshes and mangrove forests, are important ecosystems. They usually support a large population of wildlife, and are a key habitat that allows tens of millions of migratory shorebirds to migrate from breeding sites in the northern hemisphere to non-breeding areas in the southern hemisphere. They are often of vital importance to migratory birds, as well as certain species of crabs, mollusks and fish. In the United Kingdom mudflats have been classified as a Biodiversity Action Plan priority habitat.
The maintenance of mudflats is important in preventing coastal erosion. However, mudflats worldwide are under threat from predicted sea level rises, land reclamation for development, dredging for shipping, and chemical pollution. In some parts of the world, such as East and South-East Asia, mudflats have been reclaimed for aquaculture, agriculture, and industrial development. For example, around the Yellow Sea region of East Asia, more than 65% of mudflats present in the early 1950s had been destroyed by the late 2000s. It is estimated that up to 16% of the world's tidal flats have disappeared since the mid-1980s.
Mudflat sediment deposits are concentrated in the intertidal zone, which is composed of a barren zone and marshes. Within these areas are various ratios of sand and mud that make up the sedimentary layers. The growth of the associated coastal sediment deposits can be attributed to rates of subsidence, rates of deposition (for example, silt transported via river), and changes in sea level.
Barren zones extend from the lowest portion of the intertidal zone to the marsh areas. Beginning in close proximity to the tidal bars, sand-dominated layers are prominent and become increasingly muddy through the tidal channels. Common bedding types include laminated sand, ripple bedding, and bay mud. Bioturbation also has a strong presence in barren zones.
Marshes contain an abundance of herbaceous plants, while the sediment layers consist of thin sand and mud layers. Mudcracks are common, as are wavy bedding planes. Marshes are also the origin of coal/peat layers because of the abundant decaying plant life.
Salt pans can be distinguished in that they contain thinly laminated layers of clayey silt. The main source of the silt is rivers. Dried mud, along with wind erosion, forms silt dunes. When flooding, rain, or tides come in, the dried sediment is redistributed.
Selected example areas
Arcachon Bay, France
Banc d'Arguin, Mauritania
Chamiza Wetland, Chile
Great Rann of Kutch, India
Belhaven, East Lothian, Scotland, United Kingdom
Bridgwater Bay and Morecambe Bay, United Kingdom
Cape Cod Bay, Massachusetts, United States
Cook Inlet, Alaska, United States
Lindisfarne Island, England, United Kingdom
Minas Basin, Nova Scotia, Canada
Moreton Bay, Queensland, Australia
North Slob, Wexford, Ireland
Kneiss Archipelago, Tunisia
Padilla Bay, Washington, United States
Plymouth Bay, Massachusetts, United States
Port of Tacoma, Washington, United States
Port Susan, Warm Beach, Washington, United States
Skagit Bay, Washington, United States
Snettisham, Norfolk, England, United Kingdom
Wadden Sea: Netherlands, Germany, Denmark
West coast of Andros Island, Bahamas
Yellow Sea: China, North Korea, South Korea
| Physical sciences | Oceanic and coastal landforms | Earth science |
551359 | https://en.wikipedia.org/wiki/Magnetic%20quantum%20number | Magnetic quantum number | In atomic physics, a magnetic quantum number is a quantum number used to distinguish quantum states of an electron or other particle according to its angular momentum along a given axis in space. The orbital magnetic quantum number (mℓ or m) distinguishes the orbitals available within a given subshell of an atom. It specifies the component of the orbital angular momentum that lies along a given axis, conventionally called the z-axis, so it describes the orientation of the orbital in space. The spin magnetic quantum number ms specifies the z-axis component of the spin angular momentum for a particle having spin quantum number s. For an electron, s is 1/2, and ms is either +1/2 or −1/2, often called "spin-up" and "spin-down", or α and β. The term magnetic in the name refers to the magnetic dipole moment associated with each type of angular momentum, so states having different magnetic quantum numbers shift in energy in a magnetic field according to the Zeeman effect.
The four quantum numbers conventionally used to describe the quantum state of an electron in an atom are the principal quantum number n, the azimuthal (orbital) quantum number ℓ, and the magnetic quantum numbers mℓ and ms. Electrons in a given subshell of an atom (such as s, p, d, or f) are defined by values of ℓ (0, 1, 2, or 3). The orbital magnetic quantum number mℓ takes integer values in the range from −ℓ to +ℓ, including zero. Thus the s, p, d, and f subshells contain 1, 3, 5, and 7 orbitals each. Each of these orbitals can accommodate up to two electrons (with opposite spins), forming the basis of the periodic table.
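A minimal sketch (illustrative only, not from the article) that enumerates the allowed mℓ values and the resulting orbital counts for each subshell:
```python
# Enumerate the allowed orbital magnetic quantum numbers m_l per subshell.
subshell = {0: "s", 1: "p", 2: "d", 3: "f"}

for ell in range(4):
    m_values = list(range(-ell, ell + 1))   # m_l = -l, ..., 0, ..., +l
    orbitals = len(m_values)                # 2l + 1 orbitals per subshell
    electrons = 2 * orbitals                # two electrons per orbital
    print(f"{subshell[ell]}: m_l = {m_values}, {orbitals} orbitals, "
          f"up to {electrons} electrons")
```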
Other magnetic quantum numbers are similarly defined, such as mj for the z-axis component of the total electronic angular momentum j, and mI for the nuclear spin I. Magnetic quantum numbers are capitalized to indicate totals for a system of particles, such as ML or MS for the total z-axis orbital or spin angular momentum of all the electrons in an atom.
Derivation
There is a set of quantum numbers associated with the energy states of the atom. The four quantum numbers n, ℓ, mℓ, and ms specify the complete quantum state of a single electron in an atom, called its wavefunction or orbital. The Schrödinger equation for the wavefunction of an atom with one electron is a separable partial differential equation. (This is not the case for the neutral helium atom or other atoms with mutually interacting electrons, which require more sophisticated methods for solution.) This means that the wavefunction, as expressed in spherical coordinates, can be broken down into the product of three functions of the radius, colatitude (or polar) angle, and azimuth: ψ(r, θ, φ) = R(r)P(θ)F(φ).
The differential equation for F can be solved in the form F(φ) = Ae^(λφ). Because values of the azimuth angle φ differing by 2π radians (360 degrees) represent the same position in space, and the overall magnitude of F does not grow with arbitrarily large φ as it would for a real exponent, the coefficient λ must be quantized to integer multiples of i, producing an imaginary exponent: λ = imℓ. These integers are the magnetic quantum numbers. The same constant appears in the colatitude equation, where larger values of mℓ² tend to decrease the magnitude of P(θ), and values of mℓ greater than the azimuthal quantum number ℓ do not permit any solution for P(θ).
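In outline, the standard argument summarized above runs as follows (a reconstruction of the usual textbook steps, using F for the azimuthal factor as in the text):
```latex
% Reconstruction of the standard azimuthal-equation steps:
\[
  \frac{d^2 F}{d\varphi^2} = \lambda^2 F
  \quad\Longrightarrow\quad
  F(\varphi) = A e^{\lambda\varphi}.
\]
\[
  F(\varphi + 2\pi) = F(\varphi) \text{ and } |F| \text{ bounded}
  \;\Longrightarrow\;
  \lambda = i m_\ell, \qquad m_\ell \in \{0, \pm 1, \pm 2, \dots\}.
\]
```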
As a component of angular momentum
The axis used for the polar coordinates in this analysis is chosen arbitrarily. The quantum number mℓ refers to the projection of the angular momentum onto this arbitrarily chosen axis, conventionally called the z-direction or quantization axis. Lz, the component of the angular momentum along the z-direction, is given by the formula:
Lz = mℓħ.
This is a component of the atomic electron's total orbital angular momentum L, whose magnitude is related to the azimuthal quantum number ℓ of its subshell by the equation:
L = ħ√(ℓ(ℓ + 1)),
where ħ is the reduced Planck constant. Note that L exceeds the largest possible value of Lz, namely ħℓ, whenever ℓ > 0, and only approximates ħ(ℓ + 1/2) for high ℓ. It is not possible to measure the angular momentum of the electron along all three axes simultaneously. These properties were first demonstrated in the Stern–Gerlach experiment, by Otto Stern and Walther Gerlach.
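As a concrete illustration (a standard textbook computation, not a figure from this article), consider a d electron:
```latex
% Worked example for a d electron (l = 2):
\[
  L = \hbar\sqrt{2(2+1)} = \sqrt{6}\,\hbar \approx 2.449\,\hbar,
  \qquad
  L_z = m_\ell\hbar \in \{-2\hbar,\,-\hbar,\,0,\,+\hbar,\,+2\hbar\}.
\]
% Since the maximum L_z is 2*hbar < sqrt(6)*hbar = L, the angular momentum
% vector can never lie exactly along the quantization axis.
```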
Effect in magnetic fields
The quantum number mℓ refers, loosely, to the direction of the angular momentum vector. The magnetic quantum number only affects the electron's energy if it is in a magnetic field, because in the absence of one all spherical harmonics corresponding to the different values of mℓ are equivalent. The magnetic quantum number determines the energy shift of an atomic orbital due to an external magnetic field (the Zeeman effect), hence the name magnetic quantum number. However, the actual magnetic dipole moment of an electron in an atomic orbital arises not only from the electron angular momentum but also from the electron spin, expressed in the spin quantum number.
Since each electron has a magnetic moment in a magnetic field, it will be subject to a torque which tends to make the vector L parallel to the field, a phenomenon known as Larmor precession.
| Physical sciences | Atomic physics | Physics |
551388 | https://en.wikipedia.org/wiki/Engineering%20management | Engineering management | Engineering management is the application of engineering methods, tools, and techniques to business management systems. Engineering management is a career that brings together the technological problem-solving ability of engineering and the organizational, administrative, legal and planning abilities of management in order to oversee the operational performance of complex engineering-driven enterprises.
Universities offering bachelor degrees in engineering management typically have programs covering courses such as engineering management, project management, operations management, logistics, supply chain management, programming concepts, programming applications, operations research, engineering law, value engineering, quality control, quality assurance, six sigma, safety engineering, systems engineering, engineering leadership, accounting, applied engineering design, business statistics and calculus. A Master of Engineering Management (MEM) and Master of Business Engineering (MBE) are sometimes compared to a Master of Business Administration (MBA) for professionals seeking a graduate degree as a qualifying credential for a career in engineering management.
History
Stevens Institute of Technology is believed to have the oldest engineering management department, established as the School of Business Engineering in 1908. This was later called the Bachelor of Engineering in Engineering Management (BEEM) program and moved into the School of Systems and Enterprises. Syracuse University established the first graduate engineering management degree in the United States, which was first offered in 1957. In 1967 the first university department explicitly titled "Engineering Management" was founded at the Missouri University of Science and Technology (Missouri S&T, formerly the University of Missouri-Rolla, formerly Missouri School of Mines). In 1959, Western Michigan University began offering the predecessor to the modern engineering management bachelor's degree (titled "Industrial Supervision") and in 1977, Western Michigan University started its MS degree in Manufacturing Administration, later renamed as Engineering Management.
Outside the United States, in Germany the first department concentrating on Engineering Management was established in 1927 at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin). In Turkey, the Istanbul Technical University has a Management Engineering Department, established in 1982, offering a number of graduate and undergraduate programs in Management Engineering (in English). In the UK, the University of Warwick has a specialised department, WMG (previously known as Warwick Manufacturing Group), established in 1980, which offers a graduate programme in MSc Engineering Business Management.
Michigan Technological University began an Engineering Management program in the School of Business & Economics in the Fall of 2012.
In Canada, Memorial University of Newfoundland has started a complete master's degree Program in Engineering Management.
In Denmark, the Technical University of Denmark offers a MSc program in Engineering Management (in English).
In Pakistan, University of Engineering and Technology, Taxila, University of Engineering and Technology, Lahore and National University of Science and Technology (NUST) offer admission both at Master and Doctorate level in Engineering Management while Capital University of Science & Technology (CUST), NED University of Engineering & Technology, Karachi and Ghulam Ishaq Khan Institute of Engineering Sciences and Technology have been running a Master of Engineering/MS in Engineering Management program. A variant of this program is within Quality Management. COMSATS (CIIT) offers a MSc Project Management program to Local and Overseas Pakistanis as an on-campus/off-campus student.
In Italy, the first Engineering Management program was established in 1972 at the University of Calabria by Beniamino Andreatta. Politecnico di Milano offers degrees in Management Engineering, as do many other public and private (publicly accredited) universities within the same classification of post-secondary academic degrees.
In Morocco, École Nationale Supérieure des Mines de Rabat offers an Engineering Management degree (three years of study full time with a selective admission for Associate or bachelor degree holders). The degree offered is referred to locally as Diplôme d'Ingénieur and is equivalent to Master level degree.
In Russia, since 2014 the Faculty of Engineering Management of The Russian Presidential Academy of National Economy and Public Administration (RANEPA) offers bachelor's and master's degrees in Engineering Management.
In France, the EPF will offer, from January 2018, a 2-year Engineering & Management major in English for the 4th and 5th years of its 5-year Engineering master's degree. The final two years are open to students who have completed an undergraduate engineering degree elsewhere.
Areas of practice
Engineering management is a broad field and can cover a wide range of technical and managerial topics. An important resource is the Engineering Management Body of Knowledge (EMBoK). The topics below are representative of typical topics in the field.
Leadership and organization management
Leadership and organization management are concerned with the skills involving positive direction of technical organizations and motivation of employees. Often a manager must shape engineering policy within an organization.
Operations, operations research, and supply chain
Operations management is concerned with designing and controlling the process of production and redesigning business operations in the production of goods or services. Operations research deals with quantitative models of complex operations and uses these models to support decision-making in any sector of industry or public services. Supply chain management is the process of planning, implementing and managing the flow of goods, services and related information from the point of origin to the point of consumption.
Engineering law
Engineering law and the related statutes are critical to management practice and engineering. Engineering legislation makes engineering a controlled activity and an engineering manager must know which statutes apply to their practice. Codes of ethics can be enshrined in law. Professional misconduct and negligence are defined in law. An engineering manager must be licensed as an engineer and may have engineers, technicians and natural scientists reporting to her or him. Understanding how licensed engineers supervise non-licensed technicians and natural scientists is critical to safe practice.
An engineering manager must always use engineering legislation to push back against schedule pressure or budget pressure to ensure public safety.
Management of technology
Introducing and utilizing new technology is a major route to cost reduction and quality improvement in production engineering.
The management of technology (MOT) theme builds on the foundation of management topics in accounting, finance, economics, organizational behavior and organizational design. Courses in this theme deal with operational and organizational issues related to managing innovation and technological change.
New product development and product engineering
New product development (NPD) is the complete process of bringing a new product to market. Product engineering refers to the process of designing and developing a device, assembly, or system such that it can be produced as an item for sale through some production manufacturing process. Product engineering usually entails activity dealing with issues of cost, producibility, quality, performance, reliability, serviceability, intended lifespan and user features. Project management techniques are used to manage the design and development progress using the phase-gate model in the product development process. Design for manufacturability (also sometimes known as design for manufacturing or DFM) is the general engineering art of designing products in such a way that they are easy to manufacture.
Systems engineering
Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design and manage complex systems over their life cycles.
Industrial engineering
Industrial engineering is a branch of engineering which deals with the optimization of complex processes, systems or organizations. Industrial engineers work to eliminate waste of time, money, materials, man-hours, machine time, energy and other resources that do not generate value.
Management science
Management science uses various scientific research-based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms to improve an organization's ability to enact rational and meaningful management decisions by arriving at optimal or near optimal solutions to complex decision problems.
Engineering design management
Engineering design management represents the adaptation and application of customary management practices, with the intention of achieving a productive engineering design process. Engineering design management is primarily applied in the context of engineering design teams, whereby the activities, outputs and influences of design teams are planned, guided, monitored and controlled.
Human factors safety culture
Critical to management success in engineering is the study of human factors and safety culture involved with highly complex tasks within organizations large and small. In complex engineering systems, human factors safety culture can be critical in preventing catastrophe and minimizing the realized hazard rate. Critical areas of safety culture are minimizing blame avoidance, minimizing power distance, an appropriate ambiguity tolerance and minimizing a culture of concealment. Increasing organizational empathy and an ability to clearly report problems up the chain of management is important to the success of any engineering program.
Managing an engineering firm is quite different from managing a law firm. Law firms keep secrets, while engineering firms succeed when information is disseminated clearly and quickly. Engineering managers must push against a culture of concealment, which may be promoted by the law department.
Managers in an engineering firm must be ready to push back against schedule and budget constraints from the executive suite. Engineering managers must use engineering law to push back against the executive suite to ensure public safety. The executive suite in an engineering organization can become consumed with financial data imperiling public safety.
Education
Engineering management programs typically include instruction in accounting, economics, finance, project management, systems engineering, industrial engineering, mathematical modeling and optimization, management information systems, quality control and six sigma, operations management, operations research, human resources management, industrial psychology, safety and health.
There are many options for entering into engineering management, albeit that the foundation requirement is an engineering license.
Undergraduate degrees
Although most engineering management programs are geared toward graduate studies, there are a number of institutions that teach EM at the undergraduate level. Over twenty undergraduate engineering management related programs are accredited by ABET, including: West Point (United States Military Academy), Western Michigan University (accredited by the ETAC of ABET), Stevens Institute of Technology, Clarkson University, Gonzaga University, Virginia Tech, Arizona State University, and the Missouri University of Science and Technology. Graduates of these programs regularly command starting salaries of nearly $65,000 in their first year out of school.
Outside the US, the Istanbul Technical University Management Engineering Department offers an undergraduate degree in Management Engineering, attracting top students. The University of Waterloo offers a 4-year undergraduate degree (five years including co-op education) in the field of Management Engineering, the first program of its kind in Canada. In Peru, Universidad del Pacífico offers a five-year undergraduate degree in this field, the first program in that country. In Germany, ESB Business School offers a 4-year undergraduate program which consists of five semesters at ESB Business School, two mandatory internships (one of which must be outside Germany), and one mandatory semester abroad. In the annual applied university ranking of the magazine Wirtschaftswoche, the Engineering Management course of ESB Business School is ranked fifth among all applied universities in Germany. The magazine surveyed more than 500 recruiters in German industry about the universities from which they are most likely to recruit students and which universities best satisfy their needs regarding project experience, multilingual education, and communication skills.
Graduate degrees
Many universities offer Master of Engineering Management degrees.
Northwestern University has offered the Master of Engineering Management (MEM) program since 1976. The program is administered out of the Department of Industrial Engineering and Management Sciences. Students take courses across different schools of the university, such as the McCormick School of Engineering, the Kellogg School of Management, the Farley Center for Entrepreneurship and Innovation, and the Segal Design Institute. Graduate students are admitted based on eligibility criteria that include a minimum of three years of work experience.
Missouri S&T is credited with awarding the first Ph.D. in Engineering Management in 1984. The National Institute of Industrial Engineering based in Mumbai has been awarding degrees in the field of Post Graduate Diploma in Industrial Engineering since 1973 and the Fellowship (Doctoral) degrees have been awarded since 2008.
Western Michigan University began offering the MS in Manufacturing Administration degree in 1977 and later renamed the degree as Master of Science in Engineering Management. WMU's MSEM alumni work in the automotive, medical, manufacturing, and service sectors, often in roles of project manager, engineering manager, and senior leadership in engineering and technical organizations.
Cornell University started one of the first Engineering Management Masters programs in 1988 with the launch of their Master of Engineering (M.Eng.) in Engineering Management. The program allows students access to courses and programs in the College of Engineering, Johnson School of Management, and across Cornell University more broadly.
Massachusetts Institute of Technology offers a Master in System Design and Management, which is a member of the Consortium of Engineering Management.
Lamar University offers a Master of Engineering Management degree with flexible content to adjust to diverse engineering fields, with core content that includes operations management, accounting, and decision sciences.
Netaji Subhas Institute of Technology (NSIT), New Delhi, also provides an M.Tech. degree in Engineering Management. Admission to this program is through the GATE (Graduate Aptitude Test in Engineering) examination.
Students in the University of Kansas' Engineering Management Program are practicing professionals employed by over 100 business, manufacturing, government, or consulting firms. There are over 200 actively enrolled students in the program and approximately 500 alumni.
Istanbul Technical University Management Engineering Department offers a graduate degree, and a Ph.D. in Management Engineering.
Management engineering consulting
Large and small engineering-driven firms often require the expertise of external management consultants that specialize in companies where engineering practice and product development are key drivers of value. Most engineering management consultants will have, as a minimum, a professional engineering qualification, but usually they will also have graduate degrees in engineering and/or business, or a management consulting designation. The work involves providing management consulting services specific to professional engineering practice or to the engineering industry sector. Engineering management consultancies are typically boutique firms and have a more specialized focus than the traditional mainstream consulting firms such as A.T. Kearney, Boston Consulting Group, KPMG, PwC, and McKinsey. Applied science and engineering practice requires a combination of management art, science, and engineering practice. There are many professional service companies delivering services in a consultancy-type relationship to the engineering industry, including law, accounting, human resources, marketing, politics, economics, finance, public affairs, and communication. Commonly, engineering management consultants are used when firms require a combination of specialized technical knowledge and management know-how to enhance knowledge or transform organizational performance, while keeping any intellectual property developed confidential.
Engineering management consulting is concerned with the development, improvement, implementation and evaluation of integrated systems of organizations, people, money, knowledge, information, equipment, energy, materials and/or processes. Management Engineering Consultants strive to improve upon existing organizations, processes, products or systems. Engineering management consulting draws upon the principles and methods of engineering analysis and synthesis, as well as the mathematical, physical and social sciences together with the principles and methods of engineering design to specify, predict, and evaluate the results to be obtained from such systems or processes. Engineering management consulting can focus on the social impact of the product, process or system that is being analyzed. There is also an overlap between engineering management consulting and management science in services that require the adoption of more analytical approaches to problem solving.
Examples of where engineering management consulting might be used include developing and leading a company-wide business transformation initiative, designing and implementing a new product development process, or designing and implementing a manufacturing engineering process, including an automated assembly workstation. Management engineers may specialize in the acquisition and implementation of computer-aided design (CAD), computer-aided manufacturing (CAM) and computer-aided engineering (CAE) applications. Services may include strategizing for various operational logistics, new product introductions, or consulting as an efficiency expert. They may include using management science techniques to develop a new financial algorithm or loan system for a bank, streamlining operating room or emergency room usage in a hospital, planning complex distribution schemes for materials or products (referred to as supply chain management), and shortening lines (or queues) at a bank, hospital, or a theme park. Management engineering consultants typically use computer simulation (especially discrete event simulation), along with extensive mathematical tools and modeling and computational methods for system analysis, evaluation, and optimization.
Professional organizations
There are a number of societies and organizations dedicated to the field of engineering management. One of the largest societies is a division of IEEE, the Engineering Management Society, which regularly publishes a trade magazine. Another prominent professional organization in the field is the American Society for Engineering Management (ASEM), which was founded in 1979 by a group of 20 engineering managers from industry. ASEM currently certifies engineering managers (two levels) via the Certified Associate in Engineering Management (CAEM) or Certified Professional in Engineering Management (CPEM) certification exam. The Master of Engineering Management Programs Consortium is a consortium of nine universities intended to raise the value and visibility of the MEM degree. Also, engineering management graduate programs have the possibility of being accredited by ABET, ATMAE, or ASEM. In Canada, the Canadian Society for Engineering Management (CSEM) is a constituent society of the Engineering Institute of Canada (EIC), Canada's oldest learned engineering society.
| Technology | Basics | null |
551448 | https://en.wikipedia.org/wiki/Liana | Liana | A liana is a long-stemmed woody vine that is rooted in the soil at ground level and uses trees, as well as other means of vertical support, to climb up to the canopy in search of direct sunlight. The word liana does not refer to a taxonomic grouping, but rather a habit of plant growth – much like tree or shrub. It comes from standard French liane, itself from an Antilles French dialect word meaning to sheave.
Ecology
Lianas are characteristic of tropical moist broadleaf forests (especially seasonal forests), but may be found in temperate rainforests and temperate deciduous forests. There are also temperate lianas, for example the members of the Clematis or Vitis (wild grape) genera. Lianas can form bridges amidst the forest canopy, providing arboreal animals, including ants and many other invertebrates, lizards, rodents, sloths, monkeys, and lemurs with paths across the forest. For example, in the Eastern tropical forests of Madagascar, many lemurs achieve higher mobility from the web of lianas draped amongst the vertical tree species. Many lemurs prefer trees with lianas because of their roots.
Lianas do not derive nutrients directly from trees but live on and derive nutrients at the expense of trees. Specifically, they greatly reduce tree growth and tree reproduction, greatly increase tree mortality, prevent tree seedlings from establishing, alter the course of regeneration in forests, and ultimately decrease tree population growth rates. For example, forests without lianas grow 150% more fruit; trees with lianas have twice the probability of dying.
Lianas are uniquely adapted to living in such forests: they use host trees for stability as they climb to the top of the canopy. Lianas directly damage hosts by mechanical abrasion and strangulation, render hosts more susceptible to ice and wind damage, and increase the probability that the host tree falls. Lianas also provide support for weaker trees when strong winds blow by laterally anchoring them to stronger trees. However, they may be destructive in that when one tree falls, the connections made by the lianas may cause many other trees to fall. Because of these negative effects, trees which remain free of lianas are at an advantage; some species have evolved characteristics which help them avoid or shed lianas.
Some lianas attain great length, such as a Bauhinia sp. in Surinam which has grown as long as 600 meters (about 2,000 ft). Hawkins has accepted a length of 1.5 km (about 1 mile) for an Entada phaseoloides. The longest recorded monocot liana is Calamus manan (or Calamus ornatus) at 240 meters (787 ft). Dr. Francis E. Putz states that lianas (species not indicated) have weighed "hundreds of tons" and been a half mile (0.8 km) in length. One way of distinguishing lianas from trees and shrubs is based on the stiffness, specifically the Young's modulus, of various parts of the stem. Trees and shrubs have young twigs and smaller branches which are quite flexible, and older growth such as trunks and large branches which are stiffer. A liana often has stiff young growths and older, more flexible growth at the base of the stem.
Examples
Some families and genera containing liana species include:
| Biology and health sciences | Plant anatomy and morphology: General | Biology |
551731 | https://en.wikipedia.org/wiki/Electrification | Electrification | Electrification is the process of powering by electricity and, in many contexts, the introduction of such power by changing over from an earlier power source. In the context of history of technology and economic development, electrification refers to the build-out of the electricity generation and electric power distribution systems. In the context of sustainable energy, electrification refers to the build-out of super grids with energy storage to accommodate the energy transition to renewable energy and the switch of end-uses to electricity.
The electrification of particular sectors of the economy, particularly out of context, is called by modified terms such as factory electrification, household electrification, rural electrification and railway electrification. In the context of sustainable energy, terms such as transport electrification (referring to electric vehicles) or heating electrification (referring to heat pumps) are used. It may also apply to changing industrial processes such as smelting, melting, separating or refining from coal or coke heating, or to chemical processes to some type of electric process such as electric arc furnace, electric induction or resistance heating, or electrolysis or electrolytic separating.
Benefits of electrification
Electrification was called "the greatest engineering achievement of the 20th Century" by the National Academy of Engineering, and it continues in both rich and poor countries.
Benefits of electric lighting
Electric lighting is highly desirable. The light is much brighter than oil or gas lamps, and there is no soot. Although early electricity was very expensive compared to today, it was far cheaper and more convenient than oil or gas lighting. Electric lighting was so much safer than oil or gas that some companies were able to pay for the electricity with the insurance savings.
Pre-electric power
In 1851, Charles Babbage stated: "One of the inventions most important to a class of highly skilled workers (engineers) would be a small motive power - ranging perhaps from the force of from half a man to that of two horses, which might commence as well as cease its action at a moment's notice, require no expense of time for its management and be of modest cost both in original cost and in daily expense."
To be efficient, steam engines needed to be of several hundred horsepower. Steam engines and boilers also required operators and maintenance. For these reasons the smallest commercial steam engines were about 2 horsepower, which was larger than many small shops needed. Also, a small steam engine and boiler cost about $7,000, while an old blind horse that could develop 1/2 horsepower cost $20 or less. Machinery to use horses for power cost $300 or less.
Many power requirements were less than that of a horse. Shop machines, such as woodworking lathes, were often powered with a one- or two-man crank. Household sewing machines were powered with a foot treadle; however, factory sewing machines were steam-powered from a line shaft. Dogs were sometimes used on machines such as a treadmill, which could be adapted to churn butter.
In the late 19th century, specially designed power buildings leased space to small shops. These buildings supplied power to the tenants from a steam engine through line shafts.
Electric motors were several times more efficient than small steam engines because central station generation was more efficient than small steam engines and because line shafts and belts had high friction losses.
Electric motors were more efficient than human or animal power. The conversion efficiency for animal feed to work is between 4 and 5% compared to over 30% for electricity generated using coal.
Economic impact of electrification
Electrification and economic growth are highly correlated. In economics, the efficiency of electrical generation has been shown to correlate with technological progress.
In the U.S. from 1870 to 1880, each man-hour was provided with 0.55 hp. By 1950 each man-hour was provided with 5 hp, an annual increase of about 2.8%, though the rate declined to 1.5% from 1930 to 1950. The period of electrification of factories and households, from 1900 to 1940, was one of high productivity and economic growth.
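As a quick consistency check of the quoted growth rate, taking 1870 as the baseline year:
```latex
% Compound annual growth rate implied by the horsepower figures:
\[
  \left(\frac{5\ \mathrm{hp}}{0.55\ \mathrm{hp}}\right)^{1/(1950-1870)} - 1
  = 9.09^{1/80} - 1 \approx 0.028 \approx 2.8\%\ \text{per year}.
\]
```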
Most studies of electrification and electric grids focused on industrial core countries in Europe and the United States. Elsewhere, wired electricity was often carried on and through the circuits of colonial rule. Some historians and sociologists considered the interplay of colonial politics and the development of electric grids: in India, Rao showed that linguistics-based regional politics—not techno-geographical considerations—led to the creation of two separate grids; in colonial Zimbabwe (Rhodesia), Chikowero showed that electrification was racially based and served the white settler community while excluding Africans; and in Mandate Palestine, Shamir claimed that British electric concessions to a Zionist-owned company deepened the economic disparities between Arabs and Jews.
Current extent of electrification
While electrification of cities and homes has existed since the late 19th century, about 840 million people (mostly in Africa) had no access to grid electricity in 2017, down from 1.2 billion in 2010.
Vast gains in electrification were seen in the 1970s and 1980s—from 49% of the world's population in 1970 to 76% in 1990. By the early 2010s, 81–83% of the world's population had access to electricity.
Electrification for sustainable energy
Clean energy is mostly generated in the form of electricity, such as renewable energy or nuclear power. Switching to these energy sources requires that end uses, such as transport and heating, be electrified for the world's energy systems to be sustainable.
In the U.S. and Canada the use of heat pumps (HP) is economic if powered with solar photovoltaic (PV) devices to offset propane heating in rural areas and natural gas heating in cities. A 2023 study investigated: (1) a residential natural gas-based heating system and grid electricity, (2) a residential natural gas-based heating system with PV to serve the electric load, (3) a residential HP system with grid electricity, and (4) a residential HP+PV system. It found that under typical inflation conditions, the lifecycle cost of natural gas and reversible, air-source heat pumps are nearly identical, which in part explains why heat pump sales have surpassed gas furnace sales in the U.S. for the first time during a period of high inflation. With higher rates of inflation or lower PV capital costs, PV becomes a hedge against rising prices and encourages the adoption of heat pumps by also locking in both electricity and heating cost growth. The study concludes: "The real internal rate of return for such prosumer technologies is 20x greater than a long-term certificate of deposit, which demonstrates the additional value PV and HP technologies offer prosumers over comparably secure investment vehicles while making substantive reductions in carbon emissions." This approach can be improved by integrating a thermal battery into the heat pump+solar energy heating system.
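A minimal sketch of the kind of lifecycle comparison such a study performs is given below; every capital cost, energy bill, and inflation rate in it is a hypothetical placeholder, not a figure from the study.
```python
# Toy lifecycle-cost comparison: gas furnace versus heat pump + PV.
# All numbers are hypothetical placeholders for illustration only.
def lifecycle_cost(capital, annual_energy_cost, inflation, years=20):
    """Capital outlay plus energy bills escalating with inflation."""
    total = capital
    for year in range(years):
        total += annual_energy_cost * (1 + inflation) ** year
    return total

gas = lifecycle_cost(capital=5_000, annual_energy_cost=1_200, inflation=0.05)
hp_pv = lifecycle_cost(capital=18_000, annual_energy_cost=300, inflation=0.05)

print(f"gas furnace : ${gas:,.0f}")
print(f"HP + PV     : ${hp_pv:,.0f}")
# Higher inflation inflates the gas total faster than the HP+PV total,
# which is the hedging effect the study describes.
```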
Transport electrification
It is easier to sustainably produce electricity than it is to sustainably produce liquid fuels. Therefore, adoption of electric vehicles is a way to make transport more sustainable. Hydrogen vehicles may be an option for larger vehicles which have not yet been widely electrified, such as long distance lorries. While electric vehicle technology is relatively mature in road transport, electric shipping and aviation are still early in their development, hence sustainable liquid fuels may have a larger role to play in these sectors.
Heating electrification
A large fraction of the world population cannot afford sufficient cooling for their homes. In addition to air conditioning, which requires electrification and additional power demand, passive building design and urban planning will be needed to ensure cooling needs are met in a sustainable way. Similarly, many households in the developing and developed world suffer from fuel poverty and cannot heat their houses enough. Existing heating practices are often polluting.
A key sustainable solution to heating is electrification (heat pumps, or the less efficient electric heater). The IEA estimates that heat pumps currently provide only 5% of space and water heating requirements globally, but could provide over 90%. Use of ground source heat pumps not only reduces total annual energy loads associated with heating and cooling, it also flattens the electric demand curve by eliminating the extreme summer peak electric supply requirements. However, heat pumps and resistive heating alone will not be sufficient for the electrification of industrial heat, because several processes require higher temperatures that cannot be achieved with these types of equipment. For example, the production of ethylene via steam cracking requires temperatures as high as 900 °C. Hence, drastically new processes are required. Nevertheless, power-to-heat is expected to be the first step in the electrification of the chemical industry, with large-scale implementation expected by 2025.
Some cities in the United States have started prohibiting gas hookups for new houses, with state laws passed and under consideration to either require electrification or prohibit local requirements. The UK government is experimenting with electrification for home heating to meet its climate goals. Ceramic and induction heating for cooktops, as well as for industrial applications (for instance, steam crackers), are examples of technologies that can be used to transition away from natural gas.
Energy resilience
Electricity is a "sticky" form of energy, in that it tends to stay in the continent or island where it is produced. It is also multi-sourced; if one source suffers a shortage, electricity can be produced from other sources, including renewable sources. As a result, in the long term it is a relatively resilient means of energy transmission. In the short term, because electricity must be supplied at the same moment it is consumed, it is somewhat unstable, compared to fuels that can be delivered and stored on-site. However, that can be mitigated by grid energy storage and distributed generation.
Managing variable energy sources
Solar and wind are variable renewable energy sources that supply electricity intermittently depending on the weather and the time of day. Most electrical grids were constructed for non-intermittent energy sources such as coal-fired power plants. As larger amounts of solar and wind energy are integrated into the grid, changes have to be made to the energy system to ensure that the supply of electricity is matched to demand. In 2019, these sources generated 8.5% of worldwide electricity, a share that has grown rapidly.
There are various ways to make the electricity system more flexible. In many places, wind and solar production are complementary on daily and seasonal scales: there is more wind during the night and in winter, when solar energy production is low. Linking distant geographical regions through long-distance transmission lines allows for further cancelling out of variability. Energy demand can be shifted in time through energy demand management and the use of smart grids, matching the times when variable energy production is highest. With storage, energy produced in excess can be released when needed. Building additional capacity for wind and solar generation can help to ensure that enough electricity is produced even during poor weather; during optimal weather, energy generation may have to be curtailed. The final mismatch may be covered by using dispatchable energy sources such as hydropower, bioenergy, or natural gas.
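The toy calculation below illustrates the bookkeeping behind these flexibility measures: the residual load remaining after wind and solar is met first from storage and then from a dispatchable source. All hourly values are invented for illustration.
```python
# Toy dispatch model over hourly steps (1 GW for 1 h = 1 GWh).
demand = [30, 28, 35, 40, 38, 32]   # GW, hypothetical
wind   = [20, 25, 15, 10,  8, 18]   # GW, hypothetical
solar  = [ 0,  5, 20, 25, 10,  0]   # GW, hypothetical

storage = 10.0  # GWh available in storage, hypothetical
for hour, (d, w, s) in enumerate(zip(demand, wind, solar)):
    residual = d - w - s
    if residual <= 0:
        storage += -residual              # surplus charges storage (losses ignored)
        dispatch = 0.0
    else:
        from_storage = min(storage, residual)
        storage -= from_storage
        dispatch = residual - from_storage  # covered by hydro/bioenergy/gas
    print(f"hour {hour}: residual {residual:+.0f} GW, "
          f"storage {storage:.0f} GWh, dispatchable {dispatch:.0f} GW")
```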
Energy storage
Energy storage helps overcome barriers for intermittent renewable energy, and is therefore an important aspect of a sustainable energy system. The most commonly used storage method is pumped-storage hydroelectricity, which requires locations with large differences in height and access to water. Batteries, and specifically lithium-ion batteries, are also deployed widely. They contain cobalt, which is largely mined in Congo, a politically unstable region. More diverse geographical sourcing may ensure the stability of the supply-chain and their environmental impacts can be reduced by downcycling and recycling. Batteries typically store electricity for short periods; research is ongoing into technology with sufficient capacity to last through seasons. Pumped hydro storage and power-to-gas with capacity for multi-month usage has been implemented in some locations.
As of 2018, thermal energy storage is typically not as convenient as burning fossil fuels. High upfront costs form a barrier for implementation. Seasonal thermal energy storage requires large capacity; it has been implemented in some high-latitude regions for household heat.
History of electrification
The earliest commercial uses of electricity were electroplating and the telegraph.
Development of magnetos, dynamos and generators
In the years 1831–1832, Michael Faraday discovered the operating principle of electromagnetic generators. The principle, later called Faraday's law, is based on an electromotive force generated in an electrical conductor that is subjected to a varying magnetic flux as, for example, a wire moving through a magnetic field. Faraday built the first electromagnetic generator, called the Faraday disk, a type of homopolar generator, using a copper disc rotating between the poles of a horseshoe magnet. Faraday's first electromagnetic generator produced a small DC voltage.
Around 1832, Hippolyte Pixii improved the magneto by using a wire-wound horseshoe, with the extra coils of conductor generating more current, although it was alternating current. André-Marie Ampère suggested a means of converting the current from Pixii's magneto to direct current using a rocking switch. Later, segmented commutators were used to produce direct current.
Around 1838-40, William Fothergill Cooke and Charles Wheatstone developed a telegraph. In 1840 Wheatstone was using a magneto that he developed to power the telegraph. Wheatstone and Cooke made an important improvement in electrical generation by using a battery-powered electromagnet in place of a permanent magnet, which they patented in 1845. The self-excited magnetic field dynamo did away with the battery to power electromagnets. This type of dynamo was made by several people in 1866.
The first practical generator, the Gramme machine, was made by Z.T. Gramme, who sold many of these machines in the 1870s. British engineer R.E.B. Crompton improved the generator to allow better air cooling and made other mechanical improvements. Compound winding, which gave more stable voltage with load, improved the operating characteristics of generators.
The improvements in electrical generation technology in the 19th century increased its efficiency and reliability greatly. The first magnetos only converted a few percent of mechanical energy to electricity. By the end of the 19th century the highest efficiencies were over 90%.
Electric lighting
Arc lighting
Sir Humphry Davy invented the carbon arc lamp in 1802 upon discovering that electricity could produce a light arc with carbon electrodes. However, it was not used to any great extent until a practical means of generating electricity was developed.
Carbon arc lamps were started by making contact between two carbon electrodes, which were then separated to within a narrow gap. Because the carbon burned away, the gap had to be constantly readjusted. Several mechanisms were developed to regulate the arc. A common approach was to feed a carbon electrode by gravity and maintain the gap with a pair of electromagnets, one of which retracted the upper carbon after the arc was started and the second controlled a brake on the gravity feed.
Arc lamps of the time had very intense light output – in the range of 4,000 candlepower (candelas) – and released a lot of heat, and they were a fire hazard, all of which made them inappropriate for lighting homes.
In the 1850s, many of these problems were solved by the arc lamp invented by William Petrie and William Staite. The lamp used a magneto-electric generator and had a self-regulating mechanism to control the gap between the two carbon rods. Their light was used to light up the National Gallery in London and was a great novelty at the time. These arc lamps and designs similar to it, powered by large magnetos, were first installed on English lighthouses in the mid 1850s, but the technology suffered power limitations.
The first successful arc lamp (the Yablochkov candle) was developed by Russian engineer Pavel Yablochkov using the Gramme generator. Its advantage lay in the fact that it did not require the use of a mechanical regulator like its predecessors. It was first exhibited at the Paris Exposition of 1878 and was heavily promoted by Gramme. The arc light was installed along the half mile length of Avenue de l'Opéra, Place du Theatre Francais and around the Place de l'Opéra in 1878.
R. E. B. Crompton developed a more sophisticated design in 1878 which gave a much brighter and steadier light than the Yablochkov candle. In 1878, he formed Crompton & Co. and began to manufacture, sell and install the Crompton lamp. His concern was one of the first electrical engineering firms in the world.
Incandescent light bulbs
Various forms of incandescent light bulbs had numerous inventors; however, the most successful early bulbs were those that used a carbon filament sealed in a high vacuum. These were invented by Joseph Swan in 1878 in Britain and by Thomas Edison in 1879 in the US. Edison’s lamp was more successful than Swan’s because Edison used a thinner filament, giving it higher resistance and thus conducting much less current. Edison began commercial production of carbon filament bulbs in 1880. Swan's light began commercial production in 1881.
Swan's house, in Low Fell, Gateshead, was the world's first to have working light bulbs installed. The Lit & Phil Library in Newcastle, was the first public room lit by electric light, and the Savoy Theatre was the first public building in the world lit entirely by electricity.
Central power stations and isolated systems
The first central station providing public power is believed to be one at Godalming, Surrey, UK, in autumn 1881. The system was proposed after the town failed to reach an agreement on the rate charged by the gas company, so the town council decided to use electricity. The system lit up arc lamps on the main streets and incandescent lamps on a few side streets with hydroelectric power. By 1882 between 8 and 10 households were connected, with a total of 57 lights. The system was not a commercial success, and the town reverted to gas.
The first large scale central distribution supply plant was opened at Holborn Viaduct in London in 1882. Equipped with 1000 incandescent lightbulbs that replaced the older gas lighting, the station lit up Holborn Circus including the offices of the General Post Office and the famous City Temple church. The supply was a direct current at 110 V; due to power loss in the copper wires, this amounted to 100 V for the customer.
Within weeks, a parliamentary committee recommended passage of the landmark 1882 Electric Lighting Act, which allowed the licensing of persons, companies or local authorities to supply electricity for any public or private purposes.
The first large scale central power station in America was Edison's Pearl Street Station in New York, which began operating in September 1882. The station had six 200 horsepower Edison dynamos, each powered by a separate steam engine. It was located in a business and commercial district and supplied 110 volt direct current to 85 customers with 400 lamps. By 1884 Pearl Street was supplying 508 customers with 10,164 lamps.
By the mid-1880s, other electric companies were establishing central power stations and distributing electricity, including Crompton & Co. and the Swan Electric Light Company in the UK, Thomson-Houston Electric Company and Westinghouse in the US and Siemens in Germany. By 1890 there were 1000 central stations in operation. The 1902 census listed 3,620 central stations. By 1925 half of power was provided by central stations.
Load factor and isolated systems
One of the biggest problems facing the early power companies was the hourly variable demand. When lighting was practically the only use of electricity, demand was low during the day and peaked in the early morning hours before the workday and in the evening. As a consequence, most early electric companies did not provide daytime service; two-thirds provided no daytime service in 1897.
The ratio of the average load to the peak load of a central station is called the load factor. For electric companies to increase profitability and lower rates, it was necessary to increase the load factor. The way this was eventually accomplished was through motor load. Motors are used more during daytime and many run continuously. Electric street railways were ideal for load balancing. Many electric railways generated their own power and also sold power and operated distribution systems.
The load factor rose around the turn of the 20th century; at Pearl Street it increased from 19.3% in 1884 to 29.4% in 1908. By 1929, load factors around the world were greater than 50%, mainly due to motor load.
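Load factor is simple arithmetic: average load divided by peak load over the same period. A minimal Python sketch, using hypothetical hourly demand figures chosen only for illustration:

```python
# Load factor = average load / peak load over the same period.
# The hourly demand figures (kW) below are hypothetical, for illustration only.
hourly_load_kw = [120, 110, 100, 100, 105, 140, 300, 520,
                  480, 350, 300, 280, 290, 300, 310, 330,
                  420, 560, 600, 580, 500, 380, 240, 150]

average_load = sum(hourly_load_kw) / len(hourly_load_kw)
peak_load = max(hourly_load_kw)
load_factor = average_load / peak_load

print(f"average load: {average_load:.0f} kW")
print(f"peak load:    {peak_load} kW")
print(f"load factor:  {load_factor:.1%}")
```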
Before widespread power distribution from central stations, many factories, large hotels, apartment and office buildings had their own power generation. Often this was economically attractive because the exhaust steam could be used for building and industrial process heat, which today is known as cogeneration or combined heat and power (CHP). Most self-generated power became uneconomical as power prices fell. As late as the early 20th century, isolated power systems greatly outnumbered central stations. Cogeneration is still commonly practiced in many industries that use large amounts of both steam and power, such as pulp and paper, chemicals and refining. The continued use of private electric generators is called microgeneration.
Direct current electric motors
The first commutator DC electric motor capable of turning machinery was invented by the British scientist William Sturgeon in 1832. The crucial advance that this represented over the motor demonstrated by Michael Faraday was the incorporation of a commutator. This allowed Sturgeon's motor to be the first capable of providing continuous rotary motion.
Frank J. Sprague improved on the DC motor in 1884 by solving the problem of maintaining a constant speed with varying load and reducing sparking from the brushes. Sprague sold his motor through Edison Co. It is easy to vary speed with DC motors, which made them suited for a number of applications such as electric street railways, machine tools and certain other industrial applications where speed control was desirable.
Manufacturing transitioned from line shafts and belt drives powered by steam engines and water power to direct drive by electric motors.
Alternating current
Although the first power stations supplied direct current, the distribution of alternating current soon became the most favored option. The main advantages of AC were that it could be transformed to high voltage to reduce transmission losses and that AC motors could easily run at constant speeds.
Alternating current technology was rooted in Faraday's 1831–32 discovery that a changing magnetic field can induce an electric current in a circuit.
The first person to conceive of a rotating magnetic field was Walter Baily, who gave a workable demonstration of his battery-operated polyphase motor, aided by a commutator, on June 28, 1879, to the Physical Society of London. In 1880, the French electrical engineer Marcel Deprez published a paper describing an apparatus nearly identical to Baily's, which identified the rotating magnetic field principle and that of a two-phase AC system of currents to produce it. In 1886, English engineer Elihu Thomson built an AC motor by expanding upon the induction-repulsion principle and his wattmeter.
It was in the 1880s that the technology was commercially developed for large scale electricity generation and transmission. In 1882 the British inventor and electrical engineer Sebastian de Ferranti, working for the company Siemens, collaborated with the distinguished physicist Lord Kelvin to pioneer AC power technology, including an early transformer.
A power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881, and attracted the interest of Westinghouse. They also exhibited the invention in Turin in 1884, where it was adopted for an electric lighting system. Many of their designs were adapted to the particular laws governing electrical distribution in the UK.
Sebastian Ziani de Ferranti went into this business in 1882 when he set up a shop in London designing various electrical devices. Ferranti believed in the success of alternating current power distribution early on, and was one of the few experts in this system in the UK. With the help of Lord Kelvin, Ferranti pioneered the first AC power generator and transformer in 1882. John Hopkinson, a British physicist, invented the three-wire (three-phase) system for the distribution of electrical power, for which he was granted a patent in 1882.
The Italian inventor Galileo Ferraris invented a polyphase AC induction motor in 1885. The idea was that two out-of-phase, but synchronized, currents might be used to produce two magnetic fields that could be combined to produce a rotating field without any need for switching or for moving parts. Other inventors were the American engineers Charles S. Bradley and Nikola Tesla, and the German technician Friedrich August Haselwander. They were able to overcome the problem of starting up the AC motor by using a rotating magnetic field produced by a poly-phase current. Mikhail Dolivo-Dobrovolsky introduced the first three-phase induction motor in 1890, a much more capable design that became the prototype used in Europe and the U.S. By 1895 GE and Westinghouse both had AC motors on the market. With single phase current either a capacitor or coil (creating inductance) can be used on part of the circuit inside the motor to create a rotating magnetic field. Multi-speed AC motors that have separately wired poles have long been available, the most common being two speed. Speed of these motors is changed by switching sets of poles on or off, which was done with a special motor starter for larger motors, or a simple multiple speed switch for fractional horsepower motors.
AC power stations
The first AC power station was built by the English electrical engineer Sebastian de Ferranti. In 1887 the London Electric Supply Corporation hired Ferranti for the design of their power station at Deptford. He designed the building, the generating plant and the distribution system. It was built at the Stowage, a site to the west of the mouth of Deptford Creek once used by the East India Company. Built on an unprecedented scale and pioneering the use of high voltage (10,000 V) AC current, it generated 800 kilowatts and supplied central London. On its completion in 1891 it was the first truly modern power station, supplying high-voltage AC power that was then "stepped down" with transformers for consumer use on each street. This basic system remains in use today around the world.
In the U.S., George Westinghouse, who had become interested in the power transformer developed by Gaulard and Gibbs, began to develop his AC lighting system, using a transmission system that stepped the voltage up 20:1 for transmission and stepped it back down for use. In 1890 Westinghouse and Stanley built a system to transmit power several miles to a mine in Colorado. A decision was taken to use AC for power transmission from the Niagara Power Project to Buffalo, New York. Proposals submitted by vendors in 1890 included DC and compressed air systems. A combination DC and compressed air system remained under consideration until late in the schedule. Despite the protestations of the Niagara commissioner William Thomson (Lord Kelvin), the decision was taken to build an AC system, which had been proposed by both Westinghouse and General Electric. In October 1893 Westinghouse was awarded the contract to provide the first three 5,000 hp, 250 rpm, 25 Hz, two-phase generators. The hydro power plant went online in 1895, and it was the largest to that date.
By the 1890s, single and poly-phase AC was undergoing rapid introduction. In the U.S. by 1902, 61% of generating capacity was AC, increasing to 95% in 1917. Despite the superiority of alternating current for most applications, a few existing DC systems continued to operate for several decades after AC became the standard for new systems.
Steam turbines
The efficiency of steam prime movers in converting the heat energy of fuel into mechanical work was a critical factor in the economic operation of steam central generating stations. Early projects used reciprocating steam engines, operating at relatively low speeds. The introduction of the steam turbine fundamentally changed the economics of central station operations. Steam turbines could be made in larger ratings than reciprocating engines, and generally had higher efficiency. The speed of steam turbines did not fluctuate cyclically during each revolution. This made parallel operation of AC generators feasible, and improved the stability of rotary converters for production of direct current for traction and industrial uses. Steam turbines ran at higher speed than reciprocating engines, not being limited by the allowable speed of a piston in a cylinder. This made them more compatible with AC generators with only two or four poles; no gearbox or belted speed increaser was needed between the engine and the generator. It was costly and ultimately impossible to provide a belt-drive between a low-speed engine and a high-speed generator in the very large ratings required for central station service.
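The tie between pole count and shaft speed follows from the synchronous-speed relation, rpm = 120 × frequency / poles. A short Python sketch (the frequency list is illustrative):

```python
# Synchronous speed of an AC generator: rpm = 120 * frequency / poles.
# Two- and four-pole machines must spin fast, which suited direct coupling
# to steam turbines rather than to slow reciprocating engines.
def synchronous_rpm(frequency_hz: float, poles: int) -> float:
    return 120.0 * frequency_hz / poles

for f_hz in (25, 50, 60):        # 25 Hz was common early; 50/60 Hz today
    for poles in (2, 4):
        print(f"{f_hz} Hz, {poles} poles: {synchronous_rpm(f_hz, poles):,.0f} rpm")
```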
The modern steam turbine was invented in 1884 by British engineer Sir Charles Parsons, whose first model was connected to a dynamo that generated 7.5 kW (10 hp) of electricity. The invention of Parsons's steam turbine made cheap and plentiful electricity possible. Parsons turbines were widely introduced in English central stations by 1894; the first electric supply company in the world to generate electricity using turbo generators was Parsons's own electricity supply company Newcastle and District Electric Lighting Company, set up in 1894. Within Parsons's lifetime, the generating capacity of a unit was scaled up by about 10,000 times.
The first U.S. turbines were two De Laval units at Edison Co. in New York in 1895. The first U.S. Parsons turbine was at the Westinghouse Air Brake Co. near Pittsburgh.
Steam turbines also had capital cost and operating advantages over reciprocating engines. The condensate from steam engines was contaminated with oil and could not be reused, while condensate from a turbine is clean and typically reused. Steam turbines were a fraction of the size and weight of comparably rated reciprocating steam engines. Steam turbines can operate for years with almost no wear. Reciprocating steam engines required high maintenance. Steam turbines can be manufactured with capacities far larger than any steam engines ever made, giving important economies of scale.
Steam turbines could be built to operate on higher pressure and temperature steam. A fundamental principle of thermodynamics is that the higher the temperature of the steam entering an engine, the higher the efficiency. The introduction of steam turbines motivated a series of improvements in temperatures and pressures. The resulting increased conversion efficiency lowered electricity prices.
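The thermodynamic principle invoked here is the Carnot limit on heat engines, η = 1 − T_cold/T_hot with absolute temperatures. A hedged Python illustration (the steam temperatures below are made-up values, not historical plant data):

```python
# Carnot limit: eta = 1 - T_cold / T_hot (absolute temperatures, kelvin).
# The steam temperatures below are illustrative, not historical plant data.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    return 1.0 - t_cold_k / t_hot_k

T_COLD = 300.0                       # K, roughly ambient condenser temperature
for t_hot in (450.0, 600.0, 800.0):  # K, progressively hotter steam
    eta = carnot_efficiency(t_hot, T_COLD)
    print(f"T_hot = {t_hot:.0f} K -> ideal efficiency {eta:.1%}")
```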
The power density of boilers was increased by using forced combustion air and by using compressed air to feed pulverized coal. Also, coal handling was mechanized and automated.
Electrical grid
With the realization of long distance power transmission it was possible to interconnect different central stations to balance loads and improve load factors. Interconnection became increasingly desirable as electrification grew rapidly in the early years of the 20th century.
Charles Merz, of the Merz & McLellan consulting partnership, built the Neptune Bank Power Station near Newcastle upon Tyne in 1901, and by 1912 it had developed into the largest integrated power system in Europe. In 1905 he tried to influence Parliament to unify the variety of voltages and frequencies in the country's electricity supply industry, but it was not until World War I that Parliament began to take this idea seriously, appointing him head of a Parliamentary Committee to address the problem. In 1916 Merz pointed out that the UK could use its small size to its advantage, by creating a dense distribution grid to feed its industries efficiently. His findings led to the Williamson Report of 1918, which in turn created the Electricity Supply Bill of 1919. The bill was the first step towards an integrated electricity system in the UK.
The more significant Electricity (Supply) Act of 1926 led to the setting up of the National Grid. The Central Electricity Board standardised the nation's electricity supply and established the first synchronised AC grid, running at 132 kilovolts and 50 Hertz. This started operating as a national system, the National Grid, in 1938.
In the United States, consolidating supply became a national objective after the power crisis during the summer of 1918, in the midst of World War I. In 1934 the Public Utility Holding Company Act recognized electric utilities as public goods of importance, along with gas, water, and telephone companies, and thereby subjected them to defined restrictions and regulatory oversight of their operations.
Household electrification
The electrification of households in Europe and North America began in the early 20th century in major cities and in areas served by electric railways and increased rapidly until about 1930 when 70% of households were electrified in the U.S.
Rural areas were electrified first in Europe; in the U.S., the Rural Electrification Administration, established in 1935, brought electrification to underserved rural areas.
In the Soviet Union, as in the United States, rural electrification progressed more slowly than in urban areas. It wasn't until the Brezhnev era that electrification became widespread in rural regions, with the Soviet rural electrification drive largely completed by the early 1970s.
In China, the turmoil of the Warlord Era, the Civil War and the Japanese invasion in the early 20th century delayed electrification for decades. It was only after the establishment of the People's Republic of China in 1949 that the country was positioned to pursue widespread electrification. During the Mao years, while electricity became commonplace in cities, rural areas were largely neglected. At the time of Mao's death in 1976, 25% of Chinese households still lacked access to electricity.
Deng Xiaoping, who became China's paramount leader in 1978, initiated a rural electrification drive as part of a broader modernization effort. By the late 1990s, electricity had become ubiquitous in regional areas. The very last remote villages in China were connected to the grid in 2015.
Historical cost of electricity
Central station electric power generating provided power more efficiently and at lower cost than small generators. The capital and operating cost per unit of power were also cheaper with central stations. The cost of electricity fell dramatically in the first decades of the twentieth century due to the introduction of steam turbines and the improved load factor after the introduction of AC motors. As electricity prices fell, usage increased dramatically and central stations were scaled up to enormous sizes, creating significant economies of scale. For the historical cost see Ayres-Warr (2002) Fig. 7.
| Technology | Energy: General | null |
551824 | https://en.wikipedia.org/wiki/Courier | Courier | A courier is a person or organization that delivers a message, package or letter from one place or person to another place or person. Typically, a courier provides their courier service on a commercial contract basis; however, some couriers are government or state agency employees (for example: a diplomatic courier).
Duties and functions
Couriers are distinguished from ordinary mail services by features such as speed, security, tracking, signature, specialization and individualization of express services, and swift delivery times, which are optional for most everyday mail services. As a premium service, couriers are usually more expensive than standard mail services, and their use is normally limited to packages where one or more of these features are considered important enough to warrant the cost.
Courier services operate on all scales, from within specific towns or cities, to regional, national and global services. Large courier companies include DHL, Pathao, DTDC, FedEx, EMS International, TNT, UPS, India Post, J&T Express and Aramex. These offer services worldwide, typically via a hub and spoke model.
Courier services utilizing courier software provide electronic proof of delivery and electronic tracking details.
Before the industrial era
In ancient history, messages were hand-delivered using a variety of methods, including runners, homing pigeons and riders on horseback. Before the introduction of mechanized courier services, foot messengers physically ran miles to their destinations. Xenophon attributed the first use of couriers to the Persian prince Cyrus the Younger.
Famously, the Ancient Greek courier Pheidippides is said to have run 26 miles from Marathon to Athens to bring the news of the Greek victory over the Persians in 490 BCE. The long-distance race known as a marathon is named for this run.
Hezekiah
Couriers served Judah's king Hezekiah (late 8th to early 7th century BCE): several couriers brought letters throughout the land of Judah and Israel (cf. 2 Chron 30 ESV).
Anabasii
Starting at the time of Augustus, the ancient Greeks and Romans made use of a class of horse- and chariot-mounted couriers called anabasii to quickly bring messages and commands over long distances. The word anabasii comes from the Greek ἀνάβασις (anábasis, "ascent, mounting"). They were contemporary with the Greek hemeredromi, who carried their messages by foot.
In Roman Britain, Rufinus made use of anabasii, as documented in Saint Jerome's memoirs (adv. Ruffinum, l. 3. c. 1.): "Idcircone Cereales et Anabasii tui per diversas provincias cucurrerunt, ut laudes meas legerent?" ("Is it on that account that your Cereales and Anabasii circulated through many provinces, so that they might read my praises?")
Middle Ages
In the Middle Ages, royal courts maintained their own messengers who were paid little more than common labourers.
Types
In cities, there are often bicycle couriers or motorcycle couriers, but consignments requiring delivery over greater distances may travel by truck, railroad or aircraft.
Many companies which operate under a just-in-time or "JIT" inventory method often use on-board couriers (OBCs). On-board couriers are individuals who can travel at a moment's notice anywhere in the world, usually via commercial airlines. While this type of service is the second costliest—general aviation charters are far more expensive—companies analyze the cost of service to engage an on-board courier versus the "cost" the company will realize should the product not arrive by a specified time (an assembly line stopping, untimely court filing, lost sales from product or components missing a delivery deadline, loss of life from a delayed organ transplant).
By country
Australia
The courier business in Australia is a very competitive industry and is mainly concentrated in the high-population areas in and around the capital cities. With such a vast land mass to cover, courier companies tend to transport either by air or along the main transport routes and national highways. The only large company that provides a country-wide service is Australia Post. Although government-owned, Australia Post operates quite differently from a government department: it is an enterprise focused on service delivery in a competitive market. It operates in a fully competitive market against other delivery services such as Fastway, UPS, and Transdirect.
China
International courier services in China include TNT, EMS International, DHL, FedEx and UPS. These companies provide nominal worldwide service for both inbound and outbound shipments, connecting China to countries such as the US, Australia, the United Kingdom, and New Zealand. Of the international courier services, the Dutch company TNT is considered to have the most capable local fluency and efficacy for third- and fourth-tier cities. EMS International is a unit of China Post, and as such is not available for shipments originating outside China.
Domestic courier services include SF Express, STO Express (申通), ZTO Express (中通), YTO Express (圆通), E-EMS (E邮宝), Cainiao Express (菜鸟) and many other operators, some of very small scale. E-EMS is the special product of a co-operative arrangement between China Post and Alipay, the online payment unit of Alibaba Group. It is only available for the delivery of online purchases made using Alipay.
Within the Municipality of Beijing, TongCheng KuaiDi (同城快递), also a unit of China Post, provides intra-city service using cargo bicycles.
India
International courier services in India include DHL, FedEx, Blue Dart Express, Spicexpress and Logistics Pvt Ltd, Ekart, DTDC, VRL Courier Services, Delhivery, TNT, Amazon.com, OCS and Gati Ltd. Apart from these, several local couriers also operate across India. Almost all of these couriers can be tracked online. India Post, an undertaking of the Indian government, is the largest courier service, with around 155 thousand branches (139 thousand (90%) in rural areas and 16 thousand (10%) in urban areas). All couriers use the PIN code (postal index number) introduced by India Post to locate the delivery address. Additionally, the contact numbers of the recipient and sender are voluntarily added on the courier for ease of locating the address.
Bangladesh
The history of courier services in Bangladesh dates back to the late 1970s when private companies started offering delivery and parcel services. These companies played a crucial role in facilitating the movement of documents and goods within the country. Over the years, the courier industry in Bangladesh has grown significantly, adapting to changes in technology and expanding its services to include international shipments. Today, various local and international courier companies operate in Bangladesh, contributing to the country's logistics and trade networks.
Couriers that operate across Bangladesh include Sundarban Courier Service (40% market share), SA Paribahan, Pathao Courier, e-dak Courier, RedX, Sheba Delivery, Janani Express Parcel Service, Delivery Tiger, eCourier, Karatoa Courier Service and Sonar Courier. Almost all of these couriers can be tracked online. International courier services operating in Bangladesh include DHL, FedEx, United Express, Royale International Bangladesh, DSL Worldwide Courier Service, Aramex, Pos Laju, J&T Express, and Amazon.com.
Malaysia
International courier services in Malaysia include DHL, FedEx, Pgeon, Skynet Express, ABX Express, GDex, Pos Laju, J&T Express, and Amazon.com. Apart from these, several local couriers also operate across Malaysia. Almost all of these couriers can be tracked online.
Ireland
The main courier services available in Ireland as alternatives to the national An Post system are Parcel Direct Ireland, DHL, UPS, TNT, DPD and FedEx.
Singapore
There are several international courier companies in Singapore including TNT, DHL and FedEx. Despite being a small country, the demand for courier services is high. Many local courier companies have sprung up to meet this demand. Most courier companies in Singapore focus on local deliveries instead of international freight.
United Kingdom
The genesis of the UK same-day courier market stems from the London taxi companies, but it soon expanded into dedicated motorcycle despatch riders, with the taxi companies setting up separate arms to cover the courier work. During the late 1970s, small provincial and regional companies were popping up throughout the country. Today, there are many large companies offering next-day courier services.
There are many 'specialist' couriers usually for the transportation of items such as freight/pallets, sensitive documents and liquids.
The 'Man & Van'/freelance courier business model is highly popular in the United Kingdom, with thousands of independent couriers and localised companies offering next-day and same-day services. This model is likely so popular because of the low business requirements (a vehicle) and the lucrative number of items sent within the UK every day. From 1988 to 2016, UK couriers were considered universally self-employed, though the number of salaried couriers employed by firms has grown substantially since then. However, since the dawn of the electronic age, the way in which businesses use couriers has changed dramatically. Prior to email and the ability to create PDFs, documents represented a significant proportion of the business. However, over the past five years, documentation revenues have decreased by 50 percent. Customers are also demanding more from their courier partners. Therefore, more organisations prefer to use the services of larger organisations able to provide more flexibility and higher levels of service, which has led to another tier of courier company: regional couriers. This is usually a local company which has expanded to more than one office to cover an area.
Some UK couriers offer next-day services to other European countries. FedEx offers next-day air delivery to many EU countries. Cheaper 'by-road' options are also available, varying from two days' delivery time (such as to France) to up to a week (former USSR countries).
Large couriers often require an account to be held (and this can include daily scheduled collections). Senders are therefore primarily in the commercial/industrial sector (and not the general public); some couriers such as DHL do however allow public sending (at higher cost than regular senders).
In recent years, the increased popularity of Black Friday in the UK has placed some firms under operational stress.
The process of booking a courier has changed; it is no longer a lengthy task of making numerous calls to different courier companies to request a quote. Booking a courier is predominantly carried out online. The courier industry has been quick to adapt to an ever-changing digital landscape, meeting the needs of mobile and desktop consumers as well as e-commerce and online retailers, offering end users access to instant online payments, parcel tracking, delivery notifications, and the convenience of door to door collection and delivery to almost any destination in the world.
United States
The courier industry has long held an important place in United States commerce and has been involved in pivotal moments in the nation's history such as westward migration and the gold rush. Wells Fargo was founded in 1852 and rapidly became the preeminent package delivery company. The company specialised in shipping gold, packages and newspapers throughout the West, making a Wells Fargo office in every camp and settlement a necessity for commerce and connections to home. Shortly afterward, the Pony Express was established to move packages more quickly than the traditional stagecoach. It illustrated the demand for timely deliveries across the nation, a concept that continued to evolve with the railroads, automobiles and interstate highways and which has emerged into today's courier industry.
The courier industry in the United States is a $59 billion industry, with 90% of the business shared by DHL, FedEx, UPS and USA Couriers. Regional and local courier and delivery services, by contrast, are highly diversified and tend to be smaller operations; the top 50 firms account for just a third of the sector's revenues. USPS mail and packages are delivered by the government, and USPS is the only carrier that can legally deliver to mailboxes.
In a 2019 quarterly earnings call, the CEO of FedEx named Amazon as a direct competitor, cementing the e-commerce company's growth into the field of logistics.
In fiction
| Technology | Media and communication: Basics | null |
551920 | https://en.wikipedia.org/wiki/Spider%20monkey | Spider monkey | Spider monkeys are New World monkeys belonging to the genus Ateles, part of the subfamily Atelinae, family Atelidae. Like other atelines, they are found in tropical forests of Central and South America, from southern Mexico to Brazil. The genus consists of seven species, all of which are under threat; the brown spider monkey is critically endangered. They are also notable for their ability to be easily bred in captivity.
Disproportionately long limbs and long prehensile tails make them one of the largest New World monkeys and give rise to their common name. Spider monkeys live in the upper layers of the rainforest and forage in the high canopy. They primarily eat fruits, but will also occasionally consume leaves, flowers, and insects. Due to their large size, spider monkeys require large tracts of moist evergreen forests, and prefer undisturbed primary rainforest. They are social animals and live in bands of up to 35 individuals, but will split up to forage during the day.
Recent meta-analyses on primate cognition studies indicated spider monkeys are the most intelligent New World monkeys. They can produce a wide range of sounds and will "bark" when threatened; other vocalisations include a whinny similar to a horse and prolonged screams.
They are an important food source due to their large size, so are widely hunted by local human populations; they are also threatened by habitat destruction due to logging and land clearing. Spider monkeys are susceptible to malaria and are used in laboratory studies of the disease. The population trend for spider monkeys is decreasing; the IUCN Red List lists one species as vulnerable, five species as endangered and one species as critically endangered.
Evolutionary history
Theories abound about the evolution of the atelines; one theory is they are most closely related to the woolly spider monkeys (Brachyteles), and most likely split from woolly monkeys (Lagothrix) in the South American lowland forest, to evolve their unique locomotory system. This theory is not supported by fossil evidence. Other theories include Brachyteles, Lagothrix and Ateles in an unresolved trichotomy, and two clades, one composed of Ateles and Lagothrix and the other of Alouatta and Brachyteles. More recent molecular evidence suggests the Atelinae split in the middle to late Miocene (13 Ma), separating spider monkeys from the woolly spider monkeys and the woolly monkeys.
Taxonomic classification
The genus name Ateles derives from the Ancient Greek word ἀτελής (atelḗs), meaning "incomplete, imperfect", in reference to the reduced or non-existent thumbs of spider monkeys.
The genus contains seven species, and seven subspecies.
Family Atelidae
Subfamily Alouattinae: howler monkeys
Subfamily Atelinae
Genus Ateles: spider monkeys
Red-faced spider monkey, Ateles paniscus
White-fronted spider monkey, Ateles belzebuth
Peruvian spider monkey, Ateles chamek
Brown spider monkey, Ateles hybridus
White-cheeked spider monkey, Ateles marginatus
Black-headed spider monkey, Ateles fusciceps
Brown-headed spider monkey, Ateles fusciceps fusciceps
Colombian spider monkey, Ateles fusciceps rufiventris
Geoffroy's spider monkey, Ateles geoffroyi
Hooded spider monkey, Ateles geoffroyi grisescens
Yucatan spider monkey, Ateles geoffroyi yucatanensis
Mexican spider monkey, Ateles geoffroyi vellerosus
Nicaraguan spider monkey, Ateles geoffroyi geoffroyi
Ornate spider monkey, Ateles geoffroyi ornatus
Genus Brachyteles: muriquis (woolly spider monkeys)
Genus Lagothrix: woolly monkeys
Anatomy and physiology
Spider monkeys are among the largest New World monkeys; the black-headed spider monkey, the largest of the genus, has males that are heavier on average than females. Disproportionately long, spindly limbs inspired the spider monkey's common name. Their deftly prehensile tails have very flexible, hairless tips and skin grooves similar to fingerprints. This adaptation to their strictly arboreal lifestyle serves as a fifth hand. When the monkey walks, its arms practically drag on the ground. Unlike many monkeys, they do not use their arms for balance when walking, instead relying on their tails. The hands are long, narrow, and hook-like, with reduced or nonexistent thumbs. The fingers are elongated and recurved.
Their hair is coarse, ranging in color from ruddy gold to brown and black, or white in rare specimens. The hands and feet are usually black. Heads are small, with hairless faces. The nostrils are very far apart, a distinguishing feature of spider monkeys.
Spider monkeys are highly agile, and they are said to be second only to the gibbons in this respect. They have been seen in the wild jumping from tree to tree.
Female spider monkeys have an especially developed clitoris; it may be referred to as a pseudo-penis because it has an interior passage, or urethra, that makes it almost identical to the penis, and it retains and distributes urine droplets as the female moves around. This urine is emptied at the base of the clitoris and collects in skin folds on either side of a groove on the perineum. Researchers and observers of spider monkeys in South America look for a scrotum to determine an animal's sex, because female spider monkeys have pendulous and erectile clitorises long enough to be mistaken for a penis; researchers may also determine the sex by identifying scent-marking glands that may be present on the clitoris.
Behavior
Spider monkeys form loose groups, typically with 15 to 25 individuals, but sometimes up to 30 or 40. During the day, groups break up into subgroups. The size of subgroups and the degree to which they avoid each other during the day depend on food competition and the risk of predation. The average subgroup size is between 2 and 8, but can sometimes be up to 17 animals. Unusually among primates, females rather than males disperse at puberty to join new groups. Males tend to stick together for their whole lives. Hence, males in a group are more likely to be related and have closer bonds than females. Males also cement bonds through "grappling": prolonged hugging, face greeting, tail intertwining, and genital manipulation. However, the strongest social bonds are between females and their young offspring.
Spider monkeys communicate their intentions and observations using postures and stances, such as postures of sexual receptivity and of attack. When a spider monkey sees a human approaching, it barks loudly, similar to a dog. When a monkey is approached, it climbs to the end of the branch it is on and shakes it vigorously to scare away the possible threat. It shakes the branches with its feet, hands, or a combination while hanging from its tail. It may also scratch its limbs or body with various parts of its hands and feet. Seated monkeys may sway and make noise. Males and occasionally adult females growl menacingly at the approach of a human. If the pursuer continues to advance, the monkeys may break off live or dead tree limbs and drop them towards the intruder. The monkeys also defecate and urinate toward the intruder.
Spider monkeys are diurnal and spend the night sleeping in carefully selected trees. Groups are thought to be directed by a lead female, which is responsible for planning an efficient feeding route each day. Grooming is not as important to social interaction as in many other primates, owing perhaps to a lack of thumbs.
Spider monkeys have been observed avoiding the upper canopy of the trees for locomotion. One researcher speculated this was because the thin branches at the tops of trees do not support the monkeys as well.
The spider monkey brain is twice the size of the brain of a howler monkey of equivalent body size; this is thought to be a result of the spider monkeys' complex social system and their frugivorous diet, which consists primarily of ripe fruit from a wide variety (over 150 species) of plants. This requires the monkeys to remember when and where fruit can be found. Their slow development may also play a role: the monkeys may live from 20 to 27 years or more, and females give birth once every 17 to 45 months. Gummy, presumed to have been the oldest spider monkey in captivity, was thought to have been born wild in 1962; he resided at the Fort Rickey Children's Discovery Zoo in Rome, New York, and died at the age of 61, after living about twice as long as the average spider monkey.
Diet
Spider monkeys eat fleshy fruits 71 to 83 percent of the time. They can live for long periods on only one or two kinds of fruits and nuts. They eat the fruits of many big forest trees, and because they swallow fruits whole, the seeds are eventually excreted, with the feces acting as fertilizer. Studies show the diet of spider monkeys changes their reproductive, social, and physical behavioral patterns. Most feeding happens from dawn to 10 am. Afterward, the adults rest while the young play. Through the rest of the day, they may feed infrequently until around 10 pm. If food is scarce, they may eat insects, leaves, bird eggs, bark and honey.
Spider monkeys have a unique way of getting food: a lead female is generally responsible for finding food sources. If she cannot find enough food for the group, it splits into smaller groups that forage separately. The traveling groups have four to nine animals. Each group is closely associated with its territory. If the group is big, it spreads out.
Reproduction
The female chooses a male from her group for mating. Both males and females use "anogenital sniffing" to check their mates for readiness for copulation. The gestation period ranges from 226 to 232 days. Each female bears only one offspring on average, every three to four years.
Until six to ten months of age, infants rely completely on their mothers. Males are not involved in raising the offspring.
A mother carries her infant around her belly for the first month after birth. After this, she carries it on her lower back. The infant wraps its tail around its mother's and tightly grabs her midsection. Mothers are very protective of their young and are generally attentive. They have been seen grabbing their young and putting them on their backs for protection and to help them navigate from tree to tree. They help the more independent young to cross by pulling branches closer together. Mothers also groom their young.
Male spider monkeys are one of the few primates that do not have a penis bone (baculum).
Cultural depictions
Spider monkeys are found in many aspects of Mesoamerican cultures. In the Aztec 260-day calendar, Spider Monkey (Nahua Ozomatli) serves as the name for the 11th day. In the corresponding Maya calendar, Howler Monkey (Batz) is substituted for Spider Monkey. In present-day Maya religious feasts, spider monkey impersonators serve as a kind of demonic clown. In Classical Maya art, they are ubiquitous, often shown carrying cacao pods.
Captain Simian & the Space Monkeys features a spider monkey named Spydor who is the smallest of the crew.
| Biology and health sciences | Primates | null |
552234 | https://en.wikipedia.org/wiki/Fluctuation%E2%80%93dissipation%20theorem | Fluctuation–dissipation theorem | The fluctuation–dissipation theorem (FDT) or fluctuation–dissipation relation (FDR) is a powerful tool in statistical physics for predicting the behavior of systems that obey detailed balance. Given that a system obeys detailed balance, the theorem is a proof that thermodynamic fluctuations in a physical variable predict the response quantified by the admittance or impedance (in their general sense, not only in electromagnetic terms) of the same physical variable (like voltage, temperature difference, etc.), and vice versa. The fluctuation–dissipation theorem applies both to classical and quantum mechanical systems.
The fluctuation–dissipation theorem was proven by Herbert Callen and Theodore Welton in 1951
and expanded by Ryogo Kubo. There are antecedents to the general theorem, including Einstein's explanation of Brownian motion
during his annus mirabilis and Harry Nyquist's explanation in 1928 of Johnson noise in electrical resistors.
Qualitative overview and examples
The fluctuation–dissipation theorem says that when there is a process that dissipates energy, turning it into heat (e.g., friction), there is a reverse process related to thermal fluctuations. This is best understood by considering some examples:
Drag and Brownian motion
If an object is moving through a fluid, it experiences drag (air resistance or fluid resistance). Drag dissipates kinetic energy, turning it into heat. The corresponding fluctuation is Brownian motion. An object in a fluid does not sit still, but rather moves around with a small and rapidly-changing velocity, as molecules in the fluid bump into it. Brownian motion converts heat energy into kinetic energy—the reverse of drag.
Resistance and Johnson noise
If electric current is running through a wire loop with a resistor in it, the current will rapidly go to zero because of the resistance. Resistance dissipates electrical energy, turning it into heat (Joule heating). The corresponding fluctuation is Johnson noise. A wire loop with a resistor in it does not actually have zero current, it has a small and rapidly-fluctuating current caused by the thermal fluctuations of the electrons and atoms in the resistor. Johnson noise converts heat energy into electrical energy—the reverse of resistance.
Light absorption and thermal radiation
When light impinges on an object, some fraction of the light is absorbed, making the object hotter. In this way, light absorption turns light energy into heat. The corresponding fluctuation is thermal radiation (e.g., the glow of a "red hot" object). Thermal radiation turns heat energy into light energy—the reverse of light absorption. Indeed, Kirchhoff's law of thermal radiation confirms that the more effectively an object absorbs light, the more thermal radiation it emits.
Examples in detail
The fluctuation–dissipation theorem is a general result of statistical thermodynamics that quantifies the relation between the fluctuations in a system that obeys detailed balance and the response of the system to applied perturbations.
Brownian motion
For example, Albert Einstein noted in his 1905 paper on Brownian motion that the same random forces that cause the erratic motion of a particle in Brownian motion would also cause drag if the particle were pulled through the fluid. In other words, the fluctuation of the particle at rest has the same origin as the dissipative frictional force one must do work against, if one tries to perturb the system in a particular direction.
From this observation Einstein was able to use statistical mechanics to derive the Einstein–Smoluchowski relation

$$D = \mu \, k_\text{B} T,$$

which connects the diffusion constant $D$ and the particle mobility $\mu$, the ratio of the particle's terminal drift velocity to an applied force. $k_\text{B}$ is the Boltzmann constant, and $T$ is the absolute temperature.
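As a rough numerical illustration of the relation, the sketch below combines it with Stokes' law for the mobility of a sphere, μ = 1/(6πηr); the particle size and fluid values are assumptions chosen only for illustration:

```python
import math

# Einstein-Smoluchowski relation: D = mu * kB * T.
# Stokes' law for a sphere, mu = 1 / (6 * pi * eta * r), is an added
# assumption here; the relation itself holds for any mobility mu.
kB = 1.380649e-23    # J/K, Boltzmann constant
T = 298.0            # K, room temperature
eta = 1.0e-3         # Pa*s, approximate viscosity of water
r = 0.5e-6           # m, radius of a 1-micron Brownian particle

mu = 1.0 / (6.0 * math.pi * eta * r)   # mobility, (m/s) per newton
D = mu * kB * T                        # diffusion constant, m^2/s
print(f"D = {D:.2e} m^2/s")            # about 4e-13 m^2/s for these values
```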
Thermal noise in a resistor
In 1928, John B. Johnson discovered and Harry Nyquist explained Johnson–Nyquist noise. With no applied current, the mean-square voltage depends on the resistance $R$, the temperature $T$, and the bandwidth $\Delta\nu$ over which the voltage is measured:

$$\langle V^2 \rangle \approx 4 R k_\text{B} T \, \Delta\nu.$$

This observation can be understood through the lens of the fluctuation-dissipation theorem. Take, for example, a simple circuit consisting of a resistor with a resistance $R$ and a capacitor with a small capacitance $C$. Kirchhoff's voltage law yields

$$V = -R\frac{dQ}{dt} + \frac{Q}{C},$$

and so the response function for this circuit is

$$\chi(\omega) \equiv \frac{Q(\omega)}{V(\omega)} = \frac{1}{\frac{1}{C} - i\omega R}.$$

In the low-frequency limit $\omega \ll (RC)^{-1}$, its imaginary part is simply

$$\operatorname{Im}\chi(\omega) \approx \omega R C^2,$$

which then can be linked to the power spectral density function of the voltage via the fluctuation-dissipation theorem:

$$S_V(\omega) \approx \frac{S_Q(\omega)}{C^2} = \frac{2 k_\text{B} T}{C^2 \omega}\,\operatorname{Im}\chi(\omega) = 2 R k_\text{B} T.$$

The Johnson–Nyquist voltage noise $\langle V^2 \rangle$ was observed within a small frequency bandwidth $\Delta\nu = \Delta\omega/2\pi$ centered around $\omega = \pm\omega_0$. Hence

$$\langle V^2 \rangle \approx S_V(\omega) \times 2\Delta\nu \approx 4 R k_\text{B} T \Delta\nu.$$
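A quick numerical sanity check of the Johnson–Nyquist formula, with illustrative component values:

```python
import math

# Johnson-Nyquist noise: <V^2> ~ 4 * kB * T * R * delta_nu,
# so V_rms = sqrt(4 kB T R delta_nu). Component values are illustrative.
kB = 1.380649e-23   # J/K
T = 300.0           # K
R = 10e3            # ohm
delta_nu = 10e3     # Hz, measurement bandwidth

v_rms = math.sqrt(4.0 * kB * T * R * delta_nu)
print(f"V_rms = {v_rms * 1e6:.2f} microvolts")   # about 1.3 uV
```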
General formulation
The fluctuation–dissipation theorem can be formulated in many ways; one particularly useful form is the following:

Let $x(t)$ be an observable of a dynamical system with Hamiltonian $H_0(x)$ subject to thermal fluctuations. The observable $x(t)$ will fluctuate around its mean value $\langle x \rangle_0$, with fluctuations characterized by a power spectrum $S_x(\omega)$. Suppose that we can switch on a time-varying, spatially constant field $f(t)$ which alters the Hamiltonian to $H(x) = H_0(x) - f(t)\,x$. The response of the observable $x(t)$ to a time-dependent field $f(t)$ is characterized to first order by the susceptibility or linear response function $\chi(t)$ of the system

$$\langle x(t) \rangle = \langle x \rangle_0 + \int_{-\infty}^{t} f(\tau)\, \chi(t - \tau)\, d\tau,$$

where the perturbation is adiabatically (very slowly) switched on at $\tau = -\infty$.

The fluctuation–dissipation theorem relates the two-sided power spectrum (i.e. both positive and negative frequencies) of $x$ to the imaginary part of the Fourier transform $\hat\chi(\omega)$ of the susceptibility $\chi(t)$:

$$S_x(\omega) = \frac{2 k_\text{B} T}{\omega}\, \operatorname{Im}\hat\chi(\omega),$$

which holds under the Fourier transform convention $f(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{i\omega t}\, dt$. The left-hand side describes fluctuations in $x$; the right-hand side is closely related to the energy dissipated by the system when pumped by an oscillatory field $f(t) = F \sin(\omega t + \phi)$. The spectrum of fluctuations reveals the linear response, because past fluctuations cause future fluctuations via a linear response upon itself.

This is the classical form of the theorem; quantum fluctuations are taken into account by replacing $2 k_\text{B} T/\omega$ with $\hbar\, \coth(\hbar\omega/2k_\text{B}T)$ (whose limit for $\hbar \to 0$ is $2 k_\text{B} T/\omega$). A proof can be found by means of the LSZ reduction, an identity from quantum field theory.
The fluctuation–dissipation theorem can be generalized in a straightforward way to the case of space-dependent fields, to the case of several variables or to a quantum-mechanics setting.
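The theorem can also be checked numerically. The sketch below, a minimal example and not part of the original presentation, simulates an overdamped Brownian particle in a harmonic well (an Ornstein–Uhlenbeck process), for which the susceptibility is known in closed form, and compares the measured power spectrum of $x$ with the right-hand side of the theorem; all parameter values are arbitrary:

```python
import numpy as np

# Numerical check of the classical FDT for an overdamped particle in a
# harmonic well (an Ornstein-Uhlenbeck process):
#   gamma * dx/dt = -k * x + xi(t),  <xi(t) xi(t')> = 2 gamma kBT delta(t-t')
# Its susceptibility is chi_hat(w) = 1 / (k - i gamma w), so the theorem
# predicts S_x(w) = (2 kBT / w) Im chi_hat(w) = 2 kBT gamma / (k^2 + gamma^2 w^2).
rng = np.random.default_rng(0)
kBT, gamma, k = 1.0, 1.0, 1.0            # arbitrary units with kB*T = 1
dt, n_steps = 1e-2, 2**18

x = np.empty(n_steps)
x[0] = 0.0
kicks = rng.normal(0.0, np.sqrt(2.0 * gamma * kBT * dt), n_steps - 1)
for i in range(n_steps - 1):             # Euler-Maruyama integration
    x[i + 1] = x[i] + (-k * x[i] * dt + kicks[i]) / gamma

omega = 2.0 * np.pi * np.fft.rfftfreq(n_steps, d=dt)
S_measured = np.abs(np.fft.rfft(x))**2 * dt / n_steps   # two-sided periodogram
S_predicted = 2.0 * kBT * gamma / (k**2 + gamma**2 * omega**2)

sel = (omega > 0.1) & (omega < 10.0)     # stay away from DC and Nyquist
ratio = np.mean(S_measured[sel] / S_predicted[sel])
print(f"mean S_measured / S_FDT over band: {ratio:.3f}  (should be close to 1)")
```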
Derivation
Classical version
We derive the fluctuation–dissipation theorem in the form given above, using the same notation.
Consider the following test case: the field f has been on for infinite time and is switched off at t = 0,

$$f(t) = f_0\, \theta(-t),$$

where $\theta(t)$ is the Heaviside function. We can express the expectation value of $x$ by the probability distribution W(x, 0) and the transition probability $P(x', t \mid x, 0)$:

$$\langle x(t) \rangle = \int dx' \int dx\; x'\, P(x', t \mid x, 0)\, W(x, 0).$$

The probability distribution function W(x, 0) is an equilibrium distribution and hence given by the Boltzmann distribution for the Hamiltonian $H(x) = H_0(x) - x f_0$:

$$W(x, 0) = \frac{e^{-\beta H(x)}}{\int dx'\, e^{-\beta H(x')}},$$

where $\beta = 1/(k_\text{B} T)$. For a weak field $\beta x f_0 \ll 1$, we can expand the right-hand side

$$W(x, 0) \approx W_0(x)\left[1 + \beta f_0\,\bigl(x - \langle x \rangle_0\bigr)\right];$$

here $W_0(x)$ is the equilibrium distribution in the absence of a field. Plugging this approximation into the formula for $\langle x(t) \rangle$ yields

$$\langle x(t) \rangle = \langle x \rangle_0 + \beta f_0\, A(t), \qquad (*)$$

where A(t) is the auto-correlation function of x in the absence of a field:

$$A(t) = \bigl\langle [x(t) - \langle x \rangle_0][x(0) - \langle x \rangle_0] \bigr\rangle_0.$$

Note that in the absence of a field the system is invariant under time-shifts. We can rewrite $\langle x(t) \rangle$ using the susceptibility $\chi(t)$ of the system and hence find, with the above equation (*),

$$\langle x(t) \rangle = \langle x \rangle_0 + f_0 \int_{t}^{\infty} \chi(\tau)\, d\tau.$$

Consequently,

$$\chi(t) = -\beta\, \frac{dA(t)}{dt}\, \theta(t). \qquad (**)$$

To make a statement about frequency dependence, it is necessary to take the Fourier transform of equation (**). By integrating by parts, it is possible to show that

$$\hat\chi(\omega) = i\omega\beta \int_0^\infty e^{i\omega t} A(t)\, dt + \beta A(0).$$

Since $A(t)$ is real and symmetric, it follows that

$$\operatorname{Im}\hat\chi(\omega) = \frac{\omega\beta}{2} \int_{-\infty}^{\infty} A(t)\, e^{i\omega t}\, dt.$$

Finally, for stationary processes, the Wiener–Khinchin theorem states that the two-sided spectral density is equal to the Fourier transform of the auto-correlation function:

$$S_x(\omega) = \hat A(\omega).$$

Therefore, it follows that

$$S_x(\omega) = \frac{2 k_\text{B} T}{\omega}\, \operatorname{Im}\hat\chi(\omega).$$
Quantum version
The fluctuation-dissipation theorem relates the correlation function of the observable of interest $\langle \hat x(t)\hat x(0) \rangle$ (a measure of fluctuation) to the imaginary part of the response function $\operatorname{Im}\chi(\omega)$ in the frequency domain (a measure of dissipation). A link between these quantities can be found through the so-called Kubo formula

$$\chi(t - t') = \frac{i}{\hbar}\,\theta(t - t')\,\bigl\langle [\hat x(t), \hat x(t')] \bigr\rangle,$$

which follows, under the assumptions of the linear response theory, from the time evolution of the ensemble average of the observable $\langle \hat x(t) \rangle$ in the presence of a perturbing source. Once Fourier transformed, the Kubo formula allows writing the imaginary part of the response function as

$$\operatorname{Im}\chi(\omega) = \frac{1}{2\hbar} \int_{-\infty}^{+\infty} \bigl[\langle \hat x(t)\hat x(0) \rangle - \langle \hat x(0)\hat x(t) \rangle\bigr]\, e^{i\omega t}\, dt.$$

In the canonical ensemble, the second term can be re-expressed as

$$\langle \hat x(0)\hat x(t) \rangle = \frac{1}{Z}\operatorname{Tr}\bigl[e^{-\beta \hat H}\hat x(0)\hat x(t)\bigr] = \frac{1}{Z}\operatorname{Tr}\bigl[\hat x(t)\, e^{-\beta \hat H}\hat x(0)\bigr] = \frac{1}{Z}\operatorname{Tr}\bigl[e^{-\beta \hat H}\, e^{\beta \hat H}\hat x(t)\, e^{-\beta \hat H}\, \hat x(0)\bigr] = \langle \hat x(t - i\hbar\beta)\,\hat x(0) \rangle,$$

where in the second equality we re-positioned $\hat x(t)$ using the cyclic property of the trace. Next, in the third equality, we inserted $e^{-\beta\hat H} e^{\beta\hat H}$ next to the trace and interpreted $e^{\beta\hat H}\hat x(t)\, e^{-\beta\hat H}$ as the operator $\hat x$ time-evolved by the imaginary time interval $-i\hbar\beta$. The imaginary time shift turns into a factor $e^{-\beta\hbar\omega}$ after Fourier transform,

$$\int_{-\infty}^{+\infty} \langle \hat x(t - i\hbar\beta)\,\hat x(0)\rangle\, e^{i\omega t}\, dt = e^{-\beta\hbar\omega} \int_{-\infty}^{+\infty} \langle \hat x(t)\,\hat x(0)\rangle\, e^{i\omega t}\, dt,$$

and thus the expression for $\operatorname{Im}\chi(\omega)$ can be easily rewritten as the quantum fluctuation-dissipation relation

$$S_x(\omega) = 2\hbar\,\bigl[n_\text{BE}(\omega) + 1\bigr]\, \operatorname{Im}\chi(\omega),$$

where the power spectral density $S_x(\omega)$ is the Fourier transform of the auto-correlation $\langle \hat x(t)\hat x(0) \rangle$ and $n_\text{BE}(\omega) = \bigl(e^{\beta\hbar\omega} - 1\bigr)^{-1}$ is the Bose-Einstein distribution function. The same calculation also yields

$$S_x(-\omega) = e^{-\beta\hbar\omega}\, S_x(\omega) = 2\hbar\, n_\text{BE}(\omega)\, \operatorname{Im}\chi(\omega) \neq S_x(+\omega);$$

thus, differently from what is obtained in the classical case, the power spectral density is not exactly frequency-symmetric in the quantum limit. Consistently, $\langle \hat x(t)\hat x(0)\rangle$ has an imaginary part originating from the commutation rules of operators. The additional "$+1$" term in the expression of $S_x(\omega)$ at positive frequencies can also be thought of as linked to spontaneous emission. An often cited result is also the symmetrized power spectral density

$$\frac{S_x(\omega) + S_x(-\omega)}{2} = 2\hbar\left[n_\text{BE}(\omega) + \frac{1}{2}\right] \operatorname{Im}\chi(\omega) = \hbar\,\coth\!\left(\frac{\hbar\omega}{2 k_\text{B} T}\right) \operatorname{Im}\chi(\omega).$$

The "$1/2$" can be thought of as linked to quantum fluctuations, or to the zero-point motion of the observable $\hat x$. At high enough temperatures, $n_\text{BE} \approx k_\text{B}T/(\hbar\omega) \gg 1/2$, i.e. the quantum contribution is negligible, and we recover the classical version.
Violations in glassy systems
While the fluctuation–dissipation theorem provides a general relation between fluctuations and response in systems obeying detailed balance, when detailed balance is violated the comparison of fluctuations to dissipation is more complex. Below the so-called glass temperature $T_\text{g}$, glassy systems are not equilibrated and slowly approach their equilibrium state. This slow approach to equilibrium is synonymous with the violation of detailed balance. Thus these systems require large time-scales to be studied while they slowly move toward equilibrium.
To study the violation of the fluctuation-dissipation relation in glassy systems, particularly spin glasses, researchers have performed numerical simulations of macroscopic systems (i.e. large compared to their correlation lengths) described by the three-dimensional Edwards-Anderson model using supercomputers. In their simulations, the system is initially prepared at a high temperature, rapidly cooled to a temperature $T$ below the glass temperature $T_\text{g}$, and left to equilibrate for a very long time $t_\text{w}$ under a magnetic field $H$. Then, at a later time $t + t_\text{w}$, two dynamical observables are probed, namely the response function

$$\chi(t, t_\text{w}) = \left.\frac{\partial m(t + t_\text{w})}{\partial H}\right|_{H=0}$$

and the spin-temporal correlation function

$$C(t, t_\text{w}) = \frac{1}{V} \sum_x \langle S_x(t + t_\text{w})\, S_x(t_\text{w}) \rangle,$$

where $S_x = \pm 1$ is the spin living on the node $x$ of the cubic lattice of volume $V$, and $m(t) = \frac{1}{V}\sum_x \langle S_x(t)\rangle$ is the magnetization density. The fluctuation-dissipation relation in this system can be written in terms of these observables as

$$T\,\chi(t, t_\text{w}) = 1 - C(t, t_\text{w}).$$

Their results confirm the expectation that as the system is left to equilibrate for longer times, the fluctuation-dissipation relation is closer to being satisfied.
In the mid-1990s, in the study of dynamics of spin glass models, a generalization of the fluctuation–dissipation theorem was discovered that holds for asymptotic non-stationary states, where the temperature appearing in the equilibrium relation is substituted by an effective temperature with a non-trivial dependence on the time scales. This relation is proposed to hold in glassy systems beyond the models for which it was initially found.
| Physical sciences | Statistical mechanics | Physics |
552520 | https://en.wikipedia.org/wiki/Standard%20error | Standard error | The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM). The standard error is a key ingredient in producing confidence intervals.
The sampling distribution of a mean is generated by repeated sampling from the same population and recording of the sample means obtained. This forms a distribution of different means, and this distribution has its own mean and variance. Mathematically, the variance of the sampling mean distribution obtained is equal to the variance of the population divided by the sample size. This is because as the sample size increases, sample means cluster more closely around the population mean.
Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean.
In regression analysis, the term "standard error" refers either to the square root of the reduced chi-squared statistic or the standard error for a particular regression coefficient (as used in, say, confidence intervals).
Standard error of the sample mean
Exact value
Suppose a statistically independent sample of $n$ observations $x_1, x_2, \ldots, x_n$ is taken from a statistical population with a standard deviation of $\sigma$. The mean value calculated from the sample, $\bar{x}$, will have an associated standard error on the mean, $\sigma_{\bar{x}}$, given by:

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}.$$
Practically this tells us that when trying to estimate the value of a population mean, due to the factor $1/\sqrt{n}$, reducing the error on the estimate by a factor of two requires acquiring four times as many observations in the sample; reducing it by a factor of ten requires a hundred times as many observations.
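This scaling is easy to verify by simulation. A minimal sketch with an assumed Gaussian population (any distribution with finite variance behaves the same way):

```python
import numpy as np

# The 1/sqrt(n) scaling: quadrupling the sample size halves the standard
# error. Monte Carlo with an assumed Gaussian population (any finite-variance
# population behaves the same way).
rng = np.random.default_rng(1)
sigma = 2.0                              # population standard deviation

for n in (25, 100, 400):
    sample_means = rng.normal(0.0, sigma, size=(20_000, n)).mean(axis=1)
    print(f"n = {n:4d}: empirical SE = {sample_means.std():.4f}, "
          f"theory sigma/sqrt(n) = {sigma / np.sqrt(n):.4f}")
```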
Estimate
The standard deviation $\sigma$ of the population being sampled is seldom known. Therefore, the standard error of the mean is usually estimated by replacing $\sigma$ with the sample standard deviation $s$ instead:

$$\sigma_{\bar{x}} \approx \frac{s}{\sqrt{n}}.$$

As this is only an estimator for the true "standard error", it is common to see other notations here such as:

$$\widehat{\sigma}_{\bar{x}} = \frac{s}{\sqrt{n}} \quad \text{or} \quad s_{\bar{x}} = \frac{s}{\sqrt{n}}.$$
A common source of confusion occurs when failing to distinguish clearly between the following quantities (a short numerical sketch follows this list):
the standard deviation of the population (σ),
the standard deviation of the sample (s),
the standard deviation of the mean itself (σ_x̄, which is the standard error), and
the estimator of the standard deviation of the mean (σ̂_x̄ = s/√n, which is the most often calculated quantity, and is also often colloquially called the standard error).
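A minimal numerical sketch of the four quantities (illustrative values; ddof=1 gives the usual sample standard deviation):

    import numpy as np

    rng = np.random.default_rng(42)
    sigma, n = 2.0, 25
    sample = rng.normal(loc=10.0, scale=sigma, size=n)

    s = sample.std(ddof=1)         # sample standard deviation, estimates sigma
    se_true = sigma / np.sqrt(n)   # standard deviation of the mean (the standard error)
    se_est = s / np.sqrt(n)        # its estimator, the quantity usually reported

    print(f"sigma = {sigma}, s = {s:.3f}, true SE = {se_true:.3f}, estimated SE = {se_est:.3f}")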
Accuracy of the estimator
When the sample size is small, using the standard deviation of the sample instead of the true standard deviation of the population will tend to systematically underestimate the population standard deviation, and therefore also the standard error. With n = 2, the underestimate is about 25%, but for n = 6, the underestimate is only 5%. Gurland and Tripathi (1971) provide a correction and equation for this effect. Sokal and Rohlf (1981) give an equation of the correction factor for small samples of n < 20. See unbiased estimation of standard deviation for further discussion.
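The size of this small-sample bias can be checked by simulation; a minimal sketch for normally distributed data:

    import numpy as np

    # For each n, draw many samples, compute the sample standard deviation s,
    # and report by what percentage its average falls below the true sigma.
    rng = np.random.default_rng(0)
    sigma = 1.0
    for n in (2, 6, 20):
        s = rng.normal(0.0, sigma, size=(100_000, n)).std(axis=1, ddof=1)
        print(f"n = {n:2d}: s underestimates sigma by ~{100 * (1 - s.mean() / sigma):.1f}%")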
Derivation
The standard error on the mean may be derived from the variance of a sum of independent random variables, given the definition of variance and some properties thereof. If x_1, x_2, …, x_n is a sample of n independent observations from a population with mean μ and standard deviation σ, then we can define the total
T = x_1 + x_2 + ⋯ + x_n,
which, due to the Bienaymé formula, will have variance
Var(T) = Var(x_1) + Var(x_2) + ⋯ + Var(x_n) ≈ nσ²,
where we've approximated the standard deviations, i.e., the uncertainties, of the measurements themselves with the best value for the standard deviation of the population. The mean of these measurements is given by
x̄ = T / n.
The variance of the mean is then
Var(x̄) = Var(T/n) = Var(T)/n² = σ²/n.
The standard error is, by definition, the standard deviation of x̄, which is the square root of the variance:
σ_x̄ = σ / √n.
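The result σ_x̄ = σ/√n can be verified directly by simulating many sample means (a minimal sketch):

    import numpy as np

    rng = np.random.default_rng(3)
    sigma, n = 2.0, 16
    means = rng.normal(0.0, sigma, size=(200_000, n)).mean(axis=1)
    print(means.std())          # empirical standard deviation of the mean
    print(sigma / np.sqrt(n))   # theory: sigma / sqrt(n) = 0.5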
For correlated random variables the sample variance needs to be computed according to the Markov chain central limit theorem.
Independent and identically distributed random variables with random sample size
There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample size N is a random variable whose variation adds to the variation of X such that,
Var(T) = E(N) Var(X) + Var(N) (E(X))²,
which follows from the law of total variance.
If N has a Poisson distribution, then E(N) = Var(N), with estimator N = n. Hence the estimator of Var(T) becomes n s² + n x̄², leading to the following formula for standard error:
Standard error = √((s² + x̄²) / n)
(since the standard deviation is the square root of the variance).
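A simulation sketch of the Poisson-sample-size case (illustrative parameters), checking that Var(T) = E(N)(Var(X) + E(X)²) as predicted by the law of total variance:

    import numpy as np

    rng = np.random.default_rng(7)
    lam, mu, sigma = 50, 3.0, 1.5
    totals = np.array([
        rng.normal(mu, sigma, size=rng.poisson(lam)).sum()
        for _ in range(100_000)
    ])
    print(totals.var())                  # empirical Var(T)
    print(lam * (sigma**2 + mu**2))      # prediction: E(N) * (Var(X) + E(X)^2)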
Student approximation when σ value is unknown
In many practical applications, the true value of σ is unknown. As a result, we need to use a distribution that takes into account the spread of possible σ's.
When the true underlying distribution is known to be Gaussian, although with unknown σ, then the resulting estimated distribution follows the Student t-distribution. The standard error is the standard deviation of the Student t-distribution. T-distributions are slightly different from Gaussian, and vary depending on the size of the sample. Small samples are somewhat more likely to underestimate the population standard deviation and have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian. To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation "s" instead of σ, and we could use this value to calculate confidence intervals.
Note: The Student's probability distribution is approximated well by the Gaussian distribution when the sample size is over 100. For such samples one can use the latter distribution, which is much simpler.
Also, even though the 'true' distribution of the population is unknown, assuming normality of the sampling distribution makes sense for a reasonable sample size and under certain sampling conditions (see the central limit theorem). If these conditions are not met, then using a bootstrap distribution to estimate the standard error is often a good workaround, but it can be computationally intensive.
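A minimal sketch of the bootstrap workaround (resampling the observed data with replacement; data and names are illustrative):

    import numpy as np

    rng = np.random.default_rng(11)
    sample = rng.exponential(scale=2.0, size=40)     # skewed, non-Gaussian data

    # Resample with replacement many times and take the spread of the means.
    boot_means = np.array([
        rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(10_000)
    ])
    print("bootstrap SE:", boot_means.std(ddof=1))
    print("formula SE  :", sample.std(ddof=1) / np.sqrt(sample.size))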
Assumptions and usage
An example of how the standard error is used is to make confidence intervals of the unknown population mean. If the sampling distribution is normally distributed, the sample mean, the standard error, and the quantiles of the normal distribution can be used to calculate confidence intervals for the true population mean. The following expressions can be used to calculate the upper and lower 95% confidence limits, where x̄ is equal to the sample mean, SE is equal to the standard error for the sample mean, and 1.96 is the approximate value of the 97.5 percentile point of the normal distribution:
Upper 95% limit = x̄ + 1.96 × SE, and
Lower 95% limit = x̄ − 1.96 × SE.
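A minimal sketch computing these limits (illustrative data):

    import numpy as np

    rng = np.random.default_rng(5)
    sample = rng.normal(100.0, 15.0, size=50)
    xbar = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(sample.size)
    print(f"95% CI: ({xbar - 1.96 * se:.2f}, {xbar + 1.96 * se:.2f})")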
In particular, the standard error of a sample statistic (such as the sample mean) is the actual or estimated standard deviation of the sample mean in the process by which it was generated. In other words, it is the actual or estimated standard deviation of the sampling distribution of the sample statistic. The notation for standard error can be any one of SE, SEM (for standard error of measurement or mean), or S.E.
Standard errors provide simple measures of uncertainty in a value and are often used because:
in many cases, if the standard error of several individual quantities is known then the standard error of some function of the quantities can be easily calculated;
when the probability distribution of the value is known, it can be used to calculate an exact confidence interval;
when the probability distribution is unknown, Chebyshev's or the Vysochanskiï–Petunin inequalities can be used to calculate a conservative confidence interval; and
as the sample size tends to infinity the central limit theorem guarantees that the sampling distribution of the mean is asymptotically normal.
Standard error of mean versus standard deviation
In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. This often leads to confusion about their interchangeability. However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean is descriptive of the random sampling process. The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.
Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases.
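This contrast is easy to see numerically (a minimal sketch): as n grows, the sample standard deviation stabilises near σ while the standard error shrinks toward zero:

    import numpy as np

    rng = np.random.default_rng(9)
    for n in (10, 100, 10_000):
        x = rng.normal(0.0, 1.0, size=n)             # population sigma = 1
        s = x.std(ddof=1)
        print(f"n = {n:6d}: sample sd = {s:.3f}, SE of mean = {s / np.sqrt(n):.4f}")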
Extensions
Finite population correction (FPC)
The formula given above for the standard error assumes that the population is infinite. Nonetheless, it is often used for finite populations when people are interested in measuring the process that created the existing finite population (this is called an analytic study). Though the above formula is not exactly correct when the population is finite, the difference between the finite- and infinite-population versions will be small when sampling fraction is small (e.g. a small proportion of a finite population is studied). In this case people often do not correct for the finite population, essentially treating it as an "approximately infinite" population.
If one is interested in measuring an existing finite population that will not change over time, then it is necessary to adjust for the population size (called an enumerative study). When the sampling fraction (often termed f) is large (approximately at 5% or more) in an enumerative study, the estimate of the standard error must be corrected by multiplying by a "finite population correction" (a.k.a. FPC):
FPC = √((N − n) / (N − 1)),
which, for large N, is approximately:
FPC ≈ √(1 − n/N),
to account for the added precision gained by sampling close to a larger percentage of the population. The effect of the FPC is that the error becomes zero when the sample size n is equal to the population size N.
This happens in survey methodology when sampling without replacement. If sampling with replacement, then FPC does not come into play.
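A minimal sketch applying the correction (illustrative values):

    import numpy as np

    def corrected_se(s, n, N):
        # Standard error with the finite population correction applied.
        fpc = np.sqrt((N - n) / (N - 1))
        return (s / np.sqrt(n)) * fpc

    print(corrected_se(s=10.0, n=100, N=2000))   # small sampling fraction: FPC ~ 1
    print(corrected_se(s=10.0, n=100, N=500))    # 20% fraction: noticeably smaller
    print(corrected_se(s=10.0, n=100, N=100))    # n == N: the error is exactly zero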
Correction for correlation in the sample
If values of the measured quantity A are not statistically independent but have been obtained from known locations in parameter space x, an unbiased estimate of the true standard error of the mean (actually a correction on the standard deviation part) may be obtained by multiplying the calculated standard error of the sample by the factor f:
f = √((1 + ρ) / (1 − ρ)),
where the sample bias coefficient ρ is the widely used Prais–Winsten estimate of the autocorrelation-coefficient (a quantity between −1 and +1) for all sample point pairs. This approximate formula is for moderate to large sample sizes; the reference gives the exact formulas for any sample size, and can be applied to heavily autocorrelated time series like Wall Street stock quotes. Moreover, this formula works for positive and negative ρ alike. | Mathematics | Statistics and probability | null |
553121 | https://en.wikipedia.org/wiki/Regulation%20of%20gene%20expression | Regulation of gene expression | Regulation of gene expression, or gene regulation, includes a wide range of mechanisms that are used by cells to increase or decrease the production of specific gene products (protein or RNA). Sophisticated programs of gene expression are widely observed in biology, for example to trigger developmental pathways, respond to environmental stimuli, or adapt to new food sources. Virtually any step of gene expression can be modulated, from transcriptional initiation, to RNA processing, and to the post-translational modification of a protein. Often, one gene regulator controls another, and so on, in a gene regulatory network.
Gene regulation is essential for viruses, prokaryotes and eukaryotes as it increases the versatility and adaptability of an organism by allowing the cell to express protein when needed. Although as early as 1951, Barbara McClintock showed interaction between two genetic loci, Activator (Ac) and Dissociator (Ds), in the color formation of maize seeds, the first discovery of a gene regulation system is widely considered to be the identification in 1961 of the lac operon, discovered by François Jacob and Jacques Monod, in which some enzymes involved in lactose metabolism are expressed by E. coli only in the presence of lactose and absence of glucose.
In multicellular organisms, gene regulation drives cellular differentiation and morphogenesis in the embryo, leading to the creation of different cell types that possess different gene expression profiles from the same genome sequence. Although this does not explain how gene regulation originated, evolutionary biologists include it as a partial explanation of how evolution works at a molecular level, and it is central to the science of evolutionary developmental biology ("evo-devo").
Regulated stages of gene expression
Any step of gene expression may be modulated, from signaling to transcription to post-translational modification of a protein. The following is a list of stages where gene expression is regulated, where the most extensively utilized point is transcription initiation, the first stage in transcription:
Signal transduction
Chromatin, chromatin remodeling, chromatin domains
Transcription
Post-transcriptional modification
RNA transport
Translation
mRNA degradation
Modification of DNA
In eukaryotes, the accessibility of large regions of DNA can depend on its chromatin structure, which can be altered as a result of histone modifications directed by DNA methylation, ncRNA, or DNA-binding protein. Hence these modifications may up- or down-regulate the expression of a gene. Some of these modifications that regulate gene expression are inheritable and are referred to as epigenetic regulation.
Structural
Transcription of DNA is dictated by its structure. In general, the density of its packing is indicative of the frequency of transcription. Octameric protein complexes called histones together with a segment of DNA wound around the eight histone proteins (together referred to as a nucleosome) are responsible for the amount of supercoiling of DNA, and these complexes can be temporarily modified by processes such as phosphorylation or more permanently modified by processes such as methylation. Such modifications are considered to be responsible for more or less permanent changes in gene expression levels.
Chemical
Methylation of DNA is a common method of gene silencing. DNA is typically methylated by methyltransferase enzymes on cytosine nucleotides in a CpG dinucleotide sequence (also called "CpG islands" when densely clustered). Analysis of the pattern of methylation in a given region of DNA (which can be a promoter) can be achieved through a method called bisulfite mapping. Methylated cytosine residues are unchanged by the treatment, whereas unmethylated ones are changed to uracil. The differences are analyzed by DNA sequencing or by methods developed to quantify SNPs, such as Pyrosequencing (Biotage) or MassArray (Sequenom), measuring the relative amounts of C/T at the CG dinucleotide. Abnormal methylation patterns are thought to be involved in oncogenesis.
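The logic of this quantification can be sketched in a few lines (a toy example with made-up reads, not a real bisulfite pipeline): after treatment, the C/(C+T) ratio at a CpG position estimates its methylation level.

    # Toy bisulfite-mapping logic: methylated C stays C, unmethylated C reads as T.
    reference = "ACGTTCGGACGA"
    reads = ["ATGTTCGGATGA",   # converted (unmethylated) at positions 1 and 9
             "ACGTTCGGATGA",
             "ATGTTTGGATGA"]

    for i in range(len(reference) - 1):
        if reference[i:i + 2] == "CG":               # a CpG site in the reference
            c = sum(r[i] == "C" for r in reads)
            t = sum(r[i] == "T" for r in reads)
            print(f"CpG at position {i}: methylation ~ {c / (c + t):.2f}")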
Histone acetylation is also an important process in transcription. Histone acetyltransferase enzymes (HATs) such as CREB-binding protein also dissociate the DNA from the histone complex, allowing transcription to proceed. Often, DNA methylation and histone deacetylation work together in gene silencing. The combination of the two seems to be a signal for DNA to be packed more densely, lowering gene expression.
Regulation of transcription
Regulation of transcription controls when transcription occurs and how much RNA is created. Transcription of a gene by RNA polymerase can be regulated by several mechanisms.
Specificity factors alter the specificity of RNA polymerase for a given promoter or set of promoters, making it more or less likely to bind to them (i.e., sigma factors used in prokaryotic transcription).
Repressors bind to the operator, a DNA sequence close to or overlapping the promoter region, impeding RNA polymerase's progress along the strand, thus impeding the expression of the gene. Regulation by a repressor is exemplified by the lac operon.
General transcription factors position RNA polymerase at the start of a protein-coding sequence and then release the polymerase to transcribe the mRNA.
Activators enhance the interaction between RNA polymerase and a particular promoter, encouraging the expression of the gene. Activators do this by increasing the attraction of RNA polymerase for the promoter, through interactions with subunits of the RNA polymerase or indirectly by changing the structure of the DNA.
Enhancers are sites on the DNA helix that are bound by activators in order to loop the DNA bringing a specific promoter to the initiation complex. Enhancers are much more common in eukaryotes than prokaryotes, where only a few examples exist (to date).
Silencers are regions of DNA sequences that, when bound by particular transcription factors, can silence expression of the gene.
Regulation by RNA
RNA can be an important regulator of gene activity, e.g. through microRNA (miRNA), antisense RNA, or long non-coding RNA (lncRNA). LncRNAs differ from mRNAs in that they have specified subcellular locations and functions. They were first discovered in the nucleus and chromatin, but their known localizations and functions are now highly diverse. Some still reside in chromatin, where they interact with proteins; such lncRNAs ultimately affect gene expression in neuronal disorders such as Parkinson's, Huntington's, and Alzheimer's disease, while others, such as PNCTR (pyrimidine-rich non-coding transcript), play a role in lung cancer. Given their role in disease, lncRNAs are potential biomarkers and may be useful targets for drugs or gene therapy, although no approved drugs target lncRNAs yet. The number of lncRNAs in the human genome remains poorly defined, but some estimates range from 16,000 to 100,000 lnc genes.
Epigenetic gene regulation
Epigenetics refers to modifications of genes that do not change the DNA or RNA sequence. Epigenetic modifications are also a key factor in influencing gene expression. They occur on genomic DNA and histones, and their chemical modifications regulate gene expression in a more efficient manner. There are several modifications of DNA (usually methylation) and more than 100 modifications of RNA in mammalian cells. These modifications result in altered protein binding to DNA and a change in RNA stability and translation efficiency.
Special cases in human biology and disease
Regulation of transcription in cancer
In vertebrates, the majority of gene promoters contain a CpG island with numerous CpG sites. When many of a gene's promoter CpG sites are methylated the gene becomes silenced. Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs. In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-expressed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers).
Regulation of transcription in addiction
One of the cardinal features of addiction is its persistence. The persistent behavioral changes appear to be due to long-lasting changes, resulting from epigenetic alterations affecting gene expression, within particular regions of the brain. Drugs of abuse cause three types of epigenetic alteration in the brain. These are (1) histone acetylations and histone methylations, (2) DNA methylation at CpG sites, and (3) epigenetic downregulation or upregulation of microRNAs. (See Epigenetics of cocaine addiction for some details.)
Chronic nicotine intake in mice alters brain cell epigenetic control of gene expression through acetylation of histones. This increases expression in the brain of the protein FosB, important in addiction. Cigarette addiction was also studied in about 16,000 humans, including never smokers, current smokers, and those who had quit smoking for up to 30 years. In blood cells, more than 18,000 CpG sites (of the roughly 450,000 analyzed CpG sites in the genome) had frequently altered methylation among current smokers. These CpG sites occurred in over 7,000 genes, or roughly a third of known human genes. The majority of the differentially methylated CpG sites returned to the level of never-smokers within five years of smoking cessation. However, 2,568 CpGs among 942 genes remained differentially methylated in former versus never smokers. Such remaining epigenetic changes can be viewed as “molecular scars” that may affect gene expression.
In rodent models, drugs of abuse, including cocaine, methamphetamine, alcohol and tobacco smoke products, all cause DNA damage in the brain. During repair of DNA damages some individual repair events can alter the methylation of DNA and/or the acetylations or methylations of histones at the sites of damage, and thus can contribute to leaving an epigenetic scar on chromatin.
Such epigenetic scars likely contribute to the persistent epigenetic changes found in addiction.
Regulation of transcription in learning and memory
In mammals, methylation of cytosine (see Figure) in DNA is a major regulatory mediator. Methylated cytosines primarily occur in dinucleotide sequences where cytosine is followed by a guanine, a CpG site. The total number of CpG sites in the human genome is approximately 28 million, and generally about 70% of all CpG sites have a methylated cytosine.
In a rat, a painful learning experience, contextual fear conditioning, can result in a life-long fearful memory after a single training event. Cytosine methylation is altered in the promoter regions of about 9.17% of all genes in the hippocampus neuron DNA of a rat that has been subjected to a brief fear conditioning experience. The hippocampus is where new memories are initially stored.
Methylation of CpGs in a promoter region of a gene represses transcription while methylation of CpGs in the body of a gene increases expression. TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene.
When contextual fear conditioning is applied to a rat, more than 5,000 differentially methylated regions (DMRs) (of 500 nucleotides each) occur in the rat hippocampus neural genome both one hour and 24 hours after the conditioning in the hippocampus. This causes about 500 genes to be up-regulated (often due to demethylation of CpG sites in a promoter region) and about 1,000 genes to be down-regulated (often due to newly formed 5-methylcytosine at CpG sites in a promoter region). The pattern of induced and repressed genes within neurons appears to provide a molecular basis for forming the first transient memory of this training event in the hippocampus of the rat brain.
Post-transcriptional regulation
After the DNA is transcribed and mRNA is formed, there must be some sort of regulation on how much the mRNA is translated into proteins. Cells do this by modulating the capping, splicing, addition of a Poly(A) Tail, the sequence-specific nuclear export rates, and, in several contexts, sequestration of the RNA transcript. These processes occur in eukaryotes but not in prokaryotes. This modulation is a result of a protein or transcript that, in turn, is regulated and may have an affinity for certain sequences.
Three prime untranslated regions and microRNAs
Three prime untranslated regions (3'-UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression. Such 3'-UTRs often contain binding sites both for microRNAs (miRNAs) and for regulatory proteins. By binding to specific sites within the 3'-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3'-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of an mRNA.
The 3'-UTR often contains miRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3'-UTRs. Among all regulatory motifs within the 3'-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.
As of 2014, the miRBase web site, an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes). Friedman et al. estimate that >45,000 miRNA target sites within human mRNA 3'-UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.
Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold).
The effects of miRNA dysregulation of gene expression seem to be important in cancer. For instance, in gastrointestinal cancers, a 2015 paper identified nine miRNAs as epigenetically altered and effective in down-regulating DNA repair enzymes.
The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depressive disorder, Parkinson's disease, Alzheimer's disease and autism spectrum disorders.
Regulation of translation
The translation of mRNA can also be controlled by a number of mechanisms, mostly at the level of initiation. Recruitment of the small ribosomal subunit can indeed be modulated by mRNA secondary structure, antisense RNA binding, or protein binding. In both prokaryotes and eukaryotes, a large number of RNA binding proteins exist, which often are directed to their target sequence by the secondary structure of the transcript, which may change depending on certain conditions, such as temperature or presence of a ligand (aptamer). Some transcripts act as ribozymes and self-regulate their expression.
Examples of gene regulation
Enzyme induction is a process in which a molecule (e.g., a drug) induces (i.e., initiates or enhances) the expression of an enzyme.
The induction of heat shock proteins in the fruit fly Drosophila melanogaster.
The Lac operon is an interesting example of how gene expression can be regulated.
Viruses, despite having only a few genes, possess mechanisms to regulate their gene expression, typically into an early and late phase, using collinear systems regulated by anti-terminators (lambda phage) or splicing modulators (HIV).
Gal4 is a transcriptional activator that controls the expression of GAL1, GAL7, and GAL10 (all of which encode proteins for the metabolism of galactose in yeast). The GAL4/UAS system has been used in a variety of organisms across various phyla to study gene expression.
Developmental biology
A large number of studied regulatory systems come from developmental biology. Examples include:
The colinearity of the Hox gene cluster with their nested antero-posterior patterning
Pattern generation of the hand (digits - interdigits): the gradient of sonic hedgehog (secreted inducing factor) from the zone of polarizing activity in the limb, which creates a gradient of active Gli3, which activates Gremlin, which inhibits BMPs also secreted in the limb, results in the formation of an alternating pattern of activity as a result of this reaction–diffusion system.
Somitogenesis is the creation of segments (somites) from a uniform tissue (Pre-somitic Mesoderm). They are formed sequentially from anterior to posterior. This is achieved in amniotes possibly by means of two opposing gradients, Retinoic acid in the anterior (wavefront) and Wnt and Fgf in the posterior, coupled to an oscillating pattern (segmentation clock) composed of FGF + Notch and Wnt in antiphase.
Sex determination in the soma of a Drosophila requires the sensing of the ratio of autosomal genes to sex chromosome-encoded genes, which results in the production of the Sex-lethal (Sxl) splicing factor in females, resulting in the female isoform of doublesex.
Circuitry
Up-regulation and down-regulation
Up-regulation is a process which occurs within a cell triggered by a signal (originating internal or external to the cell), which results in increased expression of one or more genes and as a result the proteins encoded by those genes. Conversely, down-regulation is a process resulting in decreased gene and corresponding protein expression.
Up-regulation occurs, for example, when a cell is deficient in some kind of receptor. In this case, more receptor protein is synthesized and transported to the membrane of the cell and, thus, the sensitivity of the cell is brought back to normal, reestablishing homeostasis.
Down-regulation occurs, for example, when a cell is overstimulated by a neurotransmitter, hormone, or drug for a prolonged period of time, and the expression of the receptor protein is decreased in order to protect the cell (see also tachyphylaxis).
Inducible vs. repressible systems
Gene Regulation can be summarized by the response of the respective system:
Inducible systems - An inducible system is off unless there is the presence of some molecule (called an inducer) that allows for gene expression. The molecule is said to "induce expression". The manner by which this happens is dependent on the control mechanisms as well as differences between prokaryotic and eukaryotic cells.
Repressible systems - A repressible system is on except in the presence of some molecule (called a corepressor) that suppresses gene expression. The molecule is said to "repress expression". The manner by which this happens is dependent on the control mechanisms as well as differences between prokaryotic and eukaryotic cells.
The GAL4/UAS system is an example of both an inducible and repressible system. Gal4 binds an upstream activation sequence (UAS) to activate the transcription of the GAL1/GAL7/GAL10 cassette. On the other hand, a MIG1 response to the presence of glucose can inhibit GAL4 and therefore stop the expression of the GAL1/GAL7/GAL10 cassette.
Theoretical circuits
Repressor/Inducer: an activation of a sensor results in the change of expression of a gene
negative feedback: the gene product downregulates its own production directly or indirectly, which can result in
keeping transcript levels constant/proportional to a factor
inhibition of run-away reactions when coupled with a positive feedback loop
creating an oscillator by taking advantage of the time delay of transcription and translation, provided that the mRNA and protein half-lives are short
positive feedback: the gene product upregulates its own production directly or indirectly, which can result in
signal amplification
bistable switches when two genes inhibit each other and both have positive feedback
pattern generation
Study methods
In general, most experiments investigating differential expression use whole-cell extracts of RNA, measuring so-called steady-state levels, to determine which genes changed and by how much. Such measurements are, however, not informative of where the regulation has occurred and may mask conflicting regulatory processes (see post-transcriptional regulation), yet steady-state levels remain the most commonly analysed, e.g. by quantitative PCR and DNA microarrays.
When studying gene expression, there are several methods to look at the various stages. In eukaryotes these include:
The local chromatin environment of the region can be determined by ChIP-chip analysis by pulling down RNA Polymerase II, Histone 3 modifications, Trithorax-group protein, Polycomb-group protein, or any other DNA-binding element to which a good antibody is available.
Epistatic interactions can be investigated by synthetic genetic array analysis
Due to post-transcriptional regulation, transcription rates and total RNA levels differ significantly. To measure the transcription rates nuclear run-on assays can be done and newer high-throughput methods are being developed, using thiol labelling instead of radioactivity.
Only 5% of the RNA polymerised in the nucleus exits, and not only introns, abortive products, and nonsense transcripts are degraded. Therefore, the differences between nuclear and cytoplasmic levels can be seen by separating the two fractions by gentle lysis.
Alternative splicing can be analysed with a splicing array or with a tiling array (see DNA microarray).
All in vivo RNA is complexed as RNPs. The quantity of transcripts bound to a specific protein can also be analysed by RIP-Chip. For example, DCP2 will give an indication of sequestered transcripts; ribosome-bound RNA gives an indication of transcripts active in translation (although a more dated method, called polysome fractionation, is still popular in some labs).
Protein levels can be analysed by Mass spectrometry, which can be compared only to quantitative PCR data, as microarray data is relative and not absolute.
RNA and protein degradation rates are measured by means of transcription inhibitors (actinomycin D or α-Amanitin) or translation inhibitors (Cycloheximide), respectively.
| Biology and health sciences | Molecular biology | Biology |
553313 | https://en.wikipedia.org/wiki/Platybelodon | Platybelodon | Platybelodon ("flat-spear tusk") is an extinct genus of large herbivorous proboscidean mammals related to modern-day elephants, placed in the "shovel tusker" family Amebelodontidae. Species lived during the middle Miocene Epoch in Africa, Asia and the Caucasus.
Distribution
P. grangeri fossils are known from China.
Palaeobiology
Platybelodon was previously believed to have fed in the swampy areas of grassy savannas, using its teeth to shovel up aquatic and semi-aquatic vegetation. However, wear patterns on the teeth suggest that it used its lower tusks to strip bark from trees, and may have used the sharp incisors that formed the edge of the "shovel" more like a modern-day scythe, grasping branches with its trunk and rubbing them against the lower teeth to cut them from the tree. Adults in particular might have eaten coarser vegetation more frequently than juveniles.
| Biology and health sciences | Proboscidea | Animals |
553383 | https://en.wikipedia.org/wiki/Moorland | Moorland | Moorland or moor is a type of habitat found in upland areas in temperate grasslands, savannas, and shrublands and montane grasslands and shrublands biomes, characterised by low-growing vegetation on acidic soils. Moorland, nowadays, generally means uncultivated hill land (such as Dartmoor in South West England), but also includes low-lying wetlands (such as Sedgemoor, also South West England). It is closely related to heath, although experts disagree on what precisely distinguishes these types of vegetation. Generally, moor refers to highland and high rainfall zones, whereas heath refers to lowland zones which are more likely to be the result of human activity.
Moorland habitats mostly occur in tropical Africa, northern and western Europe, and South America. Most of the world's moorlands are diverse ecosystems. In the extensive moorlands of the tropics, biodiversity can be extremely high. Moorland also bears a relationship to tundra (where the subsoil is permafrost or permanently frozen soil), occupying a zone between the tundra and the natural tree zone. The boundary between tundra and moorland constantly shifts with climatic change.
Heather moorland
Heathland and moorland are the most extensive areas of semi-natural vegetation in the British Isles. The eastern British moorlands are similar to heaths but are differentiated by having a covering of peat. On western moors, the peat layer may be several metres thick. Scottish "muirs" are generally heather moors, but also have extensive covering of grass, cotton-grass, mosses, bracken and under-shrubs such as crowberry, with the wetter moorland having sphagnum moss merging into bog-land.
There is uncertainty about how many moors were created by human activity. Oliver Rackham writes that pollen analysis shows that some moorland, such as in the islands and extreme north of Scotland, are clearly natural, never having had trees, whereas much of the Pennine moorland area was forested in Mesolithic times. How much the deforestation was caused by climatic changes and how much by human activity is uncertain.
Ecology
A variety of distinct habitat types are found in different world regions of moorland. The wildlife and vegetation forms often lead to high endemism because of the severe soil and microclimate characteristics. An example of this is the Exmoor Pony, a rare horse breed which has adapted to the harsh conditions in England's Exmoor.
In Europe, the associated fauna consists of bird species such as red grouse, hen harrier, merlin, golden plover, curlew, skylark, meadow pipit, whinchat, ring ouzel, and twite. Other species dominate in moorlands elsewhere. Reptiles are few due to the cooler conditions. In Europe, only the common viper is frequent, though in other regions moorlands are commonly home to dozens of reptile species. Amphibians such as frogs are well represented in moorlands. When moorland is overgrazed, woody vegetation is often lost, being replaced by coarse, unpalatable grasses and bracken, with a greatly reduced fauna.
Some hill sheep breeds, such as Scottish Blackface and the Lonk, thrive on the austere conditions of heather moors.
Management
Burning of moorland has been practised for a number of reasons, for example, when grazing is insufficient to control growth. This is recorded in Britain in the fourteenth century. Uncontrolled burning frequently caused (and causes) problems and was forbidden by statute in 1609. With the rise of sheep and grouse management in the nineteenth century, it again became common practice. Heather is burnt at about 10 or 12 years old when it will regenerate easily. Left longer, the woodier stems will burn more aggressively and will hinder regrowth. Burning of moorland vegetation needs to be very carefully controlled, as the peat itself can catch fire, and this can be difficult if not impossible to extinguish. In addition, uncontrolled burning of heather can promote alternative bracken and rough grass growth, which ultimately produces poorer grazing. As a result, burning is now a controversial practice; Rackham calls it "second-best land management".
Mechanical cutting of the heather has been used in Europe, but it is important for the material to be removed to avoid smothering regrowth. If heather and other vegetation are left for too long, a large volume of dry and combustible material builds up. This may result in a wildfire burning out a large area, although it has been found that heather seeds germinate better if subject to the brief heat of controlled burning.
In terms of managing moorlands for wildlife, in the UK, vegetation characteristics are important for passerine abundance, whilst predator control benefits red grouse, golden plover, and curlew abundances. To benefit multiple species, many management options are required. However, management needs to be carried out in locations that are also suitable for species in terms of physical characteristics such as topography, climate and soil.
In literature
The development of a sensitivity to nature and one's physical surroundings grew with the rise of interest in landscape painting, and particularly the works of artists that favoured wide and deep prospects, and rugged scenery. To the English Romantic imagination, moorlands fitted this image perfectly, enhancing the emotional impact of the story by placing it within a heightened and evocative landscape. Moorland forms the setting of various works of late Romantic English literature, ranging from the Yorkshire moorland in Emily Brontë's Wuthering Heights and The Secret Garden by Frances Hodgson Burnett to Dartmoor in Arthur Conan Doyle's Holmesian mystery The Hound of the Baskervilles. They are also featured in Charlotte Brontë's Jane Eyre representing the heroine's desolation and loneliness after leaving Mr Rochester.
Enid Blyton's Famous Five series featured the young protagonists adventuring across various moorlands where they confronted criminals or other individuals of interest. Such a setting enhanced the plot as the drama unfolded away from the functioning world where the children could solve their own problems and face greater danger. Moorland in the Forest of Bowland in Lancashire is the setting for Walter Bennett's The Pendle Witches, the true story of some of England's most infamous witch trials. In Erin Hunter's Warriors series, one of the four Clans, WindClan, lives in the moorland alone.
Michael Jecks, author of Knights Templar Mysteries, sets his books in and around Dartmoor, England. Paul Kingsnorth’s Beast is also set on a western English moor, using the barren landscape and fields of heather to communicate themes of timelessness and distance from civilization.
Notable moorlands
Africa
Democratic Republic of the Congo
Ruwenzori-Virunga montane moorlands
Ethiopia
Ethiopian montane moorlands
Kenya
East African montane moorlands
Mount Kenya
Rwanda
Ruwenzori-Virunga montane moorlands
Sudan
East African montane moorlands
Ethiopian montane moorlands
Tanzania
East African montane moorlands
Kilimanjaro
Mount Meru
Uganda
East African montane moorlands
Europe
Austria
Tanner Moor
Längsee Moor
Moorbad Gmös
Belgium
Weißer Stein (Eifel)
High Fens
France
Monts d'Arrée
Germany
Großes Torfmoor
Hücker Moor
Oppenwehe Moor
Worringer Bruch
High Fens
The Netherlands
Dwingelderveld
Bargerveen
Fochteloërveen
The Peel
Great Britain
Great Britain is home to an estimated 10–15% of the world's moors. Notable areas of upland moorland in Britain include the Lake District, the Pennines (including the Dark Peak and Forest of Bowland), Mid Wales, the Southern Uplands of Scotland, the Scottish Highlands, and a few pockets in the West Country.
Bleaklow, Dark Peak
Bodmin Moor, Cornwall
Black Mountains, Wales
Brecon Beacons, Wales
Dartmoor, Devon
Drumossie Moor, often called Culloden Moor, the site of the Battle of Culloden
Exmoor, West Somerset & North Devon
Forest of Bowland, Lancashire
Hexhamshire Moors, Northumberland and County Durham
North York Moors, North Yorkshire
Migneint, Gwynedd
Mynydd Hiraethog, Denbighshire and Conwy
Penwith, Cornwall
Rannoch Moor, Highlands, Scotland
Rombalds Moor (including Ilkley Moor), West Yorkshire
Rossendale Valley, Lancashire
Saddleworth Moor, Greater Manchester
Shropshire Hills, small pockets of moorland such as the Long Mynd
West Pennine Moors, including Oswaldtwistle Moor, Haslingden Moor, Rivington Moor and Darwen Moor in Lancashire
Yorkshire Dales National Park, North Yorkshire
Ythan Estuary complex, Aberdeenshire, Scotland: largest coastal moorland in the British Isles, known for high biodiversity
Spain
Moorlands are called páramos in Spanish. They are particularly common in Northern Spain and the Meseta Central.
Boedo, Palencia, Castile
Páramo del Duratón, Castile
Paramo de Masa, Burgos, Castile
Páramo del Sil, Galicia
Las Loras, Castile
North America
United States
Two similar habitats, although more arid, found in western North America:
Siskiyou plateau
High Desert (Oregon)
South America
Argentina and Chile
Magellanic moorland
Colombia
Colombia is one of only three countries in the world to be home to páramo (tropical moorland) and more than 60% of the paramo regions are found on its soil.
Sumapaz Páramo, Bogota
Chingaza National Natural Park, Cundinamarca department
Oceta Páramo, Boyacá Department
Iguaque, Boyacá Department
Puracé, Cauca Department
Páramo de Santurbán, Santander Department
| Physical sciences | Biomes: General | Earth science |
1392524 | https://en.wikipedia.org/wiki/Rafflesia%20arnoldii | Rafflesia arnoldii | Rafflesia arnoldii, known as the corpse flower or giant padma (its local name is petimum sikinlili), is a species of flowering plant in the parasitic genus Rafflesia within the family Rafflesiaceae. It is noted for producing the largest individual flower on Earth. It has a strong and unpleasant odor of decaying flesh. It is native to the rainforests of Sumatra and Borneo. Although there are some plants with larger flowering organs like the titan arum (Amorphophallus titanum) and talipot palm (Corypha umbraculifera), those are technically clusters of many flowers.
Rafflesia arnoldii is one of the three national flowers in Indonesia, the other two being the white jasmine (Jasminum sambac) and moon orchid (Phalaenopsis amabilis). It was officially recognized as a national "rare flower" () in Presidential Decree No. 4 in 1993.
Taxonomy
The first European to find Rafflesia was the ill-fated French explorer Louis Auguste Deschamps. He was a member of a French scientific expedition to Asia and the Pacific, detained by the Dutch for three years on the Indonesian island of Java, where, in 1797, he collected a specimen, which was probably what is now known as R. patma. During the return voyage in 1798, his ship was taken by the British, with whom France was at war, and all his papers and notes were confiscated. Joseph Banks is said to have agitated for the return of the stolen documents, but apparently to no avail: they were lost, turned up for sale around 1860, went to the British Museum of Natural History, where they were promptly lost again. They did not see the light of day until 1954, when they were rediscovered at the Museum. To everyone's surprise, his notes and drawings indicate that he had found and studied the plants long before the British. It is thought quite possible the British purposely hid Deschamps' notes, to claim the 'glory' of 'discovery' for themselves.
In 1818 the British surgeon Joseph Arnold collected a specimen of another Rafflesia species found by a Malay servant in a part of Sumatra, then a British colony called British Bencoolen (now Bengkulu), during an expedition run by the recently appointed Lieutenant-Governor of Bencoolen, Stamford Raffles. Arnold contracted a fever and died soon after the discovery, the preserved material being sent to Banks. Banks passed on the materials, and the honour to study them was given to Robert Brown. The British Museum's resident botanical artist Franz Bauer was commissioned to make illustrations of the new plants. Brown eventually gave a speech before the June 1820 meeting of the Linnean Society of London, where he first introduced the genus and its until then two species. Brown gave the generic name Rafflesia in honour of Raffles. Bauer completed his pictures some time in mid-1821, but the actual article on the subject continued to languish.
William Jack, Arnold's successor in the Sumatran Bencoolen colony, recollected the plant and was the first to officially describe the new species under the name R. titan in 1820. It is thought quite likely that Jack rushed the name to publication because he feared that the French might publish what they knew of the species, and thus rob the British of potential 'glory'. Apparently aware of Jack's work, Brown finally had the article published in the Transactions of the Linnean Society a year later, formally introducing the name R. arnoldii (he ignores Jack's work in his article).
Because Jack's name has priority, R. arnoldii should technically be a synonym of R. titan, but at least in Britain, it was common at the time to recognize the names introduced by well-regarded scientists such as Brown, over what should taxonomically be the correct name. This was pointed out by the Dutch Rafflesia expert Willem Meijer in his monographic addition to the book series Flora Malesiana in 1997. Instead of sinking R. arnoldii into synonymy, however, he declared that the name R. titan was "incompletely known": the plant material used by Jack to describe the plant has been lost.
In 1999, the British botanical historian David Mabberley, in response to Meijer's findings, attempted to rescue Brown's names from synonymy. This is known as 'conservation' in taxonomy, and normally this requires making a formal proposal to the committee of the International Code of Botanical Nomenclature (ICBN). Mabberley thought he found a loophole around such a formal review by noting that while Brown was notoriously slow to get his papers published, he often had a handful of pre-print pages privately printed to exchange with other botanists: one of these pre-prints had been recently bought by the Hortus Botanicus Leiden, and it was dated April 1821. Mabberley thus proposed that this document be considered the official effective publication, stating this would invalidate Jack's earlier name. For some reason Mabberley uses 1821, a few months after Brown's pre-print, as the date of Jack's publication, instead of the 1820 publication date in Singapore. Confusingly, the record in the International Plant Names Index (IPNI) still has yet another date, "1823?", as it was in the Index Kewensis before Meijer's 1997 work. Mabberley's proposals regarding Brown's name were accepted by institutions, such as the Index Kewensis.
Mabberley also pointed out that the genus Rafflesia was thus first validated by an anonymous report on the meeting published in the Annals of Philosophy in September 1820 (the name was technically an unpublished nomen nudum until this publication). Mabberley claimed the author was Samuel Frederick Gray. However, as that is nowhere stated in the Annals, per Article 46.8 of the code of ICBN, Mabberley was wrong to formally ascribe the validation to Gray. The validation of the name was thus attributed to one Thomas Thomson, the editor of the Annals in 1820, by the IPNI. Mabberley admitted his error in 2017. This Thomson was not the botanist Thomas Thomson, who was three years old in 1820, but his identically named father, a chemist, and Rafflesia is thus the only botanical taxon this man ever published.
Errata
An old Kew webpage claimed that Sophia Hull was present when the specimen was collected and finished the color drawing that Arnold had started of the plant. It also stated that Brown had originally wanted to call the plant genus Arnoldii.
Regional names
It is called kerubut in Sumatra. In the kecamatan ('district') of Pandam Gadang, it is known as cendawan biriang in the Minangkabau language.
Description
Although Rafflesia is a vascular plant, it lacks any observable leaves, stems or even roots, and does not have chlorophyll. It lives as a holoparasite on vines of the genus Tetrastigma (most commonly Tetrastigma augustifolia). Similar to fungi, individuals grow as a mass of thread-like strands of tissue completely embedded within and in intimate contact with surrounding host cells from which nutrients and water are obtained. It can only be seen outside the host plant when it is ready to reproduce; the only part of Rafflesia that is identifiable as distinctly plant-like are the flowers, though even these are unusual since they attain massive proportions, have a reddish-brown colouration, and stink of rotting flesh. According to Sandved, the flower opens with a hissing sound.
The flower of Rafflesia arnoldii grows to a diameter of around one metre (3 ft), weighing up to 11 kilograms (24 lb). These flowers emerge from very large, cabbage-like, maroon or dark brown buds typically about 30 cm (12 in) wide, but the largest (and the largest flower bud ever recorded) found at Mount Sago, Sumatra in May 1956 was 43 cm (17 in) in diameter. Indonesian researchers often refer to the bud as a 'knop' (knob).
Called a "monster flower", Rafflesia arnoldii produces the largest single bloom and can grow up to three feet (one meter) in diameter and weigh up to 11 kilograms (24 lb). The plant is native to the rainforests of Malaysia and Indonesia.
Ecology
Habitat
Rafflesia arnoldii is found in both secondary and primary rainforests.
The only host plant species of R. arnoldii is Tetrastigma leucostaphylum in West Sumatra. Tetrastigma are themselves parasites of a sort, using the strength and upright growth of other surrounding plants to reach the light. The host plants of the host plants – the trees that Tetrastigma uses to climb up to light – are relatively limited in number of species, although they are generally the closest tree to the vine. When it is young, at least at the locations studied in West Sumatra, areas of primary forest, the vine climbs on sapling trees and bushes of Laportea stimulans and Coffea canephora in the undergrowth; in the subcanopy a Campnosperma species is the most important, whereas the only large tree the vine grows in is also Laportea stimulans. Tetrastigma often can completely envelop its host at the subcanopy level, choking out the light to such a degree that the forest floor below the canopy is completely dark – this is apparently preferred by Rafflesia arnoldii, as most knops are found at the darkest locations in the forest. The most common plant associated with Rafflesia arnoldii is the smallish tree Coffea canephora (the well-known robusta coffee), which is actually not native to the area, but was introduced from Africa. It covers most of the undergrowth, with an Importance Value Index (IVI) of over 100%, and is also the main component of the subcanopy with an IVI of 52.74%. The dominant tall tree in these areas is Toona sureni, which has a canopy IVI of 4.97%.
Other important components of the ecosystem around Rafflesia arnoldii plants at this location are, in the undergrowth, the Urticaceae Laportea stimulans (IVI: 55.81%) and Villebrunea rubescens (IVI: 50.10%), as well as the wild cinnamon Cinnamomum burmannii (IVI: 24.33%) and the fig Ficus disticha (IVI: 23.83%). In the subcanopy the main plants are Toona sureni (IVI: 34.11%), Laportea stimulans (IVI: 24.62%), Cinnamomum burmannii (IVI: 18.45%) and Ficus ampelos (IVI: 14.53%). The main trees found in the canopy are, besides the Toona, a Shorea species (IVI: 26.24%), Aglaia argentea (IVI: 25.94%), Ficus fistulosa (IVI: 16.08%) and Macaranga gigantea (IVI: 13.06%).
Rafflesia arnoldii has been found to infect hosts growing in alkaline, neutral and acidic soils. It is not found far from water. It has been found from 490 to 1,024 meters in altitude.
Reproduction
The buds take many months to develop and the flower lasts for just a few days. The flowers are unisexual – either male or female – thus flowers of both sexes are needed for successful pollination.
When Rafflesia is ready to reproduce, a tiny bud forms outside the root or stem of its host and develops over a period of a year. The cabbage-like head that develops eventually opens to reveal the flower. The stigmas or stamens are attached to a spiked disk inside the flower. A foul smell of rotting meat attracts flies and beetles. To pollinate successfully, the flies and/or beetles must visit both the male and female plants, in that order. The fruit produced are round berries filled with numerous minute seeds.
The flies Drosophila colorata, Chrysomya megacephala and Sarcophaga haemorrhoidalis visit the late flowers. Black ants of the genus Euprenolepis may feed on the developing flower buds, perhaps killing them.
Conservation
It has not been assessed for the IUCN Red List, but the conservation status of the Rafflesia arnoldii is currently of concern due to anthropogenic and biological factors. Anthropogenic factors contributing to the decline are primarily deforestation and harvesting; biological factors contributing to the decline include the plant's dioecious nature, limited population, and skewed sex ratio, with the majority of the flowers being male. However, ecotourism is thought to be a main threat to the species. At locations which are regularly visited by tourists the number of flower buds produced per year has decreased.
| Biology and health sciences | Malpighiales | Plants |
1392630 | https://en.wikipedia.org/wiki/Basement%20membrane | Basement membrane | The basement membrane, also known as base membrane, is a thin, pliable sheet-like type of extracellular matrix that provides cell and tissue support and acts as a platform for complex signalling. The basement membrane sits between epithelial tissues including mesothelium and endothelium, and the underlying connective tissue.
Structure
As seen with the electron microscope, the basement membrane is composed of two layers, the basal lamina and the reticular lamina. The underlying connective tissue attaches to the basal lamina with collagen VII anchoring fibrils and fibrillin microfibrils.
The basal lamina layer can further be subdivided into two layers based on their visual appearance in electron microscopy. The lighter-colored layer closer to the epithelium is called the lamina lucida, while the denser-colored layer closer to the connective tissue is called the lamina densa. The electron-dense lamina densa layer is about 30–70 nanometers thick and consists of an underlying network of reticular collagen IV fibrils which average 30 nanometers in diameter and 0.1–2 micrometers in thickness and are coated with the heparan sulfate-rich proteoglycan perlecan. In addition to collagen, this supportive matrix contains intrinsic macromolecular components. The lamina lucida layer is made up of laminin, integrins, entactins (nidogens), and dystroglycans. Integrins are a key component of hemidesmosomes which serve to anchor the epithelium to the underlying basement membrane.
To represent the above in a visually organised manner, the basement membrane is organized as follows:
Epithelial/mesothelial/endothelial tissue (outer layer)
Basement membrane
Basal lamina
Lamina lucida
laminin
integrins (hemidesmosomes)
nidogens
dystroglycans
Lamina densa
collagen IV (coated with perlecan, rich in heparan sulfate)
Attaching proteins (between the basal and reticular laminae)
collagen VII (anchoring fibrils)
fibrillin (microfibrils)
Lamina reticularis
collagen III (as reticular fibers)
Connective tissue (Lamina propria)
Function
The primary function of the basement membrane is to anchor down the epithelium to its loose connective tissue (the dermis or lamina propria) underneath. This is achieved by cell-matrix adhesions through substrate adhesion molecules (SAMs).
The basement membrane acts as a mechanical barrier, preventing malignant cells from invading the deeper tissues. Early stages of malignancy that are thus limited to the epithelial layer by the basement membrane are called carcinoma in situ.
The basement membrane is also essential for angiogenesis (development of new blood vessels). Basement membrane proteins have been found to accelerate differentiation of endothelial cells.
The most notable examples of basement membranes are the glomerular basement membrane of the kidney, formed by the fusion of the basal lamina of the glomerular capillary endothelium with the podocyte basal lamina, and the basement membrane between the lung alveoli and pulmonary capillaries, formed by the fusion of the basal lamina of the alveoli with that of the lung capillaries, which is where gas exchange (the diffusion of oxygen and carbon dioxide) occurs.
As of 2017, other roles for basement membrane include blood filtration and muscle homeostasis. Fractones may be a type of basement membrane, serving as a niche for stem cells.
Clinical significance
Some diseases result from a poorly functioning basement membrane. The cause can be genetic defects, injuries by the body's own immune system, or other mechanisms. Diseases involving basement membranes at multiple locations include:
Genetic defects in the collagen fibers of the basement membrane, including Alport syndrome and Knobloch syndrome
Autoimmune diseases targeting basement membranes. The non-collagenous domain of basement membrane collagen type IV is the autoantigen (target antigen) of autoantibodies in the autoimmune disease Goodpasture's syndrome.
A group of diseases stemming from improper function of the basement membrane zone is united under the name epidermolysis bullosa.
In histopathology, thickened basement membranes are found in several inflammatory diseases, such as lichen sclerosus, systemic lupus erythematosus or dermatomyositis in the skin, or collagenous colitis in the colon.
Evolutionary origin
Basement membranes are found only in diploblastic animals and in homoscleromorph sponges. Some studies found the homoscleromorphs to be the sister group of the diploblasts, implying that the basement membrane originated only once in the history of life. More recent studies, however, have rejected the diploblast–homoscleromorph grouping, so either the other sponges lost the basement membrane (the most probable scenario) or it originated separately in the two groups.
| Biology and health sciences | Tissues | Biology |
30372477 | https://en.wikipedia.org/wiki/Qingdao%20Jiaozhou%20Bay%20Bridge | Qingdao Jiaozhou Bay Bridge | Qingdao Jiaozhou Bay Bridge is a long roadway bridge in Qingdao, Shandong Province, China, which is part of the Jiaozhou Bay Connection Project. The longest continuous segment of the bridge is , making it one of the longest bridges in the world.
Description
The Jiaozhou Bay Bridge transects Jiaozhou Bay, reducing the road distance between Licang District and Huangdao District in Qingdao by , compared with the expressway along the coast of the bay, and cutting travel time by 20 to 30 minutes. The design of the bridge is T-shaped, with the main entry and exit points in Huangdao District and Licang District, Qingdao. A branch to Hongdao Subdistrict is connected by a semi-directional T interchange to the main span. The construction used 450,000 tons of steel and of concrete. The bridge is designed to withstand severe earthquakes, typhoons, and collisions from ships. It is supported by 5,238 concrete piles. The cross section consists of two beams, in total wide, carrying six lanes with two shoulders.
The Jiaozhou Bay Bridge has three navigable sections: the Cangkou Channel Bridge to the west, the Dagu Channel Bridge to the east, and the Hongdao Channel Bridge to the north. The long Cangkou Channel Bridge has the largest span of the entire Jiaozhou Bay Bridge, . The Hongdao Channel Bridge has a span of . The non-navigable sections of the bridge have a span of .
Length
The length of the Jiaozhou Bay Bridge is , of which are over water, representing the aggregate length of three legs of the bridge.
The Jiaozhou Bay Bridge is part of the Jiaozhou Bay Connection Project, which includes overland expressways and the Qingdao Jiaozhou Bay tunnel. The aggregated length of the project is , which is by many sources listed as length of the Jiaozhou Bay Bridge.
The Jiaozhou Bay Connection Project consists of two non-connected sections: a -long expressway that includes the Jiaozhou Bay Bridge and a -long expressway that includes the Qingdao Jiaozhou Bay Tunnel.
The section is further broken into multiple parts:
- Jiaozhou Bay Bridge of which 25.9 km (16.1 mi) is over water in the aggregate of three legs.
- Licang District side land bridge
- Huangdao District side land bridge
- Hongdao Subdistrict connection
Records
After the bridge opened, Guinness World Records listed it at , which made it the longest bridge over water (aggregate length). The Guinness title was taken by the Hong Kong-Zhuhai-Macau Bridge in October 2018.
The bridge builder Shandong Gaosu Group claimed that the Jiaozhou Bay Bridge had the world's first over-sea interchange and the world's largest number of over-sea bored concrete piles.
History
The bridge was the idea of a local official in the Chinese Communist Party who was subsequently dismissed for corruption. It was designed by the Shandong Gaosu Group. It took four years to build, and employed at least 10,000 people. It opened on 30 June 2011 for traffic.
The Qingdao Jiaozhou Bay tunnel opened on the same day as the bridge. It also crosses Jiaozhou Bay, connecting Huangdao District and the city of Qingdao across the narrow mouth of the bay, which is wide. The tunnel travels underground for .
Concerns regarding the bridge's safety were raised when Chinese media reported that the bridge was opened with faulty elements, such as incomplete crash-barriers, missing lighting, and loose nuts on guard-rails, with workers stating that it would take two months before finishing all of the projects related to the bridge. Shao Xinpeng, the bridge's chief engineer, claimed that in spite of the safety report, the bridge was safe and ready for traffic, adding that the problems highlighted in the reports were not major.
The bridge was reported by the official state-run television company CCTV to cost CN¥10 billion (, GB£900 million). Other sources reported costs as high as CN¥55 billion (US$8.8 billion, GB£5.5 billion).
| Technology | Bridges | null |
2829963 | https://en.wikipedia.org/wiki/Airborne%20collision%20avoidance%20system | Airborne collision avoidance system | An airborne collision avoidance system (ACAS, usually pronounced as ay-kas) operates independently of ground-based equipment and air traffic control in warning pilots of the presence of other aircraft that may present a threat of collision. If the risk of collision is imminent, the system recommends a maneuver that will reduce the risk of collision. ACAS standards and recommended practices are mainly defined in annex 10, volume IV, of the Convention on International Civil Aviation. Much of the technology being applied to both military and general aviation today has been undergoing development by NASA and other partners since the 1980s.
A distinction is increasingly being made between ACAS and ASAS (airborne separation assurance system). ACAS is being used to describe short-range systems intended to prevent actual metal-on-metal collisions. In contrast, ASAS is being used to describe longer-range systems used to maintain standard en route separation between aircraft ( horizontal and vertical).
TCAS
As of 2022, the only implementations that meet the ACAS II standards set by ICAO are Versions 7.0 and 7.1 of TCAS II (Traffic Collision Avoidance System) produced by Garmin, Rockwell Collins, Honeywell and ACSS (Aviation Communication & Surveillance Systems; an L-3 Communications and Thales Avionics company).
As of 1973, the United States Federal Aviation Administration (FAA) standard for transponder minimal operational performance, Technical Standard Order (TSO) C74c, contained errors which caused compatibility problems with air traffic control radar beacon system (ATCRBS) radar and with the ability of the Traffic Collision Avoidance System (TCAS) to detect aircraft transponders. The issue, first called "The Terra Problem", has since prompted individual FAA Airworthiness Directives against various transponder manufacturers in an attempt to repair the operational deficiencies and enable newer radars and TCAS systems to operate. Unfortunately, the defect lies in the TSO itself, and the individual corrective actions to transponders have led to significant differences in the logical behavior of transponders by make and mark, as shown by an FAA study of in-situ transponders. In 2009, a new version, TSO C74d, was defined with tighter technical requirements.
AIS-P (ACAS) is a modification that both corrects the transponder deficiencies (the transponder will respond to all varieties of radar and TCAS) and adds an Automatic Independent Surveillance with Privacy augmentation. The AIS-P protocol does not suffer from saturation in high-density traffic, does not interfere with the Air Traffic Control (ATC) radar system or with TCAS, and conforms to the internationally approved Mode S data packet standard. It awaits submission by a member country to ICAO for approval.
Other collision avoidance systems
Modern aircraft can use several types of collision avoidance systems to prevent unintentional contact with other aircraft, obstacles, or the ground.
Aircraft collision avoidance
Some of the systems are designed to avoid collisions with other aircraft and UAVs. They are referred to as "electronic conspicuity" by the UK CAA.
Airborne radar can detect the relative location of other aircraft, and has been in military use since World War II, when it was introduced to help night fighters (such as the de Havilland Mosquito and Messerschmitt Bf 110) locate bombers. While larger civil aircraft carry weather radar, sensitive anti-collision radar is rare in non-military aircraft.
Traffic collision avoidance system (TCAS), the implementation of ACAS, actively interrogates the transponders of other aircraft and negotiates collision-avoidance tactics with them in case of a threat. TCAS systems are relatively expensive, and tend to appear only on larger aircraft. They are effective in avoiding collisions only with other aircraft that are equipped with functioning transponders with altitude reporting.
A Portable Collision Avoidance System (PCAS) is a less expensive, passive version of TCAS designed for general aviation use. PCAS systems do not actively interrogate the transponders of other aircraft, but passively listen to transponder replies triggered by other interrogators. PCAS is subject to the same limitations as TCAS, although its cost is significantly lower.
FLARM is a small-size, low-power device (commonly used in gliders and other light aircraft) which broadcasts its own position and speed vector (as obtained from an integrated GPS) over a license-free ISM-band radio transmission, while listening for other devices based on the same standard. Intelligent motion prediction algorithms predict short-term conflicts and warn the pilot acoustically and visually; a simplified sketch of this kind of closest-point-of-approach prediction is given after this list. FLARM incorporates a high-precision WAAS 16-channel GPS receiver and an integrated low-power radio transceiver. Static obstacles are included in FLARM's database. No warning is given for aircraft that do not carry a FLARM device.
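The short-term conflict prediction used by broadcast-based systems such as FLARM can be illustrated with a constant-velocity closest-point-of-approach calculation. The following is a minimal sketch under simplifying assumptions (straight-line flight, a flat local coordinate frame, and illustrative threshold values); it is not the actual TCAS or FLARM logic.

```python
# Illustrative closest-point-of-approach (CPA) conflict check.
# Simplified sketch; thresholds and the flat-Earth frame are assumptions.
import math

def time_of_closest_approach(rel_pos, rel_vel):
    """Time (s) at which two aircraft moving at constant velocity are closest.
    rel_pos, rel_vel: (x, y, z) of the other aircraft relative to own ship,
    in metres and metres per second."""
    v2 = sum(v * v for v in rel_vel)
    if v2 == 0.0:                      # no relative motion: distance is constant
        return 0.0
    dot = sum(p * v for p, v in zip(rel_pos, rel_vel))
    return max(0.0, -dot / v2)         # clamp: closest approach already passed

def predicts_conflict(rel_pos, rel_vel, horizon_s=20.0, radius_m=100.0):
    """Warn if the predicted miss distance within the look-ahead horizon is small."""
    t = min(time_of_closest_approach(rel_pos, rel_vel), horizon_s)
    miss = math.dist((0.0, 0.0, 0.0),
                     tuple(p + v * t for p, v in zip(rel_pos, rel_vel)))
    return miss < radius_m

# Example: another glider 500 m ahead, converging head-on at 50 m/s.
print(predicts_conflict((500.0, 0.0, 0.0), (-50.0, 0.0, 0.0)))  # True
```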
Terrain collision avoidance
A ground proximity warning system (GPWS), or ground collision warning system (GCWS), uses a radar altimeter to detect proximity to the ground or unusual descent rates. GPWS is common on civil airliners and larger general aviation aircraft.
A terrain awareness and warning system (TAWS) uses a digital terrain map, together with position information from a navigation system such as GPS, to predict whether the aircraft's current flight path could put it in conflict with obstacles such as mountains or high towers that would not be detected by GPWS (which uses only the ground elevation directly beneath the aircraft); a simplified look-ahead check is sketched at the end of this list. Notable examples of this type of technology are the Auto-GCAS (Automatic Ground Collision Avoidance System) and PARS (Pilot Activated Recovery System) installed on the entire USAF fleet of F-16s in 2014.
Synthetic vision provides pilots with a computer-generated simulation of their outside environment for use in low or zero-visibility situations. Information used to present warnings is often taken from GPS, INS, or gyroscopic sensors.
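The look-ahead principle behind TAWS can be illustrated by projecting the current flight path forward and comparing the predicted altitude against a terrain model. The following is a minimal sketch with illustrative parameters (sample interval, horizon, clearance margin) and a hypothetical terrain function; real systems use certified terrain databases and far more elaborate alerting envelopes.

```python
# Minimal look-ahead terrain check, illustrating the TAWS idea.
# Parameters and the terrain model are illustrative assumptions.

def terrain_warning(pos, vel, terrain_elev, horizon_s=60.0, step_s=5.0, margin_m=150.0):
    """pos: (x_m, y_m, alt_m); vel: (vx, vy, vz) in m/s.
    terrain_elev(x, y) -> ground elevation in metres (from a digital terrain map)."""
    t = step_s
    while t <= horizon_s:
        x = pos[0] + vel[0] * t
        y = pos[1] + vel[1] * t
        alt = pos[2] + vel[2] * t
        if alt < terrain_elev(x, y) + margin_m:   # predicted clearance too small
            return True
        t += step_s
    return False

# Example: level flight at 800 m toward rising ground modelled as a simple ramp.
ramp = lambda x, y: max(0.0, 0.2 * (x - 2000.0))  # hypothetical terrain model
print(terrain_warning((0.0, 0.0, 800.0), (100.0, 0.0, 0.0), ramp))  # True
```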
| Technology | Aircraft components | null |
2830083 | https://en.wikipedia.org/wiki/Huronian%20glaciation | Huronian glaciation | The Huronian glaciation (or Makganyene glaciation) was a period in which at least three ice ages occurred during the deposition of the Huronian Supergroup. Deposition of this largely sedimentary succession extended from approximately 2.5 to 2.2 billion years ago (Gya), during the Siderian and Rhyacian periods of the Paleoproterozoic era. Evidence for glaciation is based mainly on the recognition of diamictite interpreted to be of glacial origin. Deposition of the Huronian succession is interpreted to have occurred within a rift basin that evolved into a largely marine passive margin setting. The glacial diamictite deposits within the Huronian are on par in thickness with Quaternary analogs.
Description
The three glacial diamictite-bearing units of the Huronian are, from the oldest to youngest, the Ramsay Lake, Bruce, and Gowganda formations. Although there are other glacial deposits recognized throughout the world at this time, the Huronian is restricted to the region north of Lake Huron, between Sault Ste. Marie, Ontario, and Rouyn-Noranda, Quebec. Other similar deposits are known from elsewhere in North America, as well as Australia and South Africa.
The Huronian glaciation broadly coincides with the Great Oxygenation Event, a time of increased atmospheric oxygen and decreased atmospheric methane. The oxygen reacted with the methane to form carbon dioxide and water, both much weaker greenhouse gases than methane, greatly reducing the greenhouse effect, especially as water vapor readily precipitated out of the air as temperatures dropped. This produced an icehouse climate, possibly compounded by the low solar irradiation and reduced geothermal activity of the time. The combination of increasing free oxygen (which causes oxidative damage to organic compounds) and climatic stress likely caused an extinction event, the first and longest-lasting in Earth's history, which wiped out most of the anaerobe-dominated microbial mats both on the Earth's surface and in shallow seas.
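The net conversion described above is the familiar oxidation of methane:

```latex
\mathrm{CH_4} + 2\,\mathrm{O_2} \longrightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O}
```

This replaced a potent greenhouse gas with two far weaker ones, of which the water readily condensed out of the atmosphere as temperatures fell.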
Discovery and name
In 1907, Arthur Philemon Coleman first inferred a "lower Huronian ice age" from analysis of a geological formation near Lake Huron in Ontario. In his honour, the lower (glacial) member of the Gowganda Formation is referred to as the Coleman member. These rocks have been studied in detail by numerous geologists and are considered to represent the type example of a Paleoproterozoic glaciation.
The confusion of the terms glaciation and ice age has led to the more recent impression that the entire time period represents a single glacial event. The term Huronian describes a lithostratigraphic supergroup and, according to the North American Stratigraphic Code, which defines the proper naming of physical (lithostratigraphic) and temporal (chronostratigraphic) geologic units, should not be used to describe glacial cycles; diachronic or geochronometric units should be used instead.
Geology and climate
The Gowganda Formation (2.3 Gya) contains "the most widespread and most convincing glaciogenic deposits of this era", according to Eyles and Young. In North America, similar-age deposits are exposed in Michigan, the Medicine Bow Mountains, Wyoming, Chibougamau, Quebec, and central Nunavut. Globally, they occur in the Griquatown Basin of South Africa, as well as India and Australia.
The tectonic setting was one of a rifting continental margin. Exposure of new continental crust would have promoted chemical weathering, which pulls CO2 out of the atmosphere and cools the planet by reducing the greenhouse effect.
Popular perception is that one or more of the glaciations may have been snowball Earth events, when all or most of Earth's surface was covered in ice. However, the palaeomagnetic evidence suggesting that ice sheets were present at low latitudes is contested, and the glacial sediments (diamictites) are discontinuous, alternating with carbonate and other sedimentary rocks that indicate temperate climates, providing scant evidence for global glaciation.
Implications
Before the Huronian Ice Age, most organisms were anaerobic, relying on chemosynthesis and retinal-based anoxygenic photosynthesis for the production of biological energy and biocompounds. Around this time, however, cyanobacteria evolved porphyrin-based oxygenic photosynthesis, which produced dioxygen as a waste product. At first, most of this oxygen dissolved in the ocean and was consumed by reactions with reduced species such as surface ferrous compounds, atmospheric methane, and hydrogen sulfide. As cyanobacterial photosynthesis continued, however, the cumulative oxygen saturated this reductive reservoir of the Earth's surface and spilled over as free oxygen that "polluted" the atmosphere, leading to a permanent change in atmospheric chemistry known as the Great Oxygenation Event.
The once-reducing atmosphere, now an oxidizing one, was highly reactive and toxic to the anaerobic biosphere. Furthermore, atmospheric methane was oxidized away to trace levels and replaced by much less powerful greenhouse gases such as carbon dioxide and water vapor, the latter of which also readily precipitated out of the air at low temperatures. Earth's surface temperature dropped significantly, partly because of the reduced greenhouse effect and partly because solar luminosity and/or geothermal activity were lower at that time, leading to an icehouse Earth.
After the combined impact of oxidization and climate change devastated the anaerobic biosphere (then likely dominated by archaeal microbial mats), aerobic organisms capable of oxygen respiration were able to proliferate rapidly and exploit the ecological niches vacated by anaerobes in most environments. The surviving anaerobe colonies were forced to adopt a symbiotic existence among aerobes, with the anaerobes contributing the organic materials that aerobes needed, and the aerobes consuming and thereby "detoxifying" the surroundings of the oxygen molecules lethal to the anaerobes. This might also have caused some anaerobic archaea to begin invaginating their cell membranes into endomembranes in order to shield and protect their cytoplasmic nucleic acids, allowing endosymbiosis with aerobic eubacteria (which eventually became ATP-producing mitochondria); this symbiogenesis contributed to the evolution of eukaryotic organisms during the Proterozoic.
| Physical sciences | Events | Earth science |
2830096 | https://en.wikipedia.org/wiki/Hirnantian%20glaciation | Hirnantian glaciation | The Hirnantian glaciation, also known as the Andean-Saharan glaciation, Early Paleozoic Ice Age (EPIA), the Early Paleozoic Icehouse, the Late Ordovician glaciation, or the end-Ordovician glaciation, occurred during the Paleozoic from approximately 460 Ma to around 420 Ma, during the Late Ordovician and the Silurian period. The major glaciation during this period was formerly thought only to consist of the Hirnantian glaciation itself but has now been recognized as a longer, more gradual event, which began as early as the Darriwilian, and possibly even the Floian. Evidence of this glaciation can be seen in places such as Arabia, North Africa, South Africa, Brazil, Peru, Bolivia, Chile, Argentina, and Wyoming. Isotopic data further indicate that during the Late Ordovician, tropical ocean temperatures were about 5 °C cooler than at present; this would have been a major factor aiding the glaciation process.
The Late Ordovician glaciation is widely considered to be the leading cause of the Late Ordovician mass extinction, and it is the only glacial episode that appears to have coincided with a major mass extinction of nearly 61% of marine life. Estimates of peak ice sheet volume range from 50 to 250 million cubic kilometres, and its duration from 35 million to less than 1 million years. At its height during the Hirnantian, the ice age is believed to have been significantly more extreme than the Last Glacial Maximum occurring during the terminal Pleistocene. Glaciation of the Northern Hemisphere was minimal because a large amount of the land was in the Southern Hemisphere.
Timeline
Pre-Hirnantian glaciations
The earliest evidence for possible glaciation comes from Floian conodont apatite oxygen isotope fluctuations, which display a periodicity characteristic of Milankovitch cycles and have been interpreted as reflecting cyclic waxing and waning of polar ice caps. A speculated glaciation in the middle Darriwilian corresponds to the MDICE positive carbon isotope excursion. Sea level changes likely reflective of glacioeustasy are known from this geologic stage, around 467 Ma. However, there are no known Middle Ordovician glacial deposits that would provide direct geological evidence of glaciation. Isotopic evidence from the Sandbian reveals three possible glaciations: an early Sandbian glaciation, a middle Sandbian glaciation, and a late Sandbian glaciation. Although biostratigraphic dating of the glacial deposits in Gondwana has been problematic, there is evidence suggesting the presence of glaciation by the Sandbian stage (approximately 451–461 Ma). Graptolite distribution during the time interval delineated by the Nemagraptus gracilis graptolite biozone indicates a latitudinal extent of the subtropics and tropics similar to that of today, as evidenced by a steep faunal gradient that is uncharacteristic of greenhouse periods, suggesting that Earth was in a mild icehouse state by the start of the Sandbian, around 460 Ma. Several possible short glaciations occurred during the Katian: three very short glaciations during the early Katian, the Rakvere glaciation during the late early Katian, a middle Katian glaciation, the Early Ashgill glaciation of the early late Katian, and a latest Katian glaciation that was followed by a rapid warming event in the Paraorthograptus pacificus graptolite biozone immediately before the Hirnantian glaciation itself. Evidence of major changes in bottom water formation, which usually indicate a sudden change in global climate, is known from the Katian. Shifts in isotopic ratios of carbon and neodymium that correspond to graptolite biostratigraphy lend further evidence in favour of the existence of glacioeustatic cycles during the Katian, as do conodont apatite δ18O fluctuations from Kentucky and Quebec that likely reflect glacioeustatic sea level changes. However, the existence of glacials during the Katian remains controversial; Katian brachiopod and seawater δ18O values from the Cincinnati Arch indicate ocean temperatures characteristic of a global greenhouse state.
Hirnantian glaciation
At the Katian-Hirnantian boundary, a sudden cooling event caused a rapid expansion of glaciers, resulting in one of the most severe glaciations of the Phanerozoic, an extreme cooling event generally believed to be coincident with the first pulse of the Late Ordovician mass extinction. A δ18O shift occurs at the start of the Hirnantian; the magnitude of this shift (+2–4‰) was extraordinary, and its direction implies glacial cooling and possibly an increase in ice volume. The observed shifts in the δ18O isotopic indicator would require a sea-level fall of 100 meters and a drop of 10 °C in tropical ocean temperatures to have occurred during this glacial episode. Sedimentological data show that Late Ordovician ice sheets glacierized the Al Kufrah Basin. Ice sheets also probably formed continuous ice cover over North Africa and the Arabian Peninsula. In all areas of North Africa where Early Silurian shale occurs, Late Ordovician glaciogenic deposits occur beneath, likely due to the anoxia promoted in these basins.
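As a rough plausibility check, the quoted sea-level and temperature figures can be combined using approximate Pleistocene-derived calibrations (about 0.01‰ of seawater δ18O per metre of sea-level fall and about 0.22‰ of carbonate δ18O per °C of cooling; these calibration values are assumptions used here for illustration, not figures from the cited studies):

```latex
\Delta\delta^{18}\mathrm{O}
\;\approx\;
\underbrace{0.01\,\tfrac{\text{‰}}{\mathrm{m}} \times 100\,\mathrm{m}}_{\text{ice volume}\;\approx\;1\text{‰}}
\;+\;
\underbrace{0.22\,\tfrac{\text{‰}}{{}^{\circ}\mathrm{C}} \times 10\,{}^{\circ}\mathrm{C}}_{\text{cooling}\;\approx\;2.2\text{‰}}
\;\approx\; 3.2\text{‰}
```

This falls within the observed +2–4‰ range, so the quoted sea-level fall and temperature drop are mutually consistent with the isotopic shift.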
At the end of the Hirnantian, an abrupt retreat of glaciers, concurrent with the second pulse of the Late Ordovician mass extinction, occurred, after which Earth returned to a much warmer climate during the Rhuddanian. Late Hirnantian warming was marked by a similarly rapid shift in δ18O towards more negative values, and δ13C values likewise fall sharply at the beginning of the Silurian.
Silurian glaciations
Following the relatively warm Rhuddanian, glacial events occurred during the early and latest Aeronian. A further glaciation occurred from the late Telychian to middle Sheinwoodian. From the early to late Homerian, Earth was in yet another glacial phase. The last major glaciation of the EPIA occurred during the Ludfordian and was associated with the Lau event.
During this period, glaciation is known from Arabia, Sahara, West Africa, the south Amazon, and the Andes, and the centre of glaciation is known to have migrated from the Sahara in the Ordovician (450–440 Ma) to South America in the Silurian (440–420 Ma). According to Eyles and Young, "A major glacial episode at c. 440 Ma, is recorded in Late Ordovician strata (predominantly Ashgillian) in West Africa (Tamadjert Formation of the Sahara), in Morocco (Tindouf Basin) and in west-central Saudi Arabia, all areas at polar latitudes at the time. From the Late Ordovician to the Early Silurian the centre of glaciation moved from northern Africa to southwestern South America." Continental glaciers developed in Africa and eastern Brazil, while alpine glaciers formed in the Andes. Glacio-marine diamictites interbedded with turbidites, shales, mud flows and debris flows, dated as early Silurian (Llandovery), have been found in western South America (Peru, Bolivia and northern Argentina), with a southward extension into northern Argentina and western Paraguay and a probable northern extension into Peru, Ecuador and Colombia.
A major ice age, the Andean-Saharan glaciation, was preceded by the Cryogenian ice ages (720–630 Ma, the Sturtian and Marinoan glaciations), often referred to as Snowball Earth, and followed by the Karoo Ice Age (350–260 Ma).
Evidence
Lithologic
The stratigraphic architecture of the Bighorn Dolomite (which represents the end of the Ordovician period) is consistent with the gradual buildup of glacial ice. The sequences of the Bighorn Dolomite display systematic changes in their component cycles, and these changes are interpreted as recording a shift from a greenhouse climate to a transitional icehouse climate.
Possible causes
CO2 depletion
One of the factors that hindered glaciation during the early Paleozoic was the atmospheric CO2 concentration, which at the time was somewhere between 8 and 20 times pre-industrial levels. However, solar irradiance was significantly lower during the Late Ordovician; 450 million years ago, the solar irradiance of Earth was about 1312.00 Wm−2, compared with 1360.89 Wm−2 in the present day. Furthermore, CO2 concentrations are thought to have dropped significantly in the Hirnantian, which could have induced widespread glaciation during an overall cooling trend. The mechanisms that removed CO2 during this time are not well understood and are still hotly debated, with the radiation of terrestrial plants, enhanced oceanic burial of organic carbon, and a reduction in volcanic outgassing of carbon dioxide all having been proposed. Glaciation could have initiated even with high levels of CO2, but this would have depended strongly on the continental configuration.
Silicate weathering
Long-term silicate weathering is a major mechanism through which CO2 is removed from the atmosphere, converting it into bicarbonate that is stored in marine sediments. This has often been linked to the Taconic Orogeny, a mountain-building event on the east coast of Laurentia (present-day North America). Another hypothesis is that a hypothetical large igneous province in the Katian led to flood basalt volcanism driven by high continental volcanic activity during that period. In the short term, this would have released a large amount of CO2 into the atmosphere, which may explain a warming pulse in the Katian. In the long term, however, the flood basalts would have left behind plains of basaltic rock, replacing exposures of granitic rock. Basaltic rocks weather substantially faster than granitic rocks, which would have removed CO2 from the atmosphere at a much faster rate than before the volcanic activity. CO2 levels could also have decreased due to accelerated silicate weathering caused by the expansion of terrestrial non-vascular plants; vascular plants only appeared some 15 Ma after the glaciation.
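The underlying chemistry, using wollastonite (CaSiO3) as the conventional idealized silicate, is the carbonate–silicate (Urey) reaction pair:

```latex
\mathrm{CaSiO_3} + 2\,\mathrm{CO_2} + \mathrm{H_2O} \longrightarrow \mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^{-}} + \mathrm{SiO_2}
\qquad
\mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^{-}} \longrightarrow \mathrm{CaCO_3} + \mathrm{CO_2} + \mathrm{H_2O}
```

The net effect is that one mole of CO2 is locked away in marine carbonate for every mole of silicate weathered, which is why faster-weathering basaltic terrain acts as a stronger long-term carbon sink than granitic terrain.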
Organic carbon burial
Isotopic evidence points to a global Hirnantian positive shift in δ13C at nearly the same time as the positive shift in marine carbonate δ18O. This shift is known as the Hirnantian Isotopic Carbon Excursion (HICE). The positive shift in δ13C implies a change in the carbon cycle toward greater burial of organic carbon, though some researchers instead interpret this δ13C change as being caused by increased weathering of carbonate platforms exposed by sea level fall. Enhanced organic carbon burial would have decreased atmospheric CO2 levels and weakened the greenhouse effect, allowing glaciation to occur more readily.
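A standard first-order isotope mass balance shows why a positive δ13C shift is read as enhanced organic burial. With input carbon at roughly δin ≈ −5‰ (the mantle value) and a photosynthetic fractionation of ε ≈ 25‰ between carbonate and organic carbon (typical textbook values, used here for illustration):

```latex
\delta_{\mathrm{in}} = f_{\mathrm{org}}\,\delta_{\mathrm{org}} + (1 - f_{\mathrm{org}})\,\delta_{\mathrm{carb}},
\qquad
\delta_{\mathrm{org}} = \delta_{\mathrm{carb}} - \varepsilon
\;\;\Longrightarrow\;\;
f_{\mathrm{org}} = \frac{\delta_{\mathrm{carb}} - \delta_{\mathrm{in}}}{\varepsilon}
```

Under these assumptions, each +1‰ rise in carbonate δ13C corresponds to roughly a 4-percentage-point increase in the fraction of carbon buried as organic matter.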
Gamma-ray burst
A gamma-ray burst (GRB) has been suggested by some researchers as the cause of the abrupt glaciation at the beginning of the Hirnantian. A ten-second GRB occurring within two kiloparsecs of Earth would have delivered a fluence of about 100 kilojoules per square metre to the planet. This would have resulted in large amounts of nitric acid raining down on Earth's surface in the aftermath of the burst, causing blooms of nitrate-limited photosynthesisers that would have sequestered large amounts of carbon dioxide from the atmosphere. Additionally, the GRB would have initiated a major depletion of ozone, another potent greenhouse gas, through its reaction with nitric oxide produced when the GRB dissociated diatomic nitrogen and the resulting nitrogen atoms reacted with oxygen.
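As a rough order-of-magnitude check (an illustrative calculation assuming isotropic emission, not taken from the cited work), the quoted fluence and distance imply an isotropic-equivalent burst energy of:

```latex
E_{\mathrm{iso}} = 4\pi d^{2} F
\approx 4\pi \left(6.2\times10^{19}\ \mathrm{m}\right)^{2} \times 10^{5}\ \mathrm{J\,m^{-2}}
\approx 5\times10^{45}\ \mathrm{J} \;\; (\sim 5\times10^{52}\ \mathrm{erg})
```

This sits comfortably within the ~10^51–10^54 erg range of isotropic-equivalent energies inferred for long gamma-ray bursts, so the scenario is at least energetically self-consistent.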
Asteroid impact
Ordovician meteor event
The breakup of the L-chondrite parent body caused a rain of extraterrestrial material onto the Earth called the Ordovician meteor event. This event increased stratospheric dust by 3 or 4 orders of magnitude and may have triggered the ice age by reflecting sunlight back into space.
Deniliquin impact structure
A 2023 paper has proposed that the Hirnantian glaciation could have come about due to an impact winter generated by the impact that formed the Deniliquin multiple-ring feature in what is now southeastern Australia, although this hypothesis currently remains untested.
Debris ring
A 2024 study suggests that, rather than completely breaking up or striking Earth outright, the L-chondrite parent body may have had a near-miss encounter with Earth, with Earth's gravitational pull tearing part of it away. This debris may have formed a planetary ring around Earth, and material falling from the ring may have shaded the planet from the sun's rays, triggering significant cooling. Evidence for this comes from the fact that craters dating from the Ordovician meteor event appear to cluster in a distinctive band around the Earth instead of being randomly scattered, a pattern consistent with debris falling to Earth from a ring. This ring may have lasted for nearly 40 million years.
Volcanic aerosols
Although volcanic activity often leads to warming through the release of greenhouse gases, it may also lead to cooling via the production of aerosols, light-blocking particles. There is good evidence for elevated volcanic activity through the Hirnantian, based on anomalously high concentrations of mercury (Hg) in many areas. Sulphur dioxide (SO2) and other sulphurous volcanic gases are converted into sulphate aerosols in the stratosphere, and short, periodic large igneous province eruptions may be able to account for cooling in this way. Although there is no direct evidence for a large igneous province during the Hirnantian, volcanism could still be a major factor. Explosive volcanic eruptions, which regularly send debris and volatiles into the stratosphere, would be even more effective at producing sulfate aerosols. Ash beds are common in the Late Ordovician, and Hirnantian pyrite records sulphur isotope anomalies consistent with stratospheric eruptions. The enormous megaeruption that formed the Deicke bentonite layer in particular has been linked to global cooling, because it coincides with a major positive oxygen isotope excursion and with the high sulphur concentration observed in its bentonite layer.
Sea level change
One of the possible causes for the temperature drop during this period is a drop in sea level. Sea level must drop prior to the initiation of extensive ice sheets in order for it to be a possible trigger. A drop in sea level allows more land to become available for ice sheet growth. There is wide debate on the timing of sea level change, but there is some evidence that a sea level drop started before the Ashgillian, which would have made it a contributing factor to glaciation.
Palaeogeography
The possible setup of the paleogeography during the period from 460 Ma to 440 Ma falls in a range between the Caradocian and the Ashgillian. The choice of setup is important, because the Caradocian setup is more likely to produce glacial ice at high CO2 concentrations, and the Ashgillian is more likely to produce glacial ice at low CO2 concentrations.
The height of the land mass above sea level also plays an important role, especially after ice sheets have been established. A higher elevation allows ice sheets to remain with more stability, but a lower elevation allows ice sheets to develop more readily. The Caradocian is considered to have a lower surface elevation, and though it would be better for initiation during high CO2, it would have a harder time maintaining glacial coverage.
From what is known about tectonic movement, the time span required for the southward movement of Gondwana toward the South Pole appears too long to have triggered this glaciation: tectonic movement takes several million years, while the glaciation seems to have developed in less than 1 million years. However, since estimates of the duration of the glaciation range from less than 1 million to 35 million years, it remains possible that tectonic movement triggered this glacial period. Alternatively, true polar wander (TPW), rather than conventional plate motion, may have been responsible for the initiation of the Hirnantian glaciation. Palaeomagnetic data from between 450 and 440 Ma indicate a TPW of around ~50° occurring at a maximum speed of ~55 cm per year, which better explains the rapid motion of the continents than conventional plate tectonics does.
Poleward ocean heat transport
Ocean heat transport is a major driver in the warming of the poles, taking warm water from the equator and distributing it to higher latitudes. A weakening of this heat transport may have allowed the poles to cool enough to form ice under high CO2 conditions. Due to the paleogeographic configuration of the continents, global ocean heat transport is thought to have been stronger in the Late Ordovician. However, research shows that in order for glaciation to occur, poleward heat transport had to be lower, which creates a discrepancy in what is known.
Orbital parameters
Orbital parameters may have acted in conjunction with some of the above factors to help start glaciation. Variations in the Earth's precession and eccentricity could have tipped the balance toward the initiation of glaciation. The orbit at this time is thought to have been in a cold-summer configuration for the Southern Hemisphere: the orbital precession was such that during southern summer, when that hemisphere is tilted toward the sun, the Earth was farthest from the sun, and the orbital eccentricity was such that Earth's orbit was more elongated, which would enhance the effect of precession.
Coupled models have shown that in order to maintain ice at the pole in the Southern Hemisphere, the Earth would have to be in a cold-summer configuration. The glaciation was most likely to start during a cold-summer period, because this configuration enhances the chance of snow and ice surviving through the summer.
End of the event
The cause of the end of the Late Ordovician glaciation is a matter of intense research, but evidence shows that the deglaciation in the terminal Hirnantian may have occurred abruptly, as Silurian strata mark a significant change from the glacial deposits left during the Late Ordovician. Though the Hirnantian glaciation ended rapidly, milder glaciations continued throughout the subsequent Silurian period, with the last glacial phase occurring in the Late Silurian.
Ice collapse
One of the possible causes for the end of the Hirnantian glaciation is that during the glacial maximum the ice advanced too far and began collapsing. The ice sheet initially stabilized once it reached as far north as Ghat, Libya, where it developed a large proglacial fan-delta system. A glaciotectonic fold-and-thrust belt formed from repeated small-scale fluctuations of the ice margin and eventually led to ice-sheet collapse and retreat of the ice to south of Ghat. Once stabilized south of Ghat, the ice sheet began advancing north again. Each cycle left the stable margin farther south, which led to further retreat and a progressive collapse of glacial conditions. This recursion allowed the ice sheet to melt and sea level to rise. The hypothesis is supported by glacial deposits and large landforms found around Ghat, Libya, which is part of the Murzuq Basin.
CO2
As the ice sheets grew, the weathering of the silicate and basaltic rocks important to carbon sequestration (the silicates through the carbonate–silicate cycle, the basalts through the formation of calcium carbonate) decreased, which caused CO2 levels to rise again; this in turn helped push deglaciation. Deglaciation re-exposed silicate and basaltic rock to the air, allowing weathering, and thus CO2 drawdown, to resume, which caused glaciation to occur again.
Significance
Even before the mass extinction at the end of the Ordovician, which resulted in a significant drop in chitinozoan diversity and abundance, the biodiversity of chitinozoans was adversely impacted by the onset of the Andean-Saharan glaciation. Following a peak in diversity in the late Darriwilian, chitinozoans declined in diversity as the Late Ordovician progressed. An exception to this declining trend of chitinozoan diversity was exhibited in Laurentia due to its low latitude position and warmer climate.
The Late Ordovician glaciation coincided with the second largest of the five major extinction events, known as the Late Ordovician mass extinction, and is the only known glaciation to have occurred alongside a mass extinction event. The extinction consisted of two discrete pulses. The first pulse is thought to have taken place because of the rapid cooling and the increased oxygenation of the water column. This first pulse was the larger of the two and caused the extinction of most of the marine animal species that existed in the shallow and deep oceans. The second pulse was associated with strong sea level rise; because atmospheric oxygen levels were at or below 50% of present-day levels, anoxic waters would have been widespread, and this anoxia would have killed off many of the survivors of the first extinction pulse. In all, the Late Ordovician extinction event saw the loss of 85% of marine animal species and 26% of animal families.
The deglaciation at the end of the Homerian glacial interval was coeval with the first major radiation of trilete spore-producing plants, heralding the dawn of the Silurian-Devonian Terrestrial Revolution. The later middle Ludfordian glaciation caused a sea level drop that created vast areas of new terrestrial habitats that were promptly colonised by land plants, further facilitating their diversification. The warming during the Pridoli that marked the end of the Andean-Saharan glaciation saw further floral expansion.
| Physical sciences | Events | Earth science |
2831333 | https://en.wikipedia.org/wiki/Postharvest | Postharvest | In agriculture, postharvest handling is the stage of crop production immediately following harvest, including cooling, cleaning, sorting and packing. The instant a crop is removed from the ground, or separated from its parent plant, it begins to deteriorate. Postharvest treatment largely determines final quality, whether a crop is sold for fresh consumption, or used as an ingredient in a processed food product.
Goals
The most important goals of post-harvest handling are keeping the product cool, to avoid moisture loss and slow down undesirable chemical changes, and avoiding physical damage such as bruising, to delay spoilage. Sanitation is also an important factor, to reduce the possibility of pathogens that could be carried by fresh produce, for example, as residue from contaminated washing water.
After the field, post-harvest processing is usually continued in a packing house. This can be a simple shed, providing shade and running water, or a large-scale, sophisticated, mechanised facility, with conveyor belts, automated sorting and packing stations, walk-in coolers and the like. In mechanised harvesting, processing may also begin as part of the actual harvest process, with initial cleaning and sorting performed by the harvesting machinery.
Initial post-harvest storage conditions are critical to maintaining quality. Each crop has an optimum range of storage temperature and humidity. Also, certain crops cannot be effectively stored together, as unwanted chemical interactions can result. Various methods of high-speed cooling, and sophisticated refrigerated and atmosphere-controlled environments, are employed to prolong freshness, particularly in large-scale operations.
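To illustrate how such storage rules might be encoded in a packing-house system, the sketch below pairs crops only when their temperature ranges overlap and no ethylene producer is stored with an ethylene-sensitive crop. All crop names, temperature ranges, and ethylene attributes are illustrative placeholders, not authoritative horticultural data.

```python
# Sketch of storage-compatibility rules; all values below are illustrative
# placeholders, not authoritative horticultural data.

CROPS = {
    # name: (min_temp_C, max_temp_C, produces_ethylene, ethylene_sensitive)
    "apple":   (0.0,  4.0,  True,  False),
    "banana":  (13.0, 15.0, True,  True),
    "lettuce": (0.0,  2.0,  False, True),
}

def can_store_together(a, b):
    """Crops are compatible if their temperature ranges overlap and no
    ethylene producer is paired with an ethylene-sensitive crop."""
    lo_a, hi_a, prod_a, sens_a = CROPS[a]
    lo_b, hi_b, prod_b, sens_b = CROPS[b]
    temps_overlap = max(lo_a, lo_b) <= min(hi_a, hi_b)
    ethylene_conflict = (prod_a and sens_b) or (prod_b and sens_a)
    return temps_overlap and not ethylene_conflict

print(can_store_together("apple", "lettuce"))   # False: ethylene conflict
print(can_store_together("banana", "lettuce"))  # False: no temperature overlap
```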
Postharvest shelf life
Once harvested, vegetables and fruits are subject to the active process of degradation. Numerous biochemical processes continuously change the original composition of the crop until it becomes unmarketable. The period during which consumption is considered acceptable is defined as the time of "postharvest shelf life".
Postharvest shelf life is typically determined by objective methods that assess the overall appearance, taste, flavor, and texture of the commodity. These methods usually combine sensorial, biochemical, mechanical, and colorimetric (optical) measurements. A recent study attempted, without success, to identify biochemical markers and fingerprint methods as indices of freshness.
Postharvest physiology
Postharvest physiology is the scientific study of the plant physiology of living plant tissues after picking. It has direct applications to postharvest handling in establishing the storage and transport conditions that best prolong shelf life.
An example of the importance of the field to post-harvest handling is the discovery that the ripening of fruit can be delayed, and its storage thus prolonged, by suppressing fruit tissue respiration. This insight allowed scientists to bring their knowledge of the fundamental principles and mechanisms of respiration to bear, leading to post-harvest storage techniques such as cold storage, gaseous storage, and waxy skin coatings.
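A commonly used rule of thumb behind such storage techniques is the Q10 temperature coefficient for respiration (the exact Q10 varies by crop and temperature range; the factor of 2 used below is illustrative):

```latex
R(T) = R(T_{0})\, Q_{10}^{(T - T_{0})/10}, \qquad Q_{10} \approx 2\text{–}3
```

With Q10 = 2, cooling produce from 20 °C to 0 °C cuts the respiration rate by a factor of 2² = 4; since shelf life scales roughly inversely with respiration rate, it is extended by a similar factor.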
| Technology | Agronomical techniques | null |
3794290 | https://en.wikipedia.org/wiki/Mantella | Mantella | Mantella (also known as golden frogs or Malagasy poison frogs) are a prominent genus of aposematic frogs in the family Mantellidae, endemic to the island of Madagascar. Members of Mantella are diurnal and terrestrial, with bright aposematic coloration or cryptic markings.
Natural history
Mantella are an example of convergent evolution—the independent evolution of a similar trait with species of a different lineage—with the Latin American family Dendrobatidae in size, appearance, and some behavioral characteristics. During the description of the first specimens from 1866 to 1872, Alfred Grandidier described both the brown mantella (Mantella betsileo) and Malagasy mantella (Mantella madagascariensis) and placed them within the genus Dendrobates based on their close resemblance.
This placement was heavily debated until 1882, when George Albert Boulenger created the genus Mantella after describing Cowan's mantella (Mantella cowanii); he went on to describe Baron's mantella (Mantella baroni) in 1888. M. baroni was named after the gentleman who collected the specimens, Rev. Richard Baron, a missionary and botanist living in Madagascar. Baron was also interested in geology and herpetology, collecting many specimens during his extensive expeditions across the country. This species is remarkably similar in coloration to M. madagascariensis, except for the ventral/underside markings. In 1889, after the description of M. baroni, the French naturalist Alexandre Thominot described Phrynomantis maculatus, with its type locality on Réunion Island. However, this locality was later corrected to the off-shore Malagasy islands of Nosy Bé and Nosy Komba, and P. maculatus was synonymized with M. baroni.
The genus remained within Dendrobatidae until the late 19th century. The Royal Natural History (1893) by Richard Lydekker included the genus Mantella as one of two genera representing Dendrobatidae, saying that they could be "distinguished by the tip of the tongue being notched; while in Dendrobates of Tropical America the tongue is entire."
During the first quarter of the 20th century, another three species of Mantella were described, including the golden mantella (Mantella aurantiaca), by the French herpetologist François Mocquard in 1900. In his work "Synopsis des familles, genres et espèces des reptiles écailleux et des batraciens de Madagascar", published in 1909, Mocquard gave a detailed description of Mantella and the species within the genus. Within the document, six species are described, including one unusual description of Mantella attemsi, described in 1901 by Franz Josef Maria Werner, an Austrian zoologist and explorer. Mocquard's work describes M. attemsi as follows: "First digit extends as far as the second. [Latero-dorsal fold] present, starting at the rear of the upper eyelid. Skin very porous, slightly rough on the back and the head, stomach side smooth; lower back of the legs very rough. Back a dark red-brown, rest of the body black." This species was later synonymised with M. betsileo.
Description
Species of this genus are small, varying in length between . Most Mantella species are sexually dimorphic in size, with females being larger than males. Mantella vary in shape from streamlined to plump, rounded bodies, with skin that is either smooth or granular. They have small, angular heads, with large eyes that are either entirely dark or have lighter coloration around the edge of the iris. Mantella have a very distinct tympanum. The tips or discs of the fingers are slightly enlarged, though those of the climbing mantella (Mantella laevigata) are distinctly larger than in other members of the genus. They have four fingers on each forelimb and five toes on each hindlimb; some species have webbed digits, while others do not. The tibiotarsal articulation reaches roughly between the shoulder and the nostrils.
Many species of Mantella are similar to the neotropical family Dendrobatidae in their use of aposematism (from Greek ἀπό apo away, σῆμα sema sign), a defense mechanism that uses dramatic coloration to deter potential predators. Coloration and markings vary between species, with combinations of green, red, orange, yellow, blue, brown, white and black. These colorations are often evidence that the specimen produces toxic, pharmacologically active alkaloid secretions. There are significant similarities between a few species of Mantella and Dendrobatidae, notably the golden mantella (Mantella aurantiaca) and the golden poison frog (Phyllobates terribilis). Cowan's mantella (Mantella cowanii) and certain variations of the Harlequin poison frog (Oophaga histrionica) are also very similar in coloration. Most members of the genus also exhibit aposematism on the ventral region, excluding the golden mantella and black-eared mantella (Mantella milotympanum). The venter is normally uniform black, dark grey, or brown and is often marked with blueish or white spots, flecks, or blotches. Similar blueish to white markings, in the form of either spots or a continuous horseshoe-shaped marking, occur on the vocal sac. These characteristics can be used to distinguish between species, such as Baron's mantella (Mantella baroni) and the Malagasy mantella (Mantella madagascariensis), when locality data is unavailable.
Mantella show a variety in alkaloid profiles between individual frogs of the Ranomafana region. These same alkaloids have been found to be sequestered by certain insects. It has also been observed that Mantella retain alkaloids in their skin for years in captivity. This, combined with analyses of stomach contents and diet, suggests that members of Mantella obtain at least some of their alkaloids from arthropod prey.
Distribution
Mantella are endemic to the island of Madagascar and its smaller coastal islands ("Nosy" in Malagasy). They inhabit a wide variety of different habitat types including primary rainforests, secondary rainforests, swamps, bamboo forests, semi-arid streambeds, slow moving forest streams, seasonal streams, montane grassland savannah, and wet canyons.
Some members of the genus such as Ebenau's mantella (Mantella ebenaui), the brown mantella (Mantella betsileo), and Cowan's mantella (Mantella cowanii) are highly adaptable and have been reported in a wide variety of habitats. On the island of Nosy Boraha (Sainte Marie), M. ebenaui have been found living in garbage dumps, feeding on flies. Similar behavior has been reported in western Madagascar, with M. betsileo inhabiting rubbish piles behind human dwellings.
Locality variations
There are several populations of Mantella species that exhibit unusual coloration, some of which are intermediates between species living in sympatry. For example, there are populations of yellow mantella (Mantella crocea) and black-eared mantella (Mantella milotympanum) found in Fierenana, Andriabe, Ambohitantely Reserve and Savakoanina that have green, red and yellow colourations. This often makes it difficult to distinguish between the two species.
Populations of Baron's mantella (Mantella baroni) have also been reported at Pic d'Ivohibe Reserve, being almost entirely green in coloration with black patches and spotting, and lacking their distinctive orange and irregular black crossbands. These specimens are referred to as Mantella aff. baroni.
Malagasy mantella (Mantella madagascariensis), a species similar in appearance to M. baroni, is also notably variable among different localities. Niagarakely is one such locality within the Anosibe An'ala District of the Alaotra-Mangoro Region. Here, M. madagascariensis exhibit highly broken yellow/green and mottled black dorsal coloration.
Species
There are currently 16 species of Mantella, with five recognized species groups. Most species are easily identifiable by their color patterns, although there are a number of locality variations with an uncertain taxonomic status.
Threats
Several species in the genus are threatened because of habitat loss (due to subsistence agriculture, timber extraction and charcoal production, fires, draining of wetlands, the spread of invasive eucalyptus, and expanding human settlements), mining, hybridization and over-collection for the international pet trade. As a result of these threats, various Mantella sp. are listed as least concern, near threatened, vulnerable, endangered, and critically endangered by the IUCN Red List of Threatened Species.
Species in this genus have tested positive for Batrachochytrium dendrobatidis (Bd). As yet, no negative effects have been observed within amphibian populations in Madagascar, suggesting that the Bd strain has a low virulence level, but it should be closely monitored.
Gallery
| Biology and health sciences | Frogs and toads | Animals |
3796570 | https://en.wikipedia.org/wiki/Ficus%20sycomorus | Ficus sycomorus | Ficus sycomorus, called the sycamore fig or the fig-mulberry (because the leaves resemble those of the mulberry), sycamore, or sycomore, is a fig species that has been cultivated since ancient times.
Etymology and naming
The specific name came into English in the 14th century as sicamour, derived from Old French sagremore, sicamor. This in turn derives from Latin , from Ancient Greek () 'fig-mulberry'. The Greek name may be from the Greek tree-names 'fig' and 'mulberry', or it may derive from the Hebrew name for the mulberry, .
The name sycamore, spelled with an A, has also been used for unrelated trees: the great maple, Acer pseudoplatanus, and plane trees, Platanus. The spelling "sycomore", with an O rather than an A as the second vowel, is, if used, specific to Ficus sycomorus.
Distribution
Ficus sycomorus is native to Africa south of the Sahel and north of the Tropic of Capricorn, excluding the central-west rainforest areas. It grows naturally in Lebanon; in the southern Arabian Peninsula; in Cyprus; in very localised areas in Madagascar; and in Israel, Palestine and Egypt. In its native habitat, the tree is usually found in rich soils along rivers and in mixed woodlands.
Description
Ficus sycomorus grows to 20 m tall and has a considerable spread, with a dense round crown of spreading branches. The leaves are heart-shaped with a round apex, 14 cm long by 10 cm wide, and arranged spirally around the twig. They are dark green above and lighter with prominent yellow veins below, and both surfaces are rough to the touch. The petiole is 0.5–3 cm long and pubescent. The fruit is a large edible fig, 2–3 cm in diameter, ripening from buff-green to yellow or red. They are borne in thick clusters on long branchlets or the leaf axil. Flowering and fruiting occurs year-round, peaking from July to December. The bark is green-yellow to orange and exfoliates in papery strips to reveal the yellow inner bark. Like all other figs, it contains a latex.
The fruit is produced year round, starting in April or a bit later depending on variety, and continuing into winter. It is sometimes separated into five successive "crops".
Cultivation
Two major varieties are known in Egypt. Roumi (also called Falaki or Turki), which has more horizontally spread branches, stouter shoots and petioles, more densely spaced leaves that are wider than they are long, and larger, flatter, broad pink fruits; and Kelabi (also called Arabi or Beledi), which has more vertical branches, is more slender, has smaller leaves and has smaller yellowish pear shaped fruits.
In modern history, many Egyptians would once a year (on the day of a particular saint) make a ring of bruises and cuts around the base of their sycamore trees.
According to botanists Daniel Zohary and Maria Hopf, cultivation of this species was "almost exclusively" by the ancient Egyptians. Remains of F. sycomorus begin to appear in predynastic times and occur in quantity from the start of the third millennium BC. It was the ancient Egyptian tree of life. Zohary and Hopf note that "the fruit and the timber, and sometimes even the twigs, are richly represented in the tombs of the Egyptian Early, Middle and Late Kingdoms." In numerous cases the parched fruiting bodies, known as sycons, "bear characteristic gashing marks indicating that this art, which induces ripening, was practised in Egypt in ancient times."
Although this species of fig requires the presence of the symbiotic wasp Ceratosolen arabicus to reproduce sexually, and this insect is extinct in Egypt, Zohary and Hopf have no doubt that Egypt was "the principal area of sycamore fig development." Some of the caskets of mummies in Egypt are made from the wood of this tree. In tropical areas where the wasp is common, complex mini-ecosystems involving the wasp, nematodes, other parasitic wasps, and various larger predators revolve around the life cycle of the fig. The trees' random production of fruit in such environments assures its constant attendance by the insects and animals which form this ecosystem.
Sycamores were often planted around artificial pools in ancient Egyptian gardens.
The sycamore tree was brought to Israel by the Philistines during the Iron Age, along with the opium poppy and cumin. These sycamore trees used to be numerous in western Beirut, lending their name to the neighborhood of Gemmayzeh ("sycamore fig"). However, the trees have largely disappeared from this area.
Gardens
In the Near East F. sycomorus is an orchard and ornamental tree of great importance and extensive use. It has wide-spreading branches and affords shade.
In religion
Judaism and Christianity
In the Hebrew Bible, the sycomore is mentioned seven times (Strong's number 8256) and once in the New Testament (Strong's number 4809). It was a popular and valuable fruit tree in Jericho and Canaan.
In El Matareya, there is a sycamore known as the Tree of the Virgin, which serves as a pilgrimage site. It is not a single continuous tree: when the tree standing on the spot dies, a new one is planted from cuttings of the old one. It is said that the Holy Family took refuge in this tree. The Coptic pope Theophilus also recounted that Joseph had a walking stick, which the infant Jesus broke. When Joseph buried the pieces of the stick, a sycamore grew forth and provided shelter.
Other religions
In Ancient Egypt, the sycamore was associated with the goddesses Hathor, Isis, and Nut. In the case of the latter, prayers exist referring to the "sycamore of Nut" and asking for water and breath. These goddesses were sometimes depicted as trees, sometimes standing in front of them with vessels of water, or sometimes as a tree with human body parts, such as an arm or breast. It was the most significant life-giving tree depicted in ancient Egypt. Sycamores are referenced in ancient Egyptian love poetry as a meeting place for lovers. There are references in funerary contexts to twin sycamores of turquoise from which Ra comes forth, indicating that they were thought to face east or to be located on the eastern horizon.
In modern Egyptian folklore, the sycamore retains an association with mysticism and magic. In the story "It Serves Me Right!", it is used to represent the Tree of Lifespans. The fruit from this tree dries up at the end of a life, but is fresh when one still has more life to live. Therefore, the inhabitants of a land found at the bottom of a well in the story only eat the dry, bad sycamore fruits and leave the good ones alone.
In Kikuyu religion, the sycomore is a sacred tree. All sacrifices to Ngai (or Murungu), the supreme creator, were performed under the tree. Whenever a mugumo tree fell, it symbolised a bad omen, and rituals had to be performed by elders in the society. Some of the ceremonies carried out under the mugumo tree are still observed.
| Biology and health sciences | Rosales | null |
3797203 | https://en.wikipedia.org/wiki/Angular%20momentum%20operator | Angular momentum operator | In quantum mechanics, the angular momentum operator is one of several related operators analogous to classical angular momentum. The angular momentum operator plays a central role in the theory of atomic and molecular physics and other quantum problems involving rotational symmetry. Being an observable, its eigenfunctions represent the distinguishable physical states of a system's angular momentum, and the corresponding eigenvalues the observable experimental values. When applied to a mathematical representation of the state of a system, the operator yields the same state multiplied by its angular momentum value if the state is an eigenstate (as per the eigenstate/eigenvalue equation). In both classical and quantum mechanical systems, angular momentum (together with linear momentum and energy) is one of the three fundamental properties of motion.
There are several angular momentum operators: total angular momentum (usually denoted J), orbital angular momentum (usually denoted L), and spin angular momentum (spin for short, usually denoted S). The term angular momentum operator can (confusingly) refer to either the total or the orbital angular momentum. Total angular momentum is always conserved; see Noether's theorem.
Overview
In quantum mechanics, angular momentum can refer to one of three different, but related things.
Orbital angular momentum
The classical definition of angular momentum is L = r × p. The quantum-mechanical counterparts of these objects share the same relationship:
L = r × p
where r is the quantum position operator, p is the quantum momentum operator, × is the cross product, and L is the orbital angular momentum operator. L (just like p and r) is a vector operator (a vector whose components are operators), i.e. L = (Lx, Ly, Lz), where Lx, Ly, Lz are three different quantum-mechanical operators.
In the special case of a single particle with no electric charge and no spin, the orbital angular momentum operator can be written in the position basis as:
L = −iħ (r × ∇)
where ∇ is the vector differential operator, del.
Spin angular momentum
There is another type of angular momentum, called spin angular momentum (more often shortened to spin), represented by the spin operator . Spin is often depicted as a particle literally spinning around an axis, but this is only a metaphor: the closest classical analog is based on wave circulation. All elementary particles have a characteristic spin (scalar bosons have zero spin). For example, electrons always have "spin 1/2" while photons always have "spin 1" (details below).
Total angular momentum
Finally, there is total angular momentum J, which combines both the spin and orbital angular momentum of a particle or system: J = L + S.
Conservation of angular momentum states that J for a closed system, or J for the whole universe, is conserved. However, L and S are not generally conserved. For example, the spin–orbit interaction allows angular momentum to transfer back and forth between L and S, with the total J remaining constant.
Commutation relations
Commutation relations between components
The orbital angular momentum operator is a vector operator, meaning it can be written in terms of its vector components L = (Lx, Ly, Lz). The components have the following commutation relations with each other:
[Lx, Ly] = iħ Lz,   [Ly, Lz] = iħ Lx,   [Lz, Lx] = iħ Ly,
where [X, Y] denotes the commutator XY − YX.
This can be written generally as
[Ll, Lm] = iħ Σn εlmn Ln,
where l, m, n are the component indices (1 for x, 2 for y, 3 for z), and εlmn denotes the Levi-Civita symbol.
A compact expression as one vector equation is also possible: L × L = iħ L.
The commutation relations can be proved as a direct consequence of the canonical commutation relations [xl, pm] = iħ δlm, where δlm is the Kronecker delta.
There is an analogous relationship in classical physics:
{Ll, Lm} = εlmn Ln
where Ln is a component of the classical angular momentum, and { , } is the Poisson bracket.
The same commutation relations apply for the other angular momentum operators (spin and total angular momentum): [Sl, Sm] = iħ εlmn Sn and [Jl, Jm] = iħ εlmn Jn.
These can be assumed to hold in analogy with L. Alternatively, they can be derived as discussed below.
These commutation relations mean that L has the mathematical structure of a Lie algebra, and the εlmn are its structure constants. In this case, the Lie algebra is SU(2) or SO(3) in physics notation (su(2) or so(3) respectively in mathematics notation), i.e. the Lie algebra associated with rotations in three dimensions. The same is true of J and S. The reason is discussed below. These commutation relations are relevant for measurement and uncertainty, as discussed further below.
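These relations are easy to check numerically. The following is a minimal sketch (an illustration, not part of the original article) using Python with NumPy, representing the spin-1/2 operators by the Pauli matrices and working in units where ħ = 1:

    import numpy as np

    hbar = 1.0  # work in units where the reduced Planck constant is 1
    Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
    Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
    Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

    # Verify [Sx, Sy] = i * hbar * Sz
    commutator = Sx @ Sy - Sy @ Sx
    print(np.allclose(commutator, 1j * hbar * Sz))  # True

The same check passes for the cyclic permutations, in agreement with [Sl, Sm] = iħ εlmn Sn.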
In molecules the total angular momentum F is the sum of the rovibronic (orbital) angular momentum N, the electron spin angular momentum S, and the nuclear spin angular momentum I. For electronic singlet states the rovibronic angular momentum is denoted J rather than N. As explained by Van Vleck,
the components of the molecular rovibronic angular momentum referred to molecule-fixed axes have different commutation relations from those given above which are for the components about space-fixed axes.
Commutation relations involving vector magnitude
Like any vector, the square of a magnitude can be defined for the orbital angular momentum operator,
L² = Lx² + Ly² + Lz².
L² is another quantum operator. It commutes with the components of L:
[L², Lx] = [L², Ly] = [L², Lz] = 0.
One way to prove that these operators commute is to start from the [Ll, Lm] commutation relations in the previous section.
Mathematically, L² is a Casimir invariant of the Lie algebra SO(3) spanned by L.
As above, there is an analogous relationship in classical physics:
{L², Lx} = {L², Ly} = {L², Lz} = 0
where Li is a component of the classical angular momentum, and { , } is the Poisson bracket.
Returning to the quantum case, the same commutation relations apply to the other angular momentum operators (spin and total angular momentum) as well: [S², Si] = 0 and [J², Ji] = 0.
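The commutation of the squared magnitude with each component can likewise be verified numerically. Here is a small sketch (illustrative, not from the article) using the standard spin-1 matrices with ħ = 1; for s = 1 the squared magnitude should equal s(s+1)ħ² = 2ħ² times the identity:

    import numpy as np

    a = 1 / np.sqrt(2)
    Sx = np.array([[0, a, 0], [a, 0, a], [0, a, 0]], dtype=complex)
    Sy = np.array([[0, -1j*a, 0], [1j*a, 0, -1j*a], [0, 1j*a, 0]], dtype=complex)
    Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

    S_squared = Sx @ Sx + Sy @ Sy + Sz @ Sz
    print(np.allclose(S_squared, 2 * np.eye(3)))            # True: s(s+1) = 2
    for S in (Sx, Sy, Sz):
        print(np.allclose(S_squared @ S - S @ S_squared, 0))  # True each time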
Uncertainty principle
In general, in quantum mechanics, when two observable operators do not commute, they are called complementary observables. Two complementary observables cannot be measured simultaneously; instead they satisfy an uncertainty principle. The more accurately one observable is known, the less accurately the other one can be known. Just as there is an uncertainty principle relating position and momentum, there are uncertainty principles for angular momentum.
The Robertson–Schrödinger relation gives the following uncertainty principle:
σ(Lx) σ(Ly) ≥ (ħ/2) |⟨Lz⟩|,
where σ(X) is the standard deviation in the measured values of X and ⟨X⟩ denotes the expectation value of X. This inequality is also true if x, y, z are rearranged, or if L is replaced by J or S.
Therefore, two orthogonal components of angular momentum (for example Lx and Ly) are complementary and cannot be simultaneously known or measured, except in special cases such as Lx = Ly = Lz = 0.
It is, however, possible to simultaneously measure or specify L² and any one component of L; for example, L² and Lz. This is often useful, and the values are characterized by the azimuthal quantum number (l) and the magnetic quantum number (m). In this case the quantum state of the system is a simultaneous eigenstate of the operators L² and Lz, but not of Lx or Ly. The eigenvalues are ħ²l(l+1) for L² and ħm for Lz, where m runs from −l to l in integer steps.
Quantization
In quantum mechanics, angular momentum is quantized – that is, it cannot vary continuously, but only in "quantum leaps" between certain allowed values. For any system, the following restrictions on measurement results apply, where ħ is the reduced Planck constant: a measurement of L² yields ħ²l(l+1), where l is a non-negative integer; a measurement of Lz yields ħml, where ml is an integer satisfying −l ≤ ml ≤ l; a measurement of S² yields ħ²s(s+1), where s may be a non-negative integer or half-integer, and Sz yields ħms with ms running from −s to s in integer steps; and the same pattern holds for J² and Jz with quantum numbers j and mj.
Derivation using ladder operators
A common way to derive the quantization rules above is the method of ladder operators. The ladder operators for the total angular momentum are defined as: J± = Jx ± iJy.
Suppose ψ is a simultaneous eigenstate of J² and Jz (i.e., a state with a definite value for J² and a definite value for Jz). Then using the commutation relations for the components of J, one can prove that each of the states J+ψ and J−ψ is either zero or a simultaneous eigenstate of J² and Jz, with the same value as ψ for J² but with values for Jz that are increased or decreased by ħ, respectively. The result is zero when the use of a ladder operator would otherwise result in a state with a value for Jz that is outside the allowable range. Using the ladder operators in this way, the possible values and quantum numbers for J² and Jz can be found.
Since S and L have the same commutation relations as J, the same ladder analysis can be applied to them, except that for L there is a further restriction on the quantum numbers: they must be integers.
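To make the ladder construction concrete, the sketch below (an illustration under stated conventions: ħ = 1, basis states |j, m⟩ ordered by descending m) builds the raising operator J+ for arbitrary j from the standard matrix elements ⟨j, m+1|J+|j, m⟩ = √(j(j+1) − m(m+1)):

    import numpy as np

    def j_plus(j):
        """Raising operator J+ in the |j, m> basis, m = j, j-1, ..., -j (hbar = 1)."""
        dim = int(round(2 * j + 1))
        ms = [j - k for k in range(dim)]
        Jp = np.zeros((dim, dim))
        for col in range(1, dim):
            m = ms[col]
            # J+ maps |j, m> to sqrt(j(j+1) - m(m+1)) |j, m+1>
            Jp[col - 1, col] = np.sqrt(j * (j + 1) - m * (m + 1))
        return Jp

    print(np.round(j_plus(1.5), 3))  # raising amplitudes for j = 3/2

Applying this matrix to the column vector for |j, m⟩ returns √(j(j+1) − m(m+1)) times the vector for |j, m+1⟩, and it annihilates the top state m = j, which is exactly the boundary behavior used in the derivation.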
Visual interpretation
Since the angular momenta are quantum operators, they cannot be drawn as vectors like in classical mechanics. Nevertheless, it is common to depict them heuristically in this way. Depicted on the right is a set of states with quantum number l = 2, and ml = −2, −1, 0, 1, 2 for the five cones from bottom to top. Since |L| = √(l(l+1)) ħ = √6 ħ, the vectors are all shown with length √6 ħ. The rings represent the fact that Lz is known with certainty, but Lx and Ly are unknown; therefore every classical vector with the appropriate length and z-component is drawn, forming a cone. The expected value of the angular momentum for a given ensemble of systems in the quantum state characterized by l and ml could be somewhere on this cone, while it cannot be defined for a single system (since the components of L do not commute with each other).
Quantization in macroscopic systems
The quantization rules are widely thought to be true even for macroscopic systems, like the angular momentum L of a spinning tire. However, they have no observable effect, so this has not been tested. For example, if Lz/ħ is roughly 100000000, it makes essentially no difference whether the precise value is an integer like 100000000 or 100000001, or a non-integer like 100000000.2; the discrete steps are currently too small to measure. For most intents and purposes, the assortment of all the possible values of angular momentum is effectively continuous at macroscopic scales.
Angular momentum as the generator of rotations
The most general and fundamental definition of angular momentum is as the generator of rotations. More specifically, let R(n, φ) be a rotation operator, which rotates any quantum state about axis n by angle φ. As φ → 0, the operator R(n, φ) approaches the identity operator, because a rotation of 0° maps all states to themselves. Then the angular momentum operator Jn about axis n is defined as:
Jn ≡ iħ lim(φ→0) (R(n, φ) − 1) / φ,
where 1 is the identity operator. Also notice that R is an additive morphism: R(n, φ1 + φ2) = R(n, φ1) R(n, φ2); as a consequence
R(n, φ) = exp(−iφ Jn / ħ),
where exp is the matrix exponential. The existence of the generator is guaranteed by Stone's theorem on one-parameter unitary groups.
In simpler terms, the total angular momentum operator characterizes how a quantum system is changed when it is rotated. The relationship between angular momentum operators and rotation operators is the same as the relationship between Lie algebras and Lie groups in mathematics, as discussed further below.
Just as J is the generator for rotation operators, L and S are generators for modified partial rotation operators. The operator
R_spatial(n, φ) = exp(−iφ Ln / ħ)
rotates the position (in space) of all particles and fields, without rotating the internal (spin) state of any particle. Likewise, the operator
R_internal(n, φ) = exp(−iφ Sn / ħ)
rotates the internal (spin) state of all particles, without moving any particles or fields in space. The relation J = L + S comes from:
R(n, φ) = R_internal(n, φ) R_spatial(n, φ),
i.e. if the positions are rotated, and then the internal states are rotated, then altogether the complete system has been rotated.
SU(2), SO(3), and 360° rotations
Although one might expect R(n, 360°) = 1 (a rotation of 360° is the identity operator), this is not assumed in quantum mechanics, and it turns out it is often not true: When the total angular momentum quantum number is a half-integer (1/2, 3/2, etc.), R(n, 360°) = −1, and when it is an integer, R(n, 360°) = +1. Mathematically, the structure of rotations in the universe is not SO(3), the group of three-dimensional rotations in classical mechanics. Instead, it is SU(2), which is identical to SO(3) for small rotations, but where a 360° rotation is mathematically distinguished from a rotation of 0°. (A rotation of 720° is, however, the same as a rotation of 0°.)
On the other hand, R_spatial(n, 360°) = +1 in all circumstances, because a 360° rotation of a spatial configuration is the same as no rotation at all. (This is different from a 360° rotation of the internal (spin) state of the particle, which might or might not be the same as no rotation at all.) In other words, the R_spatial operators carry the structure of SO(3), while R and R_internal carry the structure of SU(2).
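The distinction can be seen directly by exponentiating a spin-1/2 generator. The following sketch (illustrative, with ħ = 1 and SciPy's matrix exponential) rotates a spin-1/2 system by 360° and by 720° about the z-axis:

    import numpy as np
    from scipy.linalg import expm

    Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

    R_360 = expm(-1j * 2 * np.pi * Sz)   # rotation by 360 degrees
    R_720 = expm(-1j * 4 * np.pi * Sz)   # rotation by 720 degrees
    print(np.allclose(R_360, -np.eye(2)))  # True: 360 degrees gives -1
    print(np.allclose(R_720, np.eye(2)))   # True: 720 degrees is the identity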
From the equation R_spatial(z, 360°) = +1, one picks an eigenstate with Lz ψ = mħ ψ and draws
e^(−2πi m) = 1,
which is to say that the orbital angular momentum quantum numbers can only be integers, not half-integers.
Connection to representation theory
Starting with a certain quantum state , consider the set of states for all possible and , i.e. the set of states that come about from rotating the starting state in every possible way. The linear span of that set is a vector space, and therefore the manner in which the rotation operators map one state onto another is a representation of the group of rotation operators.
From the relation between J and rotation operators, the same set of states also furnishes a representation of the Lie algebra generated by the angular momentum operators.
(The Lie algebras of SU(2) and SO(3) are identical.)
The ladder operator derivation above is a method for classifying the representations of the Lie algebra SU(2).
Connection to commutation relations
Classical rotations do not commute with each other: For example, rotating 1° about the x-axis then 1° about the y-axis gives a slightly different overall rotation than rotating 1° about the y-axis then 1° about the x-axis. By carefully analyzing this noncommutativity, the commutation relations of the angular momentum operators can be derived.
(This same calculational procedure is one way to answer the mathematical question "What is the Lie algebra of the Lie groups SO(3) or SU(2)?")
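A short numerical sketch (illustrative only) makes the noncommutativity visible with ordinary 3×3 rotation matrices; the residual from swapping the order of two small rotations is, to leading order, a small rotation about z:

    import numpy as np

    def Rx(a):
        return np.array([[1, 0, 0],
                         [0, np.cos(a), -np.sin(a)],
                         [0, np.sin(a), np.cos(a)]])

    def Ry(a):
        return np.array([[np.cos(a), 0, np.sin(a)],
                         [0, 1, 0],
                         [-np.sin(a), 0, np.cos(a)]])

    a = np.radians(1.0)
    print(np.allclose(Rx(a) @ Ry(a), Ry(a) @ Rx(a)))  # False: order matters
    print(Rx(a) @ Ry(a) - Ry(a) @ Rx(a))              # ~ a^2 rotation about z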
Conservation of angular momentum
The Hamiltonian H represents the energy and dynamics of the system. In a spherically symmetric situation, the Hamiltonian is invariant under rotations:
R H R† = H,
where R is a rotation operator. As a consequence, [H, R] = 0, and then [H, J] = 0 due to the relationship between J and R. By the Ehrenfest theorem, it follows that J is conserved.
To summarize, if H is rotationally invariant (a Hamiltonian defined on an inner product space has rotational invariance if its value does not change when arbitrary rotations are applied to its coordinates), then total angular momentum J is conserved. This is an example of Noether's theorem.
If H is just the Hamiltonian for one particle, the total angular momentum of that one particle is conserved when the particle is in a central potential (i.e., when the potential energy function depends only on the radial distance |r|). Alternatively, H may be the Hamiltonian of all particles and fields in the universe, and then H is always rotationally invariant, as the fundamental laws of physics of the universe are the same regardless of orientation. This is the basis for saying conservation of angular momentum is a general principle of physics.
For a particle without spin, J = L, so orbital angular momentum is conserved in the same circumstances. When the spin is nonzero, the spin–orbit interaction allows angular momentum to transfer from L to S or back. Therefore, L is not, on its own, conserved.
Angular momentum coupling
Often, two or more sorts of angular momentum interact with each other, so that angular momentum can transfer from one to the other. For example, in spin–orbit coupling, angular momentum can transfer between L and S, but only the total J = L + S is conserved. In another example, in an atom with two electrons, each has its own angular momentum J1 and J2, but only the total J = J1 + J2 is conserved.
In these situations, it is often useful to know the relationship between, on the one hand, states where J1², J1z, J2², J2z all have definite values, and on the other hand, states where J², Jz, J1², J2² all have definite values, as the latter four are usually conserved (constants of motion). The procedure to go back and forth between these bases is to use Clebsch–Gordan coefficients.
One important result in this field is the relationship between the quantum numbers for J², J1², J2²: the total quantum number j can take the values |j1 − j2|, |j1 − j2| + 1, ..., j1 + j2, in integer steps.
For an atom or molecule with J = L + S, the term symbol gives the quantum numbers associated with the operators L², S², and J².
Orbital angular momentum in spherical coordinates
Angular momentum operators usually occur when solving a problem with spherical symmetry in spherical coordinates. In the spatial representation, the angular momentum takes the form
Lz = −iħ ∂/∂φ,   L² = −ħ² [ (1/sin θ) ∂/∂θ (sin θ ∂/∂θ) + (1/sin² θ) ∂²/∂φ² ].
In spherical coordinates the angular part of the Laplace operator can be expressed by the angular momentum. This leads to the relation
∇² = (1/r²) ∂/∂r (r² ∂/∂r) − L² / (ħ² r²).
When solving to find eigenstates of the operator L², we obtain the following:
L² Ylm(θ, φ) = ħ² l(l+1) Ylm(θ, φ),   Lz Ylm(θ, φ) = ħ m Ylm(θ, φ),
where the Ylm(θ, φ)
are the spherical harmonics.
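As a check of the eigenvalue equations above, the short sketch below (illustrative; it assumes SymPy's Ynm spherical harmonics and works in units where ħ = 1) applies Lz = −i ∂/∂φ to Y(2, 1) and confirms the eigenvalue m:

    import sympy as sp
    from sympy.functions.special.spherical_harmonics import Ynm

    theta, phi = sp.symbols('theta phi', real=True)
    l, m = 2, 1
    Y = Ynm(l, m, theta, phi).expand(func=True)  # explicit Y_l^m(theta, phi)

    Lz_Y = -sp.I * sp.diff(Y, phi)               # apply Lz in the position basis
    print(sp.simplify(Lz_Y - m * Y))             # 0, so Lz Y = m Y (hbar = 1)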
| Physical sciences | Quantum mechanics | Physics |
3797882 | https://en.wikipedia.org/wiki/Tetrahedral%20molecular%20geometry | Tetrahedral molecular geometry | In a tetrahedral molecular geometry, a central atom is located at the center with four substituents that are located at the corners of a tetrahedron. The bond angles are arccos(−1/3) = 109.4712206...° ≈ 109.5° when all four substituents are the same, as in methane (CH4) as well as its heavier analogues. Methane and other perfectly symmetrical tetrahedral molecules belong to point group Td, but most tetrahedral molecules have lower symmetry. Tetrahedral molecules can be chiral.
Tetrahedral bond angle
The bond angle for a symmetric tetrahedral molecule such as CH4 may be calculated using the dot product of two vectors. As shown in the diagram at left, the molecule can be inscribed in a cube with the tetravalent atom (e.g. carbon) at the cube centre which is the origin of coordinates, O. The four monovalent atoms (e.g. hydrogens) are at four corners of the cube (A, B, C, D) chosen so that no two atoms are at adjacent corners linked by only one cube edge.
If the edge length of the cube is chosen as 2 units, then the two bonds OA and OB correspond to the vectors a = (1, 1, 1) and b = (1, −1, −1), and the bond angle θ is the angle between these two vectors. This angle may be calculated from the dot product of the two vectors, defined as a · b = |a| |b| cos θ, where |a| denotes the length of vector a. As shown in the diagram, the dot product here is −1 and the length of each vector is √3, so that cos θ = −1/3 and the tetrahedral bond angle θ = arccos(−1/3) ≈ 109.47°.
An alternative proof using trigonometry is shown in the diagram at right.
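The dot-product calculation is easy to reproduce. The sketch below (plain Python, using the cube-corner vectors assumed above) recovers the tetrahedral angle:

    import math

    a = (1, 1, 1)
    b = (1, -1, -1)
    dot = sum(x * y for x, y in zip(a, b))       # -1
    length = math.sqrt(sum(x * x for x in a))    # sqrt(3), same for b
    theta = math.degrees(math.acos(dot / (length * length)))
    print(theta)  # 109.47122063449069, i.e. arccos(-1/3)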
Examples
Main group chemistry
Aside from virtually all saturated organic compounds, most compounds of Si, Ge, and Sn are tetrahedral. Often tetrahedral molecules feature multiple bonding to the outer ligands, as in xenon tetroxide (XeO4), the perchlorate ion (ClO4−), the sulfate ion (SO4²−), and the phosphate ion (PO4³−). Thiazyl trifluoride (NSF3) is tetrahedral, featuring a sulfur-to-nitrogen triple bond.
Other molecules have a tetrahedral arrangement of electron pairs around a central atom; for example ammonia (NH3), with the nitrogen atom surrounded by three hydrogens and one lone pair. However, the usual classification considers only the bonded atoms and not the lone pair, so ammonia is instead considered pyramidal. The H–N–H angles are 107°, contracted from 109.5°. This difference is attributed to the influence of the lone pair, which exerts a greater repulsive influence than a bonded atom.
Transition metal chemistry
Again the geometry is widespread, particularly so for complexes where the metal has a d0 or d10 configuration. Illustrative examples include tetrakis(triphenylphosphine)palladium(0) (Pd(PPh3)4), nickel carbonyl (Ni(CO)4), and titanium tetrachloride (TiCl4). Many complexes with incompletely filled d-shells are also tetrahedral, e.g. the tetrahalides of iron(II), cobalt(II), and nickel(II).
Water structure
In the gas phase, a single water molecule has an oxygen atom surrounded by two hydrogens and two lone pairs, and the geometry is simply described as bent without considering the nonbonding lone pairs.
However, in liquid water or in ice, the lone pairs form hydrogen bonds with neighboring water molecules. The most common arrangement of hydrogen atoms around an oxygen is tetrahedral, with two hydrogen atoms covalently bonded to oxygen and two attached by hydrogen bonds. Since the hydrogen bonds vary in length, many of these water molecules are not symmetrical and form transient irregular tetrahedra between their four associated hydrogen atoms.
Bitetrahedral structures
Many compounds and complexes adopt bitetrahedral structures. In this motif, the two tetrahedra share a common edge. The inorganic polymer silicon disulfide features an infinite chain of edge-shared tetrahedra. In a completely saturated hydrocarbon system, the bitetrahedral molecule C8H6 has been proposed as a candidate for the molecule with the shortest possible carbon–carbon single bond.
Exceptions and distortions
Inversion of tetrahedra occurs widely in organic and main group chemistry. The Walden inversion illustrates the stereochemical consequences of inversion at carbon. Nitrogen inversion in ammonia also entails the transient formation of planar NH3.
Inverted tetrahedral geometry
Geometrical constraints in a molecule can cause a severe distortion of idealized tetrahedral geometry. In compounds featuring "inverted" tetrahedral geometry at a carbon atom, all four groups attached to this carbon are on one side of a plane. The carbon atom lies at or near the apex of a square pyramid with the other four groups at the corners.
The simplest examples of organic molecules displaying inverted tetrahedral geometry are the smallest propellanes, such as [1.1.1]propellane, and more generally the paddlanes and pyramidane ([3.3.3.3]fenestrane). Such molecules are typically strained, resulting in increased reactivity.
Planarization
A tetrahedron can also be distorted by increasing the angle between two of the bonds. In the extreme case, flattening results. For carbon this phenomenon can be observed in a class of compounds called the fenestranes.
Tetrahedral molecules with no central atom
A few molecules have a tetrahedral geometry with no central atom. An inorganic example is tetraphosphorus (P4), which has four phosphorus atoms at the vertices of a tetrahedron, each bonded to the other three. An organic example is tetrahedrane (C4H4), with four carbon atoms each bonded to one hydrogen and the other three carbons. In this case the theoretical C−C−C bond angle is just 60° (in practice the angle will be larger due to bent bonds), representing a large degree of strain.
| Physical sciences | Bond structure | Chemistry |
21365918 | https://en.wikipedia.org/wiki/Cirrhosis | Cirrhosis | Cirrhosis, also known as liver cirrhosis or hepatic cirrhosis, chronic liver failure or chronic hepatic failure, and end-stage liver disease, is a chronic condition of the liver in which the normal functioning tissue, or parenchyma, is replaced with scar tissue (fibrosis) and regenerative nodules as a result of chronic liver disease. Damage to the liver leads to repair of liver tissue and subsequent formation of scar tissue. Over time, scar tissue and nodules of regenerating hepatocytes can replace the parenchyma, causing increased resistance to blood flow in the liver's capillaries (the hepatic sinusoids) and consequently portal hypertension, as well as impairment in other aspects of liver function. The disease typically develops slowly over months or years.
Stages of cirrhosis include compensated cirrhosis and decompensated cirrhosis. Early symptoms may include tiredness, weakness, loss of appetite, unexplained weight loss, nausea and vomiting, and discomfort in the right upper quadrant of the abdomen. As the disease worsens, symptoms may include itchiness, swelling in the lower legs, fluid build-up in the abdomen, jaundice, bruising easily, and the development of spider-like blood vessels in the skin. The fluid build-up in the abdomen may develop into spontaneous infections. More serious complications include hepatic encephalopathy, bleeding from dilated veins in the esophagus, stomach, or intestines, and liver cancer.
Cirrhosis is most commonly caused by medical conditions including alcohol-related liver disease, metabolic dysfunction–associated steatohepatitis (MASH – the progressive form of metabolic dysfunction–associated steatotic liver disease, previously called non-alcoholic fatty liver disease or NAFLD), chronic hepatitis B, and chronic hepatitis C. Chronic heavy drinking can cause alcoholic liver disease, and liver damage has also been attributed to long-term heroin use. MASH has several causes, including obesity, high blood pressure, abnormal levels of cholesterol, type 2 diabetes, and metabolic syndrome. Less common causes of cirrhosis include autoimmune hepatitis, primary biliary cholangitis and primary sclerosing cholangitis (both of which disrupt bile duct function), genetic disorders such as Wilson's disease and hereditary hemochromatosis, and chronic heart failure with liver congestion.
Diagnosis is based on blood tests, medical imaging, and liver biopsy.
Hepatitis B vaccine can prevent hepatitis B and the development of cirrhosis from it, but no vaccination against hepatitis C is available. No specific treatment for cirrhosis is known, but many of the underlying causes may be treated by medications that may slow or prevent worsening of the condition. Hepatitis B and C may be treatable with antiviral medications. Avoiding alcohol is recommended in all cases. Autoimmune hepatitis may be treated with steroid medications. Ursodiol may be useful if the disease is due to blockage of the bile duct. Other medications may be useful for complications such as abdominal or leg swelling, hepatic encephalopathy, and dilated esophageal veins. If cirrhosis leads to liver failure, a liver transplant may be an option. Biannual screening for liver cancer using abdominal ultrasound, possibly with additional blood tests, is recommended due to the high risk of hepatocellular carcinoma arising from dysplastic nodules.
Cirrhosis affected about 2.8 million people and resulted in 1.3 million deaths in 2015. Of these deaths, alcohol caused 348,000 (27%), hepatitis C caused 326,000 (25%), and hepatitis B caused 371,000 (28%). In the United States, more men die of cirrhosis than women. The first known description of the condition is by Hippocrates in the fifth century BCE. The term "cirrhosis" was derived in 1819 from the Greek word "kirrhos", which describes the yellowish color of a diseased liver.
Signs and symptoms
Cirrhosis can take quite a long time to develop, and symptoms may be slow to emerge. Some early symptoms include tiredness, weakness, loss of appetite, weight loss, and nausea. Early signs may also include redness on the palms known as palmar erythema. People may also feel discomfort in the right upper abdomen around the liver.
As cirrhosis progresses, symptoms may include neurological changes affecting both the peripheral and central nervous systems, disrupting neurotransmission within the brain and causing neuromuscular fatigue. This can consist of cognitive impairments, confusion, memory loss, sleep disorders, and personality changes. Steatorrhea, the presence of undigested fats in stool, is also a symptom of cirrhosis.
Worsening cirrhosis can cause a build-up of fluid in different parts of the body such as the legs (edema) and abdomen (ascites). Other signs of advancing disease include itchy skin, bruising easily, dark urine, and yellowing of the skin.
Liver dysfunction
These features are a direct consequence of liver cells not functioning:
Spider angiomata or spider nevi happen when there is dilatation of vasculature beneath the skin surface. There is a central, red spot with reddish extensions that radiate outward. This creates a visual effect that resembles a spider. It occurs in about one-third of cases. The likely cause is an increase in estrogen. Cirrhosis causes a rise of estrogen due to increased conversion of androgens into estrogen.
Palmar erythema, a reddening of the palm below the thumb and little finger, is seen in about 23% of cirrhosis cases, and results from increased circulating estrogen levels.
Gynecomastia, or the increase of breast size in men, is caused by increased estradiol (a potent type of estrogen). This can occur in up to two-thirds of cases.
Hypogonadism signifies a decreased functionality of the gonads. This can result in impotence, infertility, loss of sexual drive, and testicular atrophy. A swollen scrotum may also be evident.
Liver size can be enlarged, normal, or shrunken in people with cirrhosis. As the disease progresses, the liver will typically shrink as a result of scarring.
Jaundice is the yellowing of the skin. It can additionally cause yellowing of mucous membranes notably of the white of the eyes. This phenomenon is due to increased levels of bilirubin, which may also cause the urine to be dark-colored.
Portal hypertension
Liver cirrhosis makes it hard for blood to flow in the portal venous system. This resistance creates a backup of blood and increases pressure. This results in portal hypertension. Effects of portal hypertension include:
Ascites is a build-up of fluid in the peritoneal cavity in the abdomen
An enlarged spleen in 35–50% of cases
Esophageal varices and gastric varices result from collateral circulation in the esophagus and stomach (a process called portacaval anastomosis). When the blood vessels in this circulation become enlarged, they are called varices, and enlarged varices are more likely to rupture. Variceal rupture often leads to severe bleeding, which can be fatal.
Caput medusae are dilated paraumbilical collateral veins due to portal hypertension. Blood from the portal venous system may be forced through the paraumbilical veins and ultimately to the abdominal wall veins. The created pattern resembles the head of Medusa, hence the name.
Cruveilhier-Baumgarten bruit is bruit in the epigastric region (on examination by stethoscope). It is due to extra connections forming between the portal system and the paraumbilical veins.
Other nonspecific signs
Some signs that may be present include changes in the nails (such as Muehrcke's lines, Terry's nails, and nail clubbing). Additional changes may be seen in the hands (Dupuytren's contracture) as well as the skin/bones (hypertrophic osteoarthropathy).
Advanced disease
As the disease progresses, complications may develop. In some people, these may be the first signs of the disease.
Bruising and bleeding can result from decreased production of blood clotting factors.
Hepatic encephalopathy (HE) occurs when ammonia and related substances build up in the blood. This build-up affects brain function when they are not cleared from the blood by the liver. Symptoms can include unresponsiveness, forgetfulness, trouble concentrating, changes in sleep habits, or psychosis. One classic physical examination finding is asterixis. This is the asynchronous flapping of outstretched, dorsiflexed hands. Fetor hepaticus is a musty breath odor resulting from increased dimethyl sulfide and is a feature of HE.
Increased sensitivity to medication can be caused by decreased metabolism of the active compounds.
Acute kidney injury (particularly hepatorenal syndrome).
Cachexia associated with muscle wasting and weakness.
Causes
Cirrhosis has many possible causes, and more than one cause may be present. History taking is of importance in trying to determine the most likely cause. Globally, 57% of cirrhosis is attributable to either hepatitis B (30%) or hepatitis C (27%). Alcohol use disorder is another major cause, accounting for about 20–40% of the cases.
Common causes
Alcoholic liver disease (ALD, or alcoholic cirrhosis) develops in 10–20% of individuals who drink heavily for a decade or more. Alcohol seems to injure the liver by blocking the normal metabolism of protein, fats, and carbohydrates. This injury happens through the formation of acetaldehyde from alcohol. Acetaldehyde is reactive and leads to the accumulation of other reactive products in the liver. People with ALD may also have concurrent alcoholic hepatitis. Associated symptoms are fever, hepatomegaly, jaundice, and anorexia. AST and ALT blood levels are both elevated, but at less than 300 IU/liter, with an AST:ALT ratio > 2.0, a value rarely seen in other liver diseases. In the United States, 40% of cirrhosis-related deaths are due to alcohol.
In non-alcoholic fatty liver disease (NAFLD), fat builds up in the liver and eventually causes scar tissue. This type of disorder can be caused by obesity, diabetes, malnutrition, coronary artery disease, and steroids. Though similar in signs to alcoholic liver disease, no history of notable alcohol use is found. Blood tests and medical imaging are used to diagnose NAFLD and NASH, and sometimes a liver biopsy is needed.
Chronic hepatitis C, an infection with the hepatitis C virus, causes inflammation of the liver and a variable grade of damage to the organ. Over several decades, this inflammation and damage can lead to cirrhosis. Among people with chronic hepatitis C, 20–30% develop cirrhosis. Cirrhosis caused by hepatitis C and alcoholic liver disease are the most common reasons for liver transplant. Both hepatitis C– and hepatitis B–related cirrhosis are also associated with heroin addiction, as needle sharing transmits these viruses.
Chronic hepatitis B causes liver inflammation and injury that over several decades can lead to cirrhosis. Hepatitis D is dependent on the presence of hepatitis B and accelerates cirrhosis in co-infection.
Less common causes
In primary biliary cholangitis (previously known as primary biliary cirrhosis), the bile ducts become damaged by an autoimmune process. This leads to liver damage. Some people may have no symptoms, while others may present with fatigue, pruritus, or skin hyperpigmentation. The liver is typically enlarged, which is referred to as hepatomegaly. Rises in alkaline phosphatase, cholesterol, and bilirubin levels occur. Patients are usually positive for anti-mitochondrial antibodies.
Primary sclerosing cholangitis is a disorder of the bile ducts that presents with pruritus, steatorrhea, fat-soluble vitamin deficiencies, and metabolic bone disease. A strong association with inflammatory bowel disease is seen, especially ulcerative colitis.
Autoimmune hepatitis is caused by an attack of the liver by lymphocytes. This causes inflammation and eventually scarring as well as cirrhosis. Findings include elevations in serum globulins, especially gamma globulins.
Hereditary hemochromatosis usually presents with skin hyperpigmentation, diabetes mellitus, pseudogout, or cardiomyopathy. All of these are due to iron overload. A family history of cirrhosis is common as well.
Wilson's disease is an autosomal recessive disorder characterized by low ceruloplasmin in the blood and increased copper in the liver. Copper in the urine is also elevated. People with Wilson's disease may also have Kayser–Fleischer rings in the cornea and altered mental status.
Indian childhood cirrhosis is a form of neonatal cholestasis characterized by deposition of copper in the liver
Alpha-1 antitrypsin deficiency is an autosomal co-dominant disorder of low levels of the enzyme alpha-1 antitrypsin
Cardiac cirrhosis is due to chronic right-sided heart failure, which leads to liver congestion
Galactosemia
Glycogen storage disease type IV
Cystic fibrosis
Hepatotoxic drugs or toxins, such as acetaminophen (paracetamol), methotrexate, or amiodarone
Pathophysiology
The liver plays a vital role in many metabolic processes in the body, including protein synthesis, detoxification, nutrient storage (such as glycogen), platelet production, and clearance of bilirubin. With progressive liver damage in cirrhosis (hepatocyte death and replacement of functional liver tissue with fibrosis), these processes are disrupted. This leads to many of the metabolic derangements and symptoms seen in cirrhosis.
Cirrhosis is often preceded by hepatitis and fatty liver (steatosis), independent of the cause. If the cause is removed at this stage, the changes are fully reversible.
The pathological hallmark of cirrhosis is the development of scar tissue that replaces normal tissue, which is normally organized into lobules. This scar tissue blocks the portal flow of blood through the organ, raising the blood pressure. This manifests as portal hypertension, in which the pressure gradient between the portal circulation and the systemic circulation is elevated. This portal hypertension decreases blood flow through the liver sinusoids and increases lymph production, with extravasation of lymph into the extracellular space, causing ascites. This also reduces cardiac return and central blood volume, which activates the renin–angiotensin–aldosterone system (RAAS), causing the kidneys to reabsorb sodium and water and producing water retention and further ascites. Activation of the RAAS also causes kidney vasoconstriction and may cause kidney injury.
Research has shown the pivotal role of the stellate cell, which normally stores vitamin A, in the development of cirrhosis. Damage to the liver tissue from inflammation leads to the activation of stellate cells, which increases fibrosis through the production of myofibroblasts and obstructs hepatic blood flow. In addition, stellate cells secrete TGF-β1, which leads to a fibrotic response and proliferation of connective tissue. TGF-β1 has been implicated in the process of activating hepatic stellate cells (HSCs), with the magnitude of fibrosis in proportion to the increase in TGF-β levels. ACTA2 is associated with the TGF-β pathway, which enhances the contractile properties of HSCs, leading to fibrosis. Furthermore, HSCs secrete TIMP1 and TIMP2, naturally occurring inhibitors of matrix metalloproteinases (MMPs), which prevent MMPs from breaking down the fibrotic material in the extracellular matrix.
As this cascade of processes continues, fibrous tissue bands (septa) separate hepatocyte nodules, which eventually replace the entire liver architecture, leading to decreased blood flow throughout. The spleen becomes congested and enlarged, resulting in its retention of platelets, which are needed for normal blood clotting. Portal hypertension is responsible for the most severe complications of cirrhosis.
Diagnosis
The diagnosis of cirrhosis in an individual is based on multiple factors. Cirrhosis may be suspected from laboratory findings, physical exam, and the person's medical history. Imaging is generally obtained to evaluate the liver. A liver biopsy will confirm the diagnosis; however, it is generally not required.
Imaging
Ultrasound is routinely used in the evaluation of cirrhosis. It may show a small and shrunken liver in advanced disease. On ultrasound, there is increased echogenicity with irregular-appearing areas. Other suggestive findings are an enlarged caudate lobe, liver surface nodularity, widening of the fissures, and enlargement of the spleen. An enlarged spleen, which normally measures less than about 12 cm in length in adults, may suggest underlying portal hypertension. Ultrasound may also screen for hepatocellular carcinoma and portal hypertension, the latter by assessing flow in the hepatic vein. An increased portal vein pulsatility may be seen, though this may also be a sign of elevated right atrial pressure. Portal vein pulsatility is usually measured by a pulsatility index (PI); a value above a certain threshold indicates cirrhosis.
Other scans include CT of the abdomen and MRI. A CT scan is non-invasive and may be helpful in the diagnosis, though it tends to be more expensive than ultrasound. MRI provides excellent evaluation, but at high expense.
Portable ultrasound is a low-cost tool for identifying liver surface nodularity, with good diagnostic accuracy.
Cirrhosis is also diagnosable through a variety of newer elastography techniques. When a liver becomes cirrhotic, it will generally become stiffer, and determining the stiffness through imaging can indicate the location and severity of disease. Techniques include transient elastography, acoustic radiation force impulse imaging, supersonic shear imaging, and magnetic resonance elastography. Transient elastography and magnetic resonance elastography can help identify the stage of fibrosis. Compared to a biopsy, elastography can sample a much larger area and is painless. It shows a reasonable correlation with the severity of cirrhosis. Other modalities have been incorporated into ultrasonography systems, including 2-dimensional shear wave elastography and point shear wave elastography, which uses acoustic radiation force impulse imaging.
Diseases of the bile ducts, such as primary sclerosing cholangitis, are rare causes of cirrhosis. Imaging of the bile ducts, such as ERCP or MRCP (MRI of the biliary tract and pancreas), may aid in the diagnosis.
Lab findings
The best predictors of cirrhosis are ascites, a platelet count < 160,000/mm3, spider angiomata, and a Bonacini cirrhosis discriminant score greater than 7 (the sum of scores for platelet count, ALT/AST ratio, and INR).
These findings are typical in cirrhosis:
Thrombocytopenia, typically multifactorial, is due to alcoholic marrow suppression, sepsis, lack of folate, platelet sequestering in the spleen, and decreased thrombopoietin. However, this rarely results in a platelet count < 50,000/μL.
Aminotransferases AST and ALT are moderately elevated, with AST > ALT. However, normal aminotransferase levels do not preclude cirrhosis.
Alkaline phosphatase – slightly elevated but less than 2–3 times the upper limit of normal.
Gamma-glutamyl transferase – correlates with AP levels. Typically much higher in chronic liver disease from alcohol.
Bilirubin levels are normal when compensated, but may elevate as cirrhosis progresses.
Albumin levels fall as the synthetic function of the liver declines with worsening cirrhosis since albumin is exclusively synthesized in the liver.
Prothrombin time increases, since the liver synthesizes clotting factors.
Globulins increase due to shunting of bacterial antigens away from the liver to lymphoid tissue.
Serum sodium levels fall (hyponatremia) due to an inability to excrete free water, resulting from high levels of ADH and aldosterone.
Leukopenia and neutropenia are due to splenomegaly with splenic margination.
Coagulation defects occur, as the liver produces most of the coagulation factors, thus coagulopathy correlates with worsening liver disease.
Glucagon is increased in cirrhosis.
Vasoactive intestinal peptide is increased as blood is shunted into the intestinal system because of portal hypertension.
Vasodilators (such as nitric oxide and carbon monoxide) are increased, reducing afterload, with a compensatory increase in cardiac output and mixed venous oxygen saturation.
Renin is increased (as well as sodium retention in kidneys) secondary to a fall in systemic vascular resistance.
FibroTest is a biomarker for fibrosis that may be used instead of a biopsy.
Other laboratory studies performed in newly diagnosed cirrhosis may include:
Serology for hepatitis viruses, autoantibodies (ANA, anti-smooth muscle, antimitochondria, anti-LKM)
Ferritin and transferrin saturation: markers of iron overload as in hemochromatosis, copper and ceruloplasmin: markers of copper overload as in Wilson's disease
Immunoglobulin levels (IgG, IgM, IgA) – these immunoglobulins are nonspecific, but may help in distinguishing various causes.
IgG is elevated in chronic hepatitis, alcoholic hepatitis, and autoimmune hepatitis. Its slow and sustained increase is seen in viral hepatitis.
IgM is significantly increased in primary biliary cirrhosis and moderately increased in viral hepatitis and cirrhosis.
IgA is increased in alcoholic cirrhosis and primary biliary cirrhosis.
Cholesterol and glucose
Alpha 1-antitrypsin
Markers of inflammation and immune cell activation are typically elevated in cirrhotic patients, especially in the decompensated disease stage:
C-reactive protein (CRP)
Procalcitonin (PCT)
Presepsin
soluble CD14
soluble CD163
soluble CD206 (mannose receptor)
soluble TREM-1
The link between gut microbiota constitution and liver health (particularly in cirrhosis) has been well described; however, specific biomarkers for the prediction of cirrhosis still require further research. A 2014 study identified 15 microbial biomarkers from the gut microbiota that could potentially be used to discriminate patients with liver cirrhosis from healthy individuals.
Pathology
The gold standard for diagnosis of cirrhosis is a liver biopsy. This is usually carried out with a fine-needle approach, through the skin (percutaneous) or the internal jugular vein (transjugular). Endoscopic ultrasound-guided liver biopsy (EUS), using the percutaneous or transjugular route, has become a good alternative. EUS can target liver areas that are widely separated and can deliver bi-lobar biopsies. A biopsy is not necessary if the clinical, laboratory, and radiologic data suggest cirrhosis. Furthermore, a small but significant risk of complications is associated with liver biopsy, and cirrhosis itself predisposes to complications caused by liver biopsy.
Once the biopsy is obtained, a pathologist will study the sample. Cirrhosis is defined by its features on microscopy: (1) the presence of regenerating nodules of hepatocytes and (2) the presence of fibrosis, or the deposition of connective tissue between these nodules. The pattern of fibrosis seen can depend on the underlying insult that led to cirrhosis. Fibrosis can also proliferate even if the underlying process that caused it has resolved or ceased. The fibrosis in cirrhosis can lead to destruction of other normal tissues in the liver: including the sinusoids, the space of Disse, and other vascular structures, which leads to altered resistance to blood flow in the liver, and portal hypertension.
As cirrhosis can be caused by many different entities which injure the liver in different ways, cause-specific abnormalities may be seen. For example, in chronic hepatitis B, there is infiltration of the liver parenchyma with lymphocytes. In congestive hepatopathy there are erythrocytes and a greater amount of fibrosis in the tissue surrounding the hepatic veins. In primary biliary cholangitis, there is fibrosis around the bile duct, the presence of granulomas and pooling of bile. Lastly in alcoholic cirrhosis, there is infiltration of the liver with neutrophils.
Macroscopically, the liver is initially enlarged, but with the progression of the disease, it becomes smaller. Its surface is irregular, the consistency is firm, and if associated with steatosis the color is yellow. Depending on the size of the nodules, there are three macroscopic types: micronodular, macronodular, and mixed cirrhosis. In the micronodular form (Laennec's cirrhosis or portal cirrhosis), regenerating nodules are under 3 mm. In macronodular cirrhosis (post-necrotic cirrhosis), the nodules are larger than 3 mm. Mixed cirrhosis consists of nodules of different sizes.
Grading
The severity of cirrhosis is commonly classified with the Child–Pugh score (also known as the Child–Pugh–Turcotte score). This system was devised in 1964 by Child and Turcotte, and modified in 1973 by Pugh and others. It was first established to determine who would benefit from elective surgery for portal decompression. This scoring system uses multiple lab values including bilirubin, albumin, and INR. The presence of ascites and severity of encephalopathy is also included in the scoring. The classification system includes class A, B, or C. Class A has a favorable prognosis while class C is at high risk of death.
The Child-Pugh score is a validated predictor of mortality after major surgery. For example, Child class A patients have a 10% mortality rate and Child class B patients a 30% mortality rate, while Child class C patients have a 70–80% mortality rate after abdominal surgery. Elective surgery is usually reserved for patients in Child class A. Child class B individuals are at increased risk and may require medical optimization, while elective surgery is generally not recommended for Child class C patients.
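For illustration, the sketch below computes a Child-Pugh score in Python. The cutoffs used (bilirubin in mg/dL, albumin in g/dL, with the usual 1 to 3 points per parameter and classes A = 5–6, B = 7–9, C = 10–15) are the commonly cited ones and are an assumption here, not taken from this article; real scoring should follow a clinical reference:

    def child_pugh(bilirubin, albumin, inr, ascites, encephalopathy):
        """ascites: 'none'/'mild'/'severe'; encephalopathy: 'none'/'grade 1-2'/'grade 3-4'."""
        def band(value, lo, hi, higher_is_better=False):
            # assign 1, 2, or 3 points depending on the band the value falls in
            if higher_is_better:            # albumin: higher values score better
                return 1 if value > hi else (2 if value >= lo else 3)
            return 1 if value < lo else (2 if value <= hi else 3)

        points = (band(bilirubin, 2.0, 3.0)
                  + band(albumin, 2.8, 3.5, higher_is_better=True)
                  + band(inr, 1.7, 2.3)
                  + {'none': 1, 'mild': 2, 'severe': 3}[ascites]
                  + {'none': 1, 'grade 1-2': 2, 'grade 3-4': 3}[encephalopathy])
        grade = 'A' if points <= 6 else ('B' if points <= 9 else 'C')
        return points, grade

    print(child_pugh(1.5, 3.8, 1.2, 'none', 'none'))  # (5, 'A')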
In the past, the Child-Pugh classification was used to determine people who were candidates for a liver transplant. Child-Pugh class B is usually an indication for evaluation for transplant. However, there were many issues when applying this score to liver transplant eligibility. Thus, the MELD score was created.
The Model for End-Stage Liver Disease (MELD) score was later developed and approved in 2002. It was approved by the United Network for Organ Sharing (UNOS) as a way to determine the allocation of liver transplants to awaiting people in the United States. It is also used as a validated survival predictor of cirrhosis, alcoholic hepatitis, acute liver failure, and acute hepatitis. The variables included bilirubin, INR, creatinine, and dialysis frequency. In 2016, sodium was added to the variables and the score is often referred to as MELD-Na.
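The sketch below shows how the MELD and MELD-Na calculations fit together in code; the coefficients and bounds are the commonly cited UNOS ones and are an assumption here rather than something stated in this article:

    import math

    def meld(bilirubin, inr, creatinine, on_dialysis=False):
        # lab values are floored at 1.0; creatinine is capped at 4.0
        # (and set to 4.0 outright if the patient is on dialysis)
        bili = max(bilirubin, 1.0)
        inr = max(inr, 1.0)
        cr = 4.0 if on_dialysis else min(max(creatinine, 1.0), 4.0)
        score = (3.78 * math.log(bili) + 11.2 * math.log(inr)
                 + 9.57 * math.log(cr) + 6.43)
        return min(round(score), 40)

    def meld_na(meld_score, sodium):
        na = min(max(sodium, 125), 137)  # sodium bounded to 125-137 mmol/L
        if meld_score > 11:
            return round(meld_score + 1.32 * (137 - na)
                         - 0.033 * meld_score * (137 - na))
        return meld_score

    m = meld(bilirubin=2.5, inr=1.8, creatinine=1.4)
    print(m, meld_na(m, sodium=130))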
MELD-Plus is a further risk score to assess severity of chronic liver disease. It was developed in 2017 as a result of a collaboration between Massachusetts General Hospital and IBM. Nine variables were identified as effective predictors for 90-day mortality after a discharge from a cirrhosis-related hospital admission. The variables include all Model for End-Stage Liver Disease (MELD)'s components, as well as sodium, albumin, total cholesterol, white blood cell count, age, and length of stay.
The hepatic venous pressure gradient (the difference in venous pressure between blood entering and leaving the liver) also determines the severity of cirrhosis, although it is hard to measure. A value of 16 mm Hg or more indicates a greatly increased risk of death.
Prevention
Key prevention strategies for cirrhosis are population-wide interventions to reduce alcohol intake (through pricing strategies, public health campaigns, and personal counseling), programs to reduce the transmission of viral hepatitis, and screening of relatives of people with hereditary liver diseases.
Little is known about factors affecting cirrhosis risk and progression. However, many studies have provided increasing evidence for the protective effects of coffee consumption against the progression of liver disease. These effects are more noticeable in liver disease that is associated with alcohol use disorder. Coffee has antioxidant and antifibrotic effects. Caffeine may not be the important component; polyphenols may be more important. Drinking two or more cups of coffee a day is associated with improvements in the liver enzymes ALT, AST, and GGT. Even in those with liver disease, coffee consumption can lower fibrosis and cirrhosis.
Treatment
Generally, liver damage from cirrhosis cannot be reversed, but treatment can stop or delay further progression and reduce complications. A healthy diet is encouraged, as cirrhosis may be an energy-consuming process. A recommended diet is high-protein and high-fiber, plus supplementation with branched-chain amino acids. Close follow-up is often necessary. Antibiotics are prescribed for infections, and various medications can help with itching. Laxatives, such as lactulose, decrease the risk of constipation. Carvedilol provides a survival benefit for people with cirrhosis and portal hypertension.
Diuretics, in combination with a low-salt diet, reduce fluid in the body, which helps reduce edema.
Alcoholic cirrhosis caused by alcohol use disorder is treated by abstaining from alcohol. Treatment for hepatitis-related cirrhosis involves medications used to treat the different types of hepatitis, such as interferon for viral hepatitis and corticosteroids for autoimmune hepatitis.
Cirrhosis caused by Wilson's disease is treated by removing the copper which builds up in organs. This is carried out using chelation therapy such as penicillamine. When the cause is an iron overload, iron is removed using a chelation agent such as deferoxamine or by bloodletting.
As of 2021, several studies have investigated drugs to prevent cirrhosis caused by non-alcoholic fatty liver disease (NAFLD or NASH). The drug semaglutide was shown to provide greater NASH resolution versus placebo, though no improvement in fibrosis was observed. A combination of cilofexor/firsocostat was studied in people with bridging fibrosis and cirrhosis and was observed to lead to improvements in NASH activity, with a potential antifibrotic effect. Lanifibranor has also been shown to prevent worsening fibrosis.
Preventing further liver damage
Regardless of the underlying cause of cirrhosis, consumption of alcohol and other potentially damaging substances is discouraged. There is no evidence that supports the avoidance or dose reduction of paracetamol in people with compensated cirrhosis; it is thus considered a safe analgesic for said individuals.
Vaccination against hepatitis A and hepatitis B is recommended early in the course of illness due to decline in effectiveness of the vaccines with decompensation.
Treating the cause of cirrhosis prevents further damage; for example, giving oral antivirals such as entecavir and tenofovir where cirrhosis is due to hepatitis B prevents progression of cirrhosis. Similarly, control of weight and diabetes prevents deterioration in cirrhosis due to non-alcoholic fatty liver disease.
People with cirrhosis or liver damage are often advised to avoid drugs that could further harm the liver. These include several drugs such as anti-depressants, certain antibiotics, and NSAIDs (like ibuprofen). These agents are hepatotoxic as they are metabolized by the liver. If a medication that harms the liver is still recommended by a doctor, the dosage can be adjusted to aim for minimal stress on the liver.
Lifestyle
According to a 2018 systematic review of studies that implemented 8-to-14-week exercise programs, there is currently insufficient scientific evidence regarding either the beneficial or harmful effects of physical exercise in people with cirrhosis on all-cause mortality, morbidity (including both serious and non-serious adverse events), health-related quality of life, exercise capacity, and anthropometric measures. These conclusions were based on low- to very-low-quality research, so further, higher-quality studies are needed, especially to evaluate effects on clinical outcomes.
Transplantation
If complications cannot be controlled or when the liver ceases functioning, liver transplantation is necessary. Survival after liver transplantation improved during the 1990s, and the five-year survival rate is now around 80%. The survival rate depends largely on the severity of disease and other medical risk factors in the recipient. In the United States, the MELD score is used to prioritize patients for transplantation. Transplantation necessitates the use of immune suppressants (ciclosporin or tacrolimus).
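As a hedged illustration of how the MELD score combines lab values, the following sketch implements the classic (pre-2016) formula; it is not reproduced from the source article, and newer variants such as MELD-Na add further terms:

```python
import math

def meld_score(bilirubin_mg_dl, inr, creatinine_mg_dl):
    """Classic (pre-2016) MELD score, rounded to the nearest integer.

    By convention, lab values below 1.0 are set to 1.0 and creatinine
    is capped at 4.0 mg/dL before taking logarithms.
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    creat = min(max(creatinine_mg_dl, 1.0), 4.0)
    return round(
        3.78 * math.log(bili)
        + 11.2 * math.log(inr)
        + 9.57 * math.log(creat)
        + 6.43
    )

# Illustrative values only: bilirubin 2.5 mg/dL, INR 1.8, creatinine 1.2 mg/dL
print(meld_score(2.5, 1.8, 1.2))  # prints 18
```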
Decompensated cirrhosis
Manifestations of decompensation in cirrhosis include gastrointestinal bleeding, hepatic encephalopathy, jaundice or ascites. In patients with previously stable cirrhosis, decompensation may occur due to various causes, such as constipation, infection (of any source), increased alcohol intake, medication, bleeding from esophageal varices or dehydration. It may take the form of any of the complications of cirrhosis listed below.
People with decompensated cirrhosis generally require admission to a hospital, with close monitoring of the fluid balance, mental status, and emphasis on adequate nutrition and medical treatment – often with diuretics, antibiotics, laxatives or enemas, thiamine and occasionally steroids, acetylcysteine and pentoxifylline. Administration of saline is avoided, as it would add to the already high total body sodium content that typically occurs in cirrhosis. Life expectancy without liver transplant is low, at most three years.
Palliative care
Palliative care is specialized medical care that focuses on providing patients with relief from the symptoms, pain, and stress of a serious illness, such as cirrhosis. The goal of palliative care is to improve quality of life for both the patient and the patient's family and it is appropriate at any stage and for any type of cirrhosis.
Especially in the later stages, people with cirrhosis experience significant symptoms such as abdominal swelling, itching, leg edema, and chronic abdominal pain which would be amenable for treatment through palliative care. Because the disease is not curable without a transplant, palliative care can also help with discussions regarding the person's wishes concerning health care power of attorney, do not resuscitate decisions and life support, and potentially hospice. Despite proven benefit, people with cirrhosis are rarely referred to palliative care.
Immune system
Cirrhosis causes immune dysfunction in numerous ways, impeding the immune system from working normally.
Bleeding and blood clot risk
Cirrhosis can increase the risk of bleeding. The liver produces various proteins in the coagulation cascade (coagulation factors II, V, VII, IX, X, and XI). When damaged, the liver's production of these proteins is impaired, which ultimately increases the risk of bleeding as clotting factors are diminished. Clotting function is estimated by lab values, mainly platelet count, prothrombin time (PT), and international normalized ratio (INR).
The American Gastroenterological Association (AGA) provided recommendations in 2021 in regards to coagulopathy management of cirrhotic patients in certain scenarios.
The AGA does not recommend extensive pre-procedural testing, including repeated measurements of PT/INR or platelet count, before patients with stable cirrhosis undergo common gastrointestinal procedures, nor does it suggest the routine use of blood products, such as platelets, for bleeding prevention. Cirrhosis is considered stable when there are no changes in the baseline abnormalities of coagulation lab values.
For patients with stable cirrhosis and low platelet count undergoing common low-risk procedures, the AGA does not recommend the routine use of thrombopoietin receptor agonists for bleeding prevention.
In hospitalized patients who meet standard guidelines for clot prevention, the AGA suggests standard prevention.
The AGA does not recommend routine screening for portal vein thrombosis. If a portal vein thrombosis is present, the AGA suggests treatment with anticoagulation.
In the case of cirrhosis with atrial fibrillation, the AGA recommends using anticoagulation over no anticoagulation.
Complications
Ascites
Salt restriction is often necessary, as cirrhosis leads to accumulation of salt (sodium retention). Diuretics may be necessary to suppress ascites. Diuretic options for inpatient treatment include aldosterone antagonists (spironolactone) and loop diuretics. Aldosterone antagonists are preferred for people who can take oral medications and are not in need of an urgent volume reduction. Loop diuretics can be added as additional therapy.
Where salt restriction and the use of diuretics are ineffective, paracentesis may be the preferred option. This procedure requires the insertion of a plastic tube into the peritoneal cavity. Human serum albumin solution is usually given to prevent complications from the rapid volume reduction. In addition to being more rapid than diuretic therapy, large-volume paracentesis (4–5 liters) is also more effective.
Esophageal and gastric variceal bleeding
For portal hypertension, nonselective beta blockers such as propranolol or nadolol are commonly used to lower blood pressure over the portal system. In severe complications from portal hypertension, transjugular intrahepatic portosystemic shunting (TIPS) is occasionally indicated to relieve pressure on the portal vein. As this shunting can worsen hepatic encephalopathy, it is reserved for those patients at low risk of encephalopathy. TIPS is generally regarded only as a bridge to liver transplantation or as a palliative measure. Balloon-occluded retrograde transvenous obliteration can be used to treat gastric variceal bleeding.
Gastroscopy (endoscopic examination of the esophagus, stomach, and duodenum) is performed in cases of established cirrhosis. If esophageal varices are found, prophylactic local therapy may be applied such as sclerotherapy or banding, and beta blockers may be used.
Hepatic encephalopathy
Hepatic encephalopathy is a potential complication of cirrhosis. It may lead to functional neurological impairment ranging from mild confusion to coma. Hepatic encephalopathy is primarily caused by the accumulation of ammonia in the blood, which causes neurotoxicity when crossing the blood-brain barrier. Ammonia is normally metabolized by the liver; as cirrhosis causes both decreased liver function and increased portosystemic shunting (allowing blood to bypass the liver), systemic ammonia levels gradually rise and lead to encephalopathy.
Most pharmaceutical approaches to treating hepatic encephalopathy focus on reducing ammonia levels. Per 2014 guidelines, the first-line treatment involves the use of lactulose, a non-absorbable disaccharide which decreases the pH level of the colon when it is metabolized by intestinal bacteria. The lower colonic pH causes increased conversion of ammonia into ammonium, which is then excreted from the body. Rifaximin, an antibiotic that inhibits the function of ammonia-producing bacteria in the gastrointestinal tract, is recommended for use in combination with lactulose as prophylaxis against recurrent episodes of hepatic encephalopathy.
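The conversion step described above is standard acid-base chemistry (the equation is a textbook illustration, not reproduced from the source): lowering the colonic pH shifts the equilibrium

NH3 + H+ ⇌ NH4+

toward ammonium, which is poorly absorbed across the gut wall and is therefore excreted rather than entering the bloodstream.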
In addition to pharmacotherapy, proper hydration and nutritional support are essential, and appropriate protein intake is encouraged. Several factors may precipitate hepatic encephalopathy, including alcohol use, excess protein, gastrointestinal bleeding, infection, constipation, and vomiting/diarrhea. Drugs such as benzodiazepines, diuretics, or narcotics can also precipitate encephalopathic events. A low-protein diet is recommended in the setting of gastrointestinal bleeding.
The severity of hepatic encephalopathy is determined by assessing the patient's mental status. This is generally a subjective assessment, although several attempts at creating criteria to help standardize it have been published; one example is the West Haven criteria.
People with cirrhosis have a 40% lifetime risk of developing hepatic encephalopathy. The median survival after the development of hepatic encephalopathy is 0.9 years. Mild hepatic encephalopathy (also known as covert hepatic encephalopathy), in which symptoms are more subtle, such as impairments in executive function, poor sleep or balance impairment is also associated with a higher risk of hospitalization and death (18% in those with covert hepatic encephalopathy vs 3% in those with cirrhosis and no HE).
Hepatorenal syndrome
Hepatorenal syndrome is a serious complication of end-stage cirrhosis when kidney damage is also involved. The annual risk of developing hepatorenal syndrome in those with cirrhosis is 8% and once the syndrome develops the median survival is 2 weeks.
Portal hypertensive gastropathy
Portal hypertensive gastropathy refers to changes in the mucosa of the stomach in people with portal hypertension, and is associated with cirrhosis severity.
Infection
Cirrhosis can cause immune system dysfunction, leading to infection. Signs and symptoms of infection may be nonspecific and are more difficult to recognize (for example, worsening encephalopathy but no fever). Moreover, infections in cirrhosis are major triggers for other complications (ascites, variceal bleeding, hepatic encephalopathy, organ failures, death).
Those with cirrhosis are at increased risk of infections as well as increased mortality from infections. This is due to a combination of factors including cirrhosis-associated immune dysfunction, reduced gut barrier function, reduced bile flow, and changes in the gut microbiota, with an increase in pathobionts (native bacteria that, under certain conditions, may cause infection).
Cirrhosis-associated immune dysfunction is caused by reduced synthesis of complement components in the liver, including C3 and C4, and reduced total complement activity (CH50). The complement system is a part of the innate immune system and assists immune cells and antibodies in destroying pathogens; the liver produces complement factors, and this production may be reduced in cirrhosis, raising the risk of infections. Acute phase proteins (which help mount an immune response) and soluble pattern recognition receptors (which help immune cells identify pathogens) are also reduced in those with cirrhosis, leading to further immune dysfunction. Cirrhosis is also associated with reduced Kupffer cell function, further increasing the risk of infection. Kupffer cells are resident macrophages in the liver which help to destroy pathogens.
Extrinsic factors may also increase the risk of infection in those with cirrhosis, including proton pump inhibitor use, alcohol use, frailty, antibiotic overuse, and hospitalizations or invasive procedures (which increase the risk of bacterial translocation to other areas of the body).
Infections that are common in those in the hospital with cirrhosis include spontaneous bacterial peritonitis (with a prevalence of 27% among hospitalized patients), urinary tract infections (22-29%), pneumonia (19%), spontaneous bacteremia (8-13%), skin and soft tissue infections (8-12%) and C. difficile colitis (2.4-4%). It is estimated that 3.5% of people with cirrhosis and ascites may have asymptomatic spontaneous bacterial peritonitis.
The mortality rate for infections in those with cirrhosis is higher than that of the general population. In those with cirrhosis and severe infections with sepsis the mortality rate is greater than 50% and in those with septic shock, the mortality rate is 65%.
Hepatocellular carcinoma
Hepatocellular carcinoma is the most common primary liver cancer, and the most common cause of death in people with cirrhosis. Screening with ultrasound, with or without tumor markers such as alpha-fetoprotein, can detect this cancer early and is often carried out; such screening has been shown to improve outcomes.
Epidemiology
Each year, approximately one million deaths are due to complications of cirrhosis, making cirrhosis the 11th most common cause of death globally. Cirrhosis and chronic liver disease were the tenth leading cause of death for men and the twelfth for women in the United States in 2001, killing about 27,000 people each year.
The cause of cirrhosis can vary; alcohol and non-alcoholic fatty liver disease are main causes in western and industrialized countries, whereas viral hepatitis is the predominant cause in low and middle-income countries. Cirrhosis is more common in men than in women. The cost of cirrhosis in terms of human suffering, hospital costs, and lost productivity is high.
Globally, age-standardized disability-adjusted life year (DALY) rates have decreased from 1990 to 2017, with the values going from 656.4 years per 100,000 people to 510.7 years per 100,000 people. In males DALY rates have decreased from 903.1 years per 100,000 population in 1990, to 719.3 years per 100,000 population in 2017; in females the DALY rates have decreased from 415.5 years per 100,000 population in 1990, to 307.6 years per 100,000 population in 2017. However, globally the total number of DALYs have increased by 10.9 million from 1990 to 2017, reaching the value of 41.4 million DALYs.
Etymology
The word "cirrhosis" is a neologism derived from ; kirrhos , meaning "yellowish, tawny" (the orange yellow colour of the diseased liver) and the suffix -osis, i.e. "condition" in medical terminology. While the clinical entity was known before, René Laennec gave it this name in an 1819 paper.
| Biology and health sciences | Non-infectious disease | null |
2033875 | https://en.wikipedia.org/wiki/Artificial%20leather | Artificial leather | Artificial leather, also called synthetic leather, is a material intended to substitute for leather in upholstery, clothing, footwear, and other uses where a leather-like finish is desired but the actual material is cost prohibitive or unsuitable due to practical or ethical concerns. Artificial leather is known under many names, including leatherette, imitation leather, faux leather, vegan leather, PU leather (polyurethane), and pleather.
Uses
Artificial leathers are often used in clothing fabrics, furniture upholstery, water craft upholstery, and automotive interiors.
One of its primary advantages, especially in cars, is that it requires little maintenance in comparison to leather, and does not crack or fade easily, though the surface of some artificial leathers may rub and wear off with time. Artificial leather made from polyurethane is washable, but varieties made from polyvinyl chloride (PVC) are not easily cleaned.
Fashion
Depending on the construction, the artificial leather may be porous and breathable, or may be impermeable and waterproof.
Porous artificial leather with a non-woven microfibre backing is a popular choice for clothing, and is comfortable to wear.
Manufacture
Many different methods for the manufacture of imitation leathers have been developed.
A current method is to use an embossed release paper known as casting paper as a form for the surface finish, often mimicking the texture of top-grain leather. This embossed release paper holds the final texture in negative. For the manufacture, the release paper is coated with several layers of plastic (e.g. PVC or polyurethane), possibly including a surface finish, a colour layer, a foam layer, an adhesive, a fabric layer, and a reverse finish. Depending on the specific process, these layers may be wet or partially cured at the time of integration. The artificial leather is cured, then the release paper is removed and possibly reused.
A fermentation method of making collagen, the main chemical in real leather, is under development.
Materials to make vegan leather can be derived from fungi, yeasts and bacterial strains using biotechnological processes.
Historical methods
One of the earliest artificial leathers was Presstoff. Invented in 19th century Germany, it was made of specially layered and treated paper pulp. It gained its widest use in Germany during the Second World War in place of leather, which under wartime conditions was rationed. Presstoff could be used in almost every application normally filled by leather, excepting items like footwear that were repeatedly subjected to flex wear or moisture. Under these conditions, Presstoff tends to delaminate and lose cohesion.
Another early example was Rexine, a leathercloth fabric produced in the United Kingdom by Rexine Ltd of Hyde, near Manchester. It was made of cloth surfaced with a mixture of nitrocellulose, camphor oil, alcohol, and pigment, embossed to look like leather. It was used as a bookbinding material and upholstery covering, especially for the interiors of motor vehicles and the interiors of railway carriages produced by British manufacturers beginning in the 1920s, its cost being around a quarter that of leather.
Poromerics are made from a plastic coating (usually a polyurethane) on a fibrous base layer (typically a polyester). The term poromeric was coined by DuPont as a derivative of the terms porous and polymeric. The first poromeric material was DuPont's Corfam, introduced in 1963 at the Chicago Shoe Show. Corfam was the centerpiece of the DuPont pavilion at the 1964 New York World's Fair in New York City. After spending millions of dollars marketing the product to shoe manufacturers, DuPont withdrew Corfam from the market in 1971 and sold the rights to a company in Poland.
Leatherette is also made by covering a fabric base with a plastic. The fabric can be made of natural or synthetic fiber which is then covered with a soft polyvinyl chloride (PVC) layer. Leatherette is used in bookbinding and was common on the casings of 20th century cameras.
Cork leather is a natural-fiber alternative made from the bark of cork oak trees that has been compressed, similar to Presstoff.
Environmental effect
The PVC used in many artificial leathers requires a plasticizer, called a phthalate, to make it flexible and soft. PVC production requires petroleum and large amounts of energy, making it reliant on fossil fuels. The production process also generates dioxins, carcinogenic byproducts that are toxic to humans and animals and that remain in the environment long after the PVC is manufactured. When PVC ends up in a landfill, it does not decompose like genuine leather and can release dangerous chemicals into the water and soil.
Polyurethane is currently more popular for use than PVC.
Some artificial leathers require plastic in their production, while others, called plant-based leathers, require only plant-based materials; the inclusion of artificial materials notably raises sustainability issues. However, some reports state that the manufacture of artificial leather is still more sustainable than that of real leather: the Environmental Profit & Loss, a sustainability report developed in 2018 by Kering, states that the impact of vegan-leather production can be up to a third lower than that of real leather.
Some artificial leathers may have traces of restricted substances, like paint ingredient butanone oxime, according to a study by the FILK Freiberg Institute.
Brand names
Alcantara
Clarino: manufactured by Kuraray Co., Ltd. of Japan.
Fabrikoid: A DuPont brand, cotton cloth coated with nitrocellulose
Kirza: A Russian form developed in the 1930s consisting of cotton fabric, latex, and rosin
MB-Tex: Used in many Mercedes-Benz base trims
Naugahyde: An American brand introduced by Uniroyal
Piñatex: Made from pineapple leaves
Rexine: A British brand
Skai: Made by the German company Konrad Hornschuch AG, its name has become a genericized trademark in Germany and surrounding countries
| Technology | Materials | null |
2035853 | https://en.wikipedia.org/wiki/Aluminium%20sulfate | Aluminium sulfate | Aluminium sulfate is a salt with the formula Al2(SO4)3. It is soluble in water and is mainly used as a coagulating agent (promoting particle collision by neutralizing charge) in the purification of drinking water and wastewater treatment plants, and also in paper manufacturing.
The anhydrous form occurs naturally as a rare mineral millosevichite, found for example in volcanic environments and on burning coal-mining waste dumps. Aluminium sulfate is rarely, if ever, encountered as the anhydrous salt. It forms a number of different hydrates, of which the hexadecahydrate and octadecahydrate are the most common. The heptadecahydrate, whose formula can be written as Al2(SO4)3·17H2O, occurs naturally as the mineral alunogen.
Aluminium sulfate is sometimes called alum or papermaker's alum in certain industries. However, the name "alum" is more commonly and properly used for any double sulfate salt with the generic formula XAl(SO4)2·12H2O, where X is a monovalent cation such as potassium or ammonium.
Production
In the laboratory
Aluminium sulfate may be made by adding aluminium hydroxide, Al(OH)3, to sulfuric acid, H2SO4:

2 Al(OH)3 + 3 H2SO4 → Al2(SO4)3 + 6 H2O

or by heating aluminium metal in a sulfuric acid solution:

2 Al + 3 H2SO4 → Al2(SO4)3 + 3 H2↑
From alum schists
The alum schists employed in the manufacture of aluminium sulfate are mixtures of iron pyrite, aluminium silicate and various bituminous substances, and are found in upper Bavaria, Bohemia, Belgium, and Scotland. These are either roasted or exposed to the weathering action of the air. In the roasting process, sulfuric acid is formed and acts on the clay to form aluminium sulfate, a similar condition of affairs being produced during weathering. The mass is now systematically extracted with water, and a solution of aluminium sulfate of specific gravity 1.16 is prepared. This solution is allowed to stand for some time (in order that any calcium sulfate and basic iron(III) sulfate may separate), and is then evaporated until iron(II) sulfate crystallizes on cooling; it is then drawn off and evaporated until it attains a specific gravity of 1.40. It is now allowed to stand for some time, and decanted from any sediment.
From clays or bauxite
In the preparation of aluminium sulfate from clays or from bauxite, the material is gently calcined, then mixed with sulfuric acid and water and heated gradually to boiling; if concentrated acid is used no external heat is generally required as the formation of aluminium sulfate is exothermic. It is allowed to stand for some time, and the clear solution is drawn off.
From cryolite
When cryolite is used as the ore, it is mixed with calcium carbonate and heated. By this means, sodium aluminate is formed; it is then extracted with water and precipitated either by sodium bicarbonate or by passing a current of carbon dioxide through the solution. The precipitate is then dissolved in sulfuric acid.
Uses
Aluminium sulfate is sometimes used in the human food industry as a firming agent, where it takes on E number E520, and in animal feed as a bactericide. In the United States, the FDA lists it as "generally recognized as safe" with no limit on concentration. Aluminium sulfate may be used as a deodorant, an astringent, or as a styptic for superficial shaving wounds. Aluminium sulfate is used as a mordant in dyeing and printing textiles.
It is a common vaccine adjuvant and works "by facilitating the slow release of antigen from the vaccine depot formed at the site of inoculation."
Aluminium sulfate is used in water purification and for chemical phosphorus removal from wastewater. It causes suspended impurities to coagulate into larger particles and then settle to the bottom of the container (or be filtered out) more easily. This process is called coagulation or flocculation. Research suggests that in Australia, aluminium sulfate used in this way in drinking water treatment is the primary source of hydrogen sulfide gas in sanitary sewer systems. An improper and excess application incident in 1988 polluted the water supply of Camelford in Cornwall.
Aluminium sulfate has been used as a method of eutrophication remediation for shallow lakes. It works by reducing the phosphorus load in the lakes.
When dissolved in a large amount of neutral or slightly alkaline water, aluminium sulfate produces a gelatinous precipitate of aluminium hydroxide, Al(OH)3. In dyeing and printing cloth, the gelatinous precipitate helps the dye adhere to the clothing fibers by rendering the pigment insoluble.
Aluminium sulfate is sometimes used to reduce the pH of garden soil, as it hydrolyzes to form the aluminium hydroxide precipitate and a dilute sulfuric acid solution. The effect of changing soil pH on plants is visible in Hydrangea macrophylla: a gardener can add aluminium sulfate to the soil to reduce the pH, which in turn results in the flowers turning blue. The aluminium is what makes the flowers blue; at a higher pH, the aluminium is not available to the plant.
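The hydrolysis described in the two paragraphs above can be summarized by the following overall reaction (a standard textbook equation, not reproduced from the source):

Al2(SO4)3 + 6 H2O → 2 Al(OH)3↓ + 3 H2SO4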
In the construction industry, it is used as a waterproofing agent and accelerator in concrete. It is also used as a foaming agent in fire-fighting foam.
It can also be very effective as a molluscicide, killing Spanish slugs.
The mordants aluminium triacetate and aluminium sulfacetate can be prepared from aluminium sulfate, the product formed being determined by the amount of lead(II) acetate used:

Al2(SO4)3 + 3 Pb(CH3CO2)2 → 2 Al(CH3CO2)3 + 3 PbSO4↓

Al2(SO4)3 + 2 Pb(CH3CO2)2 → Al2SO4(CH3CO2)4 + 2 PbSO4↓
Chemical reactions
The compound decomposes to γ-alumina and sulfur trioxide when heated between 580 and 900 °C. It combines with water forming hydrated salts of various compositions.
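The thermal decomposition described above can be written as the following balanced equation (standard stoichiometry, not reproduced from the source; the temperature range is from the text above):

Al2(SO4)3 → γ-Al2O3 + 3 SO3 (580–900 °C)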
Aluminium sulfate reacts with sodium bicarbonate, to which foam stabilizer has been added, producing carbon dioxide for fire-extinguishing foams:

Al2(SO4)3 + 6 NaHCO3 → 3 Na2SO4 + 2 Al(OH)3 + 6 CO2↑
The carbon dioxide is trapped by the foam stabilizer and creates a thick foam which will float on top of hydrocarbon fuels and seal off access to atmospheric oxygen, smothering the fire. Chemical foam was unsuitable for use on polar solvents such as alcohol, as the fuel would mix with and break down the foam blanket. The carbon dioxide generated also served to propel the foam out of the container, be it a portable fire extinguisher or fixed installation using hoselines. Chemical foam is considered obsolete in the United States and has been replaced by synthetic mechanical foams, such as AFFF which have a longer shelf life, are more effective, and more versatile, although some countries such as Japan and India continue to use it.
| Physical sciences | Sulfuric oxyanions | Chemistry |
2036044 | https://en.wikipedia.org/wiki/Galaxiidae | Galaxiidae | The Galaxiidae are a family of mostly small freshwater fish in the Southern Hemisphere. The majority live in Southern Australia or New Zealand, but some are found in South Africa, southern South America, Lord Howe Island, New Caledonia, and the Falkland Islands. One galaxiid species, the common galaxias (Galaxias maculatus), is probably the most widely naturally distributed freshwater fish in the Southern Hemisphere. They are coolwater species, found in temperate latitudes, with only one species known from subtropical habitats. Many specialise in living in cold, high-altitude upland rivers, streams, and lakes.
Some galaxiids live in fresh water all their lives, but many have a partially marine lifecycle. In these cases, larvae are hatched in a river, but are washed downstream to the ocean, later returning to rivers as juveniles to complete their development to full adulthood. This pattern differs from that of salmon, which only return to fresh water to breed, and is described as amphidromous.
Freshwater galaxiid species are gravely threatened by exotic salmonid species, particularly trout species, which prey upon galaxiids and compete with them for food. Exotic salmonids have been recklessly introduced to many different land masses (e.g. Australia, New Zealand), with no thought as to impacts on native fish, or attempts to preserve salmonid-free habitats for them. Numerous localised extinctions of galaxiid species have been caused by the introduction of exotic salmonids, and a number of freshwater galaxiid species are threatened with overall extinction by exotic salmonids.
Evolution
Phylogenetic evidence alternatively places galaxiids within the Protacanthopterygii, or more recently as the sister group to the Neoteleostei. Their ancestors are thought to have diverged from the neoteleosts around the Triassic-Jurassic boundary.
The earliest definitive fossils of galaxiids, which can be placed in the extant genus Galaxias, are from the Miocene of New Zealand. This young fossil range contrasts with the presumed ancient origins of the group. In 1998, a possible Late Cretaceous (Maastrichtian) galaxiid from South Africa was described as Stompooria. However, later studies have questioned this assignment, as Stompooria differs from galaxiids in many morphological traits, especially in the presence of scales, although the possibility that it was an ancestral galaxiid that had not yet developed these traits could not be ruled out. Other taxonomic treatments have instead placed Stompooria as part of an extinct clade sister to the Esociformes and Salmoniformes.
Taxonomic diversity
About 50 species are in the family Galaxiidae, grouped into seven genera:
Genera
?†Stompooria Anderson, 1998 (Maastrichtian of South Africa; assignment disputed, possibly a stem-salmoniform)
Subfamily Aplochitoninae Begle 1991
Genus Aplochiton Jenyns 1842 [Haplochiton Agassiz 1846; Farionella Valenciennes 1850 ex Cuvier & Valenciennes 1850] (two species)
Genus Lovettia McCulloch 1915 (one species)
Subfamily Galaxiinae [Paragalaxiinae Scott 1936]
Genus Brachygalaxias Eigenmann 1928 (two species)
Genus Galaxias Cuvier 1816 [Saxilaga Scott 1936; Galaxias (Agalaxis) Scott 1936; Agalaxis (Scott 1936); Lyragalaxias Whitley 1935; Austrocobitis Ogilby 1899; Mesites Jenyns 1842 non Schoenherr 1838 non Geoffroy 1838; Nesogalaxias Whitley 1935] (34 species)
Genus Galaxiella McDowall 1978 (four species)
Genus Neochanna Günther 1867 [Saxilaga (Lixagasa) Scott 1936; Lixagasa (Scott 1936); Saxilaga Scott 1936] (six species)
Genus Paragalaxias Scott 1935 [Querigalaxias Whitley 1935] (four species)
Species by geography
Australia
Galaxiids are found around the south eastern seaboard of Australia and in some parts of south western Australia. The galaxiids and the temperate perches (Percichthyidae) are the dominant native freshwater fish families of southern Australia. Species common to all areas include:
Common galaxias or jollytail galaxias, Galaxias maculatus
Spotted galaxias, spotted mountain trout, or spotted minnow, Galaxias truttaceus
South east Australian mainland
Climbing galaxias, Galaxias brevipinnis
Mountain galaxias, Galaxias olidus
Flathead galaxias, Galaxias rostratus
Threatened species are:
Galaxias fuscus (Victoria), also called barred galaxias or brown galaxias
Dwarf galaxias, Galaxiella pusilla (South Australia, Victoria)
Tasmanian mudfish, Neochanna cleaveri (Wilsons Promontory, Victoria)
Western Australia
Western galaxias, Galaxias occidentalis
Mud minnow, Galaxiella munda
Black-stripe minnow, Galaxiella nigrostriata
Tasmania
Seventeen species of galaxiids have been found in Tasmania. The most common species are:
Climbing galaxias, Galaxias brevipinnis
Common galaxias, Galaxias maculatus
Spotted galaxias, Galaxias truttaceus
Tasmanian endangered species include:
Saddled galaxias, Galaxias tanycephalus
Pedder galaxias, Galaxias pedderensis
Swan galaxias, Galaxias fontanus
Swamp galaxias, Galaxias parvus
Golden galaxias, Galaxias auratus
Dwarf galaxias, Galaxiella pusilla
Clarence galaxias, Galaxias johnstoni
Tasmanian mudfish, Neochanna cleaveri
Western paragalaxias, Paragalaxias julianus
Great Lake paragalaxias, Paragalaxias eleotroides
Arthurs paragalaxias, Paragalaxias mesotes
Shannon paragalaxias, Paragalaxias dissimilis
New Zealand
Twenty-three species of galaxiids have been discovered in New Zealand, and prior to the introduction of non-native species such as trout, they were the dominant freshwater fish family. Most of these live in fresh water all their lives. However, the larvae of five species of the genus Galaxias develop in the ocean, where they form part of the zooplankton and return to rivers and streams as juveniles (whitebait), where they develop and remain as adults. All Galaxias species found in New Zealand are endemic, except for Galaxias brevipinnis (koaro) and Galaxias maculatus (inanga).
Roundhead galaxias, Galaxias anomalus
Giant kōkopu, Galaxias argenteus
Climbing galaxias, koaro, or short-fin galaxias, Galaxias brevipinnis
Lowland longjawed galaxias, Galaxias cobitinis
Flathead galaxias, Galaxias depressiceps
Dwarf galaxias, Galaxias divergens
Eldon's galaxias, Galaxias eldoni
Banded kōkopu, Galaxias fasciatus
Gollum galaxias, Galaxias gollumoides
Dwarf inanga, Galaxias gracilis
Bignose galaxias, Galaxias macronasus
Common galaxias, inanga, or common jollytail, Galaxias maculatus
Alpine galaxias, Galaxias paucispondylus
Shortjaw kokopu, Galaxias postvectis
Longjawed galaxias, Galaxias prognathus
Dusky galaxias, Galaxias pullus
Common river galaxias or Canterbury galaxias, Galaxias vulgaris
Brown mudfish, Neochanna apoda
Canterbury mudfish, Neochanna burrowsius
Black mudfish, Neochanna diversus
Northland mudfish, Neochanna heleios
Chatham mudfish, Neochanna rekohua
South America
Aplochiton taeniatus (Chile, Argentina, Falklands Islands)
Common galaxias or puyen, Galaxias maculatus (Chile, Argentina, Falkland Islands)
Brachygalaxias bullocki (Chile)
Brachygalaxias gothei (Chile)
Galaxias globiceps (Chile)
Galaxias platei (Chile)
South Africa
Cape galaxias, Galaxias zebratus (Cape Province, South Africa)
Fishing
The juveniles of those galaxiids that develop in the ocean and then move into rivers for their adult lives are caught as whitebait while moving upstream and are much valued as a delicacy. Adult galaxiids may be caught for food, but they are generally not large. In some cases, their exploitation may be banned (e.g. New Zealand) unless available to indigenous tribes.
In addition to serious impacts from exotic trout species, Australian adult galaxiids suffer a disregard from anglers for being "too small" and "not being trout". This is despite the fact that several Australian galaxiid species, though smallish, grow to a sufficient size to be catchable and readily take wet and dry flies, and that one of these species — the spotted galaxias — was keenly fished for in Australia before the introduction of exotic trout species. A handful of fly-fishing exponents in Australia are rediscovering the pleasure of catching (and releasing) these Australian native fish on ultralight fly-fishing tackle.
| Biology and health sciences | Osmeriformes | null |
2037563 | https://en.wikipedia.org/wiki/Geodesics%20in%20general%20relativity | Geodesics in general relativity | In general relativity, a geodesic generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational forces is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic.
In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting a star is the projection of a geodesic of the curved four-dimensional (4-D) spacetime geometry around the star onto three-dimensional (3-D) space.
Mathematical expression
The full geodesic equation is
$$\frac{d^2 x^\mu}{ds^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{ds}\frac{dx^\beta}{ds} = 0,$$
where s is a scalar parameter of motion (e.g. the proper time), and $\Gamma^\mu_{\alpha\beta}$ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients) symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3, and the summation convention is used for the repeated indices $\alpha$ and $\beta$. The quantity on the left-hand side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. The Christoffel symbols are functions of the four spacetime coordinates and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.
Equivalent mathematical expression using coordinate time as parameter
So far the geodesic equation of motion has been written in terms of a scalar parameter s. It can alternatively be written in terms of the time coordinate, $t \equiv x^0$ (here we have used the triple bar to signify a definition). The geodesic equation of motion then becomes:
$$\frac{d^2 x^\mu}{dt^2} = -\Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{dt}\frac{dx^\beta}{dt} + \Gamma^0_{\alpha\beta}\,\frac{dx^\alpha}{dt}\frac{dx^\beta}{dt}\frac{dx^\mu}{dt}.$$
This formulation of the geodesic equation of motion can be useful for computer calculations and to compare General Relativity with Newtonian Gravity. It is straightforward to derive this form of the geodesic equation of motion from the form which uses proper time as a parameter using the chain rule. Notice that both sides of this last equation vanish when the mu index is set to zero. If the particle's velocity is small enough, then the geodesic equation reduces to this:
$$\frac{d^2 x^n}{dt^2} = -\Gamma^n_{00}.$$
Here the Latin index n takes the values 1, 2, 3. This equation simply means that all test particles at a particular place and time will have the same acceleration, which is a well-known feature of Newtonian gravity. For example, everything floating around in the International Space Station will undergo roughly the same acceleration due to gravity.
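To make the structure of the geodesic equation concrete, here is a minimal numerical sketch (not part of the original article): it integrates the standard reduced equations for a timelike equatorial geodesic of the Schwarzschild metric in geometric units (G = c = 1), where the radial equation follows from the effective potential V(r) = (1 − 2M/r)(1 + L²/r²). The variable names and the circular-orbit test case are chosen for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0  # black-hole mass in geometric units (G = c = 1)

def rhs(tau, y, L):
    """Equatorial timelike geodesic in the Schwarzschild metric.

    State y = (r, dr/dtau, phi); the angular equation is
    dphi/dtau = L / r**2, and d^2r/dtau^2 = -V'(r)/2 for the
    effective potential V(r) = (1 - 2M/r)(1 + L**2/r**2).
    """
    r, rdot, phi = y
    rddot = -M / r**2 + L**2 / r**3 - 3.0 * M * L**2 / r**4
    return [rdot, rddot, L / r**2]

# Circular-orbit check: at r = 10 M a circular orbit requires
# L**2 = M r**2 / (r - 3M), so r should stay essentially constant.
r0 = 10.0
L = np.sqrt(M * r0**2 / (r0 - 3.0 * M))
sol = solve_ivp(rhs, (0.0, 2000.0), [r0, 0.0, 0.0], args=(L,),
                rtol=1e-9, atol=1e-9)
print(sol.y[0].min(), sol.y[0].max())  # both remain very close to 10.0
```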
Derivation directly from the equivalence principle
Physicist Steven Weinberg has presented a derivation of the geodesic equation of motion directly from the equivalence principle. The first step in such a derivation is to suppose that a free falling particle does not accelerate in the neighborhood of a point-event with respect to a freely falling coordinate system $\xi^\mu$. Setting $T \equiv \xi^0$, we have the following equation that is locally applicable in free fall:
$$\frac{d^2 \xi^\mu}{dT^2} = 0.$$
The next step is to employ the multi-dimensional chain rule. We have:
$$\frac{d\xi^\mu}{dT} = \frac{\partial \xi^\mu}{\partial x^\nu}\,\frac{dx^\nu}{dT}.$$
Differentiating once more with respect to the time, we have:
$$\frac{d^2 \xi^\mu}{dT^2} = \frac{\partial \xi^\mu}{\partial x^\nu}\,\frac{d^2 x^\nu}{dT^2} + \frac{\partial^2 \xi^\mu}{\partial x^\nu\,\partial x^\alpha}\,\frac{dx^\nu}{dT}\frac{dx^\alpha}{dT}.$$
We have already said that the left-hand side of this last equation must vanish because of the Equivalence Principle. Therefore:
$$0 = \frac{\partial \xi^\mu}{\partial x^\nu}\,\frac{d^2 x^\nu}{dT^2} + \frac{\partial^2 \xi^\mu}{\partial x^\nu\,\partial x^\alpha}\,\frac{dx^\nu}{dT}\frac{dx^\alpha}{dT}.$$
Multiply both sides of this last equation by the following quantity:
$$\frac{\partial x^\lambda}{\partial \xi^\mu}.$$
Consequently, we have this:
$$0 = \frac{d^2 x^\lambda}{dT^2} + \frac{\partial x^\lambda}{\partial \xi^\mu}\,\frac{\partial^2 \xi^\mu}{\partial x^\nu\,\partial x^\alpha}\,\frac{dx^\nu}{dT}\frac{dx^\alpha}{dT}.$$
Weinberg defines the affine connection as follows:
$$\Gamma^\lambda_{\nu\alpha} \equiv \frac{\partial x^\lambda}{\partial \xi^\mu}\,\frac{\partial^2 \xi^\mu}{\partial x^\nu\,\partial x^\alpha},$$
which leads to this formula:
$$0 = \frac{d^2 x^\lambda}{dT^2} + \Gamma^\lambda_{\nu\alpha}\,\frac{dx^\nu}{dT}\frac{dx^\alpha}{dT}.$$
Notice that, if we had used the proper time "s" as the parameter of motion, instead of using the locally inertial time coordinate "T", then our derivation of the geodesic equation of motion would be complete. In any event, let us continue by applying the one-dimensional chain rule:
$$\frac{d^2 x^\lambda}{dT^2} = \frac{d^2 x^\lambda}{dt^2}\left(\frac{dt}{dT}\right)^2 + \frac{dx^\lambda}{dt}\,\frac{d^2 t}{dT^2},$$
so that
$$0 = \frac{d^2 x^\lambda}{dt^2}\left(\frac{dt}{dT}\right)^2 + \frac{dx^\lambda}{dt}\,\frac{d^2 t}{dT^2} + \Gamma^\lambda_{\nu\alpha}\,\frac{dx^\nu}{dt}\frac{dx^\alpha}{dt}\left(\frac{dt}{dT}\right)^2.$$
As before, we can set $t \equiv x^0$. Then the first derivative of $x^0$ with respect to t is one and the second derivative is zero. Replacing λ with zero gives:
$$0 = \frac{d^2 t}{dT^2} + \Gamma^0_{\nu\alpha}\,\frac{dx^\nu}{dt}\frac{dx^\alpha}{dt}\left(\frac{dt}{dT}\right)^2.$$
Subtracting $dx^\lambda/dt$ times this from the previous equation and dividing by $(dt/dT)^2$ gives:
$$\frac{d^2 x^\lambda}{dt^2} = -\Gamma^\lambda_{\nu\alpha}\,\frac{dx^\nu}{dt}\frac{dx^\alpha}{dt} + \Gamma^0_{\nu\alpha}\,\frac{dx^\nu}{dt}\frac{dx^\alpha}{dt}\frac{dx^\lambda}{dt},$$
which is a form of the geodesic equation of motion (using the coordinate time as parameter).
The geodesic equation of motion can alternatively be derived using the concept of parallel transport.
Deriving the geodesic equation via an action
We can (and this is the most common technique) derive the geodesic equation via the action principle. Consider the case of trying to find a geodesic between two timelike-separated events.
Let the action be
$$S = \int ds,$$
where
$$ds = \sqrt{-g_{\mu\nu}(x)\,dx^\mu\,dx^\nu}$$
is the line element. There is a negative sign inside the square root because the curve must be timelike. To get the geodesic equation we must vary this action. To do this let us parameterize this action with respect to a parameter $\lambda$. Doing this we get:
$$S = \int_{\lambda_0}^{\lambda_1} \sqrt{-g_{\mu\nu}\,\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}}\;d\lambda.$$
We can now go ahead and vary this action with respect to the curve $x^\mu(\lambda)$. By the principle of least action we get:
$$0 = \delta S = -\int_{\lambda_0}^{\lambda_1} \frac{\delta\!\left(g_{\mu\nu}\,\dot x^\mu \dot x^\nu\right)}{2\sqrt{-g_{\mu\nu}\,\dot x^\mu \dot x^\nu}}\;d\lambda,$$
where
$$\dot x^\mu \equiv \frac{dx^\mu}{d\lambda}.$$
Using the product rule we get:
$$\delta\!\left(g_{\mu\nu}\,\dot x^\mu \dot x^\nu\right) = \frac{\partial g_{\mu\nu}}{\partial x^\sigma}\,\delta x^\sigma\,\dot x^\mu \dot x^\nu + 2\,g_{\mu\sigma}\,\dot x^\mu\,\frac{d\,\delta x^\sigma}{d\lambda}.$$
Choosing the parameter $\lambda$ to be the proper time s (so that the square root in the denominator equals one), integrating by parts the last term, and dropping the total derivative (which equals zero at the boundaries) we get that:
$$0 = -\frac{1}{2}\int \left[\frac{\partial g_{\mu\nu}}{\partial x^\sigma}\,\dot x^\mu \dot x^\nu - 2\,\frac{d}{ds}\!\left(g_{\mu\sigma}\,\dot x^\mu\right)\right]\delta x^\sigma\,ds.$$
Simplifying a bit, we see that the integrand must vanish for arbitrary variations $\delta x^\sigma$, so by Hamilton's principle we find that the Euler–Lagrange equation is
$$\frac{d}{ds}\!\left(g_{\mu\sigma}\,\dot x^\mu\right) - \frac{1}{2}\,\frac{\partial g_{\mu\nu}}{\partial x^\sigma}\,\dot x^\mu \dot x^\nu = 0.$$
Expanding the derivative, symmetrizing, and multiplying by the inverse metric tensor $g^{\sigma\rho}$, we get that
$$\ddot x^\rho + \frac{1}{2}\,g^{\rho\sigma}\left(\frac{\partial g_{\mu\sigma}}{\partial x^\nu} + \frac{\partial g_{\nu\sigma}}{\partial x^\mu} - \frac{\partial g_{\mu\nu}}{\partial x^\sigma}\right)\dot x^\mu \dot x^\nu = 0.$$
Thus we get the geodesic equation:
$$\ddot x^\rho + \Gamma^\rho_{\mu\nu}\,\dot x^\mu \dot x^\nu = 0,$$
with the Christoffel symbol defined in terms of the metric tensor as
$$\Gamma^\rho_{\mu\nu} = \frac{1}{2}\,g^{\rho\sigma}\left(\frac{\partial g_{\mu\sigma}}{\partial x^\nu} + \frac{\partial g_{\nu\sigma}}{\partial x^\mu} - \frac{\partial g_{\mu\nu}}{\partial x^\sigma}\right).$$
(Note: Similar derivations, with minor amendments, can be used to produce analogous results for geodesics between light-like or space-like separated pairs of points.)
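Since the Christoffel symbols are a purely mechanical function of the metric, the formula above is easy to check with a computer algebra system. The following illustrative sketch (not from the article; the 2-sphere example metric and names are chosen for illustration) evaluates the formula with sympy and reproduces the textbook values:

```python
import sympy as sp

# Coordinates and a diagonal metric: the round 2-sphere of radius a,
# ds^2 = a^2 (dtheta^2 + sin^2(theta) dphi^2)
theta, phi, a = sp.symbols('theta phi a', positive=True)
x = [theta, phi]
g = sp.Matrix([[a**2, 0], [0, a**2 * sp.sin(theta)**2]])
g_inv = g.inv()

def christoffel(lam, mu, nu):
    """Gamma^lam_{mu nu} = (1/2) g^{lam rho} (g_{rho mu,nu} + g_{rho nu,mu} - g_{mu nu,rho})."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[lam, rho] * (
            sp.diff(g[rho, mu], x[nu])
            + sp.diff(g[rho, nu], x[mu])
            - sp.diff(g[mu, nu], x[rho]))
        for rho in range(2)))

print(christoffel(0, 1, 1))  # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(christoffel(1, 0, 1))  # Gamma^phi_{theta phi} = cos(theta)/sin(theta), i.e. cot(theta)
```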
Equation of motion may follow from the field equations for empty space
Albert Einstein believed that the geodesic equation of motion can be derived from the field equations for empty space, i.e. from the fact that the Ricci curvature vanishes. He wrote:
It has been shown that this law of motion — generalized to the case of arbitrarily large gravitating masses — can be derived from the field equations of empty space alone. According to this derivation the law of motion is implied by the condition that the field be singular nowhere outside its generating mass points.
and
One of the imperfections of the original relativistic theory of gravitation was that as a field theory it was not complete; it introduced the independent postulate that the law of motion of a particle is given by the equation of the geodesic.
A complete field theory knows only fields and not the concepts of particle and motion. For these must not exist independently from the field but are to be treated as part of it.
On the basis of the description of a particle without singularity, one has the possibility of a logically more satisfactory treatment of the combined problem: The problem of the field and that of the motion coincide.
Both physicists and philosophers have often repeated the assertion that the geodesic equation can be obtained from the field equations to describe the motion of a gravitational singularity, but this claim remains disputed. According to David Malament, "Though the geodesic principle can be recovered as a theorem in general relativity, it is not a consequence of Einstein's equation (or the conservation principle) alone. Other assumptions are needed to derive the theorems in question." Less controversial is the notion that the field equations determine the motion of a fluid or dust, as distinguished from the motion of a point-singularity.
Extension to the case of a charged particle
In deriving the geodesic equation from the equivalence principle, it was assumed that particles in a local inertial coordinate system are not accelerating. However, in real life, the particles may be charged, and therefore may be accelerating locally in accordance with the Lorentz force. That is:
$$\frac{d^2 \xi^\mu}{dT^2} = \frac{q}{m}\,F^{\mu}{}_{\nu}\,\frac{d\xi^\nu}{dT},$$
with
$$dT^2 = -\eta_{\mu\nu}\,d\xi^\mu\,d\xi^\nu.$$
The Minkowski tensor $\eta_{\mu\nu}$ is given by:
$$\eta_{\mu\nu} = \operatorname{diag}(-1,\,1,\,1,\,1).$$
These last three equations can be used as the starting point for the derivation of an equation of motion in General Relativity, instead of assuming that acceleration is zero in free fall. Because the Minkowski tensor is involved here, it becomes necessary to introduce something called the metric tensor in General Relativity. The metric tensor g is symmetric, and locally reduces to the Minkowski tensor in free fall. The resulting equation of motion is as follows:
$$\frac{d^2 x^\mu}{ds^2} = -\Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{ds}\frac{dx^\beta}{ds} + \frac{q}{m}\,F^{\mu}{}_{\nu}\,\frac{dx^\nu}{ds},$$
with
$$g_{\mu\nu}\,\frac{dx^\mu}{ds}\frac{dx^\nu}{ds} = -1.$$
This last equation signifies that the particle is moving along a timelike geodesic; massless particles like the photon instead follow null geodesics (replace −1 with zero on the right-hand side of the last equation). It is important that the last two equations are consistent with each other, when the latter is differentiated with respect to proper time, and the following formula for the Christoffel symbols ensures that consistency:
$$\Gamma^\lambda_{\mu\nu} = \frac{1}{2}\,g^{\lambda\rho}\left(\frac{\partial g_{\rho\mu}}{\partial x^\nu} + \frac{\partial g_{\rho\nu}}{\partial x^\mu} - \frac{\partial g_{\mu\nu}}{\partial x^\rho}\right).$$
This last equation does not involve the electromagnetic fields, and it is applicable even in the limit as the electromagnetic fields vanish. The letter g with superscripts refers to the inverse of the metric tensor. In General Relativity, indices of tensors are lowered and raised by contraction with the metric tensor or its inverse, respectively.
Geodesics as curves of stationary interval
A geodesic between two events can also be described as the curve joining those two events which has a stationary interval (4-dimensional "length"). Stationary here is used in the sense in which that term is used in the calculus of variations, namely, that the interval along the curve varies minimally among curves that are nearby to the geodesic.
In Minkowski space there is only one geodesic that connects any given pair of events, and for a time-like geodesic, this is the curve with the longest proper time between the two events. In curved spacetime, it is possible for a pair of widely separated events to have more than one time-like geodesic between them. In such instances, the proper times along several geodesics will not in general be the same. For some geodesics in such instances, it is possible for a curve that connects the two events and is nearby to the geodesic to have either a longer or a shorter proper time than the geodesic.
For a space-like geodesic through two events, there are always nearby curves which go through the two events that have either a longer or a shorter proper length than the geodesic, even in Minkowski space. In Minkowski space, the geodesic will be a straight line. Any curve that differs from the geodesic purely spatially (i.e. does not change the time coordinate) in any inertial frame of reference will have a longer proper length than the geodesic, but a curve that differs from the geodesic purely temporally (i.e. does not change the space coordinates) in such a frame of reference will have a shorter proper length.
The interval of a curve in spacetime is
$$l = \int \sqrt{\left|g_{\mu\nu}\,\dot x^\mu \dot x^\nu\right|}\;ds,$$
where $\dot x^\mu \equiv dx^\mu/ds$ and s is an arbitrary parameter along the curve. The goal being to find a curve for which the value of l is stationary, and writing the integrand as
$$f = \left|g_{\mu\nu}\,\dot x^\mu \dot x^\nu\right|^{1/2},$$
such goal can be accomplished by calculating the Euler–Lagrange equation for f, which is
$$\frac{d}{ds}\,\frac{\partial f}{\partial \dot x^\lambda} = \frac{\partial f}{\partial x^\lambda}.$$
Substituting the expression of f into the Euler–Lagrange equation (which makes the value of the integral l stationary) and calculating the derivatives gives, after some calculation,
$$\frac{d^2 x^\lambda}{ds^2} + \Gamma^\lambda_{\mu\nu}\,\frac{dx^\mu}{ds}\frac{dx^\nu}{ds} = \frac{1}{2f^2}\,\frac{d(f^2)}{ds}\,\frac{dx^\lambda}{ds}.$$
This is just one step away from the geodesic equation. If the parameter s is chosen to be affine, then $f^2 = \left|g_{\mu\nu}\,\dot x^\mu \dot x^\nu\right|$ is constant and the right side of the above equation vanishes. Finally, we have the geodesic equation
$$\frac{d^2 x^\lambda}{ds^2} + \Gamma^\lambda_{\mu\nu}\,\frac{dx^\mu}{ds}\frac{dx^\nu}{ds} = 0.$$
Derivation using autoparallel transport
The geodesic equation can be alternatively derived from the autoparallel transport of curves. The derivation is based on the lectures given by Frederic P. Schuller at the We-Heraeus International Winter School on Gravity & Light.
Let M be a smooth manifold with connection $\nabla$ and let $\gamma$ be a curve on the manifold with velocity $v_\gamma$. The curve is said to be autoparallely transported if and only if $\nabla_{v_\gamma} v_\gamma = 0$.
In order to derive the geodesic equation, we have to choose a chart $(U, x)$, in which $v_\gamma = \dot\gamma^\mu\,\frac{\partial}{\partial x^\mu}$:
$$\nabla_{v_\gamma}\!\left(\dot\gamma^\nu\,\frac{\partial}{\partial x^\nu}\right) = 0.$$
Using the linearity and the Leibniz rule:
$$\dot\gamma^\mu\left(\nabla_{\frac{\partial}{\partial x^\mu}}\dot\gamma^\nu\right)\frac{\partial}{\partial x^\nu} + \dot\gamma^\mu\,\dot\gamma^\nu\,\nabla_{\frac{\partial}{\partial x^\mu}}\frac{\partial}{\partial x^\nu} = 0.$$
Using how the connection acts on functions ($\nabla_{\frac{\partial}{\partial x^\mu}}\dot\gamma^\nu = \partial_\mu \dot\gamma^\nu$) and expanding the second term with the help of the connection coefficient functions ($\nabla_{\frac{\partial}{\partial x^\mu}}\frac{\partial}{\partial x^\nu} = \Gamma^\lambda_{\nu\mu}\,\frac{\partial}{\partial x^\lambda}$):
$$\dot\gamma^\mu\,\partial_\mu \dot\gamma^\nu\,\frac{\partial}{\partial x^\nu} + \dot\gamma^\mu\,\dot\gamma^\nu\,\Gamma^\lambda_{\nu\mu}\,\frac{\partial}{\partial x^\lambda} = 0.$$
The first term can be simplified to $\ddot\gamma^\nu\,\frac{\partial}{\partial x^\nu}$. Renaming the dummy indices:
$$\left(\ddot\gamma^\lambda + \dot\gamma^\mu\,\dot\gamma^\nu\,\Gamma^\lambda_{\nu\mu}\right)\frac{\partial}{\partial x^\lambda} = 0.$$
We finally arrive at the geodesic equation:
$$\ddot\gamma^\lambda + \Gamma^\lambda_{\mu\nu}\,\dot\gamma^\mu\,\dot\gamma^\nu = 0.$$
| Physical sciences | Theory of relativity | Physics |
6762618 | https://en.wikipedia.org/wiki/Virgo%20interferometer | Virgo interferometer | The Virgo interferometer is a large-scale scientific instrument near Pisa, Italy, for detecting gravitational waves. The detector is a Michelson interferometer, which can detect the minuscule length variations in its two 3-km (1.9 mi) arms induced by the passage of gravitational waves. The required precision is achieved using many systems to isolate it from the outside world, including keeping its mirrors and instrumentation in an ultra-high vacuum and suspending them using complex systems of pendula. Between its periodical observations, the detector is upgraded to increase its sensitivity. The observation runs are planned in collaboration with other similar detectors, including the two Laser Interferometer Gravitational-Wave Observatories (LIGO) in the United States and the Japanese Kamioka Gravitational Wave Detector (KAGRA), as cooperation between several detectors is crucial for detecting gravitational waves and pinpointing their origin.
It was conceived and built when gravitational waves were only a prediction of general relativity. The project, named after the Virgo galaxy cluster, was first approved in 1992 and construction was completed in 2003. After several years of improvements without detection, it was shut down in 2011 for the "Advanced Virgo" upgrades. In 2015, the first observation of gravitational waves was made by the two LIGO detectors, while Virgo was still being upgraded. It resumed observations in early August 2017, making its first detection on 14 August (together with the LIGO detectors); this was quickly followed by the detection of the GW170817 gravitational wave, the only one also observed with classical methods (optical, gamma-ray, X-ray and radio telescopes) as of 2024.
Virgo is hosted by the European Gravitational Observatory (EGO), a consortium founded by the French Centre National de la Recherche Scientifique (CNRS) and the Italian Istituto Nazionale di Fisica Nucleare (INFN). The broader Virgo Collaboration, gathering 940 members in 20 countries, operates the detector, and defines the strategy and policy for its use and upgrades. The LIGO and Virgo collaborations have shared their data since 2007, and with KAGRA since 2019, forming the LIGO-Virgo-KAGRA (LVK) collaboration.
Organisation
The Virgo interferometer is managed by the European Gravitational Observatory (EGO) consortium, which was created in December 2000 by the French National Centre for Scientific Research (CNRS) and the Istituto Nazionale di Fisica Nucleare (INFN). Nikhef, the Dutch Institute for Nuclear and High-Energy Physics, later joined as an observer and eventually became a full member. EGO is responsible for the Virgo site and is in charge of the detector's commissioning, maintenance, operation and upgrades. By metonymy, the site itself is sometimes referred to as EGO, as the consortium is headquartered there. One of EGO's goals is to promote research on gravity in Europe. Between 2018 and 2024, EGO's annual budget fluctuated between 9 and 11.5 million euros, and it employed around 60 people.
The Virgo Collaboration consists of all the researchers working on various aspects of the detector. About 940 members, representing 165 institutions in 20 countries, were part of the Collaboration as of December 2024. This includes institutions in France, Italy, the Netherlands, Poland, Spain, Belgium, Germany, Hungary, Portugal, Greece, Czechia, Denmark, Ireland, Monaco, Switzerland, Brazil, Burkina Faso, China, Israel, Japan and South Korea.
The Virgo Collaboration is part of the larger LIGO-Virgo-KAGRA (LVK) Collaboration, which gathers scientists from the other major gravitational-waves experiments to jointly analyse the data; this is crucial for gravitational-wave detection. LVK began in 2007 as the LIGO-Virgo Collaboration, and was expanded when KAGRA joined in 2019.
Science case
Virgo is designed to look for gravitational waves emitted by astrophysical sources across the universe which can be classified into three types:
Transient sources, which are objects only detectable for a short period. The main sources in this category are compact binary coalescences (CBC) from binary black holes (or neutron stars) merging, emitting a rapidly-growing signal which only becomes detectable in the last seconds before the merger. Other possible sources of short-lived gravitational waves are supernovas, instabilities in compact astrophysical objects, or exotic sources such as cosmic strings.
Continuous sources, emitting a signal observable on a long time scale. Prime candidates are rapidly-spinning neutron stars (pulsars), which may emit gravitational waves if they are not perfectly spherical (e.g. if there are tiny "mountains" on the surface).
Stochastic backgrounds, a type of generally-continuous signal diffused across large regions of the sky rather than from a single source. It could consist of a large number of indistinguishable sources from the above categories, or originate from the early moments of the universe.
Detection of gravitational waves from these sources is a new way to observe them (often with different information than classical methods such as telescopes) and to probe fundamental properties of gravity such as the polarisation of gravitational waves, possible gravitational lensing, or determining whether the observed signals are correctly described by general relativity. It also provides a way to measure the Hubble constant.
History
The Virgo project was approved in 1992 by the French CNRS and the following year by the Italian INFN. Construction of the detector began in 1996 in Santo Stefano a Macerata in Cascina, near Pisa, Italy, and was completed in 2003. After several observation runs in which no gravitational waves were detected, the interferometer was shut down in 2011 for upgrading as part of the Advanced Virgo project. It began observations again in 2017, and made its first two detections soon after, together with the LIGO detectors.
Conception
Although the concept of gravitational waves was presented by Albert Einstein in 1916, serious projects for detecting them only began during the late 1960s. The first were the Weber bars, invented by Joseph Weber; although they could detect gravitational waves in theory, none of the experiments succeeded. However, they sparked the creation of research groups dedicated to gravitational waves.
The idea of a large interferometric detector began to gain credibility during the early 1980s, and the Virgo project was conceptualised by Italian researcher Adalberto Giazotto and French researcher Alain Brillet in 1985 after they met in Rome. A key idea that set Virgo apart from other projects was the targeting of low frequencies (around 10 Hz); most projects focused on higher frequencies (around 500 Hz). Many believed at the time that low-frequency observations were not possible; only France and Italy began work on the project, which was first proposed in 1987. The name Virgo was coined shortly after, in reference to the Virgo galaxy cluster; it symbolizes the aim of the project to detect gravitational waves originating from beyond our galaxy. After approval by the CNRS and the INFN, construction of the interferometer began in 1996 with the aim of beginning observations by 2000.
Virgo's first goal was to directly observe gravitational waves, whose existence was already indirectly evidenced by the three-decade study of the binary pulsar 1913+16: the observed decrease of this binary pulsar's orbital period was in agreement with the hypothesis that the system was losing energy by emitting gravitational waves.
Initial Virgo detector
The Virgo detector was first built, commissioned and operated during the 2000s, and reached its expected sensitivity. This validated its design choices, and demonstrated that giant interferometers were promising devices for detecting gravitational waves in a broad frequency band. This phase is sometimes called the "initial Virgo" or "original Virgo".
Construction of the initial Virgo detector was completed in June 2003, and several data collection periods ("science runs") followed between 2007 and 2011, after 4 years of commissioning. Some of the runs were performed with the two LIGO detectors (which are located in Hanford, Washington and in Livingston, Louisiana). There was a shut-down of a few months in 2010 for an upgrade of the Virgo suspension system, and the original steel suspension wires were replaced by glass fibres to reduce thermal noise. Even after several months of data collection with the upgraded suspension system, no gravitational waves were observed, and the detector was shut down in September 2011 for the installation of Advanced Virgo.
Advanced Virgo detector
The Advanced Virgo detector aimed to increase the sensitivity (and the distance from which a signal can be detected) by a factor of 10, allowing it to probe a volume of the universe 1,000 times larger and making detection of gravitational waves more likely. It benefited from the experience gained with the initial detector and technological advances.
The Advanced Virgo detector kept the same vacuum infrastructure as the initial Virgo, but the rest of the interferometer was upgraded. Four additional cryotraps were added at both ends of each arm to trap residual particles coming from the mirror towers. The new mirrors were larger and heavier, and their optical performance was improved. The optical elements used to control the interferometer were placed under vacuum on suspended mountings. A system of adaptive optics was installed to correct the mirror aberrations in situ. In the original plan, the laser power was expected to reach 200 W in its final configuration.
Advanced Virgo began the commissioning process in 2016, joining the two LIGO detectors (which had gone through similar upgrades with Advanced LIGO, and made their first detection in 2015) on 1 August 2017. Observation "runs" for the Advanced detector era are planned by the LVK collaboration with the goal to maximise the observing time with several detectors, and are labelled O1 to O5; Virgo began participating in these near the end of the O2 run. LIGO and Virgo detected the GW170814 signal on 14 August 2017, which was reported on 27 September of that year. It was the first binary black hole merger detected by both LIGO and Virgo, and the first for Virgo.
GW170817 was detected by LIGO and Virgo on 17 August 2017. The signal, produced by the final minutes of two neutron stars spiralling closer to each other and merging, was the first binary neutron-star merger observed and the first gravitational-wave observation confirmed by non-gravitational means. The resulting gamma-ray burst was also detected, and optical telescopes later discovered a kilonova corresponding to the merger.
After further upgrades, Virgo began its third observation run (O3) in April 2019. Planned to last one year, the run ended early on 27 March 2020 due to the COVID-19 pandemic.
The upgrades following O3 are part of the Advanced Virgo+ program, divided into two phases; the first preceded the O4 run, and the second precedes the O5 run. The first phase focused on the reduction of quantum noise by introducing a more powerful laser, improving the squeezing introduced in O3, and implementing a new technique known as signal recycling; seismic sensors were also installed around the mirrors. The second phase will attempt to reduce the mirror thermal noise by changing the geometry of the laser beam to increase its size on the mirrors (spreading the energy on a larger area and thus reducing the temperature) and improving the coating of the mirrors; the end mirrors will be larger, requiring improvements to the suspension. Further improvements for quantum noise reduction are also expected in the second phase, building on the changes in the first.
The fourth observation run (O4) was scheduled to begin in May 2023 and was planned to last for 20 months, including a commissioning break of up to two months. On 11 May 2023, Virgo announced that it would not join the beginning of O4; the interferometer was not stable enough to reach the expected sensitivity and one mirror needed replacement, requiring several weeks of work. Virgo did not join the O4 run during its first part (O4a, which ended on 16 January 2024), since it only reached a peak sensitivity of 45 Mpc instead of the 80 to 115 Mpc initially expected; it joined the second part of the run (O4b), which began on 10 April 2024, with a sensitivity of 50 to 55 Mpc. In June 2024, it was announced that the O4 run would last until 9 June 2025 to further prepare for the O5 upgrades.
Future
The detector will again be shut down for upgrades, including mirror-coating improvements, after the O4 run. A fifth observing run (O5) is planned to begin around June 2027. Virgo's target sensitivity, originally set at 150–260 Mpc, is being redefined in light of its performance during O4; the plans for joining the O5 run are expected to be announced in the first quarter of 2025.
No official plans have been announced for the future of the Virgo installations after the O5 period, although projects for improving the detectors have been suggested. The collaboration's current plans are known as the Virgo_nEXT project.
Instrument
Principle
In general relativity, a gravitational wave is a space-time perturbation which propagates at the speed of light. It slightly curves spacetime, changing the light path. This can be detected with a Michelson interferometer, in which a laser is divided into two beams travelling in orthogonal directions, bouncing on a mirror at the end of each arm. As the gravitational wave passes, it alters the path of the two beams differently; they are then recombined, and the resulting interferometric pattern is measured with a photodiode. Since the induced deformation is extremely small, precision in mirror position, laser stability, measurements, and isolation from outside noise are essential.
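To give a rough sense of the scales involved, the following sketch (illustrative only; the strain value and the number of bounces in the arm cavities are assumptions broadly consistent with the figures quoted in this article) converts a typical strain amplitude into an arm-length change and the optical phase shift a Michelson interferometer must resolve.

```python
# Illustrative order-of-magnitude sketch (not Virgo software): convert a
# typical gravitational-wave strain into an arm-length change and the
# optical phase shift accumulated over many bounces in the arm cavity.
import math

h = 1e-21             # assumed strain amplitude of a detectable signal
L = 3e3               # arm length in metres (Virgo's arms are 3 km)
wavelength = 1064e-9  # laser wavelength in metres (1064 nm, as quoted below)
N = 1000              # assumed number of bounces in the arm cavity

delta_L = h * L / 2                                  # length change of one arm
delta_phi = N * 4 * math.pi * delta_L / wavelength   # accumulated phase shift

print(f"arm-length change: {delta_L:.2e} m")    # ~1.5e-18 m, far below a proton radius
print(f"phase shift:       {delta_phi:.2e} rad")
```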
Laser and injection system
The laser, the instrument's light source, must be powerful and stable in frequency and amplitude. To meet these specifications, the beam starts from a low-power, stable laser. Light from the laser passes through several amplifiers, which enhance its power by a factor of 100. A 50 watt (W) output power was achieved for the last configuration of the initial Virgo detector (reaching 100 W during the O3 run after the Advanced Virgo upgrades), and is expected to be upgraded to 130 W at the beginning of the O4 run. The original Virgo detector had a master-slave laser system, where a "master" laser is used to stabilise a high-powered "slave" laser; the master laser was a Nd:YAG laser, and the slave laser was a Nd:YVO4 laser. The Advanced Virgo design uses a fibre laser, with an amplification stage also made of fibres, to improve the system's robustness; its final configuration is planned to combine the light of two lasers to reach the required power. The laser's wavelength is 1064 nanometres in the original and Advanced Virgo configurations.
This laser beam is sent into the interferometer after passing through the injection system, which ensures its stability, adjusts its shape and power, and positions it correctly for entering the interferometer. The injection system includes the input mode cleaner, which is a 140-metre-long (460 ft) cavity designed to improve beam quality by stabilising the frequency, removing unwanted light propagation and reducing the effect of laser misalignment. It also features a Faraday isolator preventing light from returning to the laser, and a mode-matching telescope which adapts the size and position of the beam before it enters the interferometer.
Mirrors
The large mirrors in each arm are the interferometer's most critical optics. They include the two end mirrors at the ends of the 3-km (1.9 mi) interferometer arms and the two input mirrors near the beginning of the arms. Together, these mirrors form a resonant optical cavity in each arm in which the light bounces thousands of times before returning to the beam splitter, maximising the signal's effect on the laser path and allowing the power of the light circulating in the arms to be increased. These mirrors, designed specifically for Virgo, are cylinders made from extremely pure glass. During the manufacturing process, the mirrors are polished to the atomic level to avoid diffusing (and losing) any light. A reflective coating (a Bragg reflector made with ion-beam sputtering) is then added. The mirrors at the end of the arms reflect almost all incoming light, with less than 0.002 per cent lost at each reflection.
Two other mirrors are also in the final design:
The power-recycling mirror, between the laser and the beam splitter. Since most light is reflected toward the laser after returning to the beam splitter, this mirror re-injects the light into the main interferometer and increases power in the arms.
The signal-recycling mirror, at the interferometer output, re-injects part of the signal into the interferometer (transmission of this mirror is planned to be 40 per cent) and forms another cavity. With small adjustments to this mirror, quantum noise can be reduced in part of the frequency band and increased elsewhere; this makes it possible to tune the interferometer for certain frequencies. It is planned to use a wideband configuration, decreasing noise at high and low frequencies and increasing it at intermediate frequencies. Decreased noise at high frequencies is of particular interest for study of a signal right before and after a compact object merger.
Superattenuators
To mitigate seismic noise, which could propagate up to the mirrors, shaking them and obscuring potential gravitational-wave signals, the mirrors are suspended by a complex system. The main mirrors hang from four thin fibres made of silica which are attached to a series of attenuators. This superattenuator, several metres tall, is kept under vacuum. The superattenuators limit disturbances to the mirrors and allow mirror position and orientation to be precisely steered. The optical table with the injection optics used to shape the laser beam, as well as the optical benches used for light detection, are also suspended in a vacuum to limit seismic and acoustic noise. In the Advanced Virgo configuration, the instrumentation used to detect gravitational-wave signals and steer the interferometer (photodiodes, cameras, and associated electronics) is installed on several benches suspended in a vacuum.
Superattenuator design is based on the passive attenuation of seismic noise achieved by chaining several pendula, each a harmonic oscillator. Each pendulum has a resonant frequency (which diminishes with pendulum length) above which noise is damped; chaining several pendula reduces noise by twelve orders of magnitude, at the cost of introducing resonant frequencies which are higher than those of a single long pendulum. The highest resonant frequency is around 2 Hz, providing meaningful noise reduction starting at 4 Hz and reaching the level needed to detect gravitational waves around 10 Hz. The system is limited in that noise in the resonant-frequency band (below 2 Hz) is not filtered and can generate large oscillations; this is mitigated by an active damping system, including sensors measuring seismic noise and actuators controlling the superattenuator to counteract the noise.
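A minimal numerical sketch of the passive-attenuation idea, under an assumed idealisation in which each pendulum stage suppresses transmitted motion as (f0/f)² above its resonant frequency f0 (the stage count and resonant frequency below are illustrative choices, not Virgo's exact parameters):

```python
# Idealised model of a chain of pendula: above the resonance f0, each stage
# attenuates transmitted ground motion roughly as (f0/f)^2, so n cascaded
# stages give (f0/f)^(2n). Parameters are illustrative assumptions.
def chain_attenuation(f, f0=0.7, n_stages=5):
    """Approximate seismic transmission of n cascaded pendula at frequency f (Hz)."""
    if f <= f0:
        return 1.0  # no attenuation at or below resonance (in reality, amplification)
    return (f0 / f) ** (2 * n_stages)

for f in (2.0, 4.0, 10.0):
    print(f"{f:>5.1f} Hz -> transmission ~ {chain_attenuation(f):.1e}")
# With these assumed values, the suppression at 10 Hz is about twelve
# orders of magnitude, in line with the figure quoted above.
```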
Detection system
Part of the light in the arm cavities is sent towards the detection system by the beam splitter. The interferometer works near the "dark fringe", with very little light sent towards the output; most is sent back to the input, to be collected by the power-recycling mirror. A fraction of this light is reflected back by the signal-recycling mirror, and the rest is collected by the detection system. It first passes through the output mode cleaner, which filters the "high-order modes" (light propagating in an unwanted way, typically from small defects in the mirrors) before reaching the photodiodes which measure the light intensity. The output mode cleaner and the photodiodes are suspended in a vacuum.
With the O3 run, a squeezed vacuum source was introduced to reduce the quantum noise, which is one of the main limitations to sensitivity. When the standard vacuum is replaced with a squeezed vacuum, the fluctuations of one quantity are decreased at the expense of increased fluctuations in the conjugate quantity, as required by Heisenberg's uncertainty principle. In Virgo, the quantities are the amplitude and phase of the light. The use of a squeezed vacuum was proposed in 1981 by Carlton Caves, during the infancy of gravitational-wave detectors. During the O3 run, frequency-independent squeezing was implemented; the squeezing is identical at all frequencies, reducing shot noise (dominant at high frequencies) while increasing radiation pressure noise (dominant at low frequencies, where it was not limiting the instrument's sensitivity). Thanks to the squeezed-vacuum injection, quantum noise was reduced by 3.2 dB at high frequencies, and the detector's range was increased by five to eight per cent. More sophisticated squeezed states are now produced by combining the technology from O3 with a new 285-m-long (935 ft) filter cavity. This technology, known as frequency-dependent squeezing, reduces shot noise at high frequencies (where radiation pressure noise is irrelevant) and radiation-pressure noise at low frequencies (where shot noise is low).
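As a worked example of what the quoted figure means, the sketch below converts 3.2 dB of quantum-noise reduction into a linear amplitude factor. The naive in-band gain is much larger than the observed five to eight per cent increase in range because the O3 squeezing only reduced noise in the shot-noise-limited (high-frequency) part of the band.

```python
# Worked example: interpreting a 3.2 dB quantum-noise reduction.
# Decibels describe a power (variance) ratio; amplitude scales as its square root.
squeezing_db = 3.2
amplitude_factor = 10 ** (-squeezing_db / 20)  # ~0.69, i.e. ~31% less amplitude noise

print(f"amplitude noise factor:   {amplitude_factor:.2f}")
print(f"naive in-band range gain: {1 / amplitude_factor:.2f}x")
# The whole-detector range improved by only 5-8% because the reduction
# applied only at high frequencies, not across the full band.
```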
Infrastructure
From the air, the Virgo detector has an "L" shape formed by its two perpendicular 3-km-long (1.9 mi) arms. At the intersection of the arms lies the central building, which contains most of Virgo's key components, including the laser, the beam splitter and the input mirrors. Alongside the west arm, a shorter cavity and its associated building host the input mode cleaner. The end mirrors are contained in a dedicated building at the end of each arm. South of the west arm, additional buildings contain offices and workshops, as well as the site computing center and the instrument control room.
The arm "tunnels" house pipes in which the laser beams travel in a vacuum. Virgo is Europe's largest ultra-high vacuum installation, with a volume of . The two 3-km (1.9 mi) arms are made of a long steel pipe in diameter, in which the target residual pressure is about one-thousandth of a billionth of an atmosphere (100 times thinner than in the original Virgo). The residual gas molecules, primarily hydrogen and water, have a limited impact on the laser beams' path. Large gate valves are at both ends of the arms so work can be done in the mirror-vacuum towers without breaking an arm's ultra-high vacuum. The towers containing the mirrors and attenuators are split into two sections, with different pressures. The tubes undergo a process, known as baking, in which they are heated to to remove unwanted particles from their surfaces; although the towers were also baked in the initial Virgo design, cryogenic traps are now used to prevent contamination.
Due to the interferometer's high power, its mirrors are susceptible to the effects of heating induced by the laser (despite extremely low absorption). These effects can cause deformation of the surface due to dilation or a change in refractive index of the substrate, resulting in power escaping from the interferometer and perturbations of the signal. These effects are accounted for by a thermal compensation system (TCS) which includes Hartmann wavefront sensors to measure optical aberration through an auxiliary light source, and two actuators: CO2 lasers (which heat parts of the mirror to correct the defects) and ring heaters, which adjust the mirror's radius of curvature. The system also corrects "cold defects": permanent defects introduced during mirror manufacture. During the O3 run, the TCS increased power inside the interferometer by 15 per cent and decreased power leaving the interferometer by a factor of two.
Another important component is the system for controlling stray light (any light leaving the interferometer's designated path) by scattering on a surface or from unwanted reflection. Recombination of stray light with the interferometer's main beam can be a significant noise source, often difficult to track and model. Most efforts to mitigate stray light are based on absorbing plates (known as baffles) placed near the optics and within the tubes; additional precautions are taken to prevent the baffles from affecting interferometer operation.
Calibration is required to estimate the detector's response to gravitational waves and correctly reconstruct the signal. It involves moving the mirrors in a controlled way and measuring the result. During the initial Virgo era, this was primarily achieved by shaking the pendulum from which the mirror is suspended, using coils to generate a magnetic field interacting with magnets fixed to the pendulum; this technique was used until O2. For O3, the primary calibration method became photon calibration (PCal), which had previously been a secondary method used to validate the results; it uses an auxiliary laser to displace the mirror through radiation pressure. A method known as Newtonian calibration (NCal) was introduced at the end of O2 to validate the PCal results; it relies on gravity to move the mirror, placing a rotating mass at a specific distance from it. At the beginning of the second part of O4, NCal became the main calibration method because it performed better than PCal; PCal is still used to validate NCal results and to probe higher frequencies which are inaccessible to NCal.
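The physics behind photon calibration is ordinary radiation pressure: an auxiliary beam of power P reflecting off the mirror at incidence angle θ exerts a force F = 2P·cos(θ)/c. The sketch below evaluates this with assumed numbers (the laser power, angle, mirror mass and modulation frequency are illustrative, not Virgo's actual calibration parameters).

```python
# Sketch of the radiation-pressure force used in photon calibration (PCal).
# All numerical values are assumptions chosen for illustration.
import math

c = 299_792_458.0          # speed of light, m/s
P = 1.0                    # assumed auxiliary-laser power, W
theta = math.radians(10)   # assumed angle of incidence
M = 40.0                   # assumed mirror mass, kg
f = 100.0                  # assumed modulation frequency, Hz

F = 2 * P * math.cos(theta) / c        # radiation-pressure force, N
# Well above the pendulum resonance, the suspended mirror responds to the
# modulated force like a free mass: x = F / (M * (2*pi*f)^2).
x = F / (M * (2 * math.pi * f) ** 2)

print(f"force:        {F:.2e} N")   # ~6.6e-9 N
print(f"displacement: {x:.2e} m")   # ~4e-16 m at 100 Hz
```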
The instrument requires an efficient data-acquisition system which manages data measured at the interferometer's output and from sensors on the site, writing it in files and distributing the files for data analysis. Dedicated electronic hardware and software have been developed for this purpose.
Noise and sensitivity
Noise sources
The Virgo detector is sensitive to several noise sources which limit its ability to detect gravitational-wave signals. Some have large frequency ranges and limit the overall sensitivity of the detector, such as:
seismic noise (any ground motion from sources such as waves in the Mediterranean Sea, wind, or human activity), generally at low frequencies up to about 10 hertz (Hz)
thermal noise of the mirrors and their suspension wires, corresponding to the thermal agitation of the mirror or its suspension, from a few tens to a few hundred Hz
quantum noise, which includes laser shot noise corresponding to fluctuation in power received by the photodiodes and relevant above a few hundred Hz, and radiation pressure noise corresponding to pressure by the laser on the mirror (relevant at low frequency)
Newtonian noise, caused by tiny fluctuations in the Earth's gravitational field which affect the position of the mirror; relevant below 20 Hz
In addition to these broad noise sources, others may affect specific frequencies. These include a source at 50 Hz (and harmonics at 100, 150, and 200 Hz), corresponding to the frequency of the European power grid; "violin modes" at 300 Hz (and several harmonics), corresponding to the resonant frequency of the suspension fibres (which can vibrate at a specific frequency, as the strings of a violin do); and calibration lines, appearing when mirrors are moved for calibration.
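Narrow spectral lines like the 50 Hz mains line can be suppressed during data conditioning with notch filters. The following generic SciPy sketch (not a Virgo pipeline; the sampling rate and line amplitude are invented for the demonstration) shows the idea:

```python
# Generic demonstration: removing a narrow 50 Hz line from a noisy time
# series with an IIR notch filter. Real detector pipelines use more
# sophisticated line tracking and subtraction.
import numpy as np
from scipy import signal

fs = 4096.0                             # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
data = np.random.normal(size=t.size) + 5 * np.sin(2 * np.pi * 50 * t)

b, a = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)  # notch centred on 50 Hz
cleaned = signal.filtfilt(b, a, data)           # zero-phase filtering

print(f"RMS before: {data.std():.2f}, after: {cleaned.std():.2f}")
```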
Additional noise sources may have a short-term impact; bad weather or earthquakes may temporarily increase the noise level. Short-lived artefacts may appear in the data due to many possible instrumental issues, and are usually referred to as "glitches". It is estimated that about 20 per cent of detected events are impacted by glitches, requiring specific data-processing methods to mitigate their impact.
Detector sensitivity
Sensitivity depends on frequency, and is usually represented as a curve corresponding to the noise power spectrum (or amplitude spectrum, the square root of the power spectrum); the lower the curve, the greater the sensitivity. Virgo is a wide-band detector whose sensitivity ranges from a few Hz to 10 kHz; a 2011 Virgo sensitivity curve is plotted with a log-log scale.
The most common measure of gravitational-wave-detector sensitivity is the horizon distance, defined as the distance at which a reference target produces a signal-to-noise ratio of 8 in the detector. The reference is usually a binary neutron star with both components having a mass of 1.4 solar masses; the distance is generally expressed in megaparsecs. The range for Virgo during the O3 run was between 40 and 50 Mpc. This range is an indicator, not a maximal range for the detector; signals from more massive sources will have a larger amplitude, and can be detected from further away.
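This last point follows from a standard scaling argument: for inspiral signals, the signal-to-noise ratio at a fixed distance grows roughly as the chirp mass of the binary to the 5/6 power, so the detectable distance grows the same way (a rough rule that breaks down for very massive systems, which merge at lower frequencies). A hedged sketch:

```python
# Rough scaling of detection range with chirp mass, assuming the quoted
# ~50 Mpc binary-neutron-star range and an SNR proportional to
# (chirp mass)^(5/6) at fixed distance.
def chirp_mass(m1, m2):
    """Chirp mass in the same units as m1 and m2 (here, solar masses)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

bns_range_mpc = 50.0                # quoted O3-era range for 1.4+1.4 Msun
mc_bns = chirp_mass(1.4, 1.4)       # ~1.2 solar masses
mc_bbh = chirp_mass(30.0, 30.0)     # ~26 solar masses

scaled = bns_range_mpc * (mc_bbh / mc_bns) ** (5 / 6)
print(f"approximate 30+30 Msun range: {scaled:.0f} Mpc")  # several hundred Mpc
```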
Calculations indicate that the detector sensitivity roughly scales as 1/(L√P), where L is the arm-cavity length and P the laser power on the beam splitter. To improve it, these quantities must be increased. This is achieved with long arms, optical cavities inside the arms to maximise exposure to the signal, and power recycling to increase the power in the arms.
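A toy evaluation of this scaling, treating the laser powers quoted elsewhere in this article as the power on the beam splitter (a simplification, since power recycling raises the real figure):

```python
# Toy evaluation of the 1/(L*sqrt(P)) sensitivity scaling. The power values
# are the article's quoted laser powers, used here as stand-ins for the
# power on the beam splitter.
def noise_scale(L, P):
    return 1.0 / (L * P ** 0.5)

initial = noise_scale(L=3e3, P=50.0)     # 3 km arms, 50 W
advanced = noise_scale(L=3e3, P=200.0)   # same arms, 200 W design goal
print(f"shot-noise improvement: {initial / advanced:.1f}x")  # sqrt(200/50) = 2
```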
Data analysis
An important part of Virgo collaboration resources is dedicated to the development and deployment of data-analysis software designed to process the detector's output. Apart from the data-acquisition software and tools for distributing the data, the effort is shared with members of the LIGO and KAGRA collaborations as part of the LIGO-Virgo-KAGRA (LVK) collaboration.
Data from the detector is initially only available to LVK members. Segments of data surrounding detected events are released at the publication of the related paper, and the full data is released after a proprietary period (currently 18 months). During the third observing run (O3), this resulted in two separate data releases (O3a and O3b) corresponding to the first and last six months of the run. The data is then generally available on the Gravitational Wave Open Science Center (GWOSC) platform.
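As an example, released data can be retrieved programmatically. A minimal sketch using the community-maintained gwosc and gwpy Python packages (assuming both are installed; "V1" is the detector code for Virgo):

```python
# Minimal sketch: fetch 32 seconds of released Virgo data around GW170817
# from the Gravitational Wave Open Science Center (requires network access).
from gwosc.datasets import event_gps
from gwpy.timeseries import TimeSeries

gps = event_gps("GW170817")  # GPS time of the event
data = TimeSeries.fetch_open_data("V1", gps - 16, gps + 16)
print(data.sample_rate, data.duration)
```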
Analysis of the data requires a variety of techniques targeting different types of sources. Most of the effort is dedicated to the detection and analysis of mergers of compact objects, the only type of source detected so far. Analysis software runs over the data in near real time in search of this type of event, and a dedicated infrastructure sends alerts to the wider astronomical community. Other efforts are carried out after the data-acquisition period (offline), including searches for continuous sources, for a stochastic background, and deeper analyses of detected events.
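The workhorse technique for finding compact-object mergers is matched filtering: correlating the data against waveform templates and looking for a peak. The sketch below is a heavily simplified, self-contained illustration with a synthetic chirp; real pipelines whiten the data, work in the frequency domain, and scan large template banks.

```python
# Schematic matched filter on synthetic data: a chirp-like template is
# injected into white noise and recovered by correlation.
import numpy as np

rng = np.random.default_rng(0)
fs = 1024                                   # samples per second
t = np.arange(0, 1, 1 / fs)
template = np.sin(2 * np.pi * (50 + 100 * t) * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
data = rng.normal(size=t.size) + np.roll(template, 100)  # signal shifted by ~0.1 s

snr = np.correlate(data, template, mode="same") / np.sqrt(np.sum(template ** 2))
peak = np.argmax(np.abs(snr))
print(f"peak SNR ~ {abs(snr[peak]):.1f} at t = {t[peak]:.2f} s")  # near 0.6 s
```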
Scientific results
Virgo first detected a gravitational-wave signal during the second observation run (O2) of the "advanced" era; only the two LIGO detectors were operating during the first observation run. The event, named GW170814, was a coalescence between two black holes. It was the first event detected by three different detectors, allowing for greatly improved localisation compared with events from the first observation run. It also allowed the first conclusive measurement of gravitational-wave polarisation, providing evidence against polarisations other than those predicted by general relativity.
It was soon followed by the better-known GW170817, the first merger of two neutron stars detected by the gravitational-wave network and (as of ) the only event with a confirmed electromagnetic counterpart, detected in the gamma-ray, optical, radio and X-ray domains. No signal was observed in Virgo, but this absence was crucial to constraining the event's localisation more tightly, since it made it possible to exclude the regions of the sky where the signal would have been visible in Virgo data. The follow-up campaign, involving over 4,000 astronomers, improved the understanding of neutron-star mergers and put tight constraints on the speed of gravity.
Several searches for continuous gravitational waves have been performed on data from past runs. O3-run searches include an all-sky search, targeted searches toward Scorpius X-1 and several known pulsars (including the Crab and Vela pulsars), and a directed search towards the supernova remnants Cassiopeia A and Vela Jr. and the Galactic Center. Although none of the searches identified a signal, upper limits could be set on some parameters; in particular, on how far nearby known pulsars can deviate from perfect spinning spheres.
Virgo was included in the latest search for a gravitational-wave background with LIGO, combining the results of O3 with the O1 and O2 runs (which only used LIGO data). No stochastic background was observed, improving previous constraints on the energy of the background by an order of magnitude.
Broad estimates of the Hubble constant have also been obtained; the current best estimate is 68 km s⁻¹ Mpc⁻¹, combining results from binary black holes and the GW170817 event. This result is consistent with other estimates of the constant, but not precise enough to resolve the current debates about its exact value.
Outreach
The Virgo Collaboration participates in several activities promoting communication and education about gravitational waves for the general public. One example is the guided tours of the Virgo facilities for schools, universities, and the public; however, many of the outreach activities take place outside the Virgo site. These include public lectures and courses about Virgo's activities, participation in science festivals, and the development of methods and devices for the public understanding of gravitational waves and related topics. The Collaboration is involved in several artistic projects, ranging from visual projects such as "The Rhythm of Space" at the Museo della Grafica in Pisa and "On Air" at the Palais de Tokyo to concerts, and it carries out activities promoting gender equality in science, highlighting the women working in Virgo in its communications to the general public.
| Technology | Ground-based observatories | null |
5144613 | https://en.wikipedia.org/wiki/Depressive%20personality%20disorder | Depressive personality disorder | Depressive personality disorder, also known as melancholic personality disorder, is a former psychiatric diagnosis that denotes a personality disorder with depressive features.
Originally included in the American Psychiatric Association's DSM-II, depressive personality disorder was removed from the DSM-III and DSM-III-R. The most recent description of depressive personality disorder appears in Appendix B of the DSM-IV-TR. Although no longer listed as a personality disorder in the DSM-5, the subclinical diagnoses of Other Specified Personality Disorder and Unspecified Personality Disorder can be used instead.
While depressive personality disorder shares some similarities with mood disorders such as dysthymia, it also shares many similarities with other personality disorders including avoidant personality disorder. Some researchers argue that depressive personality disorder is sufficiently distinct from these other conditions so as to warrant a separate diagnosis.
Characteristics
The DSM-IV defines depressive personality disorder as "a pervasive pattern of depressive cognitions and behaviors beginning by early adulthood and occurring in a variety of contexts." Depressive personality disorder occurs independently of major depressive episodes, making it a distinct diagnosis not included in the definition of either major depressive episodes or dysthymia.
Five or more of the following criteria must be present:
usual mood is dominated by dejection, gloominess, cheerlessness, joylessness and unhappiness
self-concept centers on beliefs of inadequacy, worthlessness, and low self-esteem
is critical, blaming, and derogatory towards self
is brooding and given to worry
is negativistic, critical and judgmental toward others
is pessimistic
is prone to feeling guilty or remorseful
Studies conducted between 2000 and 2002 found a higher rate of dysthymia among people with depressive personality disorder than in a comparable group of people without depressive personality disorder.
Millon's subtypes
Theodore Millon identified five subtypes of depression; any individual depressive may exhibit none, one, or several of them.
Not all patients with a depressive disorder fall into a subtype. These subtypes are multidimensional in that patients usually experience multiple subtypes, instead of being limited to fitting into one subtype category. Currently, this set of subtypes is associated with melancholic personality disorders. All depression spectrum personality disorders are melancholic and can be looked at in terms of these subtypes.
Differential diagnosis
Similarities to dysthymia
Much of the controversy surrounding the potential inclusion of depressive personality disorder in the DSM-5 stems from its apparent similarities to dysthymia, a diagnosis already included in the DSM-IV. Dysthymia is characterized by a variety of depressive symptoms such as hypersomnia, fatigue, low self-esteem, poor appetite, or difficulty making decisions, for over two years, with symptoms never numerous or severe enough to qualify as major depressive disorder. Patients with dysthymia may experience social withdrawal, pessimism, and feelings of inadequacy at higher rates than other patients with depression. Early-onset dysthymia is the diagnosis most closely related to depressive personality disorder.
The key difference between dysthymia and depressive personality disorder is the focus of the symptoms used to diagnose them. Dysthymia is diagnosed by looking at the somatic, more tangible symptoms, whereas depressive personality disorder is diagnosed by looking at the cognitive and intrapsychic symptoms. The symptoms of the two conditions may look similar at first glance, but the way these symptoms are considered distinguishes the two diagnoses.
Comorbidity with other disorders
Many researchers believe that depressive personality disorder is so highly comorbid with other depressive disorders, manic-depressive episodes, and dysthymia that it is redundant to include it as a distinct diagnosis. Recent studies, however, have found that dysthymia and depressive personality disorder are not as comorbid as previously thought: almost two-thirds of test subjects with depressive personality disorder did not have dysthymia, and 83% did not have early-onset dysthymia.
The comorbidity with Axis I depressive disorders is not as high as had been assumed. An experiment conducted by American psychologists showed that depressive personality disorder shows a high comorbidity rate with major depression experienced at some point in a lifetime and with any mood disorders experienced at any point in a lifetime. A high comorbidity rate with these disorders is expected of many diagnoses. As for the extremely high comorbidity rate with mood disorders, it has been found that essentially all mood disorders are comorbid with at least one other, especially when looking at a lifetime sample size.
| Biology and health sciences | Mental disorders | Health |
5146410 | https://en.wikipedia.org/wiki/Phalangiotarbida | Phalangiotarbida | Phalangiotarbida is an extinct arachnid order first recorded from the Early Devonian of Germany and most widespread in the Upper Carboniferous coal measures of Europe and North America. The last species are known from the early Permian Rotliegend of Germany.
The affinities of phalangiotarbids are obscure, with most authors favouring affinities with Opiliones (harvestmen) and/or Acari (mites and ticks). Phalangiotarbida has recently (2004) been proposed to be the sister group of (Palpigradi + Tetrapulmonata): the taxon Megoperculata sensu Shultz (1990).
Nemastomoides depressus, described as a harvestman in the family Nemastomoididae, is actually a poorly preserved phalangiotarbid.
Taxa included
Family Anthracotarbidae Kjellesvig-Waering, 1969
Genus Anthracotarbus Kjellesvig-Waering, 1969
Species Anthracotarbus hintoni Kjellesvig-Waering, 1969
Family Architarbidae Karsch, 1882
Genus Architarbus Scudder, 1868
Species Architarbus hoffmanni Guthörl, 1934 (Jr synonyms Opiliotarbus kliveri Waterlot, 1934; Goniotarbus sarana Guthörl, 1965)
Species Architarbus minor Petrunkevitch, 1913
Species Architarbus rotundatus Scudder, 1868
Genus Bornatarbus Rößler & Schneider, 1997
Species Bornatarbus mayasii Haupt in Nindel, 1955
Genus Devonotarbus Poschmann, Anderson & Dunlop, 2005
Species Devonotarbus hombachensis Poschmann, Anderson & Dunlop, 2005
Genus Discotarbus Petrunkevitch, 1913
Species Discotarbus deplanatus Petrunkevitch, 1913
Genus Geratarbus Scudder, 1890
Species Geratarbus lacoei Scudder, 1890
Species Geratarbus bohemicus Petrunkevitch, 1953
Genus Goniotarbus Petrunkevitch, 1953
Species Goniotarbus angulatus Pocock, 1911
Species Goniotarbus tuberculatus Pocock, 1911
Genus Hadrachne Melander, 1903
Species Hadrachne horribilis Melander, 1903
Genus Leptotarbus Petrunkevitch, 1945
Species Leptotarbus torpedo Pocock, 1911
Genus Mesotarbus Petrunkevitch, 1949
Species Mesotarbus angustus Pocock, 1911
Species Mesotarbus eggintoni Pocock, 1911
Species Mesotarbus hindi Pocock, 1911
Species Mesotarbus intermedius Petrunkevitch, 1949
Species Mesotarbus peteri Dunlop & Horrocks, 1997
Genus Metatarbus Petrunkevitch, 1913
Species Metatarbus triangularus Petrunkevitch, 1913
Genus Ootarbus Petrunkevitch, 1945
Species Ootarbus pulcher Petrunkevitch, 1945
Species Ootarbus ovatus Petrunkevitch, 1945
Genus Orthotarbus Petrunkevitch, 1945
Species Orthotarbus minutus Petrunkevitch, 1913
Species Orthotarbus robustus Petrunkevitch, 1945
Species Orthotarbus nyranensis Petrunkevitch, 1953
Genus Paratarbus Petrunkevitch, 1945
Species Paratarbus carbonarius Petrunkevitch, 1945
Genus Phalangiotarbus Haase, 1890
Species Phalangiotarbus subovalis Woodward, 1872
Genus Pycnotarbus Darber, 1990
Species Pycnotarbus verrucosus Darber, 1990
Genus Triangulotarbus Patrick, 1989
Species Triangulotarbus terrehautensis Patrick, 1989
Family Heterotarbidae Petrunkevitch, 1913
Genus Heterotarbus Petrunkevitch, 1913
Species Heterotarbus ovatus Petrunkevitch, 1913
Family Opiliotarbidae Petrunkevitch, 1949
Genus Opiliotarbus Pocock, 1910
Species Opiliotarbus elongatus Scudder, 1890
nomina dubia
Eotarbus litoralis Kušta, 1888
Nemastomoides depressus Petrunkevitch, 1913
| Biology and health sciences | Prehistoric arachnids | Animals |
5146476 | https://en.wikipedia.org/wiki/Reproductive%20isolation | Reproductive isolation | The mechanisms of reproductive isolation are a collection of evolutionary mechanisms, behaviors and physiological processes critical for speciation. They prevent members of different species from producing offspring, or ensure that any offspring are sterile. These barriers maintain the integrity of a species by reducing gene flow between related species.
The mechanisms of reproductive isolation have been classified in a number of ways. Zoologist Ernst Mayr classified the mechanisms of reproductive isolation in two broad categories: pre-zygotic for those that act before fertilization (or before mating in the case of animals) and post-zygotic for those that act after it. The mechanisms are genetically controlled and can appear in species whose geographic distributions overlap (sympatric speciation) or are separate (allopatric speciation).
Pre-zygotic isolation
Pre-zygotic isolation mechanisms are the most economic in terms of the natural selection of a population, as resources are not wasted on the production of a descendant that is weak, non-viable or sterile. These mechanisms include physiological or systemic barriers to fertilization.
Temporal or habitat isolation
Any of the factors that prevent potentially fertile individuals from meeting will reproductively isolate the members of distinct species. The types of barriers that can cause this isolation include: different habitats, physical barriers, and a difference in the time of sexual maturity or flowering.
An example of the ecological or habitat differences that impede the meeting of potential pairs occurs in two fish species of the family Gasterosteidae (sticklebacks). One species lives all year round in fresh water, mainly in small streams. The other species lives in the sea during winter, but in spring and summer individuals migrate to river estuaries to reproduce. The members of the two populations are reproductively isolated due to their adaptations to distinct salt concentrations.
An example of reproductive isolation due to differences in the mating season is found in the toad species Bufo americanus and Bufo fowleri. The members of these species can be successfully crossed in the laboratory, producing healthy, fertile hybrids. However, mating does not occur in the wild, even though the geographical distributions of the two species overlap, because B. americanus mates in early summer and B. fowleri in late summer.
Certain plant species, such as Tradescantia canaliculata and T. subaspera, are sympatric throughout their geographic distribution, yet they are reproductively isolated as they flower at different times of the year. In addition, one species grows in sunny areas and the other in deeply shaded areas.
Behavioral isolation
The different mating rituals of animal species create extremely powerful reproductive barriers, termed sexual or behavioral isolation, that separate apparently similar species in the majority of the groups of the animal kingdom. In dioecious species, males and females have to search for a partner, be in proximity to each other, carry out the complex mating rituals and finally copulate or release their gametes into the environment in order to breed.
Mating dances, the songs of males to attract females, and the mutual grooming of pairs are all examples of typical courtship behavior that allows both recognition and reproductive isolation. This is because each of the stages of courtship depends on the behavior of the partner. The male will only move on to the second stage of the display if the female shows certain responses in her behavior, and will only pass to the third stage when she displays a second key behavior. The behaviors of both interlink, are synchronized in time, and lead finally to copulation or the liberation of gametes into the environment. No animal that is not physiologically suitable for fertilization can complete this demanding chain of behavior. In fact, the smallest difference in the courting patterns of two species is enough to prevent mating (for example, a specific song pattern acts as an isolation mechanism between distinct species of grasshopper of the genus Chorthippus).
Even where there are minimal morphological differences between species, differences in behavior can be enough to prevent mating. For example, Drosophila melanogaster and D. simulans which are considered twin species due to their morphological similarity, do not mate even if they are kept together in a laboratory. Drosophila ananassae and D. pallidosa are twin species from Melanesia. In the wild they rarely produce hybrids, although in the laboratory it is possible to produce fertile offspring. Studies of their sexual behavior show that the males court the females of both species but the females show a marked preference for mating with males of their own species. A different regulator region has been found on Chromosome II of both species that affects the selection behavior of the females.
Pheromones play an important role in the sexual isolation of insect species. These compounds serve to identify individuals of the same species and of the same or different sex. Evaporated molecules of volatile pheromones can serve as a wide-reaching chemical signal. In other cases, pheromones may be detected only at a short distance or by contact.
In species of the melanogaster group of Drosophila, the pheromones of the females are mixtures of different compounds, and there is a clear dimorphism in the type and/or quantity of compounds present for each sex. In addition, there are differences in the quantity and quality of constituent compounds between related species; it is assumed that the pheromones serve to distinguish between individuals of each species. An example of the role of pheromones in sexual isolation is found in 'corn borers' of the genus Ostrinia. There are two twin species in Europe that occasionally cross. The females of both species produce pheromones that contain a volatile compound which has two isomers, E and Z; 99% of the compound produced by the females of one species is the E isomer, while the females of the other produce 99% Z isomer. The production of the compound is controlled by just one locus, and the interspecific hybrid produces an equal mix of the two isomers. The males, for their part, almost exclusively detect the isomer emitted by the females of their own species, such that hybridization, although possible, is scarce. The perception of the males is controlled by a gene distinct from the one for the production of isomers; heterozygous males show a moderate response to the odour of either type. In this case, just two loci produce the effect of ethological isolation between species that are genetically very similar.
Sexual isolation between two species can be asymmetrical. This can happen when the mating that produces descendants only allows one of the two species to function as the female progenitor and the other as the male, while the reciprocal cross does not occur. For instance, half of the wolves tested in the Great Lakes area of America show mitochondrial DNA sequences of coyotes, while mitochondrial DNA from wolves is never found in coyote populations. This probably reflects an asymmetry in inter-species mating due to the difference in size of the two species as male wolves take advantage of their greater size in order to mate with female coyotes, while female wolves and male coyotes do not mate.
Mechanical isolation
Mating pairs may not be able to couple successfully if their genitals are not compatible. The relationship between the reproductive isolation of species and the form of their genital organs was first pointed out in 1844 by the French entomologist Léon Dufour. Insects' rigid carapaces act in a manner analogous to a lock and key, as they will only allow mating between individuals with complementary structures, that is, males and females of the same species (termed co-specifics).
Evolution has led to the development of genital organs with increasingly complex and divergent characteristics, which cause mechanical isolation between species. Certain characteristics of the genital organs have often converted them into mechanisms of isolation. However, numerous studies show that organs that are anatomically very different can be functionally compatible, indicating that other factors also determine the form of these complicated structures.
Mechanical isolation also occurs in plants and this is related to the adaptation and coevolution of each species in the attraction of a certain type of pollinator (where pollination is zoophilic) through a collection of morphophysiological characteristics of the flowers (called pollination syndrome), in such a way that the transport of pollen to other species does not occur.
Gametic isolation
The synchronous spawning of many species of coral in marine reefs means that inter-species hybridization can take place as the gametes of hundreds of individuals of tens of species are liberated into the same water at the same time. Approximately a third of all the possible crosses between species are compatible, in the sense that the gametes will fuse and lead to individual hybrids. This hybridization apparently plays a fundamental role in the evolution of coral species. However, the other two-thirds of possible crosses are incompatible. It has been observed that in sea urchins of the genus Strongylocentrotus the concentration of spermatocytes that allow 100% fertilization of the ovules of the same species is only able to fertilize 1.5% of the ovules of other species. This inability to produce hybrid offspring, despite the fact that the gametes are found at the same time and in the same place, is due to a phenomenon known as gamete incompatibility, which is often found between marine invertebrates, and whose physiological causes are not fully understood.
In some Drosophila crosses, the swelling of the female's vagina has been noted following insemination. This has the effect of consequently preventing the fertilization of the ovule by sperm of a different species.
In plants, the pollen grains of one species can germinate on the stigma and grow in the style of other species. However, the growth of the pollen tubes may be detained at some point between the stigma and the ovules, in such a way that fertilization does not take place. This mechanism of reproductive isolation is common in the angiosperms and is called cross-incompatibility or incongruence. A relationship exists between self-incompatibility and the phenomenon of cross-incompatibility. In general, crosses between individuals of a self-compatible (SC) species and individuals of a self-incompatible (SI) species give hybrid offspring. On the other hand, the reciprocal cross (SI x SC) will not produce offspring, because the pollen tubes will not reach the ovules. This is known as unilateral incompatibility, which also occurs when two SC or two SI species are crossed.
Post-zygotic isolation
A number of mechanisms which act after fertilization, preventing successful inter-population crossing, are discussed below.
Zygote mortality and non-viability of hybrids
A type of incompatibility that is found as often in plants as in animals occurs when the egg or ovule is fertilized but the zygote does not develop, or it develops and the resulting individual has a reduced viability. This is the case for crosses between species of the frog order, where widely differing results are observed depending upon the species involved. In some crosses there is no segmentation of the zygote (or it may be that the hybrid is extremely non-viable and changes occur from the first mitosis). In others, normal segmentation occurs in the blastula but gastrulation fails. Finally, in other crosses, the initial stages are normal but errors occur in the final phases of embryo development. This indicates differentiation of the embryo development genes (or gene complexes) in these species and these differences determine the non-viability of the hybrids.
Similar results are observed in mosquitoes of the genus Culex, but the differences are seen between reciprocal crosses, from which it is concluded that the same effect occurs in the interaction between the genes of the cell nucleus (inherited from both parents) as occurs in the genes of the cytoplasmic organelles which are inherited solely from the female progenitor through the cytoplasm of the ovule.
In Angiosperms, the successful development of the embryo depends on the normal functioning of its endosperm.
The failure of endosperm development and its subsequent abortion have been observed in many interploidal crosses (that is, those between populations with a particular degree of intra- or interspecific ploidy), and in certain crosses in species with the same level of ploidy. The collapse of the endosperm, and the subsequent abortion of the hybrid embryo, is one of the most common post-fertilization reproductive isolation mechanisms found in angiosperms.
Hybrid sterility
A hybrid may have normal viability but is typically deficient in terms of reproduction or is sterile. This is demonstrated by the mule and in many other well known hybrids. In all of these cases sterility is due to the interaction between the genes of the two species involved; to chromosomal imbalances due to the different number of chromosomes in the parent species; or to nucleus-cytoplasmic interactions such as in the case of Culex described above.
Hinnies and mules are hybrids resulting from a cross between a male horse and a female donkey, or between a male donkey and a mare, respectively. These animals are nearly always sterile due to the difference in the number of chromosomes between the two parent species. Both horses and donkeys belong to the genus Equus, but Equus caballus has 64 chromosomes while Equus asinus has only 62. A cross will therefore produce offspring (a mule or hinny) with 63 chromosomes (32 from one gamete and 31 from the other), which cannot form pairs and so do not divide in a balanced manner during meiosis. In the wild, horses and donkeys ignore each other and do not cross. In order to obtain mules or hinnies, it is necessary to train the progenitors to accept copulation between the species or to create them through artificial insemination.
The sterility of many interspecific hybrids in angiosperms has been widely recognised and studied.
Interspecific sterility of hybrids in plants has multiple possible causes. These may be genetic, related to the genomes, or the interaction between nuclear and cytoplasmic factors, as will be discussed in the corresponding section. Nevertheless, in plants, hybridization is a stimulus for the creation of new species – the contrary to the situation in animals.
Although the hybrid may be sterile, it can continue to multiply in the wild by asexual reproduction, whether vegetative propagation or apomixis or the production of seeds.
Indeed, interspecific hybridization can be associated with polyploidy and, in this way, with the origin of new species that are called allopolyploids. Rosa canina, for example, is the result of multiple hybridizations, and the common wheat (Triticum aestivum) is an allohexaploid (an allopolyploid with six chromosome sets) that contains the genomes of three different species.
Multiple mechanisms
In general, the barriers that separate species do not consist of just one mechanism. The twin species of Drosophila, D. pseudoobscura and D. persimilis, are isolated from each other by habitat (persimilis generally lives in colder regions at higher altitudes), by the timing of the mating season (persimilis is generally more active in the morning and pseudoobscura at night) and by behavior during mating (the females of both species prefer the males of their respective species). In this way, although the distributions of these species overlap in wide areas of the west of the United States, these isolation mechanisms are sufficient to keep the species separated, such that only a few fertile hybrid females have been found among the thousands of individuals analyzed. However, when hybrids are produced between both species, the gene flow between the two continues to be impeded because the hybrid males are sterile. Also, in contrast with the great vigor shown by the sterile males, the descendants of backcrosses between the hybrid females and the parent species are weak and notoriously non-viable. This last mechanism restricts even further the genetic interchange between the two species of fly in the wild.
Hybrid sex: Haldane's rule
Haldane's rule states that when, in interspecific hybrids between two species, one of the two sexes is absent, rare, or sterile, it is the heterozygous (or heterogametic) sex. In mammals, at least, there is growing evidence to suggest that this is due to high rates of mutation of the genes determining masculinity in the Y chromosome.
It has been suggested that Haldane's rule simply reflects the fact that the male sex is more sensitive than the female when the sex-determining genes are included in a hybrid genome. But there are also organisms in which the heterozygous sex is the female, such as birds and butterflies, and the rule is followed in these organisms as well. Therefore, the problem is related neither to sexual development nor to the sex chromosomes as such. Haldane proposed that the stability of hybrid individual development requires the full gene complement of each parent species, so that the hybrid of the heterozygous sex is unbalanced (i.e. missing at least one chromosome from each of the parental species). For example, the hybrid male obtained by crossing D. melanogaster females with D. simulans males, which is non-viable, lacks the X chromosome of D. simulans.
Genetics
Pre-copulatory mechanisms in animals
The genetics of ethological isolation barriers will be discussed first. Pre-copulatory isolation occurs when the genes necessary for the sexual reproduction of one species differ from the equivalent genes of another species, such that if a male of species A and a female of species B are placed together they are unable to copulate. Study of the genetics involved in this reproductive barrier tries to identify the genes that govern distinct sexual behaviors in the two species. The males of Drosophila melanogaster and those of D. simulans conduct an elaborate courtship with their respective females, which are different for each species, but the differences between the species are more quantitative than qualitative. In fact, the simulans males are able to hybridize with melanogaster females. Although there are lines of the latter species that cross easily, there are others that are hardly able to. Using this difference, it is possible to assess the minimum number of genes involved in pre-copulatory isolation between the melanogaster and simulans species and their chromosomal locations.
In experiments, flies of the D. melanogaster line, which hybridizes readily with simulans, were crossed with another line that it does not hybridize with, or rarely. The females of the segregated populations obtained by this cross were placed next to simulans males and the percentage of hybridization was recorded, which is a measure of the degree of reproductive isolation. It was concluded from this experiment that 3 of the 8 chromosomes of the haploid complement of D. melanogaster carry at least one gene that affects isolation, such that substituting one chromosome from a line of low isolation with another of high isolation reduces the hybridization frequency. In addition, interactions between chromosomes are detected so that certain combinations of the chromosomes have a multiplying effect.
Cross incompatibility or incongruence in plants is also determined by major genes that are not associated at the self-incompatibility S locus.
Post-copulation or fertilization mechanisms in animals
Reproductive isolation between species appears, in certain cases, a long time after fertilization and the formation of the zygote, as happens – for example – in the twin species Drosophila pavani and D. gaucha. The hybrids between both species are not sterile, in the sense that they produce viable gametes, ovules and spermatozoa. However, they cannot produce offspring as the sperm of the hybrid male do not survive in the semen receptors of the females, be they hybrids or from the parent lines. In the same way, the sperm of the males of the two parent species do not survive in the reproductive tract of the hybrid female. This type of post-copulatory isolation appears as the most efficient system for maintaining reproductive isolation in many species.
The development of a zygote into an adult is a complex and delicate process of interactions between genes and the environment that must be carried out precisely, and if there is any alteration in the usual process, caused by the absence of a necessary gene or the presence of a different one, it can arrest the normal development causing the non-viability of the hybrid or its sterility. It should be borne in mind that half of the chromosomes and genes of a hybrid are from one species and the other half come from the other. If the two species are genetically different, there is little possibility that the genes from both will act harmoniously in the hybrid. From this perspective, only a few genes would be required in order to bring about post copulatory isolation, as opposed to the situation described previously for pre-copulatory isolation.
In many species where pre-copulatory reproductive isolation does not exist, hybrids are produced but they are of only one sex. This is the case for the hybridization between females of Drosophila simulans and Drosophila melanogaster males: the hybridized females die early in their development so that only males are seen among the offspring. However, populations of D. simulans have been recorded with genes that permit the development of adult hybrid females, that is, the viability of the females is "rescued". It is assumed that the normal activity of these speciation genes is to "inhibit" the expression of the genes that allow the growth of the hybrid. There will also be regulator genes.
A number of these genes have been found in the melanogaster species group. The first to be discovered was "Lhr" (Lethal hybrid rescue) located in Chromosome II of D. simulans. This dominant allele allows the development of hybrid females from the cross between simulans females and melanogaster males. A different gene, also located on Chromosome II of D. simulans is "Shfr" that also allows the development of female hybrids, its activity being dependent on the temperature at which development occurs. Other similar genes have been located in distinct populations of species of this group. In short, only a few genes are needed for an effective post copulatory isolation barrier mediated through the non-viability of the hybrids.
As important as identifying an isolation gene is knowing its function. The Hmr gene, linked to the X chromosome and implicated in the viability of male hybrids between D. melanogaster and D. simulans, is a gene from the proto-oncogene family myb that codes for a transcriptional regulator. Two variants of this gene function perfectly well in each separate species, but in the hybrid they do not function correctly, possibly due to the different genetic background of each species. Examination of the allele sequences of the two species shows that non-synonymous (protein-changing) substitutions are more abundant than synonymous substitutions, suggesting that this gene has been subject to intense natural selection.
The Dobzhansky–Muller model proposes that reproductive incompatibilities between species are caused by the interaction of the genes of the respective species. It has been demonstrated recently that Lhr has functionally diverged in D. simulans and will interact with Hmr which, in turn, has functionally diverged in D. melanogaster, to cause the lethality of the male hybrids. Lhr is located in a heterochromatic region of the genome and its sequence has diverged between these two species in a manner consistent with the mechanisms of positive selection. An important unanswered question is whether the genes detected correspond to old genes that initiated speciation by favoring hybrid non-viability, or to modern genes that appeared post-speciation by mutation, are not shared by the different populations, and suppress the effect of the primitive non-viability genes. The OdsH (abbreviation of Odysseus) gene causes partial sterility in the hybrid between Drosophila simulans and a related species, D. mauritiana, which is only encountered on Mauritius and is of recent origin. This gene shows monophyly in both species and also has been subject to natural selection. It is thought to be a gene that intervenes in the initial stages of speciation, while other genes that differentiate the two species show polyphyly. OdsH originated by duplication in the genome of Drosophila and has evolved at very high rates in D. mauritiana, while its paralogue, unc-4, is nearly identical between the species of the melanogaster group. Seemingly, all these cases illustrate the manner in which speciation mechanisms originated in nature; therefore they are collectively known as "speciation genes": gene sequences with a normal function within the populations of a species that diverge rapidly in response to positive selection, thereby forming reproductive isolation barriers with other species. In general, all these genes have functions in the transcriptional regulation of other genes.
The Nup96 gene is another example of the evolution of genes implicated in post-copulatory isolation. It regulates the production of one of the approximately 30 proteins required to form a nuclear pore. In each of the species of the simulans group of Drosophila, the protein from this gene interacts with the protein from another, as yet undiscovered, gene on the X chromosome to form a functioning pore. However, in a hybrid, the pore that is formed is defective and causes sterility. The differences in the sequences of Nup96 have been subject to adaptive selection, similar to the other examples of speciation genes described above.
Post-copulatory isolation can also arise between chromosomally differentiated populations due to chromosomal translocations and inversions. If, for example, a reciprocal translocation is fixed in a population, the hybrid produced between this population and one that does not carry the translocation will not undergo complete meiosis. This results in the production of unequal gametes containing unequal numbers of chromosomes, with reduced fertility. In certain cases, translocations exist that involve more than two chromosomes, so that the meiosis of the hybrids is irregular and their fertility is zero or nearly zero. Inversions can also give rise to abnormal gametes in heterozygous individuals, but this effect has little importance compared to translocations. An example of chromosomal changes causing sterility in hybrids comes from the study of Drosophila nasuta and D. albomicans, which are twin species from the Indo-Pacific region. There is no sexual isolation between them, and the F1 hybrid is fertile. However, the F2 hybrids are relatively infertile and leave few descendants, which have a skewed sex ratio. The reason is that the X chromosome of albomicans is translocated and linked to an autosome, which causes abnormal meiosis in hybrids. Robertsonian translocations are variations in the numbers of chromosomes that arise either from the fusion of two acrocentric chromosomes into a single chromosome with two arms, reducing the haploid number, or, conversely, from the fission of one chromosome into two acrocentric chromosomes, increasing the haploid number. The hybrids of two populations with differing numbers of chromosomes can experience a certain loss of fertility, and therefore poor adaptation, because of irregular meiosis.
In plants
A large variety of mechanisms have been demonstrated to reinforce reproductive isolation between closely related plant species that either historically lived or currently live in sympatry. This phenomenon is driven by strong selection against hybrids, typically resulting from instances in which hybrids suffer reduced fitness. Such negative fitness consequences have been proposed to be the result of negative epistasis in hybrid genomes and can also result from the effects of hybrid sterility. In such cases, selection gives rise to population-specific isolating mechanisms to prevent either fertilization by interspecific gametes or the development of hybrid embryos.
Because many sexually reproducing species of plants are exposed to a variety of interspecific gametes, natural selection has given rise to a variety of mechanisms to prevent the production of hybrids. These mechanisms can act at different stages in the developmental process and are typically divided into two categories, pre-fertilization and post-fertilization, indicating at which point the barrier acts to prevent either zygote formation or development. In the case of angiosperms and other pollinated species, pre-fertilization mechanisms can be further subdivided into two more categories, pre-pollination and post-pollination, the difference between the two being whether or not a pollen tube is formed. (Typically when pollen encounters a receptive stigma, a series of changes occur which ultimately lead to the growth of a pollen tube down the style, allowing for the formation of the zygote.) Empirical investigation has demonstrated that these barriers act at many different developmental stages and species can have none, one, or many barriers to hybridization with interspecifics.
Examples of pre-fertilization mechanisms
A well-documented example of a pre-fertilization isolating mechanism comes from study of Louisiana iris species. These iris species were fertilized with interspecific and conspecific pollen loads, and measurements of hybrid progeny success demonstrated that differences in pollen-tube growth between interspecific and conspecific pollen led to a lower fertilization rate by interspecific pollen. This demonstrates how a specific point in the reproductive process is manipulated by a particular isolating mechanism to prevent hybrids.
Another well-documented example of a pre-fertilization isolating mechanism in plants comes from study of two wind-pollinated birch species. Study of these species led to the discovery that mixed conspecific and interspecific pollen loads still result in 98% conspecific fertilization rates, highlighting the effectiveness of such barriers. In this example, pollen-tube incompatibility and slower generative mitosis have been implicated in the post-pollination isolation mechanism.
Examples of post-fertilization mechanisms
Crosses between diploid and tetraploid species of Paspalum provide evidence of a post-fertilization mechanism preventing hybrid formation when pollen from tetraploid species was used to fertilize a female of a diploid species. There were signs of fertilization and even endosperm formation, but subsequently the endosperm collapsed. This demonstrates an early post-fertilization isolating mechanism, in which the early hybrid embryo is detected and selectively aborted. This process can also occur later in development, when developed hybrid seeds are selectively aborted.
Effects of hybrid necrosis
Plant hybrids often suffer from an autoimmune syndrome known as hybrid necrosis. In the hybrids, specific gene products contributed by one of the parents may be inappropriately recognized as foreign and pathogenic, and thus trigger pervasive cell death throughout the plant. In at least one case, a pathogen receptor, encoded by the most variable gene family in plants, was identified as being responsible for hybrid necrosis.
Chromosomal rearrangements in yeast
In brewers' yeast Saccharomyces cerevisiae, chromosomal rearrangements are a major mechanism reproductively isolating different strains. Hou et al. showed that reproductive isolation acts postzygotically and could be attributed to chromosomal rearrangements. These authors crossed 60 natural isolates sampled from diverse niches with the reference strain S288c, identified 16 cases of reproductive isolation with reduced offspring viabilities, and found reciprocal chromosomal translocations in a large fraction of the isolates.
Incompatibility caused by microorganisms
In addition to the genetic causes of reproductive isolation between species, there is another factor that can cause postzygotic isolation: the presence of microorganisms in the cytoplasm of certain species. The presence of these organisms in one species and their absence in another causes the non-viability of the corresponding hybrid. For example, in the semi-species of the group D. paulistorum the hybrid females are fertile but the males are sterile; this is due to the presence of Wolbachia in the cytoplasm, which alters spermatogenesis, leading to sterility. Interestingly, incompatibility or isolation can also arise at an intraspecific level. Populations of D. simulans have been studied that show hybrid sterility according to the direction of the cross. The factor determining sterility was found to be the presence or absence of the microorganism Wolbachia and the populations' tolerance or susceptibility to these organisms. This interpopulation incompatibility can be eliminated in the laboratory by administering a specific antibiotic to kill the microorganism. Similar situations are known in a number of insects, as around 15% of species show infections caused by this symbiont. It has been suggested that, in some cases, the speciation process has taken place because of the incompatibility caused by this bacterium. Two wasp species, Nasonia giraulti and N. longicornis, carry two different strains of Wolbachia. Crosses between an infected population and one free from infection produce a nearly total reproductive isolation between the semi-species. However, if both species are free from the bacteria or both are treated with antibiotics, there is no reproductive barrier. Wolbachia also induces incompatibility due to the weakness of the hybrids in populations of spider mites (Tetranychus urticae), between Drosophila recens and D. subquinaria, and between species of Diabrotica (beetle) and Gryllus (cricket).
Selection
In 1950 K. F. Koopman reported results from experiments designed to examine the hypothesis that selection can increase reproductive isolation between populations. He used D. pseudoobscura and D. persimilis in these experiments. When the flies of these species are kept at 16 °C, approximately a third of the matings are interspecific. In the experiment, equal numbers of males and females of both species were placed in containers suitable for their survival and reproduction. The progeny of each generation were examined to determine whether there were any interspecific hybrids, and these hybrids were then eliminated. Equal numbers of males and females of the resulting progeny were then chosen to act as progenitors of the next generation. Because the hybrids were destroyed in each generation, the flies that mated solely with members of their own species produced more surviving descendants than the flies that mated with individuals of the other species. The number of hybrids decreased continuously in each generation, until by the tenth generation hardly any interspecific hybrids were produced; from the third generation onwards, the proportion of hybrids was below 5%. Selection against the hybrids was thus very effective in increasing reproductive isolation between these species. This confirmed that selection acts to reinforce the reproductive isolation of two genetically divergent populations if the hybrids formed by these species are less well adapted than their parents.
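The logic of Koopman's design can be captured in a toy simulation. The sketch below is a minimal model under assumed parameters, not a reconstruction of the actual experiment: each fly carries a hypothetical heritable "discrimination" value, a mating is interspecific with a probability that falls as that value rises, hybrid offspring are destroyed, and the expected hybrid mating rate is tracked for ten generations.

```python
import random

# Toy model (all parameters are illustrative assumptions): each fly carries a
# heritable "discrimination" value d in [0, 1], and a mating it enters is
# interspecific with probability (1 - d). Hybrid offspring are culled, so only
# conspecific matings leave descendants and the mean value of d rises.

def next_generation(population, size=200):
    survivors = []
    while len(survivors) < size:
        d = random.choice(population)            # pick a parent at random
        if random.random() < (1.0 - d):
            continue                             # interspecific mating: hybrid culled
        child = min(1.0, max(0.0, d + random.gauss(0.0, 0.05)))  # inherit d with mutation
        survivors.append(child)
    return survivors

population = [random.uniform(0.4, 0.9) for _ in range(200)]  # roughly 1/3 hybrid matings at start
for gen in range(1, 11):
    population = next_generation(population)
    print(f"generation {gen}: expected hybrid mating rate "
          f"~{1.0 - sum(population) / len(population):.1%}")
```

Because hybrid offspring are discarded, parents prone to interspecific mating leave fewer descendants, so the printed hybrid rate falls generation by generation, mirroring the trend reported in the experiment.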
These discoveries allowed certain assumptions to be made regarding the origin of reproductive isolation mechanisms in nature. Namely, if selection reinforces the degree of reproductive isolation that exists between two species due to the poor adaptive value of the hybrids, it is expected that the populations of two species located in the same area will show a greater reproductive isolation than populations that are geographically separated (see reinforcement). This mechanism for "reinforcing" hybridization barriers in sympatric populations is also known as the "Wallace effect", as it was first proposed by Alfred Russel Wallace at the end of the 19th century, and it has been experimentally demonstrated in both plants and animals.
The sexual isolation between Drosophila miranda and D. pseudoobscura, for example, is more or less pronounced according to the geographic origin of the flies being studied: flies from regions where the distributions of the two species overlap show greater sexual isolation than populations originating in distant regions.
On the other hand, interspecific hybridization barriers can also arise as a result of the adaptive divergence that accompanies allopatric speciation. This mechanism was demonstrated experimentally by Diane Dodd using D. pseudoobscura. A single population of flies was divided into two, with one of the populations fed starch-based food and the other maltose-based food, so that each subpopulation adapted to its food type over a number of generations. After the populations had diverged over many generations, the groups were again mixed; it was observed that the flies would mate only with others from their adapted population. This indicates that mechanisms of reproductive isolation can arise even though the interspecific hybrids are not selected against.
| Biology and health sciences | Basics_4 | Biology |
20321147 | https://en.wikipedia.org/wiki/Typhus | Typhus | Typhus, also known as typhus fever, is a group of infectious diseases that include epidemic typhus, scrub typhus, and murine typhus. Common symptoms include fever, headache, and a rash. Typically these begin one to two weeks after exposure.
The diseases are caused by specific types of bacterial infection. Epidemic typhus is caused by Rickettsia prowazekii spread by body lice, scrub typhus is caused by Orientia tsutsugamushi spread by chiggers, and murine typhus is caused by Rickettsia typhi spread by fleas.
Vaccines have been developed, but none are commercially available. Prevention is achieved by reducing exposure to the organisms that spread the disease. Treatment is with the antibiotic doxycycline. Epidemic typhus generally occurs in outbreaks when poor sanitary conditions and crowding are present. While once common, it is now rare. Scrub typhus occurs in Southeast Asia, Japan, and northern Australia. Murine typhus occurs in tropical and subtropical areas of the world.
Typhus has been described since at least 1528. The name comes from the Greek τῦφος (tûphos), meaning 'hazy' or 'smoky' and commonly used as a word for delusion, describing the state of mind of those infected. While typhoid means 'typhus-like', typhus and typhoid fever are distinct diseases caused by different types of bacteria, the latter by specific strains of Salmonella typhi. However, in some languages such as German, the term does mean 'typhoid fever', and the typhus described here is called by another name, such as the language's equivalent of 'lice fever'.
Signs and symptoms
These signs and symptoms refer to epidemic typhus, as it is the most important of the typhus group of diseases.
Signs and symptoms begin with sudden onset of fever and other flu-like symptoms about one to two weeks after infection. Five to nine days after the symptoms have started, a rash typically begins on the trunk and spreads to the extremities. This rash eventually spreads over most of the body, sparing the face, palms, and soles. Signs of meningoencephalitis begin with the rash and continue into the second or third week. Other signs of meningoencephalitis include sensitivity to light (photophobia), altered mental status (delirium), or coma. Untreated cases are often fatal.
Signs and symptoms of scrub typhus usually start within 1 to 2 weeks after being infected. These symptoms include fever, headaches, chills, swollen lymph nodes, nausea/vomiting, and a rash at the site of infection called an eschar. More severe symptoms may damage the lungs, brain, kidney, meninges, and heart.
Causes
Multiple diseases include the word "typhus" in their descriptions. Types include epidemic typhus (caused by Rickettsia prowazekii), scrub typhus (caused by Orientia tsutsugamushi), and murine typhus (caused by Rickettsia typhi).
Diagnosis
The main method of diagnosing typhus of all types is laboratory testing, most commonly with an indirect immunofluorescence antibody (IFA) test, which checks a sample for the antibodies associated with typhus. Diagnosis can also be made with immunohistochemistry (IHC) or polymerase chain reaction (PCR) tests, except for scrub typhus, which is not tested with IHC or PCR but instead with the IFA test as well as indirect immunoperoxidase (IIP) assays.
Prevention
As of 2025, no vaccine is commercially available, although a vaccine for scrub typhus is in development.
Scrub typhus
Scrub typhus is spread by mites, so prevention centers on avoiding outdoor areas where scrub vegetation is common, treating clothing with permethrin to prevent mite bites, and using insect repellent. Children and babies additionally need clothing that covers their limbs, and a mosquito net over a baby's stroller also protects against mites.
Epidemic typhus
Epidemic typhus is spread by body lice and thrives where there is overcrowding, so avoiding densely crowded areas reduces risk. Regular washing of the body and of clothing, bedding, and towels helps kill lice, fabric items should not be shared with anyone who has lice or typhus, and treating clothing with permethrin also helps kill lice.
Murine typhus
Murine typhus is spread by flea bites, so prevention centers on avoiding fleas. This can be done by treating pets that have fleas, staying away from wild animals, using insect repellent, wearing gloves when handling sick or dead animals, and taking steps to keep rodents and other wildlife out of the home.
Treatment
The American Public Health Association recommends treatment based upon clinical findings and before culturing confirms the diagnosis. Without treatment, death may occur in 10% to 60% of people with epidemic typhus, with people over age 50 having the highest risk of death. In the antibiotic era, death is uncommon if doxycycline is given. In one study of 60 people hospitalized with epidemic typhus, no one died when given doxycycline or chloramphenicol.
Epidemiology
According to the World Health Organization, in 2010 the death rate from typhus was about one of every 5,000,000 people per year.
Only a few areas of epidemic typhus exist today. Since the late 20th century, cases have been reported in Burundi, Rwanda, Ethiopia, Algeria, and a few areas in South and Central America.
Except for two cases, all instances of epidemic typhus in the United States have occurred east of the Mississippi River. An examination of a cluster of cases in Pennsylvania concluded that the source of the infection was flying squirrels. Sylvatic epidemic typhus (transmitted from wild animals) remains uncommon in the US; the Centers for Disease Control and Prevention documented only 47 cases from 1976 to 2010. An outbreak of flea-borne murine typhus was identified in downtown Los Angeles, California, in October 2018.
History
Middle Ages
The first reliable description of typhus appears in 1489 AD during the Spanish siege of Baza against the Moors during the War of Granada (1482–1492). These accounts include descriptions of fever; red spots over arms, back, and chest; attention deficit, progressing to delirium; and gangrenous sores and the associated smell of rotting flesh. During the siege, the Spaniards lost 3,000 men to enemy action, but an additional 17,000 died of typhus.
In historical times, "jail fever" or "gaol fever" was common in English prisons, and is believed by modern authorities to have been typhus. It often occurred when prisoners were crowded together in dark, filthy rooms where lice spread easily; thus, "imprisonment until the next term of court" was often equivalent to a death sentence. Prisoners brought before the court sometimes infected members of the court. The Black Assize of Exeter in 1586 was a notable outbreak, and during the Lent assizes held at Taunton in 1730, gaol fever caused the death of the Lord Chief Baron, as well as the High Sheriff, the sergeant, and hundreds of others. During a time when persons were executed for capital offenses, more prisoners died of "gaol fever" than were put to death by all the public executioners in the British realm. In 1759, an English authority estimated that a quarter of the prisoners died of gaol fever each year. In London, gaol fever frequently broke out among the ill-kept prisoners of Newgate Prison and then spread into the general city population. In May 1750, the Lord Mayor of London, Sir Samuel Pennant, and many court personnel were fatally infected in the courtroom of the Old Bailey, which adjoined Newgate Prison.
Early modern epidemics
Epidemics occurred routinely throughout Europe from the 16th to the 19th centuries, including during the English Civil War, the Thirty Years' War, and the Napoleonic Wars. Pestilence of several kinds raged among combatants and civilians in Germany and surrounding lands from 1618 to 1648. According to Joseph Patrick Byrne, "By war's end, typhus may have killed more than 10 percent of the total German population, and disease in general accounted for 90 percent of Europe's casualties."
19th century
During Napoleon's retreat from Moscow in 1812, more French soldiers died of typhus than were killed by the Russians.
A major epidemic occurred in Ireland between 1816 and 1819, during the famine caused by a worldwide reduction in temperature known as the Year Without a Summer; an estimated 100,000 people perished. Typhus appeared again in the late 1830s, and yet another major typhus epidemic occurred during the Great Irish Famine between 1846 and 1849. This outbreak, together with typhoid fever, is said to have been responsible for 400,000 deaths. The Irish typhus spread to England, where it was sometimes called "Irish fever" and was noted for its virulence. It killed people of all social classes, as lice were endemic and inescapable, but it hit particularly hard in the lower or "unwashed" social strata.
In the United States, a typhus epidemic broke out in Philadelphia in 1837 and killed the son of Franklin Pierce (14th President of the United States) in Concord, New Hampshire, in 1843. Several epidemics occurred in Baltimore, Memphis, and Washington, DC, between 1865 and 1873. Typhus was also a significant killer during the US Civil War, although typhoid fever was the more prevalent cause of US Civil War "camp fever". Typhoid fever is caused by the bacterium Salmonella enterica serovar Typhi.
In Canada, the typhus epidemic of 1847–48 killed more than 20,000 people, mainly Irish immigrants in fever sheds and other forms of quarantine, who had contracted the disease aboard crowded coffin ships while fleeing the Great Irish Famine. Officials neither knew how to provide sufficient sanitation under the conditions of the time nor understood how the disease spread.
20th century
Typhus was endemic in Poland and several neighboring countries prior to World War I (1914–1918), but became epidemic during the war. Delousing stations were established for troops on the Western Front during World War I, but typhus ravaged the armies of the Eastern Front, where over 150,000 died in Serbia alone. Fatalities were generally between 10% and 40% of those infected and the disease was a major cause of death for those nursing the sick.
In 1922, the typhus epidemic reached its peak in Soviet territory, with some 20 to 30 million cases in Russia. Although typhus had ravaged Poland with some 4 million cases reported, efforts to stem the spread of disease in that country had largely succeeded by 1921 through the efforts of public health pioneers such as Hélène Sparrow and Rudolf Weigl. In Russia during the civil war between the White and Red Armies, epidemic typhus killed 2–3 million people, many of whom were civilians. In 1937 and 1938, there was a typhus epidemic in Chile. On 6 March 1939, Prime Minister of France Édouard Daladier stated to the French parliament that he would return 300,000 of the Spanish refugees who had fled the Spanish Civil War; his reasons included the spread of typhus in the French refugee camps, as well as France's sovereign recognition of Francisco Franco.
During World War II, many German POWs died of typhus after the loss at Stalingrad. Typhus epidemics killed those confined to POW camps, ghettos, and Nazi concentration camps, who were held in unhygienic conditions; pictures of mass graves including people who died from typhus can be seen in footage shot at Bergen-Belsen concentration camp. Among the thousands of prisoners in concentration camps such as Theresienstadt and Bergen-Belsen who died of typhus were Anne Frank, age 15, and her sister Margot, age 19, both at Bergen-Belsen.
The first typhus vaccine was developed by the Polish zoologist Rudolf Weigl in the interwar period; the vaccine did not prevent the disease but reduced its mortality.
21st century
Beginning in 2018, a typhus outbreak spread through Los Angeles County, primarily affecting homeless people. In 2019, city attorney Elizabeth Greenwood revealed that she, too, had been infected with typhus as a result of a flea bite at her office in Los Angeles City Hall. Pasadena also experienced a sudden uptick in typhus, with 22 cases in 2018, but, without being able to attribute this to one location, the Pasadena Public Health Department did not identify the cases as an "outbreak". Murine typhus cases have also been rising over the past decade, reaching a high of 171 cases in 2022.
| Biology and health sciences | Infectious disease | null |
20321872 | https://en.wikipedia.org/wiki/Bronze%20%28color%29 | Bronze (color) | Bronze is a metallic brown color which resembles the metal alloy bronze.
The first recorded use of bronze as a color name in English was in 1753.
Variations
Blast-off bronze
Blast-off bronze is one of the colors in the special set of metallic Crayola crayons called Metallic FX, the colors of which were formulated by Crayola in 2001.
Antique bronze
The first recorded use of antique bronze as a color name in English was in 1910.
| Physical sciences | Colors | Physics |
1396249 | https://en.wikipedia.org/wiki/Airplane | Airplane | An airplane (North American English) or aeroplane (British English), informally plane, is a fixed-wing aircraft that is propelled forward by thrust from a jet engine, propeller, or rocket engine. Airplanes come in a variety of sizes, shapes, and wing configurations. The broad spectrum of uses for airplanes includes recreation, transportation of goods and people, military, and research. Worldwide, commercial aviation transports more than four billion passengers annually on airliners and transports more than 200 billion tonne-kilometers of cargo annually, which is less than 1% of the world's cargo movement. Most airplanes are flown by a pilot on board the aircraft, but some are designed to be remotely or computer-controlled such as drones.
The Wright brothers invented and flew the first airplane in 1903, recognized as "the first sustained and controlled heavier-than-air powered flight". They built on the works of George Cayley dating from 1799, when he set forth the concept of the modern airplane (and later built and flew models and successful passenger-carrying gliders) and the work of German pioneer of human aviation Otto Lilienthal, who, between 1867 and 1896, also studied heavier-than-air flight. Lilienthal's flight attempts in 1891 are seen as the beginning of human flight.
Following its limited use in World War I, aircraft technology continued to develop. Airplanes had a presence in all the major battles of World War II. The first jet aircraft was the German Heinkel He 178 in 1939. The first jet airliner, the de Havilland Comet, was introduced in 1952. The Boeing 707, the first widely successful commercial jet, was in commercial service for more than 60 years, from 1958 to 2019.
Etymology and usage
First attested in English in the late 19th century (prior to the first sustained powered flight), the word airplane, like aeroplane, derives from the French aéroplane, which comes from the Greek ἀήρ (aēr), "air" and either Latin planus, "level", or Greek πλάνος (planos), "wandering". "Aéroplane" originally referred just to the wing, as it is a plane moving through the air. In an example of synecdoche, the word for the wing came to refer to the entire aircraft.
In the United States and Canada, the term "airplane" is used for powered fixed-wing aircraft. In the United Kingdom and Ireland and most of the Commonwealth, the term "aeroplane" is usually applied to these aircraft.
History
Antecedents
Many stories from antiquity involve flight, such as the Greek legend of Icarus and Daedalus, and the Vimana in ancient Indian epics. Around 400 BC in Greece, Archytas was reputed to have designed and built the first artificial, self-propelled flying device, a bird-shaped model propelled by a jet of what was probably steam, said to have flown some distance. This machine may have been suspended for its flight.
Some of the earliest recorded attempts with gliders were those by the 9th-century Andalusian and Arabic-language poet Abbas ibn Firnas and the 11th-century English monk Eilmer of Malmesbury; both experiments injured their pilots. Leonardo da Vinci researched the wing design of birds and designed a man-powered aircraft in his Codex on the Flight of Birds (1502), noting for the first time the distinction between the center of mass and the center of pressure of flying birds.
In 1799, George Cayley set forth the concept of the modern airplane as a fixed-wing flying machine with separate systems for lift, propulsion, and control. Cayley was building and flying models of fixed-wing aircraft as early as 1803, and he built a successful passenger-carrying glider in 1853. In 1856, Frenchman Jean-Marie Le Bris made the first powered flight, by having his glider "L'Albatros artificiel" pulled by a horse on a beach. Then the Russian Alexander F. Mozhaisky also made some innovative designs. In 1883, the American John J. Montgomery made a controlled flight in a glider. Other aviators who made similar flights at that time were Otto Lilienthal, Percy Pilcher, and Octave Chanute.
Sir Hiram Maxim built a craft that weighed 3.5 tons, powered by two steam engines driving two propellers. In 1894, his machine was tested with overhead rails to prevent it from rising, and the test showed that it had enough lift to take off. The craft was uncontrollable, however, and it is presumed that Maxim realized this, because he subsequently abandoned work on it.
Between 1867 and 1896, the German pioneer of human aviation Otto Lilienthal developed heavier-than-air flight. He was the first person to make well-documented, repeated, successful gliding flights. Lilienthal's work led him to develop the concept of the modern wing; his flight attempts in 1891 are seen as the beginning of human flight; the "Lilienthal Normalsegelapparat" is considered to be the first airplane in series production; and his work heavily inspired the Wright brothers.
In the 1890s, Lawrence Hargrave conducted research on wing structures and developed a box kite that lifted the weight of a man. His box kite designs were widely adopted. Although he also developed a type of rotary aircraft engine, he did not create and fly a powered fixed-wing aircraft.
Early powered flights
The Frenchman Clement Ader constructed the first of his three flying machines in 1886, the Éole. It was a bat-like design powered by a lightweight four-cylinder steam engine of his own invention, driving a four-blade propeller. On 9 October 1890, Ader attempted to fly the Éole; aviation historians credit this effort as a powered take-off and uncontrolled hop. Ader's two subsequent machines were not documented to have achieved flight.
The American Wright brothers' flights in 1903 are recognized by the Fédération Aéronautique Internationale (FAI), the standard-setting and record-keeping body for aeronautics, as "the first sustained and controlled heavier-than-air powered flight". By 1905, the Wright Flyer III was capable of fully controllable, stable flight for substantial periods. The Wright brothers credited Otto Lilienthal as a major inspiration for their decision to pursue manned flight.
In 1906, the Brazilian Alberto Santos-Dumont made what was claimed to be the first airplane flight unassisted by catapult and set the first world record recognized by the Aéro-Club de France, with a flight lasting less than 22 seconds. This flight was also certified by the FAI.
An early aircraft design that brought together the modern monoplane tractor configuration was the Blériot VIII design of 1908. It had movable tail surfaces controlling both yaw and pitch, a form of roll control supplied either by wing warping or by ailerons and controlled by its pilot with a joystick and rudder bar. It was an important predecessor of his later Blériot XI Channel-crossing aircraft of the summer of 1909.
World War I served as a testbed for the use of the airplane as a weapon. Airplanes demonstrated their potential as mobile observation platforms, then proved themselves to be machines of war capable of causing casualties to the enemy. The earliest known aerial victory with a synchronized machine-gun-armed fighter aircraft occurred in 1915, by German Luftstreitkräfte Leutnant Kurt Wintgens. Fighter aces appeared; the greatest, by number of aerial combat victories, was Manfred von Richthofen, also known as the Red Baron.
Following WWI, aircraft technology continued to develop. Alcock and Brown crossed the Atlantic non-stop for the first time in 1919. The first international commercial flights took place between the United States and Canada in 1919.
Airplanes had a presence in all the major battles of World War II. They were an essential component of the military strategies of the period, such as the German Blitzkrieg, The Battle of Britain, and the American and Japanese aircraft carrier campaigns of the Pacific War.
Development of jet aircraft
The first practical jet aircraft was the German Heinkel He 178, which was tested in 1939. In 1943, the Messerschmitt Me 262, the first operational jet fighter aircraft, went into service in the German Luftwaffe.
The first jet airliner, the de Havilland Comet, was introduced in 1952. The Boeing 707, the first widely successful commercial jet, was in commercial service for more than 60 years, from 1958 to 2019. The Boeing 747 was the world's biggest passenger aircraft from 1970 until it was surpassed by the Airbus A380 in 2005.
Supersonic airliner flights, including those of the Concorde, have been limited to over-water flight at supersonic speed because of their sonic boom, which is prohibited over most populated land areas. The high cost of operation per passenger-mile and a deadly crash in 2000 induced the operators of the Concorde to remove it from service.
Propulsion
Propeller
An aircraft propeller, or airscrew, converts rotary motion from an engine or other power source into a swirling slipstream which pushes the propeller forwards or backwards. It comprises a rotating power-driven hub, to which are attached two or more radial airfoil-section blades such that the whole assembly rotates about a longitudinal axis. Three types of aviation engines used to power propellers are reciprocating engines (or piston engines), gas turbines, and electric motors. The amount of thrust a propeller creates is determined, in part, by its disk area, that is, the area through which the blades rotate. The limitation on blade speed is the speed of sound: when the blade tip exceeds the speed of sound, shock waves decrease propeller efficiency. The rpm required to generate a given tip speed is inversely proportional to the diameter of the propeller. The upper design speed limit for propeller-driven aircraft is Mach 0.6. Aircraft designed to go faster than that employ jet engines.
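To make the tip-speed constraint concrete, here is a minimal Python sketch; the propeller diameter, rpm, forward airspeed, and speed of sound are all illustrative assumptions, not data for any particular aircraft.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (illustrative value)

def helical_tip_speed(diameter_m: float, rpm: float, airspeed_ms: float = 0.0) -> float:
    """Tip speed: rotational speed of the blade tip combined with forward airspeed."""
    rotational = math.pi * diameter_m * rpm / 60.0  # tip circumference traversed per second
    return math.hypot(rotational, airspeed_ms)

# Hypothetical example: a 2.0 m propeller at 2,400 rpm with 100 m/s forward speed.
v_tip = helical_tip_speed(2.0, 2400.0, 100.0)
print(f"tip speed = {v_tip:.0f} m/s (Mach {v_tip / SPEED_OF_SOUND:.2f})")

# Half the diameter at twice the rpm gives the same tip speed, showing the
# inverse proportionality between required rpm and propeller diameter.
print(math.isclose(helical_tip_speed(1.0, 4800.0), helical_tip_speed(2.0, 2400.0)))
```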
Reciprocating engine
Reciprocating engines in aircraft have three main variants: radial, in-line, and flat (horizontally opposed). The radial engine is a reciprocating-type internal combustion engine configuration in which the cylinders "radiate" outward from a central crankcase like the spokes of a wheel; it was commonly used for aircraft engines before gas turbine engines became predominant. An in-line engine is a reciprocating engine with banks of cylinders arranged one behind another, rather than in rows; each bank has any number of cylinders, but rarely more than six, and the engine may be water-cooled. A flat engine is an internal combustion engine with horizontally opposed cylinders.
Gas turbine
A turboprop gas turbine engine consists of an intake, compressor, combustor, turbine, and a propelling nozzle, which provide power from a shaft through a reduction gearing to the propeller. The propelling nozzle provides a relatively small proportion of the thrust generated by a turboprop.
Electric motor
An electric aircraft runs on electric motors with electricity coming from fuel cells, solar cells, ultracapacitors, power beaming, or batteries. Currently, flying electric aircraft are mostly experimental prototypes, including manned and unmanned aerial vehicles, but there are some production models on the market.
Jet
Jet aircraft are propelled by jet engines, which are used because the aerodynamic limitations of propellers do not apply to jet propulsion. These engines are much more powerful than a reciprocating engine for a given size or weight and are comparatively quiet and work well at higher altitude. Variants of the jet engine include the ramjet and the scramjet, which rely on high airspeed and intake geometry to compress the combustion air, prior to the introduction and ignition of fuel. Rocket motors provide thrust by burning a fuel with an oxidizer and expelling gas through a nozzle.
Turbofan
Most jet aircraft use turbofan jet engines, which employ a gas turbine to drive a ducted fan that accelerates air around the turbine to provide thrust in addition to that which is accelerated through the turbine. The ratio of air passing around the turbine to that passing through it is called the by-pass ratio. Turbofans represent a compromise between turbojet (no bypass) and turboprop (primarily powered by bypass air) forms of aircraft propulsion.
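As a quick illustration of this definition, the snippet below computes a by-pass ratio from hypothetical fan and core mass flows; the figures are invented and do not describe any real engine.

```python
# Illustrative mass flows only; not data for any real engine.
bypass_flow = 1100.0  # kg/s of air accelerated by the fan around the core
core_flow = 110.0     # kg/s of air passing through the gas turbine core
print(f"bypass ratio = {bypass_flow / core_flow:.0f}:1")  # 10:1, a high-bypass design
```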
Subsonic aircraft, such as airliners, employ high by-pass jet engines for fuel efficiency. Supersonic aircraft, such as jet fighters, use low-bypass turbofans. However at supersonic speeds, the air entering the engine must be decelerated to a subsonic speed and then re-accelerated back to supersonic speeds after combustion. An afterburner may be used on combat aircraft to increase power for short periods of time by injecting fuel directly into the hot exhaust gases. Many jet aircraft also use thrust reversers to slow down after landing.
Ramjet
A ramjet is a form of jet engine that contains no major moving parts and can be particularly useful in applications requiring a small and simple engine for high-speed use, such as with missiles. Ramjets require forward motion before they can generate thrust and so are often used in conjunction with other forms of propulsion, or with an external means of achieving sufficient speed. The Lockheed D-21 was a Mach 3+ ramjet-powered reconnaissance drone that was launched from a parent aircraft. A ramjet uses the vehicle's forward motion to force air through the engine without resorting to turbines or vanes. Fuel is added and ignited, which heats and expands the air to provide thrust.
Scramjet
A scramjet is a specialized ramjet that uses internal supersonic airflow to compress the air, mix it with fuel, combust it, and accelerate the exhaust to provide thrust. The engine operates at supersonic speeds only. The NASA X-43, an experimental unmanned scramjet, set a world speed record for a jet-powered aircraft in 2004, reaching Mach 9.7.
Rocket
Whereas jet aircraft use the atmosphere both as a source of oxidant and of mass to accelerate reactively behind the aircraft, rocket aircraft carry the oxidizer on board and accelerate the burned fuel and oxidizer backwards as the sole source of mass for reaction. Liquid fuel and oxidizer may be pumped into a combustion chamber or a solid fuel with oxidizer may burn in the fuel chamber. Whether liquid or solid-fueled, the hot gas is accelerated through a nozzle.
In World War II, the Germans deployed the Me 163 Komet rocket-powered aircraft. The first plane to break the sound barrier in level flight was a rocket plane – the Bell X-1 in 1948. The North American X-15 broke many speed and altitude records in the 1960s and pioneered engineering concepts for later aircraft and spacecraft. Military transport aircraft may employ rocket-assisted take offs for short-field situations. Otherwise, rocket aircraft include spaceplanes, like SpaceShipTwo, for travel beyond the Earth's atmosphere and sport aircraft developed for the short-lived Rocket Racing League.
Design and manufacture
Most airplanes are constructed by companies with the objective of producing them in quantity for customers. The design and planning process, including safety tests, can last up to four years for small turboprops or longer for larger planes.
During this process, the objectives and design specifications of the aircraft are established. First the construction company uses drawings and equations, simulations, wind tunnel tests and experience to predict the behavior of the aircraft. Computers are used by companies to draw, plan and do initial simulations of the aircraft. Small models and mockups of all or certain parts of the plane are then tested in wind tunnels to verify its aerodynamics.
When the design has passed through these processes, the company constructs a limited number of prototypes for testing on the ground. Representatives from an aviation governing agency often make a first flight. The flight tests continue until the aircraft has fulfilled all the requirements. Then, the governing public agency of aviation of the country authorizes the company to begin production.
In the United States, this agency is the Federal Aviation Administration (FAA); in the European Union, it is the European Aviation Safety Agency (EASA); and in the United Kingdom, it is the Civil Aviation Authority (CAA). In Canada, the public agency in charge of authorizing the mass production of aircraft is Transport Canada's Civil Aviation Authority.
When a part or component needs to be joined together by welding for virtually any aerospace or defense application, it must meet the most stringent and specific safety regulations and standards. Nadcap, the National Aerospace and Defense Contractors Accreditation Program, sets global requirements for quality, quality management, and quality assurance in aerospace engineering.
In the case of international sales, a license from the public agency of aviation or transport of the country where the aircraft is to be used is also necessary. For example, airplanes made by the European company, Airbus, need to be certified by the FAA to be flown in the United States, and airplanes made by U.S.-based Boeing need to be approved by the EASA to be flown in the European Union.
Regulations have resulted in reduced noise from aircraft engines in response to increased noise pollution from growth in air traffic over urban areas near airports.
Small planes can be designed and constructed by amateurs as homebuilts. Other homebuilt aircraft can be assembled using pre-manufactured kits of parts that can be assembled into a basic plane and must then be completed by the builder.
Few companies produce planes on a large scale. However, the production of a plane for one company is a process that actually involves dozens, or even hundreds, of other companies and plants, that produce the parts that go into the plane. For example, one company can be responsible for the production of the landing gear, while another one is responsible for the radar. The production of such parts is not limited to the same city or country; in the case of large plane manufacturing companies, such parts can come from all over the world. The parts are sent to the main plant of the plane company, where the production line is located. In the case of large planes, production lines dedicated to the assembly of certain parts of the plane can exist, especially the wings and the fuselage. When complete, a plane is rigorously inspected to search for imperfections and defects. After approval by inspectors, the plane is put through a series of flight tests to assure that all systems are working correctly and that the plane handles properly. To meet a particular customer need, the airplane may be customised using components or packages of components provided by the manufacturer or the customer.
Characteristics
Airframe
The structural parts of a fixed-wing aircraft are called the airframe. The parts present can vary according to the aircraft's type and purpose. Early types were usually made of wood with fabric wing surfaces. When engines became available for powered flight around a hundred years ago, their mounts were made of metal, and as speeds increased, more and more parts became metal, until by the end of WWII all-metal aircraft were common. In modern times, increasing use has been made of composite materials.
Typical structural parts include:
One or more large horizontal wings, often with an airfoil cross-section shape. The wing deflects air downward as the aircraft moves forward, generating lifting force to support it in flight. The wing also provides stability in roll to stop the aircraft from rolling to the left or right in steady flight.
A fuselage, a long, thin body, usually with tapered or rounded ends to make its shape aerodynamically smooth. The fuselage joins the other parts of the airframe and usually contains important things such as the pilot, payload and flight systems.
A vertical stabilizer or fin is a vertical wing-like surface mounted at the rear of the plane and typically protruding above it. The fin stabilizes the plane's yaw (turn left or right) and mounts the rudder, which controls its rotation along that axis.
A horizontal stabilizer or tailplane, usually mounted at the tail near the vertical stabilizer. The horizontal stabilizer is used to stabilize the plane's pitch (tilt up or down) and mounts the elevators, which provide pitch control.
Landing gear, a set of wheels, skids, or floats that support the plane while it is on the surface. On seaplanes, the bottom of the fuselage or floats (pontoons) support it while on the water. On some planes the landing gear retracts during flight to reduce drag.
Wings
The wings of a fixed-wing aircraft are static planes extending either side of the aircraft. When the aircraft travels forwards, air flows over the wings, which are shaped to create lift. This cross-sectional shape is called an airfoil and resembles a bird's wing.
Wing structure
Some lightweight airplanes have flexible wing surfaces which are stretched across a frame and made rigid by the lift forces exerted by the airflow over them. Larger aircraft have rigid wing surfaces which provide additional strength.
Whether flexible or rigid, most wings have a strong frame to give them their shape and to transfer lift from the wing surface to the rest of the aircraft. The main structural elements are one or more spars running from root to tip, and many ribs running from the leading (front) to the trailing (rear) edge.
Early airplane engines had little power, and lightness was very important. Also, early airfoil sections were very thin, and could not have a strong frame installed within. So, until the 1930s, most wings were too lightweight to have enough strength, and external bracing struts and wires were added. When the available engine power increased during the 1920s and 30s, wings could be made heavy and strong enough that bracing was not needed any more. This type of unbraced wing is called a cantilever wing.
Wing configuration
The number and shape of the wings varies widely on different types. A given wing plane may be full-span or divided by a central fuselage into port (left) and starboard (right) wings. Occasionally, even more wings have been used, with the three-winged triplane achieving some fame in WWI. The four-winged quadruplane and other multiplane designs have had little success.
A monoplane has a single wing plane, a biplane has two stacked one above the other, a tandem wing has two placed one behind the other. When the available engine power increased during the 1920s and 30s and bracing was no longer needed, the unbraced or cantilever monoplane became the most common form of powered type.
The wing planform is the shape when seen from above. To be aerodynamically efficient, a wing should be straight with a long span from side to side but have a short chord (a high aspect ratio). But to be structurally efficient, and hence lightweight, a wing must have a short span but still enough area to provide lift (a low aspect ratio).
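Aspect ratio can be computed as the square of the span divided by the wing area. The short sketch below, with invented dimensions, contrasts a slender high-aspect-ratio wing with a broad low-aspect-ratio one.

```python
def aspect_ratio(span_m: float, area_m2: float) -> float:
    """Aspect ratio AR = span**2 / wing area (dimensionless)."""
    return span_m ** 2 / area_m2

# Hypothetical wings: a long, slender glider-like wing versus a short, broad delta-like wing.
print(f"glider-like: AR = {aspect_ratio(15.0, 11.0):.1f}")  # high AR, aerodynamically efficient
print(f"delta-like:  AR = {aspect_ratio(10.0, 38.0):.1f}")  # low AR, structurally efficient
```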
At transonic speeds (near the speed of sound), it helps to sweep the wing backwards or forwards to reduce drag from supersonic shock waves as they begin to form. The swept wing is just a straight wing swept backwards or forwards.
The delta wing is a triangle shape that may be used for several reasons. As a flexible Rogallo wing, it allows a stable shape under aerodynamic forces and so is often used for ultralight aircraft and even kites. As a supersonic wing, it combines high strength with low drag and so is often used for fast jets.
A variable geometry wing can be changed in flight to a different shape. The variable-sweep wing transforms between an efficient straight configuration for takeoff and landing, to a low-drag swept configuration for high-speed flight. Other forms of variable planform have been flown, but none have gone beyond the research stage.
Fuselage
A fuselage is a long, thin body, usually with tapered or rounded ends to make its shape aerodynamically smooth. The fuselage may contain the flight crew, passengers, cargo or payload, fuel and engines. The pilots of manned aircraft operate them from a cockpit located at the front or top of the fuselage and equipped with controls and usually windows and instruments. A plane may have more than one fuselage, or it may be fitted with booms with the tail located between the booms to allow the extreme rear of the fuselage to be useful for a variety of purposes.
Wings vs. bodies
Flying wing
A flying wing is a tailless aircraft which has no definite fuselage. Most of the crew, payload and equipment are housed inside the main wing structure.
The flying wing configuration was studied extensively in the 1930s and 1940s, notably by Jack Northrop and Cheston L. Eshelman in the United States, and Alexander Lippisch and the Horten brothers in Germany. After the war, several experimental designs were based on the flying wing concept, but the known difficulties remained intractable. Some general interest continued until the early 1950s but designs did not necessarily offer a great advantage in range and presented several technical problems, leading to the adoption of "conventional" solutions like the Convair B-36 and the B-52 Stratofortress. Due to the practical need for a deep wing, the flying wing concept is most practical for designs in the slow-to-medium speed range, and there has been continual interest in using it as a tactical airlifter design.
Interest in flying wings was renewed in the 1980s due to their potentially low radar reflection cross-sections. Stealth technology relies on shapes which only reflect radar waves in certain directions, thus making the aircraft hard to detect unless the radar receiver is at a specific position relative to the aircraft - a position that changes continuously as the aircraft moves. This approach eventually led to the Northrop B-2 Spirit stealth bomber. In this case, the aerodynamic advantages of the flying wing are not the primary needs. However, modern computer-controlled fly-by-wire systems allowed for many of the aerodynamic drawbacks of the flying wing to be minimized, making for an efficient and stable long-range bomber.
Blended wing body
Blended wing body aircraft have a flattened and airfoil shaped body, which produces most of the lift to keep itself aloft, and distinct and separate wing structures, though the wings are smoothly blended in with the body.
Thus blended wing bodied aircraft incorporate design features from both a futuristic fuselage and flying wing design. The purported advantages of the blended wing body approach are efficient high-lift wings and a wide airfoil-shaped body. This enables the entire craft to contribute to lift generation with the result of potentially increased fuel economy.
Lifting body
A lifting body is a configuration in which the body itself produces lift. In contrast to a flying wing, which is a wing with minimal or no conventional fuselage, a lifting body can be thought of as a fuselage with little or no conventional wing. Whereas a flying wing seeks to maximize cruise efficiency at subsonic speeds by eliminating non-lifting surfaces, lifting bodies generally minimize the drag and structure of a wing for subsonic, supersonic, and hypersonic flight, or, spacecraft re-entry. All of these flight regimes pose challenges for proper flight stability.
Lifting bodies were a major area of research in the 1960s and 70s as a means to build a small and lightweight crewed spacecraft. The US built several famous lifting body rocket planes to test the concept, as well as several rocket-launched re-entry vehicles that were tested over the Pacific. Interest waned as the US Air Force lost interest in the crewed mission, and major development ended during the Space Shuttle design process when it became clear that the highly shaped fuselages made it difficult to fit fuel tankage.
Empennage and foreplane
The classic airfoil section wing is unstable in flight and difficult to control. Flexible-wing types often rely on an anchor line or the weight of a pilot hanging beneath to maintain the correct attitude. Some free-flying types use an adapted airfoil that is stable, or other ingenious mechanisms including, most recently, electronic artificial stability.
To achieve stability and control, most fixed-wing types have an empennage comprising a fin and rudder which act horizontally and a tailplane and elevator which act vertically. These control surfaces can typically be trimmed to relieve control forces for various stages of flight. This is so common that it is known as the conventional layout. Sometimes there may be two or more fins, spaced out along the tailplane.
Some types have a horizontal "canard" foreplane ahead of the main wing, instead of behind it. This foreplane may contribute to the lift, the trim, or control of the aircraft, or to several of these.
Controls and instruments
Airplanes have complex flight control systems. The main controls allow the pilot to direct the aircraft in the air by controlling the attitude (roll, pitch and yaw) and engine thrust.
On manned aircraft, cockpit instruments provide information to the pilots, including flight data, engine output, navigation, communications and other aircraft systems that may be installed.
Safety
When risk is measured by deaths per passenger-kilometer, air travel is approximately 10 times safer than travel by bus or rail. However, when using the deaths-per-journey statistic, air travel is significantly more dangerous than car, rail, or bus travel. Air travel insurance is relatively expensive for this reason, as insurers generally use the deaths-per-journey statistic. There is a significant difference between the safety of airliners and that of smaller private planes, with the per-mile statistic indicating that airliners are 8.3 times safer than smaller planes.
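The divergence between the two statistics follows from simple arithmetic: a single flight covers far more passenger-kilometers than a single car trip. The figures in the sketch below are invented solely to reproduce that qualitative pattern and are not real accident data.

```python
# Invented figures chosen only to show why the two metrics can disagree:
# air travel here is worse per journey but far safer per passenger-kilometer.
modes = {
    "air": {"deaths": 2, "journeys": 1_000_000, "passenger_km": 1_500_000_000},
    "car": {"deaths": 1, "journeys": 1_000_000, "passenger_km": 15_000_000},
}
for name, m in modes.items():
    per_journey = m["deaths"] / m["journeys"]
    per_billion_km = 1e9 * m["deaths"] / m["passenger_km"]
    print(f"{name}: {per_journey:.1e} deaths/journey, "
          f"{per_billion_km:.1f} deaths per billion passenger-km")
```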
Environmental impact
Like all activities involving combustion, fossil-fuel-powered aircraft release soot and other pollutants into the atmosphere. Greenhouse gases such as carbon dioxide (CO2) are also produced. In addition, there are environmental impacts specific to airplanes: for instance,
Airplanes operating at high altitudes near the tropopause (mainly large jet airliners) emit aerosols and leave contrails, both of which can increase cirrus cloud formation; cloud cover may have increased by up to 0.2% since the birth of aviation.
Airplanes operating at high altitudes near the tropopause can also release chemicals that interact with greenhouse gases at those altitudes, particularly nitrogen compounds, which interact with ozone, increasing ozone concentrations.
Most light piston aircraft burn avgas, which contains tetraethyllead (TEL). Some lower-compression piston engines can operate on unleaded mogas, and turbine engines and diesel engines, neither of which require lead, are used on some newer light aircraft. Some non-polluting light electric aircraft are already in production.
Another environmental impact of airplanes is noise pollution, mainly caused by aircraft taking off and landing.
| Technology | Aviation | null |
1397617 | https://en.wikipedia.org/wiki/Demosponge | Demosponge | Demosponges (Demospongiae) are the most diverse class in the phylum Porifera. They include greater than 90% of all species of sponges with nearly 8,800 species worldwide (World Porifera Database). They are sponges with a soft body that covers a hard, often massive skeleton made of calcium carbonate, either aragonite or calcite. They are predominantly leuconoid in structure. Their "skeletons" are made of spicules consisting of fibers of the protein spongin, the mineral silica, or both. Where spicules of silica are present, they have a different shape from those in the otherwise similar glass sponges. Some species, in particular from the Antarctic, obtain the silica for spicule building from the ingestion of siliceous diatoms.
The many diverse orders in this class include all of the large sponges. About 311 million years ago, in the Late Carboniferous, the order Spongillida split from the marine sponges; its members are the only sponges to live in freshwater environments. Some species are brightly colored, with great variety in body shape; the largest species are over across. They reproduce both sexually and asexually. They are the only extant organisms that methylate sterols at the 26-position, a fact used to identify the presence of demosponges before their first known unambiguous fossils.
Because of many species' long life spans (500–1,000 years), it is thought that analysis of the aragonite skeletons of these sponges could extend data regarding ocean temperature, salinity, and other variables farther into the past than has previously been possible. Their dense skeletons are deposited in an organized chronological manner, in concentric layers or bands. The layered skeletons look similar to reef corals. Therefore, such demosponges are also called coralline sponges.
Classification and systematics
The Demospongiae have an ancient history. The first demosponges may have appeared during the Precambrian, in deposits at the end of the Cryogenian "Snowball Earth" period. Their presence has been detected indirectly through fossilized steroids, called steranes, hydrocarbon markers characteristic of the cell membranes of the sponges, rather than through direct fossils of the sponges themselves. They represent a continuous chemical fossil record of demosponges through the end of the Neoproterozoic. The earliest Demospongiae fossil was discovered in the lower Cambrian (Series 2, Stage 3; approximately 515 Ma) of the Sirius Passet Biota of North Greenland: this single specimen had a spicule assemblage similar to that found in the subclass Heteroscleromorpha. The earliest sponge-bearing reefs date to the Early Cambrian (they are the earliest known reef structures built by animals), exemplified by a small bioherm constructed by archaeocyathids and calcified microbes at the start of the Tommotian stage about 530 Ma, found in southeast Siberia. A major radiation occurred in the Lower Cambrian, with further major radiations in the Ordovician, possibly beginning in the middle Cambrian.
The Systema Porifera (2002) book (2 volumes) was the result of a collaboration of 45 researchers from 17 countries led by editors J. N. A. Hooper and R. W. M. van Soest. This milestone publication provided an updated comprehensive overview of sponge systematics, the largest revision of this group (from genera, subfamilies, families, suborders, orders and class) since the start of spongiology in the mid-19th century. In this large revision, the extant Demospongiae were organized into 14 orders that encompassed 88 families and 500 genera. Hooper and van Soest (2002) gave the following classification of demosponges into orders:
Subclass Homoscleromorpha Bergquist, 1978
Homosclerophorida Dendy, 1905
Subclass Tetractinomorpha
Astrophorida Sollas, 1888
Chondrosida Boury-Esnault & Lopès, 1985
Hadromerida Topsent, 1894
Lithistida Sollas, 1888
Spirophorida Bergquist & Hogg, 1969
Subclass Ceractinomorpha Lévi, 1953
Agelasida Verrill, 1907
Dendroceratida Minchin, 1900
Dictyoceratida Minchin, 1900
Halichondrida Gray, 1867
Halisarcida Bergquist, 1996
Haplosclerida Topsent, 1928
Poecilosclerida Topsent, 1928
Verongiida Bergquist, 1978
Verticillitida Termier & Termier, 1977
However, molecular and morphological evidence show that the Homoscleromorpha do not belong in this class. The Homoscleromorpha was therefore officially taken out of the Demospongiae in 2012, and became the fourth class of phylum Porifera.
Morrow & Cárdenas (2015) propose a revision of the Demospongiae higher taxa classification, essentially based on the molecular data of the preceding ten years. Some demosponge subclasses and orders are actually polyphyletic or should be included in other orders, so Morrow and Cárdenas (2015) formally propose abandoning certain names: the Ceractinomorpha, Tetractinomorpha, Halisarcida, Verticillitida, Lithistida, Halichondrida and Hadromerida. Instead, they recommend the use of three subclasses: Verongimorpha, Keratosa and Heteroscleromorpha. They retain seven (Agelasida, Chondrosiida, Dendroceratida, Dictyoceratida, Haplosclerida, Poecilosclerida, Verongiida) of the 13 orders from Systema Porifera. They recommend resurrecting or upgrading six order names (Axinellida, Merliida, Spongillida, Sphaerocladina, Suberitida, Tetractinellida). Finally, they create seven new orders (Bubarida, Desmacellida, Polymastiida, Scopalinida, Clionaida, Tethyida, Trachycladida). These, added to the recently created orders (Biemnida and Chondrillida), make a total of 22 orders in the revised classification. These changes are now implemented in the World Porifera Database, part of the World Register of Marine Species.
Subclass Heteroscleromorpha Cárdenas, Pérez, Boury-Esnault, 2012
order Agelasida Verrill, 1907
order Axinellida Lévi, 1953
order Biemnida Morrow et al., 2013
order Bubarida Morrow & Cárdenas, 2015
order Clionaida Morrow & Cárdenas, 2015
order Desmacellida Morrow & Cárdenas, 2015
order Haplosclerida Topsent, 1928
order Merliida Vacelet, 1979
order Poecilosclerida Topsent, 1928
order Polymastiida Morrow & Cárdenas, 2015
order Scopalinida Morrow & Cárdenas, 2015
order Sphaerocladina Schrammen, 1924
order Spongillida Manconi & Pronzato, 2002
order Suberitida Chombard & Boury-Esnault, 1999
order Tethyida Morrow & Cárdenas, 2015
order Tetractinellida Marshall, 1876
order Trachycladida Morrow & Cárdenas, 2015
Heteroscleromorpha incertae sedis
Subclass Verongimorpha Erpenbeck et al., 2012
order Chondrillida Redmond et al., 2013
order Chondrosiida Boury-Esnault et Lopès, 1985
order Verongiida Bergquist, 1978
Subclass Keratosa Grant, 1861
order Dendroceratida Minchin, 1900
order Dictyoceratida Minchin, 1900
Sclerosponges
Sclerosponges were first proposed as a class of sponges, Sclerospongiae, in 1970 by Hartman and Goreau. However, Vacelet later found that sclerosponges occur in different classes of Porifera, which means that they are not a closely related (taxonomic) group of sponges; they are considered a polyphyletic grouping, contained within the Demospongiae. Like bats and birds, which independently developed the ability to fly, different sponges developed the ability to build a calcareous skeleton independently and at different times in Earth's history. Fossil sclerosponges are known from as early as the Cambrian period.
Chaetetids
Chaetetids, more formally called "chaetetid hyper-calcified demosponges" (West, 2011), are common calcareous fossils composed of fused tubules. They were previously classified as extinct corals, bryozoans, algae, stromatoporoids and sclerosponges. The chaetetid skeleton has now been shown to be of polyphyletic origin and of little systematic value. Extant chaetetids have also been described. This skeleton type is now known from three demosponge orders (Hadromerida, Poecilosclerida, and Agelasida). Fossil chaetetid hyper-calcified demosponges can only be classified with information on their spicule forms and the original mineralogy of their skeletons (West, 2011).
Reproduction
Spermatocytes develop from the transformation of choanocytes, and oocytes arise from archeocytes. Repeated cleavage of the zygote takes place in the mesohyl and forms a parenchymella larva with a mass of larger internal cells surrounded by small, externally flagellated cells. The resulting swimming larva enters a canal of the central cavity and is expelled with the exhalant current.
Methods of asexual reproduction include both budding and the formation of gemmules. In budding, aggregates of cells differentiate into small sponges that are released superficially or expelled through the oscula. Gemmules are found in the freshwater family Spongillidae. They are produced in the mesohyl as clumps of archeocytes and are surrounded by a hard layer secreted by other amoebocytes. Gemmules are released when the parent body breaks down, and are capable of surviving harsh conditions. In a favorable situation, an opening called the micropyle appears and releases amoebocytes, which differentiate into cells of all the other types.
Meiosis and recombination
The cytological progression of poriferan oogenesis and spermatogenesis (gametogenesis) shows great similarity to that of other metazoans. Most of the genes from the classic set of meiotic genes conserved in eukaryotes are upregulated in the sponges Geodia hentscheli and Geodia phlegraei, including genes for DNA recombination. Since sponges are the earliest-diverging animals, these findings indicate that the basic toolkit of meiosis and recombination was present early in eukaryote evolution.
Economic importance
The most economically important group of demosponges to humans is the bath sponges. These are harvested by divers and can also be grown commercially. They are bleached and marketed; the spongin gives the sponge its softness.
| Biology and health sciences | Porifera | Animals |
1398078 | https://en.wikipedia.org/wiki/Plesiosaur | Plesiosaur | The Plesiosauria or plesiosaurs are an order or clade of extinct Mesozoic marine reptiles, belonging to the Sauropterygia.
Plesiosaurs first appeared in the latest Triassic Period, possibly in the Rhaetian stage, about 203 million years ago. They became especially common during the Jurassic Period, thriving until their disappearance due to the Cretaceous–Paleogene extinction event at the end of the Cretaceous Period, about 66 million years ago. They had a worldwide oceanic distribution, and some species at least partly inhabited freshwater environments.
Plesiosaurs were among the first fossil reptiles discovered. In the beginning of the nineteenth century, scientists realised how distinctive their build was and they were named as a separate order in 1835. The first plesiosaurian genus, the eponymous Plesiosaurus, was named in 1821. Since then, more than a hundred valid species have been described. In the early twenty-first century, the number of discoveries has increased, leading to an improved understanding of their anatomy, relationships and way of life.
Plesiosaurs had a broad flat body and a short tail. Their limbs had evolved into four long flippers, which were powered by strong muscles attached to wide bony plates formed by the shoulder girdle and the pelvis. The flippers made a flying movement through the water. Plesiosaurs breathed air, and bore live young; there are indications that they were warm-blooded.
Plesiosaurs showed two main morphological types. Some species, with the "plesiosauromorph" build, had (sometimes extremely) long necks and small heads; these were relatively slow and caught small sea animals. Other species, some of them reaching a length of up to seventeen metres, had the "pliosauromorph" build with a short neck and a large head; these were apex predators, fast hunters of large prey. The two types are related to the traditional strict division of the Plesiosauria into two suborders, the long-necked Plesiosauroidea and the short-necked Pliosauroidea. Modern research, however, indicates that several "long-necked" groups might have had some short-necked members or vice versa. Therefore, the purely descriptive terms "plesiosauromorph" and "pliosauromorph" have been introduced, which do not imply a direct relationship. "Plesiosauroidea" and "Pliosauroidea" today have a more limited meaning. The term "plesiosaur" is properly used to refer to the Plesiosauria as a whole, but informally it is sometimes meant to indicate only the long-necked forms, the old Plesiosauroidea.
Like other ancient marine reptiles, such as those in the clades Ichthyosauria and Mosasauria, the genera in Plesiosauria are not part of the clade Dinosauria.
History of discovery
Early finds
Skeletal elements of plesiosaurs are among the first fossils of extinct reptiles recognised as such. In 1605, Richard Verstegen of Antwerp illustrated in his A Restitution of Decayed Intelligence plesiosaur vertebrae that he referred to fishes and saw as proof that Great Britain was once connected to the European continent. The Welshman Edward Lhuyd in his Lithophylacii Brittannici Ichnographia from 1699 also included depictions of plesiosaur vertebrae that again were considered fish vertebrae or Ichthyospondyli. Other naturalists during the seventeenth century added plesiosaur remains to their collections, such as John Woodward; these were only much later understood to be of a plesiosaurian nature and are today partly preserved in the Sedgwick Museum.
In 1719, William Stukeley described a partial skeleton of a plesiosaur, which had been brought to his attention by the great-grandfather of Charles Darwin, Robert Darwin of Elston. The stone plate came from a quarry at Fulbeck in Lincolnshire and had been used, with the fossil at its underside, to reinforce the slope of a watering-hole in Elston in Nottinghamshire. After the strange bones it contained had been discovered, it was displayed in the local vicarage as the remains of a sinner drowned in the Great Flood. Stukeley affirmed its "diluvial" nature but understood it represented some sea creature, perhaps a crocodile or dolphin. The specimen is today on display at the Natural History Museum, and its inventory number is NHMUK PV R.1330 (formerly BMNH R.1330). It is the earliest discovered more or less complete fossil reptile skeleton in a museum collection. It can perhaps be referred to Plesiosaurus dolichodeirus.
During the eighteenth century, the number of English plesiosaur discoveries rapidly increased, although these were all of a more or less fragmentary nature. Important collectors were the reverends William Mounsey and Baptist Noel Turner, active in the Vale of Belvoir, whose collections were in 1795 described by John Nicholls in the first part of his The History and Antiquities of the County of Leicestershire. One of Turner's partial plesiosaur skeletons is still preserved as specimen NHMUK PV R.45 (formerly BMNH R.45) in the British Museum of Natural History; this is today referred to Thalassiodracon.
Naming of Plesiosaurus
In the early nineteenth century, plesiosaurs were still poorly known and their special build was not understood. No systematic distinction was made with ichthyosaurs, so the fossils of one group were sometimes combined with those of the other to obtain a more complete specimen. In 1821, a partial skeleton discovered in the collection of Colonel Thomas James Birch, was described by William Conybeare and Henry Thomas De la Beche, and recognised as representing a distinctive group. A new genus was named, Plesiosaurus. The generic name was derived from the Ancient Greek πλήσιος, plèsios, "closer to" and the Latinised saurus, in the meaning of "saurian", to express that Plesiosaurus was in the Chain of Being more closely positioned to the Sauria, particularly the crocodile, than Ichthyosaurus, which had the form of a more lowly fish. The name should thus be rather read as "approaching the Sauria" or "near reptile" than as "near lizard". Parts of the specimen are still present in the Oxford University Museum of Natural History.
Soon afterwards, the morphology became much better known. In 1823, Thomas Clark reported an almost complete skull, probably belonging to Thalassiodracon, which is now preserved by the British Geological Survey as specimen BGS GSM 26035. The same year, commercial fossil collector Mary Anning and her family uncovered an almost complete skeleton at Lyme Regis in Dorset, England, on what is today called the Jurassic Coast. It was acquired by the Duke of Buckingham, who made it available to the geologist William Buckland. He in turn let it be described by Conybeare on 20 February 1824 in a paper read at the Geological Society of London, during the same meeting in which for the first time a dinosaur was named, Megalosaurus. The two finds revealed the unique and bizarre build of the animals, which Professor Buckland in 1832 likened to "a sea serpent run through a turtle". In 1824, Conybeare also provided a specific name to Plesiosaurus: dolichodeirus, meaning "long neck". In 1848, the skeleton was bought by the British Museum of Natural History and catalogued as specimen NHMUK OR 22656 (formerly BMNH 22656). When the paper was published in the Transactions of the Geological Society, Conybeare provisionally named a second species: Plesiosaurus giganteus. This was a short-necked form later assigned to the Pliosauroidea.
Plesiosaurs became better known to the general public through two lavishly illustrated publications by the collector Thomas Hawkins: Memoirs of Ichthyosauri and Plesiosauri of 1834 and The Book of the Great Sea-Dragons of 1840. Hawkins entertained a very idiosyncratic view of the animals, seeing them as monstrous creations of the devil, during a pre-Adamitic phase of history. Hawkins eventually sold his valuable and attractively restored specimens to the British Museum of Natural History.
During the first half of the nineteenth century, the number of plesiosaur finds steadily increased, especially through discoveries in the sea cliffs of Lyme Regis. Sir Richard Owen alone named nearly a hundred new species. The majority of their descriptions were, however, based on isolated bones, without sufficient diagnosis to be able to distinguish them from the other species that had previously been described. Many of the new species described at this time have subsequently been invalidated. The genus Plesiosaurus is particularly problematic, as the majority of the new species were placed in it, so that it became a wastebasket taxon. Gradually, other genera were named. Hawkins had already created new genera, though these are no longer seen as valid. In 1841, Owen named Pliosaurus brachydeirus. Its name echoed that of the earlier Plesiosaurus dolichodeirus: it is derived from πλεῖος, pleios, "more fully", reflecting that according to Owen it was closer to the Sauria than Plesiosaurus. Its specific name means "with a short neck". Later, the Pliosauridae were recognised as having a morphology fundamentally different from that of the plesiosaurids. The family Plesiosauridae had already been coined by John Edward Gray in 1825. In 1835, Henri Marie Ducrotay de Blainville named the order Plesiosauria itself.
American discoveries
In the second half of the nineteenth century, important finds were made outside of England. While this included some German discoveries, it mainly involved plesiosaurs found in the sediments of the American Cretaceous Western Interior Seaway, the Niobrara Chalk. One fossil in particular marked the start of the Bone Wars between the rival paleontologists Edward Drinker Cope and Othniel Charles Marsh.
In 1867, physician Theophilus Turner near Fort Wallace in Kansas uncovered a plesiosaur skeleton, which he donated to Cope. Cope attempted to reconstruct the animal on the assumption that the longer extremity of the vertebral column was the tail, the shorter one the neck. He soon noticed that the skeleton taking shape under his hands had some very special qualities: the neck vertebrae bore chevrons, and the joint surfaces of the tail vertebrae were orientated back to front. Excited, Cope concluded that he had discovered an entirely new group of reptiles: the Streptosauria or "Turned Saurians", which would be distinguished by reversed vertebrae and a lack of hindlimbs, the tail providing the main propulsion. After having published a description of this animal, followed by an illustration in a textbook about reptiles and amphibians, Cope invited Marsh and Joseph Leidy to admire his new Elasmosaurus platyurus. Having listened to Cope's interpretation for a while, Marsh suggested that a simpler explanation of the strange build would be that Cope had reversed the vertebral column relative to the body as a whole. When Cope reacted indignantly to this suggestion, Leidy silently took the skull and placed it against the presumed last tail vertebra, to which it fitted perfectly: it was in fact the first neck vertebra, with still a piece of the rear skull attached to it. Mortified, Cope tried to destroy the entire edition of the textbook and, when this failed, immediately published an improved edition with a correct illustration but an identical date of publication. He excused his mistake by claiming that he had been misled by Leidy himself, who, describing a specimen of Cimoliasaurus, had also reversed the vertebral column. Marsh later claimed that the affair was the cause of his rivalry with Cope: "he has since been my bitter enemy". Both Cope and Marsh in their rivalry named many plesiosaur genera and species, most of which are today considered invalid.
Around the turn of the century, most plesiosaur research was done by a former student of Marsh, Professor Samuel Wendell Williston. In 1914, Williston published his Water reptiles of the past and present. Despite treating sea reptiles in general, it would for many years remain the most extensive general text on plesiosaurs. As of 2013, a first modern textbook was being prepared by Olivier Rieppel. During the middle of the twentieth century, the USA remained an important centre of research, mainly through the discoveries of Samuel Paul Welles.
Recent discoveries
Whereas during the nineteenth and most of the twentieth century new plesiosaurs were described at a rate of three or four novel genera each decade, the pace suddenly picked up in the 1990s, with seventeen plesiosaurs being discovered in this period. The tempo of discovery accelerated in the early twenty-first century, with about three or four plesiosaurs being named each year. This implies that about half of the known plesiosaurs are relatively new to science, a result of far more intensive field research. Some of this is taking place away from the traditional areas, e.g. in new sites developed in New Zealand, Argentina, Chile, Norway, Japan, China and Morocco, but the locations of the older discoveries have proven to be still productive, with important new finds in England and Germany. Some of the new genera are renamings of already-known species, which were deemed sufficiently different to warrant a separate genus name.
In 2002, the "Monster of Aramberri" was announced to the press. Discovered in 1982 at the village of Aramberri, in the northern Mexican state of Nuevo León, it was originally classified as a dinosaur. The specimen is actually a very large plesiosaur, possibly reaching in length. The media published exaggerated reports claiming it was long, and weighed up to , which would have made it among the largest predators of all time.
In 2004, what appeared to be a completely intact juvenile plesiosaur was discovered, by a local fisherman, at Bridgwater Bay National Nature Reserve in Somerset, UK. The fossil, dating from 180 million years ago as indicated by the ammonites associated with it, measured in length, and may be related to Rhomaleosaurus. It is probably the best preserved specimen of a plesiosaur yet discovered.
In 2005, the remains of three plesiosaurs (Dolichorhynchops herschelensis) discovered in the 1990s near Herschel, Saskatchewan, were identified as a new species by Dr. Tamaki Sato, a Japanese vertebrate paleontologist.
In 2006, a combined team of American and Argentinian investigators (the latter from the Argentinian Antarctic Institute and the La Plata Museum) found the skeleton of a juvenile plesiosaur measuring in length on Vega Island in Antarctica. The fossil is currently on display at the geological museum of South Dakota School of Mines and Technology.
In 2008, fossil remains of an undescribed plesiosaur that was named Predator X, now known as Pliosaurus funkei, were unearthed in Svalbard. It had a length of , and its bite force of is one of the most powerful known.
In December 2017, a large plesiosaur skeleton was found in Antarctica, both the oldest creature yet found on the continent and the first of its species there.
Not only has the number of field discoveries increased, but also, since the 1950s, plesiosaurs have been the subject of more extensive theoretical work. The methodology of cladistics has, for the first time, allowed the exact calculation of their evolutionary relationships. Several hypotheses have been published about the way they hunted and swam, incorporating general modern insights about biomechanics and ecology. The many recent discoveries have tested these hypotheses and given rise to new ones.
Evolution
The Plesiosauria have their origins within the Sauropterygia, a group of perhaps archelosaurian reptiles that returned to the sea. An advanced sauropterygian subgroup, the carnivorous Eusauropterygia with small heads and long necks, split into two branches during the Upper Triassic. One of these, the Nothosauroidea, kept functional elbow and knee joints; but the other, the Pistosauria, became more fully adapted to a sea-dwelling lifestyle. Their vertebral column became stiffer and the main propulsion while swimming no longer came from the tail but from the limbs, which changed into flippers. The Pistosauria became warm-blooded and viviparous, giving birth to live young. Early, basal, members of the group, traditionally called "pistosaurids", were still largely coastal animals. Their shoulder girdles remained weak, their pelves could not support the power of a strong swimming stroke, and their flippers were blunt. Later, a more advanced pistosaurian group split off: the Plesiosauria. These had reinforced shoulder girdles, flatter pelves, and more pointed flippers. Other adaptations allowing them to colonise the open seas included stiff limb joints; an increase in the number of phalanges of the hand and foot; a tighter lateral connection of the finger and toe phalanx series, and a shortened tail.
From the earliest Jurassic, the Hettangian stage, a rich radiation of plesiosaurs is known, implying that the group must already have diversified in the Late Triassic; of this diversification, however, only a few (very) basal forms have been discovered, the most derived being Rhaeticosaurus. The subsequent evolution of the plesiosaurs is very contentious. The various cladistic analyses have not resulted in a consensus about the relationships between the main plesiosaurian subgroups. Traditionally, plesiosaurs have been divided into the long-necked Plesiosauroidea and the short-necked Pliosauroidea. However, modern research suggests that some generally long-necked groups might have had short-necked members. To avoid confusion between the phylogeny, the evolutionary relationships, and the morphology, the way the animal is built, long-necked forms are therefore called "plesiosauromorph" and short-necked forms are called "pliosauromorph", without the "plesiosauromorph" species necessarily being more closely related to each other than to the "pliosauromorph" forms.
The latest common ancestor of the Plesiosauria was probably a rather small short-necked form. During the earliest Jurassic, the subgroup with the most species was the Rhomaleosauridae, a possibly very basal split-off of species which were also short-necked. Plesiosaurs in this period were at most five metres (sixteen feet) long. By the Toarcian, about 180 million years ago, other groups, among them the Plesiosauridae, became more numerous and some species developed longer necks, resulting in total body lengths of up to .
In the middle of the Jurassic, very large Pliosauridae evolved. These were characterized by a large head and a short neck, such as Liopleurodon and Simolestes. These forms had skulls up to three metres (ten feet) long and reached a length of up to and a weight of ten tonnes. The pliosaurids had large, conical teeth and were the dominant marine carnivores of their time. During the same time, approximately 160 million years ago, the Cryptoclididae were present, shorter species with a long neck and a small head.
The Leptocleididae radiated during the Early Cretaceous. These were rather small forms that, despite their short necks, might have been more closely related to the Plesiosauridae than to the Pliosauridae. Later in the Early Cretaceous, the Elasmosauridae appeared; these were among the longest plesiosaurs, reaching up to fifteen metres (fifty feet) in length due to very long necks containing as many as 76 vertebrae, more than any other known vertebrate. Pliosauridae were still present as is shown by large predators, such as Kronosaurus.
At the beginning of the Late Cretaceous, the Ichthyosauria became extinct; perhaps a plesiosaur group evolved to fill their niches: the Polycotylidae, which had short necks and peculiarly elongated heads with narrow snouts. During the Late Cretaceous, the elasmosaurids still had many species.
All plesiosaurs became extinct as a result of the Cretaceous–Paleogene (K–Pg) extinction event at the end of the Cretaceous period, approximately 66 million years ago.
Relationships
In modern phylogeny, clades are defined as groups that contain all species belonging to a certain branch of the evolutionary tree. One way to define a clade is to let it consist of the last common ancestor of two such species and all its descendants. Such a clade is called a "node clade". In 2008, Patrick Druckenmiller and Anthony Russell in this way defined Plesiosauria as the group consisting of the last common ancestor of Plesiosaurus dolichodeirus and Peloneustes philarchus and all its descendants. Plesiosaurus and Peloneustes represented the main subgroups of the Plesiosauroidea and the Pliosauroidea and were chosen for historical reasons; any other species from these groups would have sufficed.
Another way to define a clade is to let it consist of all species more closely related to a certain species that one in any case wishes to include in the clade than to another species that one to the contrary desires to exclude. Such a clade is called a "stem clade". Such a definition has the advantage that it is easier to include all species with a certain morphology. Plesiosauria was in 2010 by Hillary Ketchum and Roger Benson defined as such a stem-based taxon: "all taxa more closely related to Plesiosaurus dolichodeirus and Pliosaurus brachydeirus than to Augustasaurus hagdorni". Ketchum and Benson (2010) also coined a new clade Neoplesiosauria, a node-based taxon defined as "Plesiosaurus dolichodeirus, Pliosaurus brachydeirus, their most recent common ancestor and all of its descendants". The clade Neoplesiosauria very likely is materially identical to Plesiosauria sensu Druckenmiller & Russell, and thus would designate exactly the same species; the term was meant as a replacement of this concept.
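The logic of the two kinds of definition can be made concrete with a small sketch. The following Python toy applies both definitions to a made-up miniature tree; the topology, the internal node names and the helper functions are all hypothetical and only mirror the reasoning described above, not any published phylogeny.

```python
# A toy sketch of the node-based versus stem-based clade definitions
# described above, on a made-up miniature tree (not a real phylogeny).

parent = {  # child -> parent for a small rooted tree
    "Plesiosaurus": "node_X", "Peloneustes": "node_X",
    "node_X": "node_Y", "Rhomaleosaurus": "node_Y",
    "node_Y": "node_Z", "Augustasaurus": "node_Z",
}

def ancestors(taxon: str) -> list[str]:
    """All strict ancestors of a taxon, ordered towards the root."""
    path = []
    while taxon in parent:
        taxon = parent[taxon]
        path.append(taxon)
    return path

def mrca(a: str, b: str) -> str:
    """Most recent common ancestor of two taxa."""
    anc_a = ancestors(a)
    return next(n for n in ancestors(b) if n in anc_a)

def node_clade(a: str, b: str, taxa: list[str]) -> set[str]:
    """Node-based: MRCA(a, b) and all of its descendants."""
    m = mrca(a, b)
    return {t for t in taxa if m in ancestors(t)}

def stem_clade(inside: str, outside: str, taxa: list[str]) -> set[str]:
    """Stem-based: every taxon closer to `inside` than to `outside`."""
    split = mrca(inside, outside)
    return {t for t in taxa if split in ancestors(mrca(t, inside))}

leaves = ["Plesiosaurus", "Peloneustes", "Rhomaleosaurus", "Augustasaurus"]
print(node_clade("Plesiosaurus", "Peloneustes", leaves))
# {'Plesiosaurus', 'Peloneustes'} -- Rhomaleosaurus falls outside
print(stem_clade("Plesiosaurus", "Augustasaurus", leaves))
# adds Rhomaleosaurus as well, mirroring how a stem-based Plesiosauria
# can be wider than the node-based Neoplesiosauria
```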
Benson et al. (2012) found the traditional Pliosauroidea to be paraphyletic in relation to Plesiosauroidea. Rhomaleosauridae was found to be outside Neoplesiosauria, but still within Plesiosauria. The early Carnian pistosaur Bobosaurus was found to be one step more advanced than Augustasaurus in relation to the Plesiosauria and therefore it represented by definition the basalmost known plesiosaur. This analysis focused on basal plesiosaurs and therefore only one derived pliosaurid and one cryptoclidian were included, while elasmosaurids were not included at all. A more detailed analysis published by both Benson and Druckenmiller in 2014 was not able to resolve the relationships among the lineages at the base of Plesiosauria.
The following cladogram follows an analysis by Benson & Druckenmiller (2014).
Description
Size
In general, plesiosaurians varied in adult length from between to about . The group thus contained some of the largest marine apex predators in the fossil record, roughly equalling the longest ichthyosaurs, mosasaurids, sharks and toothed whales in size. Some plesiosaurian remains, such as a long set of highly reconstructed and fragmentary lower jaws preserved in the Oxford University Museum and referable to Pliosaurus rossicus (previously referred to Stretosaurus and Liopleurodon), indicated a length of . However, it has recently been argued that this size cannot currently be determined, due to the jaws being poorly reconstructed, and that a length of metres or less is more likely. MCZ 1285, a specimen currently referable to Kronosaurus queenslandicus, from the Early Cretaceous of Australia, was estimated to have a skull length of . A series of neck vertebrae from the Kimmeridge Clay Formation indicate a pliosaur, probably Pliosaurus, that may have been up to long.
Skeleton
The typical plesiosaur had a broad, flat body and a short tail. Plesiosaurs retained their ancestral two pairs of limbs, which had evolved into large flippers. Plesiosaurs were related to the earlier Nothosauridae, which had a more crocodile-like body. The flipper arrangement is unusual for aquatic animals in that probably all four limbs were used to propel the animal through the water by up-and-down movements. The tail was most likely only used for helping in directional control. This contrasts with the ichthyosaurs and the later mosasaurs, in which the tail provided the main propulsion.
To power the flippers, the shoulder girdle and the pelvis had been greatly modified, developing into broad bone plates at the underside of the body, which served as an attachment surface for large muscle groups, able to pull the limbs downwards. In the shoulder, the coracoid had become the largest element covering the major part of the breast. The scapula was much smaller, forming the outer front edge of the trunk. To the middle, it continued into a clavicle and finally a small interclavicular bone. As with most tetrapods, the shoulder joint was formed by the scapula and coracoid. In the pelvis, the bone plate was formed by the ischium at the rear and the larger pubic bone in front of it. The ilium, which in land vertebrates bears the weight of the hindlimb, had become a small element at the rear, no longer attached to either the pubic bone or the thighbone. The hip joint was formed by the ischium and the pubic bone. The pectoral and pelvic plates were connected by a plastron, a bone cage formed by the paired belly ribs that each had a middle and an outer section. This arrangement immobilised the entire trunk.
To become flippers, the limbs had changed considerably. The limbs were very large, each about as long as the trunk. The forelimbs and hindlimbs strongly resembled each other. The humerus in the upper arm, and the femur in the upper leg, had become large flat bones, expanded at their outer ends. The elbow joints and the knee joints were no longer functional: the lower arm and the lower leg could not flex in relation to the upper limb elements, but formed a flat continuation of them. All outer bones had become flat supporting elements of the flippers, tightly connected to each other and hardly able to rotate, flex, extend or spread. This was true of the ulna, radius, metacarpals and fingers, as well as of the tibia, fibula, metatarsals and toes. Furthermore, in order to elongate the flippers, the number of phalanges had increased, up to eighteen in a row, a phenomenon called hyperphalangy. The flippers were not perfectly flat, but had a slightly convex top profile, like an airfoil, to be able to "fly" through the water.
While plesiosaurs varied little in the build of the trunk, and can be called "conservative" in this respect, there were major differences between the subgroups as regards the form of the neck and the skull. Plesiosaurs can be divided into two major morphological types that differ in head and neck size. "Plesiosauromorphs", such as Cryptoclididae, Elasmosauridae, and Plesiosauridae, had long necks and small heads. "Pliosauromorphs", such as the Pliosauridae and the Rhomaleosauridae, had shorter necks with a large, elongated head. The neck length variations were not caused by an elongation of the individual cervical vertebrae, but by an increase in their number. Elasmosaurus has seventy-two neck vertebrae; the known record is held by the elasmosaurid Albertonectes, with seventy-six cervicals. The large number of joints suggested to early researchers that the neck must have been very flexible; indeed, a swan-like curvature of the neck was assumed to be possible; in Icelandic, plesiosaurs are even called Svaneðlur, "swan lizards". However, modern research has confirmed an earlier conjecture of Williston that the long plate-like spines on top of the vertebrae, the processus spinosi, strongly limited vertical neck movement. Although horizontal curving was less restricted, in general, the neck must have been rather stiff and certainly was incapable of being bent into serpentine coils. This is even more true of the short-necked "pliosauromorphs", which had as few as eleven cervical vertebrae. With early forms, the amphicoelous or amphiplatyan neck vertebrae bore double-headed neck ribs; later forms had single-headed ribs. In the remainder of the vertebral column, the number of dorsal vertebrae varied between about nineteen and thirty-two; of the sacral vertebrae, between two and six; and of the tail vertebrae, between about twenty-one and thirty-two. These vertebrae still possessed the original processes inherited from the land-dwelling ancestors of the Sauropterygia and had not been reduced to fish-like simple discs, as happened with the vertebrae of ichthyosaurs. The tail vertebrae possessed chevron bones. The dorsal vertebrae of plesiosaurs are easily recognisable by two large foramina subcentralia, paired vascular openings at the underside.
The skull of plesiosaurs showed the "euryapsid" condition, lacking the lower temporal fenestrae, the openings at the lower rear sides. The upper temporal fenestrae formed large openings at the sides of the rear skull roof, the attachment for muscles closing the lower jaws. Generally, the parietal bones were very large, with a midline crest, while the squamosal bones typically formed an arch, excluding the parietals from the occiput. The eye sockets were large, in general pointing obliquely upwards; the pliosaurids had more sideways-directed eyes. The eyes were supported by scleral rings, the form of which shows that they were relatively flat, an adaptation to diving. The anteriorly placed internal nostrils, the choanae, have palatal grooves to channel water, the flow of which would be maintained during swimming by hydrodynamic pressure over the external nares, which were placed posteriorly, just in front of the eye sockets. According to one hypothesis, during its passage through the nasal ducts, the water would have been 'smelled' by olfactory epithelia. However, more to the rear, a second pair of openings is present in the palate; a later hypothesis holds that these are the real choanae and that the front pair in reality represented paired salt glands. The distance between the eye sockets and the nostrils was so limited because the nasal bones were strongly reduced, even absent in many species. The premaxillae directly touched the frontal bones; in the elasmosaurids, they even reached back to the parietal bones. Often, the lacrimal bones were also lacking.
Tooth form and number were very variable. Some forms had hundreds of needle-like teeth. Most species had larger conical teeth with a round or oval cross-section. Such teeth numbered four to six in the premaxilla and about fourteen to twenty-five in the maxilla; the number in the lower jaws roughly equalled that of the skull. The teeth were placed in tooth sockets, had vertically wrinkled enamel and lacked a true cutting edge or carina. In some species, the front teeth were notably longer, to grab prey.
Soft tissues
Soft tissue remains of plesiosaurs are rare, but sometimes, especially in shale deposits, they have been partly preserved, e.g. showing the outlines of the body. An early discovery in this respect was the holotype of Plesiosaurus conybeari (presently Attenborosaurus). From such finds it is known that the skin was smooth, without apparent scales but with small wrinkles (although Frey et al., (2017) reported that Mauriciosaurus had millimetric scale-like structures across the body that they interpret as scales), that the trailing edge of the flippers extended considerably behind the limb bones, and that the tail bore a vertical fin, as reported by Wilhelm Dames in his description of Plesiosaurus guilelmiimperatoris (presently Seeleyosaurus). The possibility of a tail fluke has been confirmed by recent studies on the caudal neural spine form of Pantosaurus, Cryptoclidus and Rhomaleosaurus zetlandicus. A 2020 study claims that the caudal fin was horizontal in configuration.
Paleobiology
Food
The probable food source of plesiosaurs varied depending on whether they belonged to the long-necked "plesiosauromorph" forms or the short-necked "pliosauromorph" species.
The extremely long necks of "plesiosauromorphs" have caused speculation as to their function from the very moment their special build became apparent. Conybeare had offered three possible explanations. The neck could have served to intercept fast-moving fish in a pursuit. Alternatively, plesiosaurs could have rested on the sea bottom while the head was sent out to search for prey, which seemed to be confirmed by the fact that the eyes were directed relatively upwards. Finally, Conybeare suggested the possibility that plesiosaurs swam on the surface, letting their necks plunge downwards to seek food at lower levels. All these interpretations assumed that the neck was very flexible. The modern insight that the neck was, in fact, rather rigid, with limited vertical movement, has necessitated new explanations. One hypothesis is that the length of the neck made it possible to surprise schools of fish, the head arriving before the sight or pressure wave of the trunk could alert them. "Plesiosauromorphs" hunted visually, as shown by their large eyes, and perhaps employed a directional sense of olfaction. Hard and soft-bodied cephalopods probably formed part of their diet. Their jaws were probably strong enough to bite through the hard shells of this prey type. Fossil specimens have been found with cephalopod shells still in their stomachs. The bony fish (Osteichthyes), which further diversified during the Jurassic, were likely prey as well. A very different hypothesis claims that "plesiosauromorphs" were bottom feeders: the stiff necks would have been used to plough the sea bottom, eating the benthos. Evidence for this would be long furrows preserved in ancient seabeds. Such a lifestyle was suggested for Morturneria in 2017. "Plesiosauromorphs" were not well adapted to catching large fast-moving prey, as their long necks, though seemingly streamlined, caused enormous skin friction. Sankar Chatterjee suggested in 1989 that some Cryptoclididae were suspension feeders, filtering plankton. Aristonectes, for example, had hundreds of teeth, allowing it to sieve small Crustacea from the water.
The short-necked "pliosauromorphs" were top carnivores, or apex predators, in their respective foodwebs. They were pursuit predators or ambush predators of various sized prey and opportunistic feeders; their teeth could be used to pierce soft-bodied prey, especially fish. Their heads and teeth were very large, suited to grab and rip apart large animals. Their morphology allowed for a high swimming speed. They too hunted visually.
Plesiosaurs were themselves prey for other carnivores, as shown by bite marks left by a shark that have been discovered on a fossilized plesiosaur fin and the fossilized remains of a mosasaur's stomach contents that are thought to be the remains of a plesiosaur.
Skeletons have also been discovered with gastroliths (stomach stones) in their stomachs, though whether these served to help break down food, especially cephalopods, in a muscular gizzard, or to vary buoyancy, or both, has not been established. However, the total weight of the gastroliths found in various specimens appears to be insufficient to modify the buoyancy of these large reptiles. The first plesiosaur gastroliths, found with Mauisaurus gardneri (a nomen nudum), were reported by Harry Govier Seeley in 1877. The number of these stones per individual is often very large. In 1949, a fossil of Alzadasaurus (specimen SDSM 451, later renamed to Styxosaurus) showed 253 of them. The size of individual stones is often considerable. In 1991, an elasmosaurid specimen, KUVP 129744, was investigated and found to contain a gastrolith with a diameter of seventeen centimetres and a weight of 1300 grams, and a somewhat shorter stone of 1490 grams. In total, forty-seven gastroliths were present, with a combined weight of 13 kilograms. The size of the stones has been seen as an indication that they were not swallowed by accident, but deliberately, the animal perhaps covering large distances in search of a suitable rock type. The type specimen of Scalamagnus (MNA V10046) is associated with 289 gastroliths, which is unusual in comparison to most polycotylid skeletons, which generally lack gastroliths. The stones ranged from less than 0.1 grams to 18.5 grams, and their total mass was about 518 grams. About three-quarters of the stones weighed less than 2 grams, with the mean mass and median mass of the stones estimated at 1.9 grams and 0.8 grams respectively. The gastroliths had a high mean value and variability in sphericity, suggesting that this individual was obtaining its stones from rivers located along the western side of the Western Interior Seaway.
Locomotion
Flipper movement
The distinctive four-flippered body-shape has caused considerable speculation about what kind of stroke plesiosaurs used. The only modern group with four flippers are the sea turtles, which only use the front pair for propulsion. Conybeare and Buckland had already compared the flippers with bird wings. However, such a comparison was not very informative, as the mechanics of bird flight in this period were poorly understood. By the middle of the nineteenth century, it was typically assumed that plesiosaurs employed a rowing movement. The flippers would have been moved forward in a horizontal position, to minimise friction, and then axially rotated to a vertical position in order to be pulled to the rear, causing the largest possible reactive force. In fact, such a method would be very inefficient: the recovery stroke in this case generates no thrust and the rear stroke generates an enormous turbulence. In the early twentieth century, the newly discovered principles of bird flight suggested to several researchers that plesiosaurs, like turtles and penguins, made a flying movement while swimming. This was e.g. proposed by Eberhard Fraas in 1905, and in 1908 by Othenio Abel. When flying, the flipper movement is more vertical, its point describing an oval or "8". Ideally, the flipper is first moved obliquely to the front and downwards and then, after a slight retraction and rotation, crosses this path from below to be pulled to the front and upwards. During both strokes, down and up, according to Bernoulli's principle, forward and upward thrust is generated by the convexly curved upper profile of the flipper, the front edge slightly inclined relative to the water flow, while turbulence is minimal. However, despite the evident advantages of such a swimming method, in 1924 the first systematic study on the musculature of plesiosaurs by David Meredith Seares Watson concluded they nevertheless performed a rowing movement.
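The "flying" stroke can be illustrated with standard hydrofoil arithmetic. The sketch below computes the lift on a flipper treated as a foil and the forward component of that lift when the stroke plane is tilted; every parameter value is an illustrative assumption, not a measurement from any plesiosaur or from the studies cited above.

```python
import math

# Minimal hydrofoil sketch of the "underwater flight" idea; all inputs
# are illustrative assumptions.

rho = 1025.0      # seawater density, kg/m^3
v_flipper = 2.0   # assumed speed of the flipper through the water, m/s
area = 0.5        # assumed flipper plan area, m^2
cl = 0.6          # assumed lift coefficient at a small angle of attack

# Standard foil lift: L = 1/2 * rho * v^2 * S * C_L
lift = 0.5 * rho * v_flipper**2 * area * cl

# If the downstroke path is tilted slightly forward, part of the lift
# points along the body axis as thrust.
tilt = math.radians(20)   # assumed inclination of the stroke plane
thrust = lift * math.sin(tilt)

print(f"lift   ~ {lift:.0f} N")    # ~615 N with these assumptions
print(f"thrust ~ {thrust:.0f} N")  # ~210 N directed forward
```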
During the middle of the twentieth century, Watson's "rowing model" remained the dominant hypothesis regarding the plesiosaur swimming stroke. In 1957, Lambert Beverly Halstead, at the time using the family name Tarlo, proposed a variant: the hindlimbs would have rowed in the horizontal plane but the forelimbs would have paddled, moved to below and to the rear. In 1975, the traditional model was challenged by Jane Ann Robinson, who revived the "flying" hypothesis. She argued that the main muscle groups were optimally placed for a vertical flipper movement, not for pulling the limbs horizontally, and that the form of the shoulder and hip joints would have precluded the vertical rotation needed for rowing. In a subsequent article, Robinson proposed that the kinetic energy generated by the forces exerted on the trunk by the strokes, would have been stored and released as elastic energy in the ribcage, allowing for an especially efficient and dynamic propulsion system.
In Robinson's model, both the downstroke and the upstroke would have been powerful. In 1982, she was criticised by Samuel Tarsitano, Eberhard Frey and Jürgen Riess, who claimed that, while the muscles at the underside of the shoulder and pelvic plates were clearly powerful enough to pull the limbs downwards, comparable muscle groups on the top of these plates to elevate the limbs were simply lacking, and, had they been present, could not have been forcefully employed, their bulging carrying the danger of hurting the internal organs. They proposed a more limited flying model in which a powerful downstroke was combined with a largely unpowered recovery, the flipper returning to its original position by the momentum of the forward moving and temporarily sinking body. This modified flying model became a popular interpretation. Less attention was given to an alternative hypothesis by Stephen Godfrey in 1984, which proposed that both the forelimbs and hindlimbs performed a deep paddling motion to the rear combined with a powered recovery stroke to the front, resembling the movement made by the forelimbs of sea-lions.
In 2010, Frank Sanders and Kenneth Carpenter published a study concluding that Robinson's model had been correct. Frey & Riess would have been mistaken in their assertion that the shoulder and pelvic plates had no muscles attached to their upper sides. While these muscle groups were probably not very powerful, this could easily have been compensated by the large muscles on the back, especially the latissimus dorsi, which would have been well developed in view of the high spines on the backbone. Furthermore, the flat build of the shoulder and hip joints strongly indicated that the main movement was vertical, not horizontal.
Gait
Like all tetrapods with limbs, plesiosaurs must have had a certain gait, a coordinated movement pattern of the, in this case, flippers. Of all the possibilities, in practice attention has been largely directed to the question of whether the front pair and hind pair moved simultaneously, so that all four flippers were engaged at the same moment, or in an alternate pattern, each pair being employed in turn. Frey & Riess in 1991 proposed an alternate model, which would have had the advantage of a more continuous propulsion. In 2000, Theagarten Lingham-Soliar evaded the question by concluding that, like sea turtles, plesiosaurs only used the front pair for a powered stroke. The hind pair would have been merely used for steering. Lingham-Soliar deduced this from the form of the hip joint, which would have allowed for only a limited vertical movement. Furthermore, a separation of the propulsion and steering function would have facilitated the general coordination of the body and prevented a too extreme pitch. He rejected Robinson's hypothesis that elastic energy was stored in the ribcage, considering the ribs too stiff for this.
The interpretation by Frey & Riess became the dominant one, but was challenged in 2004 by Sanders, who showed experimentally that, whereas an alternate movement might have caused excessive pitching, a simultaneous movement would have caused only a slight pitch, which could have been easily controlled by the hind flippers. Of the other axial movements, rolling could have been controlled by alternately engaging the flippers of the right or left side, and yaw by the long neck or a vertical tail fin. Sanders did not believe that the hind pair was not used for propulsion, concluding that the limitations imposed by the hip joint were very relative. In 2010, Sanders & Carpenter concluded that, with an alternating gait, the turbulence caused by the front pair would have hindered an effective action of the hind pair. Besides, a long gliding phase after a simultaneous engagement would have been very energy efficient. It is also possible that the gait was optional and was adapted to the circumstances. During a fast steady pursuit, an alternate movement would have been useful; in an ambush, a simultaneous stroke would have made a peak speed possible. When searching for prey over a longer distance, a combination of a simultaneous movement with gliding would have cost the least energy. In 2017, a study by Luke Muscutt, using a robot model, concluded that the rear flippers were actively employed, allowing for a 60% increase of the propulsive force and a 40% increase of efficiency. There would not have been a single optimal phase for all conditions, the gait likely having been changed as the situation demanded.
Speed
In general, it is hard to determine the maximum speed of extinct sea creatures. For plesiosaurs, this is made more difficult by the lack of consensus about their flipper stroke and gait. There are no exact calculations of their Reynolds Number. Fossil impressions show that the skin was relatively smooth, not scaled, and this may have reduced form drag. Small wrinkles are present in the skin that may have prevented separation of the laminar flow in the boundary layer and thereby reduced skin friction.
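Although no exact values have been published, the order of magnitude of the Reynolds number can be sketched from its standard definition, Re = vL/ν; the speed and body length below are illustrative assumptions only.

```python
# Order-of-magnitude Reynolds number from the standard definition
# Re = v * L / nu. Speed and length are illustrative assumptions.

v = 1.5        # assumed swimming speed, m/s
L = 5.0        # assumed body length, m
nu = 1.05e-6   # approximate kinematic viscosity of seawater, m^2/s

re = v * L / nu
print(f"Re ~ {re:.1e}")
# ~7e6: a regime in which the boundary layer matters, consistent with
# the suggestion above that skin wrinkles influenced flow and friction
```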
Sustained speed may be estimated by calculating the drag of a simplified model of the body, which can be approximated by a prolate spheroid, and the sustainable level of energy output by the muscles. A first study of this problem was published by Judy Massare in 1988. Even when assuming a low hydrodynamic efficiency of 0.65, Massare's model seemed to indicate that plesiosaurs, if warm-blooded, would have cruised at a speed of four metres per second, or about fourteen kilometres per hour, considerably exceeding the known speeds of extant dolphins and whales. However, in 2002 Ryosuke Motani showed that the formulae that Massare had used were flawed. A recalculation, using corrected formulae, resulted in a speed of half a metre per second (1.8 km/h) for a cold-blooded plesiosaur and one and a half metres per second (5.4 km/h) for an endothermic plesiosaur. Even the highest estimate is about a third lower than the speed of extant Cetacea.
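The logic of such estimates can be sketched as a power balance: at a steady cruise, the muscles' sustainable output, discounted by hydrodynamic efficiency, equals the drag power. The sketch below uses this balance with placeholder values; none of the parameters are the actual figures of Massare (1988) or Motani (2002).

```python
# Sketch of the power balance behind cruise-speed estimates: at steady
# speed the useful muscle output equals the drag power. All parameter
# values are placeholders, not the published figures.

rho = 1025.0          # seawater density, kg/m^3
cd = 0.05             # assumed drag coefficient of the streamlined body
area = 1.5            # assumed reference area of the body, m^2
efficiency = 0.65     # hydrodynamic efficiency, the low value quoted above
power_muscle = 200.0  # assumed sustainable muscle power output, W

# efficiency * P_muscle = 1/2 * rho * Cd * A * v^3  =>  solve for v
v = (2 * efficiency * power_muscle / (rho * cd * area)) ** (1 / 3)
print(f"sustained speed ~ {v:.1f} m/s ({3.6 * v:.1f} km/h)")
# ~1.5 m/s with these placeholders, the same order of magnitude as the
# corrected estimate for an endothermic plesiosaur quoted above
```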
Massare also tried to compare the speeds of plesiosaurs with those of the two other main sea reptile groups, the Ichthyosauria and the Mosasauridae. She concluded that plesiosaurs were about twenty percent slower than advanced ichthyosaurs, which employed a very effective thunniform movement, oscillating just the tail, but five percent faster than mosasaurids, which were assumed to swim with an inefficient anguilliform, eel-like movement of the body.
The many plesiosaur species may have differed considerably in their swimming speeds, reflecting the various body shapes present in the group. While the short-necked "pliosauromorphs" (e.g. Liopleurodon) may have been fast swimmers, the long-necked "plesiosauromorphs" were built more for manoeuvrability than for speed, slowed by a strong skin friction, yet capable of a fast rolling movement. Some long-necked forms, such as the Elasmosauridae, also have relatively short stubby flippers with a low aspect ratio, further reducing speed but improving roll.
Diving
Few data are available to show exactly how deep plesiosaurs dived. That they dived to some considerable depth is proven by traces of decompression sickness. The heads of the humeri and femora of many fossils show necrosis of the bone tissue, caused by too rapid an ascent after deep diving. However, this does not allow an exact depth to be deduced, as the damage could have been caused by a few very deep dives, or alternatively by a great number of relatively shallow descents. The vertebrae show no such damage: they were probably protected by a superior blood supply, made possible by the arteries entering the bone through the two foramina subcentralia, large openings in their undersides.
Descending would have been helped by a negative Archimedes force, i.e. by being denser than water. Of course, this would have had the disadvantage of hampering the ascent. Young plesiosaurs show pachyostosis, an extreme density of the bone tissue, which might have increased relative weight. Adult individuals have more spongy bone. Gastroliths have been suggested as a method to increase weight or even as a means to attain neutral buoyancy, swallowing or spitting them out again as needed. They might also have been used to increase stability.
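A back-of-the-envelope buoyancy balance illustrates why the reported gastrolith masses (such as the roughly 13 kg in specimen KUVP 129744, noted above) were probably too small to act as ballast. The body mass and densities below are illustrative assumptions.

```python
# Back-of-the-envelope buoyancy balance; masses and densities assumed.

g = 9.81
rho_water = 1025.0   # seawater, kg/m^3
body_mass = 3000.0   # assumed body mass, kg
rho_body = 1020.0    # assumed whole-body density, slightly buoyant

volume = body_mass / rho_body
# net vertical force; positive = upward
net_without = (rho_water * volume - body_mass) * g

stone_mass = 13.0                    # as reported for KUVP 129744
stone_volume = stone_mass / 2600.0   # typical rock density, kg/m^3
net_with = (rho_water * (volume + stone_volume)
            - (body_mass + stone_mass)) * g

print(f"net force without stones: {net_without:+.0f} N")
print(f"net force with stones:    {net_with:+.0f} N")
# The stones shift the balance by only ~80 N against a weight of roughly
# 30,000 N, consistent with the view quoted above that gastroliths alone
# were insufficient to modify buoyancy.
```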
The relatively large eyes of the Cryptoclididae have been seen as an adaptation to deep diving.
Tail role
A 2020 study posited that sauropterygians relied on vertical tail strokes, much like cetaceans. In plesiosaurs the trunk was rigid, so this action was more limited and worked in conjunction with the flippers.
Metabolism
Traditionally, it was assumed that extinct reptile groups were cold-blooded like modern reptiles. Research in recent decades has led to the conclusion that some groups, such as theropod dinosaurs and pterosaurs, were very likely warm-blooded. Whether plesiosaurs were warm-blooded as well is difficult to determine. One of the indications of a high metabolism is the presence of fast-growing fibrolamellar bone. The pachyostosis of juvenile individuals makes it hard to establish whether plesiosaurs possessed such bone, though. However, it has been possible to check its occurrence in more basal members of the more inclusive group that plesiosaurs belonged to, the Sauropterygia. A study in 2010 concluded that fibrolamellar bone was originally present in sauropterygians. A subsequent publication in 2013 found that the Nothosauridae lacked this bone matrix type but that basal Pistosauria possessed it, a sign of a more elevated metabolism. It is thus more parsimonious to assume that the more derived pistosaurians, the plesiosaurs, also had a faster metabolism. A paper published in 2018 claimed that plesiosaurs had resting metabolic rates (RMR) in the range of birds, based on quantitative osteohistological modelling. However, these results are problematic in view of general principles of vertebrate physiology (see Kleiber's law); evidence from isotope studies of plesiosaur tooth enamel instead suggests endothermy at lower RMRs.
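Kleiber's law, referred to above, states that resting metabolic rate scales roughly with the three-quarter power of body mass, with a proportionality coefficient roughly an order of magnitude higher in endotherms than in ectotherms. The sketch below shows the kind of plausibility check involved; the coefficients and the body mass are rough, hypothetical textbook-style values, not figures from the studies cited.

# Kleiber's law sketch: RMR ~ a * M**0.75 (a in W per kg^0.75, M in kg).
def rmr_watts(mass_kg, a):
    return a * mass_kg ** 0.75

mass = 500.0                  # hypothetical plesiosaur body mass, kg
endo = rmr_watts(mass, 3.4)   # rough mammal/bird-like coefficient
ecto = rmr_watts(mass, 0.3)   # rough reptile-like coefficient (warm conditions)
print(f"endotherm-like RMR ~{endo:.0f} W, ectotherm-like ~{ecto:.0f} W "
      f"(ratio ~{endo / ecto:.0f}x)")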
Reproduction
As reptiles in general are oviparous, until the end of the twentieth century it was considered possible that smaller plesiosaurs may have crawled up on a beach to lay eggs, like modern turtles. Their strong limbs and flat undersides seemed to have made this feasible. This method was, for example, defended by Halstead. However, as those limbs no longer had functional elbow or knee joints, and the underside by its very flatness would have generated a lot of friction, it was hypothesised as early as the nineteenth century that plesiosaurs had been viviparous. Besides, it was hard to conceive how the largest species, as big as whales, could have survived a beaching. Fossil finds of ichthyosaur embryos showed that at least one group of marine reptiles had borne live young. The first to claim that similar embryos had been found in plesiosaurs was Harry Govier Seeley, who reported in 1887 having acquired a nodule with four to eight tiny skeletons. In 1896, he described this discovery in more detail. If authentic, the embryos of plesiosaurs would have been very small, like those of ichthyosaurs. However, in 1982 Richard Anthony Thulborn showed that Seeley had been deceived by a "doctored" fossil of a nest of crayfish.
An actual plesiosaur specimen found in 1987 eventually proved that plesiosaurs gave birth to live young: this fossil of a pregnant Polycotylus latipinnus shows that these animals gave birth to a single large juvenile and probably invested parental care in their offspring, similar to modern whales. The young was 1.5 metres (five feet) long and thus large compared to its five-metre (sixteen-foot) mother, indicating a K-strategy in reproduction. Little is known about growth rates or possible sexual dimorphism.
Social behaviour and intelligence
From the parental care indicated by the large size of the young, it can be deduced that social behaviour in general was relatively complex. It is not known whether plesiosaurs hunted in packs. Their relative brain size seems to be typical for reptiles. Of the senses, sight and smell were important, hearing less so; elasmosaurids lost the stapes completely. It has been suggested that in some groups the skull housed electro-sensitive organs.
Paleopathology
Some plesiosaur fossils show pathologies, the result of illness or old age. In 2012, a mandible of Pliosaurus was described with a jaw joint clearly afflicted by arthritis, a typical sign of senescence.
Distribution
Plesiosaur fossils have been found on every continent, including Antarctica.
Timeline of Plesiosauria Species
Stratigraphic distribution
The following is a list of geologic formations that have produced plesiosaur fossils.
In contemporary culture
The belief that plesiosaurs are dinosaurs is a common misconception, and plesiosaurs are often erroneously depicted as dinosaurs in popular culture.
It has been suggested that legends of sea serpents and modern sightings of supposed monsters in lakes or the sea could be explained by the survival of plesiosaurs into modern times. This cryptozoological proposal has been rejected by the scientific community at large, which considers it to be based on fantasy and pseudoscience. Purported plesiosaur carcasses have been shown to be partially decomposed corpses of basking sharks instead.
While the Loch Ness monster is often reported as looking like a plesiosaur, it is also often described as looking completely different. A number of reasons have been presented why it is unlikely to be a plesiosaur: the osteology of the plesiosaur's neck makes it safe to say that the animal could not lift its head swan-like out of the water, as the Loch Ness monster is said to do; air-breathing animals would be easy to see whenever they appeared at the surface to breathe; the loch is too small and contains insufficient food to support a breeding colony of large animals; and the loch itself was formed only 10,000 years ago, at the end of the last ice age, whereas the latest fossil appearance of plesiosaurs dates to over 66 million years ago. Frequent explanations for the sightings include waves, floating inanimate objects, tricks of the light, swimming known animals and practical jokes. Nevertheless, in the popular imagination, plesiosaurs have come to be identified with the Monster of Loch Ness. This has made plesiosaurs better known to the general public.
| Biology and health sciences | Dinosaurs and prehistoric reptiles | null |
1398681 | https://en.wikipedia.org/wiki/Nyctereutes | Nyctereutes | Nyctereutes (Greek: nyx, nykt- "night" + ereutēs "wanderer") is a genus of canid which includes only two extant species, both known as raccoon dogs: the common raccoon dog (Nyctereutes procyonoides) and the Japanese raccoon dog (Nyctereutes viverrinus). Nyctereutes first entered the fossil record 5.5 million years ago (Mya) in northern China. It was one of the earliest canines to arrive in the Old World. All but two species became extinct before the end of the Pleistocene. A study suggests that the evolution of Nyctereutes was influenced by environmental and climatic changes, such as the expansion and contraction of forests and the fluctuations of temperature and precipitation.
Characteristics
They are typically recognized by their short snouts, round crania and the shape of their molars, specifically the ratio between M1 and M2. Nyctereutes is considered mainly an opportunistic carnivore, feeding on small mammals, fish, birds, and insects, alongside occasional plants, specifically roots. Their diet is mostly influenced by environmental factors. Japanese raccoon dogs are considered distinct from the mainland species because of the larger skull size found in Russian and Hokkaido raccoon dogs.
Species
Extant species
Fossil species
†Nyctereutes abdeslami 3.6—1.8 Mya (Morocco)
†Nyctereutes donnezani 9.0—3.4 Mya (Eastern Europe, Spain)
†Nyctereutes lockwoodi 3.42—3.2 Mya (Ethiopia)
†Nyctereutes megamastoides (Europe)
†Nyctereutes sinensis 3.6 Mya—781,000 years ago (Eastern Asia)
†Nyctereutes tingi
†Nyctereutes vinetorum
| Biology and health sciences | Canines | Animals |
1399625 | https://en.wikipedia.org/wiki/Hummock | Hummock | In geology, a hummock is a small knoll or mound above ground. They are typically low in height and tend to appear in groups or fields. Large landslide avalanches, which typically occur in volcanic areas, are responsible for the formation of hummocks. From the initiation of the landslide to the final formation, hummocks can be characterized by their evolution, spatial distribution, and internal structure. As the movement of a landslide begins, extensional faulting results in the formation of hummocks, with smaller ones at the front of the landslide and larger ones in the back. The size of the hummocks depends on their position in the initial mass. As this mass spreads, the hummocks further modify, breaking up or merging to form larger structures. It is difficult to make generalizations about hummocks because of the diversity in their morphology and sedimentology. An extremely irregular surface may be called hummocky.
An ice hummock is a boss or rounded knoll of ice rising above the general level of an ice-field. Hummocky ice is caused by slow and unequal pressure in the main body of the packed ice, and by unequal structure and temperature at a later period.
Bog hummocks
Hummocks in the shape of low ridges of drier peat moss typically form part of the structure of certain types of raised bog, such as plateau, kermi, palsa or string bog. The hummocks alternate with shallow wet depressions or flarks.
Swamp hummocks
Swamp hummocks are mounds typically initiated as fallen trunks or branches covered with moss and rising above the swamp floor. The low-lying areas between hummocks are called hollows. A related term, used in the Southeastern United States, is "hammock".
Cryogenic earth hummocks
Cryogenic earth hummocks go by various names: in North America they are known as earth hummocks; the Icelandic term þúfa (plural þúfur) is also used to describe them in Greenland and Iceland, and a corresponding Finnish term is used in Fennoscandia. These cold-climate landforms appear in regions of permafrost and seasonally frozen ground. They usually develop in fine-grained soils with light to moderate vegetation, in areas of low relief where there is adequate moisture to fuel cryogenic processes. Cryogenic earth hummocks appear in a variety of cold-ground environments, making the story of their genesis complex. Geologists recognize that hummocks may be polygenetic and form by a combination of forces that are yet to be well understood. Recent research on cryogenic hummocks has focused on their role as environmental indicators. Because hummocks can both form and disintegrate rapidly (well within a human lifetime), they are an ideal landform to monitor for medium-range environmental change. There are several explanations of earth hummock formation. Hummocks may form as a result of clasts migrating to the surface through frost push and pull mechanisms. As the clasts rise, they push up on the ground above, forming bulging mounds.
Oscillating cryogenic earth hummocks
Cryogenic hummocks covered in vegetation are found in taiga and boreal forests. They are also known as active hummocks, due to the freeze–thaw cycle of the ice lenses that continually occurs within the organic layers of their mounds. The freezing of ice lenses is what causes the mounds to rise. When the ice lenses thaw during a forest fire, the mounds collapse until they freeze again.
Thufurs
Thufurs are small-sized hummocks typically found in climates like that of Iceland. They prefer areas with seasonal freezing and maritime climates. While their sediment is rich in silt, the primary makeup of these mounds is volcanic ash. A clear display of layers of volcanic ash amidst other organic matter can be observed within these thufurs.
Cellular circulation
Hummock excavation normally reveals a disturbed soil profile, often with irregular streaks of organic matter or other colorations suggesting fluidity at some point in the past. The disturbance, a form of cryoturbation, often extends to a depth roughly equal to the hummock’s height. This has been explained by some as the result of convection processes whereby warmer soil and water at depth expands, becomes less dense and rises, while gravity forces denser soil downwards. Circulation has also been explained as driven solely by density of soil material and not temperature induced density changes.
Differential frost heave (cryostatic pressure hypothesis)
This is the most widely accepted explanation of cryogenic hummock genesis. Irregularities in preexisting ground conditions (differences in grain size, ground temperature, and the moisture conditions of vegetation) cause the downward freezing of the surface during the winter to spread unevenly. The encroaching frost exerts increasing pressure on the adjacent unfrozen soil. Trapped between the freezing surface soils and the buried permafrost layer, the soil material is forced upwards into hummocks. While this is currently the most commonly accepted hypothesis, there is still only limited evidence of it happening.
Hummocks created by debris avalanches
Debris avalanches are caused by sudden collapses of large volumes of rock from the flanks of mountains, especially volcanoes. These events are fast-moving, gravity-driven currents of saturated debris that do not necessarily include juvenile material. Debris avalanche deposits are characterized by debris-avalanche blocks (hummocks) set in a debris-avalanche matrix. Debris avalanches are diagnosed in landscapes where the volcano has an amphitheater at the source with hummocky terrain downhill. In some cases, such as Mount Shasta in California, the amphitheater has been filled in by later volcanic activity and all that remains are the hummocks.
Debris avalanche blocks are identifiable because they keep their internal stratigraphy. The blocks simply break off the mountain and slide down, completely intact, and are identifiable because they differ from the surrounding landscape. The volume and height of hummocks are mostly dependent on their location: the closer to the source region, the larger they become. The bottom layer of a debris avalanche deposit is the fine-grained matrix, which forms due to the shear at the base of the large, turbulent moving mass.
| Physical sciences | Other erosional landforms | Earth science |
6772880 | https://en.wikipedia.org/wiki/Zygentoma | Zygentoma | Zygentoma are an order in the class Insecta, and consist of about 550 known species. The Zygentoma include the so-called silverfish or fishmoths, and the firebrats. A conspicuous feature of the order are the three long caudal filaments. The two lateral filaments are cerci, and the medial one is an epiproct or appendix dorsalis. In this they resemble the Archaeognatha, although the cerci of Zygentoma, unlike in the latter order, are nearly as long as the epiproct.
Until the late twentieth century the Zygentoma were regarded as a suborder of the Thysanura, until it was recognized that the order Thysanura was paraphyletic, thus raising the two suborders to the status of independent monophyletic orders, with Archaeognatha as sister group to the Dicondylia, including the Zygentoma.
Etymology
The name "Zygentoma" is derived from the Greek (), in context meaning "yoke" or "bridge"; and (), "insects" (literally meaning "cut into", in reference to the segmented anatomy of typical insects). The idea behind the name was that the taxon formed a notional link between the Pterygota and the Apterygota. This view of the taxon as a link is now totally obsolete, but the phylogeny of the Insecta was in its infancy in the late 19th and early 20th centuries, and the name was firmly established by the time that more sophisticated views were developed.
Description and ecology
Silverfish are so called because of the silvery glitter of the scales covering the bodies of the most conspicuous species (family Lepismatidae). Their movement has been described as "fish-like", as if they were swimming. Most extant species have a small body length, though Carboniferous fossils about 6 cm long are known.
Zygentoma have dorsoventrally flattened bodies, generally elongated or oval in outline. Their antennae are slender and mobile. The compound eyes tend to be small, and the two families Nicoletiidae and Protrinemuridae, as well as some troglobitic species, lack eyes entirely. The Lepismatidae have compound eyes composed of 12 ommatidia on each side of the head. Ocelli are absent in all species except for Tricholepidion gertschi, the only member of the family Lepidotrichidae. The mandibles are short, and the mouthparts unspecialised. Tricholepidion, Nicoletiidae and Protrinemuridae have eight pairs of short appendages called styli on their abdominal segments 2 to 9, but in Lepismatidae styli are found only on segments 7 to 9 or 8 to 9, sometimes just on the ninth segment, or can be completely absent. A distinctive feature of the group is the presence of three long, tail-like filaments extending from the last segment. These three are generally subequal, except in some members of the family Nicoletiidae, in which they are short and the cerci are hard to detect. The two lateral filaments are the abdominal cerci and the medial one is the epiproct.
Silverfish may be found in moist, humid environments or dry conditions, both as free-living organisms or nest-associates. In domestic settings, they feed on cereals, paste, paper, starch in clothes, rayon fabrics and dried meats. In nature, they will feed on organic detritus. Silverfish can sometimes be found in bathtubs or sinks at night, because they have difficulty moving on smooth surfaces and so become trapped if they fall in.
Wild species often are found in dark, moist habitats such as caves or under rocks, and some, particularly the Atelurinae, are commensals living in association with ant and termite nests, e.g., Trichatelura manni and Allotrichotriura saevissima, which lives inside nests of fire ants in Brazil.
There are no current species formally considered to be at conservation risk, though several are troglobites limited to one or a few caves or cave systems, and these species run an exceptionally high risk of extinction.
Aggregation behaviour
In the past, a contact pheromone was assumed to be responsible for the aggregation and arrestment behaviour observed in Zygentoma. It was later found that the aggregation behaviour is not triggered by pheromones, but by an endosymbiotic fungus, Mycotypha microspora (Mycotyphaceae), and an endosymbiotic bacterium, Enterobacter cloacae (Enterobacteriaceae), both present in the faeces of the firebrat, Thermobia domestica. It was also shown that firebrats detect the presence of E. cloacae based on its external glycocalyx of polysaccharides, most likely based on its D-glucose component. Mycotypha microspora is only detected by firebrats in the presence of cellulose, suggesting that metabolites of the enzymatic cellulose digestion by M. microspora (such as D-glucose) serve as the aggregation/arrestment cue. A follow-up study showed that the gray silverfish, Ctenolepisma longicaudatum, also responds with arrestment to Mycotypha microspora, but the common silverfish, Lepisma saccharinum, does not.
Furthermore, direct current-powered low-level electromagnetic coils with static electromagnetic fields were found to induce attraction or arrestment behaviour in Lepisma saccharinum and Thermobia domestica. This behavioural trait has potential application in traps for Zygentoma, and a respective patent has been issued.
Taxonomy
Order Zygentoma Börner 1904
Suborder Archizygentoma Engel 2006
Family Tricholepidiidae Engel 2006
Suborder Neozygentoma Engel 2006
Infraorder Parazygentoma Engel 2006
Family Lepidotrichidae Silvestri 1913
Infraorder Euzygentoma Grimaldi & Engel 2005
Family Maindroniidae Escherich 1905
Family Lepismatidae Latreille 1802
Family Protrinemuridae Mendes 1988
Family Nicoletiidae Escherich 1905
The Tricholepidiidae are represented by Tricholepidion gertschi from forests of northern California.
The Lepidotrichidae are represented by the extinct Lepidotrix pilifera, known from Baltic amber.
The Lepismatidae is the largest family and they include the physically largest specimens. The family is cosmopolitan with more than 200 species. Many are anthropophilic, living in human habitations. Some species are inquilines in ant colonies.
The Nicoletiidae tend to be smaller, pale in colour, and often live in soil litter, humus, under stones, in caves (with reduced eyes) or as inquilines in ant or termite colonies. The family is subdivided into five subfamilies.
The Maindroniidae comprise three species, found in the Middle East and in Chile.
The Protrinemuridae comprise four genera. Like Nicoletiidae species living in caves, they lack eyes.
Some molecular phylogenies have found Tricholepidiidae to form an independent, more basal branch of insects unrelated to other zygentomans.
Evolutionary history
The fossil record for Zygentoma is poor, though they must have diverged from all other insects either during the Carboniferous or, if Leverhulmia is an example of the group, the Devonian. The oldest fossils of the order are indeterminate specimens of Lepismatidae from the Santana Formation of Brazil, dating to the Aptian stage of the Early Cretaceous around 113 million years ago, with other specimens of Lepismatidae known from the Burmese amber of Myanmar, dating to around 100 million years ago. Fossils of Nicoletiidae are known from Miocene-aged Dominican amber.
Reproduction
Silverfish have an elaborate courtship ritual to ensure the transfer of sperm. The male spins a silken thread between the substrate and a vertical object. He deposits a sperm packet (spermatophore) beneath this thread and then coaxes a female to walk under the thread. When her cerci contact the silk thread, she picks up the spermatophore with her genital opening. Sperm are released into her reproductive system, after which she ejects the empty spermatophore and eats it.
As ametabolous insects, silverfish continue to moult throughout their lives, with several sexually mature instars, unlike the pterygote insects. They are relatively slow-growing, and lifespans of four to eight years have been recorded.
Research for biofuel production
Since silverfish consume lignocellulose found in wood, they are one type of insect (along with termites, wood-feeding roaches, wood wasps, and others) currently being researched for use in the production of biofuel. The guts of these insects act as natural bioreactors in which chemical processes break down cellulose. They have been studied in the hope of developing commercially cost-effective biofuel production processes.
| Biology and health sciences | Zygentoma | Animals |
2835531 | https://en.wikipedia.org/wiki/Brow%20ridge | Brow ridge | The brow ridge, or supraorbital ridge, known in medicine as the superciliary arch, is a bony ridge located above the eye sockets of all primates and some other animals. In humans, the eyebrows are located on its lower margin.
Structure
The brow ridge is a nodule or crest of bone situated on the frontal bone of the skull. It forms the separation between the forehead portion itself (the squama frontalis) and the roof of the eye sockets (the pars orbitalis). Normally, in humans, the ridges arch over each eye, offering mechanical protection. In other primates, the ridge is usually continuous and often straight rather than arched. The ridges are separated from the frontal eminences by a shallow groove. The ridges are most prominent medially, and are joined to one another by a smooth elevation named the glabella.
Typically, the arches are more prominent in men than in women, and vary between different human populations. Behind the ridges, deeper in the bone, are the frontal sinuses.
Terminology
The brow ridges, being a prominent part of the face in some human populations and a trait linked to sexual dimorphism, have a number of names in different disciplines. In vernacular English, the terms eyebrow bone or eyebrow ridge are common. The more technical terms frontal or supraorbital arch, ridge or torus (plural tori, as the ridges are usually seen as a pair) are often found in anthropological or archaeological studies. In medicine, the term is arcus superciliaris (Latin), or its English translation, superciliary arch. This feature is distinct from the supraorbital margin and the margin of the orbit.
Some paleoanthropologists distinguish between frontal torus and supraorbital ridge. In anatomy, a torus is a projecting shelf of bone that unlike a ridge is rectilinear, unbroken and goes through glabella. Some fossil hominins, in this use of the word, have the frontal torus, but almost all modern humans only have the ridge.
Development
Spatial model
The spatial model proposes that supraorbital torus development is best explained by the disparity between the anterior position of the orbital component and the position of the neurocranium.
Much of the groundwork for the spatial model was laid down by Schultz (1940). He was the first to document that at later stages of development (after age 4) the growth of the orbit would outpace that of the eye. Consequently, he proposed that facial size is the most influential factor in orbital development, with orbital growth being only secondarily affected by size and ocular position.
Weidenreich (1941) and Biegert (1957, 1963) argued that the supraorbital region can best be understood as a product of the orientation of its two components, the face and the neurocranium.
The fullest articulation of the spatial model was presented by Moss and Young (1960), who stated that "the presence… of supraorbital ridges is only the reflection of the spatial relationship between two functionally unrelated cephalic components, the orbit and the brain" (Moss and Young, 1960, p. 282). They proposed (as first articulated by Biegert in 1957) that during infancy the neurocranium extensively overlaps the orbit, a condition that prohibits brow ridge development. As the splanchnocranium grows, however, the orbits begin to advance, thus causing the anterior displacement of the face relative to the brain. Brow ridges then form as a result of this separation.
Bio-mechanical model
The bio-mechanical model predicts that morphological variation in torus size is the direct product of differential tension caused by mastication, as indicated by an increase in load/lever ratio and broad craniofacial angle.
Research on this model has largely been based on the earlier work of Endo. By applying pressure similar to the type associated with chewing, he carried out an analysis of the structural function of the supraorbital region on dry human and gorilla skulls. His findings indicated that the face acts as a pillar that carries and disperses tension caused by the forces produced during mastication. Russell and Oyen et al. elaborated on this idea, suggesting that amplified facial projection necessitates the application of enhanced force to the anterior dentition in order to generate the same bite power that individuals with a dorsal deflection of the facial skull exert. In more prognathic individuals, this increased pressure triggers bone deposition to reinforce the brow ridges, until equilibrium is reached.
Oyen et al. conducted a cross-sectional study of Papio anubis in order to ascertain the relationship between palate length, incisor load and masseter lever efficiency, relative to torus enlargement. Indications of osteoblastic deposition in the glabella were used as evidence for supraorbital enlargement. Oyen et al.'s data suggested that more prognathic individuals experienced a decrease in load/lever efficiency. This transmits tension via the frontal process of the maxilla to the supraorbital region, resulting in a contemporary reinforcement of this structure. This was also correlated with periods of tooth eruption.
In a later series of papers, Russell developed aspects of this mode further. Employing an adult Australian sample, she tested the association between brow ridge formation and anterior dental loading, via the craniofacial angle (prosthion-nasion-metopion), maxilla breadth, and discontinuities in food preparation such as those observed between different age groups. Finding strong support for the first two criteria, she concluded that the supraorbital complex is formed as a result of increased tension due to the widening of the maxilla, thought to be positively correlated with the size of the masseter muscle, as well as with the improper orientation of bone in the superior orbital region.
Function
Some researchers have suggested that brow ridges function to protect the eyes and orbital bones during hand-to-hand combat, given that they are a highly sexually dimorphic trait.
Paleolithic humans
Pronounced brow ridges were a common feature among Paleolithic humans. Early modern people, such as those from the finds at Jebel Irhoud and Skhul and Qafzeh, had thick, large brow ridges, but they differ from those of archaic humans like Neanderthals by having a supraorbital foramen or notch, forming a groove through the ridge above each eye, although there were exceptions, such as Skhul 2, in which the ridge was unbroken, unlike in other members of her group. This splits the ridge into central and distal parts. In current humans, almost always only the central sections of the ridge are preserved (if preserved at all). This contrasts with many archaic and early modern humans, where the brow ridge is pronounced and unbroken.
Other animals
The size of these ridges also varies between different species of primates, living or fossil. The closest living relatives of humans, the great apes, and especially gorillas and chimpanzees, have a very pronounced supraorbital ridge, which has also been called a frontal torus, while in modern humans and orangutans it is relatively reduced. The fossil record indicates that the supraorbital ridge in early hominins was reduced as the cranial vault grew; the frontal portion of the brain became positioned above rather than behind the eyes, giving a more vertical forehead.
Supraorbital ridges are also present in some other animals, such as wild rabbits, eagles and certain species of sharks. The presence of a supraorbital ridge in the Korean field mouse has been used to distinguish it among related species.
| Biology and health sciences | Human anatomy | Health |
2839552 | https://en.wikipedia.org/wiki/Cashmere%20goat | Cashmere goat | A cashmere goat is a type of goat that produces cashmere wool, the goat's fine, soft, downy, winter undercoat, in commercial quality and quantity. This undercoat grows as the days get shorter and is associated with an outer coat of coarse hair, which is present all the year and is called guard hair. Most common goat breeds, including dairy goats, grow this two-coated fleece.
The down is produced by secondary follicles, the guard hair by the primary follicles.
In 1994, China had an estimated population of 123 million goats; it is the largest producer of cashmere down. Local breeds are dominant. In the past decades, breeding programs have been started to develop productive breeds. The cashmere goat is a fiber goat, along with the Pygora goat, Nigora goat, and the Angora goat.
The goats take their name from their origin in the Himalayan region of Kashmir, the word "cashmere" being an anglicisation of Kashmir.
Cashmere-producing breeds
Australian cashmere goat
The foundation stock for the Australian cashmere goat was taken from the local bush goat population of northern and western Australia in the late 1970s. The production varies from herd to herd, with the most productive herds averaging 250 grams at a diameter of 15 μm. There is a breed and fleece standard, and active development of the breed continues, with the University of Western Australia running a sire referencing scheme.
Changthangi (Kashmir Pashmina) cashmere goat
The Changthangi or Pashmina goat is found in China (Tibet), Mongolia, Myanmar, Bhutan, Nepal, Pakistan and India. They are raised for cashmere production and used as pack animals. The breed is most often white, but black, gray and brown animals also occur. They have large, twisting horns. This bloodline produces the finest cashmere, with an average diameter between 12 and 13 μm and an average fiber length between 55 and 60 mm. It is very rare and constitutes less than 0.1% of global cashmere production.
Hexi
The Hexi Cashmere has a long history in desert and semidesert regions of Gansu Province, China. About 60% of the goats are white. The Hexi cashmere can be found in the Gansu, Qinghai and Ningxia provinces. A typical adult doe produces 184 grams of down at 15.7 μm diameter.
Inner Mongolia cashmere goat
The Inner Mongolia cashmere goat is a local dual-purpose breed with a long history. It adapts well to desert and semidesert pastures. The goats can be divided into five strains, Alasan (Alashanzuoqi), Arbus, Erlangshan, Hanshan and Wuzhumuqin. The first three strains produce quality cashmere; the last two have been developed for high production. The average down yield is about 240 grams, with an average down diameter between 14.3 and 15.8 μm. The cashmere length is between 41 and 47 mm. In 1994, the total Inner Mongolian goat population was approximately 2.3 million goats.
Liaoning cashmere goat
Breeding animals were selected in the 1960s from six counties in the eastern mountain area of Liaoning Province. The herd has been continually developed since then, and used to improve the cashmere herd throughout China. The Liaoning goat is mainly found in the Buyun mountains in the Liaodong Peninsula. The breed was formally named the Liaoning cashmere goat by the Chinese Ministry of Agriculture in 1984. By 1994, selected Liaoning does were producing 326 g of down at 15 μm diameter. The selection work emphasizes size, length of body, quantity and quality of cashmere, the ability to climb, sturdiness, conformation and growth.
Licheng Daqing goat
The Licheng Daqing goat is a dual-purpose breed from the Shanxi Province, China. The down is usually brown, but the color can vary. The average doe down yield is 115 g at 14 μm diameter.
Luliang black goat
This dual-purpose goat is found in the Lüliang area; it produces a small quantity of dark soft down.
Tibetan Plateau goat
In 1994, there were more than 7 million Tibetan Plateau and Valley goats in Tibetan Plateau regions of People's Republic of China. Five million were in Tibet Autonomous Region, 1 million in Tibetan Autonomous Prefectures in Sichuan, half a million in Qinghai and about 100,000 in Gansu. There are also a small number of Tibetan goats in India and Nepal. The Tibetan plateau goats are kept for down production. In 1994, an adult doe's average down production was 197 g, while the average adult buck's down production was 261 g.
Wuzhumuqin
This Inner Mongolian strain is a new breed, recognized in 1994, and is distributed mainly in Xilingele Meng. The development of the breed started in 1980. By 1994, the breed had 372 nucleus herds and 681 selection herds. The bucks have thick, long horns, and 85% of the does are horned. Ninety-eight percent of the herd is white. The developers of the breed claim the lustre of the fleece is better than that of the Liaoning goat. The average production of a Wuzhumuqin adult doe in 1994 was 285 g at 15.6 μm diameter; the average down length was 46 mm.
Zalaa Jinst White goat
The Zalaa Jinst White goat is the only entirely white breed of cashmere goat in Mongolia recognized by the Mongolian Wool & Cashmere Association. It is found in the southwest region of the Gobi Desert, where it has adapted well to nomadic herding. The average cashmere production for adult males is 380 grams and for adult females 290 grams, with fibers averaging 16.0–16.5 microns in diameter.
Zhongwei cashmere goats
The Zhongwei goat originated in the semidesert and desert area around Zhongwei in Ningxia and Gansu Provinces in China, and are famous for their kid fur and cashmere production. The average fiber production for does is 216 g at 15 μm diameter.
| Biology and health sciences | Goats | Animals |
3801543 | https://en.wikipedia.org/wiki/X-ray%20telescope | X-ray telescope | An X-ray telescope (XRT) is a telescope that is designed to observe remote objects in the X-ray spectrum. X-rays are absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites.
The basic elements of the telescope are the optics (focusing or collimating), that collects the radiation entering the telescope, and the detector, on which the radiation is collected and measured. A variety of different designs and technologies have been used for these elements.
Many X-ray telescopes on satellites are compounded of multiple small detector-telescope systems whose capabilities add up or complement each other, and additional fixed or removable elements (filters, spectrometers) that add functionalities to the instrument.
History of X-ray telescopes
X-ray telescopes were first used for astronomy to observe the Sun, which was the only source in the sky bright enough in X-rays for those early telescopes to detect. Because the Sun is so bright in X-rays, early X-ray telescopes could use a small focusing element and the X-rays would be detected with photographic film. The first X-ray picture of the Sun from a rocket-borne telescope was taken by John V. Lindsay of the NASA Goddard Space Flight Center and collaborators in 1963. The first orbiting X-ray telescope flew on Skylab in the early 1970s and recorded more than 35,000 full-disk images of the Sun over a 9-month period.
The first specialised X-ray satellite, Uhuru, was launched by NASA in 1970. It detected 339 X-ray sources in its 2.5-year lifetime.
The Einstein Observatory, launched in 1978, was the first imaging X-ray observatory. It obtained high-resolution X-ray images in the energy range from 0.1 to 4 keV of stars of all types, supernova remnants, galaxies, and clusters of galaxies. Other large projects were ROSAT (active from 1990 to 1999), a heavy X-ray space observatory with focusing X-ray optics, and the European EXOSAT.
The Chandra X-ray Observatory was launched by NASA in 1999 and has operated for more than 25 years in a highly elliptical orbit, returning thousands of 0.5 arc-second images and high-resolution spectra of all kinds of astronomical objects in the energy range from 0.5 to 8.0 keV. Chandra's resolution is about 50 times better than that of ROSAT.
Active X-ray observatory satellites
Satellites in use today include ESA's XMM-Newton observatory (low to mid energy X-rays 0.1-15 keV), NASA's Swift observatory, Chandra observatory and IXPE telescope. JAXA has launched the XRISM telescope, while ISRO has launched Aditya-L1 and XPoSat.
The GOES 14 spacecraft carries on board a Solar X-ray Imager to monitor the Sun's X-rays for the early detection of solar flares, coronal mass ejections, and other phenomena that impact the geospace environment. It was launched into orbit on June 27, 2009, at 22:51 GMT from Space Launch Complex 37B at the Cape Canaveral Air Force Station.
The Chinese Hard X-ray Modulation Telescope was launched on June 15, 2017 to observe black holes, neutron stars, active galactic nuclei and other phenomena based on their X-ray and gamma-ray emissions.
The Lobster-Eye X-ray Satellite was launched on 25 July 2020 by CNSA, making it the first in-orbit telescope to utilize lobster-eye imaging technology with an ultra-large field of view to search for dark matter signals in the X-ray energy range. The Lobster Eye Imager for Astronomy was launched on 27 July 2022 as a technology demonstrator for the Einstein Probe, launched on January 9, 2024, and dedicated to time-domain high-energy astrophysics. The Space Variable Objects Monitor observatory, launched on 22 June 2024, is directed towards studying the explosions of massive stars and the analysis of gamma-ray bursts.
A soft X-ray solar imaging telescope is on board the GOES-13 weather satellite launched using a Delta IV from Cape Canaveral LC37B on May 24, 2006. However, there have been no GOES 13 SXI images since December 2006.
The Russian-German Spektr-RG carries the eROSITA telescope array as well as the ART-XC telescope. It was launched by Roscosmos on 13 July 2019 from Baikonur and began collecting data in October 2019.
Optics
The most common methods used in X-ray optics are grazing incidence mirrors and collimated apertures. Only three geometries that use grazing incidence reflection of X-rays to produce X-ray images are known: the Wolter system, the Kirkpatrick–Baez system, and lobster-eye optics.
Focusing mirrors
A simple parabolic mirror was originally proposed in 1960 by Riccardo Giacconi and Bruno Rossi, the founders of extrasolar X-ray astronomy. This type of mirror is often used as the primary reflector in an optical telescope. However, images of off-axis objects would be severely blurred. The German physicist Hans Wolter showed in 1952 that the reflection off a combination of two elements, a paraboloid followed by a hyperboloid, would work far better for X-ray astronomy applications. Wolter described three different imaging configurations, the Types I, II, and III. The design most commonly used by X-ray astronomers is the Type I since it has the simplest mechanical configuration. In addition, the Type I design offers the possibility of nesting several telescopes inside one another, thereby increasing the useful reflecting area. The Wolter Type II is useful only as a narrow-field imager or as the optic for a dispersive spectrometer. The Wolter Type III has never been employed for X-ray astronomy.
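The benefit of nesting can be made concrete with a rough geometric estimate: a Wolter Type I shell of radius r and mirror length L presents a projected annulus of about 2πrL·sin θ to on-axis X-rays arriving at grazing angle θ, so many thin nested shells multiply the collecting area. The shell radii, mirror length and grazing angle in the sketch below are invented for illustration.

import math

# Rough geometric collecting area of nested Wolter-I shells (illustrative numbers).
def shell_area_cm2(radius_cm, length_cm, graze_deg):
    # Projected annulus seen by on-axis rays at a small grazing angle.
    return 2.0 * math.pi * radius_cm * length_cm * math.sin(math.radians(graze_deg))

radii = range(10, 60, 5)   # ten nested shells, radii 10..55 cm
total = sum(shell_area_cm2(r, 60.0, 0.5) for r in radii)
print(f"total geometric area of {len(list(radii))} shells: ~{total:.0f} cm^2")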
Compared with collimated optics, focusing optics allow:
high-resolution imaging
high telescope sensitivity: since the radiation is focused onto a small area, the signal-to-noise ratio is much higher for this kind of instrument.
The mirrors can be made of ceramic or metal foil coated with a thin layer of a reflective material (typically gold or iridium). Mirrors based on this construction work on the basis of total reflection of light at grazing incidence.
This technology is limited in energy range by the inverse relation between the critical angle for total reflection and the radiation energy. In the early 2000s, the limit with the Chandra and XMM-Newton X-ray observatories was about 15 kilo-electronvolts (keV). Using new multi-layer coated mirrors, the X-ray mirror for the NuSTAR telescope pushed this up to 79 keV. To reflect at this level, glass layers were multi-coated with tungsten (W)/silicon (Si) or platinum (Pt)/silicon carbide (SiC).
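A common rule of thumb makes this inverse relation concrete: for a single-layer metal coating, the critical grazing angle in milliradians is roughly 20√ρ/E, with the coating density ρ in g/cm³ and the photon energy E in keV. The sketch below applies this approximation to a gold coating; it is a back-of-envelope estimate only, and exact values require the material's measured optical constants.

import math

# Rule-of-thumb critical angle for grazing-incidence total reflection:
# theta_c [mrad] ~ 20 * sqrt(rho [g/cm^3]) / E [keV]  (rough approximation).
def critical_angle_mrad(rho_g_cm3, energy_kev):
    return 20.0 * math.sqrt(rho_g_cm3) / energy_kev

for e_kev in (1.0, 8.0, 15.0):
    theta = critical_angle_mrad(19.3, e_kev)   # gold coating, rho = 19.3 g/cm^3
    print(f"{e_kev:4.0f} keV: theta_c ~ {theta:5.1f} mrad "
          f"({math.degrees(theta / 1000.0):.2f} deg)")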
Collimating optics
While earlier X-ray telescopes used simple collimating techniques (e.g. rotating collimators, wire collimators), the technology most used today employs coded aperture masks. This technique places a flat patterned grille in front of the detector. The design gives results that are less sensitive than focusing optics, and the imaging quality and the identification of source positions are much poorer, but it offers a larger field of view and can be employed at higher energies, where grazing incidence optics become ineffective. The imaging is not direct; rather, the image is reconstructed by post-processing of the signal.
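The post-processing step can be illustrated with a toy one-dimensional model: the detector records the sky convolved with the mask pattern, and cross-correlating the detector counts with a balanced decoding array recovers peaks at the source positions. Real instruments use carefully designed two-dimensional patterns (such as URA/MURA masks) and calibrated decoding; the random mask and the source positions below are purely illustrative.

import numpy as np

# Toy 1-D coded-aperture imaging: encode by circular convolution with the
# mask, decode by cross-correlation with a balanced version of the mask.
rng = np.random.default_rng(0)
n = 503
mask = rng.integers(0, 2, n).astype(float)    # random open/closed mask (not a real URA)
sky = np.zeros(n)
sky[100], sky[350] = 100.0, 60.0              # two hypothetical point sources

detector = np.real(np.fft.ifft(np.fft.fft(sky) * np.fft.fft(mask)))    # shadowgram
decoder = 2.0 * mask - 1.0                    # balanced decoding array (+1 open, -1 closed)
image = np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(decoder))))
print("brightest reconstructed pixels:", np.sort(np.argsort(image)[-2:]))  # ~ [100 350]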
Detection and imaging of X-rays
X-rays have a huge span in wavelength (~8 nm – 8 pm), frequency (~50 PHz – 50 EHz) and energy (~0.12 – 120 keV). In terms of temperature, 1 eV corresponds to 11,604 K; thus X-rays (0.12 to 120 keV) correspond to 1.39 × 10^6 to 1.39 × 10^9 K. From 10 to 0.1 nanometers (nm) (about 0.12 to 12 keV) they are classified as soft X-rays, and from 0.1 nm to 0.01 nm (about 12 to 120 keV) as hard X-rays.
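These figures follow from E = hc/λ and T = E/k_B. The short helper below, using rounded CODATA constants, reproduces the boundary values quoted above.

# Photon unit conversions: E = h*c/lambda and T = E/k_B.
HC_KEV_NM = 1.2398419873    # h*c expressed in keV*nm
KB_EV_PER_K = 8.617333e-5   # Boltzmann constant in eV/K

def wavelength_nm(energy_kev):
    return HC_KEV_NM / energy_kev

def temperature_k(energy_kev):
    return energy_kev * 1.0e3 / KB_EV_PER_K

for e_kev in (0.12, 12.0, 120.0):
    print(f"{e_kev:7.2f} keV -> {wavelength_nm(e_kev):7.3f} nm, "
          f"~{temperature_k(e_kev):.2e} K")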
Closer to the visible range of the electromagnetic spectrum is the ultraviolet. The draft ISO standard on determining solar irradiances (ISO-DIS-21348) describes the ultraviolet as ranging from ~10 nm to ~400 nm. That portion closest to X-rays is often referred to as the "extreme ultraviolet" (EUV or XUV). When an EUV photon is absorbed, photoelectrons and secondary electrons are generated by ionization, much like what happens when X-rays or electron beams are absorbed by matter.
The distinction between X-rays and gamma rays has changed in recent decades. Originally, the electromagnetic radiation emitted by X-ray tubes had a longer wavelength than the radiation emitted by radioactive nuclei (gamma rays). So older literature distinguished between X- and gamma radiation on the basis of wavelength, with radiation shorter than some arbitrary wavelength, such as 10−11 m, defined as gamma rays. However, as shorter wavelength continuous spectrum "X-ray" sources such as linear accelerators and longer wavelength "gamma ray" emitters were discovered, the wavelength bands largely overlapped. The two types of radiation are now usually distinguished by their origin: X-rays are emitted by electrons outside the nucleus, while gamma rays are emitted by the nucleus.
Although the more energetic X-rays, photons with an energy greater than 30 keV (4,800 aJ), can penetrate the Earth's atmosphere for at least a few meters, the atmosphere is thick enough that virtually none are able to penetrate from outer space all the way to the Earth's surface. X-rays in the 0.5 to 5 keV (80 to 800 aJ) range, where most celestial sources give off the bulk of their energy, can be stopped by a few sheets of paper; 90% of the photons in a beam of 3 keV (480 aJ) X-rays are absorbed by traveling through just 10 cm of air.
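The quoted air-absorption figure translates directly into the exponential Beer–Lambert law: if 90% of 3 keV photons are absorbed over 10 cm, the implied linear attenuation coefficient is ln(10)/10 per centimetre. The snippet below is a back-of-envelope check built only on that figure.

import math

# Beer-Lambert attenuation: I(x) = I0 * exp(-mu * x).
MU_3KEV_AIR = math.log(10.0) / 10.0   # cm^-1, implied by "90% absorbed in 10 cm"

def transmitted_fraction(path_cm, mu=MU_3KEV_AIR):
    return math.exp(-mu * path_cm)

for d_cm in (1.0, 10.0, 100.0):
    print(f"{d_cm:6.0f} cm of air: {100.0 * transmitted_fraction(d_cm):10.6f}% transmitted")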
Proportional counters
A proportional counter is a type of gaseous ionization detector that counts particles of ionizing radiation and measures their energy. It works on the same principle as the Geiger-Müller counter, but uses a lower operating voltage. All X-ray proportional counters consist of a windowed gas cell. Often this cell is subdivided into a number of low- and high-electric field regions by some arrangement of electrodes.
Proportional counters were used on EXOSAT, on the US portion of the Apollo–Soyuz mission (July 1975), and on the French TOURNESOL instrument.
X-ray monitor
Monitoring generally means to be aware of the state of a system. A device that displays or sends a signal for displaying X-ray output from an X-ray generating source so as to be aware of the state of the source is referred to as an X-ray monitor in space applications.
On Apollo 15 in orbit above the Moon, for example, an X-ray monitor was used to follow the possible variation in solar X-ray intensity and spectral shape while mapping the lunar surface with respect to its chemical composition due to the production of secondary X-rays.
The X-ray monitor of Solwind, designated NRL-608 or XMON, was a collaboration between the Naval Research Laboratory and Los Alamos National Laboratory. The monitor consisted of 2 collimated argon proportional counters.
Scintillation detector
A scintillator is a material which exhibits the property of luminescence when excited by ionizing radiation. Luminescent materials, when struck by an incoming particle, such as an X-ray photon, absorb its energy and scintillate, i.e. reemit the absorbed energy in the form of a small flash of light, typically in the visible range.
Scintillation X-ray detectors were used on Vela 5A and its twin Vela 5B; the X-ray telescope on board OSO 4 consisted of a single thin NaI(Tl) scintillation crystal plus phototube assembly, enclosed in a CsI(Tl) anti-coincidence shield. OSO 5 carried a CsI crystal scintillator. The central crystal was 0.635 cm thick, had a sensitive area of 70 cm2, and was viewed from behind by a pair of photomultiplier tubes.
The PHEBUS instrument had two independent detectors, each consisting of a bismuth germanate (BGO) crystal 78 mm in diameter by 120 mm thick. The KONUS-B instrument consisted of seven detectors distributed around the spacecraft that responded to photons of 10 keV to 8 MeV energy. They consisted of NaI(Tl) scintillator crystals 200 mm in diameter by 50 mm thick behind a Be entrance window. Kvant-1 carried the HEXE, or High Energy X-ray Experiment, which employed a phoswich of sodium iodide and caesium iodide.
Modulation collimator
In electronics, modulation is the process of varying one waveform in relation to another waveform. With a 'modulation collimator' the amplitude (intensity) of the incoming X-rays is reduced by the presence of two or more 'diffraction gratings' of parallel wires that block or greatly reduce that portion of the signal incident upon the wires.
An X-ray collimator is a device that filters a stream of X-rays so that only those traveling parallel to a specified direction are allowed through.
Minoru Oda, President of Tokyo University of Information Sciences, invented the modulation collimator, first used to identify the counterpart of Sco X-1 in 1966, which led to the most accurate positions for X-ray sources available, prior to the launch of X-ray imaging telescopes.
SAS 3 carried modulation collimators (2–11 keV) and slat and tube collimators (1 to 60 keV).
On board the Granat Observatory were four WATCH instruments that could localize bright sources in the 6 to 180 keV range to within 0.5° using a Rotation Modulation Collimator. Taken together, the instruments' three fields of view covered approximately 75% of the sky.
The Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), Explorer 81, images solar flares from soft X-rays to gamma rays (~3 keV to ~20 MeV). Its imaging capability is based on a Fourier-transform technique using a set of 9 Rotational Modulation Collimators.
X-ray spectrometer
OSO 8 had on board a Graphite Crystal X-ray Spectrometer, with energy range of 2-8 keV, FOV 3°.
The Granat ART-S X-ray spectrometer covered the energy range 3 to 100 keV, with a FOV of 2° × 2°. The instrument consisted of four detectors based on spectroscopic MWPCs, giving an effective area of 2,400 cm2 at 10 keV and 800 cm2 at 100 keV. The time resolution was 200 microseconds.
The X-ray spectrometer aboard ISEE-3 was designed to study both solar flares and cosmic gamma-ray bursts over the energy range 5-228 keV. The experiment consisted of 2 cylindrical X-ray detectors: a Xenon filled proportional counter covering 5-14 keV, and a NaI(Tl) scintillator covering 12-1250 keV.
CCDs
Most existing X-ray telescopes use CCD detectors, similar to those in visible-light cameras. In visible light, a single photon can produce a single electron of charge in a pixel, and an image is built up by accumulating many such charges from many photons during the exposure time. When an X-ray photon hits a CCD, it produces enough charge (hundreds to thousands of electrons, proportional to its energy) that the individual X-rays have their energies measured on read-out.
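The proportionality can be quantified with the standard figure for silicon, in which one electron–hole pair is created per roughly 3.65 eV of deposited energy; the charge collected in a pixel therefore encodes the photon energy. A minimal sketch:

# Charge generated by a single X-ray photon in a silicon CCD.
W_SILICON_EV = 3.65   # mean energy per electron-hole pair in silicon, eV

def electrons_per_photon(energy_kev):
    return energy_kev * 1.0e3 / W_SILICON_EV

for e_kev in (0.5, 3.0, 8.0):
    print(f"{e_kev:4.1f} keV photon -> ~{electrons_per_photon(e_kev):5.0f} electrons")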
Microcalorimeters
Microcalorimeters can only detect X-rays one photon at a time (but can measure the energy of each).
Transition edge sensors
Transition-edge sensors are the next step in microcalorimetry. In essence, they are superconducting metals kept as close as possible to their transition temperature, the temperature at which these metals become superconductors and their resistance drops to zero. These transition temperatures are usually just a few degrees above absolute zero (usually less than 10 K).
| Technology | Telescope | null |
3804557 | https://en.wikipedia.org/wiki/Inverse%20hyperbolic%20functions | Inverse hyperbolic functions | In mathematics, the inverse hyperbolic functions are inverses of the hyperbolic functions, analogous to the inverse circular functions. There are six in common use: inverse hyperbolic sine, inverse hyperbolic cosine, inverse hyperbolic tangent, inverse hyperbolic cosecant, inverse hyperbolic secant, and inverse hyperbolic cotangent. They are commonly denoted by the symbols for the hyperbolic functions, prefixed with arc- or ar-, or with a superscript −1 (as in sinh⁻¹).
For a given value of a hyperbolic function, the inverse hyperbolic function provides the corresponding hyperbolic angle measure, for example arsinh(sinh a) = a and sinh(arsinh x) = x. Hyperbolic angle measure is the length of an arc of the unit hyperbola x² − y² = 1 as measured in the Lorentzian plane (not the length of a hyperbolic arc in the Euclidean plane), and twice the area of the corresponding hyperbolic sector. This is analogous to the way circular angle measure is the arc length of an arc of the unit circle in the Euclidean plane, or twice the area of the corresponding circular sector. Alternately, hyperbolic angle is the area of a sector of the hyperbola xy = 1. Some authors call the inverse hyperbolic functions hyperbolic area functions.
Hyperbolic functions occur in the calculation of angles and distances in hyperbolic geometry. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equation is important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity.
Notation
The earliest and most widely adopted symbols use the prefix arc- (that is: arcsinh, arccosh, arctanh, arccsch, arcsech, arccoth), by analogy with the inverse circular functions (arcsin, etc.). For a unit hyperbola ("Lorentzian circle") in the Lorentzian plane (the pseudo-Euclidean plane of signature (1, 1)) or in the hyperbolic number plane, the hyperbolic angle measure (argument to the hyperbolic functions) is indeed the arc length of a hyperbolic arc.
Also common is the notation sinh⁻¹, cosh⁻¹, etc., although care must be taken to avoid misinterpretations of the superscript −1 as an exponent. The standard convention is that sinh⁻¹ x means the inverse function, while (sinh x)⁻¹ means the reciprocal 1/sinh x. Especially inconsistent is the conventional use of positive integer superscripts to indicate an exponent rather than function composition, e.g. sinh² x conventionally means (sinh x)² and not sinh(sinh x).
Because the argument of hyperbolic functions is not the arc length of a hyperbolic arc in the Euclidean plane, some authors have condemned the prefix arc-, arguing that the prefix ar- (for area) or arg- (for argument) should be preferred. Following this recommendation, the ISO 80000-2 standard abbreviations use the prefix ar- (that is: arsinh, arcosh, artanh, arcsch, arsech, arcoth).
In computer programming languages, inverse circular and hyperbolic functions are often named with the shorter prefix a- (asinh, etc.).
This article will consistently adopt the prefix ar- for convenience.
Definitions in terms of logarithms
Since the hyperbolic functions are quadratic rational functions of the exponential function e^x, they may be solved using the quadratic formula and then written in terms of the natural logarithm. For real arguments this gives:

arsinh x = ln(x + √(x² + 1)), for all real x
arcosh x = ln(x + √(x² − 1)), for x ≥ 1
artanh x = ½ ln((1 + x)/(1 − x)), for |x| < 1
arcoth x = ½ ln((x + 1)/(x − 1)), for |x| > 1
arsech x = ln((1 + √(1 − x²))/x), for 0 < x ≤ 1
arcsch x = ln(1/x + √(1/x² + 1)), for x ≠ 0
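A quick numerical check of these logarithmic forms against Python's standard library, whose math.asinh, math.acosh and math.atanh use the same real-valued conventions:

import math

# Verify the logarithmic expressions against the built-in inverse hyperbolics.
for x in (0.5, 2.0, 10.0):
    assert math.isclose(math.asinh(x), math.log(x + math.sqrt(x * x + 1)))
for x in (1.0, 1.5, 10.0):
    assert math.isclose(math.acosh(x), math.log(x + math.sqrt(x * x - 1)))
for x in (-0.9, 0.3, 0.9):
    assert math.isclose(math.atanh(x), 0.5 * math.log((1 + x) / (1 - x)))
print("logarithmic forms agree with math.asinh / math.acosh / math.atanh")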
For complex arguments, the inverse circular and hyperbolic functions, the square root, and the natural logarithm are all multi-valued functions.
Addition formulae
For suitable real arguments, for example:

arsinh u ± arsinh v = arsinh(u√(1 + v²) ± v√(1 + u²))
arcosh u ± arcosh v = arcosh(uv ± √((u² − 1)(v² − 1)))
artanh u ± artanh v = artanh((u ± v)/(1 ± uv))
Other identities
Composition of hyperbolic and inverse hyperbolic functions
Composition of inverse hyperbolic and circular functions
Conversions
Derivatives
These formulas can be derived in terms of the derivatives of hyperbolic functions. For example, if θ = arsinh x, then x = sinh θ and dx/dθ = cosh θ = √(1 + x²), so d(arsinh x)/dx = 1/√(1 + x²).
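A central-difference sanity check of this derivative, using only the standard library:

import math

# Check d/dx arsinh(x) = 1/sqrt(1 + x^2) by central differences.
def central_diff(f, x, h=1.0e-6):
    return (f(x + h) - f(x - h)) / (2.0 * h)

for x in (0.0, 1.0, 5.0):
    exact = 1.0 / math.sqrt(1.0 + x * x)
    assert math.isclose(central_diff(math.asinh, x), exact, rel_tol=1.0e-6)
print("derivative of arsinh verified numerically")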
Series expansions
Expansion series can be obtained for the above functions; for example:

arsinh x = x − x³/6 + 3x⁵/40 − 15x⁷/336 + ⋯ (for |x| ≤ 1)
artanh x = x + x³/3 + x⁵/5 + x⁷/7 + ⋯ (for |x| < 1)
An asymptotic expansion for arsinh, valid for large real x, is given by

arsinh x = ln(2x) + Σ_{n=1}^∞ (−1)^(n−1) · ((2n − 1)!! / (2n · (2n)!!)) · x^(−2n) = ln(2x) + 1/(4x²) − 3/(32x⁴) + ⋯
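A numerical check that the two leading terms behave as expected, with the remaining error shrinking like x⁻⁴:

import math

# Compare arsinh(x) with the two leading asymptotic terms ln(2x) + 1/(4x^2).
for x in (10.0, 100.0, 1000.0):
    approx = math.log(2.0 * x) + 1.0 / (4.0 * x * x)
    print(f"x = {x:6.0f}: error = {math.asinh(x) - approx:.3e}")   # ~ -3/(32 x^4)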
Principal values in the complex plane
As functions of a complex variable, inverse hyperbolic functions are multivalued functions that are analytic except at a finite number of points. For such a function, it is common to define a principal value, which is a single valued analytic function which coincides with one specific branch of the multivalued function, over a domain consisting of the complex plane in which a finite number of arcs (usually half lines or line segments) have been removed. These arcs are called branch cuts. The principal value of the multifunction is chosen at a particular point and values elsewhere in the domain of definition are defined to agree with those found by analytic continuation.
For example, for the square root, the principal value is defined as the square root that has a positive real part. This defines a single-valued analytic function, which is defined everywhere, except for non-positive real values of the variable (where the two square roots have a zero real part). This principal value of the square root function is denoted √ in what follows. Similarly, the principal value of the logarithm, denoted Log in what follows, is defined as the value for which the imaginary part has the smallest absolute value. It is defined everywhere except for non-positive real values of the variable, for which two different values of the logarithm reach the minimum.
For all inverse hyperbolic functions, the principal value may be defined in terms of the principal values of the square root and the logarithm function. However, in some cases, the formulas of the preceding section do not give a correct principal value, as they give a domain of definition which is too small and, in one case, non-connected.
Principal value of the inverse hyperbolic sine
The principal value of the inverse hyperbolic sine is given by
\operatorname{arsinh} z = \operatorname{Log}\bigl(z + \sqrt{z^2 + 1}\bigr).
The argument of the square root is a non-positive real number if and only if z belongs to one of the intervals [i, +i∞) and (−i∞, −i] of the imaginary axis. If the argument of the logarithm is real, then it is positive. Thus this formula defines a principal value for arsinh, with branch cuts [i, +i∞) and (−i∞, −i]. This is optimal, as the branch cuts must connect the singular points i and −i to infinity.
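Because cmath.sqrt and cmath.log return exactly these principal values, the formula can be transcribed directly; a minimal sketch (the helper name arsinh is ours), compared with the library's cmath.asinh away from the branch cuts:

import cmath

def arsinh(z: complex) -> complex:
    # Principal value from arsinh z = Log(z + sqrt(z^2 + 1)).
    return cmath.log(z + cmath.sqrt(z * z + 1))

# Agreement with the library implementation at points off the
# branch cuts [i, +i*inf) and (-i*inf, -i]:
for z in (0.5 + 0.0j, 1 + 2j, -3 + 0.25j, 0.2 - 0.9j):
    assert cmath.isclose(arsinh(z), cmath.asinh(z), rel_tol=1e-12)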
Principal value of the inverse hyperbolic cosine
The formula for the inverse hyperbolic cosine given in § Definitions in terms of logarithms is not convenient, since, similar to the principal values of the logarithm and the square root, the principal value of arcosh would not be defined for imaginary z. Thus the square root has to be factorized, leading to
\operatorname{arcosh} z = \operatorname{Log}\bigl(z + \sqrt{z + 1}\,\sqrt{z - 1}\bigr).
The principal values of the square roots are both defined, except if z belongs to the real interval (−∞, 1]. If the argument of the logarithm is real, then z is real and the argument of the logarithm is positive. Thus the above formula defines a principal value of arcosh outside the real interval (−∞, 1], which is thus the unique branch cut.
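A short sketch of why the factorization matters (helper names ours): near the cut, the unfactorized square root can select the wrong branch, while the factorized form agrees with Python's cmath.acosh:

import cmath

def arcosh_factored(z: complex) -> complex:
    # arcosh z = Log(z + sqrt(z + 1) * sqrt(z - 1))
    return cmath.log(z + cmath.sqrt(z + 1) * cmath.sqrt(z - 1))

def arcosh_unfactored(z: complex) -> complex:
    # Naive transcription Log(z + sqrt(z^2 - 1)).
    return cmath.log(z + cmath.sqrt(z * z - 1))

z = -2 + 0.1j   # close to the branch cut (-inf, 1]
assert cmath.isclose(arcosh_factored(z), cmath.acosh(z), rel_tol=1e-12)
print(arcosh_unfactored(z))   # lands on the opposite branch here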
Principal values of the inverse hyperbolic tangent and cotangent
The formulas given in § Definitions in terms of logarithms suggest
\operatorname{artanh} z = \tfrac{1}{2}\operatorname{Log}\frac{1 + z}{1 - z} \qquad \operatorname{arcoth} z = \tfrac{1}{2}\operatorname{Log}\frac{z + 1}{z - 1}
for the definition of the principal values of the inverse hyperbolic tangent and cotangent. In these formulas, the argument of the logarithm is real if and only if z is real. For artanh, this argument is in the real interval (−∞, 0] if z belongs either to (−∞, −1] or to [1, +∞). For arcoth, the argument of the logarithm is in (−∞, 0] if and only if z belongs to the real interval [−1, 1].
Therefore, these formulas define convenient principal values, for which the branch cuts are (−∞, −1] and [1, +∞) for the inverse hyperbolic tangent, and [−1, 1] for the inverse hyperbolic cotangent.
In view of a better numerical evaluation near the branch cuts, some authors use alternative definitions of the principal values, although the second one introduces a removable singularity at z = 0. The two definitions of artanh differ for real values of z with z > 1. The ones of arcoth differ for real values of z with 0 < z < 1.
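One standard pair of such alternative definitions, consistent with the properties just described, is:

\operatorname{artanh} z = \tfrac{1}{2}\bigl[\operatorname{Log}(1 + z) - \operatorname{Log}(1 - z)\bigr]
\operatorname{arcoth} z = \tfrac{1}{2}\Bigl[\operatorname{Log}\Bigl(1 + \frac{1}{z}\Bigr) - \operatorname{Log}\Bigl(1 - \frac{1}{z}\Bigr)\Bigr]

Here the arcoth formula is the one with the removable singularity at z = 0.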
Principal value of the inverse hyperbolic cosecant
For the inverse hyperbolic cosecant, the principal value is defined as
\operatorname{arcsch} z = \operatorname{Log}\Bigl(\frac{1}{z} + \sqrt{\frac{1}{z^2} + 1}\Bigr).
It is defined except when the arguments of the logarithm and the square root are non-positive real numbers. The principal value of the square root is thus defined outside the interval [−i, i] of the imaginary line. If the argument of the logarithm is real, then z is a non-zero real number, and this implies that the argument of the logarithm is positive.
Thus, the principal value is defined by the above formula outside the branch cut, consisting of the interval [−i, i] of the imaginary line.
(At z = 0, there is a singular point that is included in the branch cut.)
Principal value of the inverse hyperbolic secant
Here, as in the case of the inverse hyperbolic cosine, we have to factorize the square root. This gives the principal value
\operatorname{arsech} z = \operatorname{Log}\Bigl(\frac{1}{z} + \sqrt{\frac{1}{z} + 1}\,\sqrt{\frac{1}{z} - 1}\Bigr).
If the argument of a square root is real, then z is real, and it follows that both principal values of square roots are defined, except if z is real and belongs to one of the intervals (−∞, 0] and [1, +∞). If the argument of the logarithm is real and negative, then z is also real and negative. It follows that the principal value of arsech is well defined, by the above formula, outside two branch cuts, the real intervals (−∞, 0] and [1, +∞).
For z = 0, there is a singular point that is included in one of the branch cuts.
Graphical representation
In the following graphical representation of the principal values of the inverse hyperbolic functions, the branch cuts appear as discontinuities of the color. The fact that the whole branch cuts appear as discontinuities shows that these principal values may not be extended into analytic functions defined over larger domains. In other words, the above defined branch cuts are minimal.
| Mathematics | Specific functions | null |
3808275 | https://en.wikipedia.org/wiki/Ornithocheirus | Ornithocheirus | Ornithocheirus (from Ancient Greek "ὄρνις", meaning bird, and "χεῖρ", meaning hand) is a pterosaur genus known from fragmentary fossil remains uncovered from sediments in the United Kingdom and possibly Morocco.
Several species have been referred to the genus, most of which are now considered dubious species or members of different genera, and the genus is now often considered to include only the type species, Ornithocheirus simus. Species have been referred to Ornithocheirus from the mid-Cretaceous period of North America, Europe and South America, but O. simus is known mostly from the United Kingdom, though a specimen referred to O. cf. simus is also known from the Late Cretaceous Kem Kem Group of Morocco.
Because O. simus was originally named based on poorly preserved fossil material, the genus Ornithocheirus has suffered enduring problems of zoological nomenclature.
Fossil remains of Ornithocheirus have been recovered mainly from the Cambridge Greensand of England, dating to the beginning of the Albian stage of the early Cretaceous period, about 110 million years ago. Additional fossils from the Santana Formation of Brazil are sometimes classified as species of Ornithocheirus, but have also been placed in their own genera, most notably Tropeognathus.
Discovery and naming
During the 19th century, many fragmentary pterosaur fossils were found in the Cambridge Greensand of England, a layer from the early Cretaceous that had originated as a sandy seabed. Decomposing pterosaur cadavers, floating on the sea surface, had gradually lost individual bones that sank to the bottom of the sea. Water currents then moved the bones around, eroding and polishing them, until they were at last covered by more sand and fossilised. Even the largest of these remains were damaged and difficult to interpret. They had been assigned to the genus Pterodactylus, as was common for any pterosaur species described in the early and middle 19th century.
The young researcher Harry Govier Seeley was commissioned to bring order to the pterosaur collection of the Sedgwick Museum in Cambridge. He soon concluded that it was best to create a new genus for the Cambridge Greensand material, which he named Ornithocheirus (meaning "bird hand"), as he in this period still considered pterosaurs to be the direct ancestors of birds and assumed the hand of the genus to represent a transitional stage in the evolution towards the bird hand. To distinguish the best pieces in the collection, and partly because they had already been described as species by other scientists, Seeley between 1869 and 1870 gave each of them a separate species name: O. simus, O. woodwardi, O. oxyrhinus, O. carteri, O. platyrhinus, O. sedgwickii, O. crassidens, O. capito, O. eurygnathus, O. reedi, O. cuvieri, O. scaphorhynchus, O. brachyrhinus, O. colorhinus, O. dentatus, O. denticulatus, O. enchorhynchus, O. xyphorhynchus, O. fittoni, O. nasutus, O. polyodon, O. tenuirostris, O. machaerorhynchus, O. platystomus, O. microdon, O. oweni and O. huxleyi, thus 27 in total. As yet Seeley did not designate a type species.
When Seeley published his conclusions in his 1870 book The Ornithosauria, this provoked a reaction from the leading British paleontologist of his day, Sir Richard Owen. Owen was not an evolutionist and therefore considered the name Ornithocheirus inappropriate; he also thought it possible to distinguish two main types within the material, based on differences in snout form and tooth position (the best fossils consisted of jaw fragments). In 1874, he created two new genera: Coloborhynchus and Criorhynchus. Coloborhynchus (meaning "maimed beak") comprised a new type species, Coloborhynchus clavirostris, as well as two other species reassigned from Ornithocheirus: C. sedgwickii and C. cuvieri. Criorhynchus (meaning "ram beak") consisted entirely of former Ornithocheirus species: the type species, Criorhynchus simus, together with C. eurygnathus, C. capito, C. platystomus, C. crassidens and C. reedi.
Seeley did not accept Owen's position. In 1881 he designated O. simus the type species of Ornithocheirus and named a new separate species, O. bunzeli. In 1888, Edwin Tulley Newton reassigned several existing species to Ornithocheirus, creating the new combinations O. clavirostris, O. daviesii, O. sagittirostris, O. validus, O. giganteus, O. clifti, O. diomedeus, O. nobilis, O. curtus, O. macrorhinus and O. hlavaci. He also reassigned to Ornithocheirus the species O. umbrosus and O. harpyia, which Edward Drinker Cope had originally placed in the genus Pteranodon in 1872.
In 1914 Reginald Walter Hooley made a new attempt to structure the large number of species. Hooley synonymized both Owen's Criorhynchus and Coloborhynchus with Ornithocheirus, so that the only generic name he kept was Ornithocheirus. To allow for a greater differentiation, Hooley created two new genera, again based on jaw form: Lonchodectes and Amblydectes. The genus Lonchodectes (meaning "lance biter") consisted of the former species Pterodactylus compressirostris and Pterodactylus giganteus, reassigned as Lonchodectes compressirostris, the type species, and Lonchodectes giganteus; in addition, Hooley named a new separate species, L. daviesii. The genus Amblydectes (meaning "blunt biter") also consisted of three species: A. platystomus, A. crassidens and A. eurygnathus. Hooley's classification, however, was rarely applied later in the century; paleontologists were largely unaware of it and kept subsuming all the poorly preserved and confusing material under the name Ornithocheirus. In 1964, a Russian-language overview of the Pterosauria designated the species Lonchodectes compressirostris, identified in the overview as Pterodactylus compressirostris, as the type species of Ornithocheirus; this was followed by Kuhn in 1967 and Wellnhofer in 1978, yet those authors were unaware that Seeley had already designated O. simus as the type species back in 1881.
From the 1970s onwards many new pterosaur fossils were found in Brazil, in deposits slightly older than the Cambridge Greensand, about 110 million years old. Unlike the English material, these new finds included some of the best preserved large pterosaur skeletons, and several new genera were named for them, such as Anhanguera. This situation caused a renewed interest in the Ornithocheirus material and the validity of the several names based on it, since more detailed studies might establish that the Brazilian pterosaurs were actually junior synonyms of the European types. Several European researchers concluded that this was indeed the case: David Unwin revived Coloborhynchus and Michael Fastnacht revived Criorhynchus, each author ascribing Brazilian species to these genera. However, in 2000 Unwin stated that Criorhynchus could not be valid. Referring to Seeley's designation of 1881, he considered Ornithocheirus simus, holotype CAMSM B.54428, to be the type species. This also made it possible to revive Lonchodectes, using as type the former O. compressirostris, which then became L. compressirostris.
As a result, though over forty species have been named in the genus Ornithocheirus over the years, only O. simus is currently considered valid by all pterosaur researchers. The species Tropeognathus mesembrinus, named by Peter Wellnhofer in 1987, was assigned to Ornithocheirus by David Unwin in 2003, making Tropeognathus a junior synonym in his scheme. Alexander Kellner had instead treated it as Anhanguera mesembrinus in 1989, André Veldmeijer as Coloborhynchus mesembrinus in 1998, and Michael Fastnacht as Criorhynchus mesembrinus in 2001. Earlier, in 2001, Unwin had referred the "Tropeognathus" material to O. simus, in which he was followed by Veldmeijer; however, Veldmeijer rejected O. simus as the type species in favor of O. compressirostris (alternately Lonchodectes), and used the names Criorhynchus simus and Criorhynchus mesembrinus instead.
Formerly assigned species
In 2013, Rodrigues and Kellner found Ornithocheirus to be monotypic, containing only O. simus, and placed most other species in other genera, or declared them nomina dubia. They also considered O. platyrhinus a junior synonym of O. simus.
Misassigned species:
O. compressirostris (Hooley, 1914) = Pterodactylus compressirostris, Owen, 1851 [now classified as Lonchodectes]
O. crassidens (Seeley, 1870) = [now classified as Amblydectes]
O. cuvieri (Seeley, 1870) = Pterodactylus cuvieri, Bowerbank, 1851 [now classified as Cimoliopterus]
O. curtus (Hooley, 1914) = Pterodactylus curtus, Owen, 1874
O. giganteus (Owen, 1879) = Pterodactylus giganteus, Bowerbank, 1846 [now classified as Lonchodraco]
"O." hilsensis (Koken, 1883) = indeterminate Neotheropoda
O. mesembrinus (Wellnhofer, 1987) = Tropeognathus mesembrinus, Wellnhofer, 1987
O. nobilis (Owen, 1869) = Pterodactylus nobilis, Owen 1869
O. sagittirostris (Seeley, 1874) = [now classified as Serradraco]
O. simus (Owen, 1861) = [originally Pterodactylus] (type)
O. sedgwicki (Owen, 1859) = Pterodactylus sedgwickii, Owen 1859 [now classified as Aerodraco]
"O." wiedenrothi (Wild, 1990) = [now classified as Targaryendraco]
Cimoliornis diomedeus, Cretornis hlavatschi, and Palaeornis clifti, originally misidentified as birds, were once referred to Ornithocheirus, but recent papers have found them to be distinct: Cimoliornis may be closer to Azhdarchoidea, Cretornis is a valid genus of azhdarchid, and Palaeornis was shown to be a lonchodectid in 2009. O. buenzeli (Bunzel, 1871; often misspelled and incorrectly attributed as O. bunzeli, Seeley, 1881), cited in the past as evidence of Late Cretaceous ornithocheirids, has since been re-identified as a likely azhdarchid as well.
Description
The type species, Ornithocheirus simus, is known only from fragmentary jaw tips. It bore a distinctive convex "keeled" crest on its snout similar to those of its relatives. Ornithocheirus had relatively narrow jaw tips compared to the related Coloborhynchus and Tropeognathus, which had prominently expanded rosettes of teeth as well as more developed "keeled" crests. Another feature that set Ornithocheirus apart from its relatives was that its teeth were mostly vertical, rather than set at an outward-pointing angle.
It was believed in the past that Ornithocheirus was one of the largest pterosaurs to have existed, with a wingspan possibly measuring 40 feet (12.2 m). However, this is a highly exaggerated figure, as the animal's wingspan likely measured 15 to 20 feet (4.5 to 6.1 m), which would make it a medium-sized pterosaur. The related Tropeognathus had a considerably larger wingspan, making it the largest toothed pterosaur known. In 2022, Gregory S. Paul offered his own estimates of the wingspan and body mass of Ornithocheirus.
Classification
In a topology recovered by Andres and Myers in 2013, Ornithocheirus was placed within the family Ornithocheiridae in a more derived position than Tropeognathus, but in a more basal position than Coloborhynchus; the family itself was placed within the more inclusive clade Ornithocheirae. In 2019, Pêgas et al. found Ornithocheirus to be a basal member of the clade Ornithocheirae, reclassifying all other snout-crested pterosaurs into the family Anhangueridae. Their cladogram is shown on the right.
Topology 1: Andres & Myers (2013).
Topology 2: Pêgas et al. (2019).
| Biology and health sciences | Pterosaurs | Animals |
21378217 | https://en.wikipedia.org/wiki/Personality%20disorder | Personality disorder | Personality disorders (PD) are a class of mental health conditions characterized by enduring maladaptive patterns of behavior, cognition, and inner experience, exhibited across many contexts and deviating from those accepted by the culture. These patterns develop early, are inflexible, and are associated with significant distress or disability. The definitions vary by source and remain a matter of controversy. Official criteria for diagnosing personality disorders are listed in the sixth chapter of the International Classification of Diseases (ICD) and in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM).
Personality, defined psychologically, is the set of enduring behavioral and mental traits that distinguish individual humans. Hence, personality disorders are defined by experiences and behaviors that deviate from social norms and expectations. Those diagnosed with a personality disorder may experience difficulties in cognition, emotiveness, interpersonal functioning, or impulse control. For psychiatric patients, the prevalence of personality disorders is estimated between 40 and 60%. The behavior patterns of personality disorders are typically recognized by adolescence, the beginning of adulthood or sometimes even childhood and often have a pervasive negative impact on the quality of life.
Treatment for personality disorders is primarily psychotherapeutic. Evidence-based psychotherapies for personality disorders include cognitive behavioral therapy, and dialectical behavior therapy especially for borderline personality disorder. A variety of psychoanalytic approaches are also used. Personality disorders are associated with considerable stigma in popular and clinical discourse alike. Despite various methodological schemas designed to categorize personality disorders, many issues occur with classifying a personality disorder because the theory and diagnosis of such disorders occur within prevailing cultural expectations; thus, their validity is contested by some experts on the basis of inevitable subjectivity. They argue that the theory and diagnosis of personality disorders are based strictly on social, or even sociopolitical and economic considerations.
Classification and symptoms
The two latest editions of the major systems of classification are:
the International Classification of Diseases (11th revision, ICD-11) published by the World Health Organization
the Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition, DSM-5) by the American Psychiatric Association.
The ICD is a collection of alpha-numerical codes which have been assigned to all known clinical states, and provides uniform terminology for medical records, billing, statistics and research. The DSM defines psychiatric diagnoses based on research and expert consensus. Both have deliberately aligned their diagnoses to some extent, but some differences remain. For example, the ICD-10 included narcissistic personality disorder in the group of other specific personality disorders, while DSM-5 does not include enduring personality change after catastrophic experience. The ICD-10 classified the DSM-5 schizotypal personality disorder as a form of schizophrenia rather than as a personality disorder. There are accepted diagnostic issues and controversies with regard to distinguishing particular personality disorder categories from each other. Dissociative identity disorder, previously known as multiple personality as well as multiple personality disorder, has always been classified as a dissociative disorder and never was regarded as a personality disorder.
DSM-5
The most recent fifth edition of the Diagnostic and Statistical Manual of Mental Disorders stresses that a personality disorder is an enduring and inflexible pattern of long duration leading to significant distress or impairment and is not due to use of substances or another medical condition. The DSM-5 lists personality disorders in the same way as other mental disorders, rather than on a separate 'axis', as previously. DSM-5 lists ten specific personality disorders: paranoid, schizoid, schizotypal, antisocial, borderline, histrionic, narcissistic, avoidant, dependent and obsessive–compulsive personality disorder. The DSM-5 also contains three diagnoses for personality patterns not matching these ten disorders, which nevertheless exhibit characteristics of a personality disorder:
Personality change due to another medical condition – personality disturbance due to the direct effects of a medical condition
Other specified personality disorder – disorder which meets the general criteria for a personality disorder but fails to meet the criteria for a specific disorder, with the reason given
Unspecified personality disorder – disorder which meets the general criteria for a personality disorder but is not included in the DSM-5 classification
These specific personality disorders are grouped into the following three clusters based on descriptive similarities:
Cluster A (odd or eccentric disorders)
Cluster A personality disorders are often associated with schizophrenia. People with these disorders can be paranoid and have difficulty being understood by others, as they often have odd or eccentric modes of speaking and an unwillingness and inability to form and maintain close relationships.
Paranoid personality disorder – pattern of irrational suspicion and mistrust of others, interpreting motivations as malevolent
Schizoid personality disorder – cold affect and detachment from social relationships, apathy, and restricted emotional expression
Schizotypal personality disorder – pattern of extreme discomfort interacting socially, and distorted cognition and perceptions
Significant evidence suggests a small proportion of people with Cluster A personality disorders, especially schizotypal personality disorder, have the potential to develop schizophrenia and other psychotic disorders. These disorders also have a higher probability of occurring among individuals whose first-degree relatives have either schizophrenia or a Cluster A personality disorder.
Cluster B (emotional or erratic disorders)
Cluster B personality disorders are characterized by dramatic, impulsive, self-destructive, emotional behavior and sometimes incomprehensible interactions with others.
Antisocial personality disorder – pervasive pattern of disregard for and violation of the rights of others, lack of empathy, lack of remorse, callousness, bloated self-image, and manipulative and impulsive behavior
Borderline personality disorder – pervasive pattern of abrupt emotional outbursts, fear of abandonment, unhealthy attachment, altered empathy, and instability in relationships, self-image, identity, behavior and affect, often leading to self-harm and impulsivity
Histrionic personality disorder – pervasive pattern of attention-seeking behavior, including excessive emotions, an impressionistic style of speech, inappropriate seduction, exhibitionism, and egocentrism
Narcissistic personality disorder – pervasive pattern of grandiosity and superiority, haughtiness, need for admiration, deceiving others, and lack of empathy (and, in more severe expressions, criminal behavior without remorse)
Cluster C (anxious or fearful disorders)
Cluster C personality disorders are characterised by a consistent pattern of anxious thinking or behavior.
Avoidant personality disorder – pervasive feelings of social inhibition and inadequacy, and extreme sensitivity to negative evaluation
Dependent personality disorder – pervasive psychological need to be cared for by other people
Obsessive–compulsive personality disorder – rigid conformity to rules, perfectionism, and control to the point of exclusion of leisurely activities and friendships (distinct from obsessive–compulsive disorder)
DSM-5 general criteria
Both the DSM-5 and the ICD-11 diagnostic systems provide a definition and six criteria for a general personality disorder. These criteria should be met by all personality disorder cases before a more specific diagnosis can be made. The DSM-5 indicates that any personality disorder diagnosis must meet the following criteria:
There is an enduring pattern of inner experience and behavior that deviates markedly from the expectations of the individual's culture. This pattern is manifested in two (or more) of the following areas:
Cognition (i.e., ways of perceiving and interpreting self, other people, and events)
Affectivity (i.e., the range, intensity, lability, and appropriateness of emotional response)
Interpersonal functioning
Impulse control
The enduring pattern is inflexible and pervasive across a broad range of personal and social situations.
The enduring pattern leads to clinically significant distress, or impairment in functioning, in social, occupational, or other important areas.
The pattern is stable and of long duration, and its onset can be traced back at least to adolescence or early adulthood.
The enduring pattern is not better explained as a manifestation or consequence of another mental disorder.
The enduring pattern is not attributable to the physiological effects of a substance (e.g., a drug of abuse, a medication) or another medical condition (e.g., head trauma).
ICD-11
The ICD-11 personality disorder section differs substantially from the previous edition, ICD-10. All distinct PDs have been merged into one: personality disorder (6D10), which can be coded as mild (6D10.0), moderate (6D10.1), severe (6D10.2), or severity unspecified (6D10.Z). Severity is determined by the level of distress experienced and the degree of impact on day-to-day activities resulting from difficulties in aspects of self-functioning (i.e., identity and agency) and interpersonal relationships. There is also an additional category called personality difficulty (QE50.7), which can be used to describe personality traits that are problematic, but do not meet the diagnostic criteria for a PD. A personality disorder or difficulty can be specified by one or more prominent personality traits or patterns (6D11). The ICD-11 uses five trait domains:
Negative affectivity (6D11.0) – including anxiety, separation insecurity, distrustfulness, worthlessness and emotional instability
Detachment (6D11.1) – including social detachment and emotional coldness
Dissociality (6D11.2) – including grandiosity, egocentricity, deception, exploitativeness and aggression
Disinhibition (6D11.3) – including risk-taking, impulsivity, irresponsibility and distractibility
Anankastia (6D11.4) – including rigid control over behaviour and affect and rigid perfectionism
Listed directly underneath is borderline pattern (6D11.5), a category similar to borderline personality disorder. This is not a trait in itself, but a combination of the five traits in a certain severity. In the ICD-11, any personality disorder must meet all of the following criteria:
There is an enduring disturbance characterized by problems in functioning of aspects of the self (e.g., identity, self-worth, accuracy of self-view, self-direction), and/or interpersonal dysfunction (e.g., ability to develop and maintain close and mutually satisfying relationships, ability to understand others' perspectives and to manage conflict in relationships).
The disturbance has persisted over an extended period of time (e.g., lasting 2 years or more).
The disturbance is manifest in patterns of cognition, emotional experience, emotional expression, and behaviour that are maladaptive (e.g., inflexible or poorly regulated).
The disturbance is manifest across a range of personal and social situations (i.e., is not limited to specific relationships or social roles), though it may be consistently evoked by particular types of circumstances and not others.
The symptoms are not due to the direct effects of a medication or substance, including withdrawal effects, and are not better accounted for by another mental disorder, a disease of the nervous system, or another medical condition.
The disturbance is associated with substantial distress or significant impairment in personal, family, social, educational, occupational or other important areas of functioning.
Personality disorder should not be diagnosed if the patterns of behaviour characterizing the personality disturbance are developmentally appropriate (e.g., problems related to establishing an independent self-identity during adolescence) or can be explained primarily by social or cultural factors, including socio-political conflict.
ICD-10
The ICD-10 lists these general guideline criteria:
Markedly disharmonious attitudes and behavior, generally involving several areas of functioning, e.g. affectivity, arousal, impulse control, ways of perceiving and thinking, and style of relating to others;
The abnormal behavior pattern is enduring, of long standing, and not limited to episodes of mental illness;
The abnormal behavior pattern is pervasive and clearly maladaptive to a broad range of personal and social situations;
The above manifestations always appear during childhood or adolescence and continue into adulthood;
The disorder leads to considerable personal distress but this may only become apparent late in its course;
The disorder is usually, but not invariably, associated with significant problems in occupational and social performance.
The ICD adds: "For different cultures it may be necessary to develop specific sets of criteria with regard to social norms, rules and obligations." Chapter V in the ICD-10 contains the mental and behavioral disorders and includes categories of personality disorder and enduring personality changes. They are defined as ingrained patterns indicated by inflexible and disabling responses that significantly differ from how the average person in the culture perceives, thinks, and feels, particularly in relating to others.
The specific personality disorders are: paranoid, schizoid, schizotypal, dissocial, emotionally unstable (borderline type and impulsive type), histrionic, narcissistic, anankastic, anxious (avoidant) and dependent. Besides the ten specific PD, there are the following categories:
Other specific personality disorders (involves PD characterized as eccentric, haltlose, immature, narcissistic, passive–aggressive, or psychoneurotic.)
Personality disorder, unspecified (includes "character neurosis" and "pathological personality").
Mixed and other personality disorders (defined as conditions that are often troublesome but do not demonstrate the specific pattern of symptoms in the named disorders).
Enduring personality changes, not attributable to brain damage and disease (this is for conditions that seem to arise in adults without a diagnosis of personality disorder, following catastrophic or prolonged stress or other psychiatric illness).
Other personality types and Millon's description
Some types of personality disorder were in previous versions of the diagnostic manuals but have been deleted. Examples include sadistic personality disorder (pervasive pattern of cruel, demeaning, and aggressive behavior) and self-defeating personality disorder or masochistic personality disorder (characterized by behavior consequently undermining the person's pleasure and goals). They were listed in the DSM-III-R appendix as "Proposed diagnostic categories needing further study" without specific criteria. Psychologist Theodore Millon, a researcher on personality disorders, and other researchers consider some relegated diagnoses to be equally valid disorders, and may also propose other personality disorders or subtypes, including mixtures of aspects of different categories of the officially accepted diagnoses. Millon proposed the following description of personality disorders:
Additional factors
In addition to classifying by category and cluster, it is possible to classify personality disorders using additional factors such as severity, impact on social functioning, and attribution.
Severity
This involves both the notion of personality difficulty as a measure of subthreshold scores for personality disorder using standard interviews and the evidence that those with the most severe personality disorders demonstrate a "ripple effect" of personality disturbance across the whole range of mental disorders. In addition to subthreshold (personality difficulty) and single cluster (simple personality disorder), this also derives complex or diffuse personality disorder (two or more clusters of personality disorder present) and can also derive severe personality disorder for those of greatest risk.
There are several advantages to classifying personality disorder by severity:
It not only allows for but also takes advantage of the tendency for personality disorders to be comorbid with each other.
It represents the influence of personality disorder on clinical outcome more satisfactorily than the simple dichotomous system of no personality disorder versus personality disorder.
This system accommodates the new diagnosis of severe personality disorder, particularly "dangerous and severe personality disorder" (DSPD).
Effect on social functioning
Social function is affected by many other aspects of mental functioning apart from that of personality. However, whenever there is persistently impaired social functioning in conditions in which it would normally not be expected, the evidence suggests that this is more likely to be created by personality abnormality than by other clinical variables. The Personality Assessment Schedule gives social function priority in creating a hierarchy in which the personality disorder creating the greater social dysfunction is given primacy over others in a subsequent description of personality disorder.
Attribution
Many who have a personality disorder do not recognize any abnormality and defend valiantly their continued occupancy of their personality role. This group have been termed the Type R, or treatment-resisting personality disorders, as opposed to the Type S or treatment-seeking ones, who are keen on altering their personality disorders and sometimes clamor for treatment. The classification of 68 personality disordered patients on the caseload of an assertive community team using a simple scale showed a 3 to 1 ratio between Type R and Type S personality disorders with Cluster C personality disorders being significantly more likely to be Type S, and paranoid and schizoid (Cluster A) personality disorders significantly more likely to be Type R than others.
Psychoanalytic theory has been used to explain treatment-resistant tendencies as egosyntonic (i.e. the patterns are consistent with the ego integrity of the individual) and are therefore perceived to be appropriate by that individual. In addition, this behavior can result in maladaptive coping skills and may lead to personal problems that induce extreme anxiety, distress, or depression and result in impaired psychosocial functioning.
Presentation
Comorbidity
There is a considerable personality disorder diagnostic co-occurrence. Patients who meet the DSM-IV-TR diagnostic criteria for one personality disorder are likely to meet the diagnostic criteria for another. Diagnostic categories provide clear, vivid descriptions of discrete personality types but the personality structure of actual patients might be more accurately described by a constellation of maladaptive personality traits.
The co-occurrence data were collected at sites using DSM-III-R criterion sets and were obtained to inform the development of the DSM-IV-TR personality disorder diagnostic criteria.
Abbreviations used: PPD – Paranoid Personality Disorder, SzPD – Schizoid Personality Disorder, StPD – Schizotypal Personality Disorder, ASPD – Antisocial Personality Disorder, BPD – Borderline Personality Disorder, HPD – Histrionic Personality Disorder, NPD – Narcissistic Personality Disorder, AvPD – Avoidant Personality Disorder, DPD – Dependent Personality Disorder, OCPD – Obsessive–Compulsive Personality Disorder, PAPD – Passive–Aggressive Personality Disorder.
The disorders in each of the three clusters may share with each other underlying common vulnerability factors involving cognition, affect and impulse control, and behavioral maintenance or inhibition, respectively. But they may also have a spectrum relationship to certain syndromal mental disorders:
Paranoid, schizoid or schizotypal personality disorders may be observed to be premorbid antecedents of delusional disorders or schizophrenia.
Borderline personality disorder is seen in association with mood and anxiety disorders, with impulse-control disorders, eating disorders, ADHD, ASD, or a substance use disorder.
Avoidant personality disorder is seen with social anxiety disorder.
Impact on functioning
It is generally assumed that all personality disorders are linked to impaired functioning and a reduced quality of life (QoL) because that is a basic diagnostic requirement. But research shows that this may be true only for some types of personality disorder. In several studies, higher levels of disability and lower QoL were predicted by avoidant, dependent, schizoid, paranoid, schizotypal and antisocial personality disorders. This link is particularly strong for avoidant, schizotypal and borderline PD. However, obsessive–compulsive PD was not related to a reduced QoL or increased impairment. A prospective study reported that all PD were associated with significant impairment 15 years later, except for obsessive compulsive and narcissistic personality disorder.
One study investigated some aspects of "life success" (status, wealth and successful intimate relationships). It showed somewhat poor functioning for schizotypal, antisocial, borderline, and dependent PD; schizoid PD had the lowest scores regarding these variables. Paranoid, histrionic and avoidant PD were average. Narcissistic and obsessive–compulsive PD, however, had high functioning and appeared to contribute rather positively to these aspects of life success. There is also a direct relationship between the number of diagnostic criteria met and qualityity of life: for each additional personality disorder criterion that a person meets, there is a further reduction in quality of life. Personality disorders – especially dependent, narcissistic, and sadistic personality disorders – also facilitate various forms of counterproductive work behavior, including knowledge hiding and knowledge sabotage.
Issues
In the workplace
Depending on the diagnosis, severity and individual, and the job itself, personality disorders can be associated with difficulty coping with work or the workplace—potentially leading to problems with others by interfering with interpersonal relationships. Indirect effects also play a role; for example, impaired educational progress or complications outside of work, such as substance abuse and co-morbid mental disorders, can be problematic. However, personality disorders can also bring about above-average work abilities by increasing competitive drive or causing the individual with the condition to exploit their co-workers.
In 2005 and again in 2009, psychologists Belinda Board and Katarina Fritzon at the University of Surrey, UK, interviewed and gave personality tests to high-level British executives and compared their profiles with those of criminal psychiatric patients at Broadmoor Hospital in the UK. They found that three out of eleven personality disorders were actually more common in executives than in the disturbed criminals:
Histrionic personality disorder: including superficial charm, insincerity, egocentricity and manipulation
Narcissistic personality disorder: including grandiosity, self-focused lack of empathy for others, exploitativeness and independence.
Obsessive–compulsive personality disorder: including perfectionism, excessive devotion to work, rigidity, stubbornness and dictatorial tendencies.
According to leadership academic Manfred F.R. Kets de Vries, it seems almost inevitable that some personality disorders will be present in a senior management team.
In children
Early stages and preliminary forms of personality disorders need a multi-dimensional and early treatment approach. Personality development disorder is considered to be a childhood risk factor or early stage of a later personality disorder in adulthood.
In addition, Robert F. Krueger's review of the research indicates that some children and adolescents do experience clinically significant syndromes that resemble adult personality disorders, and that these syndromes have meaningful correlates and are consequential. Much of this research has been framed by the adult personality disorder constructs from Axis II of the Diagnostic and Statistical Manual. Hence, researchers are less likely to encounter the first risk described at the outset of the review: clinicians and researchers are not simply avoiding use of the PD construct in youth. However, they may encounter the second risk: under-appreciation of the developmental context in which these syndromes occur. That is, although PD constructs show continuity over time, they are probabilistic predictors; not all youths who exhibit PD symptomatology become adult PD cases.
Versus normal personality
The issue of the relationship between normal personality and personality disorders is one of the important issues in personality and clinical psychology. The personality disorders classification (DSM-5 and ICD-10) follows a categorical approach that views personality disorders as discrete entities that are distinct from each other and from normal personality. In contrast, the dimensional approach holds that personality disorders represent maladaptive extensions of the same traits that describe normal personality.
Thomas Widiger and his collaborators have contributed to this debate significantly. He discussed the constraints of the categorical approach and argued for the dimensional approach to the personality disorders. Specifically, he proposed the Five Factor Model of personality as an alternative to the classification of personality disorders. For example, this view specifies that Borderline Personality Disorder can be understood as a combination of emotional lability (i.e., high neuroticism), impulsivity (i.e., low conscientiousness), and hostility (i.e., low agreeableness). Many studies across cultures have explored the relationship between personality disorders and the Five Factor Model. This research has demonstrated that personality disorders largely correlate in expected ways with measures of the Five Factor Model and has set the stage for including the Five Factor Model within DSM-5.
In clinical practice, individuals are generally diagnosed by an interview with a psychiatrist based on a mental status examination, which may take into account observations by relatives and others. One tool for diagnosing personality disorders is a process involving interviews with scoring systems: the patient is asked to answer questions, and the trained interviewer codes the responses according to set criteria. This process is fairly time-consuming.
Abbreviations used: PPD – Paranoid Personality Disorder, SzPD – Schizoid Personality Disorder, StPD – Schizotypal Personality Disorder, ASPD – Antisocial Personality Disorder, BPD – Borderline Personality Disorder, HPD – Histrionic Personality Disorder, NPD – Narcissistic Personality Disorder, AvPD – Avoidant Personality Disorder, DPD – Dependent Personality Disorder, OCPD – Obsessive–Compulsive Personality Disorder, PAPD – Passive–Aggressive Personality Disorder, DpPD – Depressive Personality Disorder, SDPD – Self-Defeating Personality Disorder, SaPD – Sadistic Personality Disorder, and n/a – not available.
As of 2002, there were over fifty published studies relating the five factor model (FFM) to personality disorders. Since that time, quite a number of additional studies have expanded on this research base and provided further empirical support for understanding the DSM personality disorders in terms of the FFM domains. In her seminal review of the personality disorder literature published in 2007, Lee Anna Clark asserted that "the five-factor model of personality is widely accepted as representing the higher-order structure of both normal and abnormal personality traits". The five factor model has been shown to significantly predict all 10 personality disorder symptoms and outperform the Minnesota Multiphasic Personality Inventory (MMPI) in the prediction of borderline, avoidant, and dependent personality disorder symptoms.
Research results examining the relationships between the FFM and each of the ten DSM personality disorder diagnostic categories are widely available. For example, in a study published in 2003 titled "The five-factor model and personality disorder empirical literature: A meta-analytic review", the authors analyzed data from 15 other studies to determine how personality disorders are different and similar, respectively, with regard to underlying personality traits. In terms of how personality disorders differ, the results showed that each disorder displays a FFM profile that is meaningful and predictable given its unique diagnostic criteria. With regard to their similarities, the findings revealed that the most prominent and consistent personality dimensions underlying a large number of the personality disorders are positive associations with neuroticism and negative associations with agreeableness.
Openness to experience
At least three aspects of openness to experience are relevant to understanding personality disorders: cognitive distortions, lack of insight (insight here meaning the ability to recognize one's own mental disorder), and impulsivity. Problems related to high openness that can cause difficulties with social or professional functioning are excessive fantasising, peculiar thinking, diffuse identity, unstable goals and nonconformity with the demands of society.
High openness is characteristic of schizotypal personality disorder (odd and fragmented thinking), narcissistic personality disorder (excessive self-valuation) and paranoid personality disorder (sensitivity to external hostility). Lack of insight (reflecting low openness) is characteristic of all personality disorders and could help explain the persistence of maladaptive behavioral patterns.
The problems associated with low openness are difficulties adapting to change, low tolerance for different worldviews or lifestyles, emotional flattening, alexithymia and a narrow range of interests. Rigidity is the most obvious aspect of (low) openness among personality disorders, indicating a lack of knowledge of one's emotional experiences. It is most characteristic of obsessive–compulsive personality disorder; its opposite, impulsivity (here: an aspect of openness that shows a tendency to behave unusually or autistically), is characteristic of schizotypal and borderline personality disorders.
Causes
Currently, there are no definitive proven causes for personality disorders. However, there are numerous possible causes and known risk factors supported by scientific research that vary depending on the disorder, the individual, and the circumstance. Overall, findings show that genetic disposition and life experiences, such as trauma and abuse, play a key role in the development of personality disorders.
Child abuse
Child abuse and neglect consistently show up as risk factors for the development of personality disorders in adulthood. One study examined retrospective reports of abuse from participants who had demonstrated psychopathology throughout their lives and who were later found to have past experience of abuse. In a study of 793 mothers and children, researchers asked mothers if they had screamed at their children, told them that they did not love them, or threatened to send them away. Children who had experienced such verbal abuse were three times as likely as other children to have borderline, narcissistic, obsessive–compulsive or paranoid personality disorders in adulthood. The sexually abused group demonstrated the most consistently elevated patterns of psychopathology. Officially verified physical abuse showed an extremely strong correlation with the development of antisocial and impulsive behavior. On the other hand, cases of abuse of the neglectful type that created childhood pathology were found to be subject to partial remission in adulthood.
Socioeconomic status
Socioeconomic status has also been examined as a potential cause of personality disorders. There is a strong association between low parental/neighborhood socioeconomic status and personality disorder symptoms. In a 2015 publication from Bonn, Germany, comparing parental socioeconomic status and children's personality, children from higher socioeconomic backgrounds were found to be more altruistic and less risk-seeking, and to have overall higher IQs. These traits correlate with a lower risk of developing personality disorders later in life. A study of female children detained for disciplinary actions found that psychological problems were most negatively associated with socioeconomic problems. Furthermore, social disorganization was found to be positively correlated with personality disorder symptoms.
Parenting
Evidence shows personality disorders may begin with parental personality issues. These cause the child to have their own difficulties in adulthood, such as difficulties reaching higher education, obtaining jobs, and securing dependable relationships. By either genetic or modeling mechanisms, children can pick up these traits. Additionally, poor parenting appears to have symptom-elevating effects on personality disorders. More specifically, lack of maternal bonding has also been correlated with personality disorders. In a study comparing 100 healthy individuals to 100 borderline personality disorder patients, analysis showed that BPD patients were significantly more likely not to have been breastfed as a baby (42.4% in BPD vs. 9.2% in healthy controls). These researchers suggested that "breastfeeding may act as an early indicator of the mother-infant relationship that seems to be relevant for bonding and attachment later in life". Additionally, findings suggest personality disorders show a negative correlation with two attachment variables: maternal availability and dependability. When these bonds are left unfostered, other attachment and interpersonal problems occur later in life, ultimately leading to the development of personality disorders.
Genetics
Currently, genetic research on the development of personality disorders is severely lacking. However, a few possible risk factors are under investigation. Researchers are currently looking into genetic mechanisms for traits such as aggression, fear and anxiety, which are associated with diagnosed individuals. More research is being conducted into disorder-specific mechanisms.
Neurobiological correlates – hippocampus, amygdala
Research shows that several brain regions can be altered in personality disorders. The hippocampus may be up to 18% smaller, the amygdala may also be reduced in volume, and there may be malfunctions in the striatum and nucleus accumbens and in the cingulum pathways that connect them and handle the feedback loops for integrating incoming information from the multiple senses. The resulting behavior can therefore be antisocial, in the sense of departing from the social norm and from what is socially acceptable and appropriate.
Management
Specific approaches
There are many different forms (modalities) of treatment used for personality disorders:
Individual psychotherapy has been a mainstay of treatment. There are long-term and short-term (brief) forms.
Family therapy, including couples therapy.
Group therapy for personality dysfunction is probably the second most used.
Psychoeducation may be used as an adjunct.
Self-help groups may provide resources for personality disorders.
Psychiatric medications for treating symptoms of personality dysfunction or co-occurring conditions.
Milieu therapy, a kind of group-based residential approach, has a history of use in treating personality disorders, including therapeutic communities.
The practice of mindfulness that includes developing the ability to be nonjudgmentally aware of unpleasant emotions appears to be a promising clinical tool for managing different types of personality disorders.
There are different specific theories or schools of therapy within many of these modalities. They may, for example, emphasize psychodynamic techniques, or cognitive or behavioral techniques. In clinical practice, many therapists use an 'eclectic' approach, taking elements of different schools as and when they seem to fit to an individual client. There is also often a focus on common themes that seem to be beneficial regardless of techniques, including attributes of the therapist (e.g. trustworthiness, competence, caring), processes afforded to the client (e.g. ability to express and confide difficulties and emotions), and the match between the two (e.g. aiming for mutual respect, trust and boundaries).
Despite the lack of evidence supporting the benefit of antipsychotics in people with personality disorders, 1 in 4 of those who do not have a serious mental illness are prescribed them in UK primary care. Many people receive these medications for over a year, contrary to NICE guidelines.
Challenges
The management and treatment of personality disorders can be a challenging and controversial area, for by definition the difficulties have been enduring and affect multiple areas of functioning. This often involves interpersonal issues, and there can be difficulties in seeking and obtaining help from organizations in the first place, as well as with establishing and maintaining a specific therapeutic relationship. On the one hand, an individual may not consider themselves to have a mental health problem, while on the other, community mental health services may view individuals with personality disorders as too complex or difficult, and may directly or indirectly exclude individuals with such diagnoses or associated behaviors. The disruptiveness that people with personality disorders can create in an organisation makes these, arguably, the most challenging conditions to manage.
Apart from all these issues, an individual may not consider their personality to be disordered or the cause of problems. This perspective may be caused by the patient's ignorance or lack of insight into their own condition, an ego-syntonic perception of the problems with their personality that prevents them from experiencing it as being in conflict with their goals and self-image, or by the simple fact that there is no distinct or objective boundary between 'normal' and 'abnormal' personalities. There is substantial social stigma and discrimination related to the diagnosis.
The term 'personality disorder' encompasses a wide range of issues, each with a different level of severity or impairment; thus, personality disorders can require fundamentally different approaches and understandings. To illustrate the scope of the matter, consider that while some disorders or individuals are characterized by continual social withdrawal and the shunning of relationships, others may cause fluctuations in forwardness. The extremes are worse still: at one extreme lie self-harm and self-neglect, while at another extreme some individuals may commit violence and crime. There can be other factors such as problematic substance use or dependency or behavioral addictions.
Therapists in this area can become disheartened by lack of initial progress, or by apparent progress that then leads to setbacks. Clients may be perceived as negative, rejecting, demanding, aggressive or manipulative. This has been looked at in terms of both therapist and client; in terms of social skills, coping efforts, defense mechanisms, or deliberate strategies; and in terms of moral judgments or the need to consider underlying motivations for specific behaviors or conflicts. The vulnerabilities of a client, and indeed a therapist, may become lost behind actual or apparent strength and resilience. It is commonly stated that there is always a need to maintain appropriate professional personal boundaries, while allowing for emotional expression and therapeutic relationships. However, there can be difficulty acknowledging the different worlds and views that both the client and therapist may live with. A therapist may assume that the kinds of relationships and ways of interacting that make them feel safe and comfortable have the same effect on clients. As an example of one extreme, people who may have been exposed to hostility, deceptiveness, rejection, aggression or abuse in their lives, may in some cases be made confused, intimidated or suspicious by presentations of warmth, intimacy or positivity. On the other hand, reassurance, openness and clear communication are usually helpful and needed. It can take several months of sessions, and perhaps several stops and starts, to begin to develop a trusting relationship that can meaningfully address a client's issues.
Epidemiology
The prevalence of personality disorder in the general community was largely unknown until surveys starting from the 1990s. In 2008 the median rate of diagnosable PD was estimated at 10.6%, based on six major studies across three nations. This rate of around one in ten, especially as associated with high use of services, is described as a major public health concern requiring attention by researchers and clinicians. The prevalence of individual personality disorders ranges from about 2% to 8% for the more common varieties, such as obsessive–compulsive, schizotypal, antisocial, borderline, and histrionic, to 0.5–1% for the least common, such as narcissistic and avoidant.
A screening survey across 13 countries by the World Health Organization using DSM-IV criteria, reported in 2009 a prevalence estimate of around 6% for personality disorders. The rate sometimes varied with demographic and socioeconomic factors, and functional impairment was partly explained by co-occurring mental disorders. In the US, screening data from the National Comorbidity Survey Replication between 2001 and 2003, combined with interviews of a subset of respondents, indicated a population prevalence of around 9% for personality disorders in total. Functional disability associated with the diagnoses appeared to be largely due to co-occurring mental disorders (Axis I in the DSM). This statistic has been supported by other studies in the US, with overall global prevalence statistics ranging from 9% to 11%.
A UK national epidemiological study (based on DSM-IV screening criteria), reclassified into levels of severity rather than just diagnosis, reported in 2010 that the majority of people show some personality difficulties in one way or another (short of the threshold for diagnosis), while the prevalence of the most complex and severe cases (including meeting criteria for multiple diagnoses in different clusters) was estimated at 1.3%. Even low levels of personality symptoms were associated with functional problems, but the group most severely in need of services was much smaller. Personality disorders (especially Cluster A) are found more commonly among homeless people.
There are some sex differences in the frequency of personality disorders which are shown in the table below. The known prevalence of some personality disorders, especially borderline PD and antisocial PD are affected by diagnostic bias. This is due to many factors including disproportionately high research towards borderline PD and antisocial PD, alongside social and gender stereotypes, and the relationship between diagnosis rates and prevalence rates. Since the removal of depressive PD, self-defeating PD, sadistic PD and passive-aggressive PD from the DSM-5, studies analysing their prevalence and demographics have been limited.
History
Diagnostic and Statistical Manual history
Before the 20th century
Personality disorder is a term with a distinctly modern meaning, owing in part to its clinical usage and the institutional character of modern psychiatry. The currently accepted meaning must be understood in the context of historical changing classification systems such as DSM-IV and its predecessors. Although highly anachronistic, and ignoring radical differences in the character of subjectivity and social relations, some have suggested similarities to other concepts going back to at least the ancient Greeks. For example, the Greek philosopher Theophrastus described 29 'character' types that he saw as deviations from the norm, and similar views have been found in Asian, Arabic and Celtic cultures. A long-standing influence in the Western world was Galen's concept of personality types, which he linked to the four humours proposed by Hippocrates.
Such views lasted into the eighteenth century, when experiments began to question the supposed biologically based humours and 'temperaments'. Psychological concepts of character and 'self' became widespread. In the nineteenth century, 'personality' referred to a person's conscious awareness of their behavior, a disorder of which could be linked to altered states such as dissociation. This sense of the term has been compared to the use of the term 'multiple personality disorder' in the first versions of the DSM.
Physicians in the early nineteenth century started to diagnose forms of insanity involving disturbed emotions and behaviors but seemingly without significant intellectual impairment or delusions or hallucinations. Philippe Pinel referred to this as 'manie sans délire' (mania without delusions) and described a number of cases mainly involving excessive or inexplicable anger or rage. James Cowles Prichard advanced a similar concept he called moral insanity, which would be used to diagnose patients for some decades. 'Moral' in this sense referred to affect (emotion or mood) rather than simply the ethical dimension, but it was arguably a significant move for 'psychiatric' diagnostic practice to become so clearly engaged with judgments about individuals' social behaviour. Prichard was influenced by his own religious, social and moral beliefs, as well as ideas in German psychiatry. These categories were quite different from, and broader than, later definitions of personality disorder, while also being developed by some into a more specific meaning of moral degeneracy akin to later ideas about 'psychopaths'. Separately, Richard von Krafft-Ebing popularized the terms sadism and masochism, as well as homosexuality, as psychiatric issues.
The German psychiatrist Koch sought to make the moral insanity concept more scientific, and in 1891 suggested the phrase 'psychopathic inferiority', theorized to be a congenital disorder. This referred to continual and rigid patterns of misconduct or dysfunction in the absence of apparent "mental retardation" or illness, supposedly without a moral judgment. His work, described as deeply rooted in his Christian faith, established the concept of personality disorder as used today.
20th century
In the early 20th century, another German psychiatrist, Emil Kraepelin, included a chapter on psychopathic inferiority in his influential work on clinical psychiatry for students and physicians. He suggested six types: excitable, unstable, eccentric, liar, swindler and quarrelsome. The categories were essentially defined by the most disordered criminal offenders observed, distinguishing between criminals by impulse, professional criminals, and morbid vagabonds who wandered through life. Kraepelin also described three paranoid (meaning then delusional) disorders, resembling later concepts of schizophrenia, delusional disorder and paranoid personality disorder. A diagnostic term for the latter concept would be included in the DSM from 1952, and from 1980 the DSM would also include schizoid and schizotypal personality disorders; interpretations of earlier (1921) theories of Ernst Kretschmer led to a distinction between these and another type later included in the DSM, avoidant personality disorder.
In 1933 Russian psychiatrist Pyotr Borisovich Gannushkin published his book Manifestations of Psychopathies: Statics, Dynamics, Systematic Aspects, which was one of the first attempts to develop a detailed typology of psychopathies. Regarding maladaptation, ubiquity, and stability as the three main symptoms of behavioral pathology, he distinguished nine clusters of psychopaths: cycloids (including constitutionally depressive, constitutionally excitable, cyclothymics, and emotionally labile), asthenics (including psychasthenics), schizoids (including dreamers), paranoiacs (including fanatics), epileptoids, hysterical personalities (including pathological liars), unstable psychopaths, antisocial psychopaths, and the constitutionally stupid. Some elements of Gannushkin's typology were later incorporated into the theory developed by a Russian adolescent psychiatrist, Andrey Yevgenyevich Lichko, who was also interested in psychopathies along with their milder forms, the so-called accentuations of character.
In 1939, psychiatrist David Henderson published a theory of 'psychopathic states' that contributed to popularly linking the term to anti-social behavior. Hervey M. Cleckley's 1941 text, The Mask of Sanity, based on his personal categorization of similarities he noted in some prisoners, marked the start of the modern clinical conception of psychopathy and its popularist usage.
Towards the mid 20th century, psychoanalytic theories were coming to the fore based on work from the turn of the century being popularized by Sigmund Freud and others. This included the concept of character disorders, which were seen as enduring problems linked not to specific symptoms but to pervasive internal conflicts or derailments of normal childhood development. These were often understood as weaknesses of character or willful deviance, and were distinguished from neurosis or psychosis. The term 'borderline' stems from a belief some individuals were functioning on the edge of those two categories, and a number of the other personality disorder categories were also heavily influenced by this approach, including dependent, obsessive–compulsive and histrionic, the latter starting off as a conversion symptom of hysteria particularly associated with women, then a hysterical personality, then renamed histrionic personality disorder in later versions of the DSM. A passive aggressive style was defined clinically by Colonel William Menninger during World War II in the context of men's reactions to military compliance, which would later be referenced as a personality disorder in the DSM. Otto Kernberg was influential with regard to the concepts of borderline and narcissistic personalities later incorporated in 1980 as disorders into the DSM.
Meanwhile, a more general personality psychology had been developing in academia and to some extent clinically. Gordon Allport published theories of personality traits from the 1920s, and Henry Murray advanced a theory called personology, which influenced a later key advocate of personality disorders, Theodore Millon. Tests were being developed or applied for personality evaluation, including projective tests such as the Rorschach test, as well as questionnaires such as the Minnesota Multiphasic Personality Inventory. Around mid-century, Hans Eysenck was analysing traits and personality types, and psychiatrist Kurt Schneider was popularising a clinical use in place of the previously more usual terms 'character', 'temperament' or 'constitution'.
American psychiatrists officially recognized concepts of enduring personality disturbances in the first Diagnostic and Statistical Manual of Mental Disorders in the 1950s, which relied heavily on psychoanalytic concepts. Somewhat more neutral language was employed in the DSM-II in 1968, though the terms and descriptions had only a slight resemblance to current definitions. The DSM-III published in 1980 made some major changes, notably putting all personality disorders onto a second separate 'axis' along with "mental retardation", intended to signify more enduring patterns, distinct from what were considered axis one mental disorders. 'Inadequate' and 'asthenic' personality disorder categories were deleted, and others were expanded into more types, or changed from being personality disorders to regular disorders. Sociopathic personality disorder, which had been the term for psychopathy, was renamed Antisocial Personality Disorder. Most categories were given more specific 'operationalized' definitions, with standard criteria psychiatrists could agree on to conduct research and diagnose patients. In the DSM-III revision, self-defeating personality disorder and sadistic personality disorder were included as provisional diagnoses requiring further study. They were dropped in the DSM-IV, though a proposed 'depressive personality disorder' was added; in addition, the official diagnosis of passive–aggressive personality disorder was dropped, tentatively renamed 'negativistic personality disorder.'
International differences have been noted in how attitudes have developed towards the diagnosis of personality disorder. Kurt Schneider argued they were 'abnormal varieties of psychic life' and therefore not necessarily the domain of psychiatry, a view said to still have influence in Germany today. British psychiatrists have also been reluctant to address such disorders or consider them on par with other mental disorders, which has been attributed partly to resource pressures within the National Health Service, as well as to negative medical attitudes towards behaviors associated with personality disorders. In the US, the prevailing healthcare system and psychoanalytic tradition has been said to provide a rationale for private therapists to diagnose some personality disorders more broadly and provide ongoing treatment for them.
| Biology and health sciences | Mental disorder | null |
554088 | https://en.wikipedia.org/wiki/Lyman-alpha%20forest | Lyman-alpha forest | In astronomical spectroscopy, the Lyman-alpha forest is a series of absorption lines in the spectra of distant galaxies and quasars arising from the Lyman-alpha electron transition of the neutral hydrogen atom. As the light travels through multiple gas clouds with different redshifts, multiple absorption lines are formed.
History
The Lyman-alpha forest was first discovered in 1970 by astronomer Roger Lynds in an observation of the quasar 4C 05.34. Quasar 4C 05.34 was the farthest object observed to that date, and Lynds noted an unusually large number of absorption lines in its spectrum and suggested that most of them were due to the same Lyman-alpha transition. Follow-up observations by John Bahcall and Samuel Goldsmith confirmed the presence of the unusual absorption lines, though they were less conclusive about the origin of the lines. Subsequently, the spectra of many other high-redshift quasars were observed to have the same system of narrow absorption lines. Lynds was the first to describe them as the "Lyman-alpha forest". Jan Oort argued that the absorption features are due not to any physical interactions within the quasars themselves, but to absorption inside clouds of intergalactic gas in superclusters.
Physical background
For a neutral hydrogen atom, spectral lines are formed when an electron transitions between energy levels. The Lyman series of spectral lines are produced by electrons transitioning between the ground state and higher energy levels (excited states). The Lyman-alpha transition corresponds to an electron transitioning between the ground state (n = 1) and the first excited state (n = 2). The Lyman-alpha spectral line has a laboratory wavelength (or rest wavelength) of 1216 Å, which is in the ultraviolet portion of the electromagnetic spectrum.
The Lyman-alpha absorption lines in the quasar spectra result from intergalactic gas through which the galaxy or quasar's light has traveled. Since neutral hydrogen clouds in the intergalactic medium are at different degrees of redshift (due to their varying distance from Earth), their absorption lines are observed at a range of wavelengths. Each individual cloud leaves its fingerprint as an absorption line at a different position in the observed spectrum.
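To make the redshift bookkeeping concrete, the following minimal Python sketch (not from the article; the cloud redshifts are made-up values) maps each hypothetical intervening cloud to the observed wavelength of its Lyman-alpha absorption line, using the more precise rest wavelength of 1215.67 Å:

```python
# Minimal sketch: where Lyman-alpha absorption from intervening clouds
# lands in an observed spectrum. Cloud redshifts are illustrative only.
LYA_REST_ANGSTROM = 1215.67  # rest wavelength of the Lyman-alpha transition

def observed_wavelength(z: float, rest: float = LYA_REST_ANGSTROM) -> float:
    """Redshifted wavelength: lambda_obs = (1 + z) * lambda_rest."""
    return (1.0 + z) * rest

cloud_redshifts = [1.80, 2.10, 2.35, 2.60]  # hypothetical sightline
for z in cloud_redshifts:
    print(f"z = {z:.2f} -> absorption line at {observed_wavelength(z):.1f} Angstrom")
```

A sightline through many such clouds produces one line per cloud, all blueward of the quasar's own redshifted Lyman-alpha emission, which is what gives the "forest" its dense appearance.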
Use as a tool in astrophysics
The Lyman-alpha forest is an important probe of the intergalactic medium and can be used to determine the frequency and density of clouds containing neutral hydrogen, as well as their temperature. By searching for lines from other elements, such as helium, carbon and silicon (matching in redshift), the abundance of heavier elements in the clouds can also be studied. A cloud with a high column density of neutral hydrogen will show typical damping wings around the line and is referred to as a damped Lyman-alpha system.
For quasars at higher redshift the number of lines in the forest is higher, until at a redshift of about 6 there is so much neutral hydrogen in the intergalactic medium that the forest turns into a Gunn–Peterson trough. This marks the end of the reionization of the universe.
The Lyman-alpha forest observations can be used to constrain cosmological models. They can also be used to constrain the properties of dark matter, such as the dark matter free streaming scale, which for thermal relic dark matter models is closely related to the dark matter particle mass.
| Physical sciences | Active galactic nucleus | Astronomy |
554386 | https://en.wikipedia.org/wiki/Law%20of%20superposition | Law of superposition | The law of superposition is an axiom that forms one of the bases of the sciences of geology, archaeology, and other fields pertaining to geological stratigraphy. In its plainest form, it states that in undeformed stratigraphic sequences, the oldest strata will lie at the bottom of the sequence, while newer material stacks upon the surface to form new deposits over time. This is paramount to stratigraphic dating, which requires a set of assumptions, including that the law of superposition holds true and that an object cannot be older than the materials of which it is composed. To illustrate the practical applications of superposition in scientific inquiry, sedimentary rock that has not been deformed by more than 90° will exhibit the oldest layers on the bottom, thus enabling paleontologists and paleobotanists to identify the relative ages of any fossils found within the strata, with the remains of the most archaic lifeforms confined to the lowest. These findings can inform the community on the fossil record covering the relevant strata, to determine which species coexisted temporally and which species existed successively in perhaps an evolutionarily or phylogenetically relevant way.
History
The law of superposition was first proposed in 1669 by the Danish scientist Nicolas Steno, and appears as one of his major theses in the seminal work Dissertationis prodromus (1669).
In the English-language literature, the law was popularized by William "Strata" Smith, who used it to produce the first geologic map of Britain. It is the first of Smith's laws, which were formally published in Strata Identified by Fossils (1816–1819).
Archaeological considerations
Superposition in archaeology, especially as applied to stratification during excavation, is slightly different, because the processes involved in laying down archaeological strata differ somewhat from geological processes. Human-made intrusions and activity in the archaeological record need not form chronologically from top to bottom, nor be deformed from the horizontal as natural strata are by equivalent processes. Some archaeological strata (often termed contexts or layers) are created by undercutting previous strata: for example, the silt back-fill of an underground drain forms some time after the ground immediately above it. Other examples of non-vertical superposition are modifications to standing structures, such as the creation of new doors and windows in a wall. Superposition in archaeology thus requires a degree of interpretation to correctly identify chronological sequences, and in this sense it is more dynamic and multidimensional.
Other limitations to stratification and superposition
Original stratification induced by natural processes can subsequently be disrupted or permuted by a number of factors, including animal interference and vegetation, as well as limestone crystallization.
Stratification behaves in a different manner with surface-formed igneous depositions, such as lava flows and ash falls, and thus superposition may not always successfully apply under certain conditions.
| Physical sciences | Stratigraphy | Earth science |
554469 | https://en.wikipedia.org/wiki/Stratum | Stratum | In geology and related fields, a stratum (plural: strata) is a layer of rock or sediment characterized by certain lithologic properties or attributes that distinguish it from adjacent layers from which it is separated by visible surfaces known as either bedding surfaces or bedding planes. Prior to the publication of the International Stratigraphic Guide, older publications defined a stratum as being either equivalent to a single bed or composed of a number of beds; as a layer greater than 1 cm in thickness and constituting a part of a bed; or as a general term that includes both bed and lamina. Related terms are substrate and substratum (pl. substrata), a stratum underlying another stratum.
Characteristics
A stratum is typically one of a number of parallel layers that lie one upon another to form enormous thicknesses of strata. The bedding surfaces (bedding planes) that separate strata represent episodic breaks in deposition associated with periodic erosion, cessation of deposition, or some combination of the two. Stacked together with other strata, individual strata can form composite stratigraphic units that extend over hundreds of thousands of square kilometers of the Earth's surface; an individual stratum can cover similarly large areas. Strata are typically seen as bands of differently colored or differently structured material exposed in cliffs, road cuts, quarries, and river banks. Individual bands may vary in thickness from a few millimeters to several meters or more. A band may represent a specific mode of deposition: river silt, beach sand, coal swamp, sand dune, lava bed, etc.
Types of stratum
In the study of rock and sediment strata, geologists have recognized a number of different types of strata, including bed, flow, band, and key bed. A bed is a single stratum that is lithologically distinguishable from other layers above and below it. In the classification hierarchy of sedimentary lithostratigraphic units, a bed is the smallest formal unit. However, only beds that are distinctive enough to be useful for stratigraphic correlation and geologic mapping are customarily given formal names and considered formal lithostratigraphic units. The volcanic equivalent of a bed, a flow, is a discrete extrusive volcanic stratum or body distinguishable by texture, composition, or other objective criteria. As in the case of a bed, a flow should only be designated and named as a formal lithostratigraphic unit when it is distinctive, widespread, and useful for stratigraphic correlation. A band is a thin stratum that is distinguishable by a distinctive lithology or color and is useful in correlating strata. Finally, a key bed, also called a marker bed, is a well-defined, easily identifiable stratum or body of strata that has sufficiently distinctive characteristics, such as lithology or fossil content, to be recognized and correlated during geologic field or subsurface mapping.
Gallery
| Physical sciences | Stratigraphy | Earth science |
554877 | https://en.wikipedia.org/wiki/Thresher%20shark | Thresher shark | Thresher sharks are large mackerel sharks of the family Alopiidae found in all temperate and tropical oceans of the world; the family contains three extant species, all within the genus Alopias.
All three thresher shark species have been listed as vulnerable by the World Conservation Union (IUCN) since 2007. All three are popular big-game sport fish, and additionally they are hunted commercially for their meat, livers (for shark liver oil), skin (for shagreen) and fins (for use in delicacies such as shark-fin soup).
Despite being active predatory fish, thresher sharks do not appear to be a threat to humans.
Taxonomy
The genus and family name derive from the Greek word ἀλώπηξ (alṓpēx), meaning fox. As a result, the long-tailed or common thresher shark, Alopias vulpinus, is also known as the fox shark. The common name is derived from the distinctive, thresher-like tail or caudal fin, which can be as long as the body of the shark itself.
Species
The three extant thresher shark species are all in the genus Alopias. The possible existence of a hitherto unrecognized fourth species was revealed during the course of a 1995 allozyme analysis by Blaise Eitner. This species is apparently found in the eastern Pacific off Baja California, and has previously been misidentified as the bigeye thresher. So far, it is only known from muscle samples from one specimen, and no aspect of its morphology has been documented.
Phylogeny and evolution
Based on cytochrome b genes, Martin and Naylor (1997) concluded the thresher sharks form a monophyletic sister group to the clade containing the families Cetorhinidae (basking shark) and Lamnidae (mackerel sharks). The megamouth shark (Megachasma pelagios) was placed as the next-closest relative to these taxa, though the phylogenetic position of that species has yet to be resolved with confidence. Cladistic analyses by Compagno (1991) based on morphological characters, and Shimada (2005) based on dentition, have both corroborated this interpretation.
Within the family, an analysis of allozyme variation by Eitner (1995) found the common thresher is the most basal member, with a sister relationship to a group containing the unrecognized fourth Alopias species and a clade comprising the bigeye and pelagic threshers. However, the position of the undescribed fourth species was only based on a single synapomorphy (derived group-defining character) in one specimen, so some uncertainty in its placement remains.
Distribution and habitat
Although occasionally sighted in shallow, inshore waters, thresher sharks are primarily pelagic: they prefer the open ocean. Common threshers tend to be more prevalent in coastal waters over continental shelves, and are found along the continental shelves of North America and Asia in the North Pacific, but are rare in the Central and Western Pacific, where bigeye and pelagic thresher sharks are more common in the warmer waters. A thresher shark was seen on the live video feed from one of the ROVs monitoring BP's Macondo oil well blowout in the Gulf of Mexico, significantly deeper than what had previously been thought to be the species' depth limit.
A bigeye has also been found in the western Mediterranean, and so distribution may be wider than previously believed, or environmental factors may be forcing sharks to search for new territories.
Anatomy and appearance
Named for their exceptionally long, thresher-like heterocercal tail or caudal fins (which can be as long as the total body length), thresher sharks are active predators; the tail is used as a weapon to stun prey. The thresher shark has a short head and a cone-shaped nose. The mouth is generally small, and the teeth range in size from small to large. By far the largest of the three species is the common thresher, Alopias vulpinus, which may reach a length of and a mass of over . The bigeye thresher, A. superciliosus, is next in size, reaching a length of 4.9 m (16 ft); at just 3 m (10 ft), the pelagic thresher, A. pelagicus, is the smallest.
Thresher sharks are fairly slender, with small dorsal fins and large, recurved pectoral fins. With the exception of the bigeye thresher, these sharks have relatively small eyes positioned to the forward of the head. Coloration ranges from brownish, bluish or purplish gray dorsally with lighter shades ventrally.
The three species can be roughly distinguished by the primary color of the dorsal surface of the body. Common threshers are dark green, bigeye threshers are brown and pelagic threshers are generally blue. Lighting conditions and water clarity can affect how any one shark appears to an observer, but the color test is generally supported when other features are examined.
Diet
Thresher sharks feed mainly on schooling pelagic fish such as bluefish, juvenile tuna and mackerel, which they are known to follow into shallow waters, as well as on squid and cuttlefish. Crustaceans and occasionally seabirds are also eaten. The thresher shark stuns its prey by using its elongated tail as a whipping weapon.
Behavior
Thresher sharks are largely solitary creatures. Thresher populations of the Indian Ocean are known to be separated by depth and space according to sex. Contrary to their generally solitary nature, some species do occasionally hunt in groups of two or three. All species are noted for their highly migratory or oceanodromous habits. When hunting schooling fish, thresher sharks are known to "whip" the water: the elongated tail is used to swat smaller fish, stunning them before feeding. Thresher sharks are one of the few shark species known to jump fully out of the water, using their elongated tail to propel themselves, making turns like dolphins; this behavior is called breaching.
Endothermy
Two species of thresher have been identified as having a modified circulatory system that acts as a counter-current heat exchanger, which allows them to retain metabolic heat. Mackerel sharks (family Lamnidae) have a homologous structure that is more extensively developed. This structure is a strip of red muscle along each flank, with a tight network of blood vessels that transfers metabolic heat inward towards the core of the shark, allowing it to maintain and regulate its body heat.
Reproduction
No distinct breeding season is observed by thresher sharks. Fertilization and embryonic development occur internally; this ovoviviparous or live-bearing mode of reproduction results in a small litter (usually two to four) of large well-developed pups, up to at birth in thintail threshers. The young fish exhaust their yolk sacs while still inside the mother, at which time they begin feasting on the mother's unfertilized eggs; this is known as oophagy.
Thresher sharks are slow to mature; males reach sexual maturity between seven and thirteen years of age and females between eight and fourteen years in bigeye threshers. They may live for 20 years or more.
In October 2013, the first picture of a thresher shark giving birth was taken off the coast of the Philippines.
Fisheries
Thresher sharks are classified as prized game fish in the United States and South Africa. Common thresher sharks are the target of a popular recreational fishery off Baja, Mexico.
Status
Because of their low fecundity, thresher sharks are highly vulnerable to overfishing. All three thresher shark species have been listed as vulnerable to extinction by the World Conservation Union (IUCN) since 2007.
| Biology and health sciences | Sharks | Animals |
554994 | https://en.wikipedia.org/wiki/P-value | P-value | In null-hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct. A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Even though reporting p-values of statistical tests is common practice in academic publications of many quantitative fields, misinterpretation and misuse of p-values is widespread and has been a major topic in mathematics and metascience.
In 2016, the American Statistical Association (ASA) made a formal statement that "p-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone" and that "a p-value, or statistical significance, does not measure the size of an effect or the importance of a result" or "evidence regarding a model or hypothesis". That said, a 2019 task force by ASA has issued a statement on statistical significance and replicability, concluding with: "p-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data".
Basic concepts
In statistics, every conjecture concerning the unknown probability distribution of a collection of random variables representing the observed data in some study is called a statistical hypothesis. If we state one hypothesis only and the aim of the statistical test is to see whether this hypothesis is tenable, but not to investigate other specific hypotheses, then such a test is called a null hypothesis test.
As our statistical hypothesis will, by definition, state some property of the distribution, the null hypothesis is the default hypothesis under which that property does not exist. The null hypothesis is typically that some parameter (such as a correlation or a difference between means) in the populations of interest is zero. Our hypothesis might specify the probability distribution of the data precisely, or it might only specify that it belongs to some class of distributions. Often, we reduce the data to a single numerical statistic, e.g., $T$, whose marginal probability distribution is closely connected to a main question of interest in the study.
The p-value is used in the context of null hypothesis testing in order to quantify the statistical significance of a result, the result being the observed value of the chosen statistic . The lower the p-value is, the lower the probability of getting that result if the null hypothesis were true. A result is said to be statistically significant if it allows us to reject the null hypothesis. All other things being equal, smaller p-values are taken as stronger evidence against the null hypothesis.
Loosely speaking, rejection of the null hypothesis implies that there is sufficient evidence against it.
As a particular example, if a null hypothesis states that a certain summary statistic $T$ follows the standard normal distribution $N(0,1)$, then the rejection of this null hypothesis could mean that (i) the mean of $T$ is not 0, or (ii) the variance of $T$ is not 1, or (iii) $T$ is not normally distributed. Different tests of the same null hypothesis would be more or less sensitive to different alternatives. However, even if we do manage to reject the null hypothesis for all 3 alternatives, and even if we know that the distribution is normal and the variance is 1, the null hypothesis test does not tell us which non-zero values of the mean are now most plausible. The more independent observations from the same probability distribution one has, the more accurate the test will be, and the higher the precision with which one will be able to determine the mean value and show that it is not equal to zero; but this will also increase the importance of evaluating the real-world or scientific relevance of this deviation.
Definition and interpretation
Definition
The p-value is the probability under the null hypothesis of obtaining a real-valued test statistic at least as extreme as the one obtained. Consider an observed test-statistic $t$ from unknown distribution $T$. Then the p-value $p$ is what the prior probability would be of observing a test-statistic value at least as "extreme" as $t$ if null hypothesis $H_0$ were true. That is:
$p = \Pr(T \geq t \mid H_0)$ for a one-sided right-tail test-statistic distribution.
$p = \Pr(T \leq t \mid H_0)$ for a one-sided left-tail test-statistic distribution.
$p = 2 \min\{\Pr(T \geq t \mid H_0), \Pr(T \leq t \mid H_0)\}$ for a two-sided test-statistic distribution. If the distribution of $T$ is symmetric about zero, then $p = \Pr(|T| \geq |t| \mid H_0)$.
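As an illustration of these three definitions, here is a minimal Python sketch that assumes, purely for illustration, a test statistic that is standard normal under the null hypothesis; the observed value t_obs is hypothetical:

```python
# Minimal sketch of the three p-value definitions for a test statistic
# assumed (for illustration only) to be standard normal under H0.
from scipy.stats import norm

t_obs = 1.7  # hypothetical observed test-statistic value

p_right = norm.sf(t_obs)           # Pr(T >= t | H0): right-tail p-value
p_left = norm.cdf(t_obs)           # Pr(T <= t | H0): left-tail p-value
p_two = 2 * min(p_right, p_left)   # two-sided p-value

# The null distribution is symmetric about zero, so equivalently:
p_two_symmetric = 2 * norm.sf(abs(t_obs))  # Pr(|T| >= |t| | H0)

print(p_right, p_left, p_two, p_two_symmetric)
```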
Interpretations
In a significance test, the null hypothesis is rejected if the p-value is less than or equal to a predefined threshold value $\alpha$, which is referred to as the alpha level or significance level. $\alpha$ is not derived from the data, but rather is set by the researcher before examining the data. $\alpha$ is commonly set to 0.05, though lower alpha levels are sometimes used. The 0.05 value (equivalent to 1/20 chances) was originally proposed by R. Fisher in 1925 in his famous book entitled "Statistical Methods for Research Workers". In 2018, a group of statisticians led by Daniel Benjamin proposed the adoption of the 0.005 value as the standard value for statistical significance worldwide.
Different p-values based on independent sets of data can be combined, for instance using Fisher's combined probability test.
Distribution
The p-value is a function of the chosen test statistic $T$ and is therefore a random variable. If the null hypothesis fixes the probability distribution of $T$ precisely (e.g., $H_0 \colon \theta = \theta_0$, where $\theta$ is the only parameter), and if that distribution is continuous, then when the null hypothesis is true, the p-value is uniformly distributed between 0 and 1. Regardless of the truth of $H_0$, the p-value is not fixed; if the same test is repeated independently with fresh data, one will typically obtain a different p-value in each iteration.
Usually only a single p-value relating to a hypothesis is observed, so the p-value is interpreted by a significance test, and no effort is made to estimate the distribution it was drawn from. When a collection of p-values are available (e.g. when considering a group of studies on the same subject), the distribution of p-values is sometimes called a p-curve.
A p-curve can be used to assess the reliability of scientific literature, such as by detecting publication bias or p-hacking.
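The uniformity claim above is easy to check by simulation. A minimal sketch, assuming normally distributed data and a one-sample t-test with a true null hypothesis, shows the resulting p-values spreading evenly over (0, 1), with roughly 5% falling below 0.05:

```python
# Minimal simulation: under a true null hypothesis and a continuous test
# statistic, p-values are uniformly distributed on (0, 1).
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
pvals = np.empty(10_000)
for i in range(pvals.size):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)  # H0 true: mean is 0
    pvals[i] = ttest_1samp(sample, popmean=0.0).pvalue

print(np.mean(pvals < 0.05))  # close to 0.05, as uniformity predicts
```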
Distribution for composite hypothesis
In parametric hypothesis testing problems, a simple or point hypothesis refers to a hypothesis where the parameter's value is assumed to be a single number. In contrast, in a composite hypothesis the parameter's value is given by a set of numbers. When the null-hypothesis is composite (or the distribution of the statistic is discrete), then when the null-hypothesis is true the probability of obtaining a p-value less than or equal to any number between 0 and 1 is still less than or equal to that number. In other words, it remains the case that very small values are relatively unlikely if the null-hypothesis is true, and that a significance test at level is obtained by rejecting the null-hypothesis if the p-value is less than or equal to .
For example, when testing the null hypothesis that a distribution is normal with a mean less than or equal to zero against the alternative that the mean is greater than zero ($H_0 \colon \mu \leq 0$ vs. $H_1 \colon \mu > 0$, variance known), the null hypothesis does not specify the exact probability distribution of the appropriate test statistic. In this example that would be the Z-statistic belonging to the one-sided one-sample Z-test. For each possible value of the theoretical mean, the Z-test statistic has a different probability distribution. In these circumstances the p-value is defined by taking the least favorable null-hypothesis case, which is typically on the border between null and alternative.
This definition ensures the complementarity of p-values and alpha-levels: a significance level of $\alpha$ means one only rejects the null hypothesis if the p-value is less than or equal to $\alpha$, and the hypothesis test will indeed have a maximum type-1 error rate of $\alpha$.
Usage
The p-value is widely used in statistical hypothesis testing, specifically in null hypothesis significance testing. In this method, before conducting the study, one first chooses a model (the null hypothesis) and the alpha level α (most commonly 0.05). After analyzing the data, if the p-value is less than α, that is taken to mean that the observed data is sufficiently inconsistent with the null hypothesis for the null hypothesis to be rejected. However, that does not prove that the null hypothesis is false. The p-value does not, in itself, establish probabilities of hypotheses. Rather, it is a tool for deciding whether to reject the null hypothesis.
Misuse
According to the ASA, there is widespread agreement that p-values are often misused and misinterpreted. One practice that has been particularly criticized is accepting the alternative hypothesis for any p-value nominally less than 0.05 without other supporting evidence. Although p-values are helpful in assessing how incompatible the data are with a specified statistical model, contextual factors must also be considered, such as "the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis". Another concern is that the p-value is often misunderstood as being the probability that the null hypothesis is true.
Some statisticians have proposed abandoning p-values and focusing more on other inferential statistics, such as confidence intervals, likelihood ratios, or Bayes factors, but there is heated debate on the feasibility of these alternatives. Others have suggested removing fixed significance thresholds and interpreting p-values as continuous indices of the strength of evidence against the null hypothesis. Yet others have suggested reporting, alongside p-values, the prior probability of a real effect that would be required to obtain a false positive risk (i.e. the probability that there is no real effect) below a pre-specified threshold (e.g. 5%).
That said, in 2019 a task force convened by the ASA considered the use of statistical methods in scientific studies, specifically hypothesis tests and p-values, and their connection to replicability. It states that "Different measures of uncertainty can complement one another; no single measure serves all purposes", citing the p-value as one of these measures. They also stress that p-values can provide valuable information when considering the specific value as well as when compared to some threshold. In general, it stresses that "p-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data".
Calculation
Usually, $T$ is a test statistic. A test statistic is the output of a scalar function of all the observations. This statistic provides a single number, such as a t-statistic or an F-statistic. As such, the test statistic follows a distribution determined by the function used to define that test statistic and the distribution of the input observational data.
For the important case in which the data are hypothesized to be a random sample from a normal distribution, depending on the nature of the test statistic and the hypotheses of interest about its distribution, different null hypothesis tests have been developed. Some such tests are the z-test for hypotheses concerning the mean of a normal distribution with known variance, the t-test based on Student's t-distribution of a suitable statistic for hypotheses concerning the mean of a normal distribution when the variance is unknown, the F-test based on the F-distribution of yet another statistic for hypotheses concerning the variance. For data of other nature, for instance, categorical (discrete) data, test statistics might be constructed whose null hypothesis distribution is based on normal approximations to appropriate statistics obtained by invoking the central limit theorem for large samples, as in the case of Pearson's chi-squared test.
Thus computing a p-value requires a null hypothesis, a test statistic (together with deciding whether the researcher is performing a one-tailed test or a two-tailed test), and data. Even though computing the test statistic on given data may be easy, computing the sampling distribution under the null hypothesis, and then computing its cumulative distribution function (CDF) is often a difficult problem. Today, this computation is done using statistical software, often via numeric methods (rather than exact formulae), but, in the early and mid 20th century, this was instead done via tables of values, and one interpolated or extrapolated p-values from these discrete values. Rather than using a table of p-values, Fisher instead inverted the CDF, publishing a list of values of the test statistic for given fixed p-values; this corresponds to computing the quantile function (inverse CDF).
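As a minimal sketch of that recipe, the following Python code computes a one-sample z-statistic from made-up data (variance assumed known, so the z-test applies) and reads the p-value off the tail of the null sampling distribution:

```python
# Minimal sketch: test statistic -> null sampling distribution -> p-value.
# The sample values are hypothetical; the known-variance z-test is assumed.
import numpy as np
from scipy.stats import norm

data = np.array([0.4, 1.2, 0.3, 0.9, 1.5, 0.2, 0.7, 1.1])  # made-up sample
sigma = 1.0  # standard deviation assumed known under the model

# H0: mean = 0. Under H0 the statistic below is standard normal.
z = data.mean() / (sigma / np.sqrt(len(data)))

p_one_sided = norm.sf(z)           # survival function = 1 - CDF
p_two_sided = 2 * norm.sf(abs(z))
print(z, p_one_sided, p_two_sided)
```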
Example
Testing the fairness of a coin
As an example of a statistical test, an experiment is performed to determine whether a coin flip is fair (equal chance of landing heads or tails) or unfairly biased (one outcome being more likely than the other).
Suppose that the experimental results show the coin turning up heads 14 times out of 20 total flips. The full data would be a sequence of twenty times the symbol "H" or "T". The statistic on which one might focus could be the total number of heads. The null hypothesis is that the coin is fair, and coin tosses are independent of one another. If a right-tailed test is considered, which would be the case if one is actually interested in the possibility that the coin is biased towards falling heads, then the p-value of this result is the chance of a fair coin landing on heads at least 14 times out of 20 flips. That probability can be computed from binomial coefficients as
$$\Pr(14 \text{ heads}) + \Pr(15 \text{ heads}) + \cdots + \Pr(20 \text{ heads}) = \frac{1}{2^{20}} \left[ \binom{20}{14} + \binom{20}{15} + \cdots + \binom{20}{20} \right] = \frac{60\,460}{1\,048\,576} \approx 0.058.$$
This probability is the p-value, considering only extreme results that favor heads. This is called a one-tailed test. However, one might be interested in deviations in either direction, favoring either heads or tails; in that case the two-tailed p-value may instead be calculated. As the binomial distribution is symmetrical for a fair coin, the two-sided p-value is simply twice the single-sided p-value calculated above: the two-sided p-value is 0.115.
In the above example:
Null hypothesis (H0): The coin is fair, with Pr(heads) = 0.5.
Test statistic: Number of heads.
Alpha level (designated threshold of significance): 0.05.
Observation O: 14 heads out of 20 flips.
Two-tailed p-value of observation O given H0 = 2 × min(Pr(no. of heads ≥ 14 heads), Pr(no. of heads ≤ 14 heads)) = 2 × min(0.058, 0.978) = 2 × 0.058 = 0.115.
The Pr(no. of heads ≤ 14 heads) = 1 − Pr(no. of heads ≥ 14 heads) + Pr(no. of heads = 14) = 1 − 0.058 + 0.036 = 0.978; however, the symmetry of this binomial distribution makes it unnecessary to compute the smaller of the two probabilities. Here, the calculated p-value exceeds 0.05, meaning that the data falls within the range of what would happen 95% of the time, if the coin were fair. Hence, the null hypothesis is not rejected at the 0.05 level.
However, had one more head been obtained, the resulting p-value (two-tailed) would have been 0.0414 (4.14%), in which case the null hypothesis would be rejected at the 0.05 level.
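The numbers in this example can be reproduced with a short Python sketch using scipy (binomtest requires scipy 1.7 or later):

```python
# Check of the coin example: 14 heads in 20 flips of a fair coin.
from scipy.stats import binom, binomtest

n, k = 20, 14
p_one_sided = binom.sf(k - 1, n, 0.5)  # Pr(X >= 14), about 0.058
p_two_sided = 2 * min(binom.sf(k - 1, n, 0.5),
                      binom.cdf(k, n, 0.5))  # about 0.115
print(p_one_sided, p_two_sided)

# scipy's exact binomial test agrees (two-sided by default):
print(binomtest(k, n, 0.5).pvalue)   # about 0.115
print(binomtest(15, n, 0.5).pvalue)  # about 0.041 with one more head
```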
Optional stopping
The difference between the two meanings of "extreme" appears when we consider sequential hypothesis testing, or optional stopping, for the fairness of the coin. In general, optional stopping changes how the p-value is calculated. Suppose we design the experiment as follows:
Flip the coin twice. If both come up heads or both come up tails, end the experiment.
Else, flip the coin 4 more times.
This experiment has 7 types of outcomes: 2 heads, 2 tails, 5 heads 1 tail, ..., 1 head 5 tails. We now calculate the p-value of the "3 heads 3 tails" outcome.
If we use the test statistic heads/(heads + tails), then under the null hypothesis the p-value of "3 heads 3 tails" is exactly 1 for the two-sided p-value, exactly 19/32 for the one-sided left-tail p-value, and the same, 19/32, for the one-sided right-tail p-value.
If we consider every outcome that has equal or lower probability than "3 heads 3 tails" as "at least as extreme", then the p-value is exactly 1/2.
However, suppose we have planned to simply flip the coin 6 times no matter what happens, then the second definition of p-value would mean that the p-value of "3 heads 3 tails" is exactly 1.
Thus, the "at least as extreme" definition of p-value is deeply contextual and depends on what the experimenter planned to do even in situations that did not occur.
History
P-value computations date back to the 1700s, where they were computed for the human sex ratio at birth, and used to compute statistical significance compared to the null hypothesis of equal probability of male and female births. John Arbuthnot studied this question in 1710, and examined birth records in London for each of the 82 years from 1629 to 1710. In every year, the number of males born in London exceeded the number of females. Considering more male or more female births as equally likely, the probability of the observed outcome is $1/2^{82}$, or about 1 in 4,836,000,000,000,000,000,000,000; in modern terms, the p-value. This is vanishingly small, leading Arbuthnot to conclude that this was not due to chance, but to divine providence: "From whence it follows, that it is Art, not Chance, that governs." In modern terms, he rejected the null hypothesis of equally likely male and female births at the $p = 1/2^{82}$ significance level. This and other work by Arbuthnot is credited as "… the first use of significance tests …", the first example of reasoning about statistical significance, and "… perhaps the first published report of a nonparametric test …", specifically the sign test; see the history of the sign test for details.
The same question was later addressed by Pierre-Simon Laplace, who instead used a parametric test, modeling the number of male births with a binomial distribution.
The p-value was first formally introduced by Karl Pearson, in his Pearson's chi-squared test, using the chi-squared distribution and notated as capital P. The p-values for the chi-squared distribution (for various values of χ2 and degrees of freedom), now notated as P, were first calculated in tables published by Elderton and later collected into standard volumes of statistical tables.
Ronald Fisher formalized and popularized the use of the p-value in statistics, with it playing a central role in his approach to the subject. In his highly influential book Statistical Methods for Research Workers (1925), Fisher proposed the level p = 0.05, or a 1 in 20 chance of being exceeded by chance, as a limit for statistical significance, and applied this to a normal distribution (as a two-tailed test), thus yielding the rule of two standard deviations (on a normal distribution) for statistical significance (see 68–95–99.7 rule).
He then computed a table of values, similar to Elderton's but, importantly, reversed the roles of χ2 and p. That is, rather than computing p for different values of χ2 (and degrees of freedom n), he computed values of χ2 that yield specified p-values, specifically 0.99, 0.98, 0.95, 0.90, 0.80, 0.70, 0.50, 0.30, 0.20, 0.10, 0.05, 0.02, and 0.01. That allowed computed values of χ2 to be compared against cutoffs and encouraged the use of p-values (especially 0.05, 0.02, and 0.01) as cutoffs, instead of computing and reporting p-values themselves. The same type of tables were then compiled in later collections of statistical tables, which cemented the approach.
As an illustration of the application of p-values to the design and interpretation of experiments, in his following book The Design of Experiments (1935), Fisher presented the lady tasting tea experiment, which is the archetypal example of the p-value.
To evaluate a lady's claim that she (Muriel Bristol) could distinguish by taste how tea is prepared (first adding the milk to the cup, then the tea, or first tea, then milk), she was sequentially presented with 8 cups: 4 prepared one way, 4 prepared the other, and asked to determine the preparation of each cup (knowing that there were 4 of each). In that case, the null hypothesis was that she had no special ability, the test was Fisher's exact test, and the p-value was $1/\binom{8}{4} = 1/70 \approx 0.014$, so Fisher was willing to reject the null hypothesis (consider the outcome highly unlikely to be due to chance) if all were classified correctly. (In the actual experiment, Bristol correctly classified all 8 cups.)
Fisher reiterated the p = 0.05 threshold and explained its rationale.
He also applied this threshold to the design of experiments, noting that had only 6 cups been presented (3 of each), a perfect classification would have yielded a p-value of only $1/\binom{6}{3} = 1/20 = 0.05$, which would not have met this level of significance. Fisher also underlined the interpretation of p as the long-run proportion of values at least as extreme as the data, assuming the null hypothesis is true.
In later editions, Fisher explicitly contrasted the use of the p-value for statistical inference in science with the Neyman–Pearson method, which he terms "Acceptance Procedures". Fisher emphasizes that while fixed levels such as 5%, 2%, and 1% are convenient, the exact p-value can be used, and the strength of evidence can and will be revised with further experimentation. In contrast, decision procedures require a clear-cut decision, yielding an irreversible action, and the procedure is based on costs of error, which, he argues, are inapplicable to scientific research.
Related indices
The E-value can refer to two concepts, both of which are related to the p-value and both of which play a role in multiple testing. First, it corresponds to a generic, more robust alternative to the p-value that can deal with optional continuation of experiments. Second, it is also used to abbreviate "expect value", which is the expected number of times that one expects to obtain a test statistic at least as extreme as the one that was actually observed if one assumes that the null hypothesis is true. This expect-value is the product of the number of tests and the p-value.
The q-value is the analog of the p-value with respect to the positive false discovery rate. It is used in multiple hypothesis testing to maintain statistical power while minimizing the false positive rate.
The Probability of Direction (pd) is the Bayesian numerical equivalent of the p-value. It corresponds to the proportion of the posterior distribution that is of the median's sign, typically varying between 50% and 100%, and representing the certainty with which an effect is positive or negative.
Second-generation p-values extend the concept of p-values by not considering extremely small, practically irrelevant effect sizes as significant.
| Mathematics | Statistics and probability | null |
555119 | https://en.wikipedia.org/wiki/Displacement%20current | Displacement current | In electromagnetism, displacement current density is the quantity appearing in Maxwell's equations that is defined in terms of the rate of change of $\mathbf{D}$, the electric displacement field. Displacement current density has the same units as electric current density, and it is a source of the magnetic field just as actual current is. However it is not an electric current of moving charges, but a time-varying electric field. In physical materials (as opposed to vacuum), there is also a contribution from the slight motion of charges bound in atoms, called dielectric polarization.
The idea was conceived by James Clerk Maxwell in his 1861 paper On Physical Lines of Force, Part III in connection with the displacement of electric particles in a dielectric medium. Maxwell added displacement current to the electric current term in Ampère's circuital law. In his 1865 paper A Dynamical Theory of the Electromagnetic Field Maxwell used this amended version of Ampère's circuital law to derive the electromagnetic wave equation. This derivation is now generally accepted as a historical landmark in physics by virtue of uniting electricity, magnetism and optics into one single unified theory. The displacement current term is now seen as a crucial addition that completed Maxwell's equations and is necessary to explain many phenomena, most particularly the existence of electromagnetic waves.
Explanation
The electric displacement field is defined as:
$$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P}$$
where:
$\varepsilon_0$ is the permittivity of free space;
$\mathbf{E}$ is the electric field intensity; and
$\mathbf{P}$ is the polarization of the medium.
Differentiating this equation with respect to time defines the displacement current density, which therefore has two components in a dielectric (see also the "displacement current" section of the article "current density"):
$$\mathbf{J}_\mathrm{D} = \frac{\partial \mathbf{D}}{\partial t} = \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} + \frac{\partial \mathbf{P}}{\partial t}$$
The first term on the right hand side is present in material media and in free space. It doesn't necessarily come from any actual movement of charge, but it does have an associated magnetic field, just as a current does due to charge motion. Some authors apply the name displacement current to the first term by itself.
The second term on the right hand side, called polarization current density, comes from the change in polarization of the individual molecules of the dielectric material. Polarization results when, under the influence of an applied electric field, the charges in molecules have moved from a position of exact cancellation. The positive and negative charges in molecules separate, causing an increase in the state of polarization $\mathbf{P}$. A changing state of polarization corresponds to charge movement and so is equivalent to a current, hence the term "polarization current". Thus,
$$\mathbf{J}_\mathrm{P} = \frac{\partial \mathbf{P}}{\partial t}$$
This polarization is the displacement current as it was originally conceived by Maxwell. Maxwell made no special treatment of the vacuum, treating it as a material medium. For Maxwell, the effect of $\mathbf{P}$ was simply to change the relative permittivity $\varepsilon_r$ in the relation $\mathbf{D} = \varepsilon_r \varepsilon_0 \mathbf{E}$.
The modern justification of displacement current is explained below.
Isotropic dielectric case
In the case of a very simple dielectric material the constitutive relation holds:
$$\mathbf{D} = \varepsilon \mathbf{E}$$
where the permittivity $\varepsilon = \varepsilon_r \varepsilon_0$ is the product of:
$\varepsilon_0$, the permittivity of free space, or the electric constant; and
$\varepsilon_r$, the relative permittivity of the dielectric.
In the equation above, the use of $\varepsilon$ accounts for the polarization (if any) of the dielectric material.
The scalar value of displacement current may also be expressed in terms of electric flux:
$$I_\mathrm{D} = \varepsilon \frac{\partial \Phi_E}{\partial t}$$
The forms in terms of scalar $\varepsilon$ are correct only for linear isotropic materials. For linear non-isotropic materials, $\varepsilon$ becomes a matrix; even more generally, $\varepsilon$ may be replaced by a tensor, which may depend upon the electric field itself, or may exhibit frequency dependence (hence dispersion).
For a linear isotropic dielectric, the polarization is given by:
$$\mathbf{P} = \varepsilon_0 \chi_e \mathbf{E}$$
where $\chi_e$ is known as the susceptibility of the dielectric to electric fields. Note that
$$\varepsilon = \varepsilon_r \varepsilon_0 = (1 + \chi_e) \varepsilon_0.$$
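A minimal numeric sketch of these linear-dielectric relations, using an illustrative (hypothetical) susceptibility and field strength, confirms that $\varepsilon_0 \mathbf{E} + \mathbf{P}$ and $\varepsilon \mathbf{E}$ agree:

```python
# Minimal check of the linear-dielectric relations; values illustrative.
EPS0 = 8.8541878128e-12  # F/m, permittivity of free space

chi_e = 2.2           # hypothetical electric susceptibility
eps_r = 1 + chi_e     # relative permittivity
eps = eps_r * EPS0    # absolute permittivity

E = 1.0e4             # V/m, hypothetical applied field
P = EPS0 * chi_e * E  # polarization of the dielectric
D = EPS0 * E + P      # displacement field from the definition

print(D, eps * E)     # the two expressions for D agree
```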
Necessity
Some implications of the displacement current follow, which agree with experimental observation, and with the requirements of logical consistency for the theory of electromagnetism.
Generalizing Ampère's circuital law
Current in capacitors
An example illustrating the need for the displacement current arises in connection with capacitors with no medium between the plates. Consider the charging capacitor in the figure. The capacitor is in a circuit that causes equal and opposite charges to appear on the left plate and the right plate, charging the capacitor and increasing the electric field between its plates. No actual charge is transported through the vacuum between its plates. Nonetheless, a magnetic field exists between the plates as though a current were present there as well. One explanation is that a displacement current "flows" in the vacuum, and this current produces the magnetic field in the region between the plates according to Ampère's law:
where
$\oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{\ell}$ is the closed line integral around some closed curve $C$;
$\mathbf{B}$ is the magnetic field measured in teslas;
$\cdot$ is the vector dot product;
$\mathrm{d}\boldsymbol{\ell}$ is an infinitesimal vector line element along the curve $C$, that is, a vector with magnitude equal to the length element of $C$, and direction given by the tangent to the curve $C$;
$\mu_0$ is the magnetic constant, also called the permeability of free space; and
$I_\mathrm{D}$ is the net displacement current that passes through a small surface bounded by the curve $C$.
The magnetic field between the plates is the same as that outside the plates, so the displacement current must be the same as the conduction current in the wires, that is,
$$I_\mathrm{D} = I,$$
which extends the notion of current beyond a mere transport of charge.
Next, this displacement current is related to the charging of the capacitor. Consider the current in the imaginary cylindrical surface shown surrounding the left plate. A current, say $I$, passes outward through the left face of the cylinder, but no conduction current (no transport of real charges) crosses the right face $R$. Notice that the electric field $\mathbf{E}$ between the plates increases as the capacitor charges. That is, in a manner described by Gauss's law, assuming no dielectric between the plates:
$$Q(t) = \varepsilon_0 \oint_S \mathbf{E}(t) \cdot \mathrm{d}\mathbf{S},$$
where $S$ refers to the imaginary cylindrical surface. Assuming a parallel plate capacitor with uniform electric field, and neglecting fringing effects around the edges of the plates, charge conservation over the closed surface requires
$$0 = -I + \varepsilon_0 \frac{\mathrm{d}}{\mathrm{d}t}\left( E\, S_R \right),$$
where the first term has a negative sign because charge leaves through the left face (the enclosed charge is decreasing), the last term has a positive sign because the unit normal of the face $R$ points from left to right while the direction of the electric field is from right to left, and $S_R$ is the area of the face $R$. The electric field at the left face is zero because that face lies outside the capacitor. Under the assumption of a uniform electric field distribution inside the capacitor, the displacement current density $J_\mathrm{D}$ is found by dividing by the area of the face:
$$J_\mathrm{D} = \frac{I_\mathrm{D}}{S_R} = \varepsilon_0 \frac{\partial E}{\partial t},$$
where $I$ is the current leaving the cylindrical surface (which must equal $I_\mathrm{D}$) and $J_\mathrm{D}$ is the flow of charge per unit area into the cylindrical surface through the face $R$.
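As a quick numerical illustration (the specific numbers here are illustrative additions, not part of the original argument): for plates of area $A = 0.01\ \mathrm{m}^2$ with the field between them rising at $\partial E / \partial t = 10^{12}\ \mathrm{V/(m \cdot s)}$, the displacement current is
$$I_\mathrm{D} = \varepsilon_0 A \frac{\partial E}{\partial t} = (8.854 \times 10^{-12}\ \mathrm{F/m})(0.01\ \mathrm{m}^2)(10^{12}\ \mathrm{V/(m \cdot s)}) \approx 0.089\ \mathrm{A},$$
which is exactly the conduction current the leads must carry to charge the plates at that rate.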
Combining these results, the magnetic field is found using the integral form of Ampère's law with an arbitrary choice of contour, provided the displacement current density term is added to the conduction current density (the Ampère–Maxwell equation):
$$\oint_{\partial S} \mathbf{B} \cdot \mathrm{d}\boldsymbol{\ell} = \mu_0 \int_S \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) \cdot \mathrm{d}\mathbf{S}.$$
This equation says that the integral of the magnetic field around the edge $\partial S$ of a surface is equal to the integrated current through any surface with the same edge, plus the displacement current term through whichever surface.
As depicted in the figure to the right, the current crossing surface $S_1$ is entirely conduction current. Applying the Ampère–Maxwell equation to surface $S_1$ yields:
$$B = \frac{\mu_0 I}{2 \pi r}.$$
However, the current crossing surface $S_2$ is entirely displacement current. Applying this law to surface $S_2$, which is bounded by exactly the same curve $\partial S$, but lies between the plates, produces:
$$B = \frac{\mu_0 I_\mathrm{D}}{2 \pi r} = \frac{\mu_0 I}{2 \pi r}.$$
Any surface that intersects the wire has current passing through it, so Ampère's law gives the correct magnetic field. However, a second surface bounded by the same edge could be drawn passing between the capacitor plates, therefore having no conduction current passing through it. Without the displacement current term, Ampère's law would give zero magnetic field for this surface, so the magnetic field would depend on the surface chosen for integration: an inconsistent result. Thus the displacement current term is necessary as a second source term, which gives the correct magnetic field when the surface of integration passes between the capacitor plates. Because the current is increasing the charge on the capacitor's plates, the electric field between the plates is increasing, and the rate of change of the electric field gives the correct value for the field $B$ found above.
Mathematical formulation
In a more mathematical vein, the same results can be obtained from the underlying differential equations. Consider for simplicity a non-magnetic medium where the relative magnetic permeability is unity, and the complication of magnetization current (bound current) is absent, so that $\mu = \mu_0$ and $\mathbf{J} = \mathbf{J}_\mathrm{f}$.
The current leaving a volume must equal the rate of decrease of charge in that volume. In differential form this continuity equation becomes:
$$\nabla \cdot \mathbf{J}_\mathrm{f} = -\frac{\partial \rho_\mathrm{f}}{\partial t},$$
where the left side is the divergence of the free current density and the right side is the rate of decrease of the free charge density. However, Ampère's law in its original form states:
$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J}_\mathrm{f},$$
which implies that the divergence of the current term vanishes, contradicting the continuity equation. (Vanishing of the divergence is a result of the mathematical identity that states the divergence of a curl is always zero.) This conflict is removed by addition of the displacement current, as then:
$$\nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J}_\mathrm{f} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right)$$
and
$$\nabla \cdot (\nabla \times \mathbf{B}) = 0 = \mu_0 \left( \nabla \cdot \mathbf{J}_\mathrm{f} + \varepsilon_0 \frac{\partial}{\partial t} \nabla \cdot \mathbf{E} \right),$$
which is in agreement with the continuity equation because of Gauss's law:
$$\nabla \cdot \mathbf{E} = \frac{\rho_\mathrm{f}}{\varepsilon_0}.$$
Wave propagation
The added displacement current also leads to wave propagation by taking the curl of the equation for the magnetic field. In vacuum the displacement current density is
$$\mathbf{J}_\mathrm{D} = \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$$
Substituting this form for $\mathbf{J}$ into Ampère's law, and assuming there is no bound or free current density contributing to $\mathbf{J}$, gives the result:
$$\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$$
However,
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},$$
leading to the wave equation:
$$\nabla^2 \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{B}}{\partial t^2},$$
where use is made of the vector identity that holds for any vector field $\mathbf{V}$:
$$\nabla \times (\nabla \times \mathbf{V}) = \nabla (\nabla \cdot \mathbf{V}) - \nabla^2 \mathbf{V},$$
and the fact that the divergence of the magnetic field is zero. An identical wave equation can be found for the electric field by taking the curl:
$$\nabla \times (\nabla \times \mathbf{E}) = -\frac{\partial}{\partial t} \nabla \times \mathbf{B} = -\mu_0 \frac{\partial}{\partial t} \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right).$$
If $\mathbf{J}$, $\mathbf{P}$, and $\rho$ are zero, the result is:
$$\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}.$$
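Evaluating the coefficient of the wave equation fixes the propagation speed. This check (a standard one, sketched here with modern SI values) is what identified the waves with light:
$$c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} = \frac{1}{\sqrt{(4\pi \times 10^{-7}\ \mathrm{H/m})(8.854 \times 10^{-12}\ \mathrm{F/m})}} \approx 2.998 \times 10^8\ \mathrm{m/s},$$
the measured speed of light.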
The electric field can be expressed in the general form:
$$\mathbf{E} = -\nabla \varphi - \frac{\partial \mathbf{A}}{\partial t},$$
where $\varphi$ is the electric potential (which can be chosen to satisfy Poisson's equation) and $\mathbf{A}$ is a vector potential (i.e. the magnetic vector potential, not to be confused with surface area, as $A$ is denoted elsewhere). The $-\nabla \varphi$ component on the right hand side is the Gauss's law component, and this is the component that is relevant to the conservation of charge argument above. The second term on the right-hand side, $-\partial \mathbf{A} / \partial t$, is the one relevant to the electromagnetic wave equation, because it is the term that contributes to the curl of $\mathbf{E}$. Because of the vector identity that says the curl of a gradient is zero, $-\nabla \varphi$ does not contribute to $\nabla \times \mathbf{E}$.
History and interpretation
Maxwell's displacement current was postulated in part III of his 1861 paper On Physical Lines of Force. Few topics in modern physics have caused as much confusion and misunderstanding as that of displacement current. This is in part due to the fact that Maxwell used a sea of molecular vortices in his derivation, while modern textbooks operate on the basis that displacement current can exist in free space. Maxwell's derivation is unrelated to the modern-day derivation for displacement current in the vacuum, which is based on consistency between Ampère's circuital law for the magnetic field and the continuity equation for electric charge.
Maxwell states his purpose at the outset (Part I, p. 161), and he is careful to point out that the treatment is one of analogy. In Part III, in relation to displacement current, his wording shows that he was driving at magnetization, even though the same introduction clearly talks about dielectric polarization.
Maxwell compared the speed of electricity measured by Wilhelm Eduard Weber and Rudolf Kohlrausch (193,088 miles per second) and the speed of light determined by the Fizeau experiment (195,647 miles per second). Based on the near equality of these two speeds, he concluded that "light consists of transverse undulations in the same medium that is the cause of electric and magnetic phenomena."
But although the above remarks point towards a magnetic explanation for displacement current (for example, based upon the divergence of the above curl equation), Maxwell's explanation ultimately stressed linear polarization of dielectrics. With some change of symbols and units, combined with the results deduced earlier, his equations take the familiar form relating the displacement current between the plates of a parallel plate capacitor with uniform electric field to the rate of change of that field, neglecting fringing effects around the edges of the plates.
When it came to deriving the electromagnetic wave equation from displacement current in his 1865 paper 'A Dynamical Theory of the Electromagnetic Field', he got around the problem of the non-zero divergence associated with Gauss's law and dielectric displacement by eliminating the Gauss term and deriving the wave equation exclusively for the solenoidal magnetic field vector.
Maxwell's emphasis on polarization diverted attention towards the electric capacitor circuit, and led to the common belief that Maxwell conceived of displacement current so as to maintain conservation of charge in an electric capacitor circuit. There are a variety of debatable notions about Maxwell's thinking, ranging from his supposed desire to perfect the symmetry of the field equations to the desire to achieve compatibility with the continuity equation.
| Physical sciences | Electrodynamics | Physics |
555390 | https://en.wikipedia.org/wiki/Type%20conversion | Type conversion | In computer science, type conversion, type casting, type coercion, and type juggling are different ways of changing an expression from one data type to another. An example would be the conversion of an integer value into a floating point value or its textual representation as a string, and vice versa. Type conversions can take advantage of certain features of type hierarchies or data representations. Two important aspects of a type conversion are whether it happens implicitly (automatically) or explicitly, and whether the underlying data representation is converted from one representation into another, or a given representation is merely reinterpreted as the representation of another data type. In general, both primitive and compound data types can be converted.
Each programming language has its own rules on how types can be converted. Languages with strong typing typically do little implicit conversion and discourage the reinterpretation of representations, while languages with weak typing perform many implicit conversions between data types. Weakly typed languages often allow forcing the compiler to arbitrarily interpret a data item as having different representations—this can be a non-obvious programming error, or a technical method to directly deal with underlying hardware.
In most languages, the word coercion is used to denote an implicit conversion, either during compilation or during run time. For example, in an expression mixing integer and floating point numbers (like 5 + 0.1), the compiler will automatically convert integer representation into floating point representation so fractions are not lost. Explicit type conversions are either indicated by writing additional code (e.g. adding type identifiers or calling built-in routines) or by coding conversion routines for the compiler to use when it otherwise would halt with a type mismatch.
In most ALGOL-like languages, such as Pascal, Modula-2, Ada and Delphi, conversion and casting are distinctly different concepts. In these languages, conversion refers to either implicitly or explicitly changing a value from one data type storage format to another, e.g. a 16-bit integer to a 32-bit integer. The storage needs may change as a result of the conversion, including a possible loss of precision or truncation. The word cast, on the other hand, refers to explicitly changing the interpretation of the bit pattern representing a value from one type to another. For example, 32 contiguous bits may be treated as an array of 32 Booleans, a 4-byte string, an unsigned 32-bit integer or an IEEE single precision floating point value. Because the stored bits are never changed, the programmer must know low level details such as representation format, byte order, and alignment needs, to meaningfully cast.
In the C family of languages and ALGOL 68, the word cast typically refers to an explicit type conversion (as opposed to an implicit conversion), causing some ambiguity about whether this is a re-interpretation of a bit-pattern or a real data representation conversion. More important is the multitude of ways and rules that apply to what data type (or class) is located by a pointer and how a pointer may be adjusted by the compiler in cases like object (class) inheritance.
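To make the distinction concrete, here is a small C sketch (an illustration added here, not from any cited source; it assumes float is a 32-bit IEEE 754 type) contrasting a value-converting cast with a reinterpretation of the same bit pattern:
#include <stdio.h>
#include <string.h>
#include <stdint.h>
int main(void)
{
    float f = 1.0f;
    int converted = (int)f;          /* value conversion: produces the integer 1 */
    uint32_t bits;                   /* bit-pattern reinterpretation: copy the raw bits */
    memcpy(&bits, &f, sizeof bits);
    printf("converted value: %d\n", converted);            /* 1 */
    printf("raw bit pattern: 0x%08X\n", (unsigned)bits);   /* 0x3F800000 on IEEE 754 */
    return 0;
}
The same stored value yields very different integers depending on whether its representation is converted or merely reinterpreted, which is exactly the distinction drawn above.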
Explicit casting in various languages
Ada
Ada provides a generic library function Unchecked_Conversion.
C-like languages
Implicit type conversion
Implicit type conversion, also known as coercion or type juggling, is an automatic type conversion by the compiler. Some programming languages allow compilers to provide coercion; others require it.
In a mixed-type expression, data of one or more subtypes can be converted to a supertype as needed at runtime so that the program will run correctly. For example, the following is legal C language code:
double d;
long l;
int i;
if (d > i) d = i;
if (i > l) l = i;
if (d == l) d *= 2;
Although d, l, and i belong to different data types, they will be automatically converted to equal data types each time a comparison or assignment is executed. This behavior should be used with caution, as unintended consequences can arise. Data can be lost when converting representations from floating-point to integer, as the fractional components of the floating-point values will be truncated (rounded toward zero). Conversely, precision can be lost when converting representations from integer to floating-point, since a floating-point type may be unable to exactly represent all possible values of some integer type. For example, float might be an IEEE 754 single precision type, which cannot represent the integer 16777217 exactly, while a 32-bit integer type can. This can lead to unintuitive behavior, as demonstrated by the following code:
#include <stdio.h>
int main(void)
{
int i_value = 16777217;
float f_value = 16777216.0;
printf("The integer is: %d\n", i_value);
printf("The float is: %f\n", f_value);
printf("Their equality: %d\n", i_value == f_value);
}
On compilers that implement floats as IEEE single precision, and ints as at least 32 bits, this code will give this peculiar print-out:
The integer is: 16777217
The float is: 16777216.000000
Their equality: 1
Note that 1 represents equality in the last line above. This odd behavior is caused by an implicit conversion of i_value to float when it is compared with f_value. The conversion causes loss of precision, which makes the values equal before the comparison.
Important takeaways:
double to int causes truncation, i.e., removal of the fractional part.
double to float causes rounding of digits.
long to int causes dropping of excess higher-order bits (each case is demonstrated in the sketch below).
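A minimal C demonstration of the three cases (an added sketch; it assumes a typical platform with a 32-bit int and IEEE 754 floats, and note that an out-of-range integer narrowing is formally implementation-defined):
#include <stdio.h>
int main(void)
{
    double d = -3.9;
    int i = (int)d;               /* double to int: truncated toward zero -> -3 */
    double big = 16777217.0;      /* not exactly representable in single precision */
    float f = (float)big;         /* double to float: rounded -> 16777216.0 */
    long long ll = 0x100000001LL; /* needs more than 32 bits */
    int j = (int)ll;              /* long long to int: excess high-order bits dropped -> 1 */
    printf("%d %.1f %d\n", i, f, j);
    return 0;
}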
Type promotion
One special case of implicit type conversion is type promotion, where an object is automatically converted into another data type representing a superset of the original type. Promotions are commonly used with types smaller than the native type of the target platform's arithmetic logic unit (ALU), before arithmetic and logical operations, to make such operations possible, or more efficient if the ALU can work with more than one type. C and C++ perform such promotion for objects of Boolean, character, wide character, enumeration, and short integer types which are promoted to int, and for objects of type float, which are promoted to double. Unlike some other type conversions, promotions never lose precision or modify the value stored in the object.
In Java:
int x = 3;
double y = 3.5;
System.out.println(x + y); // The output will be 6.5
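The same kind of promotion occurs in C arithmetic; a minimal sketch (an added illustration, assuming the common case where int is 32 bits and wider than char):
#include <stdio.h>
int main(void)
{
    char a = 100, b = 100;
    /* Both operands are promoted to int before the addition, so the
       result is 200 even though 200 overflows a signed 8-bit char. */
    int sum = a + b;
    printf("sum = %d\n", sum);
    printf("sizeof(a + b) = %zu\n", sizeof(a + b)); /* the size of int, typically 4 */
    return 0;
}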
Explicit type conversion
Explicit type conversion, also called type casting, is a type conversion which is explicitly defined within a program (instead of being done automatically according to the rules of the language for implicit type conversion). It is requested by the user in the program.
double da = 3.3;
double db = 3.3;
double dc = 3.4;
int result = (int)da + (int)db + (int)dc; // result == 9
// if implicit conversion would be used (as with "result = da + db + dc"), result would be equal to 10
There are several kinds of explicit conversion.
checked: Before the conversion is performed, a runtime check is done to see if the destination type can hold the source value. If not, an error condition is raised.
unchecked: No check is performed. If the destination type cannot hold the source value, the result is undefined.
bit pattern: The raw bit representation of the source is copied verbatim, and it is re-interpreted according to the destination type. This can also be achieved via aliasing. (A hand-written checked conversion is sketched below.)
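C has no built-in checked conversion, but the checked pattern is easy to write by hand; this sketch (an added illustration, purely hypothetical code) narrows a long to an int only after verifying that the value fits:
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
/* Checked narrowing: raise an error condition instead of silently
   producing an implementation-defined truncated result. */
int long_to_int_checked(long value)
{
    if (value < INT_MIN || value > INT_MAX) {
        fprintf(stderr, "long_to_int_checked: %ld is out of range\n", value);
        exit(EXIT_FAILURE);
    }
    return (int)value;  /* safe: the range check has already passed */
}
int main(void)
{
    printf("%d\n", long_to_int_checked(42L));
    return 0;
}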
In object-oriented programming languages, objects can also be downcast: a reference of a base class is cast to one of its derived classes.
C# and C++
In C#, type conversion can be made in a safe or unsafe (i.e., C-like) manner, the former called checked type cast.
Animal animal = new Cat();
Bulldog b = (Bulldog) animal; // checked cast: if (animal is Bulldog), b = (Bulldog) animal, else an InvalidCastException is thrown
b = animal as Bulldog; // if (animal is Bulldog), b = (Bulldog) animal, else b = null
animal = null;
b = animal as Bulldog; // b == null
In C++ a similar effect can be achieved using C++-style cast syntax.
Animal* animal = new Cat;
Bulldog* b = static_cast<Bulldog*>(animal); // compiles only if either Animal or Bulldog is derived from the other (or same)
b = dynamic_cast<Bulldog*>(animal); // if (animal is Bulldog), b = (Bulldog*) animal, else b = nullptr
Bulldog& br = dynamic_cast<Bulldog&>(*animal); // as above, but an exception (std::bad_cast) is thrown where the pointer cast would return nullptr
// this is not seen in code where exception handling is avoided
delete animal; // always free resources
animal = nullptr;
b = dynamic_cast<Bulldog*>(animal); // b == nullptr
Eiffel
In Eiffel the notion of type conversion is integrated into the rules of the type system. The Assignment Rule says that an assignment, such as:
x := y
is valid if and only if the type of its source expression, y in this case, is compatible with the type of its target entity, x in this case. In this rule, compatible with means that the type of the source expression either conforms to or converts to that of the target. Conformance of types is defined by the familiar rules for polymorphism in object-oriented programming. For example, in the assignment above, the type of y conforms to the type of x if the class upon which y is based is a descendant of that upon which x is based.
Definition of type conversion in Eiffel
The actions of type conversion in Eiffel, specifically converts to and converts from are defined as:
A type based on a class CU converts to a type T based on a class CT (and T converts from U) if either
CT has a conversion procedure using U as a conversion type, or
CU has a conversion query listing T as a conversion type
Example
Eiffel is a fully compliant language for Microsoft .NET Framework. Before development of .NET, Eiffel already had extensive class libraries. Using the .NET type libraries, particularly with commonly used types such as strings, poses a conversion problem. Existing Eiffel software uses the string classes (such as STRING_8) from the Eiffel libraries, but Eiffel software written for .NET must use the .NET string class (System.String) in many cases, for example when calling .NET methods which expect items of the .NET type to be passed as arguments. So, the conversion of these types back and forth needs to be as seamless as possible.
my_string: STRING_8 -- Native Eiffel string
my_system_string: SYSTEM_STRING -- Native .NET string
...
my_string := my_system_string
In the code above, two strings are declared, one of each different type (SYSTEM_STRING is the Eiffel compliant alias for System.String). Because System.String does not conform to STRING_8, then the assignment above is valid only if System.String converts to STRING_8.
The Eiffel class STRING_8 has a conversion procedure make_from_cil for objects of type System.String. Conversion procedures are also always designated as creation procedures (similar to constructors). The following is an excerpt from the STRING_8 class:
class STRING_8
...
create
make_from_cil
...
convert
make_from_cil ({SYSTEM_STRING})
...
The presence of the conversion procedure makes the assignment:
my_string := my_system_string
semantically equivalent to:
create my_string.make_from_cil (my_system_string)
in which my_string is constructed as a new object of type STRING_8 with content equivalent to that of my_system_string.
To handle an assignment with original source and target reversed:
my_system_string := my_string
the class STRING_8 also contains a conversion query to_cil which will produce a System.String from an instance of STRING_8.
class STRING_8
...
create
make_from_cil
...
convert
make_from_cil ({SYSTEM_STRING})
to_cil: {SYSTEM_STRING}
...
The assignment:
my_system_string := my_string
then, becomes equivalent to:
my_system_string := my_string.to_cil
In Eiffel, the setup for type conversion is included in the class code, but then appears to happen as automatically as implicit type conversion in client code. This includes not just assignments but other types of attachments as well, such as argument (parameter) substitution.
Rust
Rust provides no implicit type conversion (coercion) between primitive types. But, explicit type conversion (casting) can be performed using the as keyword.
let x = 1000;
println!("1000 as a u16 is: {}", x as u16);
Type assertion
A related concept in static type systems is the type assertion, which instructs the compiler to treat an expression as being of a certain type, disregarding the type it would otherwise infer. Type assertions may be safe (a runtime check is performed) or unsafe. A type assertion does not convert the value from one data type to another.
TypeScript
In TypeScript, a type assertion is done by using the as keyword:
const myCanvas = document.getElementById("main_canvas") as HTMLCanvasElement;
In the above example, document.getElementById is declared to return an HTMLElement, but the programmer may know that in this case it will always return an HTMLCanvasElement, which is a subtype of HTMLElement. If that is not actually the case, subsequent code which relies on the behaviour of HTMLCanvasElement will not perform correctly, as in TypeScript there is no runtime checking for type assertions.
In TypeScript, there is no general way to check if a value is of a certain type at runtime, as there is no runtime type support. However, it is possible to write a user-defined function with which the programmer tells the compiler whether a value is of a certain type or not. Such a function is called a type guard, and is declared with a return type of x is Type, where x is a parameter or this, in place of boolean.
This allows unsafe type assertions to be contained in the checker function instead of littered around the codebase.
Go
In Go, a type assertion can be used to access a concrete type value from an interface value. The assertion is checked at runtime: the single-return form panics if the value is not of the asserted concrete type, while the two-return form instead reports failure through a boolean and yields the zero value.
t := i.(T)
t2, ok := i.(T) // comma-ok form: ok is false and t2 is the zero value of T if i does not hold a T
The first form asserts that i holds a value of concrete type T; if it does not, it panics.
Implicit casting using untagged unions
Many programming languages support union types which can hold a value of multiple types. Untagged unions are provided in some languages with loose type-checking, such as C and PL/I, but also in the original Pascal. These can be used to interpret the bit pattern of one type as a value of another type.
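For instance, in C a union member written as one type can be read back as another, reinterpreting the stored bits (an added sketch; it assumes float and unsigned int are both 32 bits wide, and note that the same read is undefined behaviour in C++):
#include <stdio.h>
union pun {
    float f;
    unsigned int u;
};
int main(void)
{
    union pun p;
    p.f = -1.5f;
    /* Reading a member other than the one last written reinterprets
       the stored bit pattern rather than converting the value. */
    printf("bits of -1.5f: 0x%08X\n", p.u);  /* 0xBFC00000 on IEEE 754 systems */
    return 0;
}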
Security issues
In hacking, typecasting is the misuse of type conversion to temporarily change a variable's data type from how it was originally defined. This provides opportunities for attackers, because after a variable is "typecast" to a different data type, the compiler will treat the hacked variable as the new data type for that specific operation.
| Technology | Programming languages | null |
555505 | https://en.wikipedia.org/wiki/Blender | Blender | A blender (sometimes called a mixer or liquidiser in British English) is a kitchen and laboratory appliance used to mix, crush, purée or emulsify food and other substances. A stationary blender consists of a blender container with a rotating metal or plastic blade at the bottom, powered by an electric motor that is in the base. Some powerful models can also crush ice and other frozen foods. The newer immersion blender configuration has a motor on top connected by a shaft to a rotating blade at the bottom, which can be used with any container.
Characteristics
Different blenders have different functions and features but product testing indicates that many blenders, even the less expensive ones, are useful for meeting many consumer needs. Features which consumers consider when purchasing a blender include the following:
large visible measurement marks
ease of use
low noise during usage
power usage (typically 300–1000 watts)
ease of cleaning
option for quick "pulse" blending
Countertop blenders
Countertop blenders use a 1–2 liters (4–8 cups) blending container made of glass, plastic, or stainless steel. Glass blenders are heavier and more stable. Plastic is prone to scratching and absorbing the smell of blended food. Stainless steel is preferred for its appearance, but limits visibility of the food as it is blended.
Countertop blenders typically offer 2–16 speed settings, but having more choices in speed settings is not an indication of increased utility for all users.
In cases where the blades are removable, the container should have an O-ring or gasket between the body of the container and the base to seal the container and prevent the contents from leaking. The blending container is generally shaped in a way that encourages material to circulate through the blades, rather than simply spinning around.
The container rests upon a base that contains a motor for turning the blade assembly and has controls on its surface. Most modern blenders offer a number of possible speeds. Low-powered blenders require the addition of some liquid to operate correctly. In these blenders, the liquid helps move the solids around the jar, bringing them in contact with the blades. The blades create a whirlpool effect which moves solids from top to bottom, ensuring even contact with the blade. This creates a homogeneous mixture. High-powered blenders are capable of milling grains and crushing ice without such assistance.
Immersion blenders
The hand-held immersion blender, stick blender, hand blender or wand blender has no container of its own, but instead has a mixing head with rotating blades that can be immersed in a container. Immersion blenders are convenient for homogenizing volumes that are too large to fit in the bowl of a stationary blender or as in the case of soups, are too hot to be safely poured into the bowl.
The operation of an immersion blender requires that the user hold down a switch for as long as the blades operate, which can be tiresome for the user.
Handheld blenders are particularly used for small and specific tasks but do not have as many uses as a countertop blender.
Applications
Countertop blenders are designed to mix, purée, and chop food. Their strength is such that the ability to crush ice is an expected feature.
Blenders are used both in home and commercial kitchens for various purposes, including to:
Grind semi-solid ingredients, such as fresh fruits and vegetables, into smooth purées
Blend ice cream, milk, and sweet sauces to make milkshakes
Mix and crush ice in cocktails such as the Zombie, piña colada and frozen margarita
Crush ice and other ingredients in non-alcoholic drinks such as frappuccinos and smoothies
Emulsify mixtures
Reduce small solids such as spices and seeds to smaller solids or completely powder or nut butter
Blend mixtures of powders, granules, and/or liquids thoroughly
Help dissolve solids into liquids
Blenders also have a variety of applications in microbiology and food science. In addition to standard food-type blenders, there are a variety of other configurations made for laboratories.
Development
North America
The Polish-American chemist Stephen Poplawski, the owner of the Stevens Electric Company, began designing drink mixers in 1919 under a contract with Arnold Electric Company, and patented the drink mixer in 1922 which had been designed to make Horlicks malted milkshakes at soda fountains. He also introduced the liquefier blender in 1922.
In the 1930s, Louis Hamilton, Chester Beach and Fred Osius produced Poplawski's invention under the brand name Hamilton Beach Company. Fred Osius improved the appliance, making another kind of blender. He approached Fred Waring, a popular musician, who financed and promoted the "Miracle Mixer", released in 1933. However, the appliance had some problems to be solved about the seal of the jar and the knife axis, so Fred Waring redesigned the appliance and released his own blender in 1937, the Waring Blendor with which Waring popularized the smoothie in the 1940s. Waring Products was sold to Dynamics Corporation of America in 1957 and was acquired by Conair in 1998. Waring long used the trademarked spelling "Blendor" for its product; the trademark has expired.
Also in 1937, W.G. Barnard, founder of Vitamix, introduced a product called "The Blender," which was functionally a reinforced blender with a stainless steel jar, instead of the Pyrex glass jar used by Waring.
In 1946 John Oster, owner of the Oster barber equipment company, bought Stevens Electric Co. and designed its own blender, which Oster commercialized under the trademark Osterizer. Oster was bought by Sunbeam Products in 1960, which released various types of blenders, such as the Imperial series, and still makes the traditional Osterizer blender.
Europe
In Europe, the Swiss Traugott Oertli developed a blender based on the technical construction and design style conception of the first Waring Blendor (1937–1942), releasing the Turmix Standmixer in 1943. Based on the blender, Oertli also developed another kind of appliance to extract juice from any juicy fruit or vegetable, the Turmix Juicer, which was also available as a separate accessory for use with the Turmix blender, the juicer Turmix Junior. Turmix promoted the benefits of drinking natural juices made from fruits and vegetables, with recipes using juices to promote its blender and juicer. After World War II other companies released more blenders in Europe; the first was the popular Starmix Standmixer (1948), from the German company Electrostar, which had numerous accessories, like a coffee grinder, cake mixer, ice cream maker, food processor, thermic jar, milk centrifuge, juicer and meat grinder; and the Braun Multimix (1950) from Max Braun, which had an attachment with a glass bowl for making bread batter and a juicer centrifuge like the one developed by Turmix.
South America
In Brazil, Waldemar Clemente, a former General Electric employee and owner of the Walita electric appliance company since 1939, designed a blender based on the Turmix Standmixer and released the Walita Neutron blender in 1944. Clemente also created the name liquidificador, which has designated a blender in Brazil ever since. Soon thereafter, Walita acquired the Turmix patents in Brazil and also released the Turmix juicer, calling it the Centrífuga Walita, as well as the other Turmix accessories for use with the blender motor, such as fruit peelers, a grinder, a crusher and a batter mixer. Using the same marketing strategy as Turmix in Europe, Walita passed the million-blenders-sold mark a few years later, in the early 1950s. Walita was the first manufacturer to release a wide range of blenders in the 1940s. In the 1950s, Walita made blenders for Siemens, Turmix, Philips, and Sears (Kenmore), among others. In the 1960s Royal Philips Co. approached Walita, acquiring the company in 1971; Walita became Royal Philips' kitchen appliance development division specializing in blenders, which are sold under the Philips brand outside Brazil.
The Austrian immigrant Hanz Arno, owner of an electric motor manufacturer in Brazil since the 1940s, released a blender in 1947, based on the blenders made by Hamilton Beach and Oster. The Liquidificador Arno was exported to other South American countries. As Arno held Electrolux stock, that brand was used on the blender in some countries. In 1997 Arno was bought by Groupe SEB, owner of Moulinex, T-Fal, Rowenta, and other home appliance brands.
Increased versatility
With the rising popularity of smoothies, Frappuccinos and other frozen drinks prepared in front of the customer, new models of commercial blenders often include sound-reducing enclosures and computerized controls.
Specialized blenders for making smoothies are becoming popular, chiefly resembling an ordinary model with a spigot added for quick serving. Some models also feature a gimballed stirring rod mounted on the lid, constructed so that mixtures can be stirred whilst the machine is running with no chance of the stirrer fouling the blades.
In 1996 Tom Dickson, founder and CEO of Blendtec, introduced the WildSide blending jar—a unique design that eliminated the need for stir sticks and plungers to make thicker blends. The technology was so effective that Vita-Mix decided to use the design in the company's commercial blending containers. In 2010 the United States court system concluded that Vita-Mix had willfully infringed the patents, ultimately awarding Blendtec $24 million in damages.
Mechanical operation
A blender consists of a housing, motor, blades, and food container. A fan-cooled electric motor is secured into the housing by way of vibration dampers, and a small output shaft penetrates the upper housing and meshes with the blade assembly. Usually, a small rubber washer provides a seal around the output shaft to prevent liquid from entering the motor. Most blenders today have multiple speeds. As a typical blender has no gearbox, the multiple speeds are often implemented using a universal motor with multiple stator windings and/or multi-tapped stator windings; in a blender with electromechanical controls, the button (or other electrical switching device or position) for each different speed connects a different stator winding/tap or combination thereof. Each different combination of energized windings produces a different torque from the motor, which yields a different equilibrium speed in balance against the drag (resistance to rotation) of the blade assembly in contact with the material inside the food container. A notable exception from the mid-1960s is the Oster Model 412 Classic VIII (with the single knob) providing the lowest speed (Stir) using the aforementioned winding tap method but furnishing higher speeds (the continuously variable higher speed range is marked Puree to Liquify) by means of a mechanical speed governor that balances the force provided by flyweights against a spring force varied by the control knob when it is switched into the higher speed range.
In culture
In 1949 the company Vitamix advertised their blender in one of the first television infomercials. The sales pitch lasted for 25 minutes, suggesting that the blender be used to make bread crumbs, potato pancakes, laxative spinach drinks, and a dessert beverage featuring entire raw eggs and their shells, which the host announced would be enjoyed like malted milk.
| Technology | Household appliances | null |
555768 | https://en.wikipedia.org/wiki/Mean-field%20theory | Mean-field theory | In physics and probability theory, mean-field theory (MFT), or self-consistent field theory, studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other.
The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost.
MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium.
Origins
The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on Bethe lattice, Landau theory, Curie-Weiss law for magnetic susceptibility, Flory–Huggins solution theory, and Scheutjens–Fleer theory.
Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original problem solvable and open to calculation, and in some cases MFT may give very accurate approximations.
In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means that an MFT system has no fluctuations, but this coincides with the idea that one is replacing all interactions with a "mean field".
Quite often, MFT provides a convenient launch point for studying higher-order fluctuations. For example, when computing the partition function, studying the combinatorics of the interaction terms in the Hamiltonian can sometimes at best produce perturbation results or Feynman diagrams that correct the mean-field approximation.
Validity
In general, dimensionality plays an active role in determining whether a mean-field approach will work for any particular problem. There is sometimes a critical dimension above which MFT is valid and below which it is not.
Heuristically, many interactions are replaced in MFT by one effective interaction. So if the field or particle exhibits many random interactions in the original system, they tend to cancel each other out, so the mean effective interaction and MFT will be more accurate. This is true in cases of high dimensionality, when the Hamiltonian includes long-range forces, or when the particles are extended (e.g. polymers). The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, often depending upon the number of spatial dimensions in the system of interest.
Formal approach (Hamiltonian)
The formal basis for mean-field theory is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian
$$\mathcal{H} = \mathcal{H}_0 + \Delta \mathcal{H}$$
has the following upper bound:
$$F \le F_0 \equiv \langle \mathcal{H} \rangle_0 - T S_0,$$
where $S_0$ is the entropy, and $F$ and $F_0$ are Helmholtz free energies. The average $\langle \cdots \rangle_0$ is taken over the equilibrium ensemble of the reference system with Hamiltonian $\mathcal{H}_0$. In the special case that the reference Hamiltonian is that of a non-interacting system and can thus be written as
$$\mathcal{H}_0 = \sum_{i=1}^N h_i(\xi_i),$$
where $\xi_i$ are the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth), one can consider sharpening the upper bound by minimising the right side of the inequality. The minimising reference system is then the "best" approximation to the true system using non-correlated degrees of freedom and is known as the mean field approximation.
For the most common case that the target Hamiltonian contains only pairwise interactions, i.e.,
$$\mathcal{H} = \sum_{(i,j) \in \mathcal{P}} V_{i,j}(\xi_i, \xi_j),$$
where $\mathcal{P}$ is the set of pairs that interact, the minimising procedure can be carried out formally. Define $\operatorname{Tr}_i f(\xi_i)$ as the generalized sum of the observable $f$ over the degrees of freedom of the single component (sum for discrete variables, integrals for continuous ones). The approximating free energy is given by
$$F_0 = \operatorname{Tr}_{1,2,\ldots,N} \mathcal{H}\, P^{(N)}_0(\xi_1, \xi_2, \ldots, \xi_N) + kT \operatorname{Tr}_{1,2,\ldots,N} P^{(N)}_0 \log P^{(N)}_0,$$
where $P^{(N)}_0(\xi_1, \xi_2, \ldots, \xi_N)$ is the probability to find the reference system in the state specified by the variables $(\xi_1, \xi_2, \ldots, \xi_N)$. This probability is given by the normalized Boltzmann factor
$$P^{(N)}_0(\xi_1, \xi_2, \ldots, \xi_N) = \frac{1}{Z^{(N)}_0} e^{-\beta \mathcal{H}_0} = \prod_{i=1}^N \frac{1}{Z_0} e^{-\beta h_i(\xi_i)} \equiv \prod_{i=1}^N P^{(i)}_0(\xi_i),$$
where $Z_0$ is the partition function. Thus
$$F_0 = \sum_{(i,j) \in \mathcal{P}} \operatorname{Tr}_{i,j} V_{i,j}(\xi_i, \xi_j)\, P^{(i)}_0(\xi_i)\, P^{(j)}_0(\xi_j) + kT \sum_{i=1}^N \operatorname{Tr}_i P^{(i)}_0(\xi_i) \log P^{(i)}_0(\xi_i).$$
In order to minimise, we take the derivative with respect to the single-degree-of-freedom probabilities $P^{(i)}_0$ using a Lagrange multiplier to ensure proper normalization. The end result is the set of self-consistency equations
$$P^{(i)}_0(\xi_i) = \frac{1}{Z_0} e^{-\beta h_i^{\mathrm{MF}}(\xi_i)}, \qquad i = 1, 2, \ldots, N,$$
where the mean field is given by
$$h_i^{\mathrm{MF}}(\xi_i) = \sum_{\{j \,\mid\, (i,j) \in \mathcal{P}\}} \operatorname{Tr}_j V_{i,j}(\xi_i, \xi_j)\, P^{(j)}_0(\xi_j).$$
Applications
Mean field theory can be applied to a number of physical systems so as to study phenomena such as phase transitions.
Ising model
Formal derivation
The Bogoliubov inequality, shown above, can be used to find the dynamics of a mean field model of the two-dimensional Ising lattice. A magnetisation function can be calculated from the resultant approximate free energy. The first step is choosing a more tractable approximation of the true Hamiltonian. Using a non-interacting or effective field Hamiltonian,
$$\mathcal{H}_0 = -\eta \sum_i s_i,$$
the variational free energy is
$$F_V = \langle \mathcal{H} \rangle_0 - T S_0 = F_{\mathcal{H}_0} + \langle \mathcal{H} - \mathcal{H}_0 \rangle_0.$$
By the Bogoliubov inequality, simplifying this quantity and calculating the magnetisation function that minimises the variational free energy yields the best approximation to the actual magnetisation. The minimiser is the effective field
$$\eta = h + z J m,$$
where $m = \langle s_i \rangle_0$ is the ensemble average of spin. This simplifies to
$$m = \tanh\!\big( \beta (h + z J m) \big).$$
Equating the effective field felt by all spins to a mean spin value relates the variational approach to the suppression of fluctuations. The physical interpretation of the magnetisation function is then a field of mean values for individual spins.
Non-interacting spins approximation
Consider the Ising model on a $d$-dimensional lattice. The Hamiltonian is given by
$$H = -J \sum_{\langle i, j \rangle} s_i s_j - h \sum_i s_i,$$
where $\sum_{\langle i, j \rangle}$ indicates summation over the pairs of nearest neighbors $\langle i, j \rangle$, and $s_i, s_j = \pm 1$ are neighboring Ising spins.
Let us transform our spin variable by introducing the fluctuation from its mean value $m_i \equiv \langle s_i \rangle$. We may rewrite the Hamiltonian as
$$H = -J \sum_{\langle i, j \rangle} (m_i + \delta s_i)(m_j + \delta s_j) - h \sum_i s_i,$$
where we define $\delta s_i \equiv s_i - m_i$; this is the fluctuation of the spin.
If we expand the right side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term, which does not affect the statistical properties of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values.
The mean field approximation consists of neglecting this second-order fluctuation term:
$$H \approx H^{\mathrm{MF}} \equiv -J \sum_{\langle i, j \rangle} (m_i m_j + m_i\, \delta s_j + m_j\, \delta s_i) - h \sum_i s_i.$$
These fluctuations are enhanced at low dimensions, making MFT a better approximation for high dimensions.
Again, the summand can be re-expanded. In addition, we expect that the mean value of each spin is site-independent, $m_i = m$, since the Ising chain is translationally invariant. This yields
$$H \approx H^{\mathrm{MF}} = -J \sum_{\langle i, j \rangle} \big( m^2 + m\,(\delta s_i + \delta s_j) \big) - h \sum_i s_i.$$
The summation over neighboring spins can be rewritten as $\sum_{\langle i, j \rangle} = \frac{1}{2} \sum_i \sum_{j \in nn(i)}$, where $nn(i)$ means "nearest neighbor of $i$", and the prefactor $\frac{1}{2}$ avoids double counting, since each bond participates in two spins. Simplifying leads to the final expression
$$H \approx H^{\mathrm{MF}} = \frac{J m^2 N z}{2} - \underbrace{(h + m J z)}_{h^{\mathrm{eff}}} \sum_i s_i,$$
where $z$ is the coordination number. At this point, the Ising Hamiltonian has been decoupled into a sum of one-body Hamiltonians with an effective mean field $h^{\mathrm{eff}} = h + m J z$, which is the sum of the external field $h$ and of the mean field induced by the neighboring spins. It is worth noting that this mean field directly depends on the number of nearest neighbors and thus on the dimension of the system (for instance, for a hypercubic lattice of dimension $d$, $z = 2d$).
Substituting this Hamiltonian into the partition function and solving the effective 1D problem, we obtain
$$Z = e^{-\beta J m^2 N z / 2} \left[ 2 \cosh\!\big( \beta (h + m J z) \big) \right]^N,$$
where $N$ is the number of lattice sites. This is a closed and exact expression for the partition function of the system. We may obtain the free energy of the system and calculate critical exponents. In particular, we can obtain the magnetization as a function of $h^{\mathrm{eff}}$:
$$m = \tanh\!\big( \beta h^{\mathrm{eff}} \big).$$
We thus have two equations between $m$ and $h^{\mathrm{eff}}$, allowing us to determine $m$ as a function of temperature. This leads to the following observation:
For temperatures greater than a certain value $T_\mathrm{c}$, the only solution is $m = 0$. The system is paramagnetic.
For $T < T_\mathrm{c}$, there are two non-zero solutions: $m = \pm m_0$. The system is ferromagnetic.
$T_\mathrm{c}$ is given by the following relation: $k_\mathrm{B} T_\mathrm{c} = z J$.
This shows that MFT can account for the ferromagnetic phase transition.
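The critical temperature can be read off by linearizing the self-consistency condition near $m = 0$; this short derivation (a standard step, sketched here for $h = 0$) makes the relation above explicit:
$$m = \tanh(\beta z J m) \approx \beta z J\, m \quad \text{for small } m,$$
so a non-zero solution first becomes possible when $\beta_\mathrm{c} z J = 1$, i.e. $k_\mathrm{B} T_\mathrm{c} = z J$; above this temperature the only solution is $m = 0$.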
Application to other systems
Similarly, MFT can be applied to other types of Hamiltonian as in the following cases:
To study the metal–superconductor transition. In this case, the analog of the magnetization is the superconducting gap .
The molecular field of a liquid crystal that emerges when the Laplacian of the director field is non-zero.
To determine the optimal amino acid side chain packing given a fixed protein backbone in protein structure prediction (see Self-consistent mean field (biology)).
To determine the elastic properties of a composite material.
Variational minimisation, like mean field theory, can also be used in statistical inference.
Extension to time-dependent mean fields
In mean field theory, the mean field appearing in the single-site problem is a time-independent scalar or vector quantity. However, this isn't always the case: in a variant of mean field theory called dynamical mean field theory (DMFT), the mean field becomes a time-dependent quantity. For instance, DMFT can be applied to the Hubbard model to study the metal–Mott-insulator transition.
| Physical sciences | Physics basics: General | Physics |
556358 | https://en.wikipedia.org/wiki/Stove | Stove | A stove or range is a device that generates heat inside or on top of the device, for local heating or cooking. Stoves can be powered with many fuels, such as natural gas, electricity, gasoline, wood, and coal.
Due to concerns about air pollution, efforts have been made to improve stove design. Pellet stoves are a type of clean-burning stove. Air-tight stoves are another type that burn the wood more completely and therefore, reduce the amount of the combustion by-products. Another method of reducing air pollution is through the addition of a device to clean the exhaust gas, for example, a filter or afterburner.
Research and development on safer and less emission releasing stoves is continuously evolving.
Etymology
Old English had a word stofa, meaning a hot-air bath or sweating room. However, this usage did not survive, and the word was taken newly from Middle Low German or Middle Dutch in the 15th or 16th century, later meaning any room heated with a furnace. By the 17th century it had come to mean a heated box such as an oven, and by the 18th century could mean an open fireplace.
History
Versions prior
Cooking has been performed over open fires for nearly two million years. It is uncertain how fires were started at these times; some hypotheses include the removal of burning branches from wildfires, spark generation through hitting rocks, or accidental lighting through the chipping of stone tools. During the Paleolithic era, approximately 200,000 to 40,000 years ago, primitive hearths were constructed, with stones arranged in a circle. Human homes centered around these hearths for warmth and food. Open fires were quite effective; most fires are 30% efficient on average, and the heat radiates directly outward, with none being lost into the body of a stove. An estimated three billion people still cook their food today over open fires.
Pottery and other cooking vessels were later placed on open fire; eventually, setting the vessel on a support, such as a base of three stones, resulted in a stove. The three-stone stove is still widely used around the world. In some areas it developed into a U-shaped dried mud or brick enclosure with the opening in the front for fuel and air, sometimes with a second smaller hole at the rear.
Early designs
The earliest recorded stove was created in Alsace, France, in 1490. It was made entirely of brick and tile, including the flue pipe. Long before, the ancient Egyptian, Jewish and Roman peoples had used stone and brick ovens, fueled with wood, to make bread and other culinary staples; these designs did not differ greatly from modern-day pizza ovens. Later Scandinavian stoves featured a long, hollow iron chimney with iron baffles constructed to extend the passage of the leaving gases and extract maximum heat. Russian versions are still frequently used today in northern nations, as they hold six thick-walled stone flues. This design is frequently positioned at the intersection of internal partition walls, with a piece of the stove and flue inside each of four rooms; a fire is kept until the stove and flues are hot, at which point the fire is extinguished and the flues are closed, storing the heat. In colonial America, beehive-shaped brick ovens were used to bake cakes and other pastries. Temperature was closely managed by burning the appropriate quantity of wood to ash, testing the heat by inserting a hand inside, adding additional wood, or opening the door to allow cooling.
Ceramic
Clay ovens have been used for millennia for cooking.
Masonry heaters were developed from Neolithic times to control air flow in stoves. A masonry heater is designed to allow complete combustion by burning fuels at full-temperature with no restriction of air inflow. Due to its large thermal mass the captured heat is radiated over long periods of time without the need of constant firing, and the surface temperature is generally not dangerous to touch.
Gallery
Cast-iron
In 1642, at Lynn, Massachusetts, the first cast-iron stove was constructed. This stove was little more than a cast-iron box with no grates.
Benjamin Franklin designed the "Pennsylvania fireplace", now known as the Franklin stove in 1742, which incorporated the fundamental concepts of the heating stove. The Franklin stove used a grate to burn wood and had sliding doors to control the draught, or flow of air, through it. It had a labyrinthine path for hot exhaust gases to escape, thus allowing heat to enter the room instead of going up the chimney. Because of its compact size, the stove could be fitted to an existing fireplace or used free-standing in the middle of a room by connecting it to a chimney. Developed amid a wood shortage, it required one-quarter the quantity of fuel as a regular fireplace and could raise the room temperature more quickly. Throughout North America, the Franklin stove enjoyed widespread adoption, warming farmhouses, city residences, and frontier huts.
For cooking, Count Rumford created a cast iron oven around 1800, the Rumford roaster. This was built into a brick kitchen range. Isaac Orr of Philadelphia, Pennsylvania, created the first circular cast-iron stoves with grates for cooking meals on them roughly five years later. The potbellied stove traces its origins to the early 1800s, inspired by the Franklin stove developed twenty years prior. Jordan A. Mott designed the base-burning stove for burning anthracite coal in 1833. In 1834, Philo Stewart created the Oberlin Stove, a small wood-burning cast-iron stove. It was a compact metal kitchen stove that was far more efficient than cooking in a fireplace due to its improved heating capacity and allowance for record cooking durations. It was a huge commercial success, with some 90,000 units sold in the next 30 years, because it could be formed into desired shapes and forms and could survive temperature fluctuations from hot to cold readily. These iron stoves evolved into specialized cooking machines with chimney flue pipes, oven openings, and water heating systems. The originally open holes into which the pots were hung were now covered with concentric iron rings on which the pots were placed. Depending on the size of the pot or the heat needed, one could remove the inner rings.
Usage of gas
The earliest reported use of gas for cooking, according to the Gas Museum in Leicester, England, was by a Moravian called Zachaus Winzler in 1802. However, the first commercially produced gas stove, invented by Englishman James Sharp, did not enter the market until 1834. By the end of the century, the stoves became popular because they were easier to control and required less maintenance than wood or coal stoves.
The switch to gas was prompted by concerns about air pollution, deforestation and climate change, causing the general public to reconsider the usage of coal and wood stoves. Under common-use conditions, indoor NO2 from gas stoves can quickly exceed US Environmental Protection Agency (EPA) and World Health Organization (WHO) 1-h exposure benchmarks in kitchen air. NO2 pollution has been shown to harm human health.
Electric stoves
Electric stoves became popular not long after the advent of home electricity. One early model was created by Thomas Ahearn, the owner of a Canadian electric company, whose marketing included a demonstration meal prepared entirely with electricity at Ottawa's Windsor Hotel in 1892.
As central heating became the standard in the developed world, cooking became the primary function of stoves in the twentieth century. Iron cooking stoves that used wood, charcoal, or coal radiated too much heat, which made the kitchen unbearably hot in the summer. They were superseded in the twentieth century by steel ranges or ovens fueled by natural gas or electricity.
Induction
The first patents for induction stoves date from the early 1900s. Demonstration stoves were shown by the Frigidaire division of General Motors in the mid-1950s on a touring GM showcase in North America. The induction cooker was shown heating a pot of water with a newspaper placed between the stove and the pot, to demonstrate the convenience and safety. This unit, however, was never put into production.
Modern implementation in the US dates from the early 1970s, with work done at the Research & Development Center of Westinghouse Electric Corporation at Churchill Borough, near Pittsburgh. That work was first put on public display at the 1971 National Association of Home Builders convention in Houston, Texas, as part of the Westinghouse Consumer Products Division display. The stand-alone single-burner range was named the Cool Top Induction Range. It used paralleled Delco Electronics transistors developed for automotive electronic ignition systems to drive the 25 kHz current.
Westinghouse decided to make a few hundred production units to develop the market. Those were named Cool Top 2 (CT2) Induction ranges. The development work was done at the same R&D location by a team led by Bill Moreland and Terry Malarkey. The ranges were priced at $1,500 ($8,260 in 2017 dollars), including a set of high quality cookware made of Quadraply, a new laminate of stainless steel, carbon steel, aluminum and another layer of stainless steel (outside to inside).
Production took place in 1973 through to 1975 and stopped, coincidentally, with the sale of Westinghouse Consumer Products Division to White Consolidated Industries Inc. Modern-day induction stoves are sold by many manufacturers, including General Electric, LG Corporation, Whirlpool Corporation, IKEA, and Samsung.
Types
Purpose
Cooking
A kitchen stove, cooker, or cookstove is a kitchen appliance designed for the purpose of cooking food. Kitchen stoves rely on the application of direct heat for the cooking process and may also contain an oven underneath or to the side that is used for baking. Traditionally these have been fueled by wood; the earliest known example of such was the Castrol stove. More modern versions such as the popular Rayburn range offer a choice between using wood or gas.
Heating
Stoves are also used for heating purposes. Benjamin Franklin's invention of 1742 popularized the widespread usage of modern heating stoves and fireplaces. Today, wood stoves are commonly used for warming homes, and are credited for their cost-effectiveness compared to coal and gas, and for their connection to the practices of human ancestors.
Fuel
Wood-burning
A wood-burning stove (or wood burner or log burner in the UK) is a heating or cooking appliance capable of burning wood fuel and wood-derived biomass fuel, such as sawdust bricks. Generally the appliance consists of a solid metal (usually cast iron or steel) closed firebox, often lined by fire brick, and one or more air controls (which can be manually or automatically operated depending upon the stove). The first wood-burning stove was patented in Strasbourg in 1557, two centuries before the Industrial Revolution made iron an inexpensive and common material; such stoves were therefore high-end consumer items and only gradually spread in use. Wood-burning stoves are still commonly used today in less-developed countries.
Coal-burning
The most common stove for heating in the industrial world for almost a century and a half was the coal stove that burned coal. Coal stoves came in all sizes and shapes and different operating principles. Coal burns at a much higher temperature than wood, and coal stoves must be constructed to resist the high heat levels. A coal stove can burn either wood or coal, but a wood stove might not burn coal unless a grate is supplied. The grate may be removable or an "extra".
This is because coal stoves are fitted with a grate, allowing part of the combustion air to be admitted below the fire. The proportion of air admitted above and below the fire depends on the type of coal: brown coals and lignites evolve more combustible gases than, say, anthracite, and so need more air above the fire. The ratio of air above and below the fire must be carefully adjusted to enable complete combustion.
Coal, particularly anthracite coal, became a popular option during the 1800s in the United States because it burned at a high heat while producing little soot. By 1860, as much as 90% of United States homes used anthracite coal as a solution for the fuel crisis that the United States faced. One major issue with coal-burning stoves in the 1800s was the difficulty of storing the material over time. Many poor families could not afford to store the volumes of coal needed to heat homes for long periods, so while wealthy families could keep large amounts of coal in cellars, poorer families often had to purchase coal in smaller quantities. These difficulties surrounding the storage of coal helped push the use and development of gas stoves.
Anthracite stoves such as the Pither stove were gravity fed and could burn for days.
Gas
Gas stoves were first introduced by Moravian Zachaus Winzler in 1802. Today, according to the US Energy Information Administration, 35% of American households use gas stoves. They are chosen as they offer better temperature control, durability, low cost, and speed of heating. In June 2023, Stanford researchers found combustion from gas stoves can raise indoor levels of benzene, a potent carcinogen linked to a higher risk of blood cell cancers. Gas-powered stoves are criticized for environmental concerns with methane emission and the usage of natural gas, the danger of carbon monoxide release, and difficulty in cleaning. For example, a January 2022 Stanford-led study reveals that the methane leaking from gas-burning stoves has a climate impact comparable to the carbon dioxide emissions from about 500,000 gasoline-powered cars.
Electricity
Induction stoves were first patented in the early 1900s. These stoves are praised for their cost-effectiveness, ease of cleaning, options to control low heat, and stable base for many types and sizes of pots and other cooking tools. Critics note that abrasive cleaners can damage induction stoves, that gas has more traditional culinary associations, and that induction stoves are unable to operate during power outages. Unlike gas stoves, induction stoves have no detectable benzene emissions of their own; any benzene detected during use can be attributed to the food being cooked rather than to the cooktop or fuel.
Efficiency
Compared to simple open fires, which can have efficiency of less than 10%, enclosed stoves can offer greater efficiency and control. In free air, solid fuels burn at a relatively low temperature, too low for complete combustion reactions to occur; heat produced through convection is largely lost, smoke particles are given off without being fully burned, and the supply of combustion air cannot be readily controlled.
By enclosing the fire in a chamber and connecting it to a chimney, draft (draught) is generated, pulling fresh air through the burning fuel. This causes the temperature of combustion to rise to a point where efficient combustion is achieved; the enclosure allows the ingress of air to be regulated, and losses by convection are almost eliminated. It also becomes possible, with ingenious design, to direct the flow of burned gases inside the stove such that smoke particles are heated and destroyed.
Enclosing a fire also prevents heated room air from being sucked into the chimney; an open fireplace can pull away many cubic meters of heated air per hour, which represents a significant loss of heat. Efficiency is generally regarded as the useful heat output of a stove or fire, and is usually quoted by manufacturers as the proportion of heat delivered to the room versus heat lost up the chimney.
An early improvement was the fire chamber: the fire was enclosed on three sides by masonry walls and covered by an iron plate. Only in 1735 did the first design that completely enclosed the fire appear: the Castrol stove of the French architect François de Cuvilliés was a masonry construction with several fireholes covered by perforated iron plates. It is also known as a stew stove. Near the end of the 18th century, the design was refined by hanging the pots in holes through the top iron plate, thus improving heat efficiency even more.
In 1743, Benjamin Franklin invented an all-metal fireplace with an attempt to improve the efficiency. It was still an open-faced fireplace, but improved on efficiency compared to old-fashioned fireplaces.
Some stoves use a catalytic converter, which causes combustion of the gas and smoke particles not previously burned. Other models use a design that includes firebox insulation and a large baffle to produce a longer, hotter gas-flow path. Modern enclosed stoves are often built with a window to let out some light and to enable the user to view the progress of the fire.
While enclosed stoves are typically more efficient and controllable than open fires, there are exceptions. The type of water-heating "back boiler" open fires commonly used in Ireland, for instance, can achieve more than 80% absolute efficiency.
Modern designs
As concerns about air pollution, deforestation, and climate change have increased, new efforts have been made to improve stove design. The largest strides have been made in innovations for biomass-burning stoves, such as the wood-burning stoves used in many of the world's most populous countries. These new designs address the fundamental problem that wood and other biomass fires inefficiently consume large amounts of fuel to produce relatively small amounts of heat, while producing fumes that cause significant indoor and environmental pollutants. The World Health Organization has documented the significant number of deaths caused by smoke from home fires. Increases in efficiency allow users of stoves to spend less time gathering wood or other fuels, suffer less emphysema and other lung diseases prevalent in smoke-filled homes, while reducing deforestation and air pollution.
Corn and pellet stoves and furnaces are a type of biofuel stove. The shelled dry kernel of corn, also called a corn pellet, creates as much heat as a wood pellet, but generates more ash. "Corn pellet stoves and wood pellet stoves look the same from the outside. Since they are highly efficient, they don't need a chimney; instead, they can be vented outdoors by a four-inch (102 mm) pipe through an outside wall and so can be located in any room in the home."
A pellet stove is a type of clean-burning stove that uses small, biological fuel pellets which are renewable and very clean-burning. Home heating using a pellet stove is an alternative currently used throughout the world, with rapid growth in Europe. The pellets are made of renewable material — typically wood sawdust or off-cuts. There are more than half a million homes in North America using pellet stoves for heat, and probably a similar number in Europe. The pellet stove typically uses a feed screw to transfer pellets from a storage hopper to a combustion chamber. Air is provided for the combustion by an electric blower. The ignition is automatic, using a stream of air heated by an electrical element. The rotation speed of the feeder and the fan speeds can be varied to modulate the heat output.
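As an illustration of how the feeder and fan modulate heat output, here is a minimal sketch of a proportional control loop; the function names, gains, and setpoints are invented for the example and are not taken from any real pellet stove's firmware:

```python
# Illustrative only: a proportional controller that varies the pellet feed
# screw and combustion fan together to track a target heat output.
# All names, units, and gains here are hypothetical.

def modulate(target_kw: float, measured_kw: float,
             feed_rps: float, gain: float = 0.05) -> tuple[float, float]:
    """Return new (feed screw speed, fan speed) from the output error."""
    error = target_kw - measured_kw               # kW short of (or over) target
    feed_rps = max(0.0, feed_rps + gain * error)  # feed more pellets if too cool
    fan_rpm = 800.0 + 300.0 * feed_rps            # scale combustion air with fuel
    return feed_rps, fan_rpm

feed = 1.0  # revolutions per second, hypothetical starting point
for measured in (4.0, 4.6, 5.1):                  # simulated heat-output readings
    feed, fan = modulate(target_kw=5.0, measured_kw=measured, feed_rps=feed)
    print(f"feed={feed:.2f} rps, fan={fan:.0f} rpm")
```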
Other efficient stoves are based on the top-lit updraft (TLUD), or wood-gas ("smoke burner"), principle, applied and made popular by Dr. Thomas Reed; these stoves use small pieces of sticks, chips of wood or shavings, leaves, etc., as fuel. The efficiency is very high, up to 50 percent, compared to traditional stoves that are 5 to 15 percent efficient on average.
Stoves fueled by alcohol, such as ethanol, offer another modern, clean-burning stove option. Ethanol-fueled stoves have been made popular through the work of Project Gaia in Africa, Latin America and the Caribbean.
Airtightness
An air-tight stove is a wood-burning stove designed to burn solid fuel, traditionally wood, in a controlled fashion so as to provide for efficient and controlled fuel use, and the benefits of stable heating or cooking temperatures. They are made of sheet metal, consisting of a drum-like combustion chamber with airflow openings that can be opened and shut, and a chimney of a meter or more in length.
These stoves are used most often to heat buildings in winter. Wood or other fuel is put into the stove, lit, and then air flow is regulated to control the burn. The intake airflow is either at the level where fuel is added, or below it. The exhaust (smoke) from the stove is usually several meters above the combustion chamber.
Most modern air-tight stoves feature a damper at the stove's outlet that can be closed to force the exhaust through an after burner at the top of the stove, a heated chamber in which the combustion process continues. Some air-tight stoves feature a catalytic converter, a platinum grid placed at the stove outlet to burn remaining fuel that has not been combusted, as gases burn at a much lower temperature in the presence of platinum.
Using an air-tight stove initially requires leaving the damper and air vents open until a bed of coals has been formed. After that, the damper is closed and the air vent regulated to slow down the burning of the wood. A properly loaded and controlled air-tight stove will burn safely without further attention for eight hours, or longer.
These features provide more complete combustion of the wood and elimination of polluting combustion products. They also provide for regulation of the intensity of the fire by limiting air flow, and for the fire to create a strong draught, or draw, up the chimney. This results in highly efficient fuel usage.
Air-tight stoves are a more sophisticated version of traditional wood-burning stoves.
Emission regulation
Many countries legislate to control emissions.
Since 2015, the United States Environmental Protection Agency (EPA) Phase III Woodstove Regulations have required that all newly manufactured wood stoves limit particulate emissions to 4.5 grams per hour for stoves with afterburners or 2.5 grams per hour for stoves with catalytic converters. Testing of wood stove emissions is limited to two primary methods, crib wood and cordwood. Since 2020, wood stoves tested with crib wood must emit no more than 2.0 grams per hour; tests can also be conducted with cordwood, with a limit of 2.5 grams of emissions per hour. According to the EPA cordwood discussion paper, these changes are aimed at improving current test methods in order to eventually develop an EPA reference method for cordwood stoves and, potentially, for central heaters (e.g., hydronic heaters/boilers and forced-air furnaces).
The burn temperature in modern stoves can increase to the point where secondary and complete combustion of the fuel takes place. A properly fired masonry heater has little or no particulate pollution in the exhaust and does not contribute to the buildup of creosote in the heater flues or the chimney. Some stoves achieve as little as 1 to 4 grams of emissions per hour. This is roughly 10% as much smoke as older stoves produce, and equates to nearly zero visible smoke from the chimney. This is largely achieved by causing the maximum amount of material to combust, which results in a net efficiency of 60 to 70%, as contrasted to less than 30% for an open fireplace. Net efficiency is defined as the amount of heat energy transferred to the room compared to the amount contained in the wood, minus any amount the central heating must supply to compensate for airflow problems.
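As a worked illustration of that definition of net efficiency, a small sketch; all figures are invented for the example, not measurements:

```python
# Net efficiency as described above: heat delivered to the room, less any
# heat the central heating must supply to offset airflow losses, divided
# by the energy content of the wood. All numbers are invented.

def net_efficiency(heat_to_room_mj: float, airflow_penalty_mj: float,
                   wood_energy_mj: float) -> float:
    return (heat_to_room_mj - airflow_penalty_mj) / wood_energy_mj

# 10 kg of wood at roughly 16 MJ/kg = 160 MJ of fuel energy.
print(net_efficiency(heat_to_room_mj=105.0, airflow_penalty_mj=3.0,
                     wood_energy_mj=160.0))   # about 0.64, i.e. ~64%
```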
SB 1256, a bill that would ban the sale of disposable, single-use propane cylinders in California, is set to be presented for approval to Governor Gavin Newsom. If signed into law, the ban would take effect in January 2023 and would be the first of its kind in the United States. SB 1256 aims to phase out the cylinders completely by 2028; the state Assembly and Senate both approved the legislation. Propane stoves are widely used by campers for cooking, lighting, and heating, and the spent gas canisters often pile up on the ground near dumpsters at campgrounds. The bill is sponsored by the California Product Stewardship Council, a nonprofit local government coalition, in an effort to reduce waste and cut down on the pile-ups of canisters. Worthington Industries, a manufacturer of propane cylinders, has objected to the bill on the grounds that it would be disruptive to campers and that it would not improve the recycling rate of propane cylinders. The company has also argued that refillable cylinders cost three times as much as single-use cylinders.
Research and development
The search for safer, cleaner stoves remains to many an important if low-profile area of modern technology. Cook stoves in common use around the world, particularly in Third World countries, are considered fire hazards and worse: according to the World Health Organization, a million and a half people die each year from indoor smoke inhalation caused by faulty stoves. An engineer's "Stove Camp" has been hosted annually since 1999 by Aprovecho Research Center (Oregon, US) with the intent of designing a cheap, efficient, and healthy cook stove for use around the world. Other engineering societies (see Envirofit International, Colorado, US) and philanthropic groups (see the Bill & Melinda Gates Foundation, California) continue to research and promote improved cook stove designs. Research and development on improved heating stoves is also ongoing and was on display at the 2013 Wood Stove Decathlon in Washington, D.C.
| Technology | Household appliances | null |
556970 | https://en.wikipedia.org/wiki/Irradiance | Irradiance | In radiometry, irradiance is the radiant flux received by a surface per unit area. The SI unit of irradiance is the watt per square metre (symbol W⋅m−2 or W/m2). The CGS unit erg per square centimetre per second (erg⋅cm−2⋅s−1) is often used in astronomy. Irradiance is often called intensity, but this term is avoided in radiometry where such usage leads to confusion with radiant intensity. In astrophysics, irradiance is called radiant flux.
Spectral irradiance is the irradiance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The two forms have different dimensions and units: spectral irradiance of a frequency spectrum is measured in watts per square metre per hertz (W⋅m−2⋅Hz−1), while spectral irradiance of a wavelength spectrum is measured in watts per square metre per metre (W⋅m−3), or more commonly watts per square metre per nanometre (W⋅m−2⋅nm−1).
Mathematical definitions
Irradiance
Irradiance of a surface, denoted Ee ("e" for "energetic", to avoid confusion with photometric quantities), is defined as
$$E_\mathrm{e} = \frac{\partial \Phi_\mathrm{e}}{\partial A},$$
where
∂ is the partial derivative symbol;
Φe is the radiant flux received;
A is the area.
The radiant flux emitted by a surface is called radiant exitance.
Spectral irradiance
Spectral irradiance in frequency of a surface, denoted Ee,ν, is defined as
$$E_{\mathrm{e},\nu} = \frac{\partial E_\mathrm{e}}{\partial \nu},$$
where ν is the frequency.
Spectral irradiance in wavelength of a surface, denoted Ee,λ, is defined as
$$E_{\mathrm{e},\lambda} = \frac{\partial E_\mathrm{e}}{\partial \lambda},$$
where λ is the wavelength.
Property
Irradiance of a surface is also, according to the definition of radiant flux, equal to the time-average of the component of the Poynting vector perpendicular to the surface:
$$E_\mathrm{e} = \langle |\mathbf{S}| \rangle \cos \alpha,$$
where
⟨ · ⟩ denotes the time-average;
S is the Poynting vector;
α is the angle between a unit vector normal to the surface and S.
For a propagating sinusoidal linearly polarized electromagnetic plane wave, the Poynting vector always points in the direction of propagation while oscillating in magnitude. The irradiance of a surface is then given by
$$E_\mathrm{e} = \langle |\mathbf{S}| \rangle = \frac{n}{2 \mu_0 c} E_\mathrm{m}^2 = \frac{n \varepsilon_0 c}{2} E_\mathrm{m}^2 = \frac{n}{2 Z_0} E_\mathrm{m}^2,$$
where
Em is the amplitude of the wave's electric field;
n is the refractive index of the medium of propagation;
c is the speed of light in vacuum;
μ0 is the vacuum permeability;
ε0 is the vacuum permittivity;
Z0 = √(μ0/ε0) ≈ 377 Ω is the impedance of free space.
This formula assumes that the magnetic susceptibility is negligible; i.e. that μr ≈ 1 (μ ≈ μ0) where μr is the relative magnetic permeability of the propagation medium. This assumption is typically valid in transparent media in the optical frequency range.
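As a quick numerical check of the last form of the plane-wave formula, the electric-field amplitude corresponding to a given irradiance can be recovered; the 1000 W/m2 input is simply a round number of the order of bright sunlight, not a quoted measurement:

```python
import math

# E_e = n * Em**2 / (2 * Z0), so Em = sqrt(2 * Z0 * E_e / n).
Z0 = math.sqrt(4e-7 * math.pi / 8.8541878128e-12)  # impedance of free space, ~376.73 ohm

def field_amplitude(irradiance_w_m2: float, n: float = 1.0) -> float:
    """Peak electric field (V/m) of a plane wave with the given irradiance."""
    return math.sqrt(2 * Z0 * irradiance_w_m2 / n)

print(field_amplitude(1000.0))  # ~868 V/m for ~1 kW/m2 in vacuum or air (n ~ 1)
```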
Point source
A point source of light produces spherical wavefronts. The irradiance in this case varies inversely with the square of the distance from the source:
$$E_\mathrm{e} = \frac{\Phi_\mathrm{e}}{A} = \frac{\Phi_\mathrm{e}}{4 \pi r^2},$$
where
r is the distance;
Φe is the radiant flux;
A = 4πr² is the surface area of a sphere of radius r.
For quick approximations, this equation indicates that doubling the distance reduces irradiance to one quarter; or similarly, to double irradiance, reduce the distance to 71%.
In astronomy, stars are routinely treated as point sources even though they are much larger than the Earth. This is a good approximation because the distance from even a nearby star to the Earth is much larger than the star's diameter. For instance, the irradiance of Alpha Centauri A (radiant flux: 1.5 L☉, distance: 4.34 ly) is about 2.7 × 10−8 W/m2 on Earth.
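A short sketch reproduces both the inverse-square behaviour and the Alpha Centauri figure, assuming standard values for the nominal solar luminosity and the light-year:

```python
import math

L_SUN = 3.828e26       # nominal solar luminosity, W
LY = 9.4607e15         # light-year, m

def irradiance(radiant_flux_w: float, distance_m: float) -> float:
    """Point-source irradiance E_e = Phi_e / (4 * pi * r**2)."""
    return radiant_flux_w / (4 * math.pi * distance_m**2)

# Doubling the distance quarters the irradiance:
print(irradiance(100.0, 2.0) / irradiance(100.0, 1.0))   # 0.25

# Alpha Centauri A: 1.5 solar luminosities at 4.34 light-years.
print(irradiance(1.5 * L_SUN, 4.34 * LY))                # ~2.7e-8 W/m2
```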
Solar irradiance
The global irradiance on a horizontal surface on Earth consists of the direct irradiance Ee,dir and diffuse irradiance Ee,diff. On a tilted plane, there is another irradiance component, Ee,refl, which is the component reflected from the ground. The average ground reflection is about 20% of the global irradiance. Hence, the irradiance Ee on a tilted plane consists of three components:
$$E_\mathrm{e} = E_{\mathrm{e,dir}} + E_{\mathrm{e,diff}} + E_{\mathrm{e,refl}}.$$
The integral of solar irradiance over a time period is called "solar exposure" or "insolation".
Average solar irradiance at the top of the Earth's atmosphere is roughly 1361 W/m2, while irradiance at the surface is approximately 1000 W/m2 on a clear day.
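As an illustration of that integral, the following sketch numerically integrates an idealized half-sine irradiance profile over a clear day to obtain the insolation; this profile is a common textbook simplification, not a measured curve:

```python
import math

def daily_insolation(peak_w_m2: float = 1000.0, daylight_h: float = 12.0,
                     steps: int = 1000) -> float:
    """Integrate an idealized half-sine irradiance curve over one day (J/m2)."""
    dt = daylight_h * 3600 / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps                    # fraction of daylight elapsed
        total += peak_w_m2 * math.sin(math.pi * t) * dt
    return total

print(daily_insolation() / 3.6e6)  # ~7.6 kWh/m2 for this idealized clear day
```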
SI radiometry units
| Physical sciences | Electromagnetic radiation | Physics |
5152311 | https://en.wikipedia.org/wiki/Broad%20Breasted%20White%20turkey | Broad Breasted White turkey | The Broad Breasted White is commercially the most widely used breed of domesticated turkey. These birds have shorter breast bones and larger breasts, sometimes rendering them unable to breed without human assistance (typically via artificial insemination). They produce more breast meat and their pin feathers are less visible when the carcass is dressed due to their white color. These properties have made the breed popular in commercial turkey production but enthusiasts of slow food argue that the development of this breed and the methods in commercial turkey production have come at a cost of less flavor.
These birds are grown in large, fully automated grow-out barns, which may house as many as 10,000 birds. The growing process for these birds has been so well refined that the birds often grow to more than 40 lbs; average birds are typically 38-40 lbs. Because of their size, predilection for overeating, and sedentary personalities, they are flightless and prone to health problems associated with obesity, such as heart disease, respiratory failure and joint damage; even if such turkeys are spared from slaughter (such as those involved in the annual turkey pardons), they usually have short lives as a result. Broad Breasted Whites also hatch a very high percentage of their eggs, which makes turkey eggs a rare delicacy as a food item.
| Biology and health sciences | Turkeys | Animals |
5157838 | https://en.wikipedia.org/wiki/Facultative%20bipedalism | Facultative bipedalism | A facultative biped is an animal that is capable of walking or running on two legs (bipedal), as a response to exceptional circumstances (facultative), while normally walking or running on four limbs or more. In contrast, obligate bipedalism is where walking or running on two legs is the primary method of locomotion. Facultative bipedalism has been observed in several families of lizards and multiple species of primates, including sifakas, capuchin monkeys, baboons, gibbons, gorillas, bonobos and chimpanzees. Several dinosaur and other prehistoric archosaur species are facultative bipeds, most notably ornithopods and marginocephalians, with some recorded examples within sauropodomorpha. Different facultatively bipedal species employ different types of bipedalism corresponding to the varying reasons they have for engaging in facultative bipedalism. In primates, bipedalism is often associated with food gathering and transport. In lizards, it has been debated whether bipedal locomotion is an advantage for speed and energy conservation or whether it is governed solely by the mechanics of the acceleration and lizard's center of mass. Facultative bipedalism is often divided into high-speed (lizards) and low-speed (gibbons), but some species cannot be easily categorized into one of these two. Facultative bipedalism has also been observed in cockroaches and some desert rodents.
Types of bipedal locomotion
Within the category of bipedal locomotion, there are four main techniques: walking, running, skipping, and galloping. Walking is when the footfalls have an evenly spaced gait and one foot is always on the ground. Running occurs when both feet are off the ground at the same time in what is called the aerial phase. Skipping involves an aerial phase, but the two feet hit the ground immediately after each other, and the trailing foot changes after each step. Galloping is similar to skipping, but the trailing foot does not change after each step. This is not an exhaustive list of the forms of bipedalism, but most bipedal species use one or more of these techniques.
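The four techniques can be distinguished by a few observable features: whether there is an aerial phase, whether successive footfalls land immediately after one another, and whether the trailing foot alternates. A purely illustrative sketch of that decision logic; real gait analysis is far more involved:

```python
from typing import Optional

def classify_gait(aerial_phase: bool,
                  trailing_foot_alternates: Optional[bool],
                  feet_land_in_quick_succession: bool = False) -> str:
    """Map the distinguishing features described above to a gait name."""
    if not aerial_phase:
        return "walking"       # one foot always on the ground, even gait
    if not feet_land_in_quick_succession:
        return "running"       # aerial phase, footfalls independent
    return "skipping" if trailing_foot_alternates else "galloping"

print(classify_gait(aerial_phase=False, trailing_foot_alternates=None))  # walking
print(classify_gait(True, True, feet_land_in_quick_succession=True))     # skipping
print(classify_gait(True, False, feet_land_in_quick_succession=True))    # galloping
```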
Facultatively bipedal species
Facultative bipedalism occurs in some species of antbears, cockroaches, jerboa, kangaroo rats, primates, and lizards. It arose independently in lizard and mammal lineages.
Primates
Bipedalism is found commonly throughout the primate order. Among apes, it is found in bonobos, chimpanzees, orangutans, gorillas, and gibbons. Humans are obligate bipeds, not facultative bipeds. Among monkeys, it is found in capuchins and baboons. Among strepsirrhines, it is found in sifakas and ring-tailed lemurs.
Lemurs
The sifaka (Propithecus), which is a type of lemur native to the island of Madagascar, is one of the primary examples of facultative bipedalism. While moving through the trees, they locomote using a vertical clinging and leaping strategy. On the ground, they can walk on their two hind legs as a way to conserve energy. Sifakas can locomote bipedally in two separate ways: walking, with an evenly spaced gait and no aerial phase; or galloping, switching the trailing and leading foot every 5-7 steps. Propithecus and humans are the only species known to use a skipping/galloping type of locomotion.
Ring-tailed lemurs (Lemur catta), can be arboreal or terrestrial. While terrestrial, they move quadrupedally 70% of the time, bipedally 18% of the time, and by leaping the remaining 12% of the time. This is more bipedal locomotion than any other species in their genus. While bipedal, they can locomote by hopping or walking.
Monkeys
Capuchin monkeys are arboreal quadrupeds, but can locomote bipedally on the ground. They use a spring-like walk that lacks an aerial phase. While humans employ a pendulum-like gait which allows for the interchange of kinetic and potential energy, capuchins do not. This means the energy costs of bipedalism in capuchins are very high. It is thought that the reduced energetic costs of a pendulum-like gait (such as in humans) are what led to the evolution of obligate bipedalism.
Olive baboons are described as quadrupedal primates, but bipedalism is observed occasionally and spontaneously in captivity and in the wild. Bipedal walking is rarely used; it most often occurs when an infant loses its grip on its mother while she is walking quadrupedally and attempts to regain its balance. Immature baboons seem to be more bipedal than adults. These bipedal postures and locomotion in infants, although infrequent, seem to clearly distinguish them from adult baboons in terms of maturity level. In the wild, the locomotor behavior of these baboons varies as a result of their need to find food and to avoid predators.
Gelada baboons use what is known as a "shuffle gait", where they squat bipedally and move their feet in a shuffling motion. They tend to use bipedal locomotion when traveling short distances.
Apes
Apes in closed forest habitats (habitats enclosed by trees) are considered to be more bipedal than chimpanzees and baboons, both when they are standing stationary or moving bipedally. The proportions of the foot in the gorilla are better adapted to bipedal standing than other primate species. In specific circumstances, such as ground conditions, some ape feet perform better than human feet in terms of bipedal standing, as they have a larger RPL (ratio of the power arm to the load arm) and reduce the muscle force when the foot contacts the ground.
Gibbons (of the genus Hylobates) are low-speed obligate bipeds when on the ground but travel quadrupedally in other contexts. Because they usually move through trees, their anatomy has become specialized for vertical clinging and leaping, which uses hip and knee joint extensions that are similar to those used in bipedal motion. They also use three back muscles (the multifidus, longissimus thoracis, and iliocostalis lumborum) that are key to bipedal motion in chimpanzees as well as humans. This anatomy necessitates that they move bipedally on the ground.
Chimpanzees exhibit bipedalism most often when carrying valuable resources (such as food gathering/transporting) because chimps can carry more than twice as much when walking bipedally as opposed to walking quadrupedally. Bipedalism is practiced both on the ground and up high when feeding from fruit trees. Foraging for food in the shorter trees while standing bipedally allows for the chimps to reach higher up so they can get food more easily.
In orangutans, bipedalism is more often considered an extension of "orthograde clamber" rather than an independent form of locomotion. Orthograde clamber is when the majority of the body mass is held up by the forelimbs. However, there are few instances when the hind limbs carry most of the body weight, only using forelimbs for support. This bipedal posture and motion are most often seen during feeding.
Australopithecines
Although no longer extant, Australopithecines exhibited facultative bipedalism. Their pelvis and lower body morphology are indicative of bipedalism: the lumbar vertebrae curve inward, the pelvis has a human-like shape, and the feet have well-developed transverse and longitudinal arches that indicate walking. However, other features indicate reduced locomotor competence, or an increase in stress caused by walking bipedally. The pelvis is broad, which requires greater energy to be used during walking. Australopithecines also have short hind limbs for their weight and height, which also shows a higher energy expenditure when walking bipedally. This indicates that this species practiced bipedal locomotion, but did so more infrequently than previously thought. At the times they did practice bipedalism, the benefits outweighed the potential costs that would be imposed on them.
Lizards
Many families of lizards, including Agamidae, Teiidae, Crotaphytidae, Iguanidae, and Phrynosomatidae, have been observed to engage in facultative bipedalism. In lizards, rapid acceleration of the hind legs induces a friction force with the ground, which produces a ground reaction force on the rear legs. When the hind limbs reach the necessary force threshold, the lizard's trunk angle opens and shifts its center of mass; this, in turn, increases front limb elevation, allowing bipedal locomotion over short distances. When modeled, an exact number of steps and rate of acceleration leads to an exact shift in the center of mass that allows the elevation of the front limbs: too fast and the center of mass moves too far back and the lizard falls over backward, too slow and the front limbs never elevate. However, this model does not account for the fact that lizards may adjust their movements using their forelimbs and tail to increase the range of acceleration in which bipedal locomotion is possible.
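The rigid-body intuition behind that model can be sketched by treating the lizard like a vehicle whose nose lifts once the inertial moment about the hind-foot contact exceeds the gravitational moment; the threshold condition and the geometry values below are an illustrative simplification, not taken from the cited research:

```python
# Rigid-body "pitch-up" threshold: the forelimbs unload when the inertial
# moment about the hind-foot contact exceeds the gravitational moment,
# i.e. when a >= g * (x_cm / h_cm). Geometry values are invented.

G = 9.81  # m/s^2

def pitch_up_acceleration(x_cm_m: float, h_cm_m: float) -> float:
    """Acceleration at which the front limbs would lift, in this toy model."""
    return G * (x_cm_m / h_cm_m)

# Center of mass 4 cm forward of the hind feet and 2 cm off the ground:
print(pitch_up_acceleration(0.04, 0.02))  # ~19.6 m/s^2, about 2 g

# Shifting the center of mass rearward (e.g. by raising the tail) lowers the
# threshold, consistent with lizards adjusting posture to run bipedally:
print(pitch_up_acceleration(0.02, 0.02))  # ~9.8 m/s^2, about 1 g
```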
Debate exists over whether bipedalism in lizards confers any advantage. Advantages could include faster speeds to evade predators, or less energy consumption, and could explain why this behavior has evolved. However, research has shown that bipedal locomotion does not increase speed but can increase acceleration. It is also possible that facultative bipedalism is a physical property of the lizard's movement rather than a developed behavior. In this scenario, it would be more energetically favorable to allow the forelimbs to rise with the rotation caused by the lizard's acceleration rather than work to keep the forelimbs on the ground. Recent research has shown that the actual acceleration at which lizards begin to run bipedally is lower than the previous model predicted, suggesting that lizards actively attempt to locomote bipedally rather than passively allow the behavior to occur. If this is true, there may be some advantage associated with bipedalism that has not yet been identified. Alternatively, while the origin of the behavior may have been solely the physical motion and acceleration, traveling bipedally may have conferred an advantage, such as easier maneuvering, that was then exploited.
Evolution of bipedalism
Reptile origins
Bipedalism was common in all major groups of dinosaurs. Phylogenetic studies indicate that bipedalism in dinosaurs arose from one common ancestor, while quadrupedalism arose in multiple lines, coinciding with an increase in body size. To understand how bipedalism arose in dinosaurs, scientists have studied extant facultatively bipedal lizards, especially of the clade Squamata. The proposed explanation for the evolution of bipedalism in dinosaurs is that it arose in smaller carnivores that were competing with larger carnivores. The need for speed and agility prompted the adaptation of larger hind-limb muscles, which in turn prompted the shift to facultative bipedalism, where the weaker front legs would not slow them down. Facultatively bipedal dinosaurs encountered ecological pressures for longer periods of high speed and agility, and so longer periods of bipedalism, until eventually they became continually bipedal. This explanation implies that facultative bipedalism leads to obligate bipedalism.
In lizards, bipedal running developed fairly early in their evolutionary history. Fossils suggest this behavior began approximately 110 million years ago. Although the advantage of facultative bipedalism in lizards remains unclear, increased speed or acceleration is possible, and facultative bipedalism promotes phenotypic diversity which may lead to adaptive radiation as species adapt to fill different niches.
Primate origins
Studying the biomechanics of motion contributes to understanding the morphology of both modern primates and the fossil records. Bipedal locomotion appears to have evolved separately in different primates including humans, bonobos, and gibbons. The evolutionary explanation for the development of this behavior is often linked to load-carrying in chimpanzees, bonobos, macaques, capuchin monkeys, and baboons. The ability to carry more materials can be either a selective pressure or a significant advantage, especially in uncertain environments where commodities must be collected when found. If not, they are more likely to become unavailable later on. Load carrying affects limb mechanics by increasing the force on the lower limbs, which may affect the evolution of anatomy in facultatively bipedal primates.
Possible selective pressures for facultative bipedalism include resource gathering, such as food, and physical advantages. Great apes that engage in male-male fights have an advantage when standing on their hind legs, as this allows them to use their forelimbs to strike their opponent. In primates, bipedal locomotion may allow them to carry more resources at one time, which could confer an advantage especially if the resources are rare. Additionally, standing on two legs may allow them to reach more food, as chimpanzees do. Other specific advantages, such as being able to wade in water or throw stones, may also have contributed to the evolution of facultative bipedalism. In other primates, various arboreal adaptations may have affected the evolution of bipedalism as well. Longer forelimbs would be more advantageous when moving through trees that are spaced further apart, making the changes in structure and purpose of the forelimbs due to vertical climbing and brachiation more dramatic. These changes make quadrupedal walking more difficult and contribute to the shift to bipedal locomotion. Gibbons and sifakas are examples of this: their movement through trees makes quadrupedal walking difficult, resulting in bipedal walking and galloping, respectively. Arboreal adaptations making bipedalism advantageous are supported by research that shows that hip and thigh muscles involved in bipedal walking often most resemble those used in climbing.
| Biology and health sciences | Ethology | Biology |
32993239 | https://en.wikipedia.org/wiki/Air%20medical%20services | Air medical services | Air medical services are the use of aircraft, including both fixed-wing aircraft and helicopters to provide various kinds of urgent medical care, especially prehospital, emergency and critical care to patients during aeromedical evacuation and rescue operations.
History
During World War I, air transport was used to provide medical evacuation – either from frontline areas or the battlefield itself.
In 1928, in Australia, John Flynn founded the Flying Doctor Service (later the Royal Flying Doctor Service), to provide a wide range of medical services to civilians in remote areas; these included from routine consultations with travelling general practitioners, to air ambulance evacuations and other emergency medical services.
Fixed wing military air ambulances came into regular use during World War II. Helicopters became more commonly used for such purposes during the Korean and Vietnam wars.
Later, helicopters were introduced to civilian health care, especially for shorter distances, in and around large cities: transporting paramedics or specialist doctors as needed and transporting patients to hospitals, especially for major trauma cases. Fixed-wing aircraft remained in use for long-distance medical transport.
Advantages
Air medical services can travel faster and operate in a wider coverage area than a land ambulance. This makes them particularly useful in sparsely-populated rural areas.
Air medical services have a particular advantage for major trauma injuries. The controversial theory of the golden hour suggests that major trauma patients should be transported as quickly as possible to a specialist trauma center. Therefore, medical responders in a helicopter can provide both a higher level of care at the scene of a trauma and faster transport to a trauma center. They can also provide critical care when transporting patients from community hospitals to trauma centers.
Disadvantages
Air ambulance transport is expensive, and if utilised poorly is therefore not cost-effective. When inappropriately deployed to a patient close to a hospital, an air ambulance may add delay to the patient reaching hospital. In research from 1996, air ambulance services in England and Wales demonstrated no evidence of improvement in vehicle response times (i.e. time from 999 call to an ambulance vehicle being on-scene with the patient) for air ambulance attended patients compared to those attended by a land ambulance. The same review found that patients did not arrive at hospital any more quickly when attended by an air ambulance. When the same authors looked at health outcomes in Cornwall and London, they found no evidence that the attendance of an air ambulance (HEMS) service improved survival in trauma patients.
Indications for air transport
Effective use of helicopter services for trauma depends on the ground responder's ability to determine whether the patient's condition warrants air medical transport. Protocols and training must be developed to ensure appropriate triage criteria are applied. Excessively stringent criteria can prevent rapid care and transport of trauma victims; excessively relaxed criteria can expose patients unnecessarily to hazardous weather conditions or other aviation-related risks.
Crew and patient safety is the single most important factor to be considered when deciding whether to transport a patient by helicopter. Weather, air traffic patterns, and distances (such as from the trauma scene to the closest level 1 trauma centre) must also be considered. A flight may also be cancelled based on the comfort of the flight crew with the flight. With one pilot and two medical crew, the general rule of safety is "three to go, one to say no": any one of the three crew members can veto the flight.
Some have questioned the safety of air medical services. While the number of crashes may be increasing, the number of programs and use of services has also increased. Factors associated with fatal crashes of medical transport helicopters include flying at night and during bad weather, and postcrash fires.
Air ambulance
An air ambulance is a specially outfitted helicopter or fixed-wing aircraft that transports injured or sick people in a medical emergency or over distances or terrain impractical for a conventional ground ambulance. Fixed-wing aircraft are also more often used to move patients over long distances and for repatriation from foreign countries. These and related operations are called aeromedical. In some circumstances, the same aircraft may be used to search for missing or wanted people.
Like ground ambulances, air ambulances are equipped with medical equipment vital to monitoring and treating injured or ill patients. Common equipment for air ambulances includes medications, ventilators, ECGs and monitoring units, CPR equipment, and stretchers. A medically staffed and equipped air ambulance provides medical care in flight—while a non-medically equipped and staffed aircraft simply transports patients without care in flight. Military organizations and NATO refer to the former as medical evacuation (MEDEVAC) and to the latter as casualty evacuation (CASEVAC).
Air Traffic Control (ATC) grants special treatment to air ambulance operations, much as ground ambulances receive priority when using lights and a siren, but only when they are actively operating with a patient. When this happens, air ambulance aircraft take the call sign MEDEVAC (formerly LIFEGUARD) and receive priority handling in the air and on the ground.
History
Military
As with many Emergency Medical Service (EMS) innovations, treating patients in flight originated in the military. The concept of using aircraft as ambulances is almost as old as powered flight itself. Although balloons were not used to evacuate wounded soldiers at the Siege of Paris in 1870, air evacuation was experimented with during the First World War.
The first recorded British ambulance flight took place in 1917 in the Ottoman Empire, when a soldier in the Camel Corps who had been shot in the ankle was flown to hospital in a de Havilland DH9 in 45 minutes; the same journey by land would have taken some three days. In the 1920s several services, both official and unofficial, started up in various parts of the world. Aircraft were still primitive at the time, with limited capabilities, and the effort received mixed reviews.
Exploration of the idea continued, however, and France and the United Kingdom used fully organized air ambulance services during the African and Middle Eastern Colonial Wars of the 1920s. In 1920, the British, while suppressing the "Mad Mullah" in Somaliland, used an Airco DH.9A fitted out as an air ambulance. It carried a single stretcher under a fairing behind the pilot. The French evacuated over 7,000 casualties during that period. By 1936, an organized military air ambulance service evacuated wounded from the Spanish Civil War for medical treatment in Nazi Germany; this service continued during the Second World War.
The first use of medevac with helicopters was the evacuation of three British pilot combat casualties by a US Army Sikorsky R-4 in Burma during the Second World War, and the first dedicated use of helicopters by U.S. forces occurred during the Korean War, between 1950 and 1953. The French used light helicopters in the First Indochina War. While popularly depicted as simply removing casualties from the battlefield (which they did), helicopters in the Korean War also moved critical patients to hospital ships after initial emergency treatment in field hospitals.
Knowledge and expertise of use of air ambulances evolved parallel to the aircraft themselves. By 1969, in Vietnam, the use of specially trained medical corpsmen and helicopter air ambulances led U.S. researchers to determine that servicemen wounded in battle had better rates of survival than motorists injured on California freeways. This inspired the first experiments with the use of civilian paramedics in the world. The US military recently employed UH-60 Black Hawk helicopters to provide air ambulance service during the Iraq War to military personnel and civilians. The use of military aircraft as battlefield ambulances continues to grow and develop today in a variety of countries, as does the use of fixed-wing aircraft for long-distance travel, including repatriation of the wounded. Currently, a NATO working group is investigating unpiloted aerial vehicles (UAVs) for casualty evacuation.
Civilian
The first civilian uses of aircraft as ambulances were probably incidental. In northern Canada, Australia, and in Scandinavian countries, remote, sparsely populated settlements are often inaccessible by road for months at a time, or even year-round. In some places in Scandinavia, particularly in Norway, the primary means of transportation between communities is by boat. Early in aviation history, many of these communities began to rely on civilian "bush" pilots, who fly small aircraft and transport supplies, mail, and visiting doctors or nurses. Bush pilots probably performed the first civilian air ambulance trips, albeit on an ad hoc basis—but clearly, a need for these services existed. In the early 1920s, Sweden established a standing air ambulance system, as did Siam (Thailand). In 1928 the first formal, full-time air ambulance service was established in the Australian outback. This organization became the Royal Flying Doctor Service and still operates. In 1934, Marie Marvingt established Africa's first civil air ambulance service, in Morocco. In 1936, air ambulance services were established as part of the Highlands and Islands Medical Service to serve more remote areas of Highland Scotland. Air ambulances quickly established their usefulness in remote locations, but their role in developed areas developed more slowly. After World War II, the Saskatchewan government in Regina, Saskatchewan, Canada, established the first civilian air ambulance in North America. The Saskatchewan government had to consider remote communities and great distances in providing health care to its citizens. The Saskatchewan Air Ambulance service continues to be active as of 2023. J. Walter Schaefer founded the first air ambulance service in the U.S., in 1947, in Los Angeles. The Schaefer Air Service operated as part of Schaefer Ambulance Service.
Two research programs were implemented in the U.S. to assess the impact of medical helicopters on mortality and morbidity in the civilian arena. Project CARESOM was established in Mississippi in 1969. Three helicopters were purchased through a federal grant and located strategically in the north, central, and southern areas of the state. Upon termination of the grant, the program was considered a success and each of the three communities were given the opportunity to continue the helicopter operation. Only the one located in Hattiesburg, Mississippi did so, and it was therefore established as the first civilian air medical program in the United States. The second program, the Military Assistance to Safety and Traffic (MAST) system, was established in Fort Sam Houston in San Antonio in 1969. This was an experiment by the Department of Transportation to study the feasibility of using military helicopters to augment existing civilian emergency medical services. These programs were highly successful at establishing the need for such services. The remaining challenge was in how such services could be operated most cost-effectively. In many cases, as agencies, branches, and departments of the civilian governments began to operate aircraft for other purposes, these aircraft were frequently pressed into service to provide cost-effective air support to the evolving Emergency Medical Services.
As the concept was proven, dedicated civilian air ambulances began to appear. On November 1, 1970, the first permanent civil air ambulance helicopter, Christoph 1, entered service at the Hospital of Harlaching, Munich, Germany. The apparent success of Christoph 1 led to a quick expansion of the concept across Germany, with Christoph 10 entering service in 1975, Christoph 20 in 1981, and Christoph 51 in 1989. As of 2007, there are about 80 helicopters named after Saint Christopher, like Christoph Europa 5 (also serving Denmark), Christoph Brandenburg or Christoph Murnau am Staffelsee. Austria adopted the German system in 1983 when Christophorus 1 entered service at Innsbruck. In 1975, Hans Burghart, one of the inventors of civilian air rescue in Germany, presented the concept "Rescue Helicopters in Primary and Secondary Missions" at an academic conference in the US, which influenced aviation training at Fort Rucker, Alabama.
The first civilian, hospital-based medical helicopter program in the United States began operation in 1972. Flight For Life Colorado began with a single Alouette III helicopter, based at St. Anthony Central Hospital in Denver, Colorado. In Ontario, Canada, the air ambulance program began in 1977, and featured a paramedic-based system of care, with the presence of physicians or nurses being relatively unusual. The system, operated by the Ontario Ministry of Health, began with a single rotor-wing aircraft based in Toronto. An important difference in the Ontario program involved the emphasis of service. "On scene" calls were taken, although less commonly, and a great deal of the initial emphasis of the program was on the interfacility transfer of critical care patients. Operating today through a private contractor (ORNGE), the system operates 33 aircraft stationed at 26 bases across the province, performing both interfacility transfers and on-scene responses in support of ground-based EMS. Today, across the world, the presence of civilian air ambulances has become commonplace and is seen as a much-needed support for ground-based EMS systems. Elsewhere in Europe, such as in SFR Yugoslavia, the first air ambulances appeared in the 1980s; most of the fleet had previously seen military service. After highway car accidents increased in 1979, the Yugoslav government decided to buy new helicopters and to repurpose old ones.
Organization
Air ambulance service, sometimes called Aeromedical Evacuation or simply Medevac, is provided by a variety of different sources in different places in the world. There are a number of reasonable methods of differentiating types of air ambulance services. These include military/civilian models and services that are government-funded, fee-for-service, donated by a business enterprise, or funded by public donations. It may also be reasonable to differentiate between dedicated aircraft and those with multiple purposes and roles. Finally, it is reasonable to differentiate by the type of aircraft used, including rotary-wing, fixed-wing, or very large aircraft. The military role in civilian air ambulance operations is described in the History section. Each of the remaining models is explored separately. This information applies to air ambulance systems performing emergency service. In almost all jurisdictions, private aircraft charter companies provide non-emergency air ambulance service on a fee-for-service basis.
Government operated
In some cases, governments provide air ambulance services, either directly or via a negotiated contract with a commercial service provider, such as an aircraft charter company. Such services may focus on critical care patient transport, support ground-based EMS on scenes, or may perform a combination of these roles. In almost all cases, the government provides guidelines to hospitals and EMS systems to control operating costs, and may specify operating procedures in some level of detail to limit potential liability. However, the government almost always takes a 'hands-off' approach to the actual running of the system, relying instead on local managers with subject-matter expertise (physicians and aviation executives). Ontario's ORNGE program and the Polish Lotnicze Pogotowie Ratunkowe (LPR) are examples of this type of operating system. The Polish LPR is a national system covering the entire country, funded by the government through the Ministry of Health but run independently; there is no independent HEMS operator in Poland. In North East Ohio, including Cleveland, the Cuyahoga County-owned MetroHealth Medical Center uses its Metro Life Flight to transport patients to Metro's level I trauma and burn unit. There are five helicopters for North East Ohio and, in addition, Metro Life Flight has one fixed-wing aircraft.
In the United Kingdom, the Scottish Ambulance Service operates two helicopters and two fixed-wing aircraft twenty-four hours per day.
Multiple purpose
In some jurisdictions, cost is a major consideration, and the presence of dedicated air ambulances is simply not practical. In these cases, the aircraft may be operated by another government or quasi-government agency and made available to EMS for air ambulance service when required. In southern New South Wales, Australia, the helicopter that responds as an air ambulance is actually operated by the local hydroelectric utility, with the New South Wales Ambulance Service providing paramedics, as required. In some cases, local EMS provides the flight paramedic to the aircraft operator as-needed. In the case of the Los Angeles County Fire Department, the helicopters are brush fire choppers also configured as air ambulances with a paramedic provided from whichever fire department rescue unit has responded.
Sometimes the air ambulance may be run as a dual concern with another governmental body - for example, the Wiltshire Air Ambulance was run as a joint Ambulance Service and police unit until 2014.
In other cases, the paramedic staffs the aircraft full-time, but has a dual function. In the case of the Maryland State Police, for example, the flight paramedic is a serving State Trooper whose job is to act as the Observer Officer on a police helicopter when not required for medical emergencies.
Fee-for-service
In many cases, local jurisdictions do not charge for air ambulance service, particularly for emergency calls. However, the cost of providing air ambulance services is considerable and many, including government-run operations, charge for service. Organizations such as service aircraft charter companies, hospitals, and some private-for-profit EMS systems generally charge for service. Within the European Union, almost all air ambulance service is on a fee-for-service basis, except for systems that operate by private subscription. Many jurisdictions have a mix of operation types. Fee-for-service operators are generally responsible for their own organization but may have to meet government licensing requirements. Rega of Switzerland is an example of such a service.
Donated by business
In some cases, a local business or even a multi-national company may choose to fund local air ambulance service as a goodwill or public relations gesture. Examples of this are common in the European Union, where in London the Virgin Corporation previously donated to the Helicopter Emergency Medical Service, and in Germany and the Netherlands a large number of the 'Christoph' air ambulance operations are actually funded by ADAC, Germany's largest automobile club and DRF Luftrettung. In Australia and New Zealand, many air ambulance helicopter operations are sponsored by the Westpac Bank. In these cases, the operation may vary but is the result of a carefully negotiated agreement between government, EMS, hospitals, and the donor. In most cases, while the sponsor receives advertising exposure in exchange for funding, they take a 'hands-off' approach to daily operations, relying instead on subject matter specialists.
Public donations
In some cases, air ambulance services may be provided by means of voluntary charitable fundraising, as opposed to government funding, or they may receive limited government subsidy to supplement local donations. Some countries, such as the U.K., use a mix of such systems. In Scotland, the parliament has voted to fund air ambulance service directly, through the Scottish Ambulance Service. In England and Wales, however, the service is funded on a charitable basis via a number of local charities for each region covered.
Great strides have been made in the UK, with the 'Association of Air Ambulance (AAA)'. This organization is widely credited for having created the political climate that made the helicopter industry and National Health Service recognise the enormous contribution charities make to trauma care in the United Kingdom. In 2013, the AAA published the "Framework for a High Performing Air Ambulance Service" which details many of the developments from 2008 to 2013.
In recent years, the service has moved towards the physician-paramedic model of care. This has necessitated some charities commissioning clinical governance services; however, many air ambulances operate under the tasking ambulance service's clinical governance. The AAA now publishes Best Practice Guidance on a range of operational and clinical functions and provides a code of conduct that all full members, both ambulance services and charities, must uphold.
Memorial Hermann Life Flight is a not-for-profit hospital-based critical care air ambulance service in Houston, Texas, USA. As of 2023, it operates six EC-145 twin-engine helicopters. The service relies on community support and fundraising efforts. Memorial Hermann Life Flight operates from the John S. Dunn Helistop, one of the busiest helipads in the world, with space for four helicopters.
"Heavy-Lift"
A final area of distinction is the operation of large, generally fixed-wing air ambulances. In the past, the infrequency of civilian demand for such a service confined such operations to the military, which requires them to support overseas combat operations. Military organizations capable of this type of specialized operation include the United States Air Force, the German Luftwaffe, and the British Royal Air Force. The Swedish National Air Medevac (SNAM) is an exception to the military-only rule: the system is owned by the Swedish Civil Contingencies Agency (Myndigheten för samhällsskydd och beredskap), and the 737-800 aircraft is provided under contract by Scandinavian Airlines when required. Each operates aircraft staffed by physicians, nurses, and corpsmen/technicians, and each can provide long-distance transport with full medical support to dozens of patients simultaneously.
However, in recent years, exceptions to the "military-only" rule have grown with the need to quickly transport patients to facilities that provide higher levels of care or to repatriate individuals. Air medical companies use both large and small fixed-wing aircraft configured to provide levels of care that can be found in Trauma centres for individuals who subscribe to their own health insurance or affiliated travel insurance and protection plans.
Standards
Aircraft and flight crews
In most jurisdictions, air ambulance pilots must have a great deal of experience in piloting their aircraft because the conditions of air ambulance flights are often more challenging than regular non-emergency flight services. After a spike in air ambulance crashes in the United States in the 1990s, the U.S. government and the Commission on Air Medical Transportation Systems (CAMTS) stepped up the accreditation and air ambulance flight requirements, ensuring that all pilots, personnel, and aircraft meet much higher standards than previously required. The resulting CAMTS accreditation, which applies only in the United States, includes the requirement for an air ambulance company to own and operate its own aircraft. Some air ambulance companies, realizing it is virtually impossible to have the correct medicalized aircraft for every mission, instead charter aircraft based on the mission-specific requirements.
While in principle CAMTS accreditation is voluntary, a number of government jurisdictions require companies providing medical transportation services to have CAMTS accreditation to be licensed to operate. This is an increasing trend as state health services agencies address the issues surrounding the safety of emergency medical services flights. Some examples are the states of Colorado, New Jersey, New Mexico, Utah, and Washington. According to the rationale used to justify the state of Washington's adoption of the accreditation requirements, requiring accreditation of air ambulance services provides assurance that the service meets national public safety standards. The accreditation is done by professionals who are qualified to determine air ambulance safety. In addition, compliance with accreditation standards is checked on a continual basis by the accrediting organization. Accreditation standards are periodically revised to reflect the dynamic, changing environment of medical transport, with considerable input from all disciplines of the medical profession.
Other U.S. states require either CAMTS accreditation or a demonstrated equivalent, such as Rhode Island, and Texas, which has adopted CAMTS' Accreditation Standards (Sixth Edition, October 2004) as its own. In Texas, an operator not wishing to become CAMTS accredited must submit to an equivalent survey by state auditors who are CAMTS-trained. Virginia and Oklahoma have also adopted CAMTS accreditation standards as their state licensing standards. While the original intent of CAMTS was to provide an American standard, air ambulance services in a number of other countries, including three in Canada and one in South Africa, have voluntarily submitted themselves to CAMTS accreditation.
In the UK, the AAA has a Code of Conduct that binds together one of the most regulated areas of operation. It brings the Fundraising Standards Board, the CAA/EASA and the CQC together, ensuring that fundraising, air and clinical operations are in line with national regulation and best practice. The code goes further, with an expectation of mutual support and of working within its policy and best practice guides.
Medical control
The nature of the air operation frequently determines the type of medical control required. In most cases, an air ambulance staffer is considerably more skilled than a typical paramedic, so medical control permits them to exercise more medical decision-making latitude. Assessment skills tend to be considerably higher, and, particularly on inter-facility transfers, permit the inclusion of functions such as reading x-rays and interpretation of lab results. This allows for planning, consultation with supervising physicians, and issuing contingency orders in case they are required during flight. Some systems operate almost entirely off-line, using protocols for almost all procedures and only resorting to on-line medical control when protocols have been exhausted. Some air ambulance operations have full-time, on-site medical directors with pertinent backgrounds (e.g., emergency medicine); others have medical directors who are only available by pager. For those systems operating on the Franco-German model, the physician is almost always physically present, and medical control is not an issue.
Equipment and interiors
Most aircraft used as air ambulances, with the exception of charter aircraft and some military aircraft, are equipped for advanced life support and have interiors that reflect this. The challenges in most air ambulance operations, particularly those involving helicopters, are the high ambient noise levels and limited amounts of working space, both of which create significant issues for the provision of ongoing care. While equipment tends to be high-level and very conveniently grouped, it may not be possible to perform some assessment procedures, such as chest auscultation, while in flight. In some types of aircraft, the aircraft's design means that the entire patient is not physically accessible in flight. Additional issues occur with respect to pressurization of the aircraft. Not all aircraft used as air ambulances in all jurisdictions have pressurized cabins and those that do typically tend to be pressurized to only 10,000 feet above sea level. These pressure changes require advanced knowledge by flight staff with respect to the specifics of aviation medicine, including changes in physiology and the behaviour of gases.
There are a large variety of helicopter makes that are used for the civilian HEMS models. The commonly used types are the Bell 206, 407, and 429, Eurocopter AS350, BK117, EC130, EC135, EC145, and the Agusta Westland 109, 169 & 139, MD Explorer and Sikorsky S-76. Fixed-wing aircraft varieties commonly include the Learjet 35 and 36, Learjet 31, King Air 90, King Air 200, Pilatus PC-12 & PC-24, and Piper Cheyenne. Due to the configuration of the medical crew and patient compartments, these aircraft are normally configured to only transport one patient but some can be configured to transport two patients if so needed. Additionally, helicopters have stricter weather minimums that they can operate in and commonly do not fly at altitudes over 10,000 feet above sea level.
Challenges
Beginning in the 1990s, the number of air ambulance crashes in the United States, mostly involving helicopters, began to climb. By 2005, this number had reached a record high, with crash rates from 2000 to 2005 more than double those of the previous five years. To some extent, these numbers had been deemed acceptable: it was understood that the very nature of air ambulance operations meant that, because a life was at stake, air ambulances would often operate at the very edge of their safety envelopes, going on missions in conditions where no other civilian pilot would fly. As a result, nearly fifty percent of all EMS personnel deaths in the United States occur in air ambulance crashes. In 2006, the United States National Transportation Safety Board (NTSB) concluded that many air ambulance crashes were avoidable, eventually leading to the improvement of government standards and CAMTS accreditation.
Cost-effectiveness
Whilst some air ambulances do have effective methods of funding, in England, they remain almost entirely charity funded, as improved cost-benefit ratios are generally achieved with land-based attendance and transfers. Health outcomes, for example from London's Helicopter Emergency Medical Service, remain uncertain.
Patient survival versus ground ambulance
Although cost-effectiveness may be a consideration in some contexts, in the United States, the primary measure of effectiveness is patient outcomes. Improvements in ground ambulance prehospital care have created uncertainty as to whether helicopter emergency medical services transport is associated with better patient outcomes compared with ground transportation. A U.S. study using 2014 data found that after adjusting for age, Injury Severity Score, and gender, trauma patients who were transferred by helicopter were 57.0% less likely to die than those transferred by ground ambulance (95% CI 0.41 to 0.44, p<0.0001). A retrospective review study reached a similar conclusion: "Patients transported by helicopter to an urban trauma centre ... had improved survival than those arriving by other means of transport." Patient survival is not the only possible measure of patient outcome. In the case of stroke patients, for instance, various outcome measures could be used.
Dispatch of air medical services versus ground ambulance
There are many considerations in determining whether to dispatch air medical services. Availability, distance and flight conditions are primary considerations. Even when available, an air ambulance is not always the faster choice in comparison to ground ambulances. Ground ambulances are more numerous and more ubiquitous, so will often be closer to the scene. Ground ambulances can depart their base almost immediately, while air medical services must complete preflight routines prior to departure. A nearby suitable landing site may not be available due to trees, wires, etc. Air medical services tend to have an advantage where ground access routes to the hospital are congested and for locations more distant from hospitals. In some situations, it may be desirable to dispatch a ground ambulance that can arrive on the scene first to provide immediate patient care, and an air ambulance to transport the patient(s) to a trauma center. It also should be borne in mind that faster may not always be better. In the context of interhospital transport, it is sometimes better to wait for air medical services with a specialized team to transport a patient even though a local land ambulance and an ad hoc local medical team may be able to transfer a patient from a remote hospital to definitive care faster than air ambulance. In the United States, insurance coverage may be a factor. For example, the Coverage Policy Manual for Arkansas Blue Cross BlueShield, a not-for-profit mutual insurance company, specifies the circumstances in which costs for air medical services are covered.
Personnel
The medical personnel of a helicopter ambulance have historically been a physician/nurse, paramedic/nurse, or nurse/nurse combination. The need for a physician/nurse combination has diminished as protocol-driven and evidence-based care by nurses and other clinicians has expanded, and the inclusion of respiratory therapists in all modes of air transport is becoming more prominent.
Retrieval doctor/physician
Retrieval doctor/physician: Criteria for working as a medical doctor (known as a "physician" in the USA) in aeromedical services depend on the jurisdiction. In Australia, where aeromedical retrieval medicine is a well-established medical field, retrieval doctors must be experienced in a critical care specialty (i.e. anaesthesia, emergency medicine, intensive care medicine) as fully qualified specialists; specialty registrars in advanced stages of training; or general practitioners (i.e. family physicians) with broad experience in critical care and obstetrics. In the UK, doctors working in HEMS are usually experienced in anaesthesia, emergency medicine, acute medicine or intensive care medicine. Some general practitioners also work for air ambulances. A formal training programme for pre-hospital emergency medicine (PHEM) in the UK now aims to produce PHEM consultants who have undergone specific training in pre-hospital care and transfer medicine.
Flight paramedic
Flight paramedic: A licensed paramedic with additional training as a certified flight paramedic (FP-C) or a master's degree. The flight paramedic is usually highly trained with at least five years of autonomous clinical experience in high acuity environments of both pre-hospital emergency medicine and critical care transport. Flight paramedics in the United States may be certified as a FP-C or a CCEMT-P.
Flight nurse
Flight nurse: a nurse specialized in patient transport in the aviation environment. The flight nurse is a member of an aeromedical evacuation crew on helicopters and airplanes, providing in-flight management and care for all types of patients. Other responsibilities may also include planning and preparing for aeromedical evacuation missions and preparing a patient care plan to facilitate patient care, comfort and safety. Flight nurses may obtain certification in Emergency Nursing (CEN), Flight Nursing (CFRN) or Critical Care (CCRN).
Civilian flight nurses
Civilian flight nurses may work for hospitals, federal, state, and local governments, private medical evacuation firms, fire departments or other agencies. They have training and medical direction that allows them to operate with a broader scope of practice and more autonomy than many other nurses. Some states require that flight nurses must also have paramedic or EMT certification to respond to pre-hospital scenes.
Military flight nurses
The military flight nurse performs as a member of the aeromedical evacuation crew, and functions as the senior medical member of the aeromedical evacuation team on Continental United States (CONUS), intra-theater and inter-theater flights - providing for in-flight management and nursing care for all types of patients. Other responsibilities include planning and preparing for aeromedical evacuation missions and preparing a patient positioning plan to facilitate patient care, comfort and safety.
Flight nurses evaluate individual patients' in-flight needs and request appropriate medications, supplies and equipment, providing continuing nursing care from the originating to the destination facility. They act as a liaison between medical and operational aircrews and support personnel to promote patient comfort and to expedite the mission, and also initiate emergency treatment for in-flight medical emergencies.
Transport respiratory practitioner
Transport therapist: A highly trained respiratory practitioner (also called a respiratory therapist), typically utilized in long-distance transport situations, though also able to provide care during shorter transfers. Transport therapists may obtain Adult Critical Care Specialist (ACCS), Neonatal Transport Specialist (NPT) and Neonatal Pediatric Specialist (NPS) certifications from the National Board for Respiratory Care.
Associations and organizations
Aerospace Medical Association: an umbrella group providing a forum for many different disciplines to come together and share their expertise for the benefit of all persons involved in air and space travel.
Association of Air Medical Services: a non-profit 501(c)(6) trade association.
National or local organizations specializing in aeromedicine:
British Columbia Ambulance Service, Airevac Program
Royal Flying Doctor Service (Australia)
| Technology | Transport | null |
2038497 | https://en.wikipedia.org/wiki/Skipjack%20tuna | Skipjack tuna | The skipjack tuna (Katsuwonus pelamis) is a perciform fish in the tuna family, Scombridae, and is the only member of the genus Katsuwonus. It is also known as katsuo, arctic bonito, mushmouth, oceanic bonito, striped tuna or victor fish. It grows up to 1 m (3 ft) in length. It is a cosmopolitan pelagic fish found in tropical and warm-temperate waters. It is a very important species for fisheries. It is also the namesake of the USS Skipjack.
Description
It is a streamlined, fast-swimming pelagic fish common in tropical waters throughout the world, where it inhabits surface waters in large shoals (up to 50,000 fish), feeding on fish, crustaceans, cephalopods, and mollusks. It is an important prey species for sharks and large pelagic fishes and is often used as live bait when fishing for marlin. It has no scales, except on the lateral line and the corselet (a band of large, thick scales forming a circle around the body behind the head). It commonly reaches fork lengths of up to about 80 cm; its maximum recorded fork length is about 110 cm, and its maximum recorded mass about 34.5 kg. Determining the age of skipjack tuna is difficult, and estimates of its potential lifespan range between 8 and 12 years.
Skipjack tuna are batch spawners. Spawning occurs year-round in equatorial waters but becomes more seasonal farther from the equator. Fork length at first spawning is about 45 cm. The fish is also known for its potent smell.
Skipjack tuna has the highest percentage of skeletal muscle devoted to locomotion of all animals, at 68% of the animal's total body mass.
Skipjack tuna are highly sensitive to environmental conditions and changes. Climate change effects are significant in marine ecosystems, and ecological factors may change fish distribution and catchability.
Fisheries
It is an important commercial and game fish, usually caught using purse seine nets, and is sold fresh, frozen, canned, dried, salted, and smoked. In 2018, landings of about 3.2 million tonnes were reported, the third highest of any marine capture fishery (after Peruvian anchoveta and Alaska pollock).
Countries recording large amounts of skipjack catches include the Maldives, France, Spain, Malaysia, Sri Lanka, and Indonesia.
Skipjack is the most fecund of the main commercial tunas, and its population is considered sustainable against its current consumption. Its fishing is still controversial due to the methodology, with rod and reel or fishery options being promoted as ecologically preferable.
Purse seine methods are considered unsustainable by some authorities due to excess bycatch, although bycatch is said to be much reduced if fish aggregation devices are not used. These considerations have led to the availability of canned skipjack marked with the fishing method used to catch it.
Skipjack is considered to have "moderate" mercury contamination. As a result, pregnant women are advised against eating large quantities. In addition, skipjack's livers were tested globally for tributyltin (TBT) contamination. TBT is an organotin compound introduced into marine ecosystems through antifouling paint used on ship hulls and has been determined to be very toxic. About 90% of skipjack tested positive for contamination, especially in Southeast Asia, where regulations of TBT use are less rigorous than in Europe or the US.
As food
Japan
Skipjack tuna is used extensively in Japanese cuisine, where it is known as . It is eaten raw in sushi and sashimi, as well as slightly seared in katsuo tataki. It is also smoked and dried to make katsuobushi, and the shavings are commonly used to make dashi (soup stock). Katsuobushi flakes are also used as seasoning, such as in onigiri (rice balls) or on top of tofu. The raw viscera of skipjack tuna is salted and fermented to make shutō, a type of shiokara.
The fish's fat content changes during migrations along the Japanese islands. When they migrate north in summer, they are called hatsugatsuo ("first katsuo") or noborigatsuo ("ascending katsuo"), and have a lesser amount of fat. When they migrate south in autumn, they are called modorigatsuo ("returning katsuo") or kudarigatsuo ("descending katsuo"), and have a high level of fat.
Other places
In Indonesian cuisine, skipjack tuna is known as cakalang. The most popular Indonesian dish made from skipjack tuna is cakalang fufu from Minahasa. It is a cured and smoked skipjack tuna dish, made by cooking the fish after clipping it to a bamboo frame. Skipjack, known as kalhubilamas in the Maldives, is integral to Maldivian cuisine.
Skipjack tuna is an important fish in the native cuisine of Hawaii (where it is known as aku) and throughout the Pacific islands. Hawaiians prefer to eat aku either raw as a sashimi or poke or seared in Japanese tataki style.
The trade in pickled skipjack tuna is a driving force behind the commercial fishery of this species in Spain.
| Biology and health sciences | Acanthomorpha | Animals |
2039039 | https://en.wikipedia.org/wiki/Spherical%20astronomy | Spherical astronomy | Spherical astronomy, or positional astronomy, is a branch of observational astronomy used to locate astronomical objects on the celestial sphere, as seen at a particular date, time, and location on Earth. It relies on the mathematical methods of spherical trigonometry and the measurements of astrometry.
This is the oldest branch of astronomy and dates back to antiquity. Observations of celestial objects have been, and continue to be, important for religious and astrological purposes, as well as for timekeeping and navigation. The science of actually measuring positions of celestial objects in the sky is known as astrometry.
The primary elements of spherical astronomy are celestial coordinate systems and time. The coordinates of objects on the sky are listed using the equatorial coordinate system, which is based on the projection of Earth's equator onto the celestial sphere. The position of an object in this system is given in terms of right ascension (α) and declination (δ). The latitude and local time can then be used to derive the position of the object in the horizontal coordinate system, consisting of the altitude and azimuth.
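As an illustration of that conversion, here is a minimal sketch (the function name and test values are our own, not drawn from any particular library) that turns right ascension and declination into altitude and azimuth, given the observer's latitude and the local sidereal time:

```python
import math

def equatorial_to_horizontal(ra_h, dec_deg, lat_deg, lst_h):
    """Convert right ascension (hours) and declination (degrees) to
    altitude/azimuth (degrees) for an observer at latitude lat_deg,
    at local sidereal time lst_h (hours)."""
    H = math.radians((lst_h - ra_h) * 15.0)   # hour angle, 15 deg per hour
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)

    alt = math.asin(math.sin(lat) * math.sin(dec)
                    + math.cos(lat) * math.cos(dec) * math.cos(H))
    # Azimuth measured from north, increasing towards east.
    az = math.atan2(-math.cos(dec) * math.sin(H),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.cos(H) * math.sin(lat))
    return math.degrees(alt), math.degrees(az) % 360.0

# A star crossing the meridian south of the zenith: altitude 60, azimuth 180.
print(equatorial_to_horizontal(ra_h=6.0, dec_deg=20.0, lat_deg=50.0, lst_h=6.0))
```

The test case is easy to verify by hand: on the meridian the hour angle is zero, so the altitude reduces to 90° minus the difference between latitude and declination.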
The coordinates of celestial objects such as stars and galaxies are tabulated in a star catalog, which gives the position for a particular year. However, the combined effects of axial precession and nutation will cause the coordinates to change slightly over time. The effects of these changes in Earth's motion are compensated by the periodic publication of revised catalogs.
To determine the position of the Sun and planets, an astronomical ephemeris (a table of values that gives the positions of astronomical objects in the sky at a given time) is used, which can then be converted into suitable real-world coordinates.
The unaided human eye can perceive about 6,000 stars, of which about half are below the horizon at any one time. On modern star charts, the celestial sphere is divided into 88 constellations. Every star lies within a constellation. Constellations are useful for navigation. Polaris lies nearly due north to an observer in the Northern Hemisphere. This pole star is always at a position nearly directly above the North Pole.
Positional phenomena
Planets which are in conjunction form a line which passes through the center of the Solar System.
The ecliptic is the plane which contains the orbit of a planet, usually in reference to Earth.
Elongation refers to the angle formed by a planet, with respect to the system's center and a viewing point such as Earth (a computational sketch follows this list).
A quadrature occurs when the position of a body (moon or planet) is such that its elongation is 90° or 270°; i.e. the body-earth-sun angle is 90°
Superior planets have a larger orbit than Earth's, while the inferior planets (Mercury and Venus) orbit the Sun inside Earth's orbit.
A transit may occur when an inferior planet passes through a point of conjunction.
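Elongation, and hence quadrature, can be computed directly from heliocentric position vectors. The sketch below uses invented coordinates purely for illustration; it measures elongation as the angle at Earth between the directions to the Sun and to the planet, so a result near 90° would indicate quadrature.

```python
import math

def elongation_deg(planet, earth):
    """Angle at Earth between the Sun and a planet, in degrees,
    given heliocentric position vectors (Sun at the origin)."""
    to_sun = [-e for e in earth]
    to_planet = [p - e for p, e in zip(planet, earth)]
    dot = sum(a * b for a, b in zip(to_sun, to_planet))
    norm = math.hypot(*to_sun) * math.hypot(*to_planet)
    return math.degrees(math.acos(dot / norm))

# Illustrative positions in astronomical units (not real ephemerides):
print(elongation_deg(planet=(0.0, 5.2, 0.0), earth=(1.0, 0.0, 0.0)))
```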
Ancient structures associated with positional astronomy include
Arkaim
Chichen Itza
The Medicine Wheel
The Pyramids
Stonehenge
The Temple of the Sun
| Physical sciences | Celestial sphere: General | Astronomy |
2039133 | https://en.wikipedia.org/wiki/Theoretical%20astronomy | Theoretical astronomy | Theoretical astronomy is the use of analytical and computational models based on principles from physics and chemistry to describe and explain astronomical objects and astronomical phenomena. Theorists in astronomy endeavor to create theoretical models and from the results predict observational consequences of those models. The observation of a phenomenon predicted by a model allows astronomers to select between several alternate or conflicting models as the one best able to describe the phenomena.
Ptolemy's Almagest, although a brilliant treatise on theoretical astronomy combined with a practical handbook for computation, nevertheless includes compromises to reconcile discordant observations with a geocentric model. Modern theoretical astronomy is usually assumed to have begun with the work of Johannes Kepler (1571–1630), particularly with Kepler's laws. The history of the descriptive and theoretical aspects of the Solar System mostly spans from the late sixteenth century to the end of the nineteenth century.
Theoretical astronomy is built on the work of observational astronomy, astrometry, astrochemistry, and astrophysics. Astronomy was early to adopt computational techniques to model stellar and galactic formation and celestial mechanics. From the point of view of theoretical astronomy, not only must the mathematical expression be reasonably accurate but it should preferably exist in a form which is amenable to further mathematical analysis when used in specific problems. Most of theoretical astronomy uses Newtonian theory of gravitation, considering that the effects of general relativity are weak for most celestial objects. Theoretical astronomy does not attempt to predict the position, size and temperature of every object in the universe, but by and large has concentrated upon analyzing the apparently complex but periodic motions of celestial objects.
Integrating astronomy and physics
"Contrary to the belief generally held by laboratory physicists, astronomy has contributed to the growth of our understanding of physics." Physics has helped in the elucidation of astronomical phenomena, and astronomy has helped in the elucidation of physical phenomena:
discovery of the law of gravitation came from the information provided by the motion of the Moon and the planets,
viability of nuclear fusion as demonstrated in the Sun and stars and yet to be reproduced on earth in a controlled form.
Integrating astronomy with physics involves understanding the physics and chemistry of the laboratory that lies behind cosmic events, so as to enrich our understanding of the cosmos and of these sciences as well.
Integrating astronomy and chemistry
Astrochemistry, the overlap of the disciplines of astronomy and chemistry, is the study of the abundance and reactions of chemical elements and molecules in space, and their interaction with radiation. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds, is of special interest because it is from these clouds that solar systems form.
Infrared astronomy, for example, has revealed that the interstellar medium contains a suite of complex gas-phase carbon compounds called polycyclic aromatic hydrocarbons (PAHs). These molecules, composed primarily of fused rings of carbon (either neutral or in an ionized state), are said to be the most common class of carbon compound in the galaxy. They are also the most common class of carbon molecule in meteorites and in cometary and asteroidal dust (cosmic dust). These compounds, as well as the amino acids, nucleobases, and many other compounds in meteorites, carry deuterium (²H) and isotopes of carbon, nitrogen, and oxygen that are very rare on Earth, attesting to their extraterrestrial origin. The PAHs are thought to form in hot circumstellar environments (around dying, carbon-rich red giant stars).
The sparseness of interstellar and interplanetary space results in some unusual chemistry, since symmetry-forbidden reactions cannot occur except on the longest of timescales. For this reason, molecules and molecular ions which are unstable on Earth can be highly abundant in space, for example the H₃⁺ ion. Astrochemistry overlaps with astrophysics and nuclear physics in characterizing the nuclear reactions which occur in stars, the consequences for stellar evolution, as well as stellar 'generations'. Indeed, the nuclear reactions in stars produce every naturally occurring chemical element. As the stellar 'generations' advance, the mass of the newly formed elements increases. A first-generation star uses elemental hydrogen (H) as a fuel source and produces helium (He). Hydrogen is the most abundant element, and it is the basic building block for all other elements as its nucleus has only one proton. Gravitational pull toward the center of a star creates massive amounts of heat and pressure, which cause nuclear fusion. Through this process of merging nuclear mass, heavier elements are formed. Lithium, carbon, nitrogen and oxygen are examples of elements that form in stellar fusion. After many stellar generations, very heavy elements are formed (e.g. iron and lead).
Tools of theoretical astronomy
Theoretical astronomers use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen.
Astronomy theorists endeavor to create theoretical models and figure out the observational consequences of those models. This helps observers look for data that can refute a model or help in choosing between several alternate or conflicting models.
Theorists also try to generate or modify models to take into account new data. Consistent with the general scientific approach, in the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Topics of theoretical astronomy
Topics studied by theoretical astronomers include:
stellar dynamics and evolution;
galaxy formation;
large-scale structure of matter in the Universe;
origin of cosmic rays;
general relativity and physical cosmology, including string cosmology and astroparticle physics.
Astrophysical relativity serves as a tool to gauge the properties of large scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole physics and the study of gravitational waves.
Astronomical models
Some widely accepted and studied theories and models in astronomy, now included in the Lambda-CDM model are the Big Bang, Cosmic inflation, dark matter, and fundamental theories of physics.
Leading topics in theoretical astronomy
Dark matter and dark energy are the current leading topics in astronomy, as their discovery and controversy originated during the study of the galaxies.
Theoretical astrophysics
Of the topics approached with the tools of theoretical physics, particular consideration is often given to stellar photospheres, stellar atmospheres, the solar atmosphere, planetary atmospheres, gaseous nebulae, nonstationary stars, and the interstellar medium. Special attention is given to the internal structure of stars.
Weak equivalence principle
The observation of a neutrino burst within 3 h of the associated optical burst from Supernova 1987A in the Large Magellanic Cloud (LMC) gave theoretical astrophysicists an opportunity to test that neutrinos and photons follow the same trajectories in the gravitational field of the galaxy.
Thermodynamics for stationary black holes
A general form of the first law of thermodynamics for stationary black holes can be derived from the microcanonical functional integral for the gravitational field. The boundary data, namely
the gravitational field, described as a microcanonical system in a spatially finite region, and
the density of states, expressed formally as a functional integral over Lorentzian metrics and as a functional of the geometrical boundary data that are fixed in the corresponding action,
are the thermodynamical extensive variables, including the energy and angular momentum of the system. For the simpler case of nonrelativistic mechanics, as is often observed in astrophysical phenomena associated with a black hole event horizon, the density of states can be expressed as a real-time functional integral and subsequently used to deduce Feynman's imaginary-time functional integral for the canonical partition function.
Theoretical astrochemistry
Reaction equations and large reaction networks are an important tool in theoretical astrochemistry, especially as applied to the gas-grain chemistry of the interstellar medium. Theoretical astrochemistry offers the prospect of being able to place constraints on the inventory of organics for exogenous delivery to the early Earth.
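As a minimal illustration of how such reaction networks are handled numerically, the sketch below integrates a toy two-reaction gas-phase network with explicit Euler steps. The species, rate coefficients, and initial abundances are invented for illustration only; real astrochemical networks couple hundreds of species and draw rates from curated databases.

```python
# Toy gas-phase reaction network (hypothetical species and rates):
#   A + B  -> AB    (rate coefficient k1)
#   AB + C -> ABC   (rate coefficient k2)
# Integrated with a simple explicit Euler scheme.

def integrate_network(n, k1=1e-9, k2=5e-10, dt=1.0, steps=100_000):
    """Advance number densities (cm^-3) of A, B, AB, C, ABC in time."""
    a, b, ab, c, abc = n
    for _ in range(steps):
        r1 = k1 * a * b      # rate of AB formation
        r2 = k2 * ab * c     # rate of ABC formation
        a, b = a - r1 * dt, b - r1 * dt
        ab = ab + (r1 - r2) * dt
        c, abc = c - r2 * dt, abc + r2 * dt
    return a, b, ab, c, abc

print(integrate_network((1e4, 1e4, 0.0, 1e3, 0.0)))
```

In practice stiff solvers replace the Euler step, since astrochemical rates span many orders of magnitude, but the bookkeeping of production and destruction terms is exactly as shown.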
Interstellar organics
"An important goal for theoretical astrochemistry is to elucidate which organics are of true interstellar origin, and to identify possible interstellar precursors and reaction pathways for those molecules which are the result of aqueous alterations." One of the ways this goal can be achieved is through the study of carbonaceous material as found in some meteorites. Carbonaceous chondrites (such as C1 and C2) include organic compounds such as amines and amides; alcohols, aldehydes, and ketones; aliphatic and aromatic hydrocarbons; sulfonic and phosphonic acids; amino, hydroxycarboxylic, and carboxylic acids; purines and pyrimidines; and kerogen-type material. The organic inventories of primitive meteorites display large and variable enrichments in deuterium, carbon-13 (13C), and nitrogen-15 (15N), which is indicative of their retention of an interstellar heritage.
Chemistry in cometary comae
The chemical composition of comets should reflect both the conditions in the outer solar nebula some 4.5 × 10⁹ years ago and the nature of the natal interstellar cloud from which the Solar System was formed. While comets retain a strong signature of their ultimate interstellar origins, significant processing must have occurred in the protosolar nebula. Early models of coma chemistry showed that reactions can occur rapidly in the inner coma, where the most important reactions are proton transfer reactions. Such reactions can potentially cycle deuterium between the different coma molecules, altering the initial D/H ratios released from the nuclear ice, and necessitating the construction of accurate models of cometary deuterium chemistry, so that gas-phase coma observations can be safely extrapolated to give nuclear D/H ratios.
Theoretical chemical astronomy
While the lines of conceptual understanding between theoretical astrochemistry and theoretical chemical astronomy often become blurred so that the goals and tools are the same, there are subtle differences between the two sciences. Theoretical chemistry as applied to astronomy seeks to find new ways to observe chemicals in celestial objects, for example. This often leads to theoretical astrochemistry having to seek new ways to describe or explain those same observations.
Astronomical spectroscopy
The new era of chemical astronomy had to await the clear enunciation of the chemical principles of spectroscopy and the applicable theory.
Chemistry of dust condensation
Supernova radioactivity dominates light curves and the chemistry of dust condensation is also dominated by radioactivity. Dust is usually either carbon or oxides depending on which is more abundant, but Compton electrons dissociate the CO molecule in about one month. The new chemical astronomy of supernova solids depends on the supernova radioactivity:
the radiogenesis of ⁴⁴Ca from ⁴⁴Ti decay after carbon condensation establishes their supernova source,
their opacity suffices to shift emission lines blueward after 500 days and to emit significant infrared luminosity,
parallel kinetic rates determine trace isotopes in meteoritic supernova graphites,
the chemistry is kinetic rather than due to thermal equilibrium and
is made possible by radiodeactivation of the CO trap for carbon.
Theoretical physical astronomy
Like theoretical chemical astronomy, the lines of conceptual understanding between theoretical astrophysics and theoretical physical astronomy are often blurred, but, again, there are subtle differences between these two sciences. Theoretical physics as applied to astronomy seeks to find new ways to observe physical phenomena in celestial objects and what to look for, for example. This often leads to theoretical astrophysics having to seek new ways to describe or explain those same observations, with hopefully a convergence to improve our understanding of the local environment of Earth and the physical Universe.
Weak interaction and nuclear double beta decay
Nuclear matrix elements of relevant operators as extracted from data and from a shell-model and theoretical approximations both for the two-neutrino and neutrinoless modes of decay are used to explain the weak interaction and nuclear structure aspects of nuclear double beta decay.
Neutron-rich isotopes
New neutron-rich isotopes, ³⁴Ne, ³⁷Na, and ⁴³Si, have been produced unambiguously for the first time, and convincing evidence for the particle instability of three others, ³³Ne, ³⁶Na, and ³⁹Mg, has been obtained. These experimental findings are compared with recent theoretical predictions.
Theory of astronomical time keeping
Until recently, all the time units that appear natural to us were derived from astronomical phenomena:
Earth's orbit around the Sun => the year, and the seasons,
Moon's orbit around the Earth => the month,
Earth's rotation and the succession of brightness and darkness => the day (and night).
High precision appears problematic:
ambiguities arise in the exact definition of a rotation or revolution,
some astronomical processes are uneven and irregular, such as the noncommensurability of year, month, and day,
there are a multitude of time scales and calendars to solve the first two problems.
Some of these time standard scales are sidereal time, solar time, and universal time.
Atomic time
From the Système International (SI) comes the second, defined by the duration of 9 192 631 770 cycles of a particular hyperfine structure transition in the ground state of caesium-133 (¹³³Cs). For practical usability a device is required that attempts to produce the SI second (s), such as an atomic clock. But not all such clocks agree. The weighted mean of many clocks distributed over the whole Earth defines the Temps Atomique International, i.e., International Atomic Time (TAI). From the general theory of relativity, the time measured depends on the altitude on Earth and the spatial velocity of the clock, so that TAI refers to a location at sea level that rotates with the Earth.
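To see the size of the relativistic corrections involved, the sketch below estimates the fractional gravitational frequency shift between two clocks separated in altitude near the Earth's surface, using the weak-field approximation Δf/f ≈ gΔh/c². The 1 km altitude difference is an illustrative choice, not a standard reference value.

```python
# Weak-field gravitational time dilation near the Earth's surface:
# a clock raised by dh runs fast by a fraction of roughly g*dh/c^2.
g = 9.81            # m/s^2, surface gravity
c = 299_792_458.0   # m/s, speed of light
dh = 1000.0         # m, illustrative altitude difference

fractional_shift = g * dh / c**2
offset_per_day = fractional_shift * 86_400  # seconds gained per day

print(f"fractional rate difference: {fractional_shift:.3e}")
print(f"accumulated offset: {offset_per_day * 1e9:.1f} ns per day")
```

The result, roughly 10⁻¹³ (about 9 ns per day per kilometre of altitude), is far larger than the stability of modern atomic clocks, which is why TAI must be referred to a definite geopotential surface.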
Ephemeris time
Since the Earth's rotation is irregular, any time scale derived from it such as Greenwich Mean Time led to recurring problems in predicting the Ephemerides for the positions of the Moon, Sun, planets and their natural satellites. In 1976 the International Astronomical Union (IAU) resolved that the theoretical basis for ephemeris time (ET) was wholly non-relativistic, and therefore, beginning in 1984 ephemeris time would be replaced by two further time scales with allowance for relativistic corrections. Their names, assigned in 1979, emphasized their dynamical nature or origin, Barycentric Dynamical Time (TDB) and Terrestrial Dynamical Time (TDT). Both were defined for continuity with ET and were based on what had become the standard SI second, which in turn had been derived from the measured second of ET.
During the period 1991–2006, the TDB and TDT time scales were both redefined and replaced, owing to difficulties or inconsistencies in their original definitions. The current fundamental relativistic time scales are Geocentric Coordinate Time (TCG) and Barycentric Coordinate Time (TCB). Both of these have rates that are based on the SI second in respective reference frames (and hypothetically outside the relevant gravity well), but due to relativistic effects, their rates would appear slightly faster when observed at the Earth's surface, and therefore diverge from local Earth-based time scales using the SI second at the Earth's surface.
The currently defined IAU time scales also include Terrestrial Time (TT) (replacing TDT, and now defined as a re-scaling of TCG, chosen to give TT a rate that matches the SI second when observed at the Earth's surface), and a redefined Barycentric Dynamical Time (TDB), a re-scaling of TCB to give TDB a rate that matches the SI second at the Earth's surface.
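The re-scaling between TT and TCG is fixed by the defining constant L_G = 6.969290134 × 10⁻¹⁰ (exact by IAU convention), with dTT/dTCG = 1 − L_G. The short sketch below shows how quickly the two scales diverge:

```python
# TT is defined from TCG by dTT/dTCG = 1 - L_G, where L_G is an
# exact defining constant (IAU 2000 Resolution B1.9).
L_G = 6.969290134e-10
seconds_per_julian_year = 365.25 * 86_400

drift = L_G * seconds_per_julian_year  # TCG - TT accumulated in one year
print(f"TCG runs ahead of TT by about {drift * 1e3:.1f} ms per year")
```

The divergence, about 22 ms per year, is small in everyday terms but vastly exceeds the precision of modern timekeeping, which is why the distinction between the coordinate time scales matters.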
Extraterrestrial time-keeping
Stellar dynamical time scale
For a star, the dynamical time scale is defined as the time that would be taken for a test particle released at the surface to fall under the star's potential to the centre point, if pressure forces were negligible. In other words, the dynamical time scale measures the amount of time it would take a certain star to collapse in the absence of any internal pressure. By appropriate manipulation of the equations of stellar structure this can be found to be

t_{\mathrm{dyn}} = \sqrt{\frac{R^3}{2GM}} = \frac{R}{v}

where R is the radius of the star, G is the gravitational constant, M is the mass of the star, and v is the escape velocity; for a constant gas density ρ this is equivalent to t_{\mathrm{dyn}} = \sqrt{3/(8\pi G\rho)}. As an example, the dynamical time scale of the Sun is approximately 1133 seconds. Note that the actual time it would take a star like the Sun to collapse is greater because internal pressure is present.
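A quick numerical check of this formula for the Sun, using standard solar values (a sketch; the precise result depends slightly on the adopted solar radius):

```python
import math

# Dynamical time scale t_dyn = sqrt(R^3 / (2 G M)), evaluated for the Sun.
G = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
M = 1.989e30    # kg, solar mass
R = 6.957e8     # m, solar radius

t_dyn = math.sqrt(R**3 / (2 * G * M))
print(f"solar dynamical time scale: {t_dyn:.0f} s")  # roughly 1.1e3 s
```

This reproduces the quoted figure of order a thousand seconds, i.e. a pressure-free Sun would collapse in well under an hour.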
The 'fundamental' oscillatory mode of a star will be at approximately the dynamical time scale. Oscillations at this frequency are seen in Cepheid variables.
Theory of astronomical navigation
On Earth
The basic characteristics of applied astronomical navigation are
usable in all areas of sailing around the Earth,
applicable autonomously (does not depend on others – persons or states) and passively (does not emit energy),
conditional usage via optical visibility (of horizon and celestial bodies), or state of cloudiness,
precision of measurement: a sextant is accurate to 0.1′, while altitude and position are determined to between 1.5′ and 3.0′,
temporal determination takes a couple of minutes (using the most modern equipment) or up to 30 min (using classical equipment).
The superiority of satellite navigation systems to astronomical navigation is currently undeniable, especially with the development and use of GPS/NAVSTAR. This global satellite system
enables automated three-dimensional positioning at any moment,
automatically determines position continuously (every second or even more often),
determines position independent of weather conditions (visibility and cloudiness),
determines position in real time to a few meters (using two carrier frequencies) and 100 m (modest commercial receivers), which is two to three orders of magnitude better than by astronomical observation,
is simple even without expert knowledge,
is relatively cheap, comparable to equipment for astronomical navigation, and
allows incorporation into integrated and automated systems of control and ship steering. The use of astronomical or celestial navigation is disappearing from the surface and beneath or above the surface of the Earth.
Geodetic astronomy is the application of astronomical methods to networks and technical projects of geodesy, for
apparent places of stars, and their proper motions
precise astronomical navigation
astro-geodetic geoid determination and
modelling the rock densities of the topography and of geological layers in the subsurface
Satellite geodesy using the stellar background (see also astrometry and cosmic triangulation)
Monitoring of the Earth rotation and polar wandering
Contribution to the time system of physics and geosciences
Astronomical algorithms are the algorithms used to calculate ephemerides, calendars, and positions (as in celestial navigation or satellite navigation).
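One of the most basic such algorithms is conversion of a Gregorian calendar date to a Julian day number, the continuous day count used throughout ephemeris work. Below is a sketch of the standard integer-arithmetic (Fliegel–Van Flandern style) formula:

```python
def julian_day_number(year: int, month: int, day: int) -> int:
    """Julian day number at noon UT for a Gregorian calendar date,
    using the classic integer-arithmetic formula."""
    a = (14 - month) // 12          # 1 for Jan/Feb, else 0
    y = year + 4800 - a             # years since -4800, March-based year
    m = month + 12 * a - 3          # month index with March = 0
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

# 1 January 2000 falls on JDN 2451545, the J2000.0 reference epoch.
assert julian_day_number(2000, 1, 1) == 2451545
```

The March-based year trick pushes the leap day to the end of the counting year, which is what lets the month lengths be encoded by the single expression (153m + 2)//5.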
Many astronomical and navigational computations use the Figure of the Earth as a surface representing the Earth.
The International Earth Rotation and Reference Systems Service (IERS), formerly the International Earth Rotation Service, is the body responsible for maintaining global time and reference frame standards, notably through its Earth Orientation Parameter (EOP) and International Celestial Reference System (ICRS) groups.
Deep space
The Deep Space Network, or DSN, is an international network of large antennas and communication facilities that supports interplanetary spacecraft missions, and radio and radar astronomy observations for the exploration of the Solar System and the universe. The network also supports selected Earth-orbiting missions. DSN is part of the NASA Jet Propulsion Laboratory (JPL).
Aboard an exploratory vehicle
An observer becomes a deep space explorer upon escaping Earth's orbit. While the Deep Space Network maintains communication and enables data download from an exploratory vessel, any local probing performed by sensors or active systems aboard usually requires astronomical navigation, since the enclosing network of satellites that ensures accurate positioning is absent.
| Physical sciences | Astronomy basics | Astronomy |
2039690 | https://en.wikipedia.org/wiki/Isotopes%20of%20hydrogen | Isotopes of hydrogen | Hydrogen (H) has three naturally occurring isotopes: ¹H, ²H, and ³H. ¹H and ²H are stable, while ³H has a half-life of about 12.32 years. Heavier isotopes also exist; all are synthetic and have half-lives of less than 1 zeptosecond (10⁻²¹ s).
Of these, ⁵H is the least stable, while ⁷H is the most.
Hydrogen is the only element whose isotopes have different names that remain in common use today: ²H is deuterium and ³H is tritium. The symbols D and T are sometimes used for deuterium and tritium; IUPAC (International Union of Pure and Applied Chemistry) accepts said symbols, but recommends the standard isotopic symbols ²H and ³H, to avoid confusion in alphabetic sorting of chemical formulas. ¹H, with no neutrons, may be called protium to disambiguate. (During the early study of radioactivity, some other heavy radioisotopes were given names, but such names are rarely used today.)
List of isotopes
Note: "y" means year, but "ys" means yoctosecond (10⁻²⁴ second).
| Nuclide | Z | N | Half-life | Decay mode | Daughter | Spin/parity | Natural abundance | Name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ¹H | 1 | 0 | stable | | | 1/2+ | >99.98% | protium |
| ²H (D) | 1 | 1 | stable | | | 1+ | 26–184 ppm | deuterium |
| ³H (T) | 1 | 2 | 12.32 y | β⁻ | ³He | 1/2+ | trace | tritium |
| ⁴H | 1 | 3 | (see text) | n | ³H | 2− | synthetic | |
| ⁵H | 1 | 4 | (see text) | 2n | ³H | (1/2+) | synthetic | |
| ⁶H | 1 | 5 | (see text) | | | 2−# | synthetic | |
| ⁷H | 1 | 6 | (see text) | | | 1/2+# | synthetic | |
Values marked # are estimated from nuclear systematics; parenthesized spin assignments are uncertain.
Hydrogen-1 (protium)
¹H is the most common hydrogen isotope, with an abundance of more than 99.98%. Its nucleus consists of only a single proton, so it has the formal name protium.
The proton has never been observed to decay, so ¹H is considered stable. Some Grand Unified Theories proposed in the 1970s predict that proton decay can occur with an extremely long but finite half-life. If so, then ¹H (and all nuclei now believed to be stable) are only observationally stable. As of 2018, experimental lower bounds on the proton's mean lifetime vastly exceed the age of the universe.
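To see why even modest detectors constrain such enormous lifetimes, the sketch below counts the expected decays per year in one tonne of water for an assumed, purely illustrative proton mean lifetime of 10³⁴ years:

```python
# Expected proton decays per year in one tonne of water, for an
# assumed mean proton lifetime (illustrative value, not a measurement).
N_A = 6.022e23             # Avogadro's number, per mole
molar_mass_water = 18.015  # g/mol
protons_per_molecule = 10  # 2 in the hydrogens + 8 in the oxygen

grams = 1e6  # one tonne of water
n_protons = grams / molar_mass_water * N_A * protons_per_molecule

lifetime_years = 1e34      # assumed mean lifetime (hypothetical)
decays_per_year = n_protons / lifetime_years  # rate = N / tau
print(f"{n_protons:.2e} protons -> {decays_per_year:.1e} decays/year")
```

With some 3 × 10²⁹ protons per tonne, a kiloton-scale detector watching for years with no candidate events pushes the lifetime bound far beyond 10³⁰ years.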
Hydrogen-2 (deuterium)
Deuterium, ²H, the other stable hydrogen isotope, has one proton and one neutron in its nucleus, called a deuteron. ²H comprises 26–184 ppm (by population, not mass) of hydrogen on Earth; the lower number tends to be found in hydrogen gas and higher enrichment (150 ppm) is typical of seawater. Deuterium on Earth has been enriched with respect to its initial concentration in the Big Bang and outer solar system (≈27 ppm, atom fraction) and older parts of the Milky Way (≈23 ppm). Presumably the differential concentration of deuterium in the inner solar system is due to the lower volatility of deuterium gas and compounds, enriching deuterium fractions in comets and planets exposed to significant heat from the Sun over billions of years of solar system evolution.
Deuterium is not radioactive, and is not a significant toxicity hazard. Water enriched in ²H is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for ¹H nuclear magnetic resonance spectroscopy. Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion.
Hydrogen-3 (tritium)
Tritium, ³H, has one proton and two neutrons in its nucleus (the triton). It is radioactive, decaying by β⁻ emission into helium-3 with a half-life of about 12.32 years. Traces of ³H occur naturally due to cosmic rays interacting with atmospheric gases. ³H has also been released in nuclear tests. It is used in fusion bombs, as a tracer in isotope geochemistry, and in self-powered lighting devices.
The most common way to produce ³H is to bombard a natural isotope of lithium, ⁶Li, with neutrons in a nuclear reactor.
Tritium can be used in chemical and biological labeling experiments as a radioactive tracer. Deuterium–tritium fusion uses ²H and ³H as its main reactants, giving energy through the loss of mass when the two nuclei collide and fuse at high temperatures.
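The energy released per D–T fusion can be checked directly from the mass defect, E = Δm c². The sketch below uses standard atomic masses in unified mass units and the conversion 1 u = 931.494 MeV/c²:

```python
# Q-value of the D-T fusion reaction  2H + 3H -> 4He + n,
# computed from the mass defect, E = dm * c^2.
m_D   = 2.014102   # u, deuterium
m_T   = 3.016049   # u, tritium
m_He4 = 4.002602   # u, helium-4
m_n   = 1.008665   # u, free neutron
U_TO_MEV = 931.494 # MeV per unified atomic mass unit

dm = (m_D + m_T) - (m_He4 + m_n)       # mass lost in the reaction
print(f"Q = {dm * U_TO_MEV:.2f} MeV")  # about 17.6 MeV
```

About 0.4% of the reactant mass is converted to energy, most of it carried away by the neutron.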
Hydrogen-4
⁴H, with one proton and three neutrons, is a highly unstable isotope. It has been synthesized in the laboratory by bombarding tritium with fast-moving deuterons; the triton captures a neutron from the deuteron. The presence of ⁴H was deduced by detecting the emitted protons. It decays by neutron emission into ³H with an extremely short half-life, on the order of 10⁻²² seconds.
In the 1955 satirical novel The Mouse That Roared, the name quadium was given to the ⁴H that powered the Q-bomb that the Duchy of Grand Fenwick captured from the United States.
Hydrogen-5
⁵H, with one proton and four neutrons, is highly unstable. It has been synthesized in the laboratory by bombarding tritium with fast-moving tritons; one triton captures two neutrons from the other, becoming a nucleus with one proton and four neutrons. The remaining proton may be detected, and the existence of ⁵H deduced. It decays by double neutron emission into ³H, with what has been reported as the shortest half-life of any known nuclide.
Hydrogen-6
⁶H has one proton and five neutrons, and likewise decays with an extremely short half-life.
Hydrogen-7
⁷H has one proton and six neutrons. It was first synthesized in 2003 by a group of Russian, Japanese and French scientists at RIKEN's Radioactive Isotope Beam Factory by bombarding hydrogen with helium-8 atoms; all six of the helium-8's neutrons were donated to the hydrogen nucleus. The two remaining protons were detected by the "RIKEN telescope", a device made of several layers of sensors positioned behind the target of the RI Beam cyclotron. Like the other heavy isotopes, ⁷H has an extremely short half-life.
Decay chains
⁴H and ⁵H decay directly to ³H, which then decays to stable ³He. Decay of the heaviest isotopes, ⁶H and ⁷H, has not been experimentally observed.
Decay times are in yoctoseconds (10⁻²⁴ s) for all these isotopes except ³H, whose half-life is measured in years.
| Physical sciences | s-Block | Chemistry |
2041030 | https://en.wikipedia.org/wiki/Pacific%20saury | Pacific saury | The Pacific saury (Cololabis saira) is a species of fish in the family Scomberesocidae. Saury is a seafood in several East Asian cuisines and is also known by the name mackerel pike.
Biology
Saury is a fish with a small mouth, an elongated body, a series of small finlets between the dorsal and anal fins, and a small forked tail. The fish's color is dark green to blue on the dorsal surface, silvery below, and there are small, bright blue blotches distributed randomly on the sides.
It is typically about 25–28 cm long when caught, but it can grow up to about 40 cm, with the largest fish taken in the autumn. The life expectancy of the saury is approximately four years. The saury is a pelagic fish typically found and harvested close to the surface, but it can also be found at greater depths. When escaping from predators, the saury skims along the surface, similar to other fishes within the genus.
These pelagic schooling fish are found in the North Pacific, from China, Korea and Japan eastward to the Gulf of Alaska and southward to sub-tropical Mexico, preferring surface temperatures of roughly 15–18 °C.
The Pacific saury is a highly migratory species. Adults are generally found offshore, near the surface of the ocean, in schools. Juveniles associate with drifting seaweed. Pacific saury are oviparous. Eggs are attached to floating objects, such as seaweed, via filaments on the shell surface.
Because it lacks a stomach and has a short, straight intestine, the saury feeds on zooplankton such as copepods, krill and amphipods, and on the eggs and larvae of common fishes such as anchovies. The internal organs of the saury may contain small, red, earthworm-like parasites named Rhadinorhynchus selkirki; these are harmless.
A few of the natural predators of Pacific saury include marine mammals, squid and tuna.
Saury oil contains considerable levels of n-3 polyunsaturated fatty acids (PUFAs) and long-chain monounsaturated fatty acids (LCMUFAs) with aliphatic tails longer than 18 carbons.
Uses
Japan
Pacific saury is known as sanma (さんま/サンマ) or saira (さいら/サイラ) in Japanese. The kanji used in the Japanese name of the fish (秋刀魚) literally translates as "autumn knife fish," as its body shape resembles a katana.
Saury is one of the most prominent seasonal foods representing autumn in Japanese cuisine. It is most commonly served salted and grilled (broiled) whole, garnished with daikon oroshi (grated daikon) and served alongside a bowl of rice and a bowl of miso soup. Other condiments may include soy sauce, sudachi, lime, lemon, or other citrus juices. The intestines are bitter, but many people choose not to gut the fish, as many say its bitterness, balanced by the condiments, is part of the enjoyment.
It also has many small bones, though not as many as sardines. Saury festivals are held in various parts of Japan, such as the Meguro Autumn Sanma Festival.
Sanma sashimi is becoming increasingly available but is not common. Although rarely used for sushi, sanma-zushi is a regional delicacy along parts of the Kii Peninsula, especially along the coast of southern Mie Prefecture. It is prepared by pickling the saury in salt and vinegar (depending on the region, bitter orange or citron vinegar may be used), and then placing it on top of vinegared rice to create the finished sushi.
The fish can also be pan-fried or canned kabayaki.
Korea
Pacific saury is called kkongchi (꽁치) in Korean.
Gwamegi is a Korean dish of half-dried Pacific saury made during winter. It is mostly eaten in the region of North Gyeongsang Province in places such as Pohang, Uljin, and Yeongdeok, where a large amount of the fish are harvested.
Simmered saury (꽁치조림, kkongchi-jorim) is a common variety of jorim, Korean traditional simmered foods. Salt-grilled saury is known as kkongchi gui (꽁치구이) in Korea.
Russia
Called saira (сайра) in Russian, Pacific saury is popular in Russia, which has direct access to the Pacific Ocean. There it is sold canned with salt and spices, sometimes with the addition of vegetable oil or tomato sauce. It is also eaten smoked.
United Kingdom
Pacific saury is used as bait for pike and sea fishing. In the UK, they are usually called blueys, possibly due to people confusing the Pacific saury with blue mackerel.
Fishing
Around 1950, Japan caught about 98% of the catch and South Korea about 2%, but Japan's share of the catch has decreased proportionally in recent decades. Other nations that now fish for saury include China and Taiwan. The Soviet Union fished for saury from around 1960 until the dissolution of the country. Taiwan began fishing for saury around 1988 and has been expanding its catch. In 2002, the Chinese also started fishing for saury, and they have been catching over 100,000 tons a year.
Approximately ninety Taiwanese vessels participate in the long-distance North Pacific fishery. Taiwan's total Pacific saury landings were 30,000 metric tons in 2021 and 40,000 metric tons in 2022. Boats in the Taiwanese saury fishery have been transitioning from incandescent and high-intensity discharge (HID) light bulbs to light-emitting diodes (LEDs), which reduces their environmental impact.
Gallery
| Biology and health sciences | Acanthomorpha | Animals |
2041117 | https://en.wikipedia.org/wiki/Social%20networking%20service | Social networking service | A social networking service (SNS), or social networking site, is a type of online social media platform which people use to build social networks or social relationships with other people who share similar personal or career content, interests, activities, backgrounds or real-life connections.
Social networking services vary in format and the number of features. They can incorporate a range of new information and communication tools, operating on desktops, laptops, and mobile devices such as tablet computers and smartphones. They may feature digital photo/video sharing and diary entries online (blogging). Online community services are sometimes considered social-network services by developers and users, though in a broader sense a social-network service usually provides an individual-centered service, whereas online community services are group-centered. Generally defined as "websites that facilitate the building of a network of contacts in order to exchange various types of content online," social networking sites provide a space for interaction to continue beyond in-person interactions. These computer-mediated interactions link members of various networks and may help to create, sustain and develop new social and professional relationships.
Social networking sites allow users to share ideas, digital photos and videos, posts, and to inform others about online or real-world activities and events with people within their social network. While in-person social networking – such as gathering in a village market to talk about events – has existed since the earliest development of towns, the web enables people to connect with others who live in different locations across the globe (dependent on access to an Internet connection to do so).
Depending on the platform, members may be able to contact any other member. In other cases, members can contact anyone they have a connection to, and subsequently anyone that contact has a connection to, and so on.
Facebook, for example, had 2.13 billion monthly active users and an average of 1.4 billion daily active users in 2017.
LinkedIn, a career-oriented social-networking service, generally requires that a member personally know another member in real life before they contact them online. Some services require members to have a preexisting connection to contact other members.
During the COVID-19 pandemic, Zoom, a videoconferencing platform, became an integral tool for connecting people around the world, facilitating online environments such as school, university, work and government meetings.
The main types of social networking services contain category-based directories (grouping users by attributes such as age, occupation or religion), means to connect with friends (usually with self-description pages), and recommendation systems linked to trust. One can categorize social-network services into four types:
socialization: social network services used primarily for socializing with existing friends or users (e.g., Facebook, Instagram, Twitter/X)
online social networks: decentralized and distributed computer networks where users communicate with each other through Internet services
networking: social network services used primarily for non-social interpersonal communication (e.g., LinkedIn, a career- and employment-oriented site)
social navigation: social network services used primarily for helping users to find specific information or resources (e.g., Goodreads for books, Reddit)
There have been attempts to standardize these services to avoid the need to duplicate entries of friends and interests (see the FOAF standard and the sketch below). A study revealed that India recorded the world's largest growth in social media users in 2013. A 2013 survey found that 73% of U.S. adults use social-networking sites.
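FOAF ("friend of a friend") expresses profiles and friendships as RDF triples that any conforming service can parse, so a friend list need only be described once. Below is a minimal sketch using the third-party rdflib Python library; the example.org identifiers and names are hypothetical.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()

# Hypothetical identifiers; a real profile would use the service's own URIs.
alice = URIRef("https://example.org/people/alice#me")
bob = URIRef("https://example.org/people/bob#me")

g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice Example")))
g.add((bob, RDF.type, FOAF.Person))
g.add((bob, FOAF.name, Literal("Bob Example")))
g.add((alice, FOAF.knows, bob))  # the friendship edge any FOAF consumer can read

print(g.serialize(format="turtle"))
```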
Offline and online social networking services
History
The potential for computer networking to facilitate newly improved forms of computer-mediated social interaction was suggested early on. Efforts to support social networks via computer-mediated communication were made in many early online services, including Usenet, ARPANET, LISTSERV, and bulletin board systems (BBS). Many prototypical features of social networking sites were also present in online services such as The Source, Delphi, America Online, Prodigy, CompuServe, and The WELL.
Early social networking on the World Wide Web began in the form of generalized online communities such as Theglobe.com (1995), Geocities (1994) and Tripod.com (1995). Many of these early communities focused on bringing people together to interact with each other through chat rooms and encouraged users to share personal information and ideas via personal web pages by providing easy-to-use publishing tools and free or inexpensive web space. Some communities – such as Classmates.com – took a different approach by simply having people link to each other via email addresses. PlanetAll started in 1996.
In the late 1990s, user profiles became a central feature of social networking sites, allowing users to compile lists of "friends" and search for other users with similar interests. New social networking methods were developed by the end of the 1990s, and many sites began to develop more advanced features for users to find and manage friends. Open Diary, a community for online diarists, invented both friends-only content and the reader comment, two features of social networks important to user interaction.
This newer generation of social networking sites began to flourish with the emergence of SixDegrees in 1997, Open Diary in 1998, Mixi in 1999, Makeoutclub in 2000, Cyworld in 2001, Hub Culture in 2002, and Friendster and Nexopia in 2003. Cyworld also became one of the first companies to profit from the sale of virtual goods. MySpace and LinkedIn were launched in 2003, and Bebo was launched in 2005. Orkut became the first popular social networking service in Brazil (although most of its very first users were from the United States) and quickly grew in popularity in India (Madhavan, 2007). There was a rapid increase in social networking sites' popularity; in 2005, MySpace had more pageviews than Google. Many of these services were displaced by Facebook, which launched in 2004 and became the largest social networking site in the world in 2009.
Social media
The term social media was first used in 2004 and is often used to describe social networking services.
Social impact
Web-based social networking services make it possible to connect people who share interests and activities across political, economic, and geographic borders. Through e-mail and instant messaging, online communities are created in which a gift economy and reciprocal altruism are encouraged through cooperation. Information is well suited to a gift economy, as information is a nonrival good and can be gifted at practically no cost. Scholars have noted that the term "social" cannot account for the technological features of social network platforms alone; hence, the level of network sociability should be determined by the actual practices of its users. According to the communication theory of uses and gratifications, an increasing number of individuals are looking to the Internet and social media to fulfill cognitive, affective, personal integrative, social integrative, and tension-release needs. With Internet technology as a supplement to fulfill needs, it is in turn affecting everyday life, including relationships, school, church, entertainment, and family. Companies are using social media as a way to learn about potential employees' personalities and behavior. In numerous situations, a candidate who might otherwise have been hired has been rejected due to offensive or otherwise unseemly photos or comments posted to social networks or appearing on a newsfeed.
Facebook and other social networking tools are increasingly the objects of scholarly research. Scholars in many fields have begun to investigate the impact of social networking sites, examining how such sites may play into issues of identity, politics, privacy, social capital, youth culture, and education. Research has also suggested that individuals add offline friends on Facebook to maintain contact, and often this blurs the lines between work and home lives. Users from around the world also utilise social networking sites as an alternative news source. While social networking sites have arguably changed how we access the news, users tend to have mixed opinions about the reliability of content accessed through these sites.
According to a 2015 study, 63% of Facebook or Twitter users in the USA consider these networks to be their main source of news, with entertainment news being the most viewed. In times of breaking news, Twitter users are more likely to stay invested in the story. When a news story is more political, users may be more likely to voice their opinion on a linked Facebook story with a comment or like, while Twitter users will simply follow the site's feed and retweet the article. In online social networks, the veracity and reliability of news may be diminished by the absence of traditional media gatekeepers.
A 2015 study shows that 85% of people aged 18 to 34 use social networking sites in their purchase decision-making, while over 65% of people aged 55 and over rely on word of mouth. Several websites are beginning to tap into the power of the social networking model for philanthropy. Such models provide a means of connecting otherwise fragmented industries and small organizations that lack the resources to reach a broader audience of interested users. Social networks are providing a different way for individuals to communicate digitally. These communities of hypertext allow for the sharing of information and ideas, an old concept placed in a digital environment. In 2011, HCL Technologies conducted research showing that 50% of British employers had banned the use of social networking sites/services during office hours.
Research has produced mixed results as to whether a person's involvement in social networking can affect their feelings of loneliness. Studies have indicated that how a person chooses to use social networking can change their feelings of loneliness in either a negative or a positive way. Some companies with mobile workers have encouraged their workers to use social networking to feel connected. Educators use social networking to stay connected with their students, while individuals use it to stay connected with close relationships.
Social networking sites can be used by consumers to create a social media firestorm, which is "a digital artifact created by large numbers of user comments of multiple purposes (condemnation and support) and tones (aggressive and cordial) that appear rapidly and recede shortly after".
Each social networking user is able to create a community that centers around a personal identity they choose to create online. In his book Digital Identities: Creating and Communicating the Online Self, Rob Cover argues that social networking's foundation in Web 2.0 and high-speed networking shifts online representation towards one which is both visual and relational to other people, complexifying the identity process for younger people and creating new forms of anxiety. In 2016, news reports stated that excessive usage of SNSs may be associated with an increase in rates of depression, to almost triple the rate for non-SNS users. Experts worldwide have said that people who use SNSs more have higher levels of depression than those who use them less. At least one study went as far as to conclude that the negative effects of Facebook usage are equal to or greater than the positive effects of face-to-face interactions.
According to a recent article in Computers in Human Behavior, Facebook has also been shown to lead to issues of social comparison. Users are able to select which photos and status updates to post, allowing them to portray their lives in a flattering manner. These updates can lead other users to feel that their lives are inferior by comparison. Users may feel especially inclined to compare themselves to other users with whom they share similar characteristics or lifestyles, which makes for a fairer comparison. Motives for these comparisons include self-improvement, pursued by looking at the profiles of people one feels are superior, especially when their lifestyle seems similar and attainable. One can also self-compare to feel superior by looking at the profiles of users one believes to be worse off. However, a study in the Harvard Business Review shows that these goals often lead to negative consequences, as use of Facebook has been linked with lower levels of well-being, and mental health has been shown to decline with Facebook use. Computers in Human Behavior notes that these feelings of poor mental health have been suggested to cause people to take time off from their Facebook accounts, a behaviour called "Facebook fatigue" that has become common in recent years.
Usage of social networking has contributed to a new form of abusive communication, and academic research has highlighted a number of social-technological explanations for this behaviour. These include the anonymity afforded by interpersonal communications, factors such as boredom or attention seeking, and more polarised online debate. This abuse is evident in the prevalence of online cyberbullying and online trolling. There has also been a marked increase in political violence and abuse through social media platforms. For instance, one study by Ward and McLoughlin found that 2.57% of all messages sent to UK MPs on Twitter contained abuse.
Features
Typical features
According to boyd's 2007 article, "Why Youth (Heart) Social Network Sites: The Role of Networked Publics in Teenage Social Life", social networking sites share a variety of technical features that allow individuals to construct a public or semi-public profile, articulate a list of other users with whom they share a connection, and view their list of connections within the system. The most basic of these are visible profiles with a list of "friends" who are also users of the site. In an article entitled "Social Network Sites: Definition, History, and Scholarship," boyd and Ellison adopt Sundén's (2003) description of profiles as unique pages where one can "type oneself into being". A profile is generated from answers to questions, such as age, location, interests, etc. Some sites allow users to upload pictures, add multimedia content or modify the look and feel of the profile. Others, e.g., Facebook, allow users to enhance their profile by adding modules or "applications". Many sites allow users to post blog entries, search for others with similar interests, and compile and share lists of contacts. User profiles often have a section dedicated to comments from friends and other users. To protect user privacy, social networks typically have controls that allow users to choose who can view their profile, contact them, add them to their list of contacts, and so on.
Additional features
There is a trend towards more interoperability between social networks led by technologies such as OpenID and OpenSocial. In most mobile communities, mobile phone users can now create their own profiles, make friends, participate in chat rooms, create chat rooms, hold private conversations, share photos and videos, and share blogs by using their mobile phone. Some companies provide wireless services that allow their customers to build their own mobile community and brand it; one of the most popular wireless services for social networking in North America and Nepal is Facebook Mobile.
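In the same spirit of interoperability, OpenID's successor, OpenID Connect, standardizes how one service discovers another identity provider's endpoints: every provider publishes a JSON metadata document at a fixed well-known path. A minimal sketch, assuming a hypothetical provider at id.example.org:

```python
import json
import urllib.request

def fetch_oidc_config(issuer: str) -> dict:
    # OpenID Connect providers serve their metadata at this fixed path.
    url = issuer.rstrip("/") + "/.well-known/openid-configuration"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Hypothetical issuer for illustration; substitute a real provider's URL.
config = fetch_oidc_config("https://id.example.org")
print(config["authorization_endpoint"])  # where to send users to log in
```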
Recently, Twitter introduced fact-check labels to combat misinformation, which was primarily spread about the coronavirus; the labels have also played a part in debunking false claims made by Donald Trump around the 2020 election.
Social media platforms may allow users to change their user name (or "handle", distinct from the "display name"), which can change the URL of their profile. Users are advised to do so with caution, since, depending on the implementation, it can break backlinks from others' posts and comments as well as external backlinks.
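One common mitigation, sketched below with the Flask web framework, is for the platform to keep a mapping from retired handles to current ones and answer requests for an old handle with a permanent redirect. The handle names and in-memory dictionaries are hypothetical; a production service would consult its user database.

```python
from flask import Flask, abort, redirect

app = Flask(__name__)

# Hypothetical in-memory stores standing in for the user database.
profiles = {"alice_new": {"display_name": "Alice"}}
old_handles = {"alice_old": "alice_new"}  # retired handle -> current handle

@app.route("/<handle>")
def profile(handle):
    if handle in profiles:
        return f"Profile page for @{handle}"
    if handle in old_handles:
        # HTTP 301 preserves inbound links that still use the old handle.
        return redirect(f"/{old_handles[handle]}", code=301)
    abort(404)

if __name__ == "__main__":
    app.run()
```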
Emerging trends
While the popularity of social networking consistently rises, new uses for the technology are frequently being observed. Today's technologically savvy population requires convenient solutions to its daily needs. At the forefront of emerging trends in social networking sites are the concepts of "real-time web" and "location-based" services. Real-time services allow users to contribute content, which is then broadcast as it is being uploaded; the concept is analogous to live radio and television broadcasts. Twitter set the trend for "real-time" services, wherein users can broadcast to the world what they are doing, or what is on their minds, within a 140-character limit. Facebook followed suit with its "Live Feed", where users' activities are streamed as soon as they happen. While Twitter focuses on words, Clixtr, another real-time service, focuses on group photo sharing, wherein users can update their photo streams with photos while at an event. Facebook, however, remains the largest photo sharing site, with over 250 billion photos as of September 2013. By April 2012, the image-based social media network Pinterest had become the third largest social network in the United States.
Companies have begun to merge business technologies and solutions, such as cloud computing, with social networking concepts. Instead of connecting individuals based on social interest, companies are developing interactive communities that connect individuals based on shared business needs or experiences. Many provide specialized networking tools and applications that can be accessed via their websites, such as LinkedIn. Other companies, such as Monster.com, have been steadily developing a more "socialized" feel to their career center sites to harness some of the power of social networking sites. These more business-related sites have their own nomenclature for the most part, but the most common naming conventions are "vocational networking sites" or "vocational media networks", with the former more closely tied to individual networking relationships based on social networking principles.
Foursquare gained popularity by allowing users to check in to the places they were visiting at that moment. Gowalla is another service that functions in much the same way as Foursquare, leveraging the GPS in phones to create a location-based user experience. Clixtr, though in the real-time space, is also a location-based social networking site, since events created by users are automatically geotagged, and users can view events occurring nearby through the Clixtr iPhone app. Recently, Yelp announced its entrance into the location-based social networking space through check-ins with its mobile app; whether or not this proves detrimental to Foursquare or Gowalla remains to be seen, as location-based networking is still considered a new space in the Internet technology industry.
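Location-based features like these reduce to computing distances between geotagged coordinates, typically with the haversine (great-circle) formula. A minimal sketch with hypothetical event coordinates:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geotagged events as (name, lat, lon).
events = [("Gallery opening", 37.7793, -122.4193),
          ("Food festival", 37.8044, -122.2712)]
user_lat, user_lon = 37.7749, -122.4194  # user's current location

# Keep only events within 2 km of the user.
nearby = [(name, round(haversine_km(user_lat, user_lon, lat, lon), 2))
          for name, lat, lon in events
          if haversine_km(user_lat, user_lon, lat, lon) <= 2.0]
print(nearby)
```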
One popular use for this new technology is social networking between businesses. Companies have found that social networking sites such as Facebook and Twitter are great ways to build their brand image. According to Jody Nimetz, author of Marketing Jive, there are five major uses for businesses and social media: to create brand awareness, as an online reputation management tool, for recruiting, to learn about new technologies and competitors, and as a lead generation tool to intercept potential prospects. These companies are able to drive traffic to their own online sites while encouraging their consumers and clients to have discussions on how to improve or change products or services. As of September 2013, 71% of online adults use Facebook, 17% use Instagram, 21% use Pinterest, and 22% use LinkedIn.
Niche networks
In 2012, it was reported that the niche social network had grown steadily in popularity over the previous few years, thanks to better levels of user interaction and engagement. That year, a survey by Reuters and the research firm Ipsos found that one in three users was getting bored with Facebook, and in 2014 GlobalWebIndex found that this figure had risen to almost 50%. The niche social network offers a specialized space designed to appeal to a very specific market with a clearly defined set of needs. Where once the streams of social minutiae on networks such as Facebook and Twitter were the ultimate in online voyeurism, users are now looking for connections, community and shared experiences. Social networks that tap directly into specific activities, hobbies, tastes, and lifestyles are seeing a consistent rise in popularity.
Science
One other use under discussion is social networking within science communities. Julia Porter Liebeskind et al. have published a study of how new biotechnology firms use social networking sites to exchange scientific knowledge. They state in their study that by sharing information and knowledge with one another, firms are able to "increase both their learning and their flexibility in ways that would not have been possible within a self-contained hierarchical organization". Social networking allows scientific groups to expand their knowledge base and share ideas; without these new means of communication, their theories might become "isolated and irrelevant". Researchers use social networks frequently to maintain and develop professional relationships. They are interested in consolidating social ties and professional contacts, keeping in touch with friends and colleagues, and seeing what their contacts are doing. This can be related to their need to keep updated on the activities and events of friends and colleagues in order to establish collaborations in common fields of interest and to share knowledge.
Social networks are also used to communicate scientists' research results, serve as a public communication tool, and connect people who share the same professional interests; their benefits can vary according to the discipline. The most interesting aspects of social networks for professional purposes are their potential in terms of dissemination of information and the ability to reach and multiply professional contacts exponentially. Social networks such as Academia.edu, LinkedIn, Facebook, and ResearchGate make it possible to join professional groups and pages, share papers and results, publicize events, discuss issues and create debates. Academia.edu is extensively used by researchers, who follow a combination of social networking and scholarly norms there. ResearchGate is also widely used by researchers, especially to disseminate and discuss their publications, and it seems to attract an audience wider than just other scientists. The usage of ResearchGate and Academia.edu in different academic communities has increasingly been studied in recent years.
Education
The advent of social networking platforms may also be impacting the ways in which learners engage with technology in general. For a number of years, Prensky's (2001) dichotomy between Digital Natives and Digital Immigrants was considered a relatively accurate representation of the ease with which people of a certain age range, in particular those born before and after 1980, use technology. Prensky's theory has since been largely disproved, however, not least on account of the burgeoning popularity of social networking sites, and other metaphors, such as White and Le Cornu's "Visitors" and "Residents" (2011), have gained greater currency. The use of online social networks by school libraries is also increasingly prevalent; they are being used to communicate with potential library users, as well as to extend the services provided by individual school libraries. Social networks and their educational uses are of interest to many researchers. According to Livingstone and Brake (2010), "Social networking sites, like much else on the Internet, represent a moving target for researchers and policymakers." The Pew Research Center's Pew Internet project conducted a US-wide survey in 2009 and published in February 2010 that 47% of American adults use a social networking website. The same survey found that 73% of online teenagers use SNSs, an increase from 65% in 2008 and 55% in 2006. Recent studies have shown that social network services provide opportunities within professional education, curriculum education, and learning. However, there are constraints in this area. Studies, especially in Africa, have found that student use of social networks can affect academic life negatively, because such use creates distractions and students tend to invest a good deal of time in these technologies.
Albayrak and Yildirim (2015) examined the educational use of social networking sites. They investigated students' involvement in Facebook as a course management system (CMS), and their findings support the view that Facebook as a CMS has the potential to increase student involvement in discussions and in out-of-class communication between instructors and students.
Professional use
Professional use of social networking services refers to the employment of a networking site to connect with other professionals in a given field of interest. These types of social networking services are referred to as "career-oriented social networking markets" (CSNMs).
LinkedIn is one example and is a social networking website geared towards companies and industry professionals looking to make new business contacts or keep in touch with previous co-workers, affiliates, and clients. LinkedIn provides not only a professional social use but also encourages people to inject their personality into their profile – making it more personal than a resume.
Websites similar to LinkedIn (also geared towards companies and industry professionals looking for work opportunities) include AngelList, XING, Goodwall, The Dots, Jobcase, Bark.com, ... Various freelance marketplace websites (which focus on freelance work) also exist. There are also a number of other employment websites focused on international volunteering, notably VolunteerMatch, Idealist.org and All for Good. Finally, national WWOOF networks allow users to search for homestays on organic farms.
Now other social network sites are also being used in this manner. Twitter has become a mainstay for professional development as well as promotion, and online SNSs support both the maintenance of existing social ties and the formation of new connections. Much of the early research on online communities assumed that individuals using these systems would be connecting with others outside their preexisting social group or location, liberating them to form communities around shared interests rather than shared geography. Other researchers have suggested that the professional use of network sites produces "social capital". For individuals, social capital allows a person to draw on resources from other members of the networks to which he or she belongs. These resources can take the form of useful information, personal relationships, or the capacity to organize groups. Networks within these services can also be established or built by joining special interest groups that others have made, or by creating one and asking others to join.
Curriculum use
According to Doering, Beach, and O'Brien, a future English curriculum needs to recognize a significant shift in how adolescents communicate with each other. Curriculum uses of social networking services can also include sharing curriculum-related resources. Educators tap into user-generated content to find and discuss curriculum-related content for students. Responding to the popularity of social networking services among many students, teachers are increasingly using social networks to supplement teaching and learning in traditional classroom environments. In this way, they can provide new opportunities for enriching the existing curriculum through creative, authentic, flexible and non-linear learning experiences. Some social networks, such as English, baby! and LiveMocha, are explicitly education-focused and couple instructional content with an educational peer environment. The Web 2.0 technologies built into most social networking services promote conferencing, interaction, creation and research on a global scale, enabling educators to share, remix, and repurpose curriculum resources. In short, social networking services can become research networks as well as learning networks.
Learning use
Educators and advocates of new digital literacies are confident that social networking encourages the development of transferable, technical, and social skills of value in formal and informal learning. In a formal learning environment, goals or objectives are determined by an outside department or agency. Tweeting, instant messaging, or blogging enhances student involvement. Students who would not normally participate in class are more apt to partake through social network services. Networking allows participants the opportunity for just-in-time learning and higher levels of engagement. The use of SNSs allows educators to enhance the prescribed curriculum. When learning experiences are infused into a website that students use every day for fun, students realize that learning can and should be a part of everyday life rather than something separate and unattached.
Informal learning consists of the learner setting his or her own goals and objectives. It has been claimed that media no longer just influence human culture; they are human culture. With such a high number of users between the ages of 13 and 18, a number of skills are developed. Participants hone technical skills by choosing to navigate through social networking services. This includes elementary tasks such as sending an instant message or updating a status. The development of new media skills is paramount in helping youth navigate the digital world with confidence.
Social networking services foster learning through what Jenkins (2006) describes as a "participatory culture". A participatory culture consists of a space that allows engagement, sharing, mentoring, and an opportunity for social interaction, and participants of social network services avail themselves of this opportunity. Informal learning, in the form of participatory and social learning online, is an excellent tool for teachers to introduce material and ideas that students will identify with; in a secondary manner, students then learn skills that would normally be taught in a formal setting in the more interesting and engaging environment of social learning. Sites like Twitter provide students with the opportunity to converse and collaborate with others in real time.
Social networking services provide a virtual "space" for learners. James Gee (2004) suggests that affinity spaces instantiate participation, collaboration, distribution, dispersion of expertise, and relatedness. Registered users share and search for knowledge which contributes to informal learning.
Constraints
In the past, social networking services were viewed as a distraction that offered no educational benefit. Blocking these social networks was a form of protection for students against wasting time, bullying, and invasions of privacy. In an educational setting, Facebook, for example, is seen by many instructors and educators as a frivolous, time-wasting distraction from schoolwork, and it is not uncommon for it to be banned in junior high or high school computer labs. Cyberbullying has become an issue of concern with social networking services. According to the UK Children Go Online survey of 9- to 19-year-olds, a third have received bullying comments online. To avoid this problem, many school districts/boards have blocked access to social networking services such as Facebook, MySpace, and Twitter within the school environment. Social networking services often include a great deal of personal information posted publicly, and many believe that such sharing is a gateway to privacy theft. Schools have taken action to protect students from this. It is believed that this outpouring of identifiable information, together with the easy communication vehicle that social networking services provide, opens the door to sexual predators, cyberbullying, and cyberstalking. In contrast, however, 70% of social-media-using teens and 85% of adults believe that people are mostly kind to one another on social network sites.
Recent research suggests that there has been a shift away from blocking the use of social networking services. In many cases, the opposite is occurring, as the potential of online networking services is being realized. It has been suggested that if schools block them [social networking services], they are preventing students from learning the skills they need. Banning social networking [...] is not only inappropriate but also borderline irresponsible when it comes to providing the best educational experiences for students. Schools and school districts have the option of teaching safe media usage as well as incorporating digital media into the classroom experience, thus preparing students for the literacy they will encounter in the future.
Positive correlates
A cyberpsychology research study conducted by Australian researchers demonstrated that a number of positive psychological outcomes are related to Facebook use. These researchers established that people can derive a sense of social connectedness and belongingness in the online environment. Importantly, this online social connectedness was associated with lower levels of depression and anxiety, and greater levels of subjective well-being. These findings suggest that the nature of online social networking determines the outcomes of online social network use.
Grassroots organizing
Social networks are being used by activists as a means of low-cost grassroots organizing. Extensive use of an array of social networking sites enabled organizers of the 2009 National Equality March to mobilize an estimated 200,000 participants to march on Washington at a cost savings of up to 85% per participant compared with previous methods.
The August 2011 England riots were similarly considered to have escalated and been fuelled by this type of grassroots organization.
Employment
A rise in social network use is being driven by college students using the services to network with professionals for internship and job opportunities. Many studies have been done on the effectiveness of networking online in a college setting; one notable example is a study by Phipps Arabie and Yoram Wind published in Advances in Social Network Analysis. Many schools have implemented online alumni directories, which serve as makeshift social networks that current and former students can turn to for career advice. However, these alumni directories tend to suffer from an oversupply of advice-seekers and an undersupply of advice providers. One new social networking service, Ask-a-peer, aims to solve this problem by enabling advice seekers to offer modest compensation to advisers for their time. LinkedIn is another such resource: it helps alumni, students and unemployed individuals look for work, connect with others professionally, and network with companies.
In addition, employers have been found to use social network sites to screen job candidates.
Hosting service
A social network hosting service is a web hosting service that specifically hosts the user creation of web-based social networking services, alongside related applications.
Trade network
A social trade network is a service that allows participants interested in specific trade sectors to share related contents and personal opinions.
Business model
Few social networks charge money for membership. In part, this may be because social networking is a relatively new service, and the value of using it has not been firmly established in customers' minds. Companies such as Myspace and Facebook sell online advertising on their sites. Their business model is based on a large membership count, and charging for membership would be counterproductive. Some believe that the deeper information that the sites have on each user will allow much better targeted advertising than any other site can currently provide. In recent times, Apple has been critical of the Google and Facebook model, in which users are treated as the product and a commodity, with their data being sold for marketing revenue. Social networks operate under an autonomous business model, in which a social network's members serve dual roles as both the suppliers and the consumers of content. This is in contrast to a traditional business model, where the suppliers and consumers are distinct agents. Revenue is typically gained in the autonomous business model via advertisements, but subscription-based revenue is possible when membership and content levels are sufficiently high.
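The trade-off between the two revenue routes can be made concrete with back-of-the-envelope arithmetic; all figures below are hypothetical and chosen only for illustration.

```python
# Hypothetical figures for illustration only.
monthly_active_users = 50_000_000
ad_impressions_per_user = 300        # impressions per user per month
cpm_usd = 2.50                       # ad revenue per 1,000 impressions

ad_revenue = monthly_active_users * ad_impressions_per_user / 1000 * cpm_usd

subscribers = int(monthly_active_users * 0.03)  # assume only 3% would pay
subscription_fee = 5.00                         # USD per month
subscription_revenue = subscribers * subscription_fee

print(f"Monthly ad revenue:           ${ad_revenue:,.0f}")            # $37,500,000
print(f"Monthly subscription revenue: ${subscription_revenue:,.0f}")  # $7,500,000
```

Under these assumptions the ad model wins precisely because charging would shrink the membership that the advertising depends on, which is the counterproductivity the paragraph describes.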
Social interaction
People use social networking sites to meet new friends, find old friends, or locate people who have the same problems or interests they have, a practice called niche networking. More and more relationships and friendships are being formed online and then carried into an offline setting. Psychologist and University of Hamburg professor Erich H. Witte says that relationships which start online are much more likely to succeed. In this regard, there are studies that predict tie strength among friends on social networking websites. One online dating site claims that 2% of all marriages begin on its site, the equivalent of 236 marriages a day. Other sites claim one in five relationships begin online.
Users do not necessarily share with others the content which is of most interest to them, but rather that which projects a good impression of themselves. While everyone agrees that social networking has had a significant impact on social interaction, there remains substantial disagreement as to whether the nature of this impact is entirely positive. A number of scholars have done research on the negative effects of Internet communication as well. These researchers contend that this form of communication is an impoverished version of conventional face-to-face social interaction, and therefore produces negative outcomes such as loneliness and depression for users who rely on social networking entirely. When people engage solely in online communication, interactions between communities, families, and other social groups are weakened.
Issues
Social networking services have led to many issues regarding privacy, bullying, social anxiety and potential for misuse.
Investigations
Social networking services are increasingly being used in legal and criminal investigations. The information posted on sites such as MySpace and Facebook has been used by police (forensic profiling), probation, and university officials to prosecute users of said sites. In some situations, content posted on MySpace has been used in court.
Facebook is increasingly being used by school administrations and law enforcement agencies as a source of evidence against student users. The site, the number one online destination for college students, allows users to create profile pages with personal details. These pages can be viewed by other registered users from the same school, which often include resident assistants and campus police who have signed up for the service. One UK police force has sifted through pictures from Facebook and arrested some people who had been photographed in a public place holding a weapon such as a knife (carrying a weapon in a public place is illegal).
Application domains
Government applications
Social networking is more recently being used by various government agencies. Social networking tools serve as a quick and easy way for the government to solicit suggestions from the public and to keep the public updated on its activities; however, this comes with a significant risk of abuse, for example, to cultivate a culture of fear such as that outlined in Nineteen Eighty-Four or THX 1138.
The Centers for Disease Control demonstrated the importance of vaccinations on the popular children's site Whyville, and the National Oceanic and Atmospheric Administration has a virtual island on Second Life where people can explore caves or examine the effects of global warming. Likewise, NASA has taken advantage of a few social networking tools, including Twitter and Flickr, using them to aid the Review of U.S. Human Space Flight Plans Committee, whose goal is to ensure that the nation is on a vigorous and sustainable path to achieving its boldest aspirations in space.
Business applications
The use of social networking services in an enterprise context presents the potential of having a major impact on the world of business and work. Social networks connect people at low cost; this can be beneficial for entrepreneurs and small businesses looking to expand their contact bases. These networks often act as a customer relationship management tool for companies selling products and services. Companies can also use social networks for advertising in the form of banners and text ads. Since businesses operate globally, social networks can make it easier to keep in touch with contacts around the world. Applications for social networking sites have extended toward businesses, and brands are creating their own high-functioning sites, a sector known as brand networking. The idea is that a brand can build its consumer relationships by connecting consumers to the brand image on a platform that provides them with relevant content, elements of participation, and a ranking or scoring system. Brand networking is a new way to capitalize on social trends as a marketing tool. The power of social networks is beginning to permeate the internal culture of businesses, where they are finding uses for collaboration, file sharing and knowledge transfer. The term "enterprise social software" is becoming increasingly popular for these types of applications.
Dating applications
Many social networks provide an online environment for people to communicate and exchange personal information for dating purposes. Intentions can vary from looking for a one-time date to seeking short-term or long-term relationships. Most of these social networks, just like online dating services, require users to give out certain pieces of information, usually including a user's age, gender, location, interests, and perhaps a picture. Releasing very personal information is usually discouraged for safety reasons. This allows other users to search or be searched by some sort of criteria, while people can maintain a degree of anonymity similar to most online dating services. Online dating sites are similar to social networks in the sense that users create profiles to meet and communicate with others, but their activities on such sites are for the sole purpose of finding a person of interest to date. Social networks do not necessarily have to be for dating; many users simply use them for keeping in touch with friends and colleagues.
However, an important difference between social networks and online dating services is that online dating sites usually require a fee, whereas social networks are free.
This difference is one reason the online dating industry has seen a massive decrease in revenue, as many users opt for social networking services instead. Many popular online dating services such as Match.com, Yahoo Personals, and eHarmony.com are seeing a decrease in users, while social networks like MySpace and Facebook are experiencing an increase. The number of Internet users in the United States who visit online dating sites fell from a peak of 21% in 2003 to 10% in 2006. Whether because of the cost of the services, the variety of users with different intentions, or some other reason, social networking sites are quickly becoming the new way to find dates online.
Educational applications
The National School Boards Association reports that almost 60% of students who use social networking talk about education topics online, and more than 50% talk specifically about schoolwork. Yet the vast majority of school districts have stringent rules against nearly all forms of social networking during the school day, even though students and parents report few problem behaviors online. Social networks focused on supporting relationships between teachers and their students are now used for learning, educators' professional development, and content sharing. HASTAC is a collaborative social network space for new modes of learning and research in higher education, K-12, and lifelong learning; Ning supports teachers; TermWiki, TeachStreet and other sites are being built to foster relationships that include educational blogs, portfolios, formal and ad hoc communities, as well as communication such as chats, discussion threads, and synchronous forums. These sites also have content sharing and rating features. Social networks are also emerging as online yearbooks, both public and private. One such service is MyYearbook, which allows anyone from the general public to register and connect. An emerging trend is private-label yearbooks accessible only by students, parents, and teachers of a particular school, similar to Facebook's beginnings at Harvard.
Finance applications
The use of virtual currency systems inside social networks creates new opportunities for global finance. Hub Culture operates a virtual currency, Ven, used for global transactions among members, product sales, and financial trades in commodities and carbon credits. In May 2010, carbon pricing contracts were introduced into the weighted basket of currencies and commodities that determines the floating exchange value of Ven. The introduction of carbon into the pricing calculation made Ven the first and only currency linked to the environment.
Medical and health applications
Social networks are beginning to be adopted by healthcare professionals as a means to manage institutional knowledge, disseminate peer-to-peer knowledge and highlight individual physicians and institutions. The advantage of using a dedicated medical social networking site is that all the members are screened against the state licensing board's list of practitioners. A new trend is emerging with social networks created to help their members with various physical and mental ailments. For people suffering from life-altering diseases or chronic health conditions, companies such as HealthUnlocked and PatientsLikeMe offer their members the chance to connect with others dealing with similar issues and to share experiences. For alcoholics and addicts, SoberCircle gives people in recovery the ability to communicate with one another and strengthen their recovery through the encouragement of others who can relate to their situation. DailyStrength is also a website that offers support groups for a wide array of topics and conditions, including the support topics offered by PatientsLikeMe and SoberCircle. Some social networks aim to encourage healthy lifestyles in their users. SparkPeople and HealthUnlocked offer community and social networking tools for peer support during weight loss. Fitocracy and QUENTIQ are focused on exercise, enabling users to share their own workouts and comment on those of other users. Other uses include analysing data from existing social networks (such as Twitter) to detect large crowd-concentration events, based on statistical analysis of tweet locations, and disseminating that information, for example to mobility-challenged individuals so they can avoid the affected areas and optimize their journeys through an urban environment.
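A minimal sketch of one such crowd-detection approach, assuming geotagged posts are available as (latitude, longitude) pairs; the grid-binning method, cell size, and threshold here are illustrative, not a description of any particular deployed system.

```python
from collections import Counter

def detect_crowds(points, cell_deg=0.005, threshold=50):
    """Flag grid cells whose geotagged-post count exceeds a threshold.

    points: iterable of (lat, lon) pairs from geotagged posts.
    cell_deg: grid cell size in degrees (0.005 deg of latitude is ~550 m).
    threshold: minimum posts per cell to count as a crowd event.
    Returns (approx_lat, approx_lon, count) for each flagged cell.
    """
    counts = Counter(
        (round(lat / cell_deg), round(lon / cell_deg)) for lat, lon in points
    )
    return [(cell[0] * cell_deg, cell[1] * cell_deg, n)
            for cell, n in counts.items() if n >= threshold]

# Hypothetical usage with a list of tweet coordinates:
# hotspots = detect_crowds(tweet_coords, cell_deg=0.005, threshold=100)
```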
Social and political applications
Social networking sites have recently shown their value in social and political movements. During the Egyptian revolution, Facebook and Twitter both played an allegedly pivotal role in keeping people connected to the revolt. Egyptian activists credited social networking sites with providing a platform for planning protest and sharing news from Tahrir Square in real time. By presenting a platform on which thousands of people can instantaneously share videos of events, many featuring brutality, social networking can be a vital tool in revolutions. On the flip side, social networks enable government authorities to easily identify and repress protesters and dissidents. Another political application of social media is promoting the involvement of younger generations in politics and ongoing political issues.
Perhaps the most significant early political application of social media was Barack Obama's 2008 election campaign. It was the first of its kind, successfully incorporating social media into its winning campaign strategy and changing the way political campaigns are run in an ever-changing technological world. The campaign won by engaging everyday people and empowering volunteers, donors, and advocates through social networks, text messaging, email and online videos. Obama's social media campaign was vast: the campaign boasted 5 million 'friends' across over 15 social networking sites, including over 3 million friends on Facebook alone. Another significant success of the campaign was its online videos, with nearly 2,000 YouTube videos put online receiving over 80 million views.
In 2007, when Obama first announced his candidacy, the iPhone had not yet been released and Twitter was barely a year old. A year later, however, Obama was sending out voting reminders to thousands of people through Twitter, showing just how fast social media moves. Obama's campaign had to stay current and incorporate social media successfully, because social media is most effective in real time.
In the build-up to the 2012 presidential election, observers waited to see how strong the influence of social media would be, following the 2008 campaigns, in which Obama's winning campaign had been social media-heavy while McCain's campaign never really grasped social media. John F. Kennedy was the first president who really understood television; similarly, Obama was the first president to fully understand the power of social media. Obama recognized that social media is about creating relationships and connections, and he used social media to the advantage of his presidential election campaigns, dominating his opponents in the social media space.
Other political campaigns have followed Obama's successful social media campaigns, recognizing the power of social media and embedding it as a key factor within their political campaigns, for example Donald Trump's 2016 presidential campaign. Dan Pfeiffer, Obama's former digital and social media guru, commented that Donald Trump is "way better at the internet than anyone else in the GOP which is partly why he is winning".
Research has shown that 66% of social media users actively engage in political activity online, and, as with many other behaviors, online activities translate into offline ones. Research from the MacArthur Research Network on Youth and Participatory Politics states that young people who are politically active online are twice as likely to vote as those who are not. Political applications of social networking sites are therefore crucial, particularly for engaging with young people, who are perhaps the least educated in politics and the most active on social networking sites. Social media is, therefore, a very effective way for politicians to connect with a younger audience through their political campaigns.
On June 28, 2020, The New York Times published an article sharing the findings of two researchers who studied the impact of TikTok, a video-sharing and social networking application, on political expression. The application, besides being a creative space for self-expression, has been used maliciously to spread disinformation ahead of US President Donald Trump's Tulsa rally in Oklahoma, and it has amplified footage of police brutality at Black Lives Matter protests.
Crowdsourcing applications
Crowdsourcing social media platforms, such as Design Contest, Arcbazar, and Tongal, bring together groups of professional freelancers, such as designers, and help them communicate with business owners interested in their services. This process is often used to subdivide tedious work or to raise funds for startup companies and charities, and can also occur offline.
Open source software
There are a number of projects that aim to develop free and open source software to use for social networking services. These technologies are often referred to as social engine or social networking engine software.
Largest social networking services
The following is a list of the largest social networking services, in order by number of active users, as of January 2024, as published by Statista:
*Platforms that have not published updated user figures in the past 12 months; figures may be out of date and less reliable
**Figure uses daily active users; the monthly active user count is likely higher
| Technology | Internet | null |
2042384 | https://en.wikipedia.org/wiki/Bengal%20monitor | Bengal monitor | The Bengal monitor (Varanus bengalensis), also called the Indian monitor, is a species of monitor lizard distributed widely in the Indian subcontinent, as well as parts of Southeast Asia and West Asia.
Description
The Bengal monitor can reach a snout-to-vent length (SVL) of and a tail of . Males are generally larger than females. Heavy individuals may weigh nearly .
The populations of monitors in India and Sri Lanka differ in scalation from those of Myanmar; these populations were once considered subspecies of the Bengal monitor, but are now treated as two species within the V. bengalensis species complex. What was once the nominate subspecies, V. bengalensis, is found west of Myanmar, while the clouded monitor (V. nebulosus) is found to the east. Clouded monitors can be differentiated by the presence of a series of enlarged scales in the supraocular region. The number of ventral scales varies, decreasing from 108 in the west to 75 in the east (Java).
Young monitor lizards are more colourful than adults. Young have a series of dark crossbars on the neck, throat and back. The belly is white, banded with dark crossbars, and spotted with grey or yellow (particularly in the eastern part of the range). On the dorsal surface of young monitors, there is a series of yellow spots with dark transverse bars connecting them. As they mature, the ground colour becomes light brown or grey, and dark spots give them a speckled appearance. Clouded monitor hatchlings by comparison tend to have a series of backward-pointing, V-shaped bands on their necks.
Bengal monitors have external nostril openings (nares) that are slit-like, oriented nearly horizontally, and positioned between the eye and the tip of the snout. The nares can be closed at will, especially to keep out debris or water. The scales of the skin are rougher in patches, and on the sides they have minute pits, especially well distributed in males. These scales with micropores have glandular structures in the underlying dermal tissue and produce a secretion that may be a pheromone-like substance. Like other monitors, Bengal monitors have a forked tongue similar to that of snakes. Its function is mainly sensory; it is not very involved in the transport of food down the throat. Bengal monitors have fat deposits in the tail and body that serve them when prey is not easily available.
The lungs have spongy tissue, unlike the sacs of other saurians. This allows for a greater rate of gas exchange, permitting a faster metabolic rate and higher activity levels. Like all monitors, they have subpleurodont teeth, meaning the teeth are fused to the inside of the jaw bones. The teeth are placed one behind another, and there are replacement teeth behind and between each functional tooth (polyphyodonty). The maxillary and dentary teeth are laterally compressed, sometimes with a slightly serrated cutting edge, while the premaxillary teeth are conical. There are 7–8 premaxillary teeth, 10 maxillary and 13 dentary teeth. Replacement teeth move forward, and about four replacements happen each year for each tooth.
While all monitor lizards are now placed in a clade called the Toxicofera which are known to possess venom glands, there are no reports of the effects of venom in Bengal monitors other than a very controversial case report of fatal renal failure as a result of envenomation from this species.
Distribution and habitat
The species ranges from Iran to Java and is among the most widely distributed of monitor lizards, being eurytopic and adaptable to a range of habitats. It is found in river valleys in eastern Iran, Afghanistan, India, Nepal, Sri Lanka, Pakistan, Bangladesh and Myanmar.
The closely related clouded monitor occurs in southern Myanmar, Vietnam, Cambodia, Thailand, Malaysia, Java and the Sunda Islands; records from Sumatra have not been confirmed. The species is absent from the Andaman Islands.
The species mainly occurs at lower elevations, below 1,500 metres, and is found in habitats ranging from dry semiarid desert to moist forest. It is often found in agricultural areas.
Ecology and behaviour
Bengal monitors are usually solitary and mostly found on the ground, although the young are often seen on trees. Clouded monitors by contrast have a greater propensity for tree climbing. Bengal and yellow monitors are sympatric but partially separated by habitat, as Bengal monitors prefer forest over agricultural areas. Bengal monitors shelter in burrows they dig or in crevices in rocks and buildings, whilst clouded monitors prefer tree hollows. Both species will make use of abandoned termite mounds. Bengal monitors are diurnal like other monitors, becoming active around 6 a.m. and basking in the morning sun. During winter, in the colder parts of their range, they may take shelter and go through a period of reduced metabolic activity. They are not territorial, and may change their range seasonally in response to food availability.
They are usually shy and avoid humans. They have keen eyesight and can detect human movement nearly 250 m away. When caught, a few individuals may bite, but rarely do so.
Although they are found on agricultural land, they prefer forests with large trees. Generally, areas with high ground cover and large trees are favourable.
Captives have been known to live for nearly 22 years. Predators of adults include pythons, mammalian predators and birds. A number of ectoparasites and endoparasites are recorded.
Breeding
Females may be able to retain sperm, and females held in captivity have laid fertile eggs. Some species of monitor lizards, such as the Nile monitor, have additionally been demonstrated to be capable of parthenogenesis. The main breeding season is June to September, but males begin to show combat behaviour by April. Females dig a nest hole in level ground or a vertical bank and lay the eggs inside, filling up the hole and compacting the soil with their snouts. The females often dig false nests nearby and shovel soil around the area. They sometimes make use of a termite mound to nest. A single clutch of about 20 eggs is laid. The eggs hatch in 168 to 254 days, and about 40–80% of the eggs may hatch.
Locomotion
They are capable of rapid movement on the ground and can climb well; small individuals may climb trees to escape, but larger ones prefer to escape on the ground. On the ground, they sometimes stand on the hind legs to get a better view, or when males fight other males. They can also swim well and can stay submerged for at least 17 minutes. They use both trees and bushes for shelter.
Feeding
Bengal monitors tend to remain active the whole day. Large adults may ascend vertical tree trunks, where they sometimes stalk and capture roosting bats. The species is a generalist, and feeds on a varied diet of invertebrates and vertebrates. Invertebrate prey mostly consists of beetles and their larvae, followed by orthopterans, but also includes maggots, caterpillars, centipedes, scorpions, crabs, crayfish, snails, termites, ants, and earwigs. Larger individuals also eat a large amount of vertebrate prey, including toads and frogs and their eggs, fish, lizards, snakes, rats, squirrels, hares, musk shrews, and birds. Hares and rodents such as lesser bandicoot rats are often caught by digging them out of their burrows. Diet may differ by season and locality; for example, they often forage for fish and aquatic insects in streams during the summer, and individuals in Andhra Pradesh eat mostly frogs and toads.
Bengal monitors will also scavenge carrion, and sometimes congregate when feeding on large carcasses such as that of deer. In areas where livestock are common, they often seek out dung to forage for beetles and other insects.
Threats
The Bengal monitor has been assessed as Least Concern on the IUCN Red List; the wild population is decreasing, as it is hunted for consumption and medicinal purposes, as well as for its skin.
As it is adaptable to a range of habitats, habitat degradation is a relatively minor threat, superseded by that of agricultural pollution, as pesticides reduce the availability of prey. In Iran, it is also sometimes killed because it is perceived as a dangerous threat.
The dried and dyed hemipenes of Bengal monitors, and less often of yellow and water monitors, are frequently trafficked and illegally sold in India and online under the deceptive term 'Hatha Jodi'. The item is claimed to be the root of a supposedly rare Himalayan plant, both to fool buyers and retailers and to disguise the trade from wildlife authorities. Sellers advertise 'Hatha Jodi' as having the tantric power to bring wealth, power and contentment. A pair of hemipenes may sell for up to US$250.
In India, the body oil of monitor lizards is sold for thousands of Indian rupees to residents in metropolitan cities as a treatment for rheumatism.
Conservation
The Bengal monitor is listed on Schedule I of the Wild Life (Protection) Act, 1972 and on Appendix I of CITES.
In culture
The lizard is known as bis-cobra in western India, goyra in Rajasthan, goh in Punjab (in both India and Pakistan) and Bihar, ghorpad in Maharashtra, and thalagoya in Sri Lanka; it also goes by local names in Bangladesh and West Bengal. Folk mythology across the region holds that these lizards, though actually harmless, are venomous; in Rajasthan, locals believe that the lizards become venomous only during the rainy season. Monitor lizards are hunted, and their body fat, extracted by boiling, is used in a wide range of folk remedies.
In Sri Lanka, the Asian water monitor is considered venomous and dangerous when confronted, while the Bengal monitor (thalagoya) is considered harmless and rather defenseless. Land monitor meat is considered edible (especially by the indigenous Veddah and Rodiya people) while water monitor meat is not. Killing a land monitor is usually considered a cowardly act, and the animal is frequently referred to in folklore along with other harmless reptiles such as rat snakes (garandiya).
A clan in Maharashtra called Ghorpade claims that the name is derived from a legendary founder Tanaji Malusare who supposedly scaled a fort wall using a monitor lizard tied to a rope.
The Bengal monitor's belly skin has traditionally been used in making the drum head for the kanjira (known as Dimadi in Maharashtra), a South Indian percussion instrument.
| Biology and health sciences | Lizards and other Squamata | Animals |
2043244 | https://en.wikipedia.org/wiki/Nigella%20sativa | Nigella sativa | Nigella sativa (black caraway, also known as black cumin, nigella, kalonji, charnushka) is an annual flowering plant in the family Ranunculaceae, native to eastern Europe (Bulgaria and Romania) and western Asia (Cyprus, Turkey, Iran and Iraq), but naturalized over a much wider area, including parts of Europe, northern Africa and east to Myanmar. It is used as a spice in many cuisines.
Etymology
The genus name Nigella is a diminutive of the Latin niger ("black"), referring to the seed color. The specific epithet sativa means "cultivated".
In English, Nigella sativa and its seed are variously called black caraway, black seed, black cumin, fennel flower, nigella, nutmeg flower, Roman coriander, black onion seed and kalonji.
Black seed and black caraway may also refer to Elwendia persica, which is also known as Bunium persicum.
Description
N. sativa grows to tall, with finely divided, linear (but not thread-like) leaves. The flowers are delicate, and usually coloured pale blue and white, with five to ten petals. The fruit is a large and inflated capsule composed of three to seven united follicles, each containing numerous seeds which are used as spice, sometimes as a replacement for Bunium bulbocastanum (also called black cumin).
Culinary uses
The seeds of N. sativa are used as a spice in many cuisines. In Palestine, the seeds are ground to make bitter qizha paste.
The dry-roasted seeds flavour curries, vegetables, and pulses. They can be used as a seasoning in recipes with pod fruit, vegetables, salads, and poultry. In some cultures, the black seeds are used to flavour bread products. They are used as a part of the spice mixture panch phoron (meaning a mixture of five spices) in many recipes in Bengali cuisine and most recognizably in some variations of naan, such as nân-e barbari. Nigella is also used in tresse cheese, a braided string cheese called majdouleh or majdouli in the Middle East.
In the United States, the Food and Drug Administration classifies Nigella sativa as Generally Recognized as Safe (GRAS) for use as a spice, natural seasoning, or flavouring.
History
Archaeological evidence about the earliest cultivation of N. sativa dates back three millennia, with N. sativa seeds found in several sites from ancient Egypt, including the Tomb of Tutankhamun. Seeds were found in a Hittite flask in Turkey from the second millennium BC.
N. sativa may have been used as a condiment of the Old World to flavour food. The Persian physician Avicenna described N. sativa as a treatment for dyspnea in his The Canon of Medicine. N. sativa was used in the Middle East as a traditional medicine.
Chemistry
Oils are 32% to 40% of the total composition of N. sativa seeds. N. sativa oil contains linoleic acid, oleic acid, palmitic acid, and trans-anethole, and other minor constituents, such as nigellicine, nigellidine, nigellimine, and nigellimine N-oxide. Aromatics include thymoquinone, dihydrothymoquinone, p-cymene, carvacrol, α-thujene, thymol, α-pinene, β-pinene and trans-anethole. Protein and various alkaloids are present in the seeds.
Medicinal use
Despite considerable use of N. sativa in traditional medicine practices in Africa and Asia, there is insufficient high-quality clinical evidence to indicate that consuming the seeds or oil can be used to treat human diseases. One meta-analysis of clinical trials found weak evidence that N. sativa has a short-term benefit on lowering systolic and diastolic blood pressure. A 2016 review indicated that N. sativa supplementation may lower total cholesterol, LDL, and triglyceride levels.
| Biology and health sciences | Herbs and spices | Plants |
2044863 | https://en.wikipedia.org/wiki/Potassium%20citrate | Potassium citrate | Potassium citrate (also known as tripotassium citrate) is a potassium salt of citric acid with the molecular formula K3C6H5O7. It is a white, hygroscopic crystalline powder. It is odorless with a saline taste. It contains 38.28% potassium by mass. In the monohydrate form, it is highly hygroscopic and deliquescent.
As a food additive, potassium citrate is used to regulate acidity, and is known as E number E332. Medicinally, it may be used to control kidney stones derived from uric acid or cystine.
In 2020, it was the 297th most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Synthesis
Potassium citrate can be synthesized by neutralizing citric acid with potassium bicarbonate, potassium carbonate or potassium hydroxide. The solution can then be filtered and the solvent evaporated until granulation occurs.
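The neutralization route and the potassium content quoted above can be checked with a short stoichiometric calculation. The following Python sketch is illustrative only (the atomic masses are rounded standard values, and the helper function is invented for the example); it reproduces the 38.28% potassium-by-mass figure:

```python
# Verify the potassium mass fraction of K3C6H5O7 (tripotassium citrate).
# Neutralization: C6H8O7 + 3 KOH -> K3C6H5O7 + 3 H2O
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "O": 15.999, "K": 39.098}  # g/mol

def molar_mass(formula: dict) -> float:
    """Sum of atomic masses weighted by atom counts."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

potassium_citrate = {"K": 3, "C": 6, "H": 5, "O": 7}
m = molar_mass(potassium_citrate)
k_fraction = 3 * ATOMIC_MASS["K"] / m
print(f"molar mass: {m:.2f} g/mol, potassium: {100 * k_fraction:.2f}% by mass")
# Output: molar mass: 306.39 g/mol, potassium: 38.28% by mass
```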
Uses
Potassium citrate is rapidly absorbed when given by mouth, and is excreted in the urine. Since it is an alkaline salt, it is effective in reducing the pain and frequency of urination when these are caused by highly acidic urine. It is used for this purpose in dogs and cats, but is chiefly employed as a non-irritating diuretic.
Potassium citrate is an effective way to treat or manage arrhythmia if the patient is hypokalemic.
It is widely used to treat urinary calculi (kidney stones), and is often used by patients with cystinuria. A systematic review showed a significant reduction in the incidence of stone formation (RR 0.26, 95% CI 0.10 to 0.68).
It is also used as an alkalizing agent in the treatment of mild urinary tract infections, such as cystitis.
It is also used in many soft drinks as a buffering agent.
Frequently used in an aqueous solution with other potassium salts, it is a wet chemical fire suppressant that is particularly useful against kitchen fires. Its alkaline pH encourages saponification to insulate the fuel from oxidizing air, and the endothermic dehydration reaction absorbs heat energy to reduce temperatures.
Administration
Potassium citrate liquid is usually administered by mouth in a diluted aqueous solution, because of its somewhat caustic effect on the stomach lining, and the potential for other mild health hazards. Pill tablets also exist in normal, and extended-release formulations.
| Physical sciences | Citrates | Chemistry |
30413568 | https://en.wikipedia.org/wiki/Saba%20banana | Saba banana | Saba banana is a triploid hybrid (ABB) banana cultivar originating from the Philippines. It is primarily a cooking banana, though it can also be eaten raw. It is one of the most important banana varieties in Philippine cuisine. It is also sometimes known as the "cardaba banana", though the latter name is more correctly applied to the cardava, a very similar cultivar also classified within the saba subgroup.
Description
Saba bananas have very large, robust pseudostems that can reach heights of . The trunk can reach diameters of . The trunk and leaves are dark blue-green in color. Like all bananas, each pseudostem flowers and bears fruits only once before dying. Each mat bears about eight suckers.
The fruits become ready for harvesting 150 to 180 days after flowering, longer than other banana varieties. Each plant has a potential yield of per bunch. Typically, a bunch has 16 hands, with each hand having 12 to 20 fingers.
Saba bananas grow best in well-drained, fertile soils with full sun exposure. They inherit most of the characteristics of Musa balbisiana, making them tolerant of dry soil and colder conditions of temperate climates. They require minimum rainfall and can survive long dry seasons as long as adequate irrigation is provided. However, their fruits may not ripen under such conditions. They also have good resistance against Sigatoka leaf spot diseases.
The fruits are long and in diameter. Depending on the ripeness, the fruits are distinctively squarish and angular. The flesh is white and starchy; the starchiness makes this variety particularly suitable for cooking. They are usually harvested while still green 150 to 180 days after blooming, especially if they are to be transported over long distances.
Taxonomy and nomenclature
The saba banana is a triploid (ABB) hybrid of the seeded bananas Musa balbisiana and Musa acuminata.
Its official designation is Musa acuminata × balbisiana (ABB Group) 'Saba'. Synonyms include:
Musa × paradisiaca L. cultigroup Plantain cv. 'Saba'
Musa sapientum L. var. compressa (Blanco) N.G.Teodoro
'Saba' is known in English as saba, cardaba, sweet plantain, compact banana, and papaya banana. Saba is also known by other common names such as saba, sab-a, or kardaba in Filipino; biu gedang saba in Javanese; pisang nipah or pisang abu in Malaysian; dippig in Ilocano; burro or rulo in Mexico; pisang kepok in Indonesian; kluai hin in Thai; and opo-’ulu or dippig (from Ilocano migrants) in Hawaiian.
Saba bananas are part of the saba subgroup (ABB), which also includes the very similar 'Cardava' cultivar. Both were once erroneously identified as BBB polyploids, and both are used extensively in Philippine cuisine, with the latter being more popular in the Visayas and Mindanao regions. The subgroup also includes the 'Benedetta' cultivar, also known as 'Uht Kapakap' in Micronesia, 'Praying Hands' in Florida, and 'Inabaniko' or 'Ripping' in the Philippines.
Uses
Saba bananas are one of the most important banana cultivars in Philippine cuisine. The fruits provide the same nutritional value as potatoes. They can be eaten raw, boiled, or cooked into various traditional Filipino desserts and dishes such as maruya/sinapot, turrón, halo-halo and ginanggang. It is also popular in Indonesia, Malaysia, and Singapore in dishes like pisang aroma (similar to the Filipino turrón), pisang goreng (fried bananas), kolak pisang, and pisang kepok kukus (steamed banana).
Saba is also processed into a Filipino condiment known as banana ketchup, invented by the Filipino food technologist and war heroine Maria Y. Orosa (1893–1945). The dark red inflorescence of saba (banana hearts, locally known in the Philippines as puso ng saba) are edible. The waxy, green leaves are also used as traditional wrappings of native dishes in Southeast Asia. Fibers can also be taken from the trunk and leaves and used to manufacture ropes, mats, and sacks.
Saba bananas are also cultivated as ornamental plants and shade trees for their large size and showy coloration.
Pests and diseases
In comparison to most other types of cooking bananas, saba bananas are highly resistant to black sigatoka (Mycosphaerella fijiensis) and are more tolerant of drought conditions and soil nutrient deficiencies. As such, they are viewed as a possible source for breeding new hybrid cultivars to replace more susceptible cooking banana cultivars grown today (in particular, the threatened East African Highland bananas).
Common pests
Fruit-scarring beetles
Banana thrips
Mealy bug
Banana aphids
Corm weevil
Borers
Root nematodes
Grasshoppers
Banana skipper butterfly
Common diseases
Panama disease/Fusarium wilt
Sigatoka
Moko or bacterial wilt
Black leaf streak
Banana bunchy top disease
| Biology and health sciences | Tropical and tropical-like fruit | Plants |
30414731 | https://en.wikipedia.org/wiki/Fe%27i%20banana | Fe'i banana | Fe'i bananas (also spelt Fehi or Féi) are cultivated plants in the genus Musa, used mainly for their fruit. They are very distinct in appearance and origin from the majority of bananas and plantains currently grown. Found mainly in the islands of the Pacific, particularly French Polynesia, Fe'i bananas have skins which are brilliant orange to red in colour with yellow or orange flesh inside. They are usually eaten cooked and have been an important food for Pacific Islanders, moving with them as they migrated across the ocean. Most are high in beta-carotene (a precursor of vitamin A).
The botanical name for Fe'i bananas is Musa × troglodytarum L. Precisely which wild species they are descended from is unclear.
Description
Fe'i bananas are cultivated varieties (cultivars), rather than wild forms. They are distinctly different from the much more common bananas and plantains derived from Musa acuminata and Musa balbisiana. All members of the genus Musa are tall herbaceous plants, typically around tall or even more. Although they appear tree-like, the "trunk" is actually a pseudostem, formed from the tightly wrapped bases of the leaves. At maturity each pseudostem produces a single flowering stem that grows up inside it, eventually emerging from the top. As it elongates, female flowers appear which go on to form fruit – the bananas. Finally male flowers are produced. In cultivated bananas, the fruit is usually seedless and the male flowers sterile.
Fe'i bananas can be distinguished from other kinds of cultivated bananas and plantains in a number of ways. They have highly coloured sap, pink through to bright magenta and dark purple. The bracts of the flowering spike (inflorescence) are bright shiny green rather than dull red or purple. The flowering and fruiting stem is more or less upright (rather than drooping), so that the bunches of bananas are also upright. Ripe fruit has brilliant orange, copper-coloured or red skin with orange or yellow flesh inside. It has prominent ridges, making it squarish in cross-section.
Taxonomy
As with many names in the genus Musa, considerable confusion has existed as to the proper botanical name, if any, for Fe'i bananas. Some authorities have preferred to treat Fe'i bananas as a formal or informal cultivar group rather than employing a Latin binomial, using names like Musa (Fe'i Group) 'Utafan'.
One of the earliest detailed accounts of the genus Musa was by the German-Dutch botanist Georg E. Rumpf (c.1627–1702), usually known by the Latinized name Rumphius. His Herbarium Amboinense was published in 1747, after his death. His figure and description of a "species" under the name "Musa Uranoscopos" (meaning "heaven-looking banana") is consistent with a Fe'i banana; he refers to the upright flowering spike (although his figure shows the terminal bud drooping), the coloured sap, and the effect of consumption on urine.
However, the starting point for botanical names is the publication of Carl Linnaeus' Species Plantarum in 1753, so "Musa uranoscopos" is not an acceptable name. In the second edition of Species Plantarum, Linnaeus lumped together Rumphius' Musa uranoscopos and Musa 'Pissang Batu' under the name Musa troglodytarum, in spite of the fact that Rumphius had noted several distinctions between the two. Linnaeus' treatment has been described as "beyond understanding". In 1917, Merrill designated the illustration of Rumphius' Musa uranoscopos as the lectotype of Musa troglodytarum L. On this basis, Häkkinen, Väre and Christenhusz concluded in 2012 that "all Fe'i cultivars, including those featured in Paul Gauguin's famous paintings, should be treated under the name M. troglodytarum L." Other sources also accept this as the scientific name for the group as a whole – for example Rafaël Govaerts in 2004. The name may be written as M. × troglodytarum to stress the hybrid origin of Fe'i bananas.
Synonyms of M. troglodytarum are:
"M. uranoscopos" Rumphius (1747) – published before 1753, so not an acceptable name
M. uranoscopos Lour. (1790) – a superfluous name because it was provided as an alternative name to Linnaeus', hence illegitimate; Loureiro's description is based on a different plant, M. coccinea
M. uranoscopos Colla (1820) – superfluous and hence illegitimate
M. uranoscopos Mig. (1855) – superfluous and hence illegitimate
M. uranoscopos Seem. (1868) – superfluous and hence illegitimate
M. seemannii Mueller (1875) – superfluous and hence illegitimate
M. fehi Bertero ex Vieillard (1862)
Whereas most cultivated bananas and plantains are derived from species in Musa section Musa, Fe'i bananas are clearly part of section Callimusa (in particular the species formerly grouped as section Australimusa). However, their precise origins are unclear. On the basis of appearance (morphology), Musa maclayi, native to Papua New Guinea, has been proposed as a parent. More recent genetic studies suggest they are close to M. lolodensis and M. peekelii, both from New Guinea and neighbouring islands. Fe'i bananas may be hybrids between several different wild species. They are generally considered to have originated in New Guinea and then to have been spread eastwards and northwards (as far as the Hawaiian Islands) for use as food.
A few cultivars have been found which appear to be intermediate between Fe'i bananas and the more common Musa section Musa bananas and plantains. Although the part of the stem holding the fruit is upright, the rest of the stem then bends over so that the terminal bud faces sideways or downwards. An example is the cultivar 'Tati'a' from Tahiti. Molecular analysis of bananas with this growth habit from Papua New Guinea has shown evidence of genetic input from M. acuminata and M. balbisiana, the parents of the section Musa cultivars. Rumphius' illustration of his "Musa uranoscopos" shows the same morphology, although this might be artistic license.
Distribution
Fe'i bananas are mainly found from the Maluku Islands (Moluccas) in the west to French Polynesia in the east, particularly the Society Islands and the Marquesas Islands. They have been important both as a staple and as a ceremonial food, although their cultivation and use has sharply declined in recent decades. As the Pacific Islanders spread by canoe throughout the Pacific, they took Fe'i bananas with them; cultivation has been traced back to around 250 BC in the Marquesas and to around 800 AD in Tahiti in the Society Islands. They are believed to have originated in the New Guinea area, where cultivars with seeds occur, as do the wild species from which they are thought to be descended.
Cultivars
Fe'i bananas are not in commercial cultivation. There are lists of cultivars for different islands, but it is not clear whether these are synonyms, with the same cultivar being known by different names in different locations and languages. Further, it is not clear whether local names apply to cultivars (i.e. distinct cultivated varieties) or to broader groups. Thus Ploetz et al. refer to a banana found in eastern Indonesia by the cultivar name 'Pisang Tongkat Langit'. However, pisang tongkat langit can be translated as "sky stick banana" or "heaven cane banana", corresponding to Rumphius' name Musa uranoscopos (heaven-gazing banana). Pisang tongkat langit is treated by other sources as referring to M. × troglodytarum as a whole rather than to a single cultivar. Significant genetic variation has been reported among bananas from Maluku Islands for which this name is used.
The list below is selective; where many names are given in sources, it concentrates on those with the most description available.
Federated States of Micronesia, particularly Pohnpei:
Karat type cultivars
'Karat Kole' or 'Karat Pwono' – round-shaped banana, flesh orange-yellow
'Karat Pako' – longer banana, skin rough, flesh orange-yellow
'Karat Pwekhu' – smaller banana, flesh orange-yellow
Utin Iap type cultivars
'Utin Iap' or 'Uht En Yap' – cone-shaped bunch, small bananas, flesh orange
'Utimwas' – small bananas, flesh orange
Solomon Islands:
'Aibwo' or 'Suria' – ripe skin orange; flesh yellow-orange
'Fagufagu' – ripe skin orange, flesh yellow-orange
'Gatagata' or 'Vudito' – ripe skin orange-brown, flesh yellow-orange
'Toraka Parao' – ripe skin red; flesh yellow-orange
'Warowaro' – ripe skin brown; flesh yellow
Indonesia:
'Pisang Tongkat Langit' or 'Pisang Ranggap'
Papua New Guinea:
'Menei', 'Rimina', 'Utafan', 'Sar', 'Wain'
New Caledonia:
'Daak'
Fiji:
'Soaqa'
Society Islands:
'Fe'i Aiuri', 'Fe'i Tatia'
Hawai'i:
'Borabora', 'Polapola', 'Mai'a Ha'i'
Use
As food
Fe'i bananas are generally eaten as "plantains", i.e. they are usually cooked rather than eaten raw. They have been described as "delicious and nutritious when baked or boiled, especially if the slices are swathed in fresh coconut cream." They have also been described as "unpleasantly astringent" unless cooked, having higher proportions of starch and lower proportions of sugar than other kinds of banana. However, in the Federated States of Micronesia, some cultivars, particularly 'Karat Pwehu', 'Karat Pako' and to a lesser extent 'Utin Iap' (='Uht En Yap'), are commonly eaten raw when fully ripe. Karat bananas have a soft texture and a sweet taste and were a traditional weaning food in the Micronesian island of Pohnpei.
In countries where Fe'i bananas were once a major food item, there has been a shift away from eating traditional foods towards eating imported foods. Bananas with whiter flesh are preferred over traditional varieties with deeply coloured flesh. One issue with Fe'i bananas is that eating them causes the production of yellow coloured urine, thought to be caused by the excretion of excess riboflavin present in the fruit. This effect led people to believe that Fe'i bananas might not be safe to eat, particularly for children. Along with the shift away from traditional foods, there has been a rise in vitamin A deficiency. Fe'i bananas with deeper coloured flesh have been shown to contain high levels of beta-carotene, a precursor of vitamin A. A year-long promotional campaign in Pohnpei in 1999 to encourage the consumption of Karat cultivars had some success in increasing sales.
Levels of beta-carotene vary considerably among Fe'i bananas. In a study of traditional Solomon Islands cultivars, the highest level of beta-carotene found in a Fe'i cultivar was almost 6,000 μg per 100 g of flesh compared to the highest level of 1,300 μg in a non-Fe'i cultivar. However, there was an overlap; some Fe'i cultivars contained less beta-carotene than non-Fe'i cultivars.
Other uses
Fe'i banana plants have many other uses. Like other kinds of banana, the leaves may be used as plates or containers for cooked food. They can also be used as a roofing material, particularly for temporary huts. The fibres of the midrib of the leaves can be used to make ropes, often used to carry bunches of bananas. Other fibrous parts of the leaves can be dried and plaited into mats and similar items. The pseudostems are buoyant, and so can be used to make temporary rafts.
Fe'i bananas have distinctive reddish sap which does not readily fade on exposure to light. It is used as a dye, and has also been used to make ink.
Historical reports
The early European explorers of the Pacific islands produced a few accounts of Fe'i bananas. In 1768, Daniel Solander accompanied Joseph Banks on James Cook's first voyage to the Pacific Ocean aboard the Endeavour. In the account he published later, he noted five kinds of banana or plantain called "Fe'i" by the Tahitians. William Ellis lived in the Society Islands in the 1850s. He refers to the name "Fe'i", saying that Fe'i bananas were the principal food for the inhabitants of some islands. He also noted that Fe'i banana plants have an upright fruit cluster.
Charles Darwin visited Tahiti in the Society Islands in 1835 and gave an account in The Voyage of the Beagle. Although he does not mention the name "Fe'i", he does speak of the "mountain-banana": "On each side of the ravine there were great beds of the mountain-banana, covered with ripe fruit. Many of these plants were from twenty to twenty-five feet high, and from three to four in circumference." Fe'i bananas have been noted to grow best in Tahiti on slopes at the base of cliffs.
Laurence H. MacDaniels published a study of the Fe'i banana in 1947. He reported that Fe'i bananas were the staple carbohydrate food of the Society Islanders, and that more than 95% of the bananas on sale were of the Fe'i type. Although some Fe'i banana plants were found in gardens, most bananas were gathered from the "wild", thought to have been planted in the past and abandoned.
In culture
Fe'i bananas are an important component of ceremonial feasts in the Marquesas and the Society Islands. Karat bananas are reported to be one of the few kinds of banana that can be used in ceremonial presentations in Pohnpei, Micronesia. A Samoan legend is that the mountain banana and the lowland banana fought. The mountain banana – the Fe'i banana – won. Filled with pride at its victory, the mountain banana raised its head high, whereas the defeated lowland banana never raised its head again. (Fe'i bananas have an upright fruiting stem, whereas the fruiting stem droops in other kinds of banana.)
The bright orange-red colours of Fe'i bananas make them attractive to artists. The French Post-Impressionist painter Paul Gauguin visited the Society Islands, including Tahiti, towards the end of the 19th century. Three of his works include what are considered to be Fe'i bananas: Le Repas (The Meal, 1891), La Orana Maria (Hail Mary, 1891) and Paysage de Tahiti (Tahitian Landscape, 1891).
Fe'i bananas were one of the main staples of Liv Coucheron-Torpand and Thor Heyerdahl during their one-and-a-half-year stay on the Marquesan island of Fatu-Hiva in 1937–38. Heyerdahl reported that Fe'i bananas grew all around their cabin on Fatu-Hiva, while on Tahiti they had only seen Fe'i bananas growing "in almost inaccessible cliffs."
Conservation
Fe'i banana cultivars, along with other Pacific crop propagation material, have been saved at the Centre for Pacific Crops and Trees (CePaCT), which catalogs living plants of the Pacific region for conservation. More than 100 samples of Fe'i bananas were collected in French Polynesia, from isolated farms on six different islands. The samples will be conserved in a gene bank in Tahiti, with duplicates kept at CePaCT.
| Biology and health sciences | Tropical and tropical-like fruit | Plants |
27634693 | https://en.wikipedia.org/wiki/Rubia%20tinctorum | Rubia tinctorum | Rubia tinctorum, the rose madder or common madder or dyer's madder, is a herbaceous perennial plant species belonging to the bedstraw and coffee family Rubiaceae.
Description
The common madder can grow up to 1.5 m in height. The evergreen leaves are approximately 5–10 cm long and 2–3 cm broad, produced in star-like whorls of 4–7 around the central stem. It climbs with tiny hooks on the leaves and stems. The flowers are small (3–5 mm across), with five pale yellow petals, in dense racemes, and appear from June to August, followed by small (4–6 mm diameter) red to black berries. The roots can be over a metre long, up to 12 mm thick, and are the source of red dyes known as rose madder and Turkey red. It prefers loamy soils (sand and clay soil) with a constant level of moisture, as well as creek beds. Madder is used as a food plant by the larvae of some Lepidoptera species, including the hummingbird hawk moth.
Uses
It has been used since ancient times as a vegetable red dye for leather, wool, cotton and silk. For dye production, the roots are harvested after two years. The outer red layer gives the common variety of the dye, the inner yellow layer the refined variety. The dye is fixed to the cloth with help of a mordant, most commonly alum. Madder can be fermented for dyeing as well (Fleurs de garance). In France, the remains were used to produce a spirit.
The roots contain the acid ruberthyrin. By drying, fermenting, or a treatment with acids, this is changed to sugar, alizarin and purpurin, which were first isolated by the French chemist Pierre Jean Robiquet in 1826. Purpurin is normally not coloured, but is red when dissolved in alkaline solutions. Mixed with clay and treated with alum and ammonia, it gives a brilliant red colourant (madder lake).
The pulverised roots can be dissolved in sulfuric acid, which leaves a dye called garance (the French name for madder) after drying. Another method of increasing the yield consisted of dissolving the roots in sulfuric acid after they had been used for dyeing. This produces a dye called garanceux. By treating the pulverized roots with alcohol, colorine was produced. It contained 40–50 times the amount of alizarin of the roots.
The chemical name for the pigment is alizarin, of the anthraquinone-group, and was used to make the alizarine ink in 1855 by Professor Leonhardi of Dresden, Germany. In 1869, the German chemists Graebe and Liebermann synthesised artificial alizarin, which was produced industrially from 1871 onwards, effectively ending the cultivation of madder. In the 20th century, madder was only grown in some areas of France.
The plant's roots contain several polyphenolic compounds, such as 1,3-dihydroxyanthraquinone (purpuroxanthin), 1,4-dihydroxyanthraquinone (quinizarin), 1,2,4-trihydroxyanthraquinone (purpurin) and 1,2-dihydroxyanthraquinone (alizarin). This last compound gives its red colour to the textile dye known as rose madder. It was also used as a colourant, especially for paint, referred to as madder lake. The substance was also derived from another species, Rubia cordifolia.
Madder root-derived dyes have been used as textile dyes, lake pigments, food and cosmetic ingredients, and in medicine.
History
Early evidence of dyeing comes from India, where a piece of cotton dyed with madder has been recovered from the archaeological site at Mohenjo-daro (3rd millennium BCE). In Sanskrit, this plant is known by the name Manjishtha. It was used by hermits to dye their clothes saffron. Dioscorides and Pliny the Elder (Naturalis Historia) mention the plant, which the Romans called rubia passiva. In Viking Age levels of York, remains of both woad and madder have been excavated. The oldest European textiles dyed with madder come from the grave of the Merovingian queen Arnegundis in Saint-Denis near Paris (between 565 and 570 AD). In the "Capitulare de villis" of Charlemagne, madder is mentioned as "warentiam". The herbal of Hildegard of Bingen mentions the plant as well. The red coats of the British Redcoats were dyed with madder, with earlier coats, and perhaps officers' fabric, dyed with the better but more expensive cochineal. Madder is mentioned in the Talmud (e.g., tractate Sabbath 66b), where the madder plant is termed "puah" in Aramaic.
Turkey red was a strong, very fast red dye for cotton obtained from madder root via a complicated multistep process involving "sumac and oak galls, calf's blood, sheep's dung, oil, soda, alum, and a solution of tin." Turkey red was developed in India and spread to Turkey. Greek workers familiar with the methods of its production were brought to France in 1747, and Dutch and English spies soon discovered the secret. A sanitized version of Turkey red was being produced in Manchester by 1784, and roller-printed dress cottons with a Turkey red ground were fashionable in England by the 1820s.
Purple dyes were also produced with madder, by combining it with indigo, or using an iron mordant.
Folk medicine
According to Culpeper's herbal, the plant is "an herb of Mars" and "hath an opening quality, and afterwards to bind and strengthen". The root was recommended in the treatment of yellow jaundice, obstruction of the spleen, the melancholy humour, palsy, sciatica, and of bruises. The leaves were advised for women “that have not their courses” and for the treatment of freckles and other discolorations of the skin.
Risks
Madder root may cause birth defects and miscarriages in humans when taken internally.
| Biology and health sciences | Gentianales | Plants |
557672 | https://en.wikipedia.org/wiki/Last%20Glacial%20Period | Last Glacial Period | The Last Glacial Period (LGP), also known as the Last glacial cycle, occurred from the end of the Last Interglacial to the beginning of the Holocene, approximately 115,000 to 11,700 years ago, and thus corresponds to most of the timespan of the Late Pleistocene.
The LGP is part of a larger sequence of glacial and interglacial periods known as the Quaternary glaciation, which started around 2,588,000 years ago and is ongoing. The glaciation and the current Quaternary Period both began with the formation of the Arctic ice cap. The Antarctic ice sheet began to form earlier, at about 34 Mya, in the mid-Cenozoic (Eocene–Oligocene extinction event), and the term Late Cenozoic Ice Age is used to include this early phase with the current glaciation. The previous ice age within the Quaternary, the Penultimate Glacial Period, ended about 128,000 years ago; it was more severe than the Last Glacial Period in some areas, such as Britain, but less severe in others.
The last glacial period saw alternating episodes of glacier advance and retreat with the Last Glacial Maximum occurring between 26,000 and 20,000 years ago. While the general pattern of cooling and glacier advance around the globe was similar, local differences make it difficult to compare the details from continent to continent (see picture of ice core data below for differences). The most recent cooling, the Younger Dryas, began around 12,800 years ago and ended around 11,700 years ago, also marking the end of the LGP and the Pleistocene epoch. It was followed by the Holocene, the current geological epoch.
Origin and definition
The LGP is often colloquially referred to as the "last ice age", though the term ice age is not strictly defined, and on a longer geological perspective, the last few million years could be termed a single ice age given the continual presence of ice sheets near both poles. Glacials are somewhat better defined, as colder phases during which glaciers advance, separated by relatively warm interglacials. The end of the last glacial period, which was about 10,000 years ago, is often called the end of the ice age, although extensive year-round ice persists in Antarctica and Greenland. Over the past few million years, the glacial-interglacial cycles have been "paced" by periodic variations in the Earth's orbit via Milankovitch cycles.
The LGP has been intensively studied in North America, northern Eurasia, the Himalayas, and other formerly glaciated regions around the world. The glaciations that occurred during this glacial period covered many areas, mainly in the Northern Hemisphere and to a lesser extent in the Southern Hemisphere. They have different names, historically developed and depending on their geographic distributions: Fraser (in the Pacific Cordillera of North America), Pinedale (in the Central Rocky Mountains), Wisconsinan or Wisconsin (in central North America), Devensian (in the British Isles), Midlandian (in Ireland), Würm (in the Alps), Mérida (in Venezuela), Weichselian or Vistulian (in Northern Europe and northern Central Europe), Valdai in Russia and Zyryanka in Siberia, Llanquihue in Chile, and Otira in New Zealand. The geochronological Late Pleistocene includes the late glacial (Weichselian) and the immediately preceding penultimate interglacial (Eemian) period.
Overview
Northern Hemisphere
Canada was almost completely covered by ice, as was the northern part of the United States, both blanketed by the huge Laurentide Ice Sheet. Alaska remained mostly ice free due to arid climate conditions. Local glaciations existed in the Rocky Mountains and the Cordilleran ice sheet and as ice fields and ice caps in the Sierra Nevada in northern California. In northern Eurasia, the Scandinavian ice sheet once again reached the northern parts of the British Isles, Germany, Poland, and Russia, extending as far east as the Taymyr Peninsula in western Siberia.
The maximum extent of western Siberian glaciation was reached by about 18,000 to 17,000 BP, later than in Europe (22,000–18,000 BP). Northeastern Siberia was not covered by a continental-scale ice sheet. Instead, large, but restricted, icefield complexes covered mountain ranges within northeast Siberia, including the Kamchatka-Koryak Mountains.
The Arctic Ocean between the huge ice sheets of America and Eurasia was not frozen throughout, but like today, probably was covered only by relatively shallow ice, subject to seasonal changes and riddled with icebergs calving from the surrounding ice sheets. According to the sediment composition retrieved from deep-sea cores, even times of seasonally open waters must have occurred.
Outside the main ice sheets, widespread glaciation occurred on the highest mountains of the Alpide belt. In contrast to the earlier glacial stages, the Würm glaciation was composed of smaller ice caps and mostly confined to valley glaciers, sending glacial lobes into the Alpine foreland. Local ice fields or small ice sheets could be found capping the highest massifs of the Pyrenees, the Carpathian Mountains, the Balkan Mountains, the Caucasus, and the mountains of Turkey and Iran.
In the Himalayas and the Tibetan Plateau, there is evidence that glaciers advanced considerably, particularly between 47,000 and 27,000 BP, but the exact ages, as well as the formation of a single contiguous ice sheet on the Tibetan Plateau, are controversial.
Other areas of the Northern Hemisphere did not bear extensive ice sheets, but local glaciers were widespread at high altitudes. Parts of Taiwan, for example, were repeatedly glaciated between 44,250 and 10,680 BP, as were the Japanese Alps. In both areas, maximum glacier advance occurred between 60,000 and 30,000 BP. To a still lesser extent, glaciers existed in Africa, for example in the High Atlas, the mountains of Morocco, the Mount Atakor massif in southern Algeria, and several mountains in Ethiopia. Just south of the equator, an ice cap of several hundred square kilometers was present on the east African mountains in the Kilimanjaro massif, Mount Kenya, and the Rwenzori Mountains, which still bear relic glaciers today.
Southern Hemisphere
Glaciation of the Southern Hemisphere was less extensive. Ice sheets existed in the Andes (Patagonian Ice Sheet), where six glacier advances between 33,500 and 13,900 BP in the Chilean Andes have been reported. Antarctica was entirely glaciated, much like today, but unlike today the ice sheet left no uncovered area. In mainland Australia only a very small area in the vicinity of Mount Kosciuszko was glaciated, whereas in Tasmania glaciation was more widespread. An ice sheet formed in New Zealand, covering all of the Southern Alps, where at least three glacial advances can be distinguished.
Local ice caps existed in the highest mountains of the island of New Guinea, where temperatures were 5 to 6 °C colder than at present. The main areas of Papua New Guinea where glaciers developed during the LGP were the Central Cordillera, the Owen Stanley Range, and the Saruwaged Range. Mount Giluwe in the Central Cordillera had a "more or less continuous ice cap covering about 188 km2 and extending down to 3200-3500 m". In Western New Guinea, remnants of these glaciers are still preserved atop Puncak Jaya and Ngga Pilimsit.
Small glaciers developed in a few favorable places in Southern Africa during the last glacial period. These small glaciers would have been located in the Lesotho Highlands and parts of the Drakensberg. The development of glaciers was likely aided in part due to shade provided by adjacent cliffs. Various moraines and former glacial niches have been identified in the eastern Lesotho Highlands a few kilometres west of the Great Escarpment, at altitudes greater than 3,000 m on south-facing slopes. Studies suggest that the annual average temperature in the mountains of Southern Africa was about 6 °C colder than at present, in line with temperature drops estimated for Tasmania and southern Patagonia during the same time. This resulted in an environment of relatively arid periglaciation without permafrost, but with deep seasonal freezing on south-facing slopes. Periglaciation in the eastern Drakensberg and Lesotho Highlands produced solifluction deposits and blockfields; including blockstreams and stone garlands.
Deglaciation
Scientists from the Center for Arctic Gas Hydrate, Environment and Climate at the University of Tromsø published a study in June 2017 describing over a hundred ocean sediment craters, some 3,000 m wide and up to 300 m deep, formed by explosive eruptions of methane from destabilized methane hydrates, following ice-sheet retreat during the LGP, around 12,000 years ago. These areas around the Barents Sea still seep methane today. The study hypothesized that existing bulges containing methane reservoirs could eventually meet the same fate.
Named local glaciations
Antarctica
During the last glacial period, Antarctica was blanketed by a massive ice sheet, much as it is today. The ice covered all land areas and extended into the ocean onto the middle and outer continental shelf. Counterintuitively though, according to ice modeling done in 2002, ice over central East Antarctica was generally thinner than it is today.
Europe
Devensian and Midlandian glaciation (Britain and Ireland)
British geologists refer to the LGP as the Devensian. Irish geologists, geographers, and archaeologists refer to the Midlandian glaciation, as its effects in Ireland are largely visible in the Irish Midlands. The name Devensian is derived from the Latin Dēvenses, people living by the Dee (Dēva in Latin), a river on the Welsh border near which deposits from the period are particularly well represented.
The effects of this glaciation can be seen in many geological features of England, Wales, Scotland, and Northern Ireland. Its deposits have been found overlying material from the preceding Ipswichian stage and lying beneath those from the following Holocene, which is the current stage. This is sometimes called the Flandrian interglacial in Britain.
The latter part of the Devensian includes pollen zones I–IV, the Allerød oscillation and Bølling oscillation, and the Oldest Dryas, Older Dryas, and Younger Dryas cold periods.
Weichselian glaciation (Scandinavia and northern Europe)
Alternative names include Weichsel glaciation or Vistulian glaciation (referring to the Polish River Vistula or its German name Weichsel). Evidence suggests that the ice sheets were at their maximum size for only a short period, between 25,000 and 13,000 BP. Eight interstadials have been recognized in the Weichselian, including the Oerel, Glinde, Moershoofd, Hengelo, and Denekamp. Correlation with isotope stages is still in process. During the glacial maximum in Scandinavia, only the western parts of Jutland were ice-free, and a large part of what is today the North Sea was dry land connecting Jutland with Britain (see Doggerland).
The Baltic Sea, with its unique brackish water, is a result of meltwater from the Weichsel glaciation combining with saltwater from the North Sea when the straits between Sweden and Denmark opened. Initially, when the ice began melting about 10,300 BP, seawater filled the isostatically depressed area, a temporary marine incursion that geologists dub the Yoldia Sea. Then, as postglacial isostatic rebound lifted the region about 9500 BP, the deepest basin of the Baltic became a freshwater lake, in palaeological contexts referred to as Ancylus Lake, which is identifiable in the freshwater fauna found in sediment cores.
The lake was filled by glacial runoff, but as worldwide sea level continued rising, saltwater again breached the sill about 8000 BP, forming a marine Littorina Sea, which was followed by another freshwater phase before the present brackish marine system was established. "At its present state of development, the marine life of the Baltic Sea is less than about 4000 years old", Drs. Thulin and Andrushaitis remarked when reviewing these sequences in 2003.
Overlying ice had exerted pressure on the Earth's surface. As a result of melting ice, the land has continued to rise yearly in Scandinavia, mostly in northern Sweden and Finland, where the land is rising at a rate of as much as 8–9 mm per year, or 1 m in 100 years. This is important for archaeologists, since a site that was coastal in the Nordic Stone Age now is inland and can be dated by its relative distance from the present shore.
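The dating principle just described can be made concrete with a toy calculation. The Python sketch below is purely illustrative: the uplift rate comes from the text above, but the site elevation is hypothetical and the constant-rate assumption is a simplification, since real postglacial uplift slows over time:

```python
# Rough age estimate for a raised Nordic Stone Age shoreline site,
# assuming (unrealistically) a constant isostatic uplift rate.
uplift_rate_m_per_year = 8.5e-3   # mid-range of the 8-9 mm/yr cited above
site_elevation_m = 60.0           # hypothetical height above the present shore

age_years = site_elevation_m / uplift_rate_m_per_year
print(f"approximate age: {age_years:,.0f} years")  # ~7,000 years
```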
Würm glaciation (Alps)
The term Würm is derived from a river in the Alpine foreland, roughly marking the maximum glacier advance of this particular glacial period. The Alps were where the first systematic scientific research on ice ages was conducted, by Louis Agassiz at the beginning of the 19th century, and the Würm glaciation of the LGP has been intensively studied there. Pollen analysis, the statistical analysis of microfossilized plant pollens found in geological deposits, chronicled the dramatic changes in the European environment during the Würm glaciation. During the height of the Würm glaciation, most of western and central Europe and Eurasia was open steppe-tundra, while the Alps presented solid ice fields and montane glaciers. Scandinavia and much of Britain were under ice.
During the Würm, the Rhône Glacier covered the whole western Swiss plateau, reaching today's regions of Solothurn and Aargau. In the region of Bern, it merged with the Aar glacier. The Rhine Glacier is currently the subject of the most detailed studies. Glaciers of the Reuss and the Limmat advanced sometimes as far as the Jura. Montane and piedmont glaciers formed the land by grinding away virtually all traces of the older Günz and Mindel glaciation, by depositing base moraines and terminal moraines of different retraction phases and loess deposits, and by the proglacial rivers' shifting and redepositing gravels. Beneath the surface, they had profound and lasting influence on geothermal heat and the patterns of deep groundwater flow.
North America
Pinedale or Fraser glaciation (Rocky Mountains)
The Pinedale (central Rocky Mountains) or Fraser (Cordilleran ice sheet) glaciation was the last of the major glaciations to appear in the Rocky Mountains in the United States. The Pinedale lasted from around 30,000 to 10,000 years ago, and was at its greatest extent between 23,500 and 21,000 years ago. This glaciation was somewhat distinct from the main Wisconsin glaciation, as it was only loosely related to the giant ice sheets and was instead composed of mountain glaciers, merging into the Cordilleran ice sheet.
The Cordilleran ice sheet produced features such as glacial Lake Missoula, which broke free from its ice dam, causing the massive Missoula Floods. USGS geologists estimate that the cycle of flooding and reformation of the lake lasted an average of 55 years and that the floods occurred about 40 times over the 2,000-year period starting 15,000 years ago. Glacial lake outburst floods such as these are not uncommon today in Iceland and other places.
Wisconsin glaciation
The Wisconsin glacial episode was the last major advance of continental glaciers in the North American Laurentide ice sheet. At the height of glaciation, the Bering land bridge potentially permitted migration of mammals, including people, to North America from Siberia.
It radically altered the geography of North America north of the Ohio River. At the height of the Wisconsin episode glaciation, ice covered most of Canada, the Upper Midwest, and New England, as well as parts of Montana and Washington. On Kelleys Island in Lake Erie or in New York's Central Park, the grooves left by these glaciers can be easily observed. In southwestern Saskatchewan and southeastern Alberta, a suture zone between the Laurentide and Cordilleran ice sheets formed the Cypress Hills, which is the northernmost point in North America that remained south of the continental ice sheets.
The Great Lakes are the result of glacial scour and pooling of meltwater at the rim of the receding ice. When the enormous mass of the continental ice sheet retreated, the Great Lakes began gradually moving south due to isostatic rebound of the north shore. Niagara Falls is also a product of the glaciation, as is the course of the Ohio River, which largely supplanted the prior Teays River.
With the assistance of several very broad glacial lakes, it released floods through the gorge of the Upper Mississippi River, which in turn was formed during an earlier glacial period.
In its retreat, the Wisconsin episode glaciation left terminal moraines that form Long Island, Block Island, Cape Cod, Nomans Land, Martha's Vineyard, Nantucket, Sable Island, and the Oak Ridges Moraine in south-central Ontario, Canada. In Wisconsin itself, it left the Kettle Moraine. The drumlins and eskers formed at its melting edge are landmarks of the lower Connecticut River Valley.
Tahoe, Tenaya, and Tioga, Sierra Nevada
In the Sierra Nevada, three stages of glacial maxima, sometimes incorrectly called ice ages, were separated by warmer periods. These glacial maxima are called, from oldest to youngest, Tahoe, Tenaya, and Tioga. The Tahoe reached its maximum extent perhaps about 70,000 years ago. Little is known about the Tenaya. The Tioga was the least severe and last of the Wisconsin episode. It began about 30,000 years ago, reached its greatest advance 21,000 years ago, and ended about 10,000 years ago.
Greenland glaciation
In northwest Greenland, ice coverage attained a very early maximum in the LGP, around 114,000 years ago. After this early maximum, ice coverage was similar to today's until the end of the last glacial period. Towards the end, glaciers advanced once more before retreating to their present extent. According to ice core data, the Greenland climate was dry during the LGP, with precipitation reaching perhaps only 20% of today's value.
South America
Mérida glaciation (Venezuelan Andes)
The name Mérida glaciation is proposed to designate the alpine glaciation that affected the central Venezuelan Andes during the Late Pleistocene. Two main moraine levels have been recognized: one with an elevation of , and another with an elevation of . The snow line during the last glacial advance was lowered approximately below the present snow line, which is . The glaciated area in the Cordillera de Mérida was about ; this included these high areas, from southwest to northeast: Páramo de Tamá, Páramo Batallón, Páramo Los Conejos, Páramo Piedras Blancas, and Teta de Niquitao. Around of the total glaciated area was in the Sierra Nevada de Mérida, and of that amount, the largest concentration, , was in the areas of Pico Bolívar, Pico Humboldt, and Pico Bonpland. Radiocarbon dating indicates that the moraines are older than 10,000 BP, and probably older than 13,000 BP. The lower moraine level probably corresponds to the main Wisconsin glacial advance. The upper level probably represents the last glacial advance (Late Wisconsin).
Llanquihue glaciation (Southern Andes)
The Llanquihue glaciation takes its name from Llanquihue Lake in southern Chile, which is a fan-shaped piedmont glacial lake. On the lake's western shores, large moraine systems occur, of which the innermost belong to the LGP. Llanquihue Lake's varves are a node point in southern Chile's varve geochronology. During the last glacial maximum, the Patagonian ice sheet extended over the Andes from about 35°S to Tierra del Fuego at 55°S. The western part appears to have been very active, with wet basal conditions, while the eastern part was cold-based.
Cryogenic features such as ice wedges, patterned ground, pingos, rock glaciers, palsas, soil cryoturbation, and solifluction deposits developed in unglaciated extra-Andean Patagonia during the last glaciation, but not all these reported features have been verified. The area west of Llanquihue Lake was ice-free during the last glacial maximum, and had sparsely distributed vegetation dominated by Nothofagus. Valdivian temperate rain forest was reduced to scattered remnants on the western side of the Andes.
| Physical sciences | Events | Earth science |
557931 | https://en.wikipedia.org/wiki/Graph%20%28abstract%20data%20type%29 | Graph (abstract data type) | In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics.
A graph data structure consists of a finite (and possibly mutable) set of vertices (also called nodes or points), together with a set of unordered pairs of these vertices for an undirected graph, or a set of ordered pairs for a directed graph. These pairs are known as edges (also called links or lines); in a directed graph they are also sometimes called arrows or arcs. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references.
A graph data structure may also associate to each edge some edge value, such as a symbolic label or a numeric attribute (cost, capacity, length, etc.).
Operations
The basic operations provided by a graph data structure G usually include the following (a code sketch follows the list):
adjacent(G, x, y): tests whether there is an edge from the vertex x to the vertex y;
neighbors(G, x): lists all vertices y such that there is an edge from the vertex x to the vertex y;
add_vertex(G, x): adds the vertex x, if it is not there;
remove_vertex(G, x): removes the vertex x, if it is there;
add_edge(G, x, y, z): adds the edge z from the vertex x to the vertex y, if it is not there;
remove_edge(G, x, y): removes the edge from the vertex x to the vertex y, if it is there;
get_vertex_value(G, x): returns the value associated with the vertex x;
set_vertex_value(G, x, v): sets the value associated with the vertex x to v.
Structures that associate values to the edges usually also provide:
get_edge_value(G, x, y): returns the value associated with the edge (x, y);
set_edge_value(G, x, y, v): sets the value associated with the edge (x, y) to v.
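To make these operations concrete, here is a minimal adjacency-list sketch in Python; the class layout and dictionary-based storage are illustrative choices under the assumptions above, not a standard library API.

```python
# A minimal directed-graph sketch implementing the operations listed above.
class Graph:
    def __init__(self):
        self._adj = {}            # vertex -> {neighbor: edge value}
        self._vertex_values = {}  # vertex -> value

    def adjacent(self, x, y):
        return y in self._adj.get(x, {})

    def neighbors(self, x):
        return list(self._adj.get(x, {}))

    def add_vertex(self, x):
        self._adj.setdefault(x, {})

    def remove_vertex(self, x):
        self._adj.pop(x, None)
        self._vertex_values.pop(x, None)
        for nbrs in self._adj.values():   # drop edges pointing at x
            nbrs.pop(x, None)

    def add_edge(self, x, y, z=None):
        self.add_vertex(x)
        self.add_vertex(y)
        self._adj[x][y] = z               # directed edge x -> y with value z

    def remove_edge(self, x, y):
        self._adj.get(x, {}).pop(y, None)

    def get_vertex_value(self, x):
        return self._vertex_values.get(x)

    def set_vertex_value(self, x, v):
        self._vertex_values[x] = v

    def get_edge_value(self, x, y):
        return self._adj.get(x, {}).get(y)

    def set_edge_value(self, x, y, v):
        self._adj[x][y] = v
```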
Common data structures for graph representation
Adjacency list
Vertices are stored as records or objects, and every vertex stores a list of adjacent vertices. This data structure allows the storage of additional data on the vertices. Additional data can be stored if edges are also stored as objects, in which case each vertex stores its incident edges and each edge stores its incident vertices.
Adjacency matrix
A two-dimensional matrix, in which the rows represent source vertices and columns represent destination vertices. Data on edges and vertices must be stored externally. Only the cost for one edge can be stored between each pair of vertices.
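A minimal sketch of this representation, assuming a small fixed vertex count and using ∞ as the cost of absent edges (both assumptions are for illustration only):

```python
# Hypothetical adjacency-matrix representation of a directed graph.
INF = float("inf")                       # cost used for edges not present

n = 4                                    # number of vertices, fixed up front
matrix = [[INF] * n for _ in range(n)]   # matrix[i][j] = cost of edge i -> j

def add_edge(i, j, cost=1.0):
    matrix[i][j] = cost

def adjacent(i, j):                      # O(1) adjacency test
    return matrix[i][j] != INF

add_edge(0, 2, 5.0)
assert adjacent(0, 2) and not adjacent(2, 0)
```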
Incidence matrix
A two-dimensional matrix, in which the rows represent the vertices and columns represent the edges. The entries indicate the incidence relation between the vertex at a row and edge at a column.
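As a small illustration, an incidence matrix can be built directly from a vertex list and an edge list; the example graph is invented:

```python
# Incidence matrix for an undirected graph: rows are vertices, columns are
# edges; entry 1 marks that the vertex is an endpoint of that edge.
vertices = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c")]

inc = [[1 if v in e else 0 for e in edges] for v in vertices]
# inc == [[1, 0],
#         [1, 1],
#         [0, 1]]
```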
The following table gives the time complexity cost of performing various operations on graphs, for each of these representations, with |V| the number of vertices and |E| the number of edges.

Operation | Adjacency list | Adjacency matrix | Incidence matrix
Store graph | O(|V| + |E|) | O(|V|^2) | O(|V| × |E|)
Add vertex | O(1) | O(|V|^2) | O(|V| × |E|)
Add edge | O(1) | O(1) | O(|V| × |E|)
Remove vertex | O(|V| + |E|) | O(|V|^2) | O(|V| × |E|)
Remove edge | O(|E|) | O(1) | O(|V| × |E|)
Are x and y adjacent? | O(|V|) | O(1) | O(|E|)

In the matrix representations, the entries encode the cost of following an edge. The cost of edges that are not present is assumed to be ∞.
Adjacency lists are generally preferred for the representation of sparse graphs, while an adjacency matrix is preferred if the graph is dense; that is, the number of edges |E| is close to the number of vertices squared, |V|^2, or if one must be able to quickly look up whether there is an edge connecting two vertices.
More efficient representation of adjacency sets
The time complexity of operations in the adjacency list representation can be improved by storing the sets of adjacent vertices in more efficient data structures, such as hash tables or balanced binary search trees (the latter representation requires that vertices be identified by elements of a linearly ordered set, such as integers or character strings). A representation of adjacent vertices via hash tables leads to an amortized average time complexity of O(1) to test adjacency of two given vertices and to remove an edge, and an amortized average time complexity of O(deg(x)) to remove a given vertex x of degree deg(x). The time complexity of the other operations and the asymptotic space requirement do not change.
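A sketch of the hash-table variant for an undirected graph, storing each adjacency set in a Python set so the amortized bounds above follow from average-case O(1) hash operations:

```python
from collections import defaultdict

adj = defaultdict(set)          # vertex -> set of adjacent vertices

def add_edge(x, y):             # undirected: store both directions
    adj[x].add(y)
    adj[y].add(x)

def adjacent(x, y):             # amortized O(1) hash lookup
    return y in adj[x]

def remove_edge(x, y):          # amortized O(1)
    adj[x].discard(y)
    adj[y].discard(x)

def remove_vertex(x):           # amortized O(deg(x))
    for y in adj.pop(x, set()):
        adj[y].discard(x)
```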
Parallel representations
The parallelization of graph problems faces significant challenges: data-driven computations, unstructured problems, poor locality, and a high ratio of data access to computation. The graph representation used for parallel architectures plays a significant role in facing those challenges. Poorly chosen representations may unnecessarily drive up the communication cost of the algorithm, which will decrease its scalability. In the following, shared and distributed memory architectures are considered.
Shared memory
In the case of a shared memory model, the graph representations used for parallel processing are the same as in the sequential case, since parallel read-only access to the graph representation (e.g. an adjacency list) is efficient in shared memory.
Distributed memory
In the distributed memory model, the usual approach is to partition the vertex set of the graph into p sets V0, ..., Vp−1, where p is the number of available processing elements (PEs). The vertex set partitions are then distributed to the PEs with matching index, in addition to the corresponding edges. Every PE has its own subgraph representation, where edges with an endpoint in another partition require special attention. For standard communication interfaces like MPI, the ID of the PE owning the other endpoint has to be identifiable. During computation in a distributed graph algorithm, passing information along these edges implies communication.
Partitioning the graph needs to be done carefully: there is a trade-off between low communication cost and evenly sized partitions. But graph partitioning is an NP-hard problem, so it is not feasible to compute optimal partitions. Instead, the following heuristics are used.
1D partitioning: Every processor gets |V|/p vertices and the corresponding outgoing edges. This can be understood as a row-wise or column-wise decomposition of the adjacency matrix (a small sketch of this decomposition follows the list). For algorithms operating on this representation, this requires an all-to-all communication step as well as message buffer sizes on the order of the total number of edges, as each PE potentially has outgoing edges to every other PE.
2D partitioning: Every processor gets a submatrix of the adjacency matrix. Assume the processors are aligned in a rectangle p = pr × pc, where pr and pc are the number of processing elements in each row and column, respectively. Then each processor gets a submatrix of the adjacency matrix of dimension (|V|/pr) × (|V|/pc). This can be visualized as a checkerboard pattern in a matrix. Therefore, each processing unit can only have outgoing edges to PEs in the same row and column. This bounds the number of communication partners for each PE to pr + pc − 1 out of p possible ones.
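A minimal sketch of 1D partitioning, mapping vertex IDs to PEs in contiguous blocks; the vertex count and PE count are invented for the example:

```python
# Illustrative 1D partitioning: map vertex ids to processing elements (PEs).
n, p = 10, 3                      # hypothetical vertex and PE counts
block = (n + p - 1) // p          # ceil(n / p) vertices per PE

def owner(v):                     # PE that stores vertex v and its out-edges
    return v // block

parts = [[v for v in range(n) if owner(v) == pe] for pe in range(p)]
# parts == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```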
Compressed representations
Graphs with trillions of edges occur in machine learning, social network analysis, and other areas. Compressed graph representations have been developed to reduce I/O and memory requirements. General techniques such as Huffman coding are applicable, but the adjacency list or adjacency matrix can be processed in specific ways to increase efficiency.
Graph traversal
Breadth first search and depth first search
Breadth-first search (BFS) and depth-first search (DFS) are two closely related approaches used for exploring all of the nodes in a given connected component. Both start with an arbitrary node, the "root".
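The following Python sketch shows iterative versions of both traversals over a dictionary-based adjacency structure; the small example graph is invented:

```python
from collections import deque

def bfs(adj, root):                 # visits nearest vertices first
    seen, order, queue = {root}, [], deque([root])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def dfs(adj, root):                 # iterative depth-first variant
    seen, order, stack = set(), [], [root]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            order.append(v)
            stack.extend(w for w in adj.get(v, ()) if w not in seen)
    return order

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
assert bfs(adj, 0) == [0, 1, 2, 3]
assert dfs(adj, 0) == [0, 2, 3, 1]
```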
| Mathematics | Data structures and types | null |
6783609 | https://en.wikipedia.org/wiki/Freshwater%20fish | Freshwater fish | Freshwater fish are fish species that spend some or all of their lives in bodies of fresh water such as rivers, lakes and inland wetlands, where the salinity is less than 1.05%. These environments differ from marine habitats in many ways, especially the difference in levels of osmolarity. To survive in fresh water, fish need a range of physiological adaptations.
41.24% of all known species of fish are found in fresh water. This is primarily due to the rapid speciation that the scattered habitats make possible. When dealing with ponds and lakes, one might use the same basic models of speciation as when studying island biogeography.
Physiology
Freshwater fish differ physiologically from saltwater fish in several respects. Their gills must be able to diffuse dissolved gases while keeping the electrolytes in the body fluids inside. Their scales reduce water diffusion through the skin: freshwater fish that have suffered too much scale loss will die. They also have well-developed kidneys to reclaim salts from body fluids before excretion.
Migratory fish
Many species of fish reproduce in freshwater but spend most of their adult lives in the sea. These are known as anadromous fish, and include, for instance, salmon, trout, sea lamprey, and three-spined stickleback. Some other kinds of fish are, on the contrary, born in salt water but live most or part of their adult lives in fresh water; for instance, the eels. These are known as catadromous fish.
Species migrating between marine and fresh waters need adaptations for both environments; when in salt water they need to keep the bodily salt concentration at a level lower than the surroundings, and vice versa. Many species solve this problem by associating different habitats with different stages of life. Eels, anadromous salmoniform fish, and the sea lamprey all have different salinity tolerances at different stages of their lives.
Classification
Salt tolerance
Freshwater fish are traditionally divided into two divisions proposed by George S. Myers in 1938:
Primary division freshwater fish are strictly limited to freshwater. They cannot survive in saltwater for any significant length of time and cannot disperse across marine water.
Secondary division freshwater fish normally inhabit freshwater, but can survive in brackish and saltwater for some time. They may enter saltwater voluntarily and disperse across marine water.
A third group, peripheral freshwater fish, are fish which normally live in marine water but may enter and survive for some time in freshwater. This concept was introduced by John Treadwell Nichols in 1928.
Temperature
Among fishers in the United States, freshwater fish species are usually classified by the water temperature in which they survive. The water temperature affects the amount of oxygen available as cold water contains more oxygen than warm water.
Coldwater fish species survive in the coldest temperatures, preferring a water temperature of . In North America, air temperatures that result in sufficiently cold water temperatures are found in the northern United States, Canada, and in the southern United States at high elevation. Common coldwater fish include brook trout, rainbow trout, and brown trout.
Coolwater fish species prefer water temperatures between those preferred by the coldwater and warmwater species, around . They are found throughout North America except for the southern portions of the United States. Common coolwater species include muskellunge, northern pike, walleye, and yellow perch.
Warmwater fish species can survive in a wide range of conditions, preferring a water temperature around . Warmwater fish can survive cold winter temperatures in northern climates, but thrive in warmer water. Common warmwater fish include catfish, largemouth bass, bluegill, crappies, and many other species from the family Centrarchidae.
Status
In 2021, a group of conservation organizations estimated that one-third of the world's freshwater fish species were at risk of extinction. A global assessment of freshwater fishes estimates an average decline of 83% in populations between 1970 and 2014. The protection of 30% of Earth's surfaces by 2030 may encompass freshwater habitat and help protect these threatened species.
An increasing trend in local taxonomic, functional, and phylogenetic richness of freshwater fish has been observed in more than half of the world's rivers. This increase in local diversity is primarily explained by anthropogenic species introductions that compensate for or even exceed extinctions in most rivers.
PFAS contamination
A study by EWG, and an interactive map built from its results, show that freshwater fish in the U.S. ubiquitously contain high levels of harmful PFAS, with a single serving typically increasing the blood PFOS level significantly.
North America
About four in ten North American freshwater fish are endangered, according to a pan-North American study, the main cause being human pollution. The number of fish species and subspecies to become endangered has risen from 40 to 61 since 1989. For example, the bigmouth buffalo is now the oldest age-validated freshwater fish in the world, and its status urgently needs reevaluation in parts of its endemic range.
China
About of the total freshwater fisheries in China are in the Yangtze Basin. Many Yangtze fish species have declined drastically and 65 were recognized as threatened in the 2009 Chinese red list. The Chinese paddlefish, once common to the Yangtze River, is one of a number of extinctions to have taken place due to the degradation of the Yangtze, alongside that of the wild Yangtze sturgeon.
Threats
Habitat destruction
Intentional anthropogenic reconstruction and rerouting of waterways alters stream flow, water temperature, and more, impairing normal habitat functionality. Dams not only interrupt linear water flow and cause major geological channel shifts, but also limit the amount of water available to fishes in lakes, streams, and rivers, and have the potential to change the trophic structure because of these alterations of the habitat and the limitations to movement and connectivity.
Unnatural water flow below dams causes immense habitat degradation, reducing viable options for aquatic organisms. Upstream migration is hindered by the dam structure and can cause population declines as fishes do not have access to normal feeding and/or spawning grounds. Dams tend to affect upstream species richness, that is, the number of fish species in the ecological community. Additionally, dams can cause the isolation of fish populations, and the lack of connectivity creates possible problems for inbreeding and low genetic diversity. The loss of connectivity impacts the structure of community assemblies and increases the fragmentation of habitats, which can compound existing problems for vulnerable species.
Temperature alterations are another unintended consequence of dam and land use projects. Temperature is a vital part of aquatic ecosystem stability, so changes to stream and river water temperature can have large impacts on biotic communities. Many aquatic larvae use thermal cues to regulate their life cycles, most notably insects. Insects are a large part of most fish diets, so this can pose a great dietary problem. Temperature can also change fish behavior and distribution by increasing their metabolic rates and thus their drive to spawn and feed.
Linear systems are more easily fragmented and connectivity in aquatic ecosystems is vital. Freshwater fishes are particularly vulnerable to habitat destruction because they reside in small bodies of water which are often very close to human activity and thus easily polluted by trash, chemicals, waste, and other agents which are harmful to freshwater habitats.
Land use changes cause major shifts in aquatic ecosystems. Deforestation can change the structure and sedimentary composition of streams, which impacts the habitat functionality for many fish species and can reduce species richness, evenness, and diversity. Agriculture, mining, and basic infrastructural building can degrade freshwater habitats. Fertilizer runoffs can create excess nitrogen and phosphorus which feed massive algae blooms that block sunlight, limit water oxygenation, and make the habitat functionally unsustainable for aquatic species. Chemicals from mining and factories make their way into the soil and go into streams via runoff. More runoff makes its way into streams since paved roads, cement, and other basic infrastructure do not absorb materials, and all the harmful pollutants go directly into rivers and streams. Fish are very sensitive to changes in water pH, salinity, hardness, and temperature which can all be affected by runoff pollutants and indirect changes from land use. Freshwater fish face extinction due to habitat loss, overfishing, and "forever chemicals." Conservation efforts, sustainable practices, and awareness are crucial in maintaining fish populations and species diversity.
Exotic species
An exotic (or non-native) species is defined as a species that does not naturally occur in a certain area or ecosystem. This includes eggs and other biological material associated with the species. Non-native species are considered invasive if they cause ecological or economic injury.
The introduction of exotic fish species into ecosystems is a threat to many endemic populations. The native species struggle to survive alongside exotic species which decimate prey populations or outcompete indigenous fishes. High densities of exotic fish are negatively correlated with native species richness. Because the exotic species is suddenly introduced to a community, it does not have any established predators or prey. The exotic species then have a survival advantage over endemic organisms.
One such example is the destruction of the endemic cichlid population in Lake Victoria via the introduction of the predatory Nile perch (Lates niloticus). Although the exact time is unknown, in the 1950s the Ugandan Game and Fisheries Department covertly introduced the Nile perch into Lake Victoria, possibly to improve sport fishing and boost the fishery. In the 1980s, the Nile perch population saw a large increase which coincided with a great increase in the value of the fishery. This surge in Nile perch numbers restructured the lake's ecology. The endemic cichlid population, known to have around 500 species, was cut almost in half. By the 1990s, only three species of sport fish were left to support the once multispecies fishery, two of which were invasive. More recent research has suggested that remaining cichlids are recovering due to the recent surge in Nile perch commercial fishing, and the cichlids that are left have the greatest phenotypic plasticity and are able to react to environmental changes quickly.
The introduction of the rainbow trout (Oncorhynchus mykiss) in the late 19th century resulted in the extinction of the yellowfin cutthroat trout (Oncorhynchus clarkii macdonaldi) found only in the Twin Lakes of Colorado, USA. The yellowfin cutthroat trout was discovered in 1889 and was recognized as a subspecies of the cutthroat trout (Oncorhynchus clarkii). The rainbow trout was introduced to Colorado in the 1880s. By 1903, the yellowfin cutthroat trout stopped being reported. It is now presumed extinct. The rainbow trout is invasive worldwide, and there are multiple efforts to remove them from their non-native ecosystems.
Both species are among the "100 of the World’s Worst Invasive Alien Species," as determined by the IUCN Invasive Species Specialist Group based on their effect on anthropogenic activities, environmental biodiversity and their ability to act as a case study for important ecological issues.
Hybridization
Hybridization involves the mating of two genetically different species (interspecific hybridization). It is dangerous for native species to hybridize because hybrid phenotypes may have better fitness and outcompete the two parent species and/or other fishes in the ecosystem. This could irreversibly compromise the genetic identity of one or both of the parent species and even drive them to extinction if their range is limited.
The rainbow trout discussed above hybridized with the native greenback cutthroat trout (Oncorhynchus clarkii stomias), causing their local extinction in the Twin Lakes area of Colorado as their hybrid "cutbows" became more prevalent. The rainbow trout has been reported to hybridize with at least two other salmonid species. Additionally, the cichlids in Lake Victoria evolved over 700 unique species in only 150,000 years and are theorized to have done so via ancient hybridization events which led to speciation.
| Biology and health sciences | Fishes by habitat | Animals |
25777451 | https://en.wikipedia.org/wiki/Chloroplast%20DNA | Chloroplast DNA | Chloroplast DNA (cpDNA), also known as plastid DNA (ptDNA) is the DNA located in chloroplasts, which are photosynthetic organelles located within the cells of some eukaryotic organisms. Chloroplasts, like other types of plastid, contain a genome separate from that in the cell nucleus. The existence of chloroplast DNA was identified biochemically in 1959, and confirmed by electron microscopy in 1962. The discoveries that the chloroplast contains ribosomes and performs protein synthesis revealed that the chloroplast is genetically semi-autonomous. The first complete chloroplast genome sequences were published in 1986, Nicotiana tabacum (tobacco) by Sugiura and colleagues and Marchantia polymorpha (liverwort) by Ozeki et al. Since then, tens of thousands of chloroplast genomes from various species have been sequenced.
Molecular structure
Chloroplast DNAs are circular, and are typically 120,000–170,000 base pairs long. They can have a contour length of around 30–60 micrometers, and have a mass of about 80–130 million daltons.
Most chloroplasts have their entire chloroplast genome combined into a single large ring, though those of dinophyte algae are a notable exception—their genome is broken up into about forty small plasmids (minicircles), each 2,000–10,000 base pairs long. Each minicircle contains one to three genes, but blank plasmids, with no coding DNA, have also been found.
Chloroplast DNA has long been thought to have a circular structure, but some evidence suggests that chloroplast DNA more commonly takes a linear shape. Over 95% of the chloroplast DNA in corn chloroplasts has been observed to be in branched linear form rather than individual circles.
Inverted repeats
Many chloroplast DNAs contain two inverted repeats, which separate a long single copy section (LSC) from a short single copy section (SSC).
The inverted repeats vary widely in length, ranging from 4,000 to 25,000 base pairs each. Inverted repeats in plants tend to be at the upper end of this range, each being 20,000–25,000 base pairs long.
The inverted repeat regions usually contain three ribosomal RNA and two tRNA genes, but they can be expanded or reduced to contain as few as four or as many as over 150 genes.
While a given pair of inverted repeats are rarely completely identical, they are always very similar to each other, apparently resulting from concerted evolution.
The inverted repeat regions are highly conserved among land plants and accumulate few mutations. Similar inverted repeats exist in the genomes of cyanobacteria and the other two chloroplast lineages (glaucophyta and rhodophyceæ), suggesting that they predate the chloroplast, though some chloroplast DNAs, like those of peas and a few red algae, have since lost the inverted repeats. Others, like the red alga Porphyra, have flipped one of their inverted repeats (making them direct repeats). It is possible that the inverted repeats help stabilize the rest of the chloroplast genome, as chloroplast DNAs which have lost some of the inverted repeat segments tend to get rearranged more.
Nucleoids
Each chloroplast contains around 100 copies of its DNA in young leaves, declining to 15–20 copies in older leaves. They are usually packed into nucleoids which can contain several identical chloroplast DNA rings. Many nucleoids can be found in each chloroplast.
Though chloroplast DNA is not associated with true histones, in red algae, a histone-like chloroplast protein (HC) coded by the chloroplast DNA that tightly packs each chloroplast DNA ring into a nucleoid has been found.
In primitive red algae, the chloroplast DNA nucleoids are clustered in the center of a chloroplast, while in green plants and green algae, the nucleoids are dispersed throughout the stroma.
Gene content and plastid gene expression
More than 5000 chloroplast genomes have been sequenced and are accessible via the NCBI organelle genome database. The first chloroplast genomes were sequenced in 1986, from tobacco (Nicotiana tabacum) and liverwort (Marchantia polymorpha). Comparison of the gene sequences of the cyanobacteria Synechocystis to those of the chloroplast genome of Arabidopsis provided confirmation of the endosymbiotic origin of the chloroplast. It also demonstrated the significant extent of gene transfer from the cyanobacterial ancestor to the nuclear genome.
In most plant species, the chloroplast genome encodes approximately 120 genes. The genes primarily encode core components of the photosynthetic machinery and factors involved in their expression and assembly. Across species of land plants, the set of genes encoded by the chloroplast genome is fairly conserved. This includes four ribosomal RNAs, approximately 30 tRNAs, 21 ribosomal proteins, and 4 subunits of the plastid-encoded RNA polymerase complex that are involved in plastid gene expression. The large Rubisco subunit and 28 photosynthetic thylakoid proteins are encoded within the chloroplast genome.
Chloroplast genome reduction and gene transfer
Over time, many parts of the chloroplast genome were transferred to the nuclear genome of the host, a process called endosymbiotic gene transfer.
As a result, the chloroplast genome is heavily reduced compared to that of free-living cyanobacteria. Chloroplasts may contain 60–100 genes, whereas cyanobacteria often have more than 1,500 genes in their genome. The parasitic plant Pilostyles has even lost its plastid tRNA genes. Conversely, there are only a few known instances where genes have been transferred to the chloroplast from various donors, including bacteria.
Endosymbiotic gene transfer is how we know about the lost chloroplasts in many chromalveolate lineages. Even if a chloroplast is eventually lost, the genes it donated to the former host's nucleus persist, providing evidence for the lost chloroplast's existence. For example, while diatoms (a heterokontophyte) now have a red algal derived chloroplast, the presence of many green algal genes in the diatom nucleus provide evidence that the diatom ancestor (probably the ancestor of all chromalveolates too) had a green algal derived chloroplast at some point, which was subsequently replaced by the red chloroplast.
In land plants, some 11–14% of the DNA in their nuclei can be traced back to the chloroplast, up to 18% in Arabidopsis, corresponding to about 4,500 protein-coding genes. There have been a few recent transfers of genes from the chloroplast DNA to the nuclear genome in land plants.
Proteins encoded by the chloroplast
Of the approximately three-thousand proteins found in chloroplasts, some 95% of them are encoded by nuclear genes. Many of the chloroplast's protein complexes consist of subunits from both the chloroplast genome and the host's nuclear genome. As a result, protein synthesis must be coordinated between the chloroplast and the nucleus. The chloroplast is mostly under nuclear control, though chloroplasts can also give out signals regulating gene expression in the nucleus, called retrograde signaling.
Protein synthesis
Protein synthesis within chloroplasts relies on an RNA polymerase coded by the chloroplast's own genome, which is related to RNA polymerases found in bacteria. Chloroplasts also contain a mysterious second RNA polymerase that is encoded by the plant's nuclear genome. The two RNA polymerases may recognize and bind to different kinds of promoters within the chloroplast genome. The ribosomes in chloroplasts are similar to bacterial ribosomes.
RNA editing in plastids
RNA editing is the insertion, deletion, and substitution of nucleotides in an mRNA transcript prior to translation into protein. The highly oxidative environment inside chloroplasts increases the rate of mutation, so post-transcriptional repairs are needed to conserve functional sequences. The chloroplast editosome substitutes C→U and U→C at very specific locations on the transcript. This can change the codon for an amino acid or restore a non-functional pseudogene by adding an AUG start codon or removing a premature UAA stop codon.
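As a toy illustration of this substitution editing, the snippet below applies C↔U edits at fixed positions of an invented transcript; real editing sites are determined by the editosome and its binding proteins, not by a lookup table like this:

```python
# Toy plastid C->U / U->C editing; positions and sequence are invented.
def edit_transcript(mrna, sites):
    """Apply single-base edits: sites maps 0-based position -> new base."""
    bases = list(mrna)
    for pos, new in sites.items():
        assert (bases[pos], new) in {("C", "U"), ("U", "C")}, "C<->U only"
        bases[pos] = new
    return "".join(bases)

# Editing can create an AUG start codon from ACG, as described above.
print(edit_transcript("ACGGAUCCU", {1: "U"}))  # -> "AUGGAUCCU"
```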
The editosome recognizes and binds to a cis sequence upstream of the editing site. The distance between the binding site and the editing site varies by gene and by the proteins involved in the editosome. Hundreds of different PPR proteins from the nuclear genome are involved in the RNA editing process. These proteins consist of repeated 35-amino-acid (35-mer) motifs, whose sequence determines the cis binding site for the edited transcript.
Basal land plants such as liverworts, mosses and ferns have hundreds of different editing sites while flowering plants typically have between thirty and forty. Parasitic plants such as Epifagus virginiana show a loss of RNA editing resulting in a loss of function for photosynthesis genes.
DNA replication
Leading model of cpDNA replication
The mechanism for chloroplast DNA (cpDNA) replication has not been conclusively determined, but two main models have been proposed. Scientists have attempted to observe chloroplast replication via electron microscopy since the 1970s. The results of the microscopy experiments led to the idea that chloroplast DNA replicates using a double displacement loop (D-loop). As the D-loop moves through the circular DNA, it adopts a theta intermediary form, also known as a Cairns replication intermediate, and completes replication with a rolling circle mechanism. Replication starts at specific points of origin. Multiple replication forks open up, allowing replication machinery to replicate the DNA. As replication continues, the forks grow and eventually converge. The new cpDNA structures separate, creating daughter cpDNA chromosomes.
In addition to the early microscopy experiments, this model is also supported by the amounts of deamination seen in cpDNA. Deamination occurs when an amino group is lost and is a mutation that often results in base changes. When adenine is deaminated, it becomes hypoxanthine (H). Hypoxanthine can bind to cytosine, and when the HC base pair is replicated, it becomes a GC (thus, an A → G base change).
In cpDNA, there are several A → G deamination gradients. DNA becomes susceptible to deamination events when it is single stranded. When replication forks form, the strand not being copied is single stranded, and thus at risk for A → G deamination. Therefore, gradients in deamination indicate that replication forks were most likely present and the direction that they initially opened (the highest gradient is most likely nearest the start site because it was single stranded for the longest amount of time). This mechanism is still the leading theory today; however, a second theory suggests that most cpDNA is actually linear and replicates through homologous recombination. It further contends that only a minority of the genetic material is kept in circular chromosomes while the rest is in branched, linear, or other complex structures.
Alternative model of replication
One of the main competing models for cpDNA asserts that most cpDNA is linear and participates in homologous recombination and replication structures similar to bacteriophage T4. It has been established that some plants have linear cpDNA, such as maize, and that more still contain complex structures that scientists do not yet understand; however, the predominant view today is that most cpDNA is circular. When the original experiments on cpDNA were performed, scientists did notice linear structures; however, they attributed these linear forms to broken circles. If the branched and complex structures seen in cpDNA experiments are real and not artifacts of concatenated circular DNA or broken circles, then a D-loop mechanism of replication is insufficient to explain how those structures would replicate. At the same time, homologous recombination does not explain the multiple A → G gradients seen in plastomes. This shortcoming is one of the biggest for the linear structure theory.
Protein targeting and import
The movement of so many chloroplast genes to the nucleus means that many chloroplast proteins that were supposed to be translated in the chloroplast are now synthesized in the cytoplasm. This means that these proteins must be directed back to the chloroplast, and imported through at least two chloroplast membranes.
Curiously, around half of the protein products of transferred genes are not even targeted back to the chloroplast. Many became exaptations, taking on new functions like participating in cell division, protein routing, and even disease resistance. A few chloroplast genes found new homes in the mitochondrial genome—most became nonfunctional pseudogenes, though a few tRNA genes still work in the mitochondrion. Some transferred chloroplast DNA protein products get directed to the secretory pathway; many secondary plastids are bounded by an outermost membrane derived from the host's cell membrane, and are therefore topologically outside of the cell, so reaching the chloroplast from the cytosol requires crossing the cell membrane, just as when heading for the extracellular space. In those cases, chloroplast-targeted proteins do initially travel along the secretory pathway.
Because the cell acquiring a chloroplast already had mitochondria (and peroxisomes, and a cell membrane for secretion), the new chloroplast host had to develop a unique protein targeting system to avoid having chloroplast proteins being sent to the wrong organelle.
Cytoplasmic translation and N-terminal transit sequences
Polypeptides, the precursors of proteins, are chains of amino acids. The two ends of a polypeptide are called the N-terminus, or amino end, and the C-terminus, or carboxyl end. For many (but not all) chloroplast proteins encoded by nuclear genes, cleavable transit peptides are added to the N-termini of the polypeptides, which are used to help direct the polypeptide to the chloroplast for import (N-terminal transit peptides are also used to direct polypeptides to plant mitochondria).
N-terminal transit sequences are also called presequences because they are located at the "front" end of a polypeptide—ribosomes synthesize polypeptides from the N-terminus to the C-terminus.
Chloroplast transit peptides exhibit huge variation in length and amino acid sequence. They can be from 20 to 150 amino acids long—an unusually long length, suggesting that transit peptides are actually collections of domains with different functions. Transit peptides tend to be positively charged, rich in hydroxylated amino acids such as serine, threonine, and proline, and poor in acidic amino acids like aspartic acid and glutamic acid. In an aqueous solution, the transit sequence forms a random coil.
Not all chloroplast proteins include an N-terminal cleavable transit peptide, though. Some include the transit sequence within the functional part of the protein itself. A few have their transit sequence appended to their C-terminus instead. Most of the polypeptides that lack N-terminal targeting sequences are the ones that are sent to the outer chloroplast membrane, plus at least one sent to the inner chloroplast membrane.
Phosphorylation, chaperones, and transport
After a chloroplast polypeptide is synthesized on a ribosome in the cytosol, ATP energy can be used to phosphorylate, or add a phosphate group to many (but not all) of them in their transit sequences. Serine and threonine (both very common in chloroplast transit sequences—making up 20–30% of the sequence) are often the amino acids that accept the phosphate group. The enzyme that carries out the phosphorylation is specific for chloroplast polypeptides, and ignores ones meant for mitochondria or peroxisomes.
Phosphorylation changes the polypeptide's shape, making it easier for 14-3-3 proteins to attach to the polypeptide. In plants, 14-3-3 proteins only bind to chloroplast preproteins. It is also bound by the heat shock protein Hsp70 that keeps the polypeptide from folding prematurely. This is important because it prevents chloroplast proteins from assuming their active form and carrying out their chloroplast functions in the wrong place—the cytosol. At the same time, they have to keep just enough shape so that they can be recognized and imported into the chloroplast.
The heat shock protein and the 14-3-3 proteins together form a cytosolic guidance complex that makes it easier for the chloroplast polypeptide to get imported into the chloroplast.
Alternatively, if a chloroplast preprotein's transit peptide is not phosphorylated, a chloroplast preprotein can still attach to a heat shock protein or Toc159. These complexes can bind to the TOC complex on the outer chloroplast membrane using GTP energy.
The translocon on the outer chloroplast membrane (TOC)
The TOC complex, or translocon on the outer chloroplast membrane, is a collection of proteins that imports preproteins across the outer chloroplast envelope. Five subunits of the TOC complex have been identified—two GTP-binding proteins Toc34 and Toc159, the protein import tunnel Toc75, plus the proteins Toc64 and Toc12.
The first three proteins form a core complex that consists of one Toc159, four to five Toc34s, and four Toc75s that form four holes in a disk 13 nanometers across. The whole core complex weighs about 500 kilodaltons. The other two proteins, Toc64 and Toc12, are associated with the core complex but are not part of it.
Toc34 and 33
Toc34 is an integral protein in the outer chloroplast membrane that is anchored into it by its hydrophobic C-terminal tail. Most of the protein, however, including its large guanosine triphosphate (GTP)-binding domain, projects out into the cytosol.
Toc34's job is to catch some chloroplast preproteins in the cytosol and hand them off to the rest of the TOC complex. When GTP, an energy molecule similar to ATP, attaches to Toc34, the protein becomes much more able to bind to many chloroplast preproteins in the cytosol. The chloroplast preprotein's presence causes Toc34 to break GTP into guanosine diphosphate (GDP) and inorganic phosphate. This loss of GTP makes the Toc34 protein release the chloroplast preprotein, handing it off to the next TOC protein. Toc34 then releases the depleted GDP molecule, probably with the help of an unknown GDP exchange factor. A domain of Toc159 might be the exchange factor that carries out the GDP removal. The Toc34 protein can then take up another molecule of GTP and begin the cycle again.
Toc34 can be turned off through phosphorylation. A protein kinase drifting around on the outer chloroplast membrane can use ATP to add a phosphate group to the Toc34 protein, preventing it from being able to receive another GTP molecule, inhibiting the protein's activity. This might provide a way to regulate protein import into chloroplasts.
Arabidopsis thaliana has two homologous proteins, AtToc33 and AtToc34 (The At stands for Arabidopsis thaliana), which are each about 60% identical in amino acid sequence to Toc34 in peas (called psToc34). AtToc33 is the most common in Arabidopsis, and it is the functional analogue of Toc34 because it can be turned off by phosphorylation. AtToc34 on the other hand cannot be phosphorylated.
Toc159
Toc159 is another GTP-binding TOC subunit, like Toc34. Toc159 has three domains. At the N-terminal end is the A-domain, which is rich in acidic amino acids and takes up about half the protein length. The A-domain is often cleaved off, leaving an 86 kilodalton fragment called Toc86. In the middle is its GTP-binding domain, which is very similar to the homologous GTP-binding domain in Toc34. At the C-terminal end is the hydrophobic M-domain, which anchors the protein to the outer chloroplast membrane.
Toc159 probably works a lot like Toc34, recognizing proteins in the cytosol using GTP. It can be regulated through phosphorylation, but by a different protein kinase than the one that phosphorylates Toc34. Its M-domain forms part of the tunnel that chloroplast preproteins travel through, and seems to provide the force that pushes preproteins through, using the energy from GTP.
Toc159 is not always found as part of the TOC complex—it has also been found dissolved in the cytosol. This suggests that it might act as a shuttle that finds chloroplast preproteins in the cytosol and carries them back to the TOC complex. There isn't a lot of direct evidence for this behavior though.
A family of Toc159 proteins, comprising Toc159, Toc132, Toc120, and Toc90, has been found in Arabidopsis thaliana. They vary in the length of their A-domains, which is completely absent in Toc90. Toc132, Toc120, and Toc90 seem to have specialized functions in importing substrates such as nonphotosynthetic preproteins, and cannot replace Toc159.
Toc75
Toc75 is the most abundant protein on the outer chloroplast envelope. It is a transmembrane tube that forms most of the TOC pore itself. Toc75 is a β-barrel channel lined by 16 β-pleated sheets. The hole it forms is about 2.5 nanometers wide at the ends, and shrinks to about 1.4–1.6 nanometers in diameter at its narrowest point—wide enough to allow partially folded chloroplast preproteins to pass through.
Toc75 can also bind to chloroplast preproteins, but is a lot worse at this than Toc34 or Toc159.
Arabidopsis thaliana has multiple isoforms of Toc75 that are named by the chromosomal positions of the genes that code for them. AtToc75 III is the most abundant of these.
The translocon on the inner chloroplast membrane (TIC)
The TIC complex, or translocon on the inner chloroplast membrane, is another protein complex that imports proteins across the inner chloroplast envelope. Chloroplast polypeptide chains probably often travel through the two complexes at the same time, but the TIC complex can also retrieve preproteins lost in the intermembrane space.
Like the TOC translocon, the TIC translocon has a large core complex surrounded by some loosely associated peripheral proteins like Tic110, Tic40, and Tic21.
The core complex weighs about one million daltons and contains Tic214, Tic100, Tic56, and Tic20 I, possibly three of each.
Tic20
Tic20 is an integral protein thought to have four transmembrane α-helices. It is found in the 1 million dalton TIC complex. Because it is similar to bacterial amino acid transporters and the mitochondrial import protein Tim17 (translocase on the inner mitochondrial membrane), it has been proposed to be part of the TIC import channel. There is no in vitro evidence for this though. In Arabidopsis thaliana, it is known that for about every five Toc75 proteins in the outer chloroplast membrane, there are two Tic20 I proteins (the main form of Tic20 in Arabidopsis) in the inner chloroplast membrane.
Unlike Tic214, Tic100, or Tic56, Tic20 has homologous relatives in cyanobacteria and nearly all chloroplast lineages, suggesting it evolved before the first chloroplast endosymbiosis. Tic214, Tic100, and Tic56 are unique to chloroplastidan chloroplasts, suggesting that they evolved later.
Tic214
Tic214 is another TIC core complex protein, named because it weighs just under 214 kilodaltons. It is 1786 amino acids long and is thought to have six transmembrane domains on its N-terminal end. Tic214 is notable for being coded for by chloroplast DNA, more specifically the first open reading frame ycf1. Tic214 and Tic20 together probably make up the part of the one million dalton TIC complex that spans the entire membrane. Tic20 is buried inside the complex while Tic214 is exposed on both sides of the inner chloroplast membrane.
Tic100
Tic100 is a nuclear encoded protein that is 871 amino acids long. The 871 amino acids collectively weigh slightly less than 100 thousand daltons, and since the mature protein probably does not lose any amino acids when it is imported into the chloroplast (it has no cleavable transit peptide), it was named Tic100. Tic100 is found at the edges of the 1 million dalton complex on the side that faces the chloroplast intermembrane space.
Tic56
Tic56 is also a nuclear encoded protein. The preprotein its gene encodes is 527 amino acids long, weighing close to 62 thousand daltons; the mature form probably undergoes processing that trims it down to something that weighs 56 thousand daltons when it gets imported into the chloroplast. Tic56 is largely embedded inside the 1 million dalton complex.
Tic56 and Tic100 are highly conserved among land plants, but they don't resemble any protein whose function is known. Neither has any transmembrane domains.
| Biology and health sciences | Plant cells | Biology |
911833 | https://en.wikipedia.org/wiki/Graphene | Graphene | Graphene () is a carbon allotrope consisting of a single layer of atoms arranged in a honeycomb planar nanostructure. The name "graphene" is derived from "graphite" and the suffix -ene, indicating the presence of double bonds within the carbon structure.
Graphene is known for its exceptionally high tensile strength, electrical conductivity, transparency, and being the thinnest two-dimensional material in the world. Despite the nearly transparent nature of a single graphene sheet, graphite (formed from stacked layers of graphene) appears black because it absorbs all visible light wavelengths. On a microscopic scale, graphene is the strongest material ever measured.
The existence of graphene was first theorized in 1947 by Philip R. Wallace during his research on graphite's electronic properties. In 2004, the material was isolated and characterized by Andre Geim and Konstantin Novoselov at the University of Manchester using a piece of graphite and adhesive tape. In 2010, Geim and Novoselov were awarded the Nobel Prize in Physics for their "groundbreaking experiments regarding the two-dimensional material graphene". While small amounts of graphene are easy to produce using the method by which it was originally isolated, attempts to scale and automate the manufacturing process for mass production have had limited success due to cost-effectiveness and quality control concerns. The global graphene market was $9 million in 2012, with most of the demand from research and development in semiconductors, electronics, electric batteries, and composites.
The IUPAC (International Union of Pure and Applied Chemistry) advises using the term "graphite" for the three-dimensional material and reserving "graphene" for discussions about the properties or reactions of single-atom layers. A narrower definition, of "isolated or free-standing graphene", requires that the layer be sufficiently isolated from its environment, but would include layers suspended or transferred to silicon dioxide or silicon carbide.
History
Structure of graphite and its intercalation compounds
In 1859, Benjamin Brodie noted the highly lamellar structure of thermally reduced graphite oxide. Pioneers in X-ray crystallography attempted to determine the structure of graphite. The lack of large single crystal graphite specimens contributed to the independent development of X-ray powder diffraction by Peter Debye and Paul Scherrer in 1915, and Albert Hull in 1916. However, neither of their proposed structures was correct. In 1918, Volkmar Kohlschütter and P. Haenni described the properties of graphite oxide paper. The structure of graphite was successfully determined from single-crystal X-ray diffraction by J. D. Bernal in 1924, although subsequent research has made small modifications to the unit cell parameters.
The theory of graphene was first explored by P. R. Wallace in 1947 as a starting point for understanding the electronic properties of 3D graphite. The emergent massless Dirac equation was separately pointed out in 1984 by Gordon Walter Semenoff, and by David P. DiVincenzo and Eugene J. Mele. Semenoff emphasized the occurrence in a magnetic field of an electronic Landau level precisely at the Dirac point. This level is responsible for the anomalous integer quantum Hall effect.
Observations of thin graphite layers and related structures
Transmission electron microscopy (TEM) images of thin graphite samples consisting of a few graphene layers were published by G. Ruess and F. Vogt in 1948. Eventually, single layers were also observed directly. Single layers of graphite were also observed by transmission electron microscopy within bulk materials, particularly inside soot obtained by chemical exfoliation.
From 1961 to 1962, Hanns-Peter Boehm published a study of extremely thin flakes of graphite. The study measured flakes as thin as ~0.4 nm, around 3 atomic layers of amorphous carbon. This was the best possible resolution for TEMs in the 1960s. However, it is impossible to distinguish between suspended monolayer and multilayer graphene by their TEM contrasts, and the only known method is to analyze the relative intensities of various diffraction spots. The first reliable TEM observations of monolayers are likely given in references 24 and 26 of Geim and Novoselov's 2007 review.
In 1975, van Bommel et al. epitaxially grew a single layer of graphite on top of silicon carbide. Others grew single layers of carbon atoms on other materials. This "epitaxial graphene" consists of a single-atom-thick hexagonal lattice of sp2-bonded carbon atoms, as in free-standing graphene. However, there is significant charge transfer between the two materials and, in some cases, hybridization between the d-orbitals of the substrate atoms and π orbitals of graphene, which significantly alter the electronic structure compared to that of free-standing graphene.
Boehm et al. coined the term "graphene" for the hypothetical single-layer structure in 1986. The term was used again in 1987 to describe single sheets of graphite as a constituent of graphite intercalation compounds, which can be seen as crystalline salts of the intercalant and graphene. It was also used in the descriptions of carbon nanotubes by R. Saito and Mildred and Gene Dresselhaus in 1992, and in the description of polycyclic aromatic hydrocarbons in 2000 by S. Wang and others.
Efforts to make thin films of graphite by mechanical exfoliation started in 1990.
Initial attempts employed exfoliation techniques similar to the drawing method. Multilayer samples down to 10 nm in thickness were obtained.
In 2002, Robert B. Rutherford and Richard L. Dudman filed for a patent in the US on a method to produce graphene by repeatedly peeling off layers from a graphite flake adhered to a substrate, achieving a graphite thickness of . The key to success was the ability to quickly and efficiently identify graphene flakes on the substrate using optical microscopy, which provided a small but visible contrast between the graphene and the substrate.
Another U.S. patent was filed in the same year by Bor Z. Jang and Wen C. Huang for a method to produce graphene based on exfoliation followed by attrition.
In 2014, inventor Larry Fullerton patented a process for producing single-layer graphene sheets.
Full isolation and characterization
Graphene was properly isolated and characterized in 2004 by Andre Geim and Konstantin Novoselov at the University of Manchester. They pulled graphene layers from graphite with a common adhesive tape in a process called micro-mechanical cleavage, colloquially referred to as the Scotch tape technique. The graphene flakes were then transferred onto a thin silicon dioxide layer on a silicon plate ("wafer"). The silica electrically isolated the graphene and weakly interacted with it, providing nearly charge-neutral graphene layers. The silicon beneath the silicon dioxide could be used as a "back gate" electrode to vary the charge density in the graphene over a wide range.
This work resulted in the two winning the Nobel Prize in Physics in 2010 for their groundbreaking experiments with graphene. Their publication, and the surprisingly easy preparation method that they described, sparked a "graphene gold rush". Research expanded and split off into many different subfields, exploring different exceptional properties of the material—quantum mechanical, electrical, chemical, mechanical, optical, magnetic, etc.
Exploring commercial applications
Since the early 2000s, several companies and research laboratories have been working to develop commercial applications of graphene. In 2014, a National Graphene Institute was established with that purpose at the University of Manchester, with a £60 million initial funding. In North East England two commercial manufacturers, Applied Graphene Materials and Thomas Swan Limited have begun manufacturing. Cambridge Nanosystems is a large-scale graphene powder production facility in East Anglia.
Structure
Graphene is a single layer of carbon atoms tightly bound in a hexagonal honeycomb lattice. It is an allotrope of carbon in the form of a plane of sp2-bonded atoms with a bond length of 0.142 nanometers. In a graphene sheet, each atom is connected to its three nearest carbon neighbors by σ-bonds and a delocalized π-bond, which contributes to a valence band that extends over the whole sheet. This type of bonding is also seen in polycyclic aromatic hydrocarbons. The valence band is touched by a conduction band, making graphene a semimetal with unusual electronic properties that are best described by theories for massless relativistic particles. Charge carriers in graphene show linear, rather than quadratic, dependence of energy on momentum, and field-effect transistors with graphene can be made that show bipolar conduction. Charge transport is ballistic over long distances; the material exhibits large quantum oscillations and large nonlinear diamagnetism.
Bonding
Three of the four outer-shell electrons of each atom in a graphene sheet occupy three sp2 hybrid orbitals – a combination of orbitals s, px and py — that are shared with the three nearest atoms, forming σ-bonds. The length of these bonds is about 0.142 nanometers.
The remaining outer-shell electron occupies a pz orbital that is oriented perpendicularly to the plane. These orbitals hybridize together to form two half-filled bands of free-moving electrons, π, and π∗, which are responsible for most of graphene's notable electronic properties. Recent quantitative estimates of aromatic stabilization and limiting size derived from the enthalpies of hydrogenation (ΔHhydro) agree well with the literature reports.
Graphene sheets stack to form graphite with an interplanar spacing of 0.335 nm.
Graphene sheets in solid form usually show evidence in diffraction for graphite's (002) layering. This is true of some single-walled nanostructures. However, unlayered graphene displaying only (hk0) rings has been observed in the core of presolar graphite onions. TEM studies show faceting at defects in flat graphene sheets and suggest a role for two-dimensional crystallization from a melt.
Geometry
The hexagonal lattice structure of isolated, single-layer graphene can be directly seen with transmission electron microscopy (TEM) of sheets of graphene suspended between bars of a metallic grid. Some of these images showed a "rippling" of the flat sheet, with an amplitude of about one nanometer. These ripples may be intrinsic to the material as a result of the instability of two-dimensional crystals, or may originate from the ubiquitous dirt seen in all TEM images of graphene. Photoresist residue, which must be removed to obtain atomic-resolution images, may be the "adsorbates" observed in TEM images, and may explain the observed rippling.
The hexagonal structure is also seen in scanning tunneling microscope (STM) images of graphene supported on silicon dioxide substrates. The rippling seen in these images is caused by the conformation of graphene to the substrate's lattice and is not intrinsic.
Stability
Ab initio calculations show that a graphene sheet is thermodynamically unstable if its size is less than about 20 nm and becomes the most stable fullerene (as within graphite) only for molecules larger than 24,000 atoms.
Electronic properties
Graphene is a zero-gap semiconductor because its conduction and valence bands meet at the Dirac points. The Dirac points are six locations in momentum space on the edge of the Brillouin zone, divided into two non-equivalent sets of three points. These sets are labeled K and K'. These sets give graphene a valley degeneracy of gv = 2. In contrast, for traditional semiconductors, the primary point of interest is generally Γ, where momentum is zero.
If the in-plane direction is confined rather than infinite, its electronic structure changes. These confined structures are referred to as graphene nanoribbons. If the nanoribbon has a "zig-zag" edge, the bandgap remains zero. If it has an "armchair" edge, the bandgap is non-zero.
Graphene's honeycomb structure can be viewed as two interleaving triangular lattices. This perspective has been used to calculate the band structure for a single graphite layer using a tight-binding approximation.
Electronic spectrum
Electrons propagating through the graphene honeycomb lattice effectively lose their mass, producing quasi-particles described by a 2D analogue of the Dirac equation rather than the Schrödinger equation for spin-1/2 particles.
Dispersion relation
The cleavage technique led directly to the first observation of the anomalous quantum Hall effect in graphene in 2005 by Geim's group and by Philip Kim and Yuanbo Zhang. This effect provided direct evidence of graphene's theoretically predicted Berry's phase of massless Dirac fermions and proof of the Dirac fermion nature of electrons. These effects were previously observed in bulk graphite by Yakov Kopelevich, Igor A. Luk'yanchuk, and others, in 2003–2004.
When atoms are placed onto the graphene hexagonal lattice, the overlap between the pz(π) orbitals and the s or the px and py orbitals is zero by symmetry. Therefore, pz electrons forming the π bands in graphene can be treated independently. Within this π-band approximation, using a conventional tight-binding model, the dispersion relation (restricted to first-nearest-neighbor interactions only) that produces the energy of the electrons with wave vector k is:
$E(k_x, k_y) = \pm\gamma_0\sqrt{1 + 4\cos^2\!\left(\tfrac{k_y a}{2}\right) + 4\cos\!\left(\tfrac{k_y a}{2}\right)\cos\!\left(\tfrac{\sqrt{3}\,k_x a}{2}\right)}$

with the nearest-neighbor (π orbitals) hopping energy γ0 ≈ 2.8 eV and the lattice constant a ≈ 2.46 Å. The conduction and valence bands correspond to the different signs. With one pz electron per atom in this model, the valence band is fully occupied, while the conduction band is vacant. The two bands touch at the zone corners (the K point in the Brillouin zone), where there is a zero density of states but no band gap. Thus, graphene exhibits a semi-metallic (or zero-gap semiconductor) character, although this is not true for a graphene sheet rolled into a carbon nanotube due to its curvature. Two of the six Dirac points are independent, while the rest are equivalent by symmetry. Near the K-points, the energy depends linearly on the wave vector, similar to a relativistic particle. Since an elementary cell of the lattice has a basis of two atoms, the wave function has an effective 2-spinor structure.
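As a quick numerical illustration (a sketch, not part of the original experiments), the band energy above can be evaluated directly; with the commonly quoted values γ0 ≈ 2.8 eV and a ≈ 2.46 Å it gives ±3γ0 at the zone center and zero energy at the zone corner K, as expected:

```python
import numpy as np

gamma0 = 2.8   # eV, nearest-neighbor hopping energy (commonly quoted value)
a = 2.46e-10   # m, graphene lattice constant

def band_energy(kx, ky, sign=+1):
    """Nearest-neighbor tight-binding pi-band energy of graphene, in eV."""
    f = (1 + 4*np.cos(ky*a/2)**2
           + 4*np.cos(ky*a/2)*np.cos(np.sqrt(3)*kx*a/2))
    # clip guards against tiny negative rounding error exactly at the Dirac point
    return sign * gamma0 * np.sqrt(np.clip(f, 0.0, None))

K = (2*np.pi/(np.sqrt(3)*a), 2*np.pi/(3*a))  # one corner of the Brillouin zone
print(band_energy(0.0, 0.0))  # ~8.4 eV ( = 3*gamma0, at the Gamma point)
print(band_energy(*K))        # ~0 eV (bands touch at the Dirac point)
```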
Consequently, at low energies even neglecting the true spin, electrons can be described by an equation formally equivalent to the massless Dirac equation. Hence, the electrons and holes are called Dirac fermions. This pseudo-relativistic description is restricted to the chiral limit, i.e., to vanishing rest mass M0, leading to interesting additional features:
$-i\hbar v_F\,\vec{\sigma}\cdot\nabla\psi(\mathbf{r}) = E\,\psi(\mathbf{r})$

Here vF ≈ 10⁶ m/s (~0.003 c) is the Fermi velocity in graphene, which replaces the velocity of light in the Dirac theory; $\vec{\sigma}$ is the vector of the Pauli matrices, $\psi(\mathbf{r})$ is the two-component wave function of the electrons, and E is their energy.
The equation describing the electrons' linear dispersion relation is:

$E(\mathbf{q}) = \pm\hbar v_F|\mathbf{q}|$

where the wavevector $\mathbf{q}$ is measured from the Brillouin zone vertex K, $\mathbf{q} = \mathbf{k} - \mathbf{K}$, and the zero of energy is set to coincide with the Dirac point. The equation uses a pseudospin matrix formula that describes two sublattices of the honeycomb lattice.
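The quoted Fermi velocity follows from the slope of this cone; a minimal sketch, assuming the same tight-binding parameters as above (γ0 ≈ 2.8 eV, a ≈ 2.46 Å), recovers vF ≈ 10⁶ m/s ≈ 0.003c:

```python
import scipy.constants as sc

gamma0 = 2.8 * sc.e   # hopping energy in joules (assumed ~2.8 eV)
a = 2.46e-10          # m, lattice constant

# Expanding the tight-binding bands near K gives E = hbar*vF*|q| with
# vF = sqrt(3)*gamma0*a / (2*hbar).
vF = 3**0.5 * gamma0 * a / (2 * sc.hbar)
print(f"vF = {vF:.2e} m/s = {vF/sc.c:.4f} c")  # ~9e5 m/s, ~0.003 c
```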
Single-atom wave propagation
Electron waves in graphene propagate within a single-atom layer, making them sensitive to the proximity of other materials such as high-κ dielectrics, superconductors, and ferromagnets.
Ambipolar electron and hole transport
Graphene exhibits high electron mobility at room temperature, with values reported in excess of . Hole and electron mobilities are nearly identical. The mobility is independent of temperature between and , showing minimal change even at room temperature (300 K), suggesting that the dominant scattering mechanism is defect scattering. Scattering by graphene's acoustic phonons intrinsically limits room temperature mobility in freestanding graphene to at a carrier density of .
The corresponding resistivity of graphene sheets is 10⁻⁶ Ω·cm, lower than the resistivity of silver, which is otherwise the lowest known at room temperature. However, on substrates, electron scattering by optical phonons of the substrate has a more significant effect than scattering by graphene's own phonons, limiting mobility to .
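These transport numbers are mutually consistent under a simple Drude picture, σ = neμ; the sketch below assumes the intrinsic values often cited for graphene (μ ≈ 200,000 cm²/(V·s) at n ≈ 10¹² cm⁻²) and uses the graphite interlayer spacing of 0.335 nm as an effective sheet thickness:

```python
import scipy.constants as sc

# Assumed, commonly cited intrinsic values (illustrative, not measured here):
mu = 200_000e-4   # mobility, m^2/(V*s)  (200,000 cm^2/(V*s))
n  = 1e12 * 1e4   # carrier density, m^-2  (1e12 cm^-2)
t  = 0.335e-9     # effective thickness: graphite interlayer spacing, m

G_sheet = n * sc.e * mu   # Drude sheet conductance, S per square
rho = t / G_sheet         # equivalent bulk resistivity, ohm*m
print(f"sheet resistance ~ {1/G_sheet:.0f} ohm/sq")
print(f"equivalent resistivity ~ {rho*100:.1e} ohm*cm")  # ~1e-6 ohm*cm
```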
Charge transport can be affected by the adsorption of contaminants such as water and oxygen molecules, leading to non-repetitive and large-hysteresis I-V characteristics, which requires electrical measurements to be carried out in vacuum. Coating the graphene surface with materials such as SiN, PMMA or h-BN has been proposed for protection. In January 2015, the first stable operation of a graphene device in air over several weeks was reported, for graphene whose surface was protected by aluminum oxide. In 2015, lithium-coated graphene exhibited superconductivity, a first for graphene.
Electrical resistance in 40-nanometer-wide nanoribbons of epitaxial graphene changes in discrete steps. The ribbons' conductance exceeds predictions by a factor of 10. The ribbons can function more like optical waveguides or quantum dots, allowing electrons to flow smoothly along the ribbon edges. By contrast, in copper, resistance increases proportionally with length as electrons encounter impurities.
Transport is dominated by two modes: one ballistic and temperature-independent, and the other thermally activated. Ballistic electrons resemble those in cylindrical carbon nanotubes. At room temperature, resistance increases abruptly at a specific length—the ballistic mode at 16 micrometers and the thermally activated mode at 160 nanometers (1% of the former length).
Graphene electrons can traverse micrometer distances without scattering, even at room temperature.
Electrical conductivity and charge transport
Despite zero carrier density near the Dirac points, graphene exhibits a minimum conductivity on the order of 4e²/h. The origin of this minimum conductivity is still unclear. However, rippling of the graphene sheet or ionized impurities in the substrate may lead to local puddles of carriers that allow conduction. Several theories suggest that the minimum conductivity should be 4e²/(πh); however, most measurements are of order 4e²/h or greater and depend on impurity concentration.
Graphene exhibits positive photoconductivity near zero carrier density and negative photoconductivity at high carrier density, governed by the interplay between photoinduced changes of both the Drude weight and the carrier scattering rate.
Graphene doped with various gaseous species (both acceptors and donors) can be returned to an undoped state by gentle heating in a vacuum. Even for dopant concentrations in excess of 10¹² cm⁻², carrier mobility exhibits no observable change. Doping graphene with potassium in ultra-high vacuum at low temperature can reduce the mobility 20-fold. The mobility reduction is reversible on heating the graphene to remove the potassium.
Due to graphene's two dimensions, charge fractionalization (where the apparent charge of individual pseudoparticles in low-dimensional systems is less than a single quantum) is thought to occur. It may therefore be a suitable material for constructing quantum computers using anyonic circuits.
Chiral half-integer quantum Hall effect
The quantum Hall effect is a quantum mechanical version of the Hall effect, which is the production of transverse (perpendicular to the main current) conductivity in the presence of a magnetic field. The Hall conductivity is quantized at integer multiples (the "Landau level") of the basic quantity e²/h, where e is the elementary electric charge and h is the Planck constant. It can usually be observed only in very clean silicon or gallium arsenide solids, at temperatures around and very high magnetic fields.
Graphene shows the quantum Hall effect: the conductivity quantization is unusual in that the sequence of steps is shifted by 1/2 with respect to the standard sequence, with an additional factor of 4. Graphene's Hall conductivity is σxy = ±4(N + 1/2)e²/h, where N is the Landau level index and the double valley and double spin degeneracies give the factor of 4. These anomalies are present not only at extremely low temperatures but also at room temperature.
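The shifted sequence can be made concrete with a short sketch: the formula above places the Hall plateaus at ±2, ±6, ±10, ... times e²/h, rather than at the conventional integer multiples:

```python
import scipy.constants as sc

e2_over_h = sc.e**2 / sc.h   # e^2/h in siemens
# Half-integer sequence: sigma_xy = +/- 4*(N + 1/2) * e^2/h
for N in range(4):
    filling = 4 * (N + 0.5)  # 2, 6, 10, 14, ...
    print(f"N={N}: sigma_xy = +/-{filling:g} e^2/h = +/-{filling*e2_over_h:.3e} S")
```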
Chiral electrons and anomalies
This behavior is a direct result of graphene's chiral, massless Dirac electrons. In a magnetic field, their spectrum has a Landau level with energy precisely at the Dirac point. This level is a consequence of the Atiyah–Singer index theorem and is half-filled in neutral graphene, leading to the "+1/2" in the Hall conductivity. Bilayer graphene also shows the quantum Hall effect, but with only one of the two anomalies (i.e. σxy = ±4N·e²/h). In the second anomaly, the first plateau at N = 0 is absent, indicating that bilayer graphene stays metallic at the neutrality point.
Unlike normal metals, graphene's longitudinal resistance shows maxima rather than minima for integral values of the Landau filling factor in measurements of the Shubnikov–de Haas oscillations, thus the term "integral quantum Hall effect". These oscillations show a phase shift of π, known as Berry's phase. Berry's phase arises due to chirality or dependence (locking) of the pseudospin quantum number on the momentum of low-energy electrons near the Dirac points. The temperature dependence of the oscillations reveals that the carriers have a non-zero cyclotron mass, despite their zero effective mass in the Dirac-fermion formalism.
Experimental observations
Graphene samples prepared on nickel films, and on both the silicon face and carbon face of silicon carbide, show the anomalous effect directly in electrical measurements. Graphitic layers on the carbon face of silicon carbide show a clear Dirac spectrum in angle-resolved photoemission experiments, and the effect is observed in cyclotron resonance and tunneling experiments.
"Massive" electrons
Graphene's unit cell has two identical carbon atoms and two zero-energy states: one where the electron resides on atom A, and the other on atom B. However, if the unit cell's two atoms are not identical, the situation changes. Research shows that placing hexagonal boron nitride (h-BN) in contact with graphene can alter the potential felt at atoms A and B sufficiently for the electrons to develop a mass and an accompanying band gap of about 30 meV.
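In the standard massive-Dirac description (a sketch consistent with, though not quoted from, the text above), such a sublattice asymmetry Δ modifies the low-energy dispersion near K to

$E(\mathbf{q}) = \pm\sqrt{(\hbar v_F |\mathbf{q}|)^2 + (\Delta/2)^2},$

so a band gap of Δ ≈ 30 meV corresponds to a mass term of Δ/2 ≈ 15 meV on each band.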
The mass can be positive or negative. An arrangement that slightly raises the energy of an electron on atom A relative to atom B gives it a positive mass, while an arrangement that raises the energy of atom B produces a negative electron mass. The two versions behave alike and are indistinguishable via optical spectroscopy. An electron traveling from a positive-mass region to a negative-mass region must cross an intermediate region where its mass once again becomes zero. This region is gapless and therefore metallic. Metallic modes bounding semiconducting regions of opposite-sign mass is a hallmark of a topological phase and displays much the same physics as topological insulators.
If the mass in graphene can be controlled, electrons can be confined to massless regions by surrounding them with massive regions, allowing the patterning of quantum dots, wires, and other mesoscopic structures. It also produces one-dimensional conductors along the boundary. These wires would be protected against backscattering and could carry currents without dissipation.
Interactions and phenomena
Casimir effect
The Casimir effect is an interaction between disjoint neutral bodies provoked by the fluctuations of the electromagnetic vacuum. Mathematically, it can be explained by considering the normal modes of electromagnetic fields, which explicitly depend on the boundary conditions on the interacting bodies' surfaces. Due to graphene's strong interaction with the electromagnetic field as a one-atom-thick material, the Casimir effect has garnered significant interest.
Van der Waals force
The Van der Waals force (or dispersion force) is also unusual, obeying an inverse cubic asymptotic power law in contrast to the usual inverse quartic law.
Permittivity
Graphene's permittivity varies with frequency. Over a range from microwave to millimeter-wave frequencies, it is approximately 3.3. This permittivity, combined with graphene's ability to function as both a conductor and an insulator, theoretically allows compact capacitors made of graphene to store large amounts of electrical energy.
Optical properties
Graphene exhibits unique optical properties, showing unexpectedly high opacity for an atomic monolayer in vacuum, absorbing approximately πα ≈ 2.3% of light from visible to infrared wavelengths, where α is the fine-structure constant. This is due to the unusual low-energy electronic structure of monolayer graphene, characterized by electron and hole conical bands meeting at the Dirac point, which is qualitatively different from more common quadratic massive bands. Based on the Slonczewski–Weiss–McClure (SWMcC) band model of graphite, calculations using Fresnel equations in the thin-film limit account for interatomic distance, hopping values, and frequency, thus assessing optical conductance.
The absorption value has been confirmed experimentally, but the measurements lack the precision required to improve upon existing techniques for determining the fine-structure constant.
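Numerically, πα can be evaluated directly from the fine-structure constant; using the thin-film transmittance expression T = (1 + πα/2)⁻² commonly applied to graphene, a one-line check reproduces the ≈2.3% single-layer absorption:

```python
import scipy.constants as sc

absorb = sc.pi * sc.alpha       # ideal single-layer absorption, ~0.023
T = (1 + sc.pi*sc.alpha/2)**-2  # thin-film transmittance
print(f"pi*alpha = {absorb:.4f}  (~2.3% absorbed)")
print(f"transmittance T = {T:.4f}  (~97.7%)")
```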
Multi-parametric surface plasmon resonance
Multi-parametric surface plasmon resonance has been utilized to characterize both the thickness and refractive index of chemical-vapor-deposition (CVD)-grown graphene films. At a wavelength of , measured refractive index and extinction coefficient values are 3.135 and 0.897, respectively. Thickness determination yielded 3.7 Å across a 0.5 mm area, consistent with the 3.35 Å reported for the layer-to-layer carbon atom distance of graphite crystals. This method can further be used for real-time, label-free monitoring of interactions of graphene with organic and inorganic substances. The existence of unidirectional surface plasmons in nonreciprocal graphene-based gyrotropic interfaces has been demonstrated theoretically, offering tunability from THz to near-infrared and visible frequencies by controlling graphene's chemical potential. In particular, the unidirectional frequency bandwidth can be 1–2 orders of magnitude larger than that achievable with a metal under similar magnetic field conditions, stemming from graphene's extremely small effective electron mass.
Tunable band gap and optical response
Graphene's band gap can be tuned from 0 to (about 5-micrometer wavelength) by applying a voltage to a dual-gate bilayer graphene field-effect transistor (FET) at room temperature. The optical response of graphene nanoribbons is tunable into the terahertz regime by an applied magnetic field. Graphene/graphene oxide systems exhibit electrochromic behavior, enabling tuning of both linear and ultrafast optical properties.
Graphene-based Bragg grating
A graphene-based Bragg grating (one-dimensional photonic crystal) has been fabricated and demonstrated its capability to excite surface electromagnetic waves in a periodic structure, using a He–Ne laser as the light source.
Saturable absorption
Graphene's optical absorption saturates when the input intensity exceeds a threshold value. This nonlinear optical behavior, termed saturable absorption, occurs across the visible to near-infrared spectrum due to graphene's universal optical absorption and zero band gap. This property has enabled full-band mode-locking in fiber lasers using graphene-based saturable absorbers, contributing significantly to ultrafast photonics. Additionally, the optical response of graphene/graphene oxide layers can be electrically tuned.
Saturable absorption in graphene can also occur in the microwave and terahertz bands, owing to its wideband optical absorption. The microwave saturable absorption of graphene demonstrates the possibility of graphene-based microwave and terahertz photonic devices, such as microwave saturable absorbers, modulators, polarizers, microwave signal processors, and broadband wireless access networks.
Nonlinear Kerr effect
Under intense laser illumination, graphene exhibits a nonlinear phase shift due to the optical nonlinear Kerr effect. Graphene demonstrates a large nonlinear Kerr coefficient of , nearly nine orders of magnitude larger than that of bulk dielectrics, suggesting its potential as a powerful nonlinear Kerr medium capable of supporting various nonlinear effects, including solitons.
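An order-of-magnitude sketch of the resulting nonlinear phase shift, φ = (2π/λ)·n₂·I·L; the Kerr coefficient (taken as 10⁻⁷ cm²/W), wavelength, and intensity below are illustrative assumptions, not values from the text:

```python
import math

n2 = 1e-7 * 1e-4      # Kerr coefficient, m^2/W (assumed 1e-7 cm^2/W)
wavelength = 1.55e-6  # m, telecom band (assumed)
intensity = 1e12      # W/m^2, i.e. 100 MW/cm^2 (assumed pulsed-laser level)
L = 0.335e-9          # m, one atomic layer as the interaction length

phi = (2*math.pi/wavelength) * n2 * intensity * L   # nonlinear phase, radians
print(f"nonlinear phase shift ~ {phi:.1e} rad per pass")
```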
Excitonic properties
First-principles calculations incorporating quasiparticle corrections and many-body effects have been employed to study the electronic and optical properties of graphene-based materials. The approach proceeds in three stages. Using GW calculations, the properties of graphene-based materials have been accurately investigated, including bulk graphene, nanoribbons, edge- and surface-functionalized armchair ribbons, hydrogen-saturated armchair ribbons, the Josephson effect in graphene SNS junctions with a single localized defect, and armchair ribbon scaling properties.
Spin transport
Graphene is considered an ideal material for spintronics due to its minimal spin–orbit interaction, the near absence of nuclear magnetic moments in carbon, and weak hyperfine interaction. Electrical injection and detection of spin current have been demonstrated up to room temperature, with spin coherence length exceeding 1 micrometer observed at this temperature. Control of spin current polarity via electrical gating has been achieved at low temperatures.
Magnetic properties
Strong magnetic fields
Graphene's quantum Hall effect in magnetic fields above approximately 10 tesla reveals additional interesting features. Additional plateaus in the Hall conductivity at σxy = νe²/h with ν = 0, ±1, ±4 have been observed, along with a plateau at ν = 3 and a fractional quantum Hall effect at ν = 1/3.
These observations with ν = 0, ±1, ±3, ±4 indicate that the four-fold degeneracy (two valley and two spin degrees of freedom) of the Landau energy levels is partially or completely lifted. One hypothesis proposes that magnetic catalysis of symmetry breaking is responsible for lifting the degeneracy.
Spintronic properties
Graphene exhibits spintronic and magnetic properties concurrently. Low-defect graphene nanomeshes, fabricated using a non-lithographic approach, exhibit significant ferromagnetism even at room temperature. Additionally, a spin-pumping effect has been observed for fields applied in parallel to the planes of few-layer ferromagnetic nanomeshes, while a magnetoresistance hysteresis loop is evident under perpendicular fields. Charge-neutral graphene has demonstrated magnetoresistance exceeding 100% in magnetic fields generated by standard permanent magnets (approximately 0.1 tesla), marking a record magnetoresistance at room temperature among known materials.
Magnetic substrates
Researchers magnetized graphene in 2010 by growing it via CVD on a Ni(111) substrate, and in 2014 by placing it on an atomically smooth layer of magnetic yttrium iron garnet, which left graphene's electronic properties unaffected. Previous methods involved doping graphene with other substances, whose presence negatively affected its electronic properties.
Mechanical properties
The (two-dimensional) density of graphene is 0.763 mg per square meter.
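This figure follows directly from the lattice geometry, two carbon atoms per hexagonal unit cell of lattice constant a ≈ 2.46 Å; a quick check:

```python
import math
import scipy.constants as sc

a = 2.46e-10                        # m, lattice constant
cell_area = math.sqrt(3)/2 * a**2   # area of the two-atom unit cell, m^2
m_C = 12.011 * sc.atomic_mass       # average mass of a carbon atom, kg

density = 2 * m_C / cell_area       # kg/m^2
print(f"{density*1e6:.3f} mg/m^2")  # ~0.76 mg per square meter
```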
Graphene is the strongest material ever tested, with an intrinsic tensile strength of (with representative engineering tensile strength ~50–60 GPa for stretching large-area freestanding graphene) and a Young's modulus (stiffness) close to . The Nobel announcement illustrated this by saying that a 1 square meter graphene hammock would support a cat but would weigh only as much as one of the cat's whiskers, at (about 0.001% of the weight of 1 m² of paper).
Large-angle bending of graphene monolayers with minimal strain demonstrates its mechanical robustness. Even under extreme deformation, monolayer graphene maintains excellent carrier mobility.
The spring constant of suspended graphene sheets has been measured using an atomic force microscope (AFM). Graphene sheets were suspended over cavities, and an AFM tip was used to apply stress to the sheet to test its mechanical properties. Its spring constant was in the range 1–5 N/m and the stiffness was , which differs from that of bulk graphite. These intrinsic properties could lead to NEMS applications such as pressure sensors and resonators. Due to its large surface energy and out-of-plane ductility, flat graphene sheets are unstable with respect to scrolling, i.e. bending into a cylindrical shape, which is its lower-energy state.
In two-dimensional structures like graphene, thermal and quantum fluctuations cause relative displacement of atoms; by the Mermin–Wagner theorem, the amplitude of long-wavelength fluctuations grows logarithmically with the scale of the structure and would therefore be unbounded in structures of infinite size. Local deformation and elastic strain are negligibly affected by this long-range divergence in relative displacement. It is believed that a sufficiently large 2D structure, in the absence of applied lateral tension, will bend and crumple to form a fluctuating 3D structure. Researchers have observed ripples in suspended layers of graphene, and it has been proposed that the ripples are caused by thermal fluctuations in the material. As a consequence of these dynamical deformations, it is debatable whether graphene is truly a 2D structure. These ripples, when amplified by vacancy defects, induce a negative Poisson's ratio into graphene, making it the thinnest auxetic material known so far.
Graphene-nickel (Ni) composites, created through plating processes, exhibit enhanced mechanical properties due to strong Ni-graphene interactions inhibiting dislocation sliding in the Ni matrix.
Fracture toughness
In 2014, researchers from Rice University and the Georgia Institute of Technology indicated that despite its strength, graphene is also relatively brittle, with a fracture toughness of about 4 MPa√m. This indicates that imperfect graphene is likely to crack in a brittle manner like ceramic materials, as opposed to many metallic materials with fracture toughnesses in the range of 15–50 MPa√m. Later in 2014, the Rice team announced that graphene showed a greater ability to distribute force from an impact than any known material, ten times that of steel per unit weight. The force was transmitted at .
Polycrystalline graphene
Various methods – most notably, chemical vapor deposition (CVD), as discussed in the section below – have been developed to produce large-scale graphene needed for device applications. Such methods often synthesize polycrystalline graphene. The mechanical properties of polycrystalline graphene are affected by the nature of the defects, such as grain-boundaries (GB) and vacancies, present in the system and the average grain-size.
Graphene grain boundaries typically contain heptagon-pentagon pairs. The arrangement of such defects depends on whether the GB lies in the zig-zag or armchair direction, and further on the tilt angle of the GB. In 2010, researchers from Brown University computationally predicted that as the tilt angle increases, the grain boundary strength also increases. They showed that the weakest link in the grain boundary is at the critical bonds of the heptagon rings. As the grain boundary angle increases, the strain in these heptagon rings decreases, causing the grain boundary to be stronger than lower-angle GBs. They proposed that, in fact, for sufficiently large-angle GBs, the strength of the GB is similar to that of pristine graphene. In 2012, it was further shown that the strength can increase or decrease, depending on the detailed arrangements of the defects. These predictions have since been supported by experimental evidence. In a 2013 study led by James Hone's group, researchers probed the elastic stiffness and strength of CVD-grown graphene by combining nano-indentation and high-resolution TEM. They found that the elastic stiffness is identical to, and the strength only slightly lower than, those of pristine graphene. In the same year, researchers from the University of California, Berkeley and the University of California, Los Angeles probed bi-crystalline graphene with TEM and AFM. They found that the strength of grain boundaries indeed tends to increase with the tilt angle.
Vacancies are not only prevalent in polycrystalline graphene; they can also have significant effects on its strength. The consensus is that the strength decreases with increasing density of vacancies. Various studies have shown that for graphene with a sufficiently low density of vacancies, the strength does not vary significantly from that of pristine graphene; on the other hand, a high density of vacancies can severely reduce its strength.
Compared to the fairly well-understood effects of grain boundaries and vacancies on the mechanical properties of graphene, there is no clear consensus on the general effect that the average grain size has on the strength of polycrystalline graphene. In fact, three notable theoretical or computational studies on this topic have led to three different conclusions. First, in 2012, Kotakoski and Meyer studied the mechanical properties of polycrystalline graphene with a "realistic atomistic model", using molecular-dynamics (MD) simulation. To emulate the growth mechanism of CVD, they first randomly selected nucleation sites that were at least 5 Å (arbitrarily chosen) apart from other sites. Polycrystalline graphene was generated from these nucleation sites and was subsequently annealed at 3000 K, then quenched. Based on this model, they found that cracks initiate at grain-boundary junctions, but that the grain size does not significantly affect the strength. Second, in 2013, Z. Song et al. used MD simulations to study the mechanical properties of polycrystalline graphene with uniform-sized hexagon-shaped grains. The hexagonal grains were oriented in various lattice directions, and the GBs consisted of only heptagon, pentagon, and hexagonal carbon rings. The motivation behind such a model was that similar systems had been experimentally observed in graphene flakes grown on the surface of liquid copper. While they also noted that cracks are typically initiated at the triple junctions, they found that as the grain size decreases, the yield strength of graphene increases. Based on this finding, they proposed that polycrystalline graphene follows a pseudo Hall–Petch relationship. Third, in 2013, Z. D. Sha et al. studied the effect of grain size on the properties of polycrystalline graphene by modeling the grain patches using a Voronoi construction. The GBs in this model consisted of heptagons, pentagons, and hexagons, as well as squares, octagons, and vacancies. Through MD simulation, and contrary to the aforementioned study, they found an inverse Hall–Petch relationship, where the strength of graphene increases as the grain size increases. Experimental observations and other theoretical predictions have also given differing conclusions, similar to the three above. Such discrepancies show the complexity of the effects that grain size, arrangements of defects, and the nature of defects have on the mechanical properties of polycrystalline graphene.
Other properties
Thermal conductivity
Thermal transport in graphene is a burgeoning area of research, particularly for its potential applications in thermal management. Most experimental measurements carry large uncertainties in the reported thermal conductivity values due to the limitations of the instruments used. Following predictions for graphene and related carbon nanotubes, early measurements of the thermal conductivity of suspended graphene reported an exceptionally large thermal conductivity up to , compared with the thermal conductivity of pyrolytic graphite of approximately at room temperature. However, later studies, primarily on more scalable but more defective graphene grown by chemical vapor deposition, have been unable to reproduce such high thermal conductivity measurements, producing a wide range of thermal conductivities between – for suspended single-layer graphene. The large range in the reported values can be attributed to large measurement uncertainties as well as variations in graphene quality and processing conditions. In addition, it is known that when single-layer graphene is supported on an amorphous material, the thermal conductivity is reduced to about – at room temperature as a result of scattering of graphene lattice waves by the substrate, and can be even lower for few-layer graphene encased in amorphous oxide. Likewise, polymeric residue can contribute to a similar decrease in the thermal conductivity of suspended graphene, to approximately – for bilayer graphene.
Isotopic composition, specifically the ratio of ¹²C to ¹³C, significantly affects graphene's thermal conductivity. Isotopically pure ¹²C graphene exhibits higher thermal conductivity than either a 50:50 isotope ratio or the naturally occurring 99:1 ratio. It can be shown, using the Wiedemann–Franz law, that the thermal conduction is phonon-dominated. However, for a gated graphene strip, an applied gate bias causing a Fermi energy shift much larger than kBT can cause the electronic contribution to increase and dominate over the phonon contribution at low temperatures. The ballistic thermal conductance of graphene is isotropic.
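The Wiedemann–Franz argument can be illustrated with a short estimate: taking the minimum sheet conductance 4e²/h near the Dirac point and the interlayer spacing as an effective thickness (both illustrative choices), the electronic part of the thermal conductivity comes out orders of magnitude below the measured, phonon-dominated values:

```python
import scipy.constants as sc

L0 = sc.pi**2/3 * (sc.k/sc.e)**2   # Sommerfeld Lorenz number, W*ohm/K^2
T = 300.0                          # K, room temperature
G_sheet = 4 * sc.e**2 / sc.h       # minimum sheet conductance near the Dirac point
t = 0.335e-9                       # m, interlayer spacing as effective thickness

kappa_e = L0 * (G_sheet/t) * T     # Wiedemann-Franz electronic contribution
print(f"kappa_e ~ {kappa_e:.1f} W/(m*K)")  # a few W/(m*K): tiny vs. measured values
```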
Graphite, the 3D counterpart of graphene, exhibits a basal-plane thermal conductivity exceeding (similar to diamond). In graphite, the c-axis (out-of-plane) thermal conductivity is about a factor of 100 smaller due to the weak binding forces between basal planes as well as the larger lattice spacing. In addition, the ballistic thermal conductance of graphene has been shown to give the lower limit of the ballistic thermal conductance, per unit circumference and length, of carbon nanotubes.
Graphene's thermal conductivity is governed by its three acoustic phonon modes: two in-plane modes (LA, TA) with a linear dispersion relation and one out-of-plane mode (ZA) with a quadratic dispersion relation. At low temperatures, the T^1.5 thermal conductivity contribution of the out-of-plane mode dominates over the T^2 dependence of the linear modes. Some graphene phonon bands exhibit negative Grüneisen parameters, resulting in a negative thermal expansion coefficient at low temperatures. The lowest negative Grüneisen parameters correspond to the lowest transverse acoustic ZA modes, whose frequencies increase with the in-plane lattice parameter, akin to a stretched string vibrating at higher frequency.
Chemical properties
Graphene has a theoretical specific surface area (SSA) of 2630 m²/g. This is much larger than that reported to date for carbon black (typically smaller than ) or for carbon nanotubes (CNTs), from ≈100 to , and is similar to that of activated carbon.
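The theoretical SSA is simply the reciprocal of the areal density counted over both faces of the sheet; with the ~0.763 mg/m² density quoted in the mechanical-properties section:

```python
areal_density = 0.763e-6            # kg/m^2, graphene's two-dimensional density
ssa = 2 / areal_density             # both faces are accessible, m^2/kg
print(f"SSA ~ {ssa/1000:.0f} m^2/g")  # ~2600 m^2/g
```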
Graphene is the only form of carbon (or solid material) in which every atom is available for chemical reaction from two sides (due to the 2D structure). Atoms at the edges of a graphene sheet have special chemical reactivity, and graphene has the highest ratio of edge atoms of any allotrope. Defects within a sheet increase its chemical reactivity. The onset temperature of the reaction between the basal plane of single-layer graphene and oxygen gas is below . Graphene burns at very low temperatures (e.g., ). Graphene is commonly modified with oxygen- and nitrogen-containing functional groups and analyzed by infrared spectroscopy and X-ray photoelectron spectroscopy. However, determining the structures of graphene with oxygen and nitrogen functional groups requires the structures to be well controlled.
In 2013, Stanford University physicists reported that single-layer graphene is a hundred times more chemically reactive than thicker multilayer sheets.
Graphene can self-repair holes in its sheets when exposed to molecules containing carbon, such as hydrocarbons. When bombarded with pure carbon atoms, the atoms align perfectly into hexagons, completely filling the holes.
Biological properties
Despite the promising results in different cell studies and proof of concept studies, there is still incomplete understanding of the full biocompatibility of graphene-based materials. Different cell lines react differently when exposed to graphene, and it has been shown that the lateral size of the graphene flakes, the form and surface chemistry can elicit different biological responses on the same cell line.
There are indications that graphene has promise as a useful material for interacting with neural cells; studies on cultured neural cells show limited success.
Graphene also has some utility in osteogenesis. Researchers at the Graphene Research Centre at the National University of Singapore (NUS) discovered in 2011 the ability of graphene to accelerate the osteogenic differentiation of human mesenchymal stem cells without the use of biochemical inducers.
Graphene can be used in biosensors; in 2015, researchers demonstrated that a graphene-based sensor can be used to detect a cancer risk biomarker. In particular, by using epitaxial graphene on silicon carbide, they were repeatedly able to detect 8-hydroxydeoxyguanosine (8-OHdG), a DNA damage biomarker.
Support substrate
The electronic properties of graphene can be significantly influenced by the supporting substrate. Studies of graphene monolayers on clean and hydrogen(H)-passivated silicon (100) (Si(100)/H) surfaces have been performed. The Si(100)/H surface does not perturb the electronic properties of graphene, whereas the interaction between the clean Si(100) surface and graphene changes its electronic states significantly. This effect results from the covalent bonding between C and surface Si atoms, modifying the π-orbital network of the graphene layer. The local density of states shows that the bonded C and Si surface states are highly disturbed near the Fermi energy.
Graphene layers and structural variants
Monolayer sheets
In 2013 a group of Polish scientists presented a production unit that allows the manufacture of continuous monolayer sheets. The process is based on graphene growth on a liquid metal matrix; the product of this process was called High Strength Metallurgical Graphene. In a study published in Nature, researchers used a single-layer graphene electrode and a novel surface-sensitive non-linear spectroscopy technique to investigate the topmost water layer at the electrochemically charged surface. They found that the interfacial water's response to the applied electric field is asymmetric with respect to the sign of the field.
Bilayer graphene
Bilayer graphene displays the anomalous quantum Hall effect, a tunable band gap and potential for excitonic condensation, making it a promising candidate for optoelectronic and nanoelectronic applications. Bilayer graphene typically can be found either in twisted configurations, where the two layers are rotated relative to each other, or in graphitic Bernal-stacked configurations, where half the atoms in one layer lie atop half the atoms in the other. Stacking order and orientation govern the optical and electronic properties of bilayer graphene.
One way to synthesize bilayer graphene is via chemical vapor deposition, which can produce large bilayer regions that almost exclusively conform to a Bernal stack geometry.
It has been shown that the two graphene layers can withstand significant strain or doping mismatch, which ultimately should lead to their exfoliation.
Turbostratic
Turbostratic graphene exhibits weak interlayer coupling, and the spacing is increased with respect to Bernal-stacked multilayer graphene. Rotational misalignment preserves the 2D electronic structure, as confirmed by Raman spectroscopy. The D peak is very weak, whereas the 2D and G peaks remain prominent.
A rather peculiar feature is that the I2D/IG ratio can exceed 10. However, most importantly, the M peak, which originates from AB stacking, is absent, whereas the TS1 and TS2 modes are visible in the Raman spectrum. The material is formed through conversion of non-graphenic carbon into graphenic carbon without providing sufficient energy to allow for the reorganization through annealing of adjacent graphene layers into crystalline graphitic structures.
Graphene superlattices
Periodically stacked graphene and its insulating isomorph provide a fascinating structural element in implementing highly functional superlattices at the atomic scale, which offers possibilities for designing nanoelectronic and photonic devices. Various types of superlattices can be obtained by stacking graphene and its related forms. The energy band in layer-stacked superlattices is found to be more sensitive to the barrier width than that in conventional III–V semiconductor superlattices. When adding more than one atomic layer to the barrier in each period, the coupling of electronic wavefunctions in neighboring potential wells can be significantly reduced, which leads to the degeneration of continuous subbands into quantized energy levels. When varying the well width, the energy levels in the potential wells along the L-M direction behave distinctly from those along the K-H direction.
A superlattice corresponds to a periodic or quasi-periodic arrangement of different materials and can be described by a superlattice period, which confers a new translational symmetry on the system, impacting its phonon dispersions and subsequently its thermal transport properties. Recently, uniform monolayer graphene-hBN structures have been successfully synthesized via lithography patterning coupled with chemical vapor deposition (CVD). Furthermore, superlattices of graphene-hBN are ideal model systems for the realization and understanding of coherent (wave-like) and incoherent (particle-like) phonon thermal transport.
Nanostructured graphene forms
Graphene nanoribbons
Graphene nanoribbons ("nanostripes" in the "zig-zag"/"zigzag" orientation), at low temperatures, show spin-polarized metallic edge currents, which also suggests applications in the new field of spintronics. (In the "armchair" orientation, the edges behave like semiconductors.)
Graphene quantum dots
A graphene quantum dot (GQD) is a graphene fragment with size smaller than 100 nm. The properties of GQDs differ from those of bulk graphene due to quantum confinement effects, which only become apparent at sizes below 100 nm.
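A hedged order-of-magnitude sketch of why this size threshold appears: for massless Dirac carriers the confinement energy scales as ħvF/d rather than 1/d², so gaps of tens of meV, comparable to kBT at room temperature, open only for dots below roughly 100 nm (vF ≈ 10⁶ m/s is the commonly quoted value; the prefactor is taken as 1 for a rough estimate):

```python
import scipy.constants as sc

vF = 1e6   # m/s, commonly quoted Fermi velocity (assumed)
for d in (5e-9, 10e-9, 50e-9, 100e-9):   # dot diameters
    gap_meV = sc.hbar*vF/d / sc.e * 1e3  # ~hbar*vF/d, prefactor ~1 assumed
    print(f"d = {d*1e9:5.0f} nm -> confinement scale ~ {gap_meV:6.1f} meV")
```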
Modified and functionalized graphene
Graphene oxide
Graphene oxide is usually produced through chemical exfoliation of graphite. A particularly popular technique is the improved Hummers' method. Using paper-making techniques on dispersed, oxidized and chemically processed graphite in water, the monolayer flakes form a single sheet and create strong bonds. These sheets, called graphene oxide paper, have a measured tensile modulus of 32 GPa. The chemical properties of graphite oxide are related to the functional groups attached to graphene sheets, which can change the polymerization pathway and similar chemical processes. Graphene oxide flakes in polymers display enhanced photo-conducting properties. Graphene is normally hydrophobic and impermeable to all gases and liquids (vacuum-tight). However, when formed into a graphene oxide-based capillary membrane, both liquid water and water vapor flow through as quickly as if the membrane were not present.
In 2022, researchers evaluated the biological effects of low doses of graphene oxide on larvae and imagos of Drosophila melanogaster. Oral administration of graphene oxide at concentrations of 0.02–1% was found to have a beneficial effect on the developmental rate and hatching ability of larvae. Long-term administration of a low dose of graphene oxide extends the lifespan of Drosophila and significantly enhances resistance to environmental stresses. These results suggest that graphene oxide affects carbohydrate and lipid metabolism in adult Drosophila, and might provide a useful reference for assessing the biological effects of graphene oxide, which could play an important role in a variety of graphene-based biomedical applications.
Chemical modification
Soluble fragments of graphene can be prepared in the laboratory through chemical modification of graphite. First, microcrystalline graphite is treated with an acidic mixture of sulfuric acid and nitric acid. A series of oxidation and exfoliation steps produce small graphene plates with carboxyl groups at their edges. These are converted to acid chloride groups by treatment with thionyl chloride; next, they are converted to the corresponding graphene amide via treatment with octadecyl amine. The resulting material (circular graphene layers of thickness) is soluble in tetrahydrofuran, tetrachloromethane and dichloroethane.
Refluxing single-layer graphene oxide (SLGO) in solvents leads to size reduction and folding of individual sheets as well as loss of carboxylic group functionality, by up to 20%, indicating thermal instabilities of SLGO sheets dependent on their preparation methodology. When using thionyl chloride, acyl chloride groups result, which can then form aliphatic and aromatic amides with a reactivity conversion of around 70–80%.
Hydrazine reflux is commonly used for reducing SLGO to SLG(R), but titrations show that only around 20–30% of the carboxylic groups are lost, leaving a significant number available for chemical attachment. Analysis of SLG(R) generated by this route reveals that the system is unstable: stirring at room temperature with hydrochloric acid (< 1.0 M) leads to around 60% loss of COOH functionality. Room-temperature treatment of SLGO with carbodiimides leads to the collapse of the individual sheets into star-like clusters that exhibit poor subsequent reactivity with amines (c. 3–5% conversion of the intermediate to the final amide). It is apparent that conventional chemical treatment of carboxylic groups on SLGO generates morphological changes of individual sheets that lead to a reduction in chemical reactivity, potentially limiting their use in composite synthesis. Therefore, other chemical reaction types have been explored. SLGO has also been grafted with polyallylamine, cross-linked through epoxy groups. When filtered into graphene oxide paper, these composites exhibit increased stiffness and strength relative to unmodified graphene oxide paper.
Full hydrogenation from both sides of the graphene sheet results in graphane, while partial hydrogenation leads to hydrogenated graphene. Similarly, both-side fluorination of graphene (or chemical and mechanical exfoliation of graphite fluoride) leads to fluorographene (graphene fluoride), while partial fluorination (more generally, halogenation) provides fluorinated (halogenated) graphene.
Graphene ligand/complex
Graphene can act as a ligand to coordinate metals and metal ions by introducing functional groups. Structures of graphene ligands are similar to those of, e.g., metal-porphyrin, metal-phthalocyanine, and metal-phenanthroline complexes. Copper and nickel ions can be coordinated with graphene ligands.
Advanced graphene structures
Graphene fiber
In 2011, researchers reported a novel yet simple approach to fabricating graphene fibers from chemical vapor deposition-grown graphene films. The method was scalable and controllable, delivering tunable morphology and pore structure by controlling the evaporation of solvents with suitable surface tension. Flexible all-solid-state supercapacitors based on these graphene fibers were demonstrated in 2013.
In 2015, small graphene fragments were intercalated into the gaps formed by larger, coiled graphene sheets; after annealing, this provided pathways for conduction, while the fragments helped reinforce the fibers. The resulting fibers offered better thermal and electrical conductivity and mechanical strength. Thermal conductivity reached , while tensile strength reached .
In 2016, kilometer-scale continuous graphene fibers with outstanding mechanical properties and excellent electrical conductivity were produced by high-throughput wet-spinning of graphene oxide liquid crystals followed by graphitization through a full-scale synergetic defect-engineering strategy. The graphene fibers with superior performances promise wide applications in functional textiles, lightweight motors, microelectronic devices, etc.
A team at Tsinghua University in Beijing, led by Wei Fei of the Department of Chemical Engineering, claims to be able to create a carbon nanotube fiber with a tensile strength of .
3D graphene
In 2013, a three-dimensional honeycomb of hexagonally arranged carbon was termed 3D graphene, and self-supporting 3D graphene was also produced. 3D structures of graphene can be fabricated by using either CVD or solution-based methods. A 2016 review by Khurram and Xu et al. provided a summary of then-state-of-the-art techniques for fabrication of the 3D structure of graphene and other related two-dimensional materials. In 2013, researchers at Stony Brook University reported a novel radical-initiated crosslinking method to fabricate porous 3D free-standing architectures of graphene and carbon nanotubes using nanomaterials as building blocks without any polymer matrix as support. These 3D graphenes (all-carbon) scaffolds/foams have applications in several fields such as energy storage, filtration, thermal management, and biomedical devices and implants.
A box-shaped graphene (BSG) nanostructure appearing after mechanical cleavage of pyrolytic graphite was reported in 2016. The discovered nanostructure is a multilayer system of parallel hollow nanochannels located along the surface and having a quadrangular cross-section. The thickness of the channel walls is approximately 1 nm. Potential fields of BSG application include ultra-sensitive detectors, high-performance catalytic cells, nanochannels for DNA sequencing and manipulation, high-performance heat-sinking surfaces, rechargeable batteries of enhanced performance, nanomechanical resonators, electron-multiplication channels in emission nanoelectronic devices, and high-capacity sorbents for safe hydrogen storage.
Three dimensional bilayer graphene has also been reported.
Pillared graphene
Pillared graphene is a hybrid carbon structure consisting of an oriented array of carbon nanotubes connected at each end to a sheet of graphene. It was first described theoretically by George Froudakis and colleagues at the University of Crete in Greece in 2008. Pillared graphene has not yet been synthesized in the laboratory, but it has been suggested that it may have useful electronic properties or serve as a hydrogen storage material.
Reinforced graphene
Graphene reinforced with embedded carbon nanotube reinforcing bars ("rebar") is easier to manipulate, while improving the electrical and mechanical qualities of both materials.
Functionalized single- or multi-walled carbon nanotubes are spin-coated on copper foils and then heated and cooled, using the nanotubes themselves as the carbon source. Under heating, the functional carbon groups decompose into graphene, while the nanotubes partially split and form in-plane covalent bonds with the graphene, adding strength. π–π stacking domains add more strength. The nanotubes can overlap, making the material a better conductor than standard CVD-grown graphene. The nanotubes effectively bridge the grain boundaries found in conventional graphene. The technique eliminates the traces of substrate on which later-separated sheets were deposited using epitaxy.
Stacks of a few layers have been proposed as a cost-effective and physically flexible replacement for indium tin oxide (ITO) used in displays and photovoltaic cells.
Molded graphene
In 2015, researchers from the University of Illinois at Urbana-Champaign (UIUC) developed a new approach for forming 3D shapes from flat, 2D sheets of graphene. A film of graphene that had been soaked in solvent to make it swell and become malleable was overlaid on an underlying substrate "former". The solvent evaporated over time, leaving behind a layer of graphene that had taken on the shape of the underlying structure. In this way, they were able to produce a range of relatively intricate micro-structured shapes. Features vary from 3.5 to 50 μm. Pure graphene and gold-decorated graphene were each successfully integrated with the substrate.
Specialized graphene configurations
Graphene aerogel
An aerogel made of graphene layers separated by carbon nanotubes was measured at 0.16 milligrams per cubic centimeter. A solution of graphene and carbon nanotubes in a mold is freeze-dried to dehydrate the solution, leaving the aerogel. The material has superior elasticity and absorption. It can recover completely after more than 90% compression, and absorb up to 900 times its weight in oil, at a rate of 68.8 grams per second.
Graphene nanocoil
In 2015, a coiled form of graphene was discovered in graphitic carbon (coal). The spiraling effect is produced by defects in the material's hexagonal grid that cause it to spiral along its edge, mimicking a Riemann surface, with the graphene surface approximately perpendicular to the axis. When voltage is applied to such a coil, current flows around the spiral, producing a magnetic field. The phenomenon applies to spirals with either zigzag or armchair patterns, although with different current distributions. Computer simulations indicated that a conventional spiral inductor of 205 microns in diameter could be matched by a nanocoil just 70 nanometers wide, with a field strength reaching as much as 1 tesla.
The nano-solenoids analyzed through computer models at Rice University should be capable of producing powerful magnetic fields of about 1 tesla, about the same as the coils found in typical loudspeakers, according to Yakobson and his team – and about the same field strength as some MRI machines. They found the magnetic field would be strongest in the hollow, nanometer-wide cavity at the spiral's center.
A solenoid made with such a coil behaves as a quantum conductor whose current distribution between the core and exterior varies with applied voltage, resulting in nonlinear inductance.
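A rough classical cross-check of the ~1 tesla figure (a sketch under stated assumptions, not the Rice group's calculation): treating the nanocoil as an ideal solenoid, B = μ₀nI, with one graphene turn per interlayer spacing and an illustrative sub-milliampere current:

```python
import scipy.constants as sc

pitch = 0.335e-9   # m, one turn per graphite interlayer spacing (assumed)
n = 1 / pitch      # turns per meter
I = 0.3e-3         # A, illustrative current (assumed)

B = sc.mu_0 * n * I
print(f"B ~ {B:.2f} T")   # on the order of 1 tesla
```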
Crumpled graphene
In 2016, Brown University introduced a method for "crumpling" graphene, adding wrinkles to the material on a nanoscale. This was achieved by depositing layers of graphene oxide onto a shrink film that was then shrunk; the film was dissolved, and the process was repeated on another sheet of film. The crumpled graphene became superhydrophobic and, when used as a battery electrode, showed as much as a 400% increase in electrochemical current density.
Mechanical synthesis
A rapidly increasing list of production techniques has been developed to enable graphene's use in commercial applications.
Isolated 2D crystals cannot be grown via chemical synthesis beyond small sizes even in principle, because the rapid growth of phonon density with increasing lateral size forces 2D crystallites to bend into the third dimension. In all cases, graphene must bond to a substrate to retain its two-dimensional shape.
Bottom-up and top-down methods
Small graphene structures, such as graphene quantum dots and nanoribbons, can be produced by "bottom-up" methods that assemble the lattice from organic molecule monomers (e.g. citric acid, glucose). "Top-down" methods, on the other hand, cut bulk graphite and graphene materials with strong chemicals (e.g. mixed acids).
Micro-mechanical cleavage
The most famous, clean and rather straightforward method of isolating graphene sheets, called micro-mechanical cleavage or, more colloquially, the Scotch tape method, was introduced by Novoselov et al. in 2004. It uses adhesive tape to mechanically cleave high-quality graphite crystals into successively thinner platelets. Other exfoliation methods also exist.
Exfoliation techniques
Mechanical exfoliation
Geim and Novoselov initially used adhesive tape to pull graphene sheets away from graphite. Achieving single layers typically requires multiple exfoliation steps. After exfoliation, the flakes are deposited on a silicon wafer. Crystallites larger than 1 mm and visible to the naked eye can be obtained.
As of 2014, exfoliation produced graphene with the lowest number of defects and highest electron mobility. Alternatively, a sharp single-crystal diamond wedge can penetrate onto the graphite source to cleave layers. In the same year, defect-free, unoxidized graphene-containing liquids were made from graphite using mixers that produce local shear rates greater than .
Shear exfoliation is another method, in which the use of a rotor-stator mixer has made scalable production of defect-free graphene possible. It has been shown that, as turbulence is not necessary for mechanical exfoliation, ResonantAcoustic mixing or low-speed ball milling is also effective in producing high-yield, water-soluble graphene.
Liquid phase exfoliation
Liquid phase exfoliation (LPE) is a relatively simple method that involves dispersing graphite in a liquid medium to produce graphene by sonication or high shear mixing, followed by centrifugation. Restacking is an issue with this technique unless solvents with appropriate surface energy are used (e.g. NMP). Adding a surfactant to a solvent prior to sonication prevents restacking by adsorbing to the graphene's surface. This produces a higher graphene concentration, but removing the surfactant requires chemical treatments.
LPE results in nanosheets with a broad size distribution and thicknesses roughly in the range of 1-10 monolayers. However, liquid cascade centrifugation can be used to size-select the suspensions and achieve monolayer enrichment.
Sonicating graphite at the interface of two immiscible liquids, most notably heptane and water, produced macro-scale graphene films. The graphene sheets are adsorbed to the high-energy interface between the materials and are kept from restacking. The sheets are up to about 95% transparent and conductive.
With definite cleavage parameters, the box-shaped graphene (BSG) nanostructure can be prepared on graphite crystal. A major advantage of LPE is that it can be used to exfoliate many inorganic 2D materials beyond graphene, e.g. BN, MoS2, WS2.
Exfoliation with supercritical carbon dioxide
Liquid-phase exfoliation can also be done by a less-known process of intercalating supercritical carbon dioxide (scCO2) into the interstitial spaces in the graphite lattice, followed by rapid depressurization. The scCO2 intercalates easily inside the graphite lattice at a pressure of roughly 100 atm. Carbon dioxide turns gaseous as soon as the vessel is depressurized and makes the graphite explode into few-layered graphene.
This method may have multiple advantages: it is non-toxic, the graphite does not have to be chemically treated in any way before the process, and the whole process can be completed in a single step, in contrast to other exfoliation methods.
Splitting monolayer carbon allotropes
Graphene can be created by opening carbon nanotubes by cutting or etching. In one such method, multi-walled carbon nanotubes were cut open in solution by action of potassium permanganate and sulfuric acid. In 2014, carbon nanotube-reinforced graphene was made via spin coating and annealing functionalized carbon nanotubes.
Another approach sprays buckyballs at supersonic speeds onto a substrate. The balls crack open upon impact, and the resulting unzipped cages then bond together to form a graphene film.
Chemical synthesis
Graphite oxide reduction
P. Boehm reported producing monolayer flakes of reduced graphene oxide in 1962. Rapid heating of graphite oxide and exfoliation yields highly dispersed carbon powder with a few percent of graphene flakes.
Another method is the reduction of graphite oxide monolayer films, e.g. by hydrazine with annealing in argon/hydrogen, with an almost intact carbon framework that allows efficient removal of functional groups. Measured charge carrier mobility exceeded 1,000 cm²/(V·s) (0.1 m²/(V·s)).
Burning a graphite oxide coated DVD produced a conductive graphene film (1,738 siemens per meter) and specific surface area (1,520 square meters per gram) that was highly resistant and malleable.
A dispersed reduced graphene oxide suspension was synthesized in water by a hydrothermal dehydration method without using any surfactant. The approach is facile, industrially applicable, environmentally friendly, and cost-effective. Viscosity measurements confirmed that the graphene colloidal suspension (graphene nanofluid) exhibits Newtonian behavior, with the viscosity showing a close resemblance to that of water.
Molten salts
Graphite particles can be corroded in molten salts to form a variety of carbon nanostructures including graphene. Hydrogen cations, dissolved in molten lithium chloride, can be discharged on cathodically-polarized graphite rods, which then intercalate, peeling graphene sheets. The graphene nanosheets produced displayed a single-crystalline structure with a lateral size of several hundred nanometers and a high degree of crystallinity and thermal stability.
Electrochemical synthesis
Electrochemical synthesis can exfoliate graphene. Varying a pulsed voltage controls thickness, flake area, and number of defects and affects its properties. The process begins by bathing the graphite in a solvent for intercalation. The process can be tracked by monitoring the solution's transparency with an LED and photodiode.
Hydrothermal self-assembly
Graphene has been prepared by using sugars such as glucose and fructose. This substrate-free "bottom-up" synthesis is safer, simpler and more environmentally friendly than exfoliation. The method can control the thickness, from monolayer to multilayers, and is known as the "Tang-Lau method".
Sodium ethoxide pyrolysis
Gram-quantities were produced by the reaction of ethanol with sodium metal, followed by pyrolysis and washing with water.
Microwave-assisted oxidation
In 2012, microwave energy was reported to directly synthesize graphene in one step. This approach avoids the use of potassium permanganate in the reaction mixture. It was also reported that, with microwave-radiation assistance, graphene oxide with or without holes can be synthesized by controlling the irradiation time. Microwave heating can dramatically shorten the reaction time from days to seconds.
Graphene can also be made by microwave assisted hydrothermal pyrolysis.
Thermal decomposition of silicon carbide
Heating silicon carbide (SiC) to high temperatures under low pressures (c. 10⁻⁶ torr, or 10⁻⁴ Pa) reduces it to graphene.
Vapor deposition and growth techniques
Chemical vapor deposition
Epitaxy
Epitaxial graphene growth on silicon carbide is a wafer-scale technique to produce graphene. Epitaxial graphene may be coupled to surfaces weakly enough (by the active valence electrons that create van der Waals forces) to retain the two-dimensional electronic band structure of isolated graphene.
Dipping a normal silicon wafer coated with a layer of germanium (Ge) in dilute hydrofluoric acid strips the naturally forming germanium oxide groups, creating hydrogen-terminated germanium. CVD can then coat that surface with graphene.
Graphene can also be synthesized directly on the high-dielectric-constant (high-κ) insulator TiO2: a two-step CVD process has been shown to grow graphene directly on TiO2 crystals or exfoliated TiO2 nanosheets without using any metal catalyst.
Metal substrates
CVD graphene can be grown on metal substrates including ruthenium, iridium, nickel and copper.
Roll-to-roll
In 2014, a two-step roll-to-roll manufacturing process was announced. The first roll-to-roll step produces the graphene via chemical vapor deposition. The second step binds the graphene to a substrate.
Cold wall
Growing graphene in an industrial resistive-heating cold wall CVD system was claimed to produce graphene 100 times faster than conventional CVD systems, cut costs by 99%, and produce material with enhanced electronic qualities.
Wafer scale CVD graphene
CVD graphene is scalable and has been grown on a deposited Cu thin-film catalyst on 100 to 300 mm standard Si/SiO2 wafers on an Aixtron Black Magic system. Monolayer graphene coverage of >95% was achieved on 100 to 300 mm wafer substrates with negligible defects, confirmed by extensive Raman mapping.
Solvent interface trapping method (SITM)
As reported by a group led by D. H. Adamson, graphene can be produced from natural graphite while preserving the integrity of the sheets using the solvent interface trapping method (SITM). SITM uses a high-energy interface, such as that between oil and water, to exfoliate graphite to graphene. Stacked graphite delaminates, or spreads, at the oil/water interface to produce few-layer graphene in a thermodynamically favorable process, in much the same way as small-molecule surfactants spread to minimize the interfacial energy. In this way, graphene behaves like a 2D surfactant. SITM has been reported for a variety of applications such as conductive polymer-graphene foams, conductive polymer-graphene microspheres, conductive thin films and conductive inks.
Carbon dioxide reduction
Magnesium combusts in carbon dioxide in a highly exothermic oxidation-reduction reaction, producing carbon nanoparticles including graphene and fullerenes.
Supersonic spray
Supersonic acceleration of droplets through a Laval nozzle was used to deposit reduced graphene oxide on a substrate. The energy of the impact rearranges the carbon atoms into flawless graphene.
Laser
In 2014, an infrared laser was used to produce patterned porous three-dimensional laser-induced graphene (LIG) film networks from commercial polymer films. The resulting material exhibits high electrical conductivity and surface area. The laser-induction process is compatible with roll-to-roll manufacturing processes. A similar material, laser-induced graphene fibers (LIGF), was reported in 2018.
Flash Joule heating
In 2019, flash Joule heating (transient high-temperature electrothermal heating) was discovered to be a method to synthesize turbostratic graphene in bulk powder form. The method involves electrothermally converting various carbon sources, such as carbon black, coal, and food waste into micron-scale flakes of graphene. More recent works demonstrated the use of mixed plastic waste, waste rubber tires, and pyrolysis ash as carbon feedstocks. The graphenization process is kinetically controlled, and the energy dose is chosen to preserve the carbon in its graphenic state (excessive energy input leads to subsequent graphitization through annealing).
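Since the energy dose is the controlling parameter, a back-of-the-envelope estimate of the specific energy delivered by a capacitor-bank discharge can be useful for orientation. The sketch below uses purely illustrative component values, not parameters from any published flash Joule heating protocol.

```python
# Hypothetical sketch: back-of-the-envelope energy dose for a flash Joule
# heating pulse delivered from a capacitor bank. All component values are
# illustrative assumptions, not figures from any published protocol.
def capacitor_energy_joules(capacitance_f, voltage_v):
    """Energy stored in a capacitor bank: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

def energy_dose_kj_per_g(capacitance_f, voltage_v, sample_mass_g):
    """Specific energy dose if the full stored energy dissipates in the
    carbon sample (ignores wiring and contact losses)."""
    return capacitor_energy_joules(capacitance_f, voltage_v) / sample_mass_g / 1000.0

# Example: a 0.06 F bank charged to 400 V discharged through 1 g of carbon black.
print(f"{energy_dose_kj_per_g(0.06, 400.0, 1.0):.1f} kJ/g")
```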
Ion implantation
Accelerating carbon ions in an electric field into a semiconductor made of thin nickel films on an SiO2/Si substrate creates a wafer-scale, wrinkle-, tear- and residue-free graphene layer at a relatively low temperature of 500 °C.
CMOS-compatible graphene
Integration of graphene in the widely employed CMOS fabrication process demands its transfer-free direct synthesis on dielectric substrates at temperatures below 500 °C. At the IEDM 2018, researchers from University of California, Santa Barbara, demonstrated a novel CMOS-compatible graphene synthesis process at 300 °C suitable for back-end-of-line (BEOL) applications. The process involves pressure-assisted solid-state diffusion of carbon through a thin-film of metal catalyst. The synthesized large-area graphene films were shown to exhibit high quality (via Raman characterization) and similar resistivity values when compared with high-temperature CVD synthesized graphene films of the same cross-section down to widths of 20 nm.
Simulation
In addition to experimental investigation of graphene and graphene-based devices, numerical modeling and simulation of graphene has also been an important research topic. The Kubo formula provides an analytic expression for the graphene's conductivity and shows that it is a function of several physical parameters including wavelength, temperature, and chemical potential. Moreover, a surface conductivity model, which describes graphene as an infinitesimally thin (two-sided) sheet with a local and isotropic conductivity, has been proposed. This model permits the derivation of analytical expressions for the electromagnetic field in the presence of a graphene sheet in terms of a dyadic Green function (represented using Sommerfeld integrals) and exciting electric current.
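For orientation, one widely used closed form for the intraband (Drude-like) part of this conductivity, written here as a sketch in the e^(-iωt) time convention and following Kubo-formula surface-conductivity treatments (prefactors and sign conventions vary between papers), is:

```latex
\sigma_{\mathrm{intra}}(\omega)
  = \frac{2 e^{2} k_{B} T}{\pi \hbar^{2}}
    \ln\!\left[ 2 \cosh\!\left( \frac{\mu_{c}}{2 k_{B} T} \right) \right]
    \frac{i}{\omega + i \tau^{-1}}
```

Here e is the elementary charge, k_B the Boltzmann constant, T the temperature, ħ the reduced Planck constant, μ_c the chemical potential, and τ a phenomenological relaxation time; the full Kubo expression adds an interband term that becomes important at optical frequencies.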
Even though these analytical models and methods can provide results for several canonical problems for benchmarking purposes, many practical problems involving graphene, such as the design of arbitrarily shaped electromagnetic devices, are analytically intractable. With the recent advances in the field of computational electromagnetics (CEM), various accurate and efficient numerical methods have become available for the analysis of electromagnetic field/wave interactions on graphene sheets and/or graphene-based devices. Comprehensive summaries of the computational tools developed for analyzing graphene-based devices and systems have also been published.
Graphene analogs
Graphene analogs (also referred to as "artificial graphene") are two-dimensional systems that exhibit properties similar to graphene. They have been studied intensively since the discovery of graphene in 2004. Researchers aim to develop systems in which the physics is easier to observe and manipulate than in graphene. In those systems, electrons are not always the particles that are used. They might be optical photons, microwave photons, plasmons, microcavity polaritons, or even atoms. Also, the honeycomb structure in which those particles evolve can be of a different nature than carbon atoms in graphene. It can be, respectively, a photonic crystal, an array of metallic rods, metallic nanoparticles, a lattice of coupled microcavities, or an optical lattice.
Applications
Graphene is a transparent and flexible conductor that holds great promise for various material/device applications, including solar cells, light-emitting diodes (LED), integrated photonic circuit devices, touch panels, and smart windows or phones. Smartphone products with graphene touch screens are already on the market.
In 2013, Head announced their new range of graphene tennis racquets.
As of 2015, there is one product available for commercial use: a graphene-infused printer powder. Many other uses for graphene have been proposed or are under development, in areas including electronics, biological engineering, filtration, lightweight/strong composite materials, photovoltaics and energy storage. Graphene is often produced as a powder and as a dispersion in a polymer matrix. This dispersion is supposedly suitable for advanced composites, paints and coatings, lubricants, oils and functional fluids, capacitors and batteries, thermal management applications, display materials and packaging, solar cells, inks and 3D-printer materials, and barriers and films.
On August 2, 2016, Briggs Automotive Company announced that its new Mono model was made with graphene, a first for both a street-legal track car and a production car.
In January 2018, graphene-based spiral inductors exploiting kinetic inductance at room temperature were first demonstrated at the University of California, Santa Barbara, led by Kaustav Banerjee. These inductors were predicted to allow significant miniaturization in radio-frequency integrated circuit applications.
The potential of epitaxial graphene on SiC for metrology has been shown since 2010, displaying quantum Hall resistance quantization accuracy of three parts per billion in monolayer epitaxial graphene. Over the years precisions of parts-per-trillion in the Hall resistance quantization and giant quantum Hall plateaus have been demonstrated. Developments in the encapsulation and doping of epitaxial graphene have led to the commercialization of epitaxial graphene quantum resistance standards.
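The resistance values underlying this metrology follow directly from fundamental constants: graphene's half-integer quantum Hall effect produces plateaus at R_xy = R_K/ν for filling factors ν = ±2, ±6, ±10, ..., where R_K = h/e² is the von Klitzing constant. A minimal sketch of the arithmetic:

```python
# Sketch: the quantized Hall resistance values that make epitaxial graphene
# useful for metrology. Constants are the exact 2019 SI values.
h = 6.62607015e-34      # Planck constant, J s (exact)
e = 1.602176634e-19     # elementary charge, C (exact)

R_K = h / e**2          # von Klitzing constant, ~25812.807 ohm
# Graphene's half-integer quantum Hall effect puts plateaus at
# R_xy = R_K / nu with filling factors nu = +/-2, +/-6, +/-10, ...
for nu in (2, 6, 10):
    print(f"nu = {nu:2d}: R_xy = {R_K / nu:10.3f} ohm")
```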
Novel uses for graphene continue to be researched and explored. One such use is in combination with water-based epoxy resins to produce anticorrosive coatings. The van der Waals nature of graphene and other two-dimensional (2D) materials also permits van der Waals heterostructures and integrated circuits based on van der Waals integration of 2D materials.
Graphene is utilized in detecting gases and chemicals in environmental monitoring, developing highly sensitive biosensors for medical diagnostics, and creating flexible, wearable sensors for health monitoring. Graphene's transparency also enhances optical sensors, making them more effective in imaging and spectroscopy.
Toxicity
One review on graphene toxicity published in 2016 by Lalwani et al. summarizes the in vitro, in vivo, antimicrobial and environmental effects and highlights the various mechanisms of graphene toxicity. Another review published in 2016 by Ou et al. focused on graphene-family nanomaterials (GFNs) and revealed several typical mechanisms such as physical destruction, oxidative stress, DNA damage, inflammatory response, apoptosis, autophagy, and necrosis.
A 2020 study showed that the toxicity of graphene is dependent on several factors such as shape, size, purity, post-production processing steps, oxidative state, functional groups, dispersion state, synthesis methods, route and dose of administration, and exposure times.
In 2014, research at Stony Brook University showed that graphene nanoribbons, graphene nanoplatelets, and graphene nano–onions are non-toxic at concentrations up to 50 μg/ml. These nanoparticles do not alter the differentiation of human bone marrow stem cells towards osteoblasts (bone) or adipocytes (fat), suggesting that at low doses, graphene nanoparticles are safe for biomedical applications. In 2013, research at Brown University found that 10 μm few-layered graphene flakes can pierce cell membranes in solution. They were observed to enter initially via sharp and jagged points, allowing graphene to be internalized in the cell. The physiological effects of this remain unknown, and this remains a relatively unexplored field.
| Physical sciences | Group 14 | Chemistry |
911953 | https://en.wikipedia.org/wiki/Turbidite | Turbidite | A turbidite is the geologic deposit of a turbidity current, which is a type of amalgamation of fluidal and sediment gravity flow responsible for distributing vast amounts of clastic sediment into the deep ocean.
Sequencing
Turbidites were first properly described by Arnold H. Bouma (1962), who studied deepwater sediments and recognized particular "fining-up intervals" within deep-water, fine-grained shales that started at pebble conglomerates and terminated in shales. These intervals were anomalous because it had historically been assumed that no mechanism existed by which tractional flow could carry and deposit coarse-grained sediments into the abyssal depths.
Bouma cycles begin with an erosional contact of a coarse lower bed of pebble to granule conglomerate in a sandy matrix, and grade up through coarse then medium plane parallel sandstone; through cross-bedded sandstone; rippled cross-bedded sand/silty sand, and finally laminar siltstone and shale. This vertical succession of sedimentary structures, bedding, and changing lithology is representative of strong to waning flow regime currents and their corresponding sedimentation.
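The vertical succession just described is conventionally labeled as divisions Ta through Te. The sketch below encodes that standard shorthand as a simple lookup table; the one-line descriptions paraphrase the succession above rather than quoting any particular source.

```python
# Sketch: the idealized Bouma sequence encoded as a lookup table, from the
# basal (highest-energy) division Ta up to the pelagic cap Te.
BOUMA_DIVISIONS = {
    "Ta": "massive to graded coarse sand, erosional base (waning high-energy flow)",
    "Tb": "plane-parallel laminated sandstone (upper flow regime)",
    "Tc": "ripple cross-laminated fine sand/silty sand (lower flow regime)",
    "Td": "laminated siltstone",
    "Te": "mud/shale, grading into background pelagic sediment",
}

def describe(observed):
    """Print the interpretation for each division logged in an outcrop,
    flagging gaps where erosion or position on the lobe removed divisions."""
    for division in ("Ta", "Tb", "Tc", "Td", "Te"):
        mark = "present" if division in observed else "absent "
        print(f"{division} [{mark}] {BOUMA_DIVISIONS[division]}")

describe({"Ta", "Tb", "Te"})  # e.g. an incomplete cycle missing Tc and Td
```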
It is unusual to see a complete Bouma cycle, as successive turbidity currents may erode the unconsolidated upper sequences. Alternatively, the entire sequence may not be present depending on whether the exposed section was at the edge of the turbidity current lobe (where it may be present as a thin deposit), or upslope from the deposition centre and manifested as a scour channel filled with fine sands grading up into a pelagic ooze.
It is now recognized that the vertical progression of sedimentary structures described by Bouma applies to turbidites deposited by low-density turbidity currents. As the sand concentration of a flow increases, grain-to-grain collisions within the turbid suspension create dispersive pressures that become important in hindering further settling of grains. As a consequence, a slightly different set of sedimentary structures develops in turbidites deposited by high-density turbidity currents. This different set of structures is known as the Lowe sequence, which is a descriptive classification that complements, but does not replace, the Bouma sequence.
Formation
Turbidites are sediments which are transported and deposited by density flow, not by tractional or frictional flow.
The distinction is that, in a normal river or stream bed, particles of rock are carried along by frictional drag of water on the particle (known as tractional flow). The water must be travelling at a certain velocity in order to suspend the particle in the water and push it along. The greater the size or density of the particle relative to the fluid in which it is travelling, the higher the water velocity required to suspend it and transport it.
Density-based flow, however, occurs when liquefaction of sediment during transport causes a change to the density of the fluid. This is usually achieved by highly turbulent liquids which have a suspended load of fine grained particles forming a slurry. In this case, larger fragments of rock can be transported at water velocities too low to otherwise do so because of the lower density contrast (that is, the water plus sediment has a higher density than the water and is therefore closer to the density of the rock).
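The effect of the reduced density contrast can be made quantitative with Stokes' law, in which the settling velocity of a grain scales linearly with the difference between grain and fluid density. The sketch below uses illustrative values; Stokes' law strictly applies only to small grains at low Reynolds number, and a real slurry would also have elevated viscosity, slowing settling further.

```python
# Sketch: Stokes settling velocity, showing how raising the fluid density
# (clear water vs. a turbid slurry) lowers the settling speed of a grain.
def stokes_settling_velocity(grain_radius_m, grain_density, fluid_density,
                             dynamic_viscosity=1.0e-3):
    """v = 2 * (rho_p - rho_f) * g * r^2 / (9 * mu), in m/s."""
    g = 9.81
    return (2.0 * (grain_density - fluid_density) * g * grain_radius_m**2
            / (9.0 * dynamic_viscosity))

r = 50e-6          # 50 micron quartz silt grain
quartz = 2650.0    # grain density, kg/m^3
print(f"clear water : {stokes_settling_velocity(r, quartz, 1000.0)*1000:.2f} mm/s")
print(f"dense slurry: {stokes_settling_velocity(r, quartz, 1800.0)*1000:.2f} mm/s")
```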
This condition occurs in many environments aside from simply the deep ocean, where turbidites are particularly well represented. Lahars on the side of volcanoes, mudslides and pyroclastic flows all create density-based flow situations and, especially in the latter, can create sequences which are strikingly similar to turbidites.
Turbidites in sediments can occur in carbonate as well as siliciclastic sequences.
Classic, low-density turbidites are characterized by graded bedding, current ripple marks, climbing ripple laminations, alternating sequences with pelagic sediments, distinct fauna changes between the turbidite and native pelagic sediments, sole markings, thick sediment sequences, regular bedding, and an absence of shallow-water features. A different vertical progression of sedimentary structures characterize high-density turbidites.
Massive accumulations of turbidites and other deep-water deposits may result in the formation of submarine fans. Sedimentary models of such fan systems typically are subdivided into upper, mid, and lower fan sequences each with distinct sand-body geometries, sediment distributions, and lithologic characteristics.
Turbidite deposits typically occur in foreland basins.
Submarine fan models
Submarine fan models are often based on source-to-sink (S2S) concepts linking sediment source areas and sediment routing systems to the eventual depositional environments of turbidite deposits. They aim to provide insights into the relationships between different geologic processes and turbidite fan systems.
Geologic processes influencing turbidite systems can be of either allogenic or autogenic origin, and submarine fan models are designed to capture the impact of these processes on reservoir presence, reservoir distribution, morphology, and architecture of turbidite deposits. Significant allogenic forcings include the effects of sea level fluctuations, regional tectonic events, sediment supply type, sediment supply rate, and sediment concentration. Autogenic controls can include seafloor topography, confinement, and slope gradients.
There are about 26 submarine fan models. Common examples include the classical single-source suprafan model, models depicting fans with attached lobes, the detached-lobe fan model, and models relating the response of turbidite systems to varying grain sizes and different feeder systems.
The integration of subsurface datasets such as 3D/4D seismic reflection, well logs, and core data as well as modern seafloor bathymetry studies, numerical forward stratigraphic modeling, and flume tank experiments are enabling improvements and more realistic development of submarine fan models across different basins.
Importance
Turbidites provide a mechanism for assigning a tectonic and depositional setting to ancient sedimentary sequences as they usually represent deep-water rocks formed offshore of a convergent margin, and generally require at least a sloping shelf and some form of tectonism to trigger density-based avalanches. Density currents may be triggered in areas of high sediment supply by gravitational failure alone. Turbidites can represent a high resolution record of seismicity, and terrestrial storm/flood events depending on the connectivity of canyon/channel systems to terrestrial sediment sources.
Turbidites from lakes and fjords are also important as they can provide chronologic evidence of the frequency of landslides and the earthquakes that presumably formed them, by dating using radiocarbon or varves above and below the turbidite.
Economic importance
Turbidite sequences are classic hosts for lode gold deposits, the prime example being Bendigo and Ballarat in Victoria, Australia, where more than 2,600 tons of gold have been extracted from saddle-reef deposits hosted in shale sequences from a thick succession of Cambrian-Ordovician turbidites. Proterozoic gold deposits are also known from turbidite basin deposits.
Lithified accumulations of turbidite deposits may, in time, become hydrocarbon reservoirs and the petroleum industry makes strenuous efforts to predict the location, overall shape, and internal characteristics of these sediment bodies in order to efficiently develop fields as well as explore for new reserves.
| Physical sciences | Sedimentary rocks | Earth science |
913362 | https://en.wikipedia.org/wiki/Dall%20sheep | Dall sheep | Ovis dalli, also known as the Dall sheep or thinhorn sheep, is a species of wild sheep native to northwestern North America. Ovis dalli contains two subspecies: Ovis dalli dalli and Ovis dalli stonei. O. dalli live in mountainous alpine habitats distributed across northwestern British Columbia, the Yukon, Northwest Territories and Alaska. They browse a variety of plants, such as grasses, sedges and even shrubs, such as willow, during different times of the year. They also acquire minerals to supplement their diet from mineral licks. Like other Ovis species, the rams engage in dominance contests with their horns.
Taxonomy and genetics
The specific name dalli is derived from William Healey Dall (1845–1927), an American naturalist. The common name, Dall's sheep or Dall sheep, is often used to refer to the nominate subspecies, O. d. dalli. The other subspecies, O. d. stonei, is called the Stone sheep.
Originally, the subspecies O. d. dalli and O. d. stonei were distinguished by the color of their fur. However, the pelage-based designations have been shown to be questionable. Complete colour intergradation occurs between both O. dalli subspecies (i.e., Dall's and Stone's), ranging between white and dark morphs of the species. Intermediately coloured populations, called Fannin sheep, were originally (incorrectly) identified as a unique subspecies (O. d. fannini) distributed in the Pelly Mountains and Ogilvie Mountains of the Yukon Territory. Fannin sheep have more recently been confirmed as admixed individuals with predominantly Dall's sheep genetic origins. Previous mitochondrial DNA evidence had shown no molecular division along earlier subspecies boundaries, although evidence from nuclear DNA may provide some support. Current taxonomy using mitochondrial DNA information may be less reliable due to hybridization between O. dalli and O. canadensis recorded in evolutionary history.
Current genetic analyses using a genome-wide set of single nucleotide polymorphisms (SNPs) have confirmed new subspecies range boundaries for both Dall's and Stone's sheep, updating the previous pelage-based and mitochondrial DNA classifications.
Description
O. dalli stand about at the shoulder. They are off-white in color, and their coat consists of a fine wool undercoat and stiff, long, and hollow guard hairs. Their winter coats can be over thick. O. dalli can live to be 12 to 16 years of age.
O. dalli are sexually dimorphic, which means rams and ewes look different. Rams are larger than ewes and typically weigh between at maturity. Ewes weigh approximately on average. During the winter, adult sheep may lose up to 16% of their body mass, and lambs and yearlings as much as 40% depending on winter weather severity. O. dalli begin growing horns at about two months old. Ewes have small, slender horns compared to the massive, curling horns of rams. Young rams resemble ewes until they are about 3 years of age. At this point, their horns begin to grow much faster and larger than ewes' horns.
Adult male O. dalli have thick, curling horns and are easily distinguished by them. The horns grow steadily from spring to early fall and pause in winter; this start-and-stop growth produces a pattern of rings called annuli, which can be used to help determine age.
Natural history
Ecology
The sheep inhabit the subarctic and arctic mountain ranges of Alaska, the Yukon Territory, the Mackenzie Mountains in the western Northwest Territories, and central and northern British Columbia. O. dalli are found in areas with a combination of dry alpine tundra, meadows, and steep or rugged ground. This combination allows for both grazing and escape from predators.
O. dalli can often be observed along the Seward Highway south of Anchorage, Alaska, within Denali National Park and Preserve (which was created in 1917 to preserve the sheep from overhunting), at Sheep Mountain in Kluane National Park and Reserve, in Tatshenshini-Alsek Provincial Park in northwestern British Columbia, and near Faro, Yukon.
Primary predators of this sheep are wolf packs, coyotes, black bears, and grizzly bears; golden eagles are predators of the young. O. dalli have been known to butt gray wolves off the face of cliffs.
Social structure
Rams and ewes are rarely found in the same groups outside of the mating season, or rut, which occurs from mid-November through mid-December. For most of the year, rams feed in the best foraging areas to enhance their reproductive fitness. During spring and summer, ewes are more likely to select areas such as steep, rocky slopes with lower predation risk to raise offspring.
Social order and dominance rank is maintained in ram groups through a variety of behaviors including head-on collisions. These dramatic clashes involve each ram getting a running start before colliding, horns-first, into one another. Other behaviors associated with establishing social order include leg kicks, bluff charges, and dominance mounting. Most of this behavior establishes order year-round, but clashes between males with similar horn sizes intensify as the rut approaches. Ewes occasionally engage in similar competitive behavior over feeding or bedding sites. Young sheep practice such interactions as part of their play. While rams do clash horns, they do so to establish order, not to fight over possession of ewes.
Rams are known to occupy up to six seasonal ranges, including different areas used during autumn, rut (or mating season from mid-November to mid-December), midwinter, late winter/spring, and summer, as well as spending time at salt licks.
For most of the year, ewes select areas free of snow and close to forage. After lambs are born in May, close proximity to escape terrain as well as nearby forage are important in habitat selection. Ewes and lambs will travel farther from escape terrain to forage when in larger groups.
In the summer, food has a high variety and is abundant. In the winter, food is limited to what is available in snow-free areas, such as frozen grasses, sedges, lichens, or mosses. O. dalli will travel long distances in the spring to visit mineral licks to supplement their diet.
Relationship with humans
Hunting
The Inupiat people have a long history of hunting O. dalli that dates back to at least the 16th century. Sheep are valued for their skin, which is used for warm clothing, and their meat, especially in times when caribou are not available. Historically, the sheep were hunted in summer by foot and in winter by dog sled teams. Today, the rugged terrain in which they live still requires foot travel to reach these animals. The dependence on O. dalli for meat and clothing fluctuates with caribou populations. Caribou herds declined considerably in the 1940s, and O. dalli became an important harvest species. Since the 1990s, caribou populations have been large enough to sustain people. Consequently, subsistence harvest of O. dalli is lower now than in the 1940s, but sheep continue to be an important meat source when caribou migration routes shift during the winter or between years.
Where sport hunting is allowed in Alaska's national preserves, hunters can harvest mature O. dalli rams that have full-curl or greater horns, have both horn tips broken off, or are eight years of age or older.
Climate change
Changes in O. dalli abundance, distribution, composition and health may indicate changes happening with other species and ecosystem processes. The sheep live in alpine, or high mountain, areas. These areas are expected to experience significant changes associated with climate change. Changes may include shifts in locations of plant communities (e.g., an increase in shrubs in alpine areas), diversity of plant species (e.g., loss of important forage species for sheep), and local weather patterns (such as increased incidence of high winter snowfall and icing events), which may affect sheep distribution and abundance.
Some species are expected to benefit from climate change while others will not. Shrubs and woody plants typically dominate plant communities at lower elevations. As elevation increases, the dominant plant community transitions to one dominated by low-growing grasses, flowers, and lichens. Warming climate trends, longer growing seasons, and changes in precipitation have the potential to allow woody plant species to find suitable habitat at higher elevations.
As a result, low-growing alpine species may be out-competed or shaded by the encroaching woody plants. Changes in the seasonal availability and diversity of alpine plants may affect O. dalli populations by altering sheep diets and consequently where they can live in mountain parks, as well as ewe pregnancy rates and lamb growth and survival.
| Biology and health sciences | Bovidae | Animals |
914432 | https://en.wikipedia.org/wiki/Omega%20Centauri | Omega Centauri | Omega Centauri (ω Cen, NGC 5139, or Caldwell 80) is a globular cluster in the constellation of Centaurus that was first identified as a non-stellar object by Edmond Halley in 1677. Located at a distance of , it is the largest known globular cluster in the Milky Way at a diameter of roughly 150 light-years. It is estimated to contain approximately 10 million stars, with a total mass of 4 million solar masses, making it the most massive known globular cluster in the Milky Way.
Omega Centauri is very different from most other galactic globular clusters to the extent that it is thought to have originated as the core remnant of a disrupted dwarf galaxy.
Observation history
Around 150 AD, Greco-Roman writer and astronomer Ptolemy catalogued this object in his Almagest as a star on the centaur's back, "Quae est in principio scapulae" ("the one at the beginning of the shoulder blade"). German cartographer Johann Bayer used Ptolemy's data to designate this object "Omega Centauri" with his 1603 publication of Uranometria. Using a telescope from the South Atlantic island of Saint Helena, English astronomer Edmond Halley observed this object in 1677, listing it as a non-stellar object. In 1716, it was published by Halley among his list of six "luminous spots or patches" in the Philosophical Transactions of the Royal Society.
Swiss astronomer Jean-Philippe de Cheseaux included Omega Centauri in his 1746 list of 21 nebulae, as did French astronomer Lacaille in 1755, whence the catalogue number is designated L I.5. It was first recognized as a globular cluster by Scottish astronomer James Dunlop in 1826, who described it as a "beautiful globe of stars very gradually and moderately compressed to the centre".
Properties
At a distance of about from Earth, Omega Centauri is one of the few globular clusters visible to the naked eye, and it appears almost as large as the full Moon when seen from a dark, rural area. It is the brightest, largest and, at 4 million solar masses, the most massive known globular cluster associated with the Milky Way. Of all the globular clusters in the Local Group of galaxies, only Mayall II in the Andromeda Galaxy is brighter and more massive. Orbiting through the Milky Way, Omega Centauri contains several million Population II stars and is about 12 billion years old.
The stars in the core of Omega Centauri are so crowded that they are estimated to average only 0.1 light-year away from each other. The internal dynamics have been analyzed using measurements of the radial velocities of 469 stars. The members of this cluster orbit the center of mass with a peak velocity dispersion of 7.9 km/s. The mass distribution inferred from the kinematics is slightly more extended than, though not strongly inconsistent with, the luminosity distribution.
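As a rough illustration of how such kinematic mass estimates work, a one-line virial estimator can be applied to the figures quoted above. This is a sketch only: the assumed radius and the M ≈ 5σ²R/G prefactor are textbook simplifications, not values from the studies cited here.

```python
# Rough virial mass estimate from the cluster kinematics quoted above.
# The radius is an assumption (about half the cluster's ~150 ly diameter);
# a one-line estimator like this is only good to a factor of a few.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
LY = 9.461e15        # light-year, m

sigma = 7.9e3        # peak velocity dispersion from the text, m/s
r_half = 75 * LY     # assumed characteristic radius

M_est = 5.0 * sigma**2 * r_half / G   # textbook virial estimator M ~ 5*sigma^2*R/G
print(f"~{M_est / M_SUN:.1e} solar masses")   # ~1.7e6, same order as the 4e6 quoted
```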
Evidence of a central black hole
A 2008 study presented evidence for an intermediate-mass black hole at the center of Omega Centauri, based on observations made by the Hubble Space Telescope and Gemini Observatory on Cerro Pachón in Chile. Hubble's Advanced Camera for Surveys showed that stars are bunching up near the center of Omega Centauri, as evidenced by the gradual increase in starlight near the center. Using instruments at the Gemini Observatory to measure the speed of stars swirling in the cluster's core, E. Noyola and colleagues found that stars closer to the core are moving faster than stars farther away. This measurement was interpreted to mean that unseen matter at the core is interacting gravitationally with nearby stars. By comparing these results with standard models, the astronomers concluded that the most likely cause was the gravitational pull of a dense, massive object such as a black hole. They calculated the object's mass at 40,000 solar masses.
More recent work has challenged conclusions that there is a black hole in the cluster's core, in particular disputing the proposed location of the cluster center. Calculations using a revised location for the center found that the velocity of core stars does not vary with distance, as would be expected if an intermediate-mass black hole were present. The same studies also found that starlight does not increase toward the center but instead remains relatively constant. The authors noted that their results do not entirely rule out the black hole proposed by Noyola and colleagues, but they do not confirm it, and they limit its maximum mass to 12,000 solar masses.
A study published on July 10, 2024, examined seven fast-moving stars in the center of Omega Centauri and found that their speeds were consistent with an intermediate-mass black hole of at least 8,200 solar masses.
Disrupted dwarf galaxy
It has been speculated that Omega Centauri is the core of a dwarf galaxy that was disrupted and absorbed by the Milky Way. Indeed, Kapteyn's Star, which is currently only 13 light-years away from Earth, is thought to originate from Omega Centauri. Omega Centauri's chemistry and motion in the Milky Way are also consistent with this picture. Like Mayall II, Omega Centauri has a range of metallicities and stellar ages that suggests that it did not all form at once (as globular clusters are thought to form) and may in fact be the remainder of the core of a smaller galaxy long since incorporated into the Milky Way.
In fiction
The novel Singularity (2012), by Ian Douglas, presents as fact that Omega Centauri and Kapteyn's Star originate from a disrupted dwarf galaxy, and this origin is central to the novel's plot. A number of scientific aspects of Omega Centauri are discussed as the story progresses, including the likely radiation environment inside the cluster and what the sky might look like from inside the cluster.
The character Atlan has adventures in Omega Centauri in cycle 7 of the Atlan series, a spinoff of the German science fiction series Perry Rhodan.
| Physical sciences | Other notable objects | null |
21391751 | https://en.wikipedia.org/wiki/Turing%20test | Turing test | The Turing test, originally called the imitation game by Alan Turing in 1949, is a test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human. In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart. The results would not depend on the machine's ability to answer questions correctly, only on how closely its answers resembled those of a human. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence" while working at the University of Manchester. It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words". Turing describes the new form of the problem in terms of a three-person party game called the "imitation game", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against the major objections to the proposition that "machines can think".
Since Turing introduced his test, it has been highly influential in the philosophy of artificial intelligence, resulting in substantial discussion and controversy, as well as criticism from philosophers like John Searle, who argue against the test's ability to detect consciousness.
Since the early 2020s, several large language models such as ChatGPT have passed modern, rigorous variants of the Turing test.
Attempts
Several early symbolic AI programs were controversially claimed to pass the Turing test, either by limiting themselves to scripted situations or by presenting "excuses" for poor reasoning and conversational abilities, such as mental illness or a poor grasp of English.
In 1966, Joseph Weizenbaum created a program called ELIZA, which mimicked a Rogerian psychotherapist. The program would search the user's sentence for keywords before repeating them back to the user, providing the impression of a program listening and paying attention. Weizenbaum thus succeeded by designing a context where a chatbot could mimic a person despite "knowing almost nothing of the real world". Weizenbaum's program was able to fool some people into believing that they were talking to a real person.
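A minimal sketch of this keyword-and-reflection pattern is shown below. It illustrates the technique ELIZA relied on but is not Weizenbaum's original script; the rules and reflections are invented for the example.

```python
# Minimal sketch of the ELIZA keyword-and-reflection pattern: scan the input
# for a keyword, reflect pronouns, and echo the fragment back as a question.
# An illustration of the technique, not Weizenbaum's original script.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."   # default when no keyword matches

print(respond("I feel anxious about my exams"))
# -> Why do you feel anxious about your exams?
```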
Kenneth Colby created PARRY in 1972, a program modeled after the behaviour of paranoid schizophrenics. Psychiatrists asked to compare transcripts of conversations generated by the program to those of conversations by actual schizophrenics could only identify about 52 percent of cases correctly (a figure consistent with random guessing).
In 2001, three programmers developed Eugene Goostman, a chatbot portraying itself as a 13-year-old boy from Odesa who spoke English as a second language. This background was intentionally chosen so judges would forgive mistakes by the program. In a competition, 33% of judges thought Goostman was human.
Large language models
Google LaMDA
In June 2022, Google's LaMDA model received widespread coverage after claims that it had achieved sentience. Initially, in an article in The Economist, Google Research Fellow Blaise Agüera y Arcas said the chatbot had demonstrated a degree of understanding of social relationships. Several days later, Google engineer Blake Lemoine claimed in an interview with the Washington Post that LaMDA had achieved sentience. Lemoine was placed on leave by Google for internal assertions to this effect. Google investigated the claims but dismissed them.
ChatGPT
OpenAI's chatbot ChatGPT, released in November 2022, is based on the GPT-3.5 and GPT-4 large language models. Celeste Biever wrote in a Nature article that "ChatGPT broke the Turing test". Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 "passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative", making it the first computer program to successfully do so.
History
Philosophical background
The question of whether it is possible for machines to think has a long history, which is firmly entrenched in the distinction between dualist and materialist views of the mind. René Descartes prefigures aspects of the Turing test in his 1637 Discourse on the Method when he writes:
Here Descartes notes that automata are capable of responding to human interactions but argues that such automata cannot respond appropriately to things said in their presence in the way that any human can. Descartes therefore prefigures the Turing test by defining the insufficiency of appropriate linguistic response as that which separates the human from the automaton. Descartes fails to consider the possibility that future automata might be able to overcome such insufficiency, and so does not propose the Turing test as such, even if he prefigures its conceptual framework and criterion.
Denis Diderot formulates in his 1746 book Pensées philosophiques a Turing-test criterion, though with the important implicit limiting assumption maintained, of the participants being natural living beings, rather than considering created artifacts:
This does not mean he agrees with this, but that it was already a common argument of materialists at that time.
According to dualism, the mind is non-physical (or, at the very least, has non-physical properties) and, therefore, cannot be explained in purely physical terms. According to materialism, the mind can be explained physically, which leaves open the possibility of minds that are produced artificially.
In 1936, philosopher Alfred Ayer considered the standard philosophical question of other minds: how do we know that other people have the same conscious experiences that we do? In his book, Language, Truth and Logic, Ayer suggested a protocol to distinguish between a conscious man and an unconscious machine: "The only ground I can have for asserting that an object which appears to be conscious is not really a conscious being, but only a dummy or a machine, is that it fails to satisfy one of the empirical tests by which the presence or absence of consciousness is determined". (This suggestion is very similar to the Turing test, but it is not certain that Ayer's popular philosophical classic was familiar to Turing.) In other words, a thing is not conscious if it fails the consciousness test.
Cultural background
A rudimentary idea of the Turing test appears in the 1726 novel Gulliver's Travels by Jonathan Swift. When Gulliver is brought before the king of Brobdingnag, the king thinks at first that Gulliver might be "a piece of clock-work (which is in that country arrived to a very great perfection) contrived by some ingenious artist". Even when he hears Gulliver speaking, the king still doubts whether Gulliver was taught "a set of words" to make him "sell at a better price". Gulliver recounts that only after "he put several other questions to me, and still received rational answers" did the king become satisfied that Gulliver was not a machine.
Tests where a human judges whether a computer or an alien is intelligent were an established convention in science fiction by the 1940s, and it is likely that Turing would have been aware of these. Stanley G. Weinbaum's "A Martian Odyssey" (1934) provides an example of how nuanced such tests could be.
Earlier examples of machines or automatons attempting to pass as human include the Ancient Greek myth of Pygmalion, who creates a sculpture of a woman that is animated by Aphrodite; Carlo Collodi's novel The Adventures of Pinocchio, about a puppet who wants to become a real boy; and E. T. A. Hoffmann's 1816 story "The Sandman", in which the protagonist falls in love with an automaton. In all these examples, people are fooled by artificial beings that, up to a point, pass as human.
Alan Turing and the Imitation Game
Researchers in the United Kingdom had been exploring "machine intelligence" for up to ten years prior to the founding of the field of artificial intelligence (AI) research in 1956. It was a common topic among the members of the Ratio Club, an informal group of British cybernetics and electronics researchers that included Alan Turing.
Turing, in particular, had been pursuing the notion of machine intelligence since at least 1941, and one of the earliest-known mentions of "computer intelligence" was made by him in 1947. In his report "Intelligent Machinery", Turing investigated "the question of whether or not it is possible for machinery to show intelligent behaviour" and, as part of that investigation, proposed what may be considered the forerunner to his later tests:
It is not difficult to devise a paper machine which will play a not very bad game of chess. Now get three men A, B and C as subjects for the experiment. A and C are to be rather poor chess players, B is the operator who works the paper machine. ... Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing.
"Computing Machinery and Intelligence" (1950) was the first published paper by Turing to focus exclusively on machine intelligence. Turing begins the 1950 paper with the claim, "I propose to consider the question 'Can machines think? As he highlights, the traditional approach to such a question is to start with definitions, defining both the terms "machine" and "think". Turing chooses not to do so; instead, he replaces the question with a new one, "which is closely related to it and is expressed in relatively unambiguous words". In essence he proposes to change the question from "Can machines think?" to "Can machines do what we (as thinking entities) can do?" The advantage of the new question, Turing argues, is that it draws "a fairly sharp line between the physical and intellectual capacities of a man".
To demonstrate this approach Turing proposes a test inspired by a party game, known as the "imitation game", in which a man and a woman go into separate rooms and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back. In this game, both the man and the woman aim to convince the guests that they are the other. (Huma Shah argues that this two-human version of the game was presented by Turing only to introduce the reader to the machine-human question-answer test.) Turing described his new version of the game as follows:
We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
Later in the paper, Turing suggests an "equivalent" alternative formulation involving a judge conversing only with a computer and a man. While neither of these formulations precisely matches the version of the Turing test that is more generally known today, he proposed a third in 1952. In this version, which Turing discussed in a BBC radio broadcast, a jury asks questions of a computer and the role of the computer is to make a significant proportion of the jury believe that it is really a man.
Turing's paper considered nine putative objections, which include some of the major arguments against artificial intelligence that have been raised in the years since the paper was published (see "Computing Machinery and Intelligence").
The Chinese room
John Searle's 1980 paper Minds, Brains, and Programs proposed the "Chinese room" thought experiment and argued that the Turing test could not be used to determine if a machine could think. Searle noted that software (such as ELIZA) could pass the Turing test simply by manipulating symbols of which they had no understanding. Without understanding, they could not be described as "thinking" in the same sense people did. Therefore, Searle concluded, the Turing test could not prove that machines could think. Much like the Turing test itself, Searle's argument has been both widely criticised and endorsed.
Arguments such as Searle's, and other work in the philosophy of mind, sparked a more intense debate about the nature of intelligence, the possibility of machines with a conscious mind, and the value of the Turing test that continued through the 1980s and 1990s.
Loebner Prize
The Loebner Prize, now reported as defunct, provided an annual platform for practical Turing tests, with the first competition held in November 1991. It was underwritten by Hugh Loebner. The Cambridge Center for Behavioral Studies in Massachusetts, United States, organised the prizes up to and including the 2003 contest. As Loebner described it, the competition was created at least in part to advance the state of AI research, because no one had taken steps to implement the Turing test despite 40 years of discussing it.
The first Loebner Prize competition in 1991 led to a renewed discussion of the viability of the Turing test and the value of pursuing it, in both the popular press and academia. The first contest was won by a mindless program with no identifiable intelligence that managed to fool naïve interrogators into making the wrong identification. This highlighted several of the shortcomings of the Turing test (discussed below): The winner won, at least in part, because it was able to "imitate human typing errors"; the unsophisticated interrogators were easily fooled; and some researchers in AI have been led to feel that the test is merely a distraction from more fruitful research.
The silver (text only) and gold (audio and visual) prizes have never been won. However, the competition has awarded the bronze medal every year for the computer system that, in the judges' opinions, demonstrates the "most human" conversational behaviour among that year's entries. Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) has won the bronze award on three occasions in recent times (2000, 2001, 2004). Learning AI Jabberwacky won in 2005 and 2006.
The Loebner Prize tested conversational intelligence; winners were typically chatterbot programs, or Artificial Conversational Entities (ACE)s. Early Loebner Prize rules restricted conversations: Each entry and hidden-human conversed on a single topic, thus the interrogators were restricted to one line of questioning per entity interaction. The restricted conversation rule was lifted for the 1995 Loebner Prize. Interaction duration between judge and entity has varied in Loebner Prizes. In Loebner 2003, at the University of Surrey, each interrogator was allowed five minutes to interact with an entity, machine or hidden-human. Between 2004 and 2007, the interaction time allowed in Loebner Prizes was more than twenty minutes. The final competition was in 2019, due to a lack of funding for the prize following Loebner's death in 2016.
CAPTCHA
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a form of reverse Turing test: instead of a human judging machines, a machine screens out other machines. The CAPTCHA system is commonly used online to tell humans and bots apart. Displaying distorted letters and numbers, it asks the user to identify the characters and type them into a field, which bots struggle to do.
reCAPTCHA is a CAPTCHA system owned by Google. reCAPTCHA v1 and v2 operated by asking the user to match distorted pictures or identify distorted letters and numbers. reCAPTCHA v3 is designed not to interrupt users and runs automatically when pages are loaded or buttons are clicked. This "invisible" CAPTCHA verification happens in the background with no visible challenges, which filters out most basic bots.
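A toy sketch of the classic distorted-text idea is shown below, using the Pillow imaging library; it is an illustration of the concept only and is far weaker than production CAPTCHA systems.

```python
# Sketch of a classic distorted-text CAPTCHA: render a random string, then
# add noise so simple OCR bots struggle. Requires the Pillow library; a toy
# illustration of the idea, far weaker than production systems.
import random
import string
from PIL import Image, ImageDraw, ImageFilter

def make_captcha(length=5, size=(160, 60)):
    text = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    for i, ch in enumerate(text):
        # jitter each character's position to break up clean baselines
        x = 15 + i * 28 + random.randint(-3, 3)
        y = 15 + random.randint(-6, 6)
        draw.text((x, y), ch, fill="black")
    for _ in range(300):                      # speckle noise
        draw.point((random.randrange(size[0]), random.randrange(size[1])),
                   fill="gray")
    img = img.filter(ImageFilter.GaussianBlur(radius=0.8))  # mild distortion
    return text, img

answer, image = make_captcha()
image.save("captcha.png")
print("expected answer:", answer)
```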
Versions
Saul Traiger argues that there are at least three primary versions of the Turing test, two of which are offered in "Computing Machinery and Intelligence" and one that he describes as the "Standard Interpretation". While there is some debate regarding whether the "Standard Interpretation" is that described by Turing or, instead, based on a misreading of his paper, these three versions are not regarded as equivalent, and their strengths and weaknesses are distinct.
Turing's original article describes a simple party game involving three players. Player A is a man, player B is a woman and player C (who plays the role of the interrogator) is of either gender. In the imitation game, player C is unable to see either player A or player B, and can communicate with them only through written notes. By asking questions of player A and player B, player C tries to determine which of the two is the man and which is the woman. Player A's role is to trick the interrogator into making the wrong decision, while player B attempts to assist the interrogator in making the right one.
Turing then asks:
"What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" These questions replace our original, "Can machines think?"
The second version appeared later in Turing's 1950 paper. Similar to the original imitation game test, the role of player A is performed by a computer. However, the role of player B is performed by a man rather than a woman.
Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?
In this version, both player A (the computer) and player B are trying to trick the interrogator into making an incorrect decision.
The standard interpretation is not included in the original paper, but is both accepted and debated.
Common understanding has it that the purpose of the Turing test is not specifically to determine whether a computer is able to fool an interrogator into believing that it is a human, but rather whether a computer could imitate a human. While there is some dispute whether this interpretation was intended by Turing, Sterrett believes that it was and thus conflates the second version with this one, while others, such as Traiger, do not – this has nevertheless led to what can be viewed as the "standard interpretation". In this version, player A is a computer and player B a person of either sex. The role of the interrogator is not to determine which is male and which is female, but which is a computer and which is a human. The fundamental issue with the standard interpretation is that the interrogator cannot differentiate which responder is human, and which is machine. There are issues about duration, but the standard interpretation generally considers this limitation as something that should be reasonable.
Interpretations
Controversy has arisen over which of the alternative formulations of the test Turing intended. Sterrett argues that two distinct tests can be extracted from his 1950 paper and that, pace Turing's remark, they are not equivalent. The test that employs the party game and compares frequencies of success is referred to as the "Original Imitation Game Test", whereas the test consisting of a human judge conversing with a human and a machine is referred to as the "Standard Turing Test", noting that Sterrett equates this with the "standard interpretation" rather than the second version of the imitation game. Sterrett agrees that the standard Turing test (STT) has the problems that its critics cite but feels that, in contrast, the original imitation game test (OIG test) so defined is immune to many of them, due to a crucial difference: Unlike the STT, it does not make similarity to human performance the criterion, even though it employs human performance in setting a criterion for machine intelligence. A man can fail the OIG test, but it is argued that it is a virtue of a test of intelligence that failure indicates a lack of resourcefulness: The OIG test requires the resourcefulness associated with intelligence and not merely "simulation of human conversational behaviour". The general structure of the OIG test could even be used with non-verbal versions of imitation games.
According to Huma Shah, Turing himself was concerned with whether a machine could think and was providing a simple method to examine this: through human-machine question-answer sessions. Shah argues the imitation game which Turing described could be practicalized in two different ways: a) one-to-one interrogator-machine test, and b) simultaneous comparison of a machine with a human, both questioned in parallel by an interrogator.
Still other writers have interpreted Turing as proposing that the imitation game itself is the test, without specifying how to take into account Turing's statement that the test that he proposed using the party version of the imitation game is based upon a criterion of comparative frequency of success in that imitation game, rather than a capacity to succeed at one round of the game.
Some writers argue that the imitation game is best understood by its social aspects. In his 1948 paper, Turing refers to intelligence as an "emotional concept", and notes:

"The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behaviour or if there seems to be little underlying plan, we have little temptation to imagine intelligence. With the same object therefore it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behaviour."

Following this remark and similar ones scattered throughout Turing's publications, Diane Proudfoot claims that Turing held a response-dependence approach to intelligence, according to which an intelligent (or thinking) entity is one that appears intelligent to an average interrogator. Shlomo Danziger promotes a socio-technological interpretation, according to which Turing saw the imitation game not as an intelligence test but as a technological aspiration, one whose realization would likely involve a change in society's attitude toward machines. According to this reading, Turing's celebrated 50-year prediction, that by the end of the 20th century his test will be passed by some machine, actually consists of two distinguishable predictions. The first is a technological prediction:

"I believe that in about fifty years' time it will be possible to programme computers ... to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning."

The second prediction Turing makes is a sociological one:

"I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

Danziger claims further that, for Turing, alteration of society's attitude towards machinery is a prerequisite for the existence of intelligent machines: only when the term "intelligent machine" is no longer seen as an oxymoron would the existence of intelligent machines become logically possible.
Saygin has suggested that the original game may be a way of proposing a less biased experimental design, as it hides the participation of the computer. The imitation game also includes a "social hack" not found in the standard interpretation: in the game, both the computer and the male human are required to pretend to be someone they are not.
Should the interrogator know about the computer?
A crucial piece of any laboratory test is that there should be a control. Turing never makes clear whether the interrogator in his tests is aware that one of the participants is a computer. He states only that player A is to be replaced with a machine, not that player C is to be made aware of this replacement. When Colby, FD Hilf, S Weber and AD Kramer tested PARRY, they did so by assuming that the interrogators did not need to know that one or more of those being interviewed was a computer during the interrogation. As Ayse Saygin, Peter Swirski, and others have highlighted, this makes a big difference to the implementation and outcome of the test. In an experimental study of Gricean maxim violations, using transcripts of the one-to-one (interrogator-hidden interlocutor) Loebner Prize for AI contests between 1994 and 1999, Saygin found significant differences between the responses of participants who knew and did not know about computers being involved.
Strengths
Tractability and simplicity
The power and appeal of the Turing test derives from its simplicity. The philosophy of mind, psychology, and modern neuroscience have been unable to provide definitions of "intelligence" and "thinking" that are sufficiently precise and general to be applied to machines. Without such definitions, the central questions of the philosophy of artificial intelligence cannot be answered. The Turing test, even if imperfect, at least provides something that can actually be measured. As such, it is a pragmatic attempt to answer a difficult philosophical question.
Breadth of subject matter
The format of the test allows the interrogator to give the machine a wide variety of intellectual tasks. Turing wrote that "the question and answer method seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include". John Haugeland adds that "understanding the words is not enough; you have to understand the topic as well".
To pass a well-designed Turing test, the machine must use natural language, reason, have knowledge and learn. The test can be extended to include video input, as well as a "hatch" through which objects can be passed: this would force the machine to demonstrate skilled use of vision and robotics as well. Together, these represent almost all of the major problems that artificial intelligence research would like to solve.
The Feigenbaum test is designed to take advantage of the broad range of topics available to a Turing test. It is a limited form of Turing's question-answer game which compares the machine against the abilities of experts in specific fields such as literature or chemistry.
Emphasis on emotional and aesthetic intelligence
As a Cambridge honours graduate in mathematics, Turing might have been expected to propose a test of computer intelligence requiring expert knowledge in some highly technical field, thus anticipating a more recent approach to the subject. Instead, as already noted, the test he described in his seminal 1950 paper requires the computer to compete successfully in a common party game, performing as well as the typical man in answering a series of questions so as to pretend convincingly to be the woman contestant.
Given that human sexual dimorphism is among the most ancient of subjects, it is implicit in the above scenario that the questions to be answered will involve neither specialised factual knowledge nor information-processing technique. The challenge for the computer, rather, will be to demonstrate empathy for the role of the female, and to demonstrate as well a characteristic aesthetic sensibility, both qualities on display in this snippet of dialogue which Turing imagined:
Interrogator: Will X please tell me the length of his or her hair?
Contestant: My hair is shingled, and the longest strands are about nine inches long.
When Turing does introduce some specialised knowledge into one of his imagined dialogues, the subject is not maths or electronics, but poetry:
Interrogator: In the first line of your sonnet which reads, "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?
Witness: It wouldn't scan.
Interrogator: How about "a winter's day". That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter's day.
Turing thus once again demonstrates his interest in empathy and aesthetic sensitivity as components of an artificial intelligence; and in light of an increasing awareness of the threat from an AI run amok, it has been suggested that this focus perhaps represents a critical intuition on Turing's part, i.e., that emotional and aesthetic intelligence will play a key role in the creation of a "friendly AI". It is further noted, however, that whatever inspiration Turing might lend in this direction depends upon the preservation of his original vision, which is to say that the promulgation of a "standard interpretation" of the Turing test, one which focuses on discursive intelligence only, must be regarded with some caution.
Weaknesses
Turing did not explicitly state that the Turing test could be used as a measure of "intelligence", or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward.
Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. The interpretation makes the assumption that an interrogator can determine if a machine is "thinking" by comparing its behaviour with human behaviour. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing the machine with a human, and the value of comparing only behaviour. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field.
Naïveté of interrogators
In practice, the test's results can easily be dominated not by the computer's intelligence, but by the attitudes, skill, or naïveté of the questioner. Numerous experts in the field, including cognitive scientist Gary Marcus, insist that the Turing test only shows how easy it is to fool humans and is not an indication of machine intelligence.
Turing does not specify the precise skills and knowledge required of the interrogator in his description of the test, but he did use the term "average interrogator": "[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning".
Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the "interrogators" are not even aware of the possibility that they are interacting with computers. To successfully appear human, there is no need for the machine to have any intelligence whatsoever and only a superficial resemblance to human behaviour is required.
Early Loebner Prize competitions used "unsophisticated" interrogators who were easily fooled by the machines. Since 2004, the Loebner Prize organisers have deployed philosophers, computer scientists, and journalists among the interrogators. Nonetheless, some of these experts have been deceived by the machines.
One interesting feature of the Turing test is the frequency of the confederate effect, when the confederate (tested) humans are misidentified by the interrogators as machines. It has been suggested that what interrogators expect as human responses is not necessarily typical of humans. As a result, some individuals can be categorised as machines. This can therefore work in favour of a competing machine. The humans are instructed to "act themselves", but sometimes their answers are more like what the interrogator expects a machine to say. This raises the question of how to ensure that the humans are motivated to "act human".
Human intelligence vs. intelligence in general
The Turing test does not directly test whether the computer behaves intelligently. It tests only whether the computer behaves like a human being. Since human behaviour and intelligent behaviour are not exactly the same thing, the test can fail to accurately measure intelligence in two ways:
Some human behaviour is unintelligent: The Turing test requires that the machine be able to execute all human behaviours, regardless of whether they are intelligent. It even tests for behaviours that may not be considered intelligent at all, such as the susceptibility to insults, the temptation to lie or, simply, a high frequency of typing mistakes. If a machine cannot imitate these unintelligent behaviours in detail it fails the test.
This objection was raised by The Economist, in an article entitled "Artificial Stupidity" published shortly after the first Loebner Prize competition in 1992. The article noted that the first Loebner winner's victory was due, at least in part, to its ability to "imitate human typing errors". Turing himself had suggested that programs add errors into their output, so as to be better "players" of the game.
Some intelligent behaviour is inhuman: The Turing test does not test for highly intelligent behaviours, such as the ability to solve difficult problems or come up with original insights. In fact, it specifically requires deception on the part of the machine: if the machine is more intelligent than a human being it must deliberately avoid appearing too intelligent. If it were to solve a computational problem that is practically impossible for a human to solve, then the interrogator would know the program is not human, and the machine would fail the test.
Because it cannot measure intelligence that is beyond the ability of humans, the test cannot be used to build or evaluate systems that are more intelligent than humans. Because of this, several test alternatives that would be able to evaluate super-intelligent systems have been proposed.
Consciousness vs. the simulation of consciousness
The Turing test is concerned strictly with how the subject acts – the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of the mind. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behaviour by following a simple (but large) list of mechanical rules, without thinking or having a mind at all.
John Searle has argued that external behaviour cannot be used to determine if a machine is "actually" thinking or merely "simulating thinking". His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be "about" something.)
Turing anticipated this line of criticism in his original paper, writing that the mysteries of consciousness do not necessarily need to be solved before the question of whether machines can think can be answered.
Impracticality and irrelevance: the Turing test and AI research
Mainstream AI researchers argue that trying to pass the Turing test is merely a distraction from more fruitful research. Indeed, the Turing test is not an active focus of much academic or commercial effort—as Stuart Russell and Peter Norvig write: "AI researchers have devoted little attention to passing the Turing test". There are several reasons.
First, there are easier ways to test their programs. Most current research in AI-related fields is aimed at modest and specific goals, such as object recognition or logistics. To test the intelligence of the programs that solve these problems, AI researchers simply give them the task directly. Stuart Russell and Peter Norvig suggest an analogy with the history of flight: planes are tested by how well they fly, not by comparing them to birds. "Aeronautical engineering texts," they write, "do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"
Second, creating lifelike simulations of human beings is a difficult problem on its own that does not need to be solved to achieve the basic goals of AI research. Believable human characters may be interesting in a work of art, a game, or a sophisticated user interface, but they are not part of the science of creating intelligent machines, that is, machines that solve problems using intelligence.
Turing did not intend for his idea to be used to test the intelligence of programs—he wanted to provide a clear and understandable example to aid in the discussion of the philosophy of artificial intelligence. John McCarthy argues that we should not be surprised that a philosophical idea turns out to be useless for practical applications. He observes that the philosophy of AI is "unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science".
The language-centric objection
Another well-known objection to the Turing test concerns its exclusive focus on linguistic behaviour (i.e. it is only a "language-based" experiment, while all the other cognitive faculties are left untested). This drawback downplays the role of the other modality-specific "intelligent abilities" of human beings that the psychologist Howard Gardner, in his "multiple intelligence theory", proposes to consider (verbal-linguistic abilities being only one of them).
Silence
A critical aspect of the Turing test is that a machine must give itself away as being a machine by its utterances. An interrogator must then make the "right identification" by correctly identifying the machine as being just that. If, however, a machine remains silent during a conversation, then it is not possible for an interrogator to accurately identify the machine other than by means of a calculated guess.
Even taking into account a parallel/hidden human as part of the test may not help the situation as humans can often be misidentified as being a machine.
The Turing Trap
By focusing on imitating humans, rather than augmenting or extending human capabilities, the Turing Test risks directing research and implementation toward technologies that substitute for humans and thereby drive down wages and income for workers. As they lose economic power, these workers may also lose political power, making it more difficult for them to change the allocation of wealth and income. This can trap them in a bad equilibrium. Erik Brynjolfsson has called this "The Turing Trap" and argued that there are currently excess incentives for creating machines that imitate rather than augment humans.
Variations
Numerous other versions of the Turing test, including those expounded above, have been raised through the years.
Reverse Turing test and CAPTCHA
A modification of the Turing test wherein the objective of one or more of the roles has been reversed between machines and humans is termed a reverse Turing test. An example is implied in the work of psychoanalyst Wilfred Bion, who was particularly fascinated by the "storm" that resulted from the encounter of one mind by another. In his 2000 book, among several other original points with regard to the Turing test, literary scholar Peter Swirski discussed in detail the idea of what he termed the Swirski test—essentially the reverse Turing test. He pointed out that it overcomes most if not all standard objections levelled at the standard version.
Carrying this idea forward, R. D. Hinshelwood described the mind as a "mind recognizing apparatus". The challenge would be for the computer to be able to determine if it were interacting with a human or another computer. This is an extension of the original question that Turing attempted to answer but would, perhaps, offer a high enough standard to define a machine that could "think" in a way that we typically define as characteristically human.
CAPTCHA is a form of reverse Turing test. Before being allowed to perform some action on a website, the user is presented with alphanumerical characters in a distorted graphic image and asked to type them out. This is intended to prevent automated systems from being used to abuse the site. The rationale is that software sufficiently sophisticated to read and reproduce the distorted image accurately does not exist (or is not available to the average user), so any system able to do so is likely to be a human.
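As a toy illustration of the challenge-response flow, the following Python sketch uses the Pillow imaging library to render a secret string with per-character jitter; the jitter stands in for the far stronger distortions a real CAPTCHA generator applies, so this is a sketch of the protocol, not a production scheme:

import random
import string
from PIL import Image, ImageDraw   # Pillow imaging library

def make_challenge(length=6):
    # Choose the secret text and render it with per-character jitter.
    secret = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    img = Image.new("RGB", (40 * length, 60), "white")
    draw = ImageDraw.Draw(img)
    for i, ch in enumerate(secret):
        draw.text((10 + 40 * i + random.randint(-4, 4),
                   20 + random.randint(-8, 8)), ch, fill="black")
    return secret, img

def check_response(secret, response):
    # The site allows the action only if the typed text matches the secret.
    return response.strip().upper() == secret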
Software that could reverse CAPTCHA with some accuracy by analysing patterns in the generating engine started being developed soon after the creation of CAPTCHA.
In 2013, researchers at Vicarious announced that they had developed a system to solve CAPTCHA challenges from Google, Yahoo!, and PayPal up to 90% of the time.
In 2014, Google engineers demonstrated a system that could defeat CAPTCHA challenges with 99.8% accuracy.
In 2015, Shuman Ghosemajumder, former click fraud czar of Google, stated that there were cybercriminal sites that would defeat CAPTCHA challenges for a fee, to enable various forms of fraud.
Distinguishing accurate use of language from actual understanding
A further variation is motivated by the concern that modern natural language processing models have proven highly successful at generating text on the basis of huge text corpora and could eventually pass the Turing test simply by recombining words and sentences from the data used to train the model. Since the interrogator has no precise understanding of the training data, the model might simply be returning sentences that exist in similar fashion in the enormous amount of training data. For this reason, Arthur Schwaninger proposes a variation of the Turing test that can distinguish between systems that are only capable of using language and systems that understand language. He proposes a test in which the machine is confronted with philosophical questions that do not depend on any prior knowledge and yet require self-reflection to be answered appropriately.
Subject matter expert Turing test
Another variation is described as the subject-matter expert Turing test, where a machine's response cannot be distinguished from an expert in a given field. This is also known as a "Feigenbaum test" and was proposed by Edward Feigenbaum in a 2003 paper.
"Low-level" cognition test
Robert French (1990) makes the case that an interrogator can distinguish human and non-human interlocutors by posing questions that reveal the low-level (i.e., unconscious) processes of human cognition, as studied by cognitive science. Such questions reveal the precise details of the human embodiment of thought and can unmask a computer unless it experiences the world as humans do.
Total Turing test
The "Total Turing test" variation of the Turing test, proposed by cognitive scientist Stevan Harnad, adds two further requirements to the traditional Turing test. The interrogator can also test the perceptual abilities of the subject (requiring computer vision) and the subject's ability to manipulate objects (requiring robotics).
Electronic health records
A letter published in Communications of the ACM describes the concept of generating a synthetic patient population and proposes a variation of Turing test to assess the difference between synthetic and real patients. The letter states: "In the EHR context, though a human physician can readily distinguish between synthetically generated and real live human patients, could a machine be given the intelligence to make such a determination on its own?" and further the letter states: "Before synthetic patient identities become a public health problem, the legitimate EHR market might benefit from applying Turing Test-like techniques to ensure greater data reliability and diagnostic value. Any new techniques must thus consider patients' heterogeneity and are likely to have greater complexity than the Allen eighth-grade-science-test is able to grade".
Minimum intelligent signal test
The minimum intelligent signal test was proposed by Chris McKinstry as "the maximum abstraction of the Turing test", in which only binary responses (true/false or yes/no) are permitted, to focus only on the capacity for thought. It eliminates text chat problems like anthropomorphism bias, and does not require emulation of unintelligent human behaviour, allowing for systems that exceed human intelligence. The questions must each stand on their own, however, making it more like an IQ test than an interrogation. It is typically used to gather statistical data against which the performance of artificial intelligence programs may be measured.
Hutter Prize
The organisers of the Hutter Prize believe that compressing natural language text is a hard AI problem, equivalent to passing the Turing test. The data compression test has some advantages over most versions and variations of a Turing test, including:
It gives a single number that can be directly used to compare which of two machines is "more intelligent" (see the sketch after this list).
It does not require the computer to lie to the judge.
The main disadvantages of using data compression as a test are:
It is not possible to test humans this way.
It is unknown what particular "score" on this test—if any—is equivalent to passing a human-level Turing test.
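As an illustration of the single-number comparison noted above, the following Python sketch scores two stand-in compressors on the same corpus; zlib serves purely as a placeholder for a contestant's model, and the corpus filename is hypothetical:

import zlib

def compression_score(compress, corpus):
    # Compressed size as a fraction of the original; lower is better,
    # per the compression-as-intelligence rationale.
    return len(compress(corpus)) / len(corpus)

corpus = open("enwik_sample.txt", "rb").read()    # hypothetical sample file
weak = compression_score(lambda b: zlib.compress(b, 1), corpus)
strong = compression_score(lambda b: zlib.compress(b, 9), corpus)
print(f"weak={weak:.3f} strong={strong:.3f}")     # the smaller ratio wins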
Other tests based on compression or Kolmogorov complexity
A related approach to Hutter's prize, which appeared much earlier, in the late 1990s, is the inclusion of compression problems in an extended Turing test, or in tests which are completely derived from Kolmogorov complexity.
Other related tests in this line are presented by Hernandez-Orallo and Dowe.
Algorithmic IQ, or AIQ for short, is an attempt to convert the theoretical Universal Intelligence Measure from Legg and Hutter (based on Solomonoff's inductive inference) into a working practical test of machine intelligence.
Two major advantages of some of these tests are their applicability to nonhuman intelligences and their absence of a requirement for human testers.
Ebert test
The Turing test inspired the Ebert test, proposed in 2011 by film critic Roger Ebert, which tests whether a computer-based synthesised voice has sufficient skill in terms of intonations, inflections, timing and so forth to make people laugh.
Social Turing game
Taking advantage of large language models, in 2023 the research company AI21 Labs created an online social experiment titled "Human or Not?". It was played more than 10 million times by more than 2 million people, making it the biggest Turing-style experiment to date. The results showed that 32% of people could not distinguish between humans and machines.
Conferences
Turing Colloquium
1990 marked the fortieth anniversary of the first publication of Turing's "Computing Machinery and Intelligence" paper, and saw renewed interest in the test. Two significant events occurred in that year: the first was the Turing Colloquium, which was held at the University of Sussex in April, and brought together academics and researchers from a wide variety of disciplines to discuss the Turing test in terms of its past, present, and future; the second was the formation of the annual Loebner Prize competition.
Blay Whitby lists four major turning points in the history of the Turing test – the publication of "Computing Machinery and Intelligence" in 1950, the announcement of Joseph Weizenbaum's ELIZA in 1966, Kenneth Colby's creation of PARRY, which was first described in 1972, and the Turing Colloquium in 1990.
2008 AISB Symposium
In parallel to the 2008 Loebner Prize held at the University of Reading, the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB) hosted a one-day symposium to discuss the Turing test, organised by John Barnden, Mark Bishop, Huma Shah and Kevin Warwick.
The speakers included the Royal Institution's Director Baroness Susan Greenfield, Selmer Bringsjord, Turing's biographer Andrew Hodges, and consciousness scientist Owen Holland. No agreement emerged for a canonical Turing test, though Bringsjord expressed that a sizeable prize would result in the Turing test being passed sooner.
| Technology | Artificial intelligence concepts | null |
21391870 | https://en.wikipedia.org/wiki/Halting%20problem | Halting problem | In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs. The problem comes up often in discussions of computability since it demonstrates that some functions are mathematically definable but not computable.
A key part of the formal statement of the problem is a mathematical definition of a computer and program, usually via a Turing machine. The proof then shows, for any program f that might determine whether programs halt, that a "pathological" program g exists for which f makes an incorrect determination. Specifically, g is the program that, when called with some input, passes its own source and its input to f and does the opposite of what f predicts g will do. The behavior of f on g shows undecidability, as it means no program f will solve the halting problem in every possible case.
Background
The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e., all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program's execution; it can take arbitrarily long and use an arbitrary amount of storage space before halting. The question is simply whether the given program will ever halt on a particular input.
For example, in pseudocode, the program
while (true) continue
does not halt; rather, it goes on forever in an infinite loop. On the other hand, the program
print "Hello, world!"
does halt.
While deciding whether these programs halt is simple, more complex programs prove problematic. One approach to the problem might be to run the program for some number of steps and check if it halts. However, as long as the program is running, it is unknown whether it will eventually halt or run forever. Turing proved no algorithm exists that always correctly decides whether, for a given arbitrary program and input, the program halts when run with that input. The essence of Turing's proof is that any such algorithm can be made to produce contradictory output and therefore cannot be correct.
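The "run it for some number of steps" idea can be made concrete. The following Python sketch assumes a hypothetical machine interface exposing initial_state and step methods; it can confirm halting within a step budget but can never certify non-halting:

def bounded_check(program, input_data, max_steps):
    # Simulate at most max_steps steps of a hypothetical machine interface.
    state = program.initial_state(input_data)
    for _ in range(max_steps):
        if state.halted:
            return "halts"
        state = program.step(state)
    return "unknown"   # the program may still halt after more steps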
Programming consequences
Some infinite loops can be quite useful. For instance, event loops are typically coded as infinite loops. However, most subroutines are intended to finish. In particular, in hard real-time computing, programmers attempt to write subroutines that are not only guaranteed to finish, but are also guaranteed to finish before a given deadline.
Sometimes these programmers use some general-purpose (Turing-complete) programming language, but attempt to write in a restricted style—such as MISRA C or SPARK—that makes it easy to prove that the resulting subroutines finish before the given deadline.
Other times these programmers apply the rule of least power—they deliberately use a computer language that is not quite fully Turing-complete. Frequently, these are languages that guarantee all subroutines finish, such as Coq.
Common pitfalls
The difficulty in the halting problem lies in the requirement that the decision procedure must work for all programs and inputs. A particular program either halts on a given input or does not halt. Consider one algorithm that always answers "halts" and another that always answers "does not halt". For any specific program and input, one of these two algorithms answers correctly, even though nobody may know which one. Yet neither algorithm solves the halting problem generally.
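The point can be made concrete with two constant "algorithms", sketched here in Python:

def always_halts(program, input_data):
    return "halts"             # correct exactly when the program halts

def always_loops(program, input_data):
    return "does not halt"     # correct exactly when it does not

# For any fixed (program, input) pair, one of these answers is right,
# yet neither is a general solution: each is wrong on infinitely many pairs.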
There are programs (interpreters) that simulate the execution of whatever source code they are given. Such programs can demonstrate that a program does halt if this is the case: the interpreter itself will eventually halt its simulation, which shows that the original program halted. However, an interpreter will not halt if its input program does not halt, so this approach cannot solve the halting problem as stated; it does not successfully answer "does not halt" for programs that do not halt.
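In sketch form, such an interpreter is a semi-decision procedure: it can confirm halting but can never report non-halting. A minimal Python rendering, assuming the same hypothetical initial_state/step interface as the bounded check above:

def semi_decide_halts(program, input_data):
    # Simulate with no step bound. If the program halts, this loop ends
    # and we answer "halts"; if it does not halt, this call itself never
    # returns, so "does not halt" is never reported.
    state = program.initial_state(input_data)
    while not state.halted:
        state = program.step(state)
    return "halts"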
The halting problem is theoretically decidable for linear bounded automata (LBAs), and for deterministic machines with finite memory generally. A machine with finite memory has a finite number of configurations, and thus any deterministic program on it must eventually either halt or repeat a previous configuration. In practice, however, this observation is of little help: a computer with a million small parts, each with two states, would have at least 2^1,000,000 possible states, far too many to enumerate. So although such a machine is finite, and finite automata "have a number of theoretical limitations", the decision procedure is of no practical use.
It can also be decided automatically whether a nondeterministic machine with finite memory halts on none, some, or all of the possible sequences of nondeterministic decisions, by enumerating the states reached after each possible decision.
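For a machine with a manageable number of configurations, the decision procedure can be sketched directly. The following Python sketch assumes a deterministic machine given as a step function over hashable configurations, with the convention (an assumption of this sketch) that step returns None on halting:

def halts_finite_memory(initial_state, step):
    # A deterministic machine with finitely many configurations must
    # either halt or revisit a configuration and loop forever.
    seen = set()
    state = initial_state
    while state is not None:      # None signals the halting configuration
        if state in seen:
            return False          # repeated configuration: runs forever
        seen.add(state)
        state = step(state)
    return True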
History
In April 1936, Alonzo Church published his proof of the undecidability of a problem in the lambda calculus. Turing's proof was published later, in January 1937. Since then, many other undecidable problems have been described, including the halting problem, which emerged in the 1950s.
Timeline
Origin of the halting problem
Many papers and textbooks attribute the definition and proof of undecidability of the halting problem to Turing's 1936 paper. However, this is not correct: Turing did not use the terms "halt" or "halting" in any of his published works, including his 1936 paper. A search of the academic literature from 1936 to 1958 showed that the first published material using the term "halting problem" was Rogers (1957). However, Rogers says he had a draft of Davis (1958) available to him, and Martin Davis states in the introduction that "the expert will perhaps find some novelty in the arrangement and treatment of topics", so the terminology must be attributed to Davis, who stated in a letter that he had been referring to the halting problem since 1952.
A possible precursor to Davis's formulation is Kleene's 1952 statement, which differs from it only in wording: Kleene observed that there is no algorithm for deciding whether any given machine, when started from any given situation, eventually stops.
The halting problem is Turing equivalent to both Davis's printing problem ("does a Turing machine starting from a given state ever print a given symbol?") and to the printing problem considered in Turing's 1936 paper ("does a Turing machine starting from a blank tape ever print a given symbol?"). However, Turing equivalence is rather loose and does not mean that the two problems are the same: there are machines which print but do not halt, and machines which halt but do not print. The printing and halting problems address different issues and exhibit important conceptual and technical differences; in light of these, Davis was simply being modest about the novelty of his formulation.
Formalization
In his original proof Turing formalized the concept of algorithm by introducing Turing machines. However, the result is in no way specific to them; it applies equally to any other model of computation that is equivalent in its computational power to Turing machines, such as Markov algorithms, Lambda calculus, Post systems, register machines, or tag systems.
What is important is that the formalization allows a straightforward mapping of algorithms to some data type that the algorithm can operate upon. For example, if the formalism lets algorithms define functions over strings (such as Turing machines) then there should be a mapping of these algorithms to strings, and if the formalism lets algorithms define functions over natural numbers (such as computable functions) then there should be a mapping of algorithms to natural numbers. The mapping to strings is usually the most straightforward, but strings over an alphabet with n characters can also be mapped to numbers by interpreting them as numbers in an n-ary numeral system.
Representation as a set
The conventional representation of decision problems is the set of objects possessing the property in question. The halting set
K = {(i, x) | program i halts when run on input x}
represents the halting problem.
This set is recursively enumerable, which means there is a computable function that lists all of the pairs (i, x) it contains. However, the complement of this set is not recursively enumerable.
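Recursive enumerability can be illustrated by dovetailing: interleave bounded simulations of every pair so that each halting pair is eventually emitted. A minimal Python sketch, assuming a hypothetical bounded interpreter halts_within(i, x, k) that reports whether program i halts on input x within k steps:

from itertools import count

def enumerate_halting_set(halts_within):
    # Emit each pair (i, x) such that program i halts on input x.
    emitted = set()
    for n in count(1):             # ever-growing step budget and ranges
        for i in range(n):
            for x in range(n):
                if (i, x) not in emitted and halts_within(i, x, n):
                    emitted.add((i, x))
                    yield (i, x)

No analogous procedure exists for the complement, which is why the set is enumerable but not decidable.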
There are many equivalent formulations of the halting problem; any set whose Turing degree equals that of the halting problem is such a formulation. Examples of such sets include:
{i | program i eventually halts when run with input 0}
{i | there is an input x such that program i eventually halts when run with input x}.
Proof concept
Christopher Strachey outlined a proof by contradiction that the halting problem is not solvable. The proof proceeds as follows: Suppose that there exists a total computable function halts(f) that returns true if the subroutine f halts (when run with no inputs) and returns false otherwise. Now consider the following subroutine:
def g():
    if halts(g):          # ask the assumed total decider about g itself
        loop_forever()    # ...then do the opposite of what it predicted
halts(g) must either return true or false, because halts was assumed to be total. If halts(g) returns true, then g will call loop_forever and never halt, which is a contradiction. If halts(g) returns false, then g will halt, because it will not call loop_forever; this is also a contradiction. Overall, g does the opposite of what halts says g should do, so halts(g) cannot return a truth value that is consistent with whether g halts. Therefore, the initial assumption that halts is a total computable function must be false.
Sketch of rigorous proof
The concept above shows the general method of the proof, but the computable function halts does not directly take a subroutine as an argument; instead it takes the source code of a program. Moreover, the definition of g is self-referential. A rigorous proof addresses these issues. The overall goal is to show that there is no total computable function that decides whether an arbitrary program i halts on arbitrary input x; that is, the following function h (for "halts") is not computable: h(i, x) = 1 if program i halts on input x, and h(i, x) = 0 otherwise.
Here program i refers to the i-th program in an enumeration of all the programs of a fixed Turing-complete model of computation.
[Figure: possible values for a total computable function f arranged in a 2D array; the diagonal cells hold f(i,i), with the derived values g(i) shown beneath, and U indicating inputs where g is undefined.]
The proof proceeds by directly establishing that no total computable function with two arguments can be the required function h. As in the sketch of the concept, given any total computable binary function f, the following partial function g is also computable by some program e: g(i) = 0 if f(i, i) = 0, and g(i) is undefined otherwise.
The verification that g is computable relies on the following constructs (or their equivalents):
computable subprograms (the program that computes f is a subprogram in program e),
duplication of values (program e computes the inputs i,i for f from the input i for g),
conditional branching (program e selects between two results depending on the value it computes for f(i,i)),
not producing a defined result (for example, by looping forever),
returning a value of 0.
The following pseudocode for e illustrates a straightforward way to compute g:
procedure e(i):
if f(i, i) == 0 then
return 0
else
loop forever
Because g is partial computable, there must be a program e that computes g, by the assumption that the model of computation is Turing-complete. This program is one of all the programs on which the halting function h is defined. The next step of the proof shows that h(e,e) will not have the same value as f(e,e).
It follows from the definition of g that exactly one of the following two cases must hold:
f(e,e) = 0 and so g(e) = 0. In this case program e halts on input e, so h(e,e) = 1.
f(e,e) ≠ 0 and so g(e) is undefined. In this case program e does not halt on input e, so h(e,e) = 0.
In either case, f cannot be the same function as h. Because f was an arbitrary total computable function with two arguments, all such functions must differ from h.
This proof is analogous to Cantor's diagonal argument. One may visualize a two-dimensional array with one column and one row for each natural number, as indicated in the figure above. The value of f(i,j) is placed at column i, row j. Because f is assumed to be a total computable function, any element of the array can be calculated using f. The construction of the function g can be visualized using the main diagonal of this array. If the array has a 0 at position (i,i), then g(i) is 0. Otherwise, g(i) is undefined. The contradiction comes from the fact that there is some column e of the array corresponding to g itself. Now assume that f is the halting function h. If g(e) is defined (g(e) = 0 in this case), then g(e) halts, so f(e,e) = 1. But g(e) = 0 only when f(e,e) = 0, contradicting f(e,e) = 1. Similarly, if g(e) is not defined, then the halting function gives f(e,e) = 0, which leads to g(e) = 0 under g's construction. This contradicts the assumption that g(e) is not defined. In both cases a contradiction arises. Therefore no arbitrary total computable function f can be the halting function h.
Computability theory
A typical method of proving a problem to be undecidable is to reduce the halting problem to the problem in question.
For example, there cannot be a general algorithm that decides whether a given statement about natural numbers is true or false. The reason for this is that the proposition stating that a certain program will halt given a certain input can be converted into an equivalent statement about natural numbers. If an algorithm could find the truth value of every statement about natural numbers, it could certainly find the truth value of this one; but that would determine whether the original program halts.
Rice's theorem generalizes the theorem that the halting problem is unsolvable. It states that for any non-trivial property, there is no general decision procedure that, for all programs, decides whether the partial function implemented by the input program has that property. (A partial function is a function which may not always produce a result, and so is used to model programs, which can either produce results or fail to halt.) For example, the property "halt for the input 0" is undecidable. Here, "non-trivial" means that the set of partial functions that satisfy the property is neither the empty set nor the set of all partial functions. For example, "halts or fails to halt on input 0" is clearly true of all partial functions, so it is a trivial property, and can be decided by an algorithm that simply reports "true." Also, this theorem holds only for properties of the partial function implemented by the program; Rice's Theorem does not apply to properties of the program itself. For example, "halt on input 0 within 100 steps" is not a property of the partial function that is implemented by the program—it is a property of the program implementing the partial function and is very much decidable.
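As an illustration of how such reductions work, the undecidability of the property "halts for the input 0" follows from the halting problem. A minimal sketch, assuming a hypothetical interpreter run(p, x) that executes program p on input x:

def build_q(p, x):
    # Build a program q that ignores its own input and runs p on x;
    # q halts on input 0 (indeed on every input) iff p halts on x.
    def q(_ignored):
        run(p, x)        # hypothetical interpreter; may never return
        return 0
    return q

# A decider for "halts on input 0", applied to build_q(p, x), would
# decide whether p halts on x, which is impossible.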
Gregory Chaitin has defined a halting probability, represented by the symbol Ω, a type of real number that informally is said to represent the probability that a randomly produced program halts. These numbers have the same Turing degree as the halting problem. It is a normal and transcendental number which can be defined but cannot be completely computed. This means one can prove that there is no algorithm which produces the digits of Ω, although its first few digits can be calculated in simple cases.
Since the negative answer to the halting problem shows that there are problems that cannot be solved by a Turing machine, the Church–Turing thesis limits what can be accomplished by any machine that implements effective methods. However, not all machines conceivable to human imagination are subject to the Church–Turing thesis (e.g. oracle machines). It is an open question whether there can be actual deterministic physical processes that, in the long run, elude simulation by a Turing machine, and in particular whether any such hypothetical process could usefully be harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing machine amongst other things. It is also an open question whether any such unknown physical processes are involved in the working of the human brain, and whether humans can solve the halting problem.
Approximations
Turing's proof shows that there can be no mechanical, general method (i.e., a Turing machine or a program in some equivalent model of computation) to determine whether algorithms halt. However, each individual instance of the halting problem has a definitive answer, which may or may not be practically computable. Given a specific algorithm and input, one can often show that it halts or does not halt, and in fact computer scientists often do just that as part of a correctness proof. There are some heuristics that can be used in an automated fashion to attempt to construct a proof, which frequently succeed on typical programs. This field of research is known as automated termination analysis.
Some results have been established on the theoretical performance of halting problem heuristics, in particular the fraction of programs of a given size that may be correctly classified by a recursive algorithm. These results do not give precise numbers because the fractions are uncomputable and also highly dependent on the choice of program encoding used to determine "size". For example, consider classifying programs by their number of states and using a specific "Turing semi-infinite tape" model of computation that errors (without halting) if the program runs off the left side of the tape. Then, over programs chosen uniformly by number of states, the halting problem is decidable on a set of asymptotic probability one. But this result is in some sense "trivial" because these decidable programs are simply the ones that fall off the tape, and the heuristic is simply to predict not halting due to error. Thus a seemingly irrelevant detail, namely the treatment of programs with errors, can turn out to be the deciding factor in determining the fraction of programs.
To avoid these issues, several restricted notions of the "size" of a program have been developed. A dense Gödel numbering assigns numbers to programs such that each computable function occurs a positive fraction in each sequence of indices from 1 to n, i.e. a Gödelization φ is dense iff for every computable function f there is a constant c_f > 0 such that, for all sufficiently large n, at least c_f · n of the indices i ≤ n satisfy φ_i = f. For example, a numbering that assigns only a sparse set of indices to nontrivial programs and all other indices to the error state is not dense, but there exists a dense Gödel numbering of syntactically correct Brainfuck programs. A dense Gödel numbering is called optimal if, for any other Gödel numbering ψ, there is a 1-1 total recursive function t and a constant c such that for all i, φ_{t(i)} = ψ_i and t(i) ≤ c · i. This condition ensures that all programs have indices not much larger than their indices in any other Gödel numbering. Optimal Gödel numberings are constructed by numbering the inputs of a universal Turing machine. A third notion of size uses universal machines operating on binary strings and measures the length of the string needed to describe the input program. A universal machine U is a machine for which, for every other machine V, there exists a total computable function h such that V(x) = U(h(x)). An optimal machine is a universal machine that achieves the Kolmogorov complexity invariance bound, i.e. for every machine V, there exists c such that for all outputs x, if a V-program of length n outputs x, then there exists a U-program of length at most n + c outputting x.
We consider partial computable functions (algorithms) A. For each size bound n we consider the fraction of errors ε_n(A) among all programs of size at most n, counting each program for which A fails to terminate, produces a "don't know" answer, or produces a wrong answer, i.e. halts and outputs DOES_NOT_HALT, or does not halt and outputs HALTS. The behavior may be described as follows, for dense Gödelizations and optimal machines:
For every algorithm A, lim inf_{n→∞} ε_n(A) > 0. In words, any algorithm has a positive minimum error rate, even as the size of the problem becomes extremely large.
There exists ε > 0 such that for every algorithm A, lim sup_{n→∞} ε_n(A) ≥ ε. In words, there is a positive error rate for which any algorithm will do worse than that error rate arbitrarily often, even as the size of the problem grows indefinitely.
inf_A lim inf_{n→∞} ε_n(A) = 0. In words, there is a sequence of algorithms such that the error rate gets arbitrarily close to zero for a specific sequence of increasing sizes. However, this result allows sequences of algorithms that produce wrong answers.
If we consider only "honest" algorithms that may be undefined but never produce wrong answers, then depending on the metric, this infimum may or may not be 0. In particular, it is 0 for left-total universal machines, but for effectively optimal machines it is greater than 0.
The complex nature of these bounds is due to the oscillatory behavior of ε_n. There are infrequently occurring new varieties of programs that come in arbitrarily large "blocks", and a constantly growing fraction of repeats. If the blocks of new varieties are fully included, the error rate is at least ε, but between blocks the fraction of correctly categorized repeats can be arbitrarily high. In particular, a "tally" heuristic that simply remembers the first N inputs and recognizes their equivalents allows reaching an arbitrarily low error rate infinitely often.
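A minimal sketch of such a tally heuristic, assuming the halting behaviour of the first N programs has been settled offline (for instance by hand proofs):

def make_tally_heuristic(known_answers):
    # known_answers: dict from program index to "halts"/"does not halt",
    # settled offline for the first N programs.
    def classify(i):
        return known_answers.get(i, "don't know")
    return classify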
Gödel's incompleteness theorems
The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar; in fact, a weaker form of the first incompleteness theorem is an easy consequence of the undecidability of the halting problem.
Generalization
Many variants of the halting problem can be found in computability textbooks. Typically, these problems are RE-complete and describe sets of complexity Σ₁⁰ in the arithmetical hierarchy, the same as the standard halting problem. The variants are thus undecidable, and the standard halting problem reduces to each variant and vice-versa. However, some variants have a higher degree of unsolvability and cannot be reduced to the standard halting problem. The next two examples are common.
Halting on all inputs
The universal halting problem, also known (in recursion theory) as totality, is the problem of determining whether a given computer program will halt for every input (the name totality comes from the equivalent question of whether the computed function is total).
This problem is not only undecidable, as the halting problem is, but highly undecidable. In terms of the arithmetical hierarchy, it is Π₂⁰-complete.
This means, in particular, that it cannot be decided even with an oracle for the halting problem.
Recognizing partial solutions
There are many programs that, for some inputs, return a correct answer to the halting problem, while for other inputs they do not return an answer at all.
However the problem "given program p, is it a partial halting solver" (in the sense described) is at least as hard as the halting problem.
To see this, assume that there is an algorithm PHSR ("partial halting solver recognizer") to do that. Then it can be used to solve the halting problem, as follows:
To test whether input program x halts on y, construct a program p that on input (x,y) reports true and diverges on all other inputs.
Then test p with PHSR.
The above argument is a reduction of the halting problem to PHS recognition, and in the same manner, harder problems such as halting on all inputs can also be reduced, implying that PHS recognition is not only undecidable, but higher in the arithmetical hierarchy, specifically Π₂⁰-complete.
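The program p used in this reduction can be sketched directly; the closure mirrors the "reports true on (x, y), diverges elsewhere" recipe above, and PHSR is the assumed recognizer:

def make_candidate_solver(x, y):
    # p answers "halts" on the single query (x, y) and diverges on all
    # other queries. p is a correct partial halting solver exactly when
    # x really does halt on y, so PHSR(p) would answer the halting problem.
    def p(query):
        if query == (x, y):
            return True
        while True:        # diverge on every other input
            pass
    return p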
Lossy computation
A lossy Turing machine is a Turing machine in which part of the tape may non-deterministically disappear. The halting problem is decidable for a lossy Turing machine but non-primitive recursive.
Oracle machines
A machine with an oracle for the halting problem can determine whether particular Turing machines will halt on particular inputs, but it cannot determine, in general, whether machines equivalent to itself will halt.
| Mathematics | Computability theory | null |
21392941 | https://en.wikipedia.org/wiki/Anglerfish | Anglerfish | The anglerfish are fish of the teleost order Lophiiformes. They are bony fish named for their characteristic mode of predation, in which a modified luminescent fin ray (the esca or illicium) acts as a lure for other fish. The luminescence comes from symbiotic bacteria, which are thought to be acquired from seawater, that dwell in and around the esca.
Some anglerfish are notable for extreme sexual dimorphism and sexual symbiosis of the small male with the much larger female, seen in the suborder Ceratioidei, the deep sea anglerfish. In these species, males may be several orders of magnitude smaller than females.
Anglerfish occur worldwide. Some are pelagic (dwelling away from the sea floor), while others are benthic (dwelling close to the sea floor). Some live in the deep sea (such as the Ceratiidae), while others live on the continental shelf, such as the frogfishes and the Lophiidae (monkfish or goosefish). Pelagic forms are most often laterally compressed, whereas the benthic forms are often extremely dorsoventrally compressed (depressed), often with large upward-pointing mouths.
Evolution
The earliest fossils of anglerfish are from the Eocene Monte Bolca formation of Italy, and these already show significant diversification into the modern families that make up the order. Given this, and their close relationship to the Tetraodontiformes, which are known from Cretaceous fossils, they likely originated during the Cretaceous.
A 2010 mitochondrial genome phylogenetic study suggested the anglerfishes diversified in a short period of the early to mid-Cretaceous, between 130 and 100 million years ago. A more recent preprint reduces this time to the Late Cretaceous, between 92 and 61 million years ago. Other studies indicate that anglerfish only originated shortly after the Cretaceous-Paleogene extinction event as part of a massive adaptive radiation of percomorphs, although this clashes with the extensive diversity already known from the group by the Eocene. A 2024 study found that all anglerfish suborders most likely diverged from one another during the Late Cretaceous and Paleocene, but the multiple families of deep-sea anglerfishes (Ceratioidei), as well as their trademark sexual parasitism, originated during the Eocene in a rapid radiation following the Paleocene-Eocene thermal maximum.
Classification
Anglerfishes are classified by the 5th edition of Fishes of the World as set out below into 5 suborders and 18 families. The following taxa have been arranged to show their evolutionary relationships.
Suborder Lophioidei Regan, 1912
Family Lophiidae Rafinesque, 1810 (Monkfishes and goosefishes)
Suborder Antennarioidei Regan, 1912
Family Antennariidae Jarocki, 1822 (Frogfishes)
Family Tetrabrachiidae Regan, 1912 (Tetrabrachid frogfishes)
Family Lophichthyidae Boeseman, 1964 (Lophichthyid frogfishes)
Family Brachionichthyidae Gill, 1863 (Handfishes or warty anglerfishes)
Suborder Chaunacoidei Pietsch & Grobecker, 1987
Family Chaunacidae Gill, 1863 (Sea toads)
Suborder Ogcocephaloidei Pietsch, 1984
Family Ogcocephalidae Gill, 1893 (Batfishes)
Suborder Ceratioidei Regan, 1912
Family Caulophrynidae Goode & Bean, 1896 (Fanfins)
Family Neoceratiidae Regan, 1926 (Spiny seadevils)
Family Melanocetidae Gill, 1878 (Black seadevils)
Family Himantolophidae Gill, 1861 (Footballfishes)
Family Diceratiidae Regan & Trewavas, 1932 (Double anglers)
Family Oneirodidae Gill, 1878 (Dreamers)
Family Thaumatichthyidae Smith & Radcliffe, 1912 (Wolftrap anglers)
Family Centrophrynidae Bertelsen, 1951 (Prickly seadevils)
Family Ceratiidae Gill, 1861 (Warty seadevils)
Family Gigantactinidae Boulenger, 1904 (Whipnose anglers)
Family Linophrynidae Regan, 1925 (Leftvents)
The relationships of the suborders within Lophiiformes as set out in Pietsch and Grobecker's 1987 Frogfishes of the world: systematics, zoogeography, and behavioral ecology is shown below.
It has been found in phylogenetic studies that both the Lophiiformes and the Tetraodontiformes nest within the Acanthuriformes, and they are thus classified as clades within that taxon.
Anatomy
All anglerfish are carnivorous and are thus adapted for the capture of prey. Ranging in color from dark gray to dark brown, deep-sea species have large heads that bear enormous, crescent-shaped mouths full of long, fang-like teeth angled inward for efficient prey-grabbing. Their length varies greatly from species to species, but much of this variation is due to sexual dimorphism, with females being much larger than males. Frogfish and other shallow-water anglerfish species are ambush predators, and often appear camouflaged as rocks, sponges or seaweed.
Anglerfish have a modified ray, the illicium, on the first of their two dorsal fins; it extends toward the snout and acts as a luring mechanism, with prey approaching the fish face-to-face. The illicium is moved back and forth by five distinct pairs of muscles: the shorter erector and depressor muscles that dictate movement of the illicial bone, along with the inclinator, protractor, and retractor muscles that aid motion of the pterygiophore.
Specifically considering Cryptopsaras couesii, this deep sea ceratioid anglerfish has unique rotational biomechanics in its musculature. The robust retractor and protractor muscles move in a winding pattern in opposite directions along the length of the pterygiophore, which exists in a deep longitudinal ridge along the skull. Further, the long and thin inclinator of the deep sea ceratioid anglerfish allows for a distinctly wide range of anterior and posterior motion, assisting in the movement of the luring apparatus to aid in the ambush of prey.
Most adult female ceratioid anglerfish have a luminescent organ called the esca at the tip of a modified dorsal ray (the illicium or fishing rod; derived from Latin ēsca, "bait"). The organ has been hypothesized to serve the purpose of luring prey in dark, deep-sea environments, but also serves to call males' attention to the females to facilitate mating.
The source of luminescence is symbiotic bacteria that dwell in and around the esca, enclosed in a cup-shaped reflector containing crystals, probably consisting of guanine. Anglerfish make use of these symbiotic relationships with extracellular luminous bacteria. Atypical of luminous symbionts that live outside of the host's cells, the bacteria found in the lures of anglerfish are experiencing an evolutionary shift to smaller and less developed genomes (genomic reduction) assisted by transposon expansions. Only a handful of luminescent symbiont species can associate with deep-sea anglerfishes. In some species, the bacteria recruited to the esca are incapable of luminescence independent of the anglerfish, suggesting they have developed a symbiotic relationship and the bacteria are unable to synthesize all of the chemicals necessary for luminescence on their own. They depend on the fish to make up the difference. While females found within most anglerfish families have bioluminescence, there are exceptions including the Caulophrynidae and Neoceratiidae families.
The bacterial symbionts are not found at consistent levels throughout stages of anglerfish development or throughout the different depths of the ocean. Sequencing of larval organisms of the Ceratioidei suborder shows an absence of bacterial symbionts, while sequencing of adult anglerfish showed higher levels of bioluminescent bacterial symbionts. This correlates with the mesopelagic region having the highest levels of symbiont relationships in the anglerfish samples, as this is where adult anglerfish reside for most of their lives after their larval stage. Electron microscopy of these bacteria in some species reveals they are Gram-negative rods that lack capsules, spores, or flagella. They have double-layered cell walls and mesosomes. A pore connects the esca with the seawater, which enables the removal of dead bacteria and cellular waste, and allows the pH and tonicity of the culture medium to remain constant. This, as well as the constant temperature of the bathypelagic zone inhabited by these fish, is crucial for the long-term viability of bacterial cultures.
The light gland is always open to the exterior, so it is possible that the fish acquires the bacteria from the seawater. However, each species appears to use its own particular species of bacteria, and these bacteria have never been found in seawater. Haygood (1993) theorized that the esca discharges bacteria during spawning, thereby transferring the bacteria to the eggs.
Some evidence shows that some anglerfish acquire their bioluminescent symbionts from the local environment. Genetic material of the symbiont bacteria has been found in the water near the anglerfish, indicating that host and symbiont most likely did not evolve together and that the bacteria take a difficult journey into the host. In a study of ceratioid anglerfish in the Gulf of Mexico, researchers found that the confirmed host-associated bioluminescent microbes were absent from larval specimens and during early host development, suggesting that the ceratioids acquired their bioluminescent symbionts from the seawater. Photobacterium phosphoreum and members of the kishitanii clade constitute the major or sole bioluminescent symbionts of several families of deep-sea luminous fishes.
The genetic makeup of the symbiont bacteria has changed since they became associated with their hosts. Compared to their free-living relatives, deep-sea anglerfish symbiont genomes are reduced in size by about 50%, with losses in amino acid synthesis pathways and in the ability to utilize diverse sugars. Nevertheless, genes involved in chemotaxis and motility, thought to be useful only outside the host, are retained in the genome. The symbiont genomes contain very high numbers of pseudogenes and show massive expansions of transposable elements. Genome reduction is still ongoing in these symbiont lineages, and the continuing gene loss may lead to full host dependence.
In most species, a wide mouth extends all around the anterior circumference of the head, and bands of inwardly inclined teeth line both jaws. The teeth can be depressed so as to offer no impediment to an object gliding towards the stomach, but prevent its escape from the mouth. The anglerfish is able to distend both its jaw and its stomach, since its bones are thin and flexible, to enormous size, allowing it to swallow prey up to twice as large as its entire body.
Behavior
Swimming and energy conservation
In 2005, near Monterey, California, at 1,474 metres depth, an ROV filmed a female ceratioid anglerfish of the genus Oneirodes for 24 minutes. When approached, the fish retreated rapidly, but in 74% of the video footage, it drifted passively, oriented at any angle. When advancing, it swam intermittently at a speed of 0.24 body lengths per second, beating its pectoral fins in-phase. The lethargic behavior of this ambush predator is suited to the energy-poor environment of the deep sea.
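To make the reported pace concrete, here is a minimal sketch converting a speed given in body lengths per second into absolute terms; the 15 cm body length is a hypothetical illustrative figure, not a measurement reported for the filmed individual.

```python
# Convert a swimming speed in body lengths per second (BL/s) into
# absolute speed. The 0.24 BL/s figure comes from the 2005 Monterey
# observation; the 15 cm body length is an assumed illustrative value.
body_length_m = 0.15      # hypothetical body length in metres
speed_bl_per_s = 0.24     # observed speed in body lengths per second

speed_m_per_s = speed_bl_per_s * body_length_m
speed_m_per_min = speed_m_per_s * 60

print(f"{speed_m_per_s:.3f} m/s ({speed_m_per_min:.1f} m/min)")
# -> 0.036 m/s (2.2 m/min), consistent with a lethargic ambush predator
```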
Another in situ observation, of three different whipnose anglerfish, documented unusual inverted swimming behavior. The fish floated upside down, completely motionless, with the illicium hanging down stiffly in a slight arch in front of the fish, suspended over small visible burrows. It was suggested that this is an effort to entice prey and an example of low-energy opportunistic foraging and predation. When the ROV approached the fish, they exhibited burst swimming, still inverted.
The jaw and stomach of the anglerfish can extend to allow it to consume prey up to twice its size. Because of the limited amount of food available in the anglerfish's environment, this adaptation allows the anglerfish to store food when there is an abundance.
Predation
The name "anglerfish" derives from the species' characteristic method of predation. Anglerfish typically have at least one long filament sprouting from the middle of their heads, termed the illicium. The illicium is the detached and modified first three spines of the anterior dorsal fin. In most anglerfish species, the longest filament is the first. This first spine protrudes above the fish's eyes and terminates in an irregular growth of flesh (the esca), and can move in all directions. Anglerfish can wiggle the esca to make it resemble a prey animal, which lures the anglerfish's prey close enough for the anglerfish to devour them whole. Some deep-sea anglerfish of the bathypelagic zone also emit light from their esca to attract prey.
Anglerfish are opportunistic foragers, showing a range of preferred prey with fish at the extremes of the size spectrum, while also displaying increased selectivity for certain prey. One study examining the stomach contents of threadfin anglerfish off the Pacific coast of Central America found that these fish primarily ate two categories of benthic prey: crustaceans and teleost fish, with pandalid shrimp the most frequent item. 52% of the stomachs examined were empty, supporting the observation that anglerfish are low-energy consumers.
Reproduction
Some anglerfish, like those of the Ceratiidae, or sea devils, employ an unusual mating method. Because individuals are locally rare, encounters are also very rare, and finding a mate is problematic. When scientists first started capturing ceratioid anglerfish, they noticed that all of the specimens were female, a few centimetres in size, and almost all had what appeared to be parasites attached to them. These "parasites" turned out to be highly dimorphic male ceratioids, indicating that some taxa of anglerfish use a polyandrous mating system. In some species of anglerfish, fusion between male and female during reproduction is made possible by the loss of key immune genes that allow antibodies to mature and that generate receptors for T cells. It is assumed they have evolved new immune strategies that compensate for the loss of the B- and T-lymphocyte functions found in an adaptive immune system.
Certain ceratioids rely on parabiotic reproduction. Free-living males and unparasitized females in these species never have fully developed gonads; thus, males never mature without attaching to a female, and die if they cannot find one. At birth, male ceratioids are already equipped with extremely well-developed olfactory organs that detect scents in the water. Males of some species also develop large, highly specialized eyes that may aid in identifying mates in dark environments. Male ceratioids are significantly smaller than females and may have trouble finding food in the deep sea. Furthermore, the growth of the alimentary canals of some males becomes stunted, preventing them from feeding, and some taxa have jaws that are never suitable or effective for prey capture. These features mean the male must quickly find a female anglerfish to avoid death. The sensitive olfactory organs help the male to detect the pheromones that signal the proximity of a female anglerfish.
The methods anglerfish use to locate mates vary. Some species have minute eyes that are unfit for identifying females, while others have underdeveloped nostrils, making them unlikely to effectively find females by scent. When a male finds a female, he bites into her skin, and releases an enzyme that digests the skin of his mouth and her body, fusing the pair down to the blood-vessel level. The male becomes dependent on the female host for survival by receiving nutrients via their shared circulatory system, and provides sperm to the female in return. After fusing, males increase in volume and become much larger relative to free-living males of the species. They live and remain reproductively functional as long as the female lives, and can take part in multiple spawnings. This extreme sexual dimorphism ensures that when the female is ready to spawn, she has a mate immediately available. Multiple males can be incorporated into a single individual female with up to eight males in some species, though some taxa appear to have a "one male per female" rule.
Symbiosis is not the only method of reproduction in anglerfish. In fact, many families, including the Melanocetidae, Himantolophidae, Diceratiidae, and Gigantactinidae, show no evidence of male symbiosis. Females in some of these species contain large, developed ovaries and free-living males have large testes, suggesting these sexually mature individuals may spawn during a temporary sexual attachment that does not involve fusion of tissue. Males in these species also have well-toothed jaws that are far more effective in hunting than those seen in symbiotic species.
Sexual symbiosis may be an optional strategy in some species of anglerfishes. In the Oneirodidae, females carrying symbiotic males have been reported in Leptacanthichthys and Bertella, while females without attached males still developed fully functional gonads. One theory suggests that males attach to females regardless of the female's reproductive state, and that once both partners are mature, they spawn and then separate.
One explanation for the evolution of sexual symbiosis is that the relatively low density of females in deep-sea environments leaves little opportunity for mate choice among anglerfish. Females remain large to accommodate fecundity, as evidenced by their large ovaries and eggs. Males would be expected to shrink to reduce metabolic costs in resource-poor environments and to develop highly specialized female-finding abilities. If a male manages to find a female, then symbiotic attachment is ultimately more likely to improve lifetime fitness relative to free living, particularly when the prospect of finding future mates is poor. An additional advantage of symbiosis is that the male's sperm can be used in multiple fertilizations, as he always remains available to the female for mating. Higher densities of male-female encounters might correlate with species that demonstrate facultative symbiosis or simply use a more traditional temporary-contact mating.
The spawn of anglerfish of the genus Lophius consists of a long, thin sheet of transparent gelatinous material; such an egg sheet is rare among fish. The eggs in this sheet lie in a single layer, each in its own cavity, and the sheet drifts free in the sea. The larvae are free-swimming and have the pelvic fins elongated into filaments.
Threats
Northwest European Lophius species are heavily fished and are listed by the ICES as "outside safe biological limits". Additionally, anglerfish are known to occasionally rise to the surface during El Niño, leaving large groups of dead anglerfish floating on the surface.
In 2010, Greenpeace International added the American angler (Lophius americanus), the angler (Lophius piscatorius), and the black-bellied angler (Lophius budegassa) to its seafood red list—a list of fish commonly sold worldwide with a high likelihood of being sourced from unsustainable fisheries.
Human consumption
One family, the Lophiidae, is of commercial interest with fisheries found in western Europe, eastern North America, Africa, and East Asia. In Europe and North America, the tail meat of fish of the genus Lophius, known as monkfish or goosefish (North America), is widely used in cooking, and is often compared to lobster tail in taste and texture.
In Africa, the countries of Namibia and the Republic of South Africa record the highest catches. In Asia, especially Japan, monkfish liver, known as ankimo, is considered a delicacy. Anglerfish is especially heavily consumed in South Korea, where it is featured as the main ingredient in dishes such as Agujjim.
Timeline of genera
Anglerfish appear in the fossil record as follows:
| Biology and health sciences | Fishes | null |
21392989 | https://en.wikipedia.org/wiki/Red%20algae | Red algae | Red algae, or Rhodophyta, make up one of the oldest groups of eukaryotic algae. The Rhodophyta comprises one of the largest phyla of algae, containing over 7,000 recognized species within over 900 genera amidst ongoing taxonomic revisions. The majority of species (6,793) are Florideophyceae, and mostly consist of multicellular, marine algae, including many notable seaweeds. Red algae are abundant in marine habitats. Approximately 5% of red algae species occur in freshwater environments, with greater concentrations in warmer areas. Except for two coastal cave-dwelling species in the asexual class Cyanidiophyceae, no terrestrial species exist, which may be due to an evolutionary bottleneck in which the last common ancestor lost about 25% of its core genes and much of its evolutionary plasticity.
Red algae form a distinct group characterized by eukaryotic cells without flagella and centrioles, chloroplasts that lack external endoplasmic reticulum and contain unstacked (stroma) thylakoids, and the use of phycobiliproteins as accessory pigments, which give them their red color. Despite their name, red algae vary in color from bright green and soft pink, through forms resembling brown algae, to shades of red and purple, and may be almost black at greater depths. Unlike green algae, red algae store sugars as food reserves outside the chloroplasts as floridean starch, a type of starch that consists of highly branched amylopectin without amylose. Most red algae are multicellular, macroscopic, and reproduce sexually. The life history of red algae is typically an alternation of generations that may have three generations rather than two. Coralline algae, which secrete calcium carbonate and play a major role in building coral reefs, belong to this group.
Red algae such as Palmaria palmata (dulse) and Porphyra species (laver/nori/gim) are a traditional part of European and Asian cuisines and are used to make products such as agar, carrageenans, and other food additives.
Evolution
Chloroplasts probably evolved following an endosymbiotic event between an ancestral, photosynthetic cyanobacterium and an early eukaryotic phagotroph. This event (termed primary endosymbiosis) is at the origin of the red and green algae (including the land plants or Embryophytes which emerged within them) and the glaucophytes, which together make up the oldest evolutionary lineages of photosynthetic eukaryotes, the Archaeplastida. A secondary endosymbiosis event involving an ancestral red alga and a heterotrophic eukaryote resulted in the evolution and diversification of several other photosynthetic lineages such as Cryptophyta, Haptophyta, Stramenopiles (or Heterokontophyta), and Alveolata. In addition to multicellular brown algae, it is estimated that more than half of all known species of microbial eukaryotes harbor red-alga-derived plastids.
Red algae are divided into the Cyanidiophyceae, a class of unicellular and thermoacidophilic extremophiles found in sulphuric hot springs and other acidic environments, an adaptation partly made possible by horizontal gene transfers from prokaryotes, with about 1% of their genome having this origin, and two sister clades called SCRP (Stylonematophyceae, Compsopogonophyceae, Rhodellophyceae and Porphyridiophyceae) and BF (Bangiophyceae and Florideophyceae), which are found in both marine and freshwater environments. The BF are macroalgae, seaweed that usually do not grow to more than about 50 cm in length, but a few species can reach lengths of 2 m. In the SCRP clade the class Compsopogonophyceae is multicellular, with forms varying from microscopic filaments to macroalgae. Stylonematophyceae have both unicellular and small simple filamentous species, while Rhodellophyceae and Porphyridiophyceae are exclusively unicellular. Most rhodophytes are marine with a worldwide distribution, and are often found at greater depths compared to other seaweeds. While this was formerly attributed to the presence of pigments (such as phycoerythrin) that would permit red algae to inhabit greater depths than other macroalgae by chromatic adaption, recent evidence calls this into question (e.g. the discovery of green algae at great depth in the Bahamas). Some marine species are found on sandy shores, while most others can be found attached to rocky substrata. Freshwater species account for 5% of red algal diversity, but they also have a worldwide distribution in various habitats; they generally prefer clean, high-flow streams with clear waters and rocky bottoms, but with some exceptions. A few freshwater species are found in black waters with sandy bottoms and even fewer are found in more lentic waters. Both marine and freshwater taxa are represented by free-living macroalgal forms and smaller endo/epiphytic/zoic forms, meaning they live in or on other algae, plants, and animals. In addition, some marine species have adopted a parasitic lifestyle and may be found on closely or more distantly related red algal hosts.
Taxonomy
In the classification system of Adl et al. 2005, the red algae are classified in the Archaeplastida, along with the glaucophytes and the green algae plus land plants (Viridiplantae or Chloroplastida). The authors use a hierarchical arrangement where the clade names do not signify rank; the class name Rhodophyceae is used for the red algae. No subdivisions are given; the authors say, "Traditional subgroups are artificial constructs, and no longer valid." Many subsequent studies provided evidence in agreement with monophyly of the Archaeplastida (including red algae). However, other studies have suggested Archaeplastida is paraphyletic, and the general consensus now favors a paraphyletic Archaeplastida.
Below are other published taxonomies of the red algae using molecular and traditional alpha taxonomic data; however, the taxonomy of the red algae is still in a state of flux (with classification above the level of order having received little scientific attention for most of the 20th century).
If the kingdom Plantae is defined as the Archaeplastida, then red algae will be part of that group.
If Plantae are defined more narrowly, to be the Viridiplantae, then the red algae might be excluded.
A major research initiative to reconstruct the Red Algal Tree of Life (RedToL) using phylogenetic and genomic approach is funded by the National Science Foundation as part of the Assembling the Tree of Life Program.
Classification comparison
Some sources (such as Lee) place all red algae into the class "Rhodophyceae". (Lee's organization is not a comprehensive classification, but a selection of orders considered common or important.)
A subphylum, Proteorhodophytina, has been proposed to encompass the existing classes Compsopogonophyceae, Porphyridiophyceae, Rhodellophyceae and Stylonematophyceae. This proposal was made on the basis of analyses of plastid genomes.
Species of red algae
Over 7,000 species are currently described for the red algae, but the taxonomy is in constant flux with new species described each year. The vast majority of these are marine with about 200 that live only in fresh water.
Some examples of species and genera of red algae are:
Cyanidioschyzon merolae, a primitive red alga
Atractophora hypnoides
Gelidiella calcicola
Lemanea, a freshwater genus
Palmaria palmata, dulse
Schmitzia hiscockiana
Chondrus crispus, Irish moss
Mastocarpus stellatus
Vanvoorstia bennettiana, became extinct in the early 20th century
Acrochaetium efflorescens
Audouinella, with freshwater as well as marine species
Polysiphonia ceramiaeformis, banded siphon weed
Vertebrata simulans
Phylogeny
While Cyanidiophyceae is universally agreed to be the most basal, the relationships among the remaining six classes in the subphylum Rhodophytina are uncertain. The cladogram below follows the results of a 2016 study concerning diversification times among red algae.
Morphology
Red algal morphology is diverse, ranging from unicellular forms to complex parenchymatous and non-parenchymatous thalli. Red algae have double cell walls: the outer layers contain the polysaccharides agarose and agaropectin, which can be extracted from the cell walls as agar by boiling, while the internal walls are mostly cellulose. Red algae also have the most gene-rich plastid genomes known.
Cell structure
Red algae do not have flagella or centrioles at any point in their life cycle. The distinguishing characters of red algal cell structure include the presence of normal spindle fibres, microtubules, unstacked photosynthetic membranes, phycobilin pigment granules, and pit connections between the cells of filamentous genera, together with the absence of chloroplast endoplasmic reticulum.
Chloroplasts
The presence of the water-soluble pigments called phycobilins (phycocyanobilin, phycoerythrobilin, phycourobilin and phycobiliviolin), which are localized in phycobilisomes, gives red algae their distinctive color. Their chloroplasts contain evenly spaced, ungrouped thylakoids and the pigments chlorophyll a, α- and β-carotene, lutein and zeaxanthin. The chloroplasts are enclosed in a double membrane, lack grana, and bear their phycobilisomes on the stromal surface of the thylakoid membrane.
Storage products
The major photosynthetic products include floridoside (the major product), D-isofloridoside, digeneaside, mannitol, sorbitol and dulcitol. Floridean starch (similar to the amylopectin of land plants), a long-term storage product, is deposited freely (scattered) in the cytoplasm. The concentration of photosynthetic products is altered by environmental conditions such as changes in pH, salinity, light intensity and nutrient availability. When the salinity of the medium increases, production of floridoside rises in order to prevent water from leaving the algal cells.
Pit connections and pit plugs
Pit connections
Pit connections and pit plugs are unique and distinctive features of red algae that form during the process of cytokinesis following mitosis. In red algae, cytokinesis is incomplete. Typically, a small pore is left in the middle of the newly formed partition. The pit connection is formed where the daughter cells remain in contact.
Shortly after the pit connection is formed, cytoplasmic continuity is blocked by the generation of a pit plug, which is deposited in the wall gap that connects the cells.
Connections between cells having a common parent cell are called primary pit connections. Because apical growth is the norm in red algae, most cells have two primary pit connections, one to each adjacent cell.
Connections that exist between cells not sharing a common parent cell are labelled secondary pit connections. These connections form when an unequal cell division produces a nucleated daughter cell that then fuses to an adjacent cell. Patterns of secondary pit connections can be seen in the order Ceramiales.
Pit plugs
After a pit connection is formed, tubular membranes appear. A granular protein called the plug core then forms around the membranes. The tubular membranes eventually disappear. While some orders of red algae simply have a plug core, others have an associated membrane at each side of the protein mass, called cap membranes. The pit plug continues to exist between the cells until one of the cells dies. When this happens, the living cell produces a layer of wall material that seals off the plug.
Function
The pit connections have been suggested to function as structural reinforcement, or as avenues for cell-to-cell communication and transport in red algae; however, little data supports this hypothesis.
Reproduction
The reproductive cycle of red algae may be triggered by factors such as day length. Red algae reproduce sexually as well as asexually. Asexual reproduction can occur through the production of spores and by vegetative means (fragmentation, cell division or propagules production).
Fertilization
Red algae lack motile sperm. Hence, they rely on water currents to transport their gametes to the female organs – although their sperm are capable of "gliding" to a carpogonium's trichogyne. Animals also help with the dispersal and fertilization of the gametes. The first species discovered to do so is the isopod Idotea balthica.
The trichogyne will continue to grow until it encounters a spermatium; once it has been fertilized, the cell wall at its base progressively thickens, separating it from the rest of the carpogonium at its base.
Upon their collision, the walls of the spermatium and carpogonium dissolve. The male nucleus divides and moves into the carpogonium; one half of the nucleus merges with the carpogonium's nucleus.
The polyamine spermine is produced, which triggers carpospore production.
Spermatangia may have long, delicate appendages, which increase their chances of "hooking up".
Life cycle
They display alternation of generations. In addition to a gametophyte generation, many have two sporophyte generations: the carposporophyte, which produces carpospores that germinate into a tetrasporophyte, which in turn produces spore tetrads that dissociate and germinate into gametophytes. The gametophyte is typically (but not always) identical to the tetrasporophyte.
Carpospores may also germinate directly into thalloid gametophytes, or the carposporophytes may produce a tetraspore without going through a (free-living) tetrasporophyte phase. Tetrasporangia may be arranged in a row (zonate), in a cross (cruciate), or in a tetrad.
The carposporophyte may be enclosed within the gametophyte, which may cover it with branches to form a cystocarp.
The two following case studies may be helpful to understand some of the life histories algae may display:
In a simple case, such as Rhodochorton investiens:
In the carposporophyte: a spermatium merges with a trichogyne (a long hair on the female sexual organ), which then divides to form carposporangia – which produce carpospores.
Carpospores germinate into gametophytes, which produce sporophytes. Both of these are very similar; they produce monospores from monosporangia "just below a cross-wall in a filament" and their spores are "liberated through the apex of sporangial cell."
The spores of a sporophyte produce tetrasporophytes. Monospores produced by this phase germinate immediately, with no resting phase, to form an identical copy of the parent. Tetrasporophytes may also produce a carpospore, which germinates to form another tetrasporophyte.
The gametophyte may replicate asexually using monospores, but also produces nonmotile sperm in spermatangia, and a lower, nucleus-containing "egg" region of the carpogonium.
A rather different example is Porphyra gardneri:
In its diploid phase, a carpospore can germinate to form a filamentous "conchocelis stage", which can also self-replicate using monospores. The conchocelis stage eventually produces conchosporangia. The resulting conchospore germinates to form a tiny prothallus with rhizoids, which develops into a cm-scale leafy thallus. This too can reproduce via monospores, which are produced inside the thallus itself. They can also reproduce via spermatia, produced internally, which are released to meet a prospective carpogonium in its conceptacle.
Chemistry
The δ13C values of red algae reflect their lifestyles. The largest difference results from their photosynthetic metabolic pathway: algae that use HCO3− as a carbon source have less negative δ13C values than those that only use dissolved CO2. An additional difference of about 1.71‰ separates intertidal groups from those below the lowest tide line, which are never exposed to atmospheric carbon. The latter group uses the more 13C-negative CO2 dissolved in sea water, whereas those with access to atmospheric carbon reflect the more positive signature of this reserve.
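For reference, δ13C is conventionally defined relative to a standard (today the Vienna Pee Dee Belemnite, VPDB):

```latex
\delta^{13}\mathrm{C} =
  \left(
    \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}
         {(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{standard}}} - 1
  \right) \times 1000\ \text{‰}
```

so the 1.71‰ difference quoted above corresponds to a relative shift of about 0.171% in the 13C/12C ratio.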
Photosynthetic pigments of Rhodophyta are chlorophylls a and d. Red algae are red due to phycoerythrin. They contain the sulfated polysaccharide carrageenan in the amorphous sections of their cell walls, although red algae from the genus Porphyra contain porphyran. They also produce a specific type of tannin called phlorotannins, but in a lower amount than brown algae do.
Genomes and transcriptomes of red algae
As listed in realDB, 27 complete transcriptomes and 10 complete genome sequences of red algae are available. Listed below are the 10 complete genomes of red algae.
Cyanidioschyzon merolae, Cyanidiophyceae
Galdieria sulphuraria, Cyanidiophyceae
Pyropia yezoensis, Bangiophyceae
Chondrus crispus, Florideophyceae
Porphyridium purpureum, Porphyridiophyceae
Porphyra umbilicalis, Bangiophyceae
Gracilaria changii, Gracilariales
Galdieria phlegrea, Cyanidiophytina
Gracilariopsis lemaneiformis, Gracilariales
Gracilariopsis chorda, Gracilariales
Fossil record
One of the oldest fossils identified as a red alga is also the oldest fossil eukaryote that belongs to a specific modern taxon. Bangiomorpha pubescens, a multicellular fossil from arctic Canada, strongly resembles the modern red alga Bangia and occurs in rocks dating to 1.05 billion years ago.
Two kinds of fossils resembling red algae were found sometime between 2006 and 2011 in well-preserved sedimentary rocks in Chitrakoot, central India. The presumed red algae lie embedded in fossil mats of cyanobacteria, called stromatolites, in 1.6 billion-year-old Indian phosphorite – making them the oldest plant-like fossils ever found by about 400 million years.
Red algae are important builders of limestone reefs. The earliest such coralline algae, the solenopores, are known from the Cambrian period. Other algae of different origins filled a similar role in the late Paleozoic, and in more recent reefs.
Calcite crusts that have been interpreted as the remains of coralline red algae, date to the Ediacaran Period. Thallophytes resembling coralline red algae are known from the late Proterozoic Doushantuo formation.
Relationship to other algae
Chromista and Alveolata algae (e.g., chrysophytes, diatoms, phaeophytes, dinophytes) seem to have evolved from bikonts that have acquired red algae as endosymbionts. According to this theory, over time these endosymbiont red algae have evolved to become chloroplasts. This part of endosymbiotic theory is supported by various structural and genetic similarities.
Applications
Human consumption
Red algae have a long history of use as a source of nutritional, functional food ingredients and pharmaceutical substances. They are a source of antioxidants including polyphenols, and phycobiliproteins and contain proteins, minerals, trace elements, vitamins and essential fatty acids.
Traditionally, red algae are eaten raw, in salads, soups, meal and condiments. Several species are food crops, in particular dulse (Palmaria palmata) and members of the genus Porphyra, variously known as nori (Japan), gim (Korea), (China), and laver (British Isles).
Red algal species such as Gracilaria and Laurencia are rich in polyunsaturated fatty acids (eicosapentaenoic acid, docosahexaenoic acid, arachidonic acid) and have a protein content of up to 47% of total biomass. While a large portion of the world's population gets insufficient daily iodine, a single gram of red algae can supply the 150 µg/day iodine requirement. Red algae such as Gracilaria, Gelidium, Eucheuma, Porphyra, Acanthophora, and Palmaria are primarily known for their industrial use for phycocolloids (agar, algin, furcellaran and carrageenan) as thickening agents and in textiles, food, anticoagulants, water-binding agents, etc. Dulse (Palmaria palmata) is one of the most consumed red algae and is a source of iodine, protein, magnesium and calcium. Red algae's nutritional value is also exploited in the dietary supplement algas calcareas.
China, Japan, and the Republic of Korea are the top producers of seaweed. In East and Southeast Asia, agar is most commonly produced from Gelidium amansii. These rhodophytes are easily grown and, for example, nori cultivation in Japan goes back more than three centuries.
Animal feed
Researchers in Australia discovered that limu kohu (Asparagopsis taxiformis) can reduce methane emissions in cattle; in one Hawaii experiment, the reduction reached 77%. The World Bank predicted the industry could be worth ~$1.1 billion by 2030. As of 2024, preparation included three stages of cultivation and drying, and Australia's first commercial harvest took place in 2022. Agriculture accounts for 37% of the world's anthropogenic methane emissions, and one cow produces between 154 and 264 pounds of methane per year.
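As a rough illustration of what the figures above imply, the sketch below applies the reported 77% reduction to the quoted per-cow emission range; this is back-of-the-envelope arithmetic, not data from the Hawaii experiment.

```python
# Back-of-the-envelope estimate: methane avoided per cow per year if the
# 77% reduction observed in one experiment held in practice. Uses the
# 154-264 lb/yr per-cow emission range quoted in the text.
LB_PER_KG = 2.20462

emissions_lb = (154, 264)   # reported per-cow methane, lb/yr
reduction = 0.77            # reduction observed in one experiment

for lb in emissions_lb:
    avoided_lb = lb * reduction
    print(f"{lb} lb/yr -> ~{avoided_lb:.0f} lb "
          f"({avoided_lb / LB_PER_KG:.0f} kg) avoided per year")
# -> 154 lb/yr -> ~119 lb (54 kg); 264 lb/yr -> ~203 lb (92 kg)
```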
Other
Other algae-based markets include construction materials, fertilizers and other agricultural inputs, bioplastics, biofuels and fabric. Red algae also provide ecosystem services such as water filtration and carbon sequestration.
Gallery
| Biology and health sciences | Other organisms | null |
21393031 | https://en.wikipedia.org/wiki/Black%20seadevil | Black seadevil | Black seadevils are small, deepsea lophiiform fishes of the family Melanocetidae. The six known species (with only two given common names) are all within the genus Melanocetus. They are found in tropical to temperate waters of the Atlantic, Indian, and Pacific Oceans, with one species known only from the Ross Sea.
One of several anglerfish families, black seadevils are named for their intimidating appearance and typically pitch black skin. The humpback anglerfish (Melanocetus johnsonii) was featured on the August 14, 1995, issue of Time magazine, becoming a flagship species for deep sea fauna.
Taxonomy
The black seadevil family, Melanocetidae, was first proposed as a subfamily in 1878 by the American biologist Theodore Gill. The only genus in the family is Melanocetus, which was proposed as a monospecific genus in 1864 by the German-born British herpetologist and ichthyologist Albert Günther when he described the humpback anglerfish (M. johnsonii). The type locality of M. johnsonii was given as off Madeira. The 5th edition of Fishes of the World classifies the family Melanocetidae in the suborder Ceratioidei of the anglerfish order Lophiiformes.
Etymology
The family name Melanocetidae and the genus name Melanocetus are a combination of melanos, meaning "black", and cetus, which means a "large sea creature" and is typically used to refer to whales. Günther did not explain this choice of name but did note the uniform black colour, including the inside of the mouth, of M. johnsonii.
Species
The black seadevil family, Melanocetidae, contains the single genus Melanocetus which has six valid species classified within it:
Melanocetus eustalus Pietsch & Van Duzer, 1980
Melanocetus johnsonii Günther, 1864 (Humpback anglerfish)
Melanocetus murrayi Günther, 1887 (Murray's abyssal anglerfish)
Melanocetus niger Regan, 1925
Melanocetus polyactis Regan, 1925
Melanocetus rossi Balushkin & Fedorov, 1981
Physical description
Black seadevils are characterised by a gelatinous, mostly scaleless, globose body, a large head, and generous complement of menacingly large, sharp, glassy, fang-like teeth lining the jaws of a cavernous, oblique mouth. These teeth are depressible and present only in females. Some species have a scattering of epidermal spinules on the body, and the scales (when present) are conical, hollow, and translucent. Like other anglerfishes, black seadevils possess an illicium and esca; the former being a modified dorsal spine—the "fishing rod"—and the latter being the bulbous, bioluminescent "fishing lure". The esca is simple in black seadevils (with either a conical terminus or anterior and posterior ridges in some species), and both it and the illicium are free of denticles.
The bioluminescence is produced by symbiotic bacteria; these bacteria are thought to enter the esca via an external duct (in at least two species, the esca is not luminous until this duct develops, suggesting the bacteria originate from the surrounding seawater). The bacteria, belonging to the family Vibrionaceae, are apparently different in each anglerfish species; the bacteria have yet to be cultured in vitro.
The eyes of black seadevils are small; the pupil is larger than the lens, leaving an aphakic space. As is common among deepsea anglerfish, melanocetids show strong sexual dimorphism: while females may reach a length of 18 cm (7 in) or more, males remain under 3 cm (1 in). Aside from jaw teeth, males also lack lures. Pelvic fins are absent in both sexes. All fins are rounded with slightly incised membranes; the pectoral fins are small. The single dorsal fin is positioned far back from the head, larger than and above the retrorse anal fin.
Females have large, highly distensible stomachs which give the ventral region a flabby appearance. In life, black seadevils are a dark brown to black. The skin is extremely soft and easily abraded during collection or even by simple handling.
Life history
The Melanocetidae appear to buck the trend among deepsea anglers in that the males—despite not feeding as adults and thus being little more than couriers of sperm—are free-living rather than parasitic. A brief attachment to the female probably does occur, however, as evidenced by a case of mistaken identity: a male humpback anglerfish was found attached to the lip of a female horned lantern fish (Centrophryne spinulosa) of an unrelated (though also nonparasitic) family of anglerfish, Centrophrynidae. Little else is known of their reproduction: they are presumed not to be guarders, releasing buoyant eggs into the water which become part of the zooplankton.
While adults have been trawled from as deep as 3,000 m (9,900 ft), larvae appear to remain in the upper 100 m (330 ft) of the water column and gradually descend with maturity. Males likely outnumber—and mature well before—females by a wide margin.
The females use their bioluminescent "fishing poles" to lure both conspecifics and prey, which include crustaceans and small fish such as lanternfish and bristlemouths; their highly distensible stomachs also allow them to swallow prey larger than themselves, an important adaptation to life in the lean depths. In contrast to males, females are poor swimmers and spend most of their time motionless, waiting for something to approach their lures. Predators of black seadevils are not well known, but include lancetfish. On 22 November 2014, during an exploration of Monterey Canyon with a remotely operated vehicle, a team led by Bruce Robison of the Monterey Bay Aquarium Research Institute in California spotted a black seadevil at 600 m (1,900 ft).
| Biology and health sciences | Acanthomorpha | Animals |
21393077 | https://en.wikipedia.org/wiki/Microbiology | Microbiology | Microbiology is the scientific study of microorganisms, whether unicellular (single-celled), multicellular (consisting of multiple cells), or acellular (lacking cells). Microbiology encompasses numerous sub-disciplines including virology, bacteriology, protistology, mycology, immunology, and parasitology.
The organisms that constitute the microbial world are characterized as either prokaryotes or eukaryotes: eukaryotic microorganisms possess membrane-bound organelles and include fungi and protists, whereas prokaryotic organisms, conventionally classified as lacking membrane-bound organelles, include Bacteria and Archaea. Microbiologists traditionally relied on culture, staining, and microscopy for the isolation and identification of microorganisms; however, less than 1% of the microorganisms present in common environments can be cultured in isolation using current means. With the emergence of biotechnology, microbiologists increasingly rely on molecular biology tools such as DNA sequence-based identification, for example the 16S rRNA gene sequence used for bacterial identification.
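As a toy illustration of the principle behind sequence-based identification, the sketch below scores a query sequence against reference sequences by shared k-mers. The sequences are invented fragments, not real 16S rRNA data, and a real workflow would use curated 16S databases and alignment or search tools rather than this simplification.

```python
# Toy sketch of DNA sequence-based identification: score a query against
# reference sequences by k-mer overlap. Sequences are hypothetical
# fragments; real 16S identification uses curated databases and
# dedicated alignment/search software.

def kmers(seq: str, k: int = 8) -> set:
    """Return the set of all k-length substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a: str, b: str, k: int = 8) -> float:
    """Jaccard similarity between the k-mer sets of two sequences."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

references = {  # hypothetical reference fragments, not real 16S data
    "Taxon A": "AGCTGGTCGATCGATCGGCTAGCTAGGCTAACGTAGCTAGCATCG",
    "Taxon B": "TTGACCGGTAGCTAGGCTAACGTAGCTAGCATCGATTGCCAGTCA",
}
query = "GGTAGCTAGGCTAACGTAGCTAGCATCGATTGCC"

best = max(references, key=lambda name: similarity(query, references[name]))
print("Closest reference:", best)  # -> Taxon B
```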
Viruses have been variably classified as organisms, since they have been considered either very simple microorganisms or very complex molecules. Prions, never considered microorganisms, have also been investigated by virologists: the clinical effects traced to them were originally presumed due to chronic viral infections, and in searching for the causative agents virologists instead discovered "infectious proteins".
The existence of microorganisms was predicted many centuries before they were first observed, for example by the Jains in India and by Marcus Terentius Varro in ancient Rome. The first recorded microscope observation was of the fruiting bodies of moulds, by Robert Hooke in 1665, but the Jesuit priest Athanasius Kircher was likely the first to see microbes, which he mentioned observing in milk and putrid material in 1658. Antonie van Leeuwenhoek is considered a father of microbiology as he observed and experimented with microscopic organisms in the 1670s, using simple microscopes of his own design. Scientific microbiology developed in the 19th century through the work of Louis Pasteur and, in medical microbiology, Robert Koch.
History
The existence of microorganisms was hypothesized for many centuries before their actual discovery. The existence of unseen microbiological life was postulated by Jainism which is based on Mahavira's teachings as early as 6th century BCE (599 BC - 527 BC). Paul Dundas notes that Mahavira asserted the existence of unseen microbiological creatures living in earth, water, air and fire. Jain scriptures describe nigodas which are sub-microscopic creatures living in large clusters and having a very short life, said to pervade every part of the universe, even in tissues of plants and flesh of animals. The Roman Marcus Terentius Varro made references to microbes when he warned against locating a homestead in the vicinity of swamps "because there are bred certain minute creatures which cannot be seen by the eyes, which float in the air and enter the body through the mouth and nose and thereby cause serious diseases."
Persian scientists hypothesized the existence of microorganisms, such as Avicenna in his book The Canon of Medicine, Ibn Zuhr (also known as Avenzoar) who discovered scabies mites, and Al-Razi who gave the earliest known description of smallpox in his book The Virtuous Life (al-Hawi). A tenth-century Taoist source describes "countless micro organic worms" resembling vegetable seeds, which prompted the Dutch sinologist Kristofer Schipper to claim that "the existence of harmful bacteria was known to the Chinese of the time."
In 1546, Girolamo Fracastoro proposed that epidemic diseases were caused by transferable seedlike entities that could transmit infection by direct or indirect contact, or vehicle transmission.
In 1676, Antonie van Leeuwenhoek, who lived most of his life in Delft, Netherlands, observed bacteria and other microorganisms using single-lens microscopes of his own design, and for these pioneering observations he is considered a father of microbiology. While Van Leeuwenhoek is often cited as the first to observe microbes, Robert Hooke made his first recorded microscopic observation, of the fruiting bodies of moulds, in 1665. It has, however, been suggested that a Jesuit priest called Athanasius Kircher was the first to observe microorganisms.
Kircher was among the first to design magic lanterns for projection purposes, and so he was well acquainted with the properties of lenses. He wrote "Concerning the wonderful structure of things in nature, investigated by Microscope" in 1646, stating "who would believe that vinegar and milk abound with an innumerable multitude of worms." He also noted that putrid material is full of innumerable creeping animalcules. He published his Scrutinium Pestis (Examination of the Plague) in 1658, stating correctly that the disease was caused by microbes, though what he saw was most likely red or white blood cells rather than the plague agent itself.
The birth of bacteriology
The field of bacteriology (later a subdiscipline of microbiology) was founded in the 19th century by Ferdinand Cohn, a botanist whose studies on algae and photosynthetic bacteria led him to describe several bacteria including Bacillus and Beggiatoa. Cohn was also the first to formulate a scheme for the taxonomic classification of bacteria, and to discover endospores. Louis Pasteur and Robert Koch were contemporaries of Cohn, and are often considered to be the fathers of modern microbiology and medical microbiology, respectively. Pasteur is most famous for his series of experiments designed to disprove the then widely held theory of spontaneous generation, thereby solidifying microbiology's identity as a biological science. One of his students, Adrien Certes, is considered the founder of marine microbiology. Pasteur also designed methods for food preservation (pasteurization) and vaccines against several diseases such as anthrax, fowl cholera and rabies. Koch is best known for his contributions to the germ theory of disease, proving that specific diseases were caused by specific pathogenic microorganisms. He developed a series of criteria that have become known as the Koch's postulates. Koch was one of the first scientists to focus on the isolation of bacteria in pure culture resulting in his description of several novel bacteria including Mycobacterium tuberculosis, the causative agent of tuberculosis.
While Pasteur and Koch are often considered the founders of microbiology, their work did not accurately reflect the true diversity of the microbial world because of their exclusive focus on microorganisms having direct medical relevance. It was not until the late 19th century and the work of Martinus Beijerinck and Sergei Winogradsky that the true breadth of microbiology was revealed. Beijerinck made two major contributions to microbiology: the discovery of viruses and the development of enrichment culture techniques. While his work on the tobacco mosaic virus established the basic principles of virology, it was his development of enrichment culturing that had the most immediate impact on microbiology by allowing for the cultivation of a wide range of microbes with wildly different physiologies. Winogradsky was the first to develop the concept of chemolithotrophy and to thereby reveal the essential role played by microorganisms in geochemical processes. He was responsible for the first isolation and description of both nitrifying and nitrogen-fixing bacteria. French-Canadian microbiologist Felix d'Herelle co-discovered bacteriophages in 1917 and was one of the earliest applied microbiologists.
Joseph Lister was the first to use phenol disinfectant on the open wounds of patients.
Branches
The branches of microbiology can be classified into applied sciences, or divided according to taxonomy, as is the case with bacteriology, mycology, protozoology, virology, phycology, and microbial ecology. There is considerable overlap between the specific branches of microbiology with each other and with other disciplines, and certain aspects of these branches can extend beyond the traditional scope of microbiology. A pure research branch of microbiology is termed cellular microbiology.
Applications
While some people fear microbes because of the association of some microbes with various human diseases, many microbes are also responsible for numerous beneficial processes, such as industrial fermentation (e.g. the production of alcohol, vinegar and dairy products) and antibiotic production, and can act as molecular vehicles to transfer DNA to complex organisms such as plants and animals. Scientists have also exploited their knowledge of microbes to produce biotechnologically important enzymes such as Taq polymerase, reporter genes for use in other genetic systems, and novel molecular biology techniques such as the yeast two-hybrid system.
Bacteria can be used for the industrial production of amino acids, organic acids, vitamins, proteins, antibiotics and other commercially useful metabolites. Corynebacterium glutamicum is one of the most important bacterial species, with an annual production of more than two million tons of amino acids, mainly L-glutamate and L-lysine. Since some bacteria have the ability to synthesize antibiotics, they are used for medicinal purposes; for example, Streptomyces is used to make aminoglycoside antibiotics.
A variety of biopolymers, such as polysaccharides, polyesters, and polyamides, are produced by microorganisms. Microorganisms are used for the biotechnological production of biopolymers with tailored properties suitable for high-value medical applications such as tissue engineering and drug delivery. Microorganisms are, for example, used for the biosynthesis of xanthan, alginate, cellulose, cyanophycin, poly(gamma-glutamic acid), levan, hyaluronic acid, organic acids, oligosaccharides, polysaccharides and polyhydroxyalkanoates.
Microorganisms are beneficial for microbial biodegradation or bioremediation of domestic, agricultural and industrial wastes and subsurface pollution in soils, sediments and marine environments. The ability of each microorganism to degrade toxic waste depends on the nature of each contaminant. Since sites typically have multiple pollutant types, the most effective approach to microbial biodegradation is to use a mixture of bacterial and fungal species and strains, each specific to the biodegradation of one or more types of contaminants.
Symbiotic microbial communities confer benefits to their human and animal hosts health including aiding digestion, producing beneficial vitamins and amino acids, and suppressing pathogenic microbes. Some benefit may be conferred by eating fermented foods, probiotics (bacteria potentially beneficial to the digestive system) or prebiotics (substances consumed to promote the growth of probiotic microorganisms). The ways the microbiome influences human and animal health, as well as methods to influence the microbiome are active areas of research.
Research has suggested that microorganisms could be useful in the treatment of cancer. Various strains of non-pathogenic clostridia can infiltrate and replicate within solid tumors. Clostridial vectors can be safely administered and their potential to deliver therapeutic proteins has been demonstrated in a variety of preclinical models.
Some bacteria are used to study fundamental mechanisms. An example of model bacteria used to study motility or the production of polysaccharides and development is Myxococcus xanthus.
| Biology and health sciences | Basics | null |
21393779 | https://en.wikipedia.org/wiki/Pholcus%20phalangioides | Pholcus phalangioides | Pholcus phalangioides, commonly known as the cosmopolitan cellar spider, long-bodied cellar spider, or one of various types called a daddy long-legs spider, is a spider of the family Pholcidae. This is the only spider species described by the Swiss entomologist Johann Kaspar Füssli, who first recorded it in 1775. Its common name of "daddy long-legs" should not be confused with a different arachnid group with the same common name, the harvestman (Opiliones), or the crane flies of the superfamily Tipuloidea.
Females have a body length of about 8 mm, while males tend to be slightly smaller. The spider's legs are on average 5 or 6 times the length of its body. Pholcus phalangioides has a habit of living on the ceilings of rooms, caves, garages or cellars.
This spider species is considered beneficial in parts of the world because it preys on other spiders, including species considered dangerous such as redback spiders. Pholcus phalangioides is known to be harmless to humans and a potential for the medicinal use of their silk has been reported.
Taxonomy and phylogeny
Pholcus phalangioides was first described in 1775 by the Swiss entomologist Johann Kaspar Füssli. A member of the genus Pholcus in the family Pholcidae, P. phalangioides shares ancestry with roughly 1,340 similar cellar spiders. All of these spiders are known for their characteristic long legs, which can range from 5 to 6 times the size of their bodies. They are not to be confused with organisms of similar physical appearance, such as the crane fly (an insect) and the harvestmen of the arachnid order Opiliones.
Genetic population structure
The population sizes of P. phalangioides are influenced greatly by the presence of human-made buildings since these spiders prefer warmer habitats indoors. The large number of buildings in the world has favoured P. phalangioides, though populations tend to be relatively small, widely dispersed, and greatly isolated from one another. This small size combined with low mobility of populations results in an increased importance placed on the role of genetic drift, more specifically the founder effect, on population structure. Although some gene flow does exist between populations, its importance has been insignificant when compared to that of geographical isolation-driven genetic drift. As a result, most P. phalangioides individuals of the same population that live in the same geographical region will have a very low degree of genetic variation (intrapopulation differentiation). On the other hand, this genetic drift results in significant interpopulation differentiation.
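The degree of interpopulation differentiation described above is commonly quantified with Wright's fixation index FST. The sketch below computes it for a single biallelic locus under the simplifying assumption of equal population sizes; the allele frequencies are hypothetical, not measured values for P. phalangioides.

```python
# Wright's F_ST at one biallelic locus: F_ST = (H_T - H_S) / H_T, where
# H_T is the expected heterozygosity of the pooled population and H_S
# the mean within-population heterozygosity. Allele frequencies below
# are hypothetical, not measurements for this species.

def expected_het(p: float) -> float:
    """Expected heterozygosity 2p(1-p) at a biallelic locus."""
    return 2 * p * (1 - p)

def fst(allele_freqs: list[float]) -> float:
    p_bar = sum(allele_freqs) / len(allele_freqs)  # pooled frequency (equal sizes assumed)
    h_t = expected_het(p_bar)                      # total heterozygosity
    h_s = sum(expected_het(p) for p in allele_freqs) / len(allele_freqs)
    return (h_t - h_s) / h_t

# Strongly diverged, isolated building populations -> high F_ST
print(f"F_ST = {fst([0.9, 0.1, 0.8, 0.2]):.2f}")  # -> F_ST = 0.50
```

Under this toy scenario, half of the total genetic variance lies between populations, mirroring the pattern of low intrapopulation and high interpopulation differentiation described in the text.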
Description
Pholcus phalangioides are sexually dimorphic, where females are slightly larger than the males of the species. The body length of this species varies between males and females. Males tend to be around 6 to 10 mm in length with the average male being around 6 mm. The average female ranges from 7 to 8 mm in length. As indicated by their common name, "daddy long-legs", these spiders boast eight very long and thin legs which are covered in thin, grey bristles. On average, their legs are roughly 5 to 6 times as long as the spider's body. The average length of an adult female's legs is roughly 50 mm.
The bodies of P. phalangioides, as with all spiders, can be divided into two parts: the prosoma and the opisthosoma. The prosoma is commonly known as the cephalothorax, and the opisthosoma is commonly known as the abdomen. Most of the prosoma is occupied by the brain. The opisthosoma is considered the posterior part of the body which contains most of the spider's internal organs including the heart, respiratory system, mid-gland, reproductive system, genital tract and silk glands. The translucent bodies of P. phalangioides tend to be a grey-pale brown color with a dark spot on the back of the prosoma and some dark, blurred spots on the dorsal side of the opisthosoma.
Although some other members of the family Pholcidae have six eyes, Pholcus phalangioides is an eight-eyed spider. The eyes are arranged such that there is a pair of smaller, dark eyes at the front of the prosoma followed by three parallel rows of pairs of larger eyes.
Similar to other species of spider, a hard exoskeleton coats the bodies of P. phalangioides. Depending on the age of the spider, this exoskeleton must be shed at differing intervals; younger spiders tend to molt much more often. During molting, the spider will produce certain enzymes that release the rest of its body from the underlying tissue of its exoskeleton. The spider is then able to escape the exoskeleton. The remnant outer skin or exoskeleton is known as the exuviae.
It takes about one year for these spiders to mature after they are born, and their life span is up to two years or more post-maturity.
Distribution and habitat
Because of its spread with humans worldwide, there has been some uncertainty about the exact original, native range of P. phalangioides, although it is recognized as likely lying in the subtropical parts of Afro-Eurasia, consistent with the species' preference for warmer climates; recent authorities regard it as native only to Asia. As a synanthropic species, Pholcus phalangioides has largely had its modern geographic distribution determined by the spread of humans around the world. Today, these spiders can be found on every continent.
P. phalangioides is not suited to survival in cold environments, which is why, in colder regions, it prefers the warmth of the indoors, specifically inside human dwellings. These spiders have a particular affinity for dimly lit, dark areas that are quiet and calm. They are commonly found in the corners of buildings and people's homes as well as in attics. Populations of Pholcus phalangioides living outdoors can be found in caves and between rock crevices.
Diet
P. phalangioides are carnivorous predators that feed on insects, other spiders, and other small invertebrates. Unlike many other spiders, who simply feed on prey that have gotten stuck in their webs, these spiders frequently venture out from their own webs to hunt other spiders resting in their respective webs and feed on them or their eggs. In times of low prey availability, both the males and females of the species will turn to cannibalism to meet their nutritional needs.
General ethology
Web patterns
In general, the webs of P. phalangioides are loose and horizontal with many irregularities. These webs are often intertwined with webs of other spiders of the same population. They live peacefully unless resources are low at which point the spiders turn to cannibalism.
Communication
Communication in P. phalangioides is chiefly observed during mating. The primary form of communication for these spiders is through touch and chemicals, specifically pheromones.
Predation behaviors
Predators
This species is preyed upon by jumping spiders of the family Salticidae. Some of these spiders simply leap into the webs of their prey and attack them. Others employ a strategy known as mimicry in order to trick P. phalangioides and capture them.
A jumping spider whose aggressive mimicry of P. phalangioides has been well studied is Portia fimbriata, of the genus Portia. During mimicry, the jumping spider produces specialized vibrations near the edge of the web of P. phalangioides. These vibrations cause the web to oscillate in a way that mimics the oscillations produced when prey becomes stuck in it. The jumping spider can keep up these vibrations for long periods, up to three days in some instances. P. phalangioides often assumes that this is an indication that it has caught some sort of prey and moves toward the source of the vibrations. At this point, the jumping spider is in an optimal position to leap onto and attack P. phalangioides, subduing it in many instances. In addition to employing mimicry, these jumping spiders are also particularly good at preventing P. phalangioides from initiating the whirling defense mechanism, which tends to be an effective way for P. phalangioides to defend itself from predators.
Defensive behavior
The primary defense strategy of P. phalangioides against predation is whirling. Whirling, or gyration of the body, consists of the spider swinging its body around in circles repeatedly while its legs remain fixed on the web. Whirling begins as soon as the individual detects any movement in its web. Its duration is related to the kind of predator the spider encounters. Short-duration whirling can be induced simply by a human touching the spider's web, or occasionally by a spider of a different species. Long-duration whirling, which can last several hours or even days, is performed far more often in response to the more threatening salticids, or jumping spiders, than to spiders of other families. The rapid gyration disturbs the vision of the salticids such that they can no longer rely on their acute eyesight to pinpoint the location of P. phalangioides. This disruption protects the spider from an otherwise deadly predator.
Mimicry
Much like the jumping spiders of the family Salticidae, P. phalangioides also uses mimicry as a predatory tactic to subdue prey; unlike jumping spiders, however, P. phalangioides does not rely on vision for predation. This mimicry consists of creating specialized vibrations that trick the prey into thinking it has caught an insect or another spider. The prey then slowly approaches its supposed catch, at which point the P. phalangioides spider rises up on its long legs. The spider waits patiently until the exact moment the prey touches one of its legs, then quickly immobilizes the prey by using its legs to wrap it in layers of silk. Its long legs give it enough distance from the prey to avoid being bitten in retaliation. After immobilizing the prey, P. phalangioides can administer its venomous bite and consume it.
Even prey that does not fully make it onto the web of P. phalangioides is not safe. Often, prey will trip over the edges of the web, providing P. phalangioides with an optimal moment to attack. The spider can cling to its web with two of its legs while the rest of its body leans out of the web and shoots silk in the direction of the prey to subdue it.
Bite
It is a common misconception that P. phalangioides is incapable of biting humans because its fangs cannot penetrate the human epidermis. These spiders can bite humans, since their fangs are roughly 0.25 mm long while the human epidermis is thinner, around 0.1 mm; however, there are hardly any reports of bites.
Venom
Although these spiders are capable of hunting and killing some of the most venomous spiders in the world such as the redback spider, they are not dangerous to humans. According to researchers Greta Binford and Pamela Zobel-Thropp, the effects of P. phalangioides venom on humans and other mammals are negligible. In humans, the P. phalangioides bite simply results in a mild stinging sensation that has no long-term health consequences. A recent study has even shown that Pholcidae venom has a relatively weak effect, even on insects.
Reproduction
Male genitalia
Overall genital system structure
The genital system of an adult male P. phalangioides is located in the ventral portion of the opisthosoma and is characterized by a large pair of testes and thin, twisted vasa deferentia that become thicker as they near the genital opening of the male pedipalp. These vasa deferentia fuse distally, creating the ductus ejaculatorius of the spider. The ductus ejaculatorius has a lumen that contains large quantities of spermatozoa and other secretions. This variety of secretions is not seen in subadult males, whose lumen contains only a dense secretion matrix. Ampullate silk glands ventrally surround specific portions of the genital tract, and overall the genital system is bordered by parts of the midgut gland. All stages of spermatogenesis are apparent in the adult testes, and the spermatozoa are coiled. In order to reach this stage with a fully formed male genital system, P. phalangioides must first go through two subadult phases.
Stages of genital development
The first stage occurs roughly four weeks before the spider's final molt. Unlike adult males, young males possess a broad tarsus that does not appear to contain any internal structures or appendages. Their pedipalps are greatly bent at the joint between the tibia and patella. The testes at this point in the young male's life appear very similar to those of adult males, both in physical structure and in the presence of all stages of spermatogenesis. This spermatogenesis takes place in cysts that contain spermatids. During this time, there is very little observable secretory activity in the testes. As in the adult genital system, the vas deferens in young males is connected to the distal, thin part of the testis. The distal portion of the vas deferens is extremely narrow and contains no spermatozoa or other secretions. The proximal region, on the other hand, consists of a thick epithelium and an intricate luminal region containing spermatozoa.
The second stage of development is observed two weeks prior to the spider's final molt. At this point, the pedipalps of the spider are only partially bent, and the internal structures of the tarsus can be seen. The testes are dimensionally very similar to those of stage-one subadults and adult males. The distal portion of the vas deferens becomes thinner and twists into a tube-like shape. Spermatozoa and other secretions are extensively present in the proximal portion of the vas deferens. However, as in stage-one males, these males still do not appear to carry any secretions or spermatozoa in the distal portion of the vas deferens. This is in contrast to adults, in which spermatozoa are present in all regions of the vas deferens.
Spermatogenesis
Spermatogenesis for males of the P. phalangioides species commences weeks before maturity and continues throughout their lives.
Female genitalia
Many female spiders possess sac-like structures in which sperm from the male is stored; however, females of P. phalangioides do not have such a receptaculum seminis. Instead, the posterior wall of the uterus externus, or genital cavity, serves as the site of sperm storage. The females have two accessory glands located in the dorsal part of the uterus externus. These glands release a secretion into the uterus externus that functions as a matrix to hold the male spermatozoa and seminal fluid in place upon copulation. The accessory glands are composed of multiple glandular units, each consisting of two secretory cells and inner and outer envelope cells. The envelope cells surround the secretory cells and form a cuticular ductule, or canal, that runs from the secretory cells to the two pore plates located on the uterus externus. These pore plates are the exit sites for the glandular secretion into the uterus externus.
Mating behaviors
Courtship
Male courtship in P. phalangioides can be observed in four different steps: abdominal vibrations, tapping of the female's web, web jerking, and tapping the female's legs. In order to mate with the females, the males must perform courtship in a manner which will not result in the female assuming that the male is prey. Otherwise, the male would be attacked.
As the males approach the females, they begin a series of rapid dorso-ventral vibrations with their opisthosoma. This occurs only once the females have noticed the presence of the males. The males then use the ventral portion of their tarsus to begin tapping on the female's web. This tapping can last up to twenty minutes as the male inches closer to the female. Then, using claws on their tarsus, the males hook onto the web and perform rapid jerking movements with their legs. On average, this jerking lasts for a few minutes, with each jerk lasting less than half a second. Between sequences of jerking, the males continue to move closer to the females. The males then tap on the female's legs, with their cephalothorax positioned downwards, for eight minutes on average. At this point, receptive females take on a specific position in which they are motionless with their opisthosoma turned horizontally and their legs extended outward. Before coupling, many of the males use their pedipalps to cut away parts of the web closest to the female.
Copulation
Copulation begins as the males use their chelicerae to move rapidly back and forth across the female's ventral body surface in an attempt to grab hold of the female's body and mount her epigyne. For some males, it can take up to 100 attempts to mount properly. Once mounted, the males pull the females closer to them, rotating the female opisthosoma from a horizontal to a vertical position. At this point, the male is able to insert his pedipalps into the genital cavity of the female. During the multiple insertions, the male's pedipalps twist through a series of synchronized movements, with the procursi inserted deeply into the female genital cavity to release sperm into the uterus externus. As the coupling lengthens, the number of palpal insertions decreases. The duration of copulation depends on whether the female has previously mated: if she has, second males are allowed to copulate for only a few minutes, whereas first males can copulate for anywhere between 16 and 122 minutes. Once mating has finished, the females often act aggressively towards the males in an attempt to drive them off.
Male competition
Because palpal, or genital bulb, movements by a male displace spermatozoa and other seminal fluid from the female uterus externus, sperm competition exists between males of the P. phalangioides species. A rival male can attempt to displace the sperm of another male from the female's genital cavity by copulating with her; however, because copulation is much shorter for second males, leaving less time to displace a rival's sperm, it is unlikely that the spermatozoa of a rival second male would greatly outnumber those of the first male in the uterus externus.
Biomedical applications
Medicinal benefit
The use of spider silk in the medical field has gained much recognition over the last twenty years. Silk has been valued for wound healing because it contains compounds such as vitamin K. Spider silk is primarily composed of proteins made up of non-polar amino acids such as glycine and alanine. It also contains the organic compound pyrrolidine, which retains the silk's moisture, and potassium nitrate, which prevents fungal or bacterial growth on the silk.
Antibacterial activity
Certain antimicrobial biomolecules found in the spider silk of P. phalangioides can inhibit drug-resistant human pathogens, including the gram-positive bacteria L. monocytogenes, Staphylococcus aureus, and Bacillus subtilis and the gram-negative E. coli and Pseudomonas aeruginosa. More generally, researchers hope that the antimicrobial biomolecules of this spider silk could serve as a natural antimicrobial agent against a host of infectious bacterial diseases that are resistant to antibiotics.
Biological imaging
Spiders are capable of spinning a multitude of unique silks. These silks vary in the compounds and proteins they contain and in the functions they serve for the spider. One type, known as dragline silk, is of particular interest to researchers because of its high elasticity, toughness, and tensile strength; it has been shown to be significantly stronger than steel of the same weight. Dragline silk serves as the spider's attachment to its web, used when retreating from predators or simply returning to the web, and it also forms the radial spokes of a spider's web.
To examine the potential role of this dragline silk in biological imaging, resin was dripped onto the fibers of Pholcus phalangioides silk. As it condensed, the silk molded naturally into a dome or lens shape. By shining a laser onto this lens, researchers were able to generate high-quality photonic nanojets (PNJs), or high-intensity scattered beams of light. These photonic nanojets could be adjusted by manipulating the amount of time that the silk spends in contact with the resin. This adjustable spider silk-based lens could be used in the future for biological tissue imaging, highlighting the biomedical importance of P. phalangioides.
| Biology and health sciences | Spiders | Animals |
21396352 | https://en.wikipedia.org/wiki/Titanoboa | Titanoboa | Titanoboa (; ) is an extinct genus of giant boid snake (the family that includes all boas and anacondas) that lived during the middle and late Paleocene. Titanoboa was first discovered in the early 2000s by the Smithsonian Tropical Research Institute which, along with students from the University of Florida, recovered 186 fossils of Titanoboa from the La Guajira department in northeastern Colombia. It was named and described in 2009 as Titanoboa cerrejonensis, at the time the largest snake ever found. It was originally known only from thoracic vertebrae and ribs, but later expeditions collected parts of the skull and teeth. Titanoboa is placed in the subfamily Boinae, being most closely related to extant boines from Madagascar and the Pacific.
Titanoboa could grow up to long, perhaps even up to long, and weigh around . The discovery of Titanoboa cerrejonensis supplanted the previous record holder, Gigantophis garstini, which is known from the Eocene of Egypt. Titanoboa evolved following the extinction of all non-avian dinosaurs, being one of the largest reptiles to evolve after the Cretaceous–Paleogene extinction event. Its vertebrae are very robust and wide, with a pentagonal shape in anterior view, as in other members of Boinae. Titanoboa is thought to have been a semi-aquatic apex predator, with a diet consisting primarily of fish.
History and naming
In 2002, during an expedition to the coal mines of Cerrejón in La Guajira launched by the University of Florida and the Smithsonian Tropical Research Institute, large thoracic vertebrae and ribs were unearthed by the students Jonathon Bloch and Carlos Jaramillo. More fossils were unearthed over the course of the expedition, eventually totaling 186 fossils from 30 individuals. The expedition lasted until 2004, during which time the fossils of Titanoboa were mistakenly labeled as those of crocodiles. They were found in association with other giant reptile fossils of turtles and crocodilians from the Cerrejón Formation, dating to the mid-late Paleocene epoch (around 60-58 mya), a period just after the Cretaceous–Paleogene extinction event. Before this discovery, few fossils of Paleocene-epoch vertebrates had been found in the ancient tropical environments of South America. The fossils were then transported to the Florida Museum of Natural History, where they were studied and described in 2009 by an international team of Canadian, American, and Panamanian scientists led by Jason J. Head of the University of Toronto. The snake elements were described as those of a novel, giant boid snake named Titanoboa cerrejonensis. The genus name derives from the Greek word "Titan" in addition to Boa, the type genus of the family Boidae. The species name is a reference to the Cerrejón region where it was found. The designated holotype is a single dorsal vertebra cataloged as UF/IGM 1, which was used by Head et al. (2009) for the initial size estimates of T. cerrejonensis.
Another expedition to Cerrejón, launched in 2011, found more fossils of Titanoboa. Most notably, the group returned with three disarticulated skulls, making Titanoboa one of the few fossil snakes with preserved cranial material. The skulls were associated with postcranial material, cementing their referral to the species. Though the skulls remain undescribed, a BBC article in 2012 and an abstract for the Society of Vertebrate Paleontology have discussed them. A documentary on the animal titled Titanoboa: Monster Snake aired in 2012, alongside a touring exhibit of the same name that lasted from 2013 to 2018. In 2023, some of the vertebrae from the referred specimen UF/IGM 16 were reassigned to an indeterminate palaeophiine.
Description
Size
Based on the size of the vertebrae, Titanoboa is the largest snake in the paleontological record. In modern constrictors like boids and pythonids, increased body size is achieved through larger vertebrae rather than an increase in the number of bones making up the skeleton, allowing for length estimates based on individual bones. Based on comparison between the undistorted Titanoboa vertebrae and the skeletons of modern boas, Head and colleagues found that the analyzed specimens fit a position towards the latter half of the precloacal vertebral column, approximately 60 to 65% back from the first two neck vertebrae. Using this method, initial size estimates proposed a total body length of approximately (± ). Weight was determined by comparing Titanoboa to the extant green anaconda and the southern rock python, resulting in a weight between (mean estimate ). These estimates far exceed the largest modern snakes, the green anaconda and the reticulated python, as well as the previous record holder, the madtsoiid Gigantophis. The existence of eight additional specimens of similar size to the one used in these calculations implies that Titanoboa reached such massive proportions regularly. The later discovery of skull material allowed for size estimates based on skull-to-body-length proportions. Applying anaconda proportions to the skull of Titanoboa results in a total body length of around (± ). In 2016, Feldman and colleagues estimated that a long individual would have weighed at maximum, based on their equation for estimating the body size of boids.
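The arithmetic behind such estimates is simple ratio-based scaling: a proportion measured in a modern relative is applied to the fossil element. The sketch below illustrates the idea only; every number in it is invented, since the published measurements are not reproduced in this text.
    # Purely illustrative ratio-based scaling of the kind described above.
    # All values are hypothetical placeholders, not the published data.
    skull_length_m = 0.40                # assumed fossil skull length
    anaconda_body_to_skull_ratio = 32.0  # assumed body:skull proportion
    estimated_total_length_m = skull_length_m * anaconda_body_to_skull_ratio
    print(f"estimated total length: {estimated_total_length_m:.1f} m")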
Anatomy
Many of the fossils of Titanoboa are incomplete or undescribed, consisting primarily of thoracic vertebrae located ahead of the cloaca. It possesses the same characteristics as other boids, and especially Boa, such as a short, posteriorly pointing prezygapophyseal process on these vertebrae. However, the vertebrae of Titanoboa are distinct in being very robust and in having a uniquely T-shaped neural spine. The neural spine also has an expanded posterior margin and a thin, blade-like anterior process. The vertebrae also have much smaller foramina (small pits in the bone) on the centrum and lateral sides than those of many other boids.
The skull is only briefly described in a 2013 abstract. According to it, Titanoboa has a high count of palatal and marginal tooth positions compared to other boids. The quadrate bone is oriented at a low angle, and the articulations of the palatine with the pterygoid and of the pterygoid with the quadrate are heavily reduced, a trait absent in its relatives. The teeth themselves are weakly ankylosed, meaning they are not strongly connected to the jawbone.
Classification
Titanoboa is placed in the family Boidae, a family of constricting snakes that evolved during the Late Cretaceous in what is now the Americas. Boids are a widely distributed group, with six subfamilies found on nearly every continent; Titanoboa is placed in the subfamily Boinae based on vertebral morphology. All known boines are from the Americas, reaching as far north as Mexico and the Antilles and as far south as Argentina. Titanoboa is also the only extinct boine genus known; all other boine genera are still living.
The skull material confirmed Titanoboa's initial placement within the subfamily, now also supported by the reduced palatine choanal process. The 2013 abstract recovered Titanoboa as closely related to taxa from the Pacific islands and Madagascar, linking the Old World and New World boids and suggesting that the two lineages diverged by the Paleocene at the latest. This would place Titanoboa at the stem of Boinae, a result corroborated by a study in 2015.
The cladogram below follows the 2015 phylogenetic analysis:
Paleobiology
Diet
Initially, Titanoboa was thought to have behaved much like a modern anaconda, based on its size and the environment it lived in, with researchers suggesting that it may have fed on the local crocodylomorph fauna. However, in the 2013 abstract, Jason Head and colleagues noted that the skull of this snake displays multiple adaptations to a piscivorous diet, including the anatomy of the palate, the tooth count, and the anatomy of the teeth themselves. These adaptations are not seen in other boids but closely resemble those of modern caenophidian snakes with a piscivorous diet. Such a lifestyle would be supported by the extensive rivers of Paleocene Colombia, as well as the fossil fish (lungfish and osteoglossomorphs) recovered from the formation.
Habitat
Due to the warm and humid greenhouse climate of the Paleocene, the region of what is now Cerrejón was a coastal plain covered by wet tropical forests with large river systems, which were inhabited by various freshwater animals. Among the native reptiles are three different genera of dyrosaurs, crocodylomorphs that survived the K–Pg extinction event independently of modern crocodilians. The genera that coexisted alongside Titanoboa included the large, slender-snouted Acherontisuchus, the medium-sized but broad-headed Anthracosuchus, and the relatively small Cerrejonisuchus. Turtles also thrived in the tropical wetlands of Paleocene Colombia, giving rise to several species of considerable size such as Cerrejonemys and Carbonemys.
The rainforests of the Cerrejón Formation mirror modern tropical forests in regards to which families make up most of the vegetation. But unlike modern tropical forests, these Paleocene forests had fewer species. Although it is possible that this low diversity was a result of the wetland nature of the depositional environment, samples from other localities in the same time frame suggest that all of the forests that arose shortly after the Cretaceous–Paleogene mass extinction were of similar composition. This indicates that the low plant diversity of the time was a direct result of the mass extinction preceding it. Plants found in these Paleocene forests include the floating fern Salvinia and various genera of Zingiberales and Araceae.
Climate implications
In the 2009 type description, Head and colleagues correlate the gigantism observed in Titanoboa with the climatic conditions of its environment. As a poikilothermic ectotherm, Titanoboa's internal temperature and metabolism were heavily dependent on the ambient temperature, which would in turn affect the animal's size. Accordingly, large ectothermic animals are typically found in the tropics and decrease in size the further one moves away from the equator. Following this correlation, the authors suggest that the mean annual temperature can be calculated by comparing the maximum body sizes of poikilothermic animals found in two localities. Based on the relation between temperatures in the modern Neotropics and the maximum length of anacondas, Head and colleagues calculated a mean annual temperature of at least for the equatorial region of Paleocene South America. The estimates are consistent with a hot Paleocene climate as suggested by a study published in 2003 and slightly higher (1–5 °C) than estimates derived from the oxygen isotopes of planktonic foraminifera. Although these estimates exceed the temperatures of modern tropical forests, the paper argued that the increase in temperature was balanced out by higher amounts of rainfall.
However, this conclusion was questioned by several researchers following the publication of the paper. J. M. Kale Sniderman applied the same methodology as Head and colleagues to the Pleistocene monitor lizard Varanus priscus, comparing it to the extant Komodo dragon. Sniderman calculated that, following this method, the modern tropics should be able to support lizards much larger than what is observed today, or conversely, that Varanus priscus was much larger than the ambient temperature of its native range would imply. He concluded that Paleocene rainforests may not have been any hotter than those of today, and that the massive sizes of Titanoboa and Varanus priscus may instead be the result of a lack of significant mammalian competition. Mark W. Denny, Brent L. Lockwood and George N. Somero also disagreed with Head's conclusion, noting that although this method is applicable to smaller poikilotherms, it is not constant across all size ranges. As thermal equilibrium is achieved through the relation between volume and surface area, they argue that the large size of Titanoboa, coupled with the high temperatures proposed by Head et al., would mean that the animal would easily overheat if resting in a coiled-up state. The authors conclude that several key factors influence the relationship between Titanoboa and the temperature of the area it inhabited: varying posture could help the animal cool down if needed, basking behavior and heat absorption through the substrate are both unknown, and the potentially semi-aquatic nature of the animal creates additional factors to consider. Ultimately, Denny and colleagues argue that the nature of the giant snake makes it a poor indicator of Paleocene climate and that the mean annual temperature must have been cooler than the current estimate.
These issues, alongside adjustments suggested by Makarieva, were addressed by Head and his team the same year, who argued that Denny and colleagues misunderstood their proposed model. They retorted that the method takes into account variation caused by body size and is furthermore based on the largest extant snakes, making it an appropriate method. They also added that the results are consistent with large extant snakes, which are likewise known to thermoregulate through behavior. Sniderman's proposal that the correlation between body size and temperature is inconsistent with modern monitor lizards is addressed twofold. For one, Head argues, Komodo dragons are a poor analogy, as they are geographically restricted to the islands of Indonesia, limiting the size they could grow to, while both green anacondas and Titanoboa are mainland animals. Secondly, the response notes that the size estimates used for Varanus priscus are overestimates and unreliable, being based on secondary reports that do not match better-supported estimates indicating a range for the monitor.
| Biology and health sciences | Prehistoric squamates | Animals |
20344155 | https://en.wikipedia.org/wiki/Electrical%20grid | Electrical grid | An electrical grid (or electricity network) is an interconnected network for electricity delivery from producers to consumers. Electrical grids consist of power stations, electrical substations to step voltage up or down, electric power transmission to carry power over long distances, and finally electric power distribution to customers. In that last step, voltage is stepped down again to the required service voltage. Power stations are typically built close to energy sources and far from densely populated areas. Electrical grids vary in size and can cover whole countries or continents. From small to large there are microgrids, wide area synchronous grids, and super grids. The combined transmission and distribution network is part of electricity delivery, known as the power grid.
Grids are nearly always synchronous, meaning all distribution areas operate with three phase alternating current (AC) frequencies synchronized (so that voltage swings occur at almost the same time). This allows transmission of AC power throughout the area, connecting the electricity generators with consumers. Grids can enable more efficient electricity markets.
Although electrical grids are widespread, 1.4 billion people worldwide were not connected to an electricity grid. As electrification increases, the number of people with access to grid electricity is growing. About 840 million people (mostly in Africa), roughly 11% of the world's population, had no access to grid electricity in 2017, down from 1.2 billion in 2010.
Electrical grids can be prone to malicious intrusion or attack; thus, there is a need for electric grid security. Also, as electric grids are modernized and incorporate computer technology, cyber threats become a security risk. Particular concerns relate to the more complex computer systems needed to manage grids.
Types (grouped by size)
Microgrid
A microgrid is a local grid that is usually part of the regional wide-area synchronous grid but which can disconnect and operate autonomously, for instance when the main grid is affected by outages. This is known as islanding, and an islanded microgrid may run indefinitely on its own resources.
Compared to larger grids, microgrids typically use a lower voltage distribution network and distributed generators. Microgrids may not only be more resilient, but may be cheaper to implement in isolated areas.
A design goal is that a local area produces all of the energy it uses.
Example implementations include:
Hajjah and Lahj, Yemen: community-owned solar microgrids.
Île d'Yeu pilot program: sixty-four solar panels with a peak capacity of 23.7 kW on five houses and a battery with a storage capacity of 15 kWh.
Les Anglais, Haiti: includes energy theft detection.
Mpeketoni, Kenya: a community-based diesel-powered micro-grid system.
Stone Edge Farm Winery: micro-turbine, fuel-cell, multiple battery, hydrogen electrolyzer, and PV enabled winery in Sonoma, California.
Wide area synchronous grid
A wide area synchronous grid, also known as an "interconnection" in North America, is an electrical grid at a regional scale or greater that operates at a synchronized frequency and is electrically tied together during normal system conditions, directly connecting many generators delivering AC power at the same relative frequency to many consumers. For example, there are four major interconnections in North America (the Western Interconnection, the Eastern Interconnection, the Quebec Interconnection and the Texas Interconnection), while in Europe one large grid connects most of Western Europe.
Such grids are also known as synchronous zones, the largest of which is the synchronous grid of Continental Europe (ENTSO-E) with 667 gigawatts (GW) of generation, while the widest region served is that of the IPS/UPS system serving countries of the former Soviet Union. Synchronous grids with ample capacity facilitate electricity market trading across wide areas. In the ENTSO-E area in 2008, over 350,000 megawatt hours were sold per day on the European Energy Exchange (EEX).
Each of the interconnections in North America is run at a nominal 60 Hz, while those of Europe run at 50 Hz. Neighbouring interconnections with the same frequency and standards can be synchronized and directly connected to form a larger interconnection, or they may share power without synchronization via high-voltage direct current power transmission lines (DC ties), or with variable-frequency transformers (VFTs), which permit a controlled flow of energy while also functionally isolating the independent AC frequencies of each side.
The benefits of synchronous zones include pooling of generation, resulting in lower generation costs; pooling of load, resulting in significant equalizing effects; common provisioning of reserves, resulting in cheaper primary and secondary reserve power costs; opening of the market, resulting in possibility of long-term contracts and short term power exchanges; and mutual assistance in the event of disturbances.
One disadvantage of a wide-area synchronous grid is that problems in one part can have repercussions across the whole grid. For example, in 2018 Kosovo used more power than it generated due to a dispute with Serbia, leading to the phase across the whole synchronous grid of Continental Europe lagging behind what it should have been. The frequency dropped to 49.996 Hz. This caused certain kinds of clocks to become six minutes slow.
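The size of such clock errors follows from simple cycle counting: mains-synchronous clocks convert counted AC cycles to seconds at the nominal frequency, so a sustained frequency deficit accumulates over time. A minimal sketch in Python, using the 49.996 Hz figure above but with the deviation's duration assumed purely for illustration:
    # How a sustained frequency deviation translates into error for clocks
    # that count AC cycles. The duration here is an assumed example value,
    # not the exact figure from the 2018 incident.
    NOMINAL_HZ = 50.0
    actual_hz = 49.996
    days = 52                                       # assumed duration
    elapsed_s = days * 24 * 3600
    clock_s = elapsed_s * (actual_hz / NOMINAL_HZ)  # time the clock shows
    lag_min = (elapsed_s - clock_s) / 60
    print(f"clock lags by about {lag_min:.1f} minutes")  # ~6 minutes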
Super grid
A super grid or supergrid is a wide-area transmission network that is intended to make possible the trade of high volumes of electricity across great distances. It is sometimes also referred to as a mega grid. Super grids can support a global energy transition by smoothing local fluctuations of wind energy and solar energy. In this context they are considered a key technology to mitigate global warming. Super grids typically use high-voltage direct current (HVDC) to transmit electricity over long distances. The latest generation of HVDC power lines can transmit energy with losses of only 1.6% per 1000 km.
Electric utilities in different regions are often interconnected for improved economy and reliability. Electrical interconnectors allow for economies of scale, letting energy be purchased from large, efficient sources. Utilities can draw power from generator reserves in a different region to ensure continuing, reliable power and to diversify their loads. Interconnection also allows regions access to cheap bulk energy by receiving power from different sources. For example, one region may be producing cheap hydro power during high-water seasons, while in low-water seasons another area may be producing cheaper power through wind, allowing both regions to access cheaper energy sources from one another during different times of the year. Neighboring utilities also help others maintain the overall system frequency and help manage tie transfers between utility regions.
The Electricity Interconnection Level (EIL) of a grid is the ratio of the total interconnector power to the grid divided by the installed production capacity of the grid. The EU has set a target for national grids of an EIL of 10% by 2020 and 15% by 2030.
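As a worked example of the ratio (with invented figures, not data for any particular country):
    # Electricity Interconnection Level: total interconnector capacity
    # divided by installed production capacity. Figures are hypothetical.
    interconnector_gw = 3.0
    installed_capacity_gw = 25.0
    eil = interconnector_gw / installed_capacity_gw
    print(f"EIL = {eil:.0%}")  # 12% -- above the EU's 10% target for 2020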
Components
Generation
Electricity generation is the process of generating electric power at power stations. This is done ultimately from sources of primary energy typically with electromechanical generators driven by heat engines from fossil, nuclear, and geothermal sources, or driven by the kinetic energy of water or wind. Other power sources are photovoltaics driven by solar insolation, and grid batteries.
The sum of the power outputs of generators on the grid is the production of the grid, typically measured in gigawatts (GW).
Transmission
Electric power transmission is the bulk movement of electrical energy from a generating site, via a web of interconnected lines, to an electrical substation, from which it is connected to the distribution system. This networked system of connections is distinct from the local wiring between high-voltage substations and customers. Transmission networks are complex, with redundant pathways; redundancy allows line failures to occur while power is simply rerouted as repairs are made.
Because the power is often generated far from where it is consumed, the transmission system can cover great distances. For a given amount of power, transmission efficiency is greater at higher voltages and lower currents. Therefore, voltages are stepped up at the generating station, and stepped down at local substations for distribution to customers.
Most transmission is three-phase. Three phase, compared to single phase, can deliver much more power for a given amount of wire, since the neutral and ground wires are shared. Further, three-phase generators and motors are more efficient than their single-phase counterparts.
However, for conventional conductors one of the main losses is resistive loss, which scales with the square of the current and depends on distance. High-voltage AC transmission lines can lose 1-4% per hundred miles, while high-voltage direct current can have half the losses of AC. Over very long distances, these efficiencies can offset the additional cost of the required AC/DC converter stations at each end.
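The square-law behaviour is why stepping up the voltage pays off: for a fixed delivered power P, the line current is I = P / V, so the resistive loss I²R falls as 1/V². A simplified single-conductor sketch with assumed values (it ignores three-phase details, reactance, and corona losses):
    # Resistive transmission loss for a fixed delivered power at two
    # voltages. R is an assumed lumped line resistance.
    P = 500e6   # delivered power, watts
    R = 1.0     # total line resistance, ohms (assumed)
    for V in (110e3, 400e3):
        I = P / V          # line current, amperes
        loss = I**2 * R    # resistive loss, watts
        print(f"{V/1e3:.0f} kV: {loss/P:.1%} of delivered power lost")
At 110 kV about 4% of the power is lost in this toy model, versus about 0.3% at 400 kV, mirroring the figures quoted above.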
Substations
Substations may perform many different functions but usually transform voltage from low to high (step up) and from high to low (step down). Between the generator and the final consumer, the voltage may be transformed several times.
The three main types of substations, by function, are:
Step-up substation: these use transformers to raise the voltage coming from the generators and power plants so that power can be transmitted long distances more efficiently, with smaller currents.
Step-down substation: these transformers lower the voltage coming from the transmission lines which can be used in industry or sent to a distribution substation.
Distribution substation: these transform the voltage lower again for the distribution to end users.
Aside from transformers, other major components or functions of substations include:
Circuit breakers: used to automatically break a circuit and isolate a fault in the system.
Switches: to control the flow of electricity, and isolate equipment.
The substation busbar: typically a set of three conductors, one for each phase of current. The substation is organized around the buses, and they are connected to incoming lines, transformers, protection equipment, switches, and the outgoing lines.
Lightning arresters
Capacitors for power factor correction
Synchronous condensers for power factor correction and grid stability
Electric power distribution
Distribution is the final stage in the delivery of power; it carries electricity from the transmission system to individual consumers. Substations connect to the transmission system and lower the transmission voltage to a medium voltage ranging between and . Voltage levels vary considerably between countries; in Sweden, for example, medium voltage is normally between . Primary distribution lines carry this medium-voltage power to distribution transformers located near the customer's premises. Distribution transformers again lower the voltage to the utilization voltage. Customers demanding a much larger amount of power may be connected directly to the primary distribution level or the subtransmission level.
Distribution networks are divided into two types, radial or network.
In cities and towns of North America, the grid tends to follow the classic radially fed design. A substation receives its power from the transmission network, the power is stepped down with a transformer and sent to a bus from which feeders fan out in all directions across the countryside. These feeders carry three-phase power, and tend to follow the major streets near the substation. As the distance from the substation grows, the fanout continues as smaller laterals spread out to cover areas missed by the feeders. This tree-like structure grows outward from the substation, but for reliability reasons, usually contains at least one unused backup connection to a nearby substation. This connection can be enabled in case of an emergency, so that a portion of a substation's service territory can be alternatively fed by another substation.
Storage
Grid energy storage (also called large-scale energy storage) is a collection of methods used for energy storage on a large scale within an electrical power grid. Electrical energy is stored during times when electricity is plentiful and inexpensive (especially from intermittent power sources such as renewable electricity from wind power, tidal power and solar power) or when demand is low, and later returned to the grid when demand is high and electricity prices tend to be higher.
The largest form of grid energy storage is dammed hydroelectricity, including both conventional hydroelectric generation and pumped storage hydroelectricity.
Developments in battery storage have enabled commercially viable projects to store energy during peak production and release it during peak demand, and for use when production unexpectedly falls, giving time for slower-responding resources to be brought online.
Two alternatives to grid storage are the use of peaking power plants to fill in supply gaps and demand response to shift load to other times.
Functionalities
Demand
The demand, or load, on an electrical grid is the total electrical power being drawn by the users of the grid.
The graph of the demand over time is called the demand curve.
Baseload is the minimum load on the grid over any given period, peak demand is the maximum load. Historically, baseload was commonly met by equipment that was relatively cheap to run, that ran continuously for weeks or months at a time, but globally this is becoming less common. The extra peak demand requirements are sometimes produced by expensive peaking plants that are generators optimised to come on-line quickly but these too are becoming less common.
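As an illustration, baseload and peak demand are simply the minimum and maximum of the demand curve over the period considered; the hourly figures below are invented:
    # Baseload and peak demand read off a one-day demand curve (MW).
    hourly_demand_mw = [
        310, 295, 290, 288, 292, 305, 350, 420,
        480, 510, 525, 530, 528, 520, 515, 525,
        560, 610, 640, 620, 560, 480, 400, 340,
    ]
    baseload_mw = min(hourly_demand_mw)  # minimum load over the period
    peak_mw = max(hourly_demand_mw)      # maximum load over the period
    print(f"baseload: {baseload_mw} MW, peak demand: {peak_mw} MW")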
However, if the demand for electricity exceeds the capacity of a local power grid, it can cause safety issues such as equipment burnout.
Voltage
Grids are designed to supply electricity to their customers at largely constant voltages. This has to be achieved with varying demand, variable reactive loads, and even nonlinear loads, with electricity provided by generators and distribution and transmission equipment that are not perfectly reliable. Often grids use tap changers on transformers near to the consumers to adjust the voltage and keep it within specification.
Frequency
In a synchronous grid all the generators must run at the same frequency, and must stay very nearly in phase with each other and the grid. Generation and consumption must be balanced across the entire grid, because energy is consumed as it is produced. For rotating generators, a local governor regulates the driving torque, maintaining almost constant rotation speed as loading changes. Energy is stored in the immediate short term by the rotational kinetic energy of the generators.
Although the speed is kept largely constant, small deviations from the nominal system frequency are very important in regulating individual generators and are used as a way of assessing the equilibrium of the grid as a whole. When the grid is lightly loaded, the grid frequency runs above the nominal frequency, and this is taken as an indication by Automatic Generation Control (AGC) systems across the network that generators should reduce their output. Conversely, when the grid is heavily loaded, the frequency naturally slows, and governors adjust their generators so that more power is output (droop speed control). When generators have identical droop speed control settings, multiple parallel generators share load in proportion to their ratings.
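A minimal sketch of this behaviour in Python, assuming a 5% droop setting (a common but here purely illustrative value): each governor raises its generator's output in proportion to both the frequency deviation and the machine's rating, so identically configured machines share load pro rata.
    # Proportional load sharing under droop speed control.
    F_NOM = 50.0   # nominal grid frequency, Hz
    DROOP = 0.05   # 5% droop: a 5% frequency drop commands full rated output

    def extra_output_mw(rating_mw, f_hz):
        """Output increase a droop governor commands for a frequency dip."""
        return -((f_hz - F_NOM) / F_NOM) / DROOP * rating_mw

    # Two parallel generators at 49.9 Hz pick up 4 MW and 12 MW, i.e. a
    # 1:3 split matching the 100 MW : 300 MW ratio of their ratings.
    for rating in (100.0, 300.0):
        print(f"{rating:.0f} MW unit adds {extra_output_mw(rating, 49.9):.1f} MW")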
In addition, there is often central control, which can change the parameters of the AGC systems over timescales of a minute or longer to further adjust the regional network flows and the operating frequency of the grid.
For timekeeping purposes, the nominal frequency will be allowed to vary in the short term, but is adjusted to prevent line-operated clocks from gaining or losing significant time over the course of a whole 24 hour period.
An entire synchronous grid runs at the same frequency; neighbouring grids are not synchronised even if they run at the same nominal frequency. High-voltage direct current lines or variable-frequency transformers can be used to connect two alternating current interconnection networks which are not synchronized with each other. This provides the benefit of interconnection without the need to synchronize an even wider area. For example, compare the wide area synchronous grid map of Europe with the map of HVDC lines.
Capacity and firm capacity
The sum of the maximum power outputs (nameplate capacity) of the generators attached to an electrical grid might be considered to be the capacity of the grid.
However, in practice, they are never run flat out simultaneously. Typically, some generators are kept running at lower output powers (spinning reserve) to deal with failures as well as variation in demand. In addition generators can be off-line for maintenance or other reasons, such as availability of energy inputs (fuel, water, wind, sun etc.) or pollution constraints.
Firm capacity is the maximum power output on a grid that is immediately available over a given time period, and is a far more useful figure.
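The distinction can be made concrete with a toy example (all numbers invented): nameplate capacity is the sum of ratings, while firm capacity nets off whatever is unavailable at the time.
    # Nameplate capacity vs. firm capacity for a small illustrative grid.
    nameplate_mw = {"coal": 600, "gas_peaker": 200, "wind": 300, "hydro": 150}
    unavailable_mw = {"coal": 0, "gas_peaker": 0, "wind": 220, "hydro": 30}
    capacity = sum(nameplate_mw.values())                    # 1250 MW
    firm_capacity = capacity - sum(unavailable_mw.values())  # 1000 MW
    print(f"capacity: {capacity} MW, firm capacity: {firm_capacity} MW")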
Production
Most grid codes specify that the load is shared between the generators in merit order according to their marginal cost (i.e. cheapest first) and sometimes their environmental impact. Thus cheap electricity providers tend to be run flat out almost all the time, and the more expensive producers are only run when necessary.
Failures and issues
Failures are usually associated with generators or power transmission lines tripping circuit breakers due to faults, leading to a loss of generation capacity for customers, or with excess demand. This will often cause the frequency to drop, and the remaining generators will react and together attempt to stabilize it above the minimum. If that is not possible then a number of scenarios can occur.
A large failure in one part of the grid — unless quickly compensated for — can cause current to re-route itself to flow from the remaining generators to consumers over transmission lines of insufficient capacity, causing further failures. One downside to a widely connected grid is thus the possibility of cascading failure and widespread power outage. A central authority is usually designated to facilitate communication and develop protocols to maintain a stable grid. For example, the North American Electric Reliability Corporation gained binding powers in the United States in 2006, and has advisory powers in the applicable parts of Canada and Mexico. The U.S. government has also designated National Interest Electric Transmission Corridors, where it believes transmission bottlenecks have developed.
Brownout
A brownout is an intentional or unintentional drop in voltage in an electrical power supply system. Intentional brownouts are used for load reduction in an emergency. The reduction lasts for minutes or hours, as opposed to short-term voltage sag (or dip). The term brownout comes from the dimming experienced by incandescent lighting when the voltage sags. A voltage reduction may be an effect of disruption of an electrical grid, or may occasionally be imposed in an effort to reduce load and prevent a power outage, known as a blackout.
Blackout
A power outage (also called a power cut, a power out, a power blackout, a power failure or a blackout) is a loss of electric power to a particular area.
Power failures can be caused by faults at power stations, damage to electric transmission lines, substations or other parts of the distribution system, a short circuit, cascading failure, fuse or circuit breaker operation, and human error.
Power failures are particularly critical at sites where the environment and public safety are at risk. Institutions such as hospitals, sewage treatment plants, mines, shelters and the like will usually have backup power sources such as standby generators, which will automatically start up when electrical power is lost. Other critical systems, such as telecommunication, are also required to have emergency power. The battery room of a telephone exchange usually has arrays of lead–acid batteries for backup and also a socket for connecting a generator during extended periods of outage.
Load shedding
Electrical generation and transmission systems may not always meet peak demand requirements, the greatest amount of electricity required by all utility customers within a given region. In these situations, overall demand must be lowered, either by turning off service to some devices or by cutting back the supply voltage (brownouts), in order to prevent uncontrolled service disruptions such as power outages (widespread blackouts) or equipment damage. Utilities may impose load shedding on service areas via targeted blackouts, rolling blackouts or agreements with specific high-use industrial consumers to turn off equipment at times of system-wide peak demand.
Black start
A black start is the process of restoring an electric power station or a part of an electric grid to operation without relying on the external electric power transmission network to recover from a total or partial shutdown.
Normally, the electric power used within the plant is provided from the station's own generators. If all of the plant's main generators are shut down, station service power is provided by drawing power from the grid through the plant's transmission line. However, during a wide-area outage, off-site power from the grid is not available. In the absence of grid power, a so-called black start needs to be performed to bootstrap the power grid into operation.
To provide a black start, some power stations have small diesel generators, normally called the black start diesel generator (BSDG), which can be used to start larger generators (of several megawatts capacity), which in turn can be used to start the main power station generators. Generating plants using steam turbines require station service power of up to 10% of their capacity for boiler feedwater pumps, boiler forced-draft combustion air blowers, and fuel preparation. It is uneconomical to provide such a large standby capacity at each station, so black-start power must be provided over designated tie lines from another station. Often hydroelectric power plants are designated as the black-start sources to restore network interconnections. A hydroelectric station needs very little initial power to start (just enough to open the intake gates and provide excitation current to the generator field coils), and can put a large block of power on line very quickly to allow start-up of fossil-fuel or nuclear stations. Certain types of combustion turbine can be configured for black start, providing another option in places without suitable hydroelectric plants. In 2017, a utility in Southern California successfully demonstrated the use of a battery energy storage system to provide a black start, firing up a combined-cycle gas turbine from an idle state.
Obsolescence
Despite novel institutional arrangements and network designs, power delivery infrastructure is aging across the developed world. Contributing factors include:
Aging equipment – older equipment has higher failure rates, leading to customer interruption rates affecting the economy and society; also, older assets and facilities lead to higher inspection maintenance costs and further repair and restoration costs.
Obsolete system layout – older areas require additional substation sites and rights-of-way that cannot be obtained in the current area, forcing reliance on existing, insufficient facilities.
Outdated engineering – traditional tools for power delivery planning and engineering are ineffective in addressing current problems of aged equipment, obsolete system layouts, and modern deregulated loading levels.
Outdated cultural values – planning, engineering, and operating the system using concepts and procedures that worked in a vertically integrated industry exacerbate the problem under a deregulated industry.
Trends
Demand response
Demand response is a grid management technique where retail or wholesale customers are requested or incentivised either electronically or manually to reduce their load. Currently, transmission grid operators use demand response to request load reduction from major energy users such as industrial plants. Technologies such as smart metering can encourage customers to use power when electricity is plentiful by allowing for variable pricing.
Smart grid
Grid defection
Resistance to distributed generation among grid operators may encourage providers to leave the grid and instead distribute power to smaller geographies.
The Rocky Mountain Institute and other studies foresee wide-scale grid defection. However, grid defection may be less likely in places such as Germany that have greater power demands in winter.
History
Early electric energy was produced near the device or service requiring that energy. In the 1880s, electricity competed with steam, hydraulics, and especially coal gas. Coal gas was first produced on customer's premises but later evolved into gasification plants that enjoyed economies of scale. In the industrialized world, cities had networks of piped gas, used for lighting. But gas lamps produced poor light, wasted heat, made rooms hot and smoky, and gave off hydrogen and carbon monoxide. They also posed a fire hazard. In the 1880s electric lighting soon became advantageous compared to gas lighting.
Electric utility companies established central stations to take advantage of economies of scale and moved to centralized power generation, distribution, and system management. After the war of the currents was settled in favor of AC power, with long-distance power transmission it became possible to interconnect stations to balance the loads and improve load factors. Historically, transmission and distribution lines were owned by the same company, but starting in the 1990s, many countries have liberalized the regulation of the electricity market in ways that have led to the separation of the electricity transmission business from the distribution business.
In the United Kingdom, Charles Merz, of the Merz & McLellan consulting partnership, built the Neptune Bank Power Station near Newcastle upon Tyne in 1901, which by 1912 had developed into the largest integrated power system in Europe. Merz was appointed head of a parliamentary committee and his findings led to the Williamson Report of 1918, which in turn created the Electricity (Supply) Act 1919. The bill was the first step towards an integrated electricity system. In 1925 the Weir Committee recommended the creation of a "national gridiron", and so the Electricity (Supply) Act 1926 created the Central Electricity Board (CEB). The CEB standardized the nation's electricity supply and established the first synchronized AC grid, running at 132 kilovolts and 50 hertz, though it initially operated as a set of regional grids. After a brief overnight interconnection in 1937, the regional grids were permanently and officially joined in 1938, becoming the UK National Grid.
In France, electrification began in the 1900s, with 700 communes electrified in 1919 and 36,528 in 1938. At the same time, these close networks began to interconnect: Paris in 1907 at 12 kV, the Pyrénées in 1923 at 150 kV, and finally almost all of the country by 1938 at 220 kV. By 1946, the grid was the world's densest. That year the state nationalised the industry, uniting the private companies as Électricité de France. The frequency was standardised at 50 Hz, and the 225 kV network replaced 110 kV and 120 kV. Since 1956, service voltage has been standardised at 220/380 V, replacing the previous 127/220 V. During the 1970s, the 400 kV network, the new European standard, was implemented. Since 29 May 1986, the end-user service voltage has been progressively changed to 230/400 V ±10%.
In the United States in the 1920s, utilities formed joint operations to share peak-load coverage and backup power. In 1934, with the passage of the Public Utility Holding Company Act, electric utilities were recognized as public goods of importance and were placed under defined restrictions and regulatory oversight of their operations. The Energy Policy Act of 1992 required transmission line owners to allow electric generation companies open access to their networks, leading to a restructuring of how the electric industry operated in an effort to create competition in power generation. Electric utilities were no longer built as vertical monopolies, where generation, transmission and distribution were handled by a single company; the three stages could now be split among various companies, in an effort to provide fair access to high-voltage transmission. The Energy Policy Act of 2005 provided incentives and loan guarantees for alternative energy production and for advancing innovative technologies that avoided greenhouse emissions.
In China, electrification began in the 1950s. In August 1961, the electrification of the Baoji–Fengzhou section of the Baocheng Railway was completed and delivered for operation, becoming China's first electrified railway. From 1958 to 1998, China's electrified railway network reached . As of the end of 2017, this figure had reached . In China's current railway electrification system, the State Grid Corporation of China is an important power supplier. In 2019, it completed power supply projects for important electrified railways in its operating areas, such as the Jingtong Railway, the Haoji Railway and the Zhengzhou–Wanzhou high-speed railway, guaranteeing the power supply of 110 traction stations, with a cumulative power line construction length of 6,586 kilometres.
| Technology | Electricity generation and distribution | null |
20352309 | https://en.wikipedia.org/wiki/Binturong | Binturong | The binturong (Arctictis binturong), also known as the bearcat, is a viverrid native to South and Southeast Asia. It is uncommon in much of its range, and has been assessed as Vulnerable on the IUCN Red List because of a declining population. It is estimated to have declined at least 30% since the mid-1980s. The binturong is the only species in the genus Arctictis.
Etymology
"Binturong" is its common name in Borneo, and is related to the Western Malayo-Polynesian root "ma-tuRun". In Riau, it is called "benturong" and "tenturun". The scientific name Arctictis means 'bear-weasel', from the Greek arkt- "bear" + iktis "weasel".
Taxonomy
Viverra binturong was the scientific name proposed by Thomas Stamford Raffles in 1822 for a specimen from Malacca. The generic name Arctictis was proposed by Coenraad Jacob Temminck in 1824. Arctictis is a monotypic taxon; its morphology is similar to that of members of the genera Paradoxurus and Paguma.
In the 19th and 20th centuries, the following zoological specimens were described:
Paradoxurus albifrons proposed by Frédéric Cuvier in 1822 was based on a drawing of a binturong from Bhutan prepared by Alfred Duvaucel.
Arctictis penicillata proposed by Temminck in 1835 was based on specimens from Sumatra and Java.
Arctictis whitei proposed by Joel Asaph Allen in 1910 was based on skins of two female binturongs collected on Palawan Island in the Philippines.
Arctictis pageli proposed by Ernst Schwarz in 1911 was a skin and skull of a female collected in northern Borneo.
Arctictis gairdneri proposed by Oldfield Thomas in 1916 was a skull of a male binturong collected in southwestern Thailand.
Arctictis niasensis proposed by Marcus Ward Lyon Jr. in 1916 was a binturong skin from Nias Island.
A. b. kerkhoveni by Henri Jacob Victor Sody in 1936 was based on specimens from Bangka Island.
A. b. menglaensis by Wang and Li in 1987 was based on specimens from the Yunnan Province in China.
Nine subspecies have been recognized, forming two clades. The northern clade in mainland Asia is separated from the Sundaic clade by the Isthmus of Kra.
Characteristics
The binturong is long and heavy, with short, stout legs. It has a thick coat of coarse black hair. The bushy and prehensile tail is thick at the root, gradually tapering, and curls inwards at the tip. The muzzle is short and pointed, somewhat turned up at the nose, and is covered with bristly hairs, brown at the points, which lengthen as they diverge, and form a peculiar radiated circle round the face. The eyes are large, black and prominent. The ears are short, rounded, edged with white, and terminated by tufts of black hair. There are six short rounded incisors in each jaw, two canines, which are long and sharp, and six molars on each side. The hair on the legs is short and of a yellowish tinge. The feet are five-toed, with large strong claws. The soles are bare, and are plantigrade―applied to the ground throughout the whole of their length―and the hind ones are longer than the fore ones.
In general build, the binturong is essentially like Paradoxurus and Paguma, but more massive in the length of the tail, legs and feet, in the structure of the scent glands, and in the larger size of the rhinarium, which is more convex with a median groove being much narrower above the philtrum. The contour hairs of the coat are much longer and coarser, and the long hairs covering the whole of the back of the ears project beyond the tip as a definite tuft. The anterior bursa flap of the ears is more widely and less deeply emarginate. The tail is more muscular, especially at the base and, in colour, generally like the body, but commonly paler at the base beneath. The body hairs are frequently partly whitish or buff, giving a speckled appearance to the pelage, sometimes so pale that the whole body is mostly straw-coloured or grey. The young are often paler than the adults, but the head is always closely speckled with grey or buff. The long mystacial vibrissae are conspicuously white, and there is a white rim on the summit of the otherwise black ear. The glandular area is whitish.
The tail is nearly as long as the head and body. The body ranges from and the tail is from long. Some captive binturongs measured from in head and body, with a tail of . The mean weight of captive adult females is , with a range from . Captive animals often weigh more than their wild counterparts. Twelve captive female binturongs were found to weigh a mean of , while 22 males weighed a mean of . In one study, the estimated mean weight of wild females was . However, seven wild male binturongs in Thailand were found to weigh a mean of , while one female was of similar weight at . One estimate of the mean body mass of wild binturongs was .
Both sexes have scent glands—females on either side of the vulva, and males between the scrotum and penis. The musk glands emit an odor reminiscent of popcorn or corn chips, described as "ltpɨt" by the Malaysian Jahai people, likely due to the volatile compound 2-acetyl-1-pyrroline in the urine, which is also produced in the Maillard reaction at high temperatures. Unlike most other carnivorans, the male binturong does not have a baculum.
Distribution and habitat
The binturong occurs from India, Nepal, Bangladesh, Bhutan, Myanmar, Thailand and Malaysia to Laos, Cambodia, Vietnam and Yunnan in China, Sumatra, Kalimantan and Java in Indonesia, to Palawan in the Philippines. It is confined to tall forest. In Assam, it is common in foothills and hills with good tree cover, but less so in the forested plains. It has been recorded in Manas National Park, in Dulung and Kakoi Reserved Forests of the Lakhimpur district, in the hill forests of Karbi Anglong, North Cachar Hills, Cachar and Hailakandi Districts. It was also recorded in Kaziranga National Park in 2024.
In Myanmar, binturongs were photographed on the ground in Tanintharyi Nature Reserve, at an elevation of in the Hukaung Valley, at elevations from in the Rakhine Yoma Elephant Reserve, and at and at three other sites up to elevation. In Thailand's Khao Yai National Park, several individuals were observed feeding in a fig tree and on a vine. In Laos, they have been observed in extensive evergreen forest.
In Malaysia, binturongs were recorded in secondary forest surrounding a palm estate that was logged in the 1970s. In Palawan, it inhabits primary and secondary lowland forest, including grassland–forest mosaic from sea level to .
Ecology and behavior
The binturong is active during the day and at night. Three sightings in Pakke Tiger Reserve were by day. Camera traps set up in Myanmar captured thirteen animals, one around dusk, seven at night and five in broad daylight. All the photographs were of single animals, and all were taken on the ground. Because binturongs are not very nimble, they may have to descend to the ground relatively frequently when moving between trees.
Five radio-collared binturongs in the Phu Khieo Wildlife Sanctuary exhibited an arrhythmic activity dominated by crepuscular and nocturnal tendencies with peaks in the early morning and late evening. Reduced activity periods occurred from midday to late afternoon. They moved between and daily in the dry season and increased their daily movement to in the wet season. Range size of males varied between . Two males showed slightly larger ranges in the wet season. Their ranges overlapped between 30 and 70%. The average home range of a radio-collared female in the Khao Yai National Park was estimated at , and the one of a male at .
The binturong is essentially arboreal. Pocock observed the behaviour of several captive individuals in the London Zoological Gardens. When resting, they lay curled up with their heads tucked under their tails. They seldom leaped, but climbed skilfully, albeit slowly, progressing with equal ease and confidence along the upper side of branches or, upside down, beneath them. The prehensile tail was always ready as an aid. They descended the vertical bars of the cage head first, gripping them between their paws and using the prehensile tail as a check. They growled fiercely when irritated, and when on the prowl they periodically uttered a series of low grunts or a hissing sound, made by expelling air through partially opened lips.
The binturong uses its tail to communicate. It moves about gently, clinging to a branch, often coming to a stop, and often using the tail to keep balance. It shows a pronounced comfort behaviour associated with grooming the fur, shaking and licking its hair, and scratching. Shaking is the most characteristic element of comfort behaviour.
Diet
The binturong is omnivorous, feeding on small mammals, birds, fish, earthworms, insects and fruits. It also preys on rodents. Fish and earthworms are likely unimportant items in its diet, as it is neither aquatic nor fossorial, coming across such prey only when opportunities present themselves. Since it does not have the attributes of a predatory mammal, most of the binturong's diet is probably of vegetable matter. Figs are a major component of its diet. Captive binturongs are particularly fond of plantains, but also eat fowls' heads and eggs.
The binturong is an important agent for seed dispersal, especially for those of the strangler fig, because of its ability to scarify the seed's tough outer covering.
In captivity, the binturong's diet includes commercially prepared meat mix, bananas, apples, oranges, canned peaches and mineral supplement.
Reproduction
The average age of sexual maturation is 30.4 months for females and 27.7 months for males. The estrous cycle of the binturong lasts 18 to 187 days, with an average of 82.5 days. Gestation lasts 84 to 99 days. Litter size in captivity varies from one to six young, with an average of two young per birth. Neonates weigh between . Fertility lasts until 15 years of age.
Threats
Major threats to the binturong are habitat loss and degradation of forests through logging and conversion of forests to non-forest land-uses throughout the binturong's range. Habitat loss has been severe in the lowlands of the Sundaic part of its range, and there is no evidence that the binturong uses the plantations that are largely replacing natural forest. In China, rampant deforestation and opportunistic logging practices have fragmented suitable habitat or eliminated sites altogether. In the Philippines, it is captured for the wildlife trade, and in the south of its range it is also taken for human consumption. In Laos, it is one of the most frequently displayed caged live carnivores and skins are traded frequently in at least Vientiane. In parts of Laos, it is considered a delicacy and also traded as a food item to Vietnam.
The binturong is also sometimes kept captive for production of kopi luwak.
Conservation
The binturong is included in CITES Appendix III and in Schedule I of the Indian Wild Life (Protection) Act, 1972, which gives it the highest level of protection in India. In China, it is listed as critically endangered. It is completely protected in Bangladesh, and partially in Thailand, Malaysia, Vietnam and Indonesia. It is not protected in Brunei.
World Binturong Day is a yearly event held in several zoos and is dedicated to binturong awareness and conservation. It takes place every second Saturday of May.
In captivity
Binturongs are common in zoos, and captive individuals represent a source of genetic diversity essential for long-term conservation. Their geographic origin is usually unknown, or they are the offspring of several generations of captive-bred animals. The maximum known lifespan in captivity is thought to be over 25 years.
The Orang Asli of Malaysia have a tradition of keeping binturongs as pets.
| Biology and health sciences | Other carnivora | Animals |
20353360 | https://en.wikipedia.org/wiki/Grenade | Grenade | A grenade is a small explosive weapon typically thrown by hand (also called a hand grenade), but the term can also refer to a shell (explosive projectile) shot from the muzzle of a rifle (as a rifle grenade) or a grenade launcher. A modern hand grenade generally consists of an explosive charge ("filler"), a detonator mechanism, an internal striker to trigger the detonator, and an arming safety secured by a transport safety. The user removes the transport safety before throwing, and once the grenade leaves the hand the arming safety is released, allowing the striker to trigger a primer that ignites a fuze (sometimes called the delay element), which burns down to the detonator and explodes the main charge.
Grenades work by dispersing fragments (fragmentation grenades), shockwaves (high-explosive, anti-tank and stun grenades), chemical aerosols (smoke, gas and chemical grenades) or fire (incendiary grenades). Their outer casings, generally made of a hard synthetic material or steel, are designed to rupture and fragment on detonation, sending out numerous fragments (shards and splinters) as fast-flying projectiles. In modern grenades, a pre-formed fragmentation matrix inside the grenade is commonly used, which may be spherical, cuboid, wire or notched wire. Most anti-personnel (AP) grenades are designed to detonate either after a time delay or on impact.
Grenades are often spherical, cylindrical, ovoid or truncated ovoid in shape, and of a size that fits the hand of an average-sized adult. Some grenades are mounted at the end of a handle and known as "stick grenades". The stick design provides leverage for throwing longer distances, but at the cost of additional weight and length, and has been considered obsolete by western countries since the Second World War and Cold War periods. A friction igniter inside the handle or on the top of the grenade head was used to initiate the fuse.
Etymology
The word grenade is likely derived from the French word spelled exactly the same, meaning pomegranate, as the bomb is reminiscent of the many-seeded fruit in size and shape. Its first use in English dates from the 1590s.
History
Pre-gunpowder
Rudimentary incendiary grenades appeared in the Eastern Roman (Byzantine) Empire, not long after the reign of Leo III (717–741). Byzantine soldiers learned that Greek fire, a Byzantine invention of the previous century, could not only be thrown by flamethrowers at the enemy but also in stone and ceramic jars. Later, glass containers were employed.
Gunpowder
In Song China (960–1279), weapons known as 'thunder crash bombs' were created when soldiers packed gunpowder into ceramic or metal containers fitted with fuses. A 1044 military book, Wujing Zongyao (Compilation of Military Classics), described various gunpowder recipes in which, according to Joseph Needham, one can find the prototype of the modern hand grenade.
Grenade-like devices were also known in ancient India. In a 12th-century Persian historiography, the Mojmal al-Tawarikh, a terracotta elephant filled with explosives and set with a fuse was hidden in the van and exploded as the invading army approached.
A type of grenade called the 'flying impact thunder crash bomb' (飛擊震天雷) was developed in the late 16th century and first used on 1 September 1592 by the Joseon Dynasty during the Japanese invasions of Korea. The grenade was 20 cm in diameter, weighed 10 kg, and had a cast-iron shell. It contained iron pellets and an adjustable fuse. The grenade was used with a dedicated grenade launcher called a 'wangu' (碗口). It was used to great effect in both the besieging and defense of fortifications.
The first cast-iron bombshells and grenades appeared in Europe in 1467, where their initial role was with the besieging and defense of castles and fortifications. A hoard of several hundred ceramic hand grenades was discovered during construction in front of a bastion of the Bavarian city of Ingolstadt, Germany, dated to the 17th century. Many of the grenades retained their original black powder loads and igniters. The grenades were most likely intentionally dumped in the moat of the bastion prior to 1723.
By the mid-17th century, infantry known as "grenadiers" began to emerge in the armies of Europe, specializing in shock and close-quarters combat, mostly with the use of grenades and fierce melee fighting. In 1643, it is possible that grenados were thrown amongst the Welsh at Holt Bridge during the English Civil War. The word grenade was also used during the events surrounding the Glorious Revolution in 1688, when cricket-ball-sized ( in circumference) iron spheres packed with gunpowder and fitted with slow-burning wicks were first used against the Jacobites in the battles of Killiecrankie and Glen Shiel. These grenades were not very effective, owing both to the unreliability of their fuses and to inconsistent detonation times, and as a result saw little use. Grenades were also used during the Golden Age of Piracy, especially during boarding actions; pirate Captain Thompson used "vast numbers of powder flasks, grenade shells, and stinkpots" to defeat two pirate-hunters sent by the Governor of Jamaica in 1721.
Improvised grenades were increasingly used from the mid-19th century, the confines of trenches enhancing the effect of small explosive devices. In a letter to his sister, Colonel Hugh Robert Hibbert described an improvised grenade employed by British troops during the Crimean War (1854–1856).
In March 1868 during the Paraguayan War, the Paraguayan troops used hand grenades in their attempt to board Brazilian ironclad warships with canoes.
Hand grenades were used on naval engagements during the War of the Pacific.
During the Siege of Mafeking in the Second Boer War, the defenders used fishing rods and a mechanical spring device to throw improvised grenades.
Improvised hand grenades were used to great effect by the Russian defenders of Port Arthur (now Lüshun Port) during the Russo-Japanese War.
Development of modern grenades
Around the turn of the 20th century, the ineffectiveness of the available types of hand grenades, coupled with their levels of danger to the user and difficulty of operation, meant that they were regarded as increasingly obsolete pieces of military equipment. In 1902, the British War Office announced that hand grenades were obsolete and had no place in modern warfare. But within two years, following the success of improvised grenades in the trench warfare conditions of the Russo-Japanese War, and reports from General Sir Aylmer Haldane, a British observer of the conflict, a reassessment was quickly made and the Board of Ordnance was instructed to develop a practical hand grenade. Various models using a percussion fuze were built, but this type of fuze suffered from various practical problems, and they were not commissioned in large numbers.
In 1904 Serbia adopted a grenade designed by Major Miodrag Vasić; it was partially inspired by copies of Bulgarian grenades manufactured by the Serbian Chetnik Organization.
Marten Hale, known for patenting the Hales rifle grenade, developed a modern hand grenade in 1906 but was unsuccessful in persuading the British Army to adopt the weapon until 1913. Hale's chief competitor was Nils Waltersen Aasen, who invented his design in 1906 in Norway, receiving a patent for it in England. Aasen began his experiments with developing a grenade while serving as a sergeant in the Oscarsborg Fortress. Aasen formed the Aasenske Granatkompani in Denmark, which before the First World War produced and exported hand grenades in large numbers across Europe. He had success in marketing his weapon to the French and was appointed as a Knight of the French Legion of Honour in 1916 for the invention.
The Royal Laboratory developed the No. 1 grenade in 1908. It contained explosive material with an iron fragmentation band, with an impact fuze, detonating when the top of the grenade hit the ground. A long cane handle (approximately 16 inches or 40 cm) allowed the user to throw the grenade farther than the blast of the explosion. It suffered from the handicap that the percussion fuse was armed before throwing, which meant that if the user was in a trench or other confined space, he was apt to detonate it and kill himself when he drew back his arm to throw it.
Before the beginning of the Second Balkan War, Serbian General Stepa Stepanović ordered that bomb-equipped squads (each consisting of one non-commissioned officer and 16 soldiers) be formed in all companies of the 4th, 13th, 14th, 15th and 20th Infantry Regiments of the Timočka Division.
Early in World War I, combatant nations only had small grenades, similar to Hales' and Aasen's design. The Italian Besozzi grenade had a five-second fuze with a match-tip that was ignited by striking on a ring on the soldier's hand.
William Mills, a hand grenade designer from Sunderland, patented, developed and manufactured the "Mills bomb" at the Mills Munition Factory in Birmingham, England in 1915, designating it the No.5. It was described as the first "safe grenade". They were explosive-filled steel canisters with a triggering pin and a distinctive deeply notched surface. This segmentation is often erroneously thought to aid fragmentation, though Mills' own notes show the external grooves were purely to aid the soldier to grip the weapon. Improved fragmentation designs were later made with the notches on the inside, but at that time they would have been too expensive to produce. The external segmentation of the original Mills bomb was retained, as it provided a positive grip surface. This basic "pin-and-pineapple" design is still used in some modern grenades.
After the Second World War, the general design of hand grenades has been fundamentally unchanged, with pin-and-lever being the predominant igniter system among the major powers, though incremental and evolutionary improvements were continuously made. In 2012, the shgr 07 ("Blast hand-grenade 07") was announced as the first major innovation in the area of hand grenades since the Great War. Developed by Ian Kinley at Försvarets Materielverk (FMV), the shgr 07 is a self-righting, jumping hand grenade containing some 1,900 balls that cover a cone 10 metres in diameter, centred about 2 metres above the ground. This minimizes the danger outside the lethal zone, as there is little to no random scattering of fragments from the blast.
Explosive grenades
Fragmentation
Fragmentation grenades are common in armies. They are weapons that are designed to disperse fragments on detonation, aimed to damage targets within the lethal and injury radii. The body is generally made of a hard synthetic material or steel, which will provide some fragmentation as shards and splinters, though in modern grenades a pre-formed fragmentation matrix is often used. The pre-formed fragmentation may be spherical, cuboid, wire or notched wire. Most explosive grenades are designed to detonate either after a time delay or on impact.
Modern fragmentation grenades, such as the United States M67 grenade, have a wounding radius of – half that of older style grenades, which can still be encountered – and can be thrown about . Fragments may travel more than .
High explosive
These grenades are usually classed as offensive weapons because their effective casualty radius is much smaller than the distance they can be thrown, and their explosive power works better within more confined spaces such as fortifications or buildings, which entrenched defenders often occupy. The concussion effect, rather than any expelled fragments, is the effective killer. In the case of the US Mk3A2, the casualty radius is published as in open areas, but fragments and bits of the fuze may be projected as far as from the detonation point.
Concussion grenades have also been used as depth charges (underwater explosives) around boats and underwater targets; some like the US Mk 40 concussion grenade are designed for use against enemy divers and frogmen. Underwater explosions kill or otherwise incapacitate the target by creating a lethal shock wave underwater.
The US Army Armament Research, Development and Engineering Center (ARDEC) announced in 2016 that they were developing a grenade which could operate in either fragmentation or blast mode (selected at any time before throwing), the electronically fuzed enhanced tactical multi-purpose (ET-MP) hand grenade.
Anti-tank
During the Great War, hand grenades were frequently used, with varying success, by troops lacking other means to defend against enemy tanks threatening to overrun their positions. The interwar period saw some limited development of grenades specifically intended to defeat armour, but it was not until the outbreak of WWII that serious efforts were made. While infantry anti-tank weapons were available, they were either not ubiquitous enough, ineffective, or both. Anti-tank grenades were a suitable stopgap to ensure that every squad had a rudimentary self-defence capability. Once rocket-propelled shaped charges became available in greater numbers, anti-tank hand grenades became almost obsolete. However, they were still used with limited success in the Iraqi insurgency in the early 2000s against lightly armoured mine-resistant ambush protected (MRAP) vehicles, designed for protection only against improvised explosive devices, as well as serving as drone ordnance in Ukraine in 2022–2025.
Incendiary
During World War II the United Kingdom used incendiary grenades based on white phosphorus. One model, the No. 76 special incendiary grenade, was mainly issued to the Home Guard as an anti-tank weapon. It was produced in vast numbers; by August 1941 well over 6,000,000 had been manufactured.
Sting
Sting grenades, also known as stingball or sting ball grenades, are stun grenades based on the design of the fragmentation grenade. Instead of using a metal casing to produce fragmentation, they are made from hard rubber and are filled with around 100 rubber or plastic balls. On detonation, these balls and fragments from the rubber casing explode outward in all directions as reduced-lethality projectiles, which may ricochet. The intent is that people struck by the projectiles receive a series of fast, painful stings without serious injury. Some types have an additional payload of CS gas.
Sting grenades do not reliably incapacitate people, so they can be dangerous to use against armed subjects. They sometimes cause serious physical injury, especially from the rubber fragments of the casing; people have lost eyes and hands to sting grenades.
Sting grenades are sometimes called "stinger grenades", which is a genericized trademark as "Stinger" is trademarked by Defense Technology for its line of sting grenades.
Chemical and gas
Chemical and gas grenades burn or release a gas, and do not explode.
Practice
Practice or simulation grenades are similar in handling and function to other hand grenades, except that they only produce a loud popping noise and a puff of smoke on detonation. The grenade body can be reused. Another type is the throwing practice grenade which is completely inert and often cast in one piece. It is used to give soldiers a feel for the weight and shape of real grenades and for practicing precision throwing. Examples of practice grenades include the K417 Biodegradable Practice Hand Grenade by CNOTech Korea.
Igniters
When using a hand grenade, the objective is to have the grenade explode so that the target is within its effective radius while the thrower remains outside it. For this reason, several systems have been used to trigger the explosion.
Impact was the first system used, with fragile containers of Greek fire that ruptured on landing. Later impact fuzes contained some kind of sensitive explosive to either initiate the main charge directly or set off a primer charge that in turn detonates the main charge. This turned out to present significant drawbacks: either the primer is so sensitive that unintended and premature ignition happens, or a more stable substance often fails to set off the grenade when it lands on softer ground, sometimes even allowing the targeted troops to hurl the grenade back. Thus, the only significant use of impact fuzes since WWI has been in anti-tank grenades.
Fuze-delayed grenades are the predominant system today, developed from the match fuzes that were hand-lit in early grenades. From there, two sub-groups developed: friction igniters, where a cord is pulled or a cap is twisted to ignite the delay fuze, as on the German Stielhandgranate; and strike or percussion igniters, where the user either hits the cap before the throw, as on the Japanese Type 10 grenade, or a spring-loaded striker hits the cap after the grenade is released, as on the Mills bomb, with the latter being predominant since WWII.
There is also an alternative technique of throwing, where the grenade is not thrown immediately after the fuze is ignited, which allows the fuze to burn partially and decrease the time to detonation after throwing; this is referred to as "cooking". A shorter delay is useful to reduce the ability of the enemy to take cover, throw or kick the grenade away and can also be used to allow a fragmentation grenade to explode into the air over defensive positions.
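As a rough numeric illustration of the timing trade-off described above, this small Python sketch compares how much fuze time remains at the target for different cooking times; the four-second delay and the flight time are hypothetical figures, not taken from any real grenade.

NOMINAL_DELAY_S = 4.0   # assumed nominal fuze burn time (illustrative)
FLIGHT_TIME_S = 1.5     # assumed time of flight to the target (illustrative)

def time_at_target(cook_time_s):
    # Seconds left on the fuze when the grenade reaches the target.
    return NOMINAL_DELAY_S - cook_time_s - FLIGHT_TIME_S

for cook in (0.0, 1.0, 2.0):
    remaining = time_at_target(cook)
    note = "little or no time to react" if remaining <= 0.5 else "target may take cover or throw it back"
    print(f"cooked {cook:.1f} s -> {remaining:.1f} s at target ({note})")

With these assumed numbers, cooking for two seconds leaves only half a second on the fuze at the target, illustrating why the technique reduces the enemy's chance to respond, at obvious risk to the thrower if the delay runs short.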
Concerned by a number of serious incidents and accidents involving hand grenades, Ian Kinley at the Swedish Försvarets materielverk identified the two main issues as the time fuze's burn-time variation with temperature (it slows down in cold and speeds up in heat) and the springs, the striker spring in particular, coming pre-tensioned from the factory owing to mechanism designs that had not changed much since the 1930s. In 2019, a new mechanism, fully interchangeable with the old ones, was adopted into service. The main difference, apart from a fully environmentally stable delay, is that the springs are now twist-tensioned by the thrower after the transport safety (pin and ring) has been removed, eliminating the possibility of unintentional arming of the hand grenade.
Cultural impact
Manufacturing
Modern manufacturers of hand grenades include:
Agenzia Industrie della Difesa (Italy)
Diehl (Germany)
Mecar (Belgium)
Rheinmetall (formerly Arges, Austria)
Ruag (Switzerland)
Nammo (Norway)
Instalaza (Spain)
Solar Industries (India)
MKEK (Turkey)
| Technology | Explosive weapons | null |
3810490 | https://en.wikipedia.org/wiki/Cryptoclidus | Cryptoclidus | Cryptoclidus is a genus of plesiosaur from the Middle Jurassic period of England, France, and Cuba.
Discovery
Cryptoclidus was a plesiosaur whose specimens, including adult and juvenile skeletons, have been found in various degrees of preservation in England, northern France, Russia, and South America. Its name, meaning "hidden clavicles", refers to its small, practically invisible clavicles buried in its front limb girdle.
The type species was initially described as Plesiosaurus eurymerus. The specific name "wide femur" refers to the forelimb, which was mistaken for a hindlimb at the time. It was moved to its own genus Cryptoclidus by Seeley (1892).
Fossils of Cryptoclidus have been found in the Oxford Clay of Cambridgeshire, England. The dubious species Cryptoclidus beaugrandi is known from Kimmeridgian-age deposits in Boulogne-sur-Mer, France. Cryptoclidus vignalensis, which is now considered undiagnostic, hails from the Jagua Formation of western Cuba.
In 2016, a fragmentary Cryptoclidus postcranial skeleton was reported from Callovian deposits near the village of Nikitino in Spassky District, Ryazan Oblast, Russia, but Zverkov et al. later identified it as an indeterminate cryptoclidid.
Description
Cryptoclidus was a medium-sized plesiosaur, with the largest individuals measuring up to long and weighing about . The fragile build of the head and teeth precludes any grappling with prey, and suggests a diet of small, soft-bodied animals such as squid and shoaling fish. Cryptoclidus may have used its long, intermeshing teeth to strain small prey from the water, or perhaps to sift through sediment for buried animals.
The size and shape of the nares and nasal openings have led Brown and Cruickshank (1994) to argue that they were used to sample seawater for smells and chemical traces.
Classification
The cladogram below follows the topology of the Benson et al. (2012) analysis.
| Biology and health sciences | Prehistoric marine reptiles | Animals |
3811974 | https://en.wikipedia.org/wiki/Callosciurus | Callosciurus | Callosciurus is a genus of squirrels collectively referred to as the "beautiful squirrels". They are found mainly in Southeast Asia, though a few species also occur in Nepal, northeastern India, Bangladesh and southern China. Several of the species have settled on islands. In total, the genus contains 15 species and numerous varieties and subspecies. The genera Glyphotes, Rubrisciurus, and Tamiops have sometimes been included in Callosciurus.
Species
There are approximately 15 species in this genus, and over 60 subspecies. These squirrels range in length from , not including the tail, which is often about the same length as the body. Most are rather dull olive-brown to gray, and several have a pale and a dark stripe on their sides; however, a few are very colorful. Pallas's squirrel may have an unremarkable olive-gray back, while its belly is often – but not always – bright red. The "typical" subspecies of Prevost's squirrel has a black back, white sides, and a red-brown underside. Finlayson's squirrel occurs in numerous varieties, three of which are overall red-brown, overall black, or pure white.
Most squirrels in Callosciurus live in tropical rain forests, but some individuals live in parks and gardens in cities. In the trees, they build their nests out of plant material. They are solitary, and give birth to one to five young. Their food consists of nuts, fruits, and seeds, and also of insects and bird eggs.
| Biology and health sciences | Rodents | Animals |
3817826 | https://en.wikipedia.org/wiki/Brightest%20cluster%20galaxy | Brightest cluster galaxy | A brightest cluster galaxy (BCG) is defined as the brightest galaxy in a cluster of galaxies. BCGs include the most massive galaxies in the universe. They are generally elliptical galaxies which lie close to the geometric and kinematical center of their host galaxy cluster, hence at the bottom of the cluster potential well. They are also generally coincident with the peak of the cluster X-ray emission.
Formation scenarios for BCGs include:
Cooling flow—star formation from the central cooling flow in high density cooling centers of X-ray cluster halos.
The study of accretion populations in BCGs has cast doubt on this theory, and astronomers have seen no evidence of cooling flows in radiative cooling clusters. The two remaining theories exhibit healthier prospects.
Galactic cannibalism—galaxies sink to the center of the cluster due to dynamical friction and tidal stripping.
Galactic merger—rapid galactic mergers between several galaxies take place during cluster collapse.
It is possible to differentiate the cannibalism model from the merging model by considering the formation period of the BCGs. In the cannibalism model, there are numerous small galaxies present in the evolved cluster, whereas in the merging model, a hierarchical cosmological model is expected due to the collapse of clusters. It has been shown that the orbit decay of cluster galaxies is not effective enough to account for the growth of BCGs.
The merging model is now generally accepted as the most likely one, but recent observations are at odds with some of its predictions. For example, it has been found that the stellar mass of BCGs was assembled much earlier than the merging model predicts.
BCGs are divided into various classes of galaxies: giant ellipticals (gE), D galaxies and cD galaxies. cD and D galaxies both exhibit an extended diffuse envelope surrounding an elliptical-like nucleus akin to that of regular elliptical galaxies. The light profiles of BCGs are often described by a Sérsic surface brightness law, a double Sérsic profile or a de Vaucouleurs law. The different parametrizations of the light profiles of BCGs, as well as the faintness of the diffuse envelope, lead to discrepancies in the reported sizes of these objects.
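For reference, the profiles named above take the following standard form (a sketch using conventional notation; the symbols R_e, I_e and b_n follow common usage in the literature and are not defined elsewhere in this article):

I(R) = I_e · exp{ −b_n [ (R / R_e)^(1/n) − 1 ] }

where R_e is the effective (half-light) radius, I_e is the surface brightness at R_e, n is the Sérsic index, and b_n (approximately 2n − 1/3) is chosen so that half of the total light falls within R_e. The de Vaucouleurs law is the special case n = 4, and a double Sérsic profile is the sum of two such components, for example an inner elliptical-like nucleus plus the extended diffuse envelope.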
| Physical sciences | Galaxy classification | Astronomy |