| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
27,197,240 | https://en.wikipedia.org/wiki/Litracen | Litracen (N-7,049) is a tricyclic antidepressant which was never marketed.
See also
Fluotracen
Melitracen
References
Secondary amines
Anthracenes
Abandoned drugs | Litracen | Chemistry | 48 |
58,622,999 | https://en.wikipedia.org/wiki/Aspergillus%20dromiae | Aspergillus dromiae is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 2016. It has been isolated from the Dromia erythropus crab in Venezuela.
Growth and morphology
A. dromiae has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
dromiae
Fungi described in 2016
Fungus species | Aspergillus dromiae | Biology | 123 |
49,101,146 | https://en.wikipedia.org/wiki/APC%20superfamily | The amino acid-polyamine-organocation (APC) superfamily is the second-largest superfamily of secondary carrier proteins currently known, and it contains several solute carriers (SLCs). Originally, the APC superfamily consisted of subfamilies under the transporter classification number TC# 2.A.3. The superfamily has since been expanded to include eighteen different families.
The most recently added families include the PAAP (Putative Amino Acid Permease), LIVCS (Branched Chain Amino Acid:Cation Symporter), NRAMP (Natural Resistance-Associated Macrophage Protein), CstA (Carbon Starvation A protein), KUP (K+ Uptake Permease), BenE (Benzoate:H+ Symporter), and AE (Anion Exchanger) families. Bioinformatic and phylogenetic analyses are used to continually expand the existing families and superfamilies.
Other constituents of the APC superfamily are the AAAP family (TC# 2.A.18), the HAAAP family (TC# 2.A.42) and the LCT family (TC# 2.A.43). Some of these proteins exhibit 11 TMSs. Eukaryotic members of this superfamily have been reviewed by Wipf et al. (2002) and Fischer et al. (1998).
Families
Currently recognized families within the APC superfamily (with their TC numbers) include:
2.A.3 - The Amino Acid-Polyamine-Organocation (APC) Family
2.A.15 - The Betaine/Carnitine/Choline Transporter (BCCT) Family
2.A.18 - The Amino Acid/Auxin Permease (AAAP) Family
2.A.21 - The Solute:Sodium Symporter (SSS) Family
2.A.22 - The Neurotransmitter:Sodium Symporter (NSS) Family
2.A.25 - The Alanine or Glycine:Cation Symporter (AGCS) Family
2.A.26 - The Branched Chain Amino Acid:Cation Symporter (LIVCS) Family
2.A.30 - The Cation-Chloride Cotransporter (CCC) Family
2.A.31 - The Anion Exchanger (AE) Family
2.A.39 - The Nucleobase:Cation Symporter-1 (NCS1) Family
2.A.40 - The Nucleobase/Ascorbate Transporter (NAT) or Nucleobase:Cation Symporter-2 (NCS2) Family
2.A.42 - The Hydroxy/Aromatic Amino Acid Permease (HAAAP) Family
2.A.46 - The Benzoate:H+ Symporter (BenE) Family
2.A.53 - The Sulfate Permease (SulP) Family
2.A.55 - The Metal Ion (Mn2+-iron) Transporter (Nramp) Family
2.A.72 - The K+ Uptake Permease (KUP) Family
2.A.114 - The Putative Peptide Transporter Carbon Starvation CstA (CstA) Family
2.A.120 - The Putative Amino Acid Permease (PAAP) Family
APC proteins in humans
There are several APC proteins expressed in humans, and all of them are SLC proteins. Eleven SLC families include APC proteins: SLC4, 5, 6, 7, 11, 12, 23, 26, 32, 36, and 38. The atypical SLC protein TMEM104 also clusters with the APC clan.
Structure and function
The topology of the well-characterized human Anion Exchanger 1 (AE1) conforms to a UraA-like topology of 14 TMSs (12 α-helical TMSs and 2 mixed coil/helical TMSs). All functionally characterized members of the APC superfamily use cation symport for substrate accumulation, except for some members of the AE family, which frequently use anion:anion exchange. All new entries contain the two 5- or 7-TMS repeat units characteristic of the APC superfamily, sometimes with extra TMSs at the ends, likely the result of an addition prior to duplication. The CstA family contains the greatest variation in TMS number. Newly characterized members transport amino acids, peptides, and inorganic anions or cations. Except for anions, these are typical substrates of established APC superfamily members. Active-site TMSs are rich in glycyl residues in variable but conserved arrangements.
In CadB of E. coli (2.A.3.2.2), amino acid residues involved in both uptake and excretion, or solely in excretion are located in the cytoplasmic loops and the cytoplasmic side of transmembrane segments, whereas residues involved in uptake are located in the periplasmic loops and the transmembrane segments. A hydrophilic cavity is proposed to be formed by the transmembrane segments II, III, IV, VI, VII, X, XI, and XII. Based on 3-D structures of APC superfamily members, Rudnick (2011) has proposed the pathway for transport and suggested a "rocking bundle" mechanism.
The structure and function of the cadaverine-lysine antiporter CadB (2.A.3.2.2) and the putrescine-ornithine antiporter PotE (2.A.3.2.1) in E. coli have been evaluated using model structures based on the crystal structure of AdiC (2.A.3.2.5), an agmatine-arginine antiporter. The central cavity of CadB, containing the substrate-binding site, is wider than that of PotE, mirroring the different sizes of cadaverine and putrescine. The size of the central cavity of CadB and PotE depends on the angle of transmembrane helix 6 (TM6) against the periplasm. Tyr73, Tyr89, Tyr90, Glu204, Tyr235, Asp303, and Tyr423 of CadB, and Cys62, Trp201, Glu207, Trp292, and Tyr425 of PotE are strongly involved in the antiport activities. In addition, Trp43, Tyr57, Tyr107, Tyr366, and Tyr368 of CadB are involved preferentially in cadaverine uptake at neutral pH, while only Tyr90 of PotE is involved preferentially in putrescine uptake. The results indicated that the central cavity of CadB consists of TMs 2, 3, 6, 7, 8, and 10, and that of PotE consists of TMs 2, 3, 6, and 8. Several residues are necessary for recognition of cadaverine in the periplasm because the level of cadaverine is much lower than that of putrescine at neutral pH.
The roughly barrel-shaped AdiC subunit, approximately 45 Å in diameter, consists of 12 transmembrane helices, with TMS1 and TMS6 interrupted by short non-helical stretches in the middle of their transmembrane spans. Biochemical analysis of homologues places the amino and carboxy termini on the intracellular side of the membrane. TMS1–TMS10 surround a large cavity exposed to the extracellular solution. These ten helices comprise two inverted structural repeats: TMS1–TMS5 of AdiC align well with TMS6–TMS10 turned 'upside down' around a pseudo-two-fold axis nearly parallel to the membrane plane. Thus, TMS1 pairs with TMS6, TMS2 with TMS7, and so on. Helices TMS11 and TMS12, non-participants in this repeat, provide most of the 2,500 Å² homodimeric interface. AdiC mirrors the common fold observed unexpectedly in four phylogenetically unrelated families of Na+-coupled solute transporters: BCCT (2.A.15), NCS1 (2.A.39), SSS (2.A.21), and NSS (2.A.22).
Transport reactions
Transport reactions generally catalyzed by APC superfamily members include:
Solute:proton symport
Solute (out) + nH+ (out) → Solute (in) + nH+ (in).
Solute:solute antiport
Solute-1 (out) + Solute-2 (in) ⇌ Solute-1 (in) + Solute-2 (out).
These reactions may differ for some family members.
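The direction conventions of these two reaction schemes can be illustrated with a toy bookkeeping sketch; the pool names and counts below are made-up illustration values, not a biophysical model:

```python
# Toy bookkeeping for the two generalized APC transport reactions,
# tracking particle counts in "out" (periplasmic) and "inside" (cytoplasmic) pools.
def symport(out, inside, solute, n_h=1):
    # Solute (out) + n H+ (out) -> Solute (in) + n H+ (in): both move inward.
    out[solute] -= 1; inside[solute] += 1
    out["H+"] -= n_h; inside["H+"] += n_h

def antiport(out, inside, s1, s2):
    # Solute-1 (out) + Solute-2 (in) <-> Solute-1 (in) + Solute-2 (out): exchange.
    out[s1] -= 1; inside[s1] += 1
    inside[s2] -= 1; out[s2] += 1

out = {"Lys": 5, "Cad": 0, "H+": 100}
inside = {"Lys": 0, "Cad": 5, "H+": 10}
antiport(out, inside, "Lys", "Cad")   # CadB-style lysine/cadaverine exchange
symport(out, inside, "Lys", n_h=1)    # proton-coupled uptake of another lysine
print(out, inside)
```

Note that symport moves substrate and coupling ion in the same direction, while antiport exchanges the two substrates across the membrane.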
References
Further reading
Solute carrier family
Transmembrane transporters
Integral membrane proteins
Protein superfamilies | APC superfamily | Biology | 1,925 |
23,834,912 | https://en.wikipedia.org/wiki/Arithmetic%20topology | Arithmetic topology is an area of mathematics that is a combination of algebraic number theory and topology. It establishes an analogy between number fields and closed, orientable 3-manifolds.
Analogies
The following are some of the analogies used by mathematicians between number fields and 3-manifolds:
A number field corresponds to a closed, orientable 3-manifold
Ideals in the ring of integers correspond to links, and prime ideals correspond to knots.
The field Q of rational numbers corresponds to the 3-sphere.
Expanding on the last two examples, there is an analogy between knots and prime numbers in which one considers "links" between primes. A triple of primes can be "linked" modulo 2 (the Rédei symbol is −1) while being "pairwise unlinked" modulo 2 (the Legendre symbols are all 1); such primes have been called a "proper Borromean triple modulo 2" or "mod 2 Borromean primes".
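The pairwise Legendre symbols can be checked directly with Euler's criterion. A minimal sketch, using the triple (13, 61, 937) — a commonly cited example of mod 2 Borromean primes, taken here as an assumption; the Rédei symbol computation is more involved and is not shown:

```python
# Legendre symbol (a|p) for an odd prime p, via Euler's criterion:
# a^((p-1)/2) mod p is 1 for quadratic residues and p-1 for non-residues.
def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

primes = (13, 61, 937)  # assumed example triple of "mod 2 Borromean primes"
pairs = [(p, q) for i, p in enumerate(primes) for q in primes[i + 1:]]
symbols = [legendre(p, q) for p, q in pairs]
print(symbols)  # all 1: the primes are pairwise "unlinked" modulo 2
```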
History
In the 1960s topological interpretations of class field theory were given by John Tate based on Galois cohomology, and also by Michael Artin and Jean-Louis Verdier based on étale cohomology. Then David Mumford (and independently Yuri Manin) came up with an analogy between prime ideals and knots which was further explored by Barry Mazur. In the 1990s Reznikov and Kapranov began studying these analogies, coining the term arithmetic topology for this area of study.
See also
Arithmetic geometry
Arithmetic dynamics
Topological quantum field theory
Langlands program
Notes
Further reading
Masanori Morishita (2011), Knots and Primes, Springer
Masanori Morishita (2009), Analogies Between Knots And Primes, 3-Manifolds And Number Rings
Christopher Deninger (2002), A note on arithmetic topology and dynamical systems
Adam S. Sikora (2001), Analogies between group actions on 3-manifolds and number fields
Curtis T. McMullen (2003), From dynamics on surfaces to rational points on curves
Chao Li and Charmaine Sia (2012), Knots and Primes
External links
Mazur’s knotty dictionary
Algebraic number theory
3-manifolds
Knot theory | Arithmetic topology | Mathematics | 455 |
73,592,636 | https://en.wikipedia.org/wiki/Anammox%20for%20wastewater%20treatment | Anammox is a wastewater treatment technique that removes nitrogen using anaerobic ammonium oxidation (anammox). The process is performed by anammox bacteria, which are autotrophic, meaning they do not need organic carbon for their metabolism to function. Instead, the metabolism of anammox bacteria converts ammonium and nitrite into dinitrogen gas. Wastewater treatment facilities are in the process of implementing anammox-based technologies to further enhance ammonia and nitrogen removal.
Morphology and physiology
Anammox bacteria can be found in wastewater treatment plants, lakes, suboxic zones, and coastal sediments. Anammox bacteria are temperature-dependent, requiring temperatures between 30 °C and 40 °C to grow. Their growth is also affected by pH, with the best growth in the range 6.5 to 8.3. Anammox cells contain an anammoxosome, a compartment that takes up 50% to 70% of the cell volume and whose membrane contains ladderane lipids.
Chemical process
The two main chemicals needed for the metabolism of anammox bacteria are ammonia and nitrite. Nitrate and nitrite are produced by microorganisms within wastewater treatment facilities as a result of sewage treatment. The enzyme ammonia monooxygenase initiates the conversion of ammonia in wastewater into nitrite during the nitrification process.
Anaerobic ammonium oxidation (anammox) reactions are mediated by chemoautotrophic bacteria from the phylum Planctomycetota. The anammoxosome is the compartment within anammox bacteria where the anammox reactions occur. During this process, a proton gradient is produced across the anammoxosome membrane, driving a catabolic reaction. In the first step, nitrite is reduced to nitric oxide in the presence of a nitrite reductase. Nitrite can also be reduced to hydroxylamine; hydroxylamine and ammonium then react to form hydrazine, which is finally oxidized into nitrogen gas.
Chemical reaction for anammox: conversion of ammonium and nitrite to nitrogen gas.
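The overall anammox conversion is commonly written as NH4+ + NO2− → N2 + 2H2O; a minimal sketch checking that this equation balances in both elements and charge:

```python
# Element/charge balance check for the overall anammox reaction:
#   NH4+ + NO2-  ->  N2 + 2 H2O
# Each species is (element counts per formula unit, charge).
NH4 = ({"N": 1, "H": 4}, +1)
NO2 = ({"N": 1, "O": 2}, -1)
N2  = ({"N": 2}, 0)
H2O = ({"H": 2, "O": 1}, 0)

def totals(side):
    """Sum element counts and charge over a list of (species, coefficient)."""
    elems, charge = {}, 0
    for (atoms, q), coeff in side:
        charge += q * coeff
        for el, n in atoms.items():
            elems[el] = elems.get(el, 0) + n * coeff
    return elems, charge

left = totals([(NH4, 1), (NO2, 1)])
right = totals([(N2, 1), (H2O, 2)])
print(left == right)  # True: the reaction is balanced
```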
Impacts on wastewater treatment
Wastewater usually exists in a mix of solid and liquid forms. The composition of wastewater varies depending on how it has been generated. "Wastewater" may refer to domestic wastewater, wastewater from industry, or surface water runoff. Treatment of wastewater to improve sanitation is a major challenge in developing countries, as untreated wastewater can contaminate drinking water.
Anammox bacteria treatments have been implemented in treatment facilities to help convert sewage wastewater into sludge ash, which is then used as a fertilizer source for agriculture. Sludge ash can be used as fertilizer due to its rich concentration of phosphorus and other nutrients necessary for plant growth. The crystallization of struvite (made up of magnesium, ammonium, and phosphate) during the wastewater treatment process can also be used as a fertilizer. The addition of magnesium to wastewater that already contains ammonium and phosphate allows for a 1:1:1 mole ratio in which all three elements bind to one another, allowing struvite to form as a product according to figure 1. The struvite crystals contain nutrients essential to plant growth that are easy to use and transport. This process also helps to recover nitrogen and phosphorus from wastewater, helping to improve surface water quality as these are two of the primary elements that can cause eutrophication. If eutrophication occurs, an anammox cycle can take place in the absence of oxygen and with high nitrite and ammonia concentrations. These two compounds are needed for the anammox cycle to begin, and are present in wastewater in high concentrations. The anammox bacteria present can help clean up wastewater of excess nitrite and ammonia.
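The 1:1:1 mole ratio implies a simple dosing estimate: one mole of magnesium per mole of ammonium (and phosphate) to be precipitated as struvite (MgNH4PO4·6H2O). A rough sketch with standard molar masses; the influent concentration is a made-up illustration value:

```python
# Rough struvite dosing estimate from the 1:1:1 Mg:NH4:PO4 mole ratio.
M_MG = 24.305    # g/mol, magnesium
M_NH4 = 18.039   # g/mol, ammonium ion

ammonium_mg_per_l = 50.0  # hypothetical influent NH4+ concentration (mg/L)

mol_nh4 = ammonium_mg_per_l / 1000 / M_NH4    # mol/L of ammonium
mg_dose_mg_per_l = mol_nh4 * M_MG * 1000      # 1:1 mole ratio -> mg/L of Mg
print(f"Mg dose ≈ {mg_dose_mg_per_l:.1f} mg/L")
```

In practice the dose would also account for the phosphate concentration and for incomplete precipitation, so this is only an order-of-magnitude estimate.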
References
Anaerobic digestion
Environmental microbiology | Anammox for wastewater treatment | Chemistry,Engineering,Environmental_science | 804 |
40,630,370 | https://en.wikipedia.org/wiki/Tylopilus%20peralbidus | Tylopilus peralbidus is a bolete fungus in the family Boletaceae native to the eastern United States.
Taxonomy
The species was first described in 1936 by Wally Snell and Henry Curtis Beardslee as a species of Boletus. William Alphonso Murrill transferred it to Tylopilus in 1938. Rolf Singer described a variety Tylopilus peralbidus var. rhodoconius in 1945, but this was renamed as an independent species, Tylopilus rhodoconius, in 1998.
Description
The fruit bodies have caps that are initially convex before flattening out in maturity (sometimes developing a central depression); they reach a diameter of . The cap surface is dry, smooth to slightly hairy, while the color ranges from white initially to tan to brownish in maturity. Brownish stains develop where the cap has been bruised. On the cap underside, the pores are initially whitish, but turn buff to pinkish as the spores mature. The pores are circular to angular, numbering 1 to 3 per millimetre, and the tubes are deep. The stipe measures long by thick, and is either more or less equal in width throughout, or tapers slightly at either end. Its color is whitish to brownish.
The spore print is cream-buff. Spores are smooth, cylindrical to somewhat club-shaped, and measure 7–12 by 2.3–3.5 μm. The flesh quickly stains dark grey to violet grey when a drop of iron(II) sulfate (FeSO4) solution is applied; potassium hydroxide (KOH) solution turns the flesh yellow to yellow-ochre.
Habitat and distribution
Fruit bodies of Tylopilus peralbidus grow singly to scattered on the ground under oaks, or occasionally with pine. Preferred habitats include shaded lawns and along roads. Found in the eastern United States, the bolete has been recorded from North Carolina south to Florida, west to Texas. It fruits from May to October.
See also
List of North American boletes
References
External links
peralbidus
Fungi described in 1938
Fungi of the United States
Fungi without expected TNC conservation status
Fungus species | Tylopilus peralbidus | Biology | 448 |
449,756 | https://en.wikipedia.org/wiki/Epitaxy | Epitaxy (prefix epi- means "on top of") refers to a type of crystal growth or material deposition in which new crystalline layers are formed with one or more well-defined orientations with respect to the crystalline seed layer. The deposited crystalline film is called an epitaxial film or epitaxial layer. The relative orientation(s) of the epitaxial layer to the seed layer is defined in terms of the orientation of the crystal lattice of each material. For most epitaxial growths, the new layer is usually crystalline and each crystallographic domain of the overlayer must have a well-defined orientation relative to the substrate crystal structure. Epitaxy can involve single-crystal structures, although grain-to-grain epitaxy has been observed in granular films. For most technological applications, single-domain epitaxy, which is the growth of an overlayer crystal with one well-defined orientation with respect to the substrate crystal, is preferred. Epitaxy can also play an important role in the growth of superlattice structures.
The term epitaxy comes from the Greek roots epi (ἐπί), meaning "above", and taxis (τάξις), meaning "an ordered manner".
One of the main commercial applications of epitaxial growth is in the semiconductor industry, where semiconductor films are grown epitaxially on semiconductor substrate wafers. For the case of epitaxial growth of a planar film atop a substrate wafer, the epitaxial film's lattice will have a specific orientation relative to the substrate wafer's crystalline lattice, such as the [001] Miller index of the film aligning with the [001] index of the substrate. In the simplest case, the epitaxial layer can be a continuation of the same semiconductor compound as the substrate; this is referred to as homoepitaxy. Otherwise, the epitaxial layer will be composed of a different compound; this is referred to as heteroepitaxy.
Types
Homoepitaxy is a kind of epitaxy performed with only one material, in which a crystalline film is grown on a substrate or film of the same material. This technology is often used to grow a more pure film than the substrate and to fabricate layers with different doping levels. In academic literature, homoepitaxy is often abbreviated to "homoepi".
Homotopotaxy is a process similar to homoepitaxy except that the thin-film growth is not limited to two-dimensional growth. Here the substrate is the thin-film material.
Heteroepitaxy is a kind of epitaxy performed with materials that are different from each other. In heteroepitaxy, a crystalline film grows on a crystalline substrate or film of a different material. This technology is often used to grow crystalline films of materials for which crystals cannot otherwise be obtained and to fabricate integrated crystalline layers of different materials. Examples include silicon on sapphire, gallium nitride (GaN) on sapphire, aluminium gallium indium phosphide (AlGaInP) on gallium arsenide (GaAs) or diamond or iridium, and graphene on hexagonal boron nitride (hBN).
Heteroepitaxy occurs when a film of different composition and/or crystal structure is grown on a substrate. In this case, the amount of strain in the film is determined by the lattice mismatch ε:
ε = (af − as) / as,
where af and as are the lattice constants of the film and the substrate, respectively. The film and substrate could have similar lattice spacings but different thermal expansion coefficients. If a film is grown at a high temperature, it can experience large strains upon cooling to room temperature. In practice, a sufficiently small mismatch is necessary for obtaining epitaxy. If the mismatch is larger, the film experiences a volumetric strain that builds with each layer until a critical thickness is reached. With increased thickness, the elastic strain in the film is relieved by the formation of dislocations, which can become scattering centers that degrade the quality of the structure. Heteroepitaxy is commonly used to engineer the bandgap of semiconductors, exploiting the additional strain energy caused by the deformation. Silicon-germanium epitaxial layers are heavily used in CMOS microelectronics and silicon photonics.
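As a quick numerical illustration of the mismatch formula, consider germanium grown on silicon, using the commonly quoted room-temperature lattice constants (aGe ≈ 5.658 Å, aSi ≈ 5.431 Å):

```python
# Lattice mismatch: eps = (a_film - a_substrate) / a_substrate.
A_SI = 5.431  # Å, silicon lattice constant
A_GE = 5.658  # Å, germanium lattice constant

def lattice_mismatch(a_film, a_substrate):
    return (a_film - a_substrate) / a_substrate

eps = lattice_mismatch(A_GE, A_SI)
print(f"Ge on Si mismatch: {eps:.1%}")  # ≈ 4.2%
```

This few-percent mismatch is why Ge-on-Si films relax by forming misfit dislocations beyond a critical thickness.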
Heterotopotaxy is a process similar to heteroepitaxy except that thin-film growth is not limited to two-dimensional growth; the substrate is similar only in structure to the thin-film material.
Pendeo-epitaxy is a process in which the heteroepitaxial film is growing vertically and laterally simultaneously.
In 2D crystal heterostructures, graphene nanoribbons embedded in hexagonal boron nitride provide an example of pendeo-epitaxy.
Grain-to-grain epitaxy involves epitaxial growth between the grains of a multicrystalline epitaxial and seed layer. This can usually occur when the seed layer only has an out-of-plane texture but no in-plane texture. In such a case, the seed layer consists of grains with different in-plane textures. The epitaxial overlayer then creates specific textures along each grain of the seed layer, due to lattice matching. This kind of epitaxial growth doesn't involve single-crystal films.
Epitaxy is used in silicon-based manufacturing processes for bipolar junction transistors (BJTs) and modern complementary metal–oxide–semiconductors (CMOS), but it is particularly important for compound semiconductors such as gallium arsenide. Manufacturing issues include control of the amount and uniformity of the deposition's resistivity and thickness, the cleanliness and purity of the surface and the chamber atmosphere, the prevention of the typically much more highly doped substrate wafer's diffusion of dopant to the new layers, imperfections of the growth process, and protecting the surfaces during manufacture and handling.
Mechanism
Heteroepitaxial growth is classified into three primary growth modes: Volmer–Weber (VW), Frank–van der Merwe (FM), and Stranski–Krastanov (SK).
In the VW growth regime, the epitaxial film grows out of 3D nuclei on the growth surface. In this mode, the adsorbate-adsorbate interactions are stronger than the adsorbate-surface interactions, leading to island formation by local nucleation; the epitaxial layer is formed when the islands coalesce.
In the FM growth mode, adsorbate-surface and adsorbate-adsorbate interactions are balanced, which promotes 2D layer-by-layer or step-flow epitaxial growth.
The SK mode is a combination of VW and FM modes. In this mechanism, the growth initiates in the FM mode, forming 2D layers, but after reaching a critical thickness, enters a VW-like 3D island growth regime.
Practical epitaxial growth, however, takes place in a high supersaturation regime, away from thermodynamic equilibrium. In that case, the epitaxial growth is governed by adatom kinetics rather than thermodynamics, and 2D step-flow growth becomes dominant.
Methods
Vapor-phase
Homoepitaxial growth of semiconductor thin films is generally done by chemical or physical vapor deposition methods that deliver the precursors to the substrate in the gas phase. For example, silicon is most commonly deposited from silicon tetrachloride (or germanium from germanium tetrachloride) and hydrogen at approximately 1200 to 1250 °C:
SiCl4(g) + 2H2(g) ↔ Si(s) + 4HCl(g)
where (g) and (s) represent gas and solid phases, respectively. This reaction is reversible, and the growth rate depends strongly upon the proportion of the two source gases. Growth rates above 2 micrometres per minute produce polycrystalline silicon, and negative growth rates (etching) may occur if too much hydrogen chloride byproduct is present. (Hydrogen chloride may be intentionally added to etch the wafer.) An additional etching reaction competes with the deposition reaction:
SiCl4(g) + Si(s) ↔ 2SiCl2(g)
Silicon VPE may also use silane, dichlorosilane, and trichlorosilane source gases. For instance, the silane reaction occurs at 650 °C in this way:
SiH4 → Si + 2H2
VPE is sometimes classified by the chemistry of the source gases, such as hydride VPE (HVPE) and metalorganic VPE (MOVPE or MOCVD).
The reaction chamber where this process takes place may be heated by lamps located outside the chamber. A common technique used in compound semiconductor growth is molecular beam epitaxy (MBE). In this method, a source material is heated to produce an evaporated beam of particles, which travel through a very high vacuum (10⁻⁸ Pa; practically free space) to the substrate and start epitaxial growth. Chemical beam epitaxy, on the other hand, is an ultra-high vacuum process that uses gas-phase precursors to generate the molecular beam.
Another widely used technique in microelectronics and nanotechnology is atomic layer epitaxy, in which precursor gases are alternatively pulsed into a chamber, leading to atomic monolayer growth by surface saturation and chemisorption.
Liquid-phase
Liquid-phase epitaxy (LPE) is a method to grow semiconductor crystal layers from the melt on solid substrates. This happens at temperatures well below the melting point of the deposited semiconductor. The semiconductor is dissolved in the melt of another material. At conditions that are close to the equilibrium between dissolution and deposition, the deposition of the semiconductor crystal on the substrate is relatively fast and uniform. The most used substrate is indium phosphide (InP). Other substrates like glass or ceramic can be applied for special applications. To facilitate nucleation, and to avoid tension in the grown layer the thermal expansion coefficient of substrate and grown layer should be similar.
Centrifugal liquid-phase epitaxy is used commercially to make thin layers of silicon, germanium, and gallium arsenide. Centrifugally formed film growth is a process used to form thin layers of materials by using a centrifuge. The process has been used to create silicon for thin-film solar cells and far-infrared photodetectors. Temperature and centrifuge spin rate are used to control layer growth. Centrifugal LPE has the capability to create dopant concentration gradients while the solution is held at constant temperature.
Solid-phase
Solid-phase epitaxy (SPE) is a transition between the amorphous and crystalline phases of a material. It is usually produced by depositing a film of amorphous material on a crystalline substrate, then heating it to crystallize the film. The single-crystal substrate serves as a template for crystal growth. The annealing step used to recrystallize or heal silicon layers amorphized during ion implantation is also considered to be a type of solid phase epitaxy. The impurity segregation and redistribution at the growing crystal-amorphous layer interface during this process is used to incorporate low-solubility dopants in metals and silicon.
Doping
An epitaxial layer can be doped during deposition by adding impurities to the source gas, such as arsine, phosphine, or diborane. Dopants in the source gas, liberated by evaporation or wet etching of the surface, may also diffuse into the epitaxial layer and cause autodoping. The concentration of impurity in the gas phase determines its concentration in the deposited film. Doping can also be achieved by a site-competition technique, where the growth precursor ratios are tuned to enhance the incorporation of vacancies, specific dopant species or vacant-dopant clusters into the lattice. Additionally, the high temperatures at which epitaxy is performed may allow dopants to diffuse into the growing layer from other layers in the wafer (out-diffusion).
Minerals
In mineralogy, epitaxy is the overgrowth of one mineral on another in an orderly way, such that certain crystal directions of the two minerals are aligned. This occurs when some planes in the lattices of the overgrowth and the substrate have similar spacings between atoms.
If the crystals of both minerals are well formed so that the directions of the crystallographic axes are clear then the epitaxic relationship can be deduced just by a visual inspection.
Sometimes many separate crystals form the overgrowth on a single substrate, and then if there is epitaxy all the overgrowth crystals will have a similar orientation. The reverse, however, is not necessarily true. If the overgrowth crystals have a similar orientation there is probably an epitaxic relationship, but it is not certain.
Some authors consider that overgrowths of a second generation of the same mineral species should also be considered as epitaxy, and this is common terminology for semiconductor scientists who induce epitaxic growth of a film with a different doping level on a semiconductor substrate of the same material. For naturally produced minerals, however, the International Mineralogical Association (IMA) definition requires that the two minerals be of different species.
Another man-made application of epitaxy is the making of artificial snow using silver iodide, which is possible because hexagonal silver iodide and ice have similar cell dimensions.
Isomorphic minerals
Minerals that have the same structure (isomorphic minerals) may have epitaxic relations. An example is albite on microcline. Both minerals are triclinic, with the same space group and similar unit cell parameters: a = 8.16 Å, b = 12.87 Å, c = 7.11 Å, α = 93.45°, β = 116.4°, γ = 90.28° for albite, and a = 8.5784 Å, b = 12.96 Å, c = 7.2112 Å, α = 90.3°, β = 116.05°, γ = 89° for microcline.
Polymorphic minerals
Minerals that have the same composition but different structures (polymorphic minerals) may also have epitaxic relations. Examples are pyrite and marcasite, both FeS2, and sphalerite and wurtzite, both ZnS.
Rutile on hematite
Some pairs of minerals that are not related structurally or compositionally may also exhibit epitaxy. A common example is rutile TiO2 on hematite Fe2O3. Rutile is tetragonal and hematite is trigonal, but there are directions of similar spacing between the atoms in the (100) plane of rutile (perpendicular to the a axis) and the (001) plane of hematite (perpendicular to the c axis). In epitaxy these directions tend to line up with each other, resulting in the axis of the rutile overgrowth being parallel to the c axis of hematite, and the c axis of rutile being parallel to one of the axes of hematite.
Hematite on magnetite
Another example is hematite (Fe2O3) on magnetite (Fe3O4). The magnetite structure is based on close-packed oxygen anions stacked in an ABC-ABC sequence. In this packing the close-packed layers are parallel to (111) (a plane that symmetrically "cuts off" a corner of a cube). The hematite structure is based on close-packed oxygen anions stacked in an AB-AB sequence, which results in a crystal with hexagonal symmetry.
If the cations were small enough to fit into a truly close-packed structure of oxygen anions then the spacing between the nearest neighbour oxygen sites would be the same for both species. The radius of the oxygen ion, however, is only 1.36 Å and the Fe cations are big enough to cause some variations. The Fe radii vary from 0.49 Å to 0.92 Å, depending on the charge (2+ or 3+) and the coordination number (4 or 8). Nevertheless, the O spacings are similar for the two minerals hence hematite can readily grow on the (111) faces of magnetite, with hematite (001) parallel to magnetite (111).
Applications
Epitaxy is used in nanotechnology and in semiconductor fabrication. Indeed, epitaxy is the only affordable method of high quality crystal growth for many semiconductor materials. In surface science, epitaxy is used to create and study monolayer and multilayer films of adsorbed organic molecules on single crystalline surfaces via scanning tunnelling microscopy.
See also
Heterojunction
Island growth
Nano-RAM
Quantum cascade laser
Selective area epitaxy
Silicon on sapphire
Single event upset
Thermal laser epitaxy
Thin film
Vertical-cavity surface-emitting laser
Wake Shield Facility
Zhores Alferov
References
Bibliography
External links
epitaxy.net : a central forum for the epitaxy-communities
Deposition processes
CrystalXE.com: a specialized software in epitaxy
Thin film deposition
Semiconductor device fabrication
Crystallography
Methods of crystal growth | Epitaxy | Physics,Chemistry,Materials_science,Mathematics,Engineering | 3,609 |
62,990,925 | https://en.wikipedia.org/wiki/B%C3%A9la%20Paizs | Béla Paizs is a Hungarian bioinformatician.
His research interests revolve around the fragmentation of peptides in mass spectrometry. In top-down proteomics, the interpretation of fragment-ion spectra of peptides is a crucial step. Paizs's research has led to a detailed characterization of peptide fragment-ion structures and dissociation mechanisms, and has revealed the underlying fundamental physical and chemical principles. His work was recognized with the American Society for Mass Spectrometry Biemann Medal in 2011.
Paizs received his Ph.D. in chemistry in 1998 from Eötvös University in Budapest, graduating with summa cum laude honors. He worked as a postdoctoral fellow there and later at the DKFZ in Heidelberg. He was a group leader at the German Cancer Research Center (DKFZ) in Heidelberg from 2004 until 2013, when he moved to Bangor University.
References
21st-century chemists
Mass spectrometrists
Living people
Year of birth missing (living people) | Béla Paizs | Physics,Chemistry | 203 |
36,618,866 | https://en.wikipedia.org/wiki/Katydid%20sequence | The Katydid sequence is a sequence of numbers first defined in Clifford A. Pickover's book Wonders of Numbers (2001).
Description
The Katydid sequence is the smallest sequence of integers that can be reached from 1 by a sequence of the two operations n ↦ 2n + 2 and n ↦ 7n + 7 (in any order). For instance, applying the first operation to 1 produces the number 4, and applying the second operation to 4 produces the number 35, both of which are in the sequence.
The first 10 elements of the sequence are:
1, 4, 10, 14, 22, 30, 35, 46, 62, 72.
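Since both operations strictly increase n, the sequence can be generated in increasing order with a min-heap of candidates; a short Python sketch:

```python
import heapq

def katydid(count):
    """First `count` terms of the Katydid sequence: the smallest set of
    integers containing 1 and closed under n -> 2n + 2 and n -> 7n + 7."""
    heap, seen, terms = [1], {1}, []
    while len(terms) < count:
        n = heapq.heappop(heap)       # smallest unprocessed member
        terms.append(n)
        for m in (2 * n + 2, 7 * n + 7):
            if m not in seen:
                seen.add(m)
                heapq.heappush(heap, m)
    return terms

print(katydid(10))  # [1, 4, 10, 14, 22, 30, 35, 46, 62, 72]
```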
Repetitions
Pickover asked whether there exist numbers that can be reached by more than one sequence of operations.
The answer is yes. For instance, 1814526 can be reached by the two sequences
1, 4, 10, 22, 46, 329, 660, 4627, 9256, 18514, 37030, 259217, 1814526 and
1, 14, 30, 62, 441, 884, 1770, 3542, 7086, 14174, 28350, 56702, 113406, 226814, 453630, 907262, 1814526.
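The two derivations above can be checked mechanically: each term must equal 2n + 2 or 7n + 7 of its predecessor.

```python
def is_valid_chain(chain):
    """True if every consecutive pair is related by n -> 2n+2 or n -> 7n+7."""
    return all(b in (2 * a + 2, 7 * a + 7) for a, b in zip(chain, chain[1:]))

chain_a = [1, 4, 10, 22, 46, 329, 660, 4627, 9256, 18514, 37030, 259217, 1814526]
chain_b = [1, 14, 30, 62, 441, 884, 1770, 3542, 7086, 14174, 28350, 56702,
           113406, 226814, 453630, 907262, 1814526]
print(is_valid_chain(chain_a) and is_valid_chain(chain_b))  # True
```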
References
Integer sequences | Katydid sequence | Mathematics | 264 |
56,971,036 | https://en.wikipedia.org/wiki/Ralava%20Beboarimisa | Ralava Beboarimisa (born 1977) is a Malagasy politician. He was Minister of Environment, Ecology, Sea, and Forest from 2015 to 2016. He was then Ministry of Transport and Meteorology of Madagascar from 2017 to 2019, firstly during two Governments of Jean Ravelonarivo and Olivier Solonandrasana, and then during transition Government led by Christian Ntsay because of the April 21, 2018 crisis.
Beboarimisa studied finance and international relations. He acquired his first professional experience at investment banks in France before returning to Madagascar in 2011. He then served for four years as executive director of the Foundation for Protected Areas and Biodiversity of Madagascar (FAPBM), one of the largest environmental foundations in Africa and a founding member of the Consortium of African Funds for the Environment (CAFE), which Beboarimisa chaired. During his tenure at FAPBM, Madagascar gained particular international visibility when FAPBM led a delegation of more than 60 people from environmental associations, NGOs, and communities to the IUCN World Parks Congress 2014 in Sydney, Australia.
One of the biggest problems Beboarimisa faced during his tenure as minister of the environment was the rapid deforestation and threat to the flora and fauna of the island that Madagascar is undergoing. The first law he helped pass was the so-called "Beboarimisa law", which toughened sanctions for cutting down rosewoods in Madagascar.
Beboarimisa's hard-line tactics against deforestation made him a popular figure in the country, and for a brief period it was rumored that he might succeed Jean Ravelonarivo as the next prime minister. However, Beboarimisa came under scrutiny in April 2016 after more than 1,000 tons of rosewood were discovered in the possession of a Hong Kong businessman in Singapore, raising questions about how such a large amount could have been smuggled out of the country. The United States supported international legal proceedings in the case. In 2019, however, the Court of Appeal in Singapore quashed all related convictions, ruling that the logs had only been in transit through Singapore, not imported into it. In August 2017, following a government reshuffle, Beboarimisa returned to the government at the head of the Ministry of Transport and Meteorology.
In October 2018, on the 60th anniversary of the Republic of Madagascar, he created a non-profit organization called "Bâtir la République" to encourage citizens' involvement and empowerment in the democratic debate. Mindful of the importance of citizen involvement observed during the first edition in 2018, the organization held further debates in the following years. The First Republic of Madagascar was born on October 14, 1958, while independence was obtained two years later, on June 26, 1960. October 14 is generally a forgotten date, but "Bâtir la République" decided to put it forward.
References
External links
Bâtir la République page on Facebook
1978 births
Living people
Environmental scientists
Government ministers of Madagascar
Malagasy scientists | Ralava Beboarimisa | Environmental_science | 657 |
43,134,741 | https://en.wikipedia.org/wiki/Missile%20Warning%20Center | The Missile Warning Center (MWC) is a center that provides missile warning and defense for United States Space Command's Combined Force Space Component Command, incorporating both space-based and terrestrial sensors. The MWC is located at Cheyenne Mountain Space Force Station.
Mission
The Missile Warning Center coordinates, plans, and executes worldwide missile, nuclear detonation, and space re-entry event detection to provide timely, accurate, and unambiguous strategic warning in support of the United States and Canada.
History
During deployment of the computerized air defense network for the United States, the Soviet Union announced that it had successfully tested an ICBM. BMEWS General Operational Requirement 156 was issued on November 7, 1957 (BMEWS was "designed to go with the active portion of the WIZARD system"), and on February 4, 1958, the USAF informed Air Defense Command (ADC) that BMEWS was an "all-out program" and that the "system has been directed by the President, has the same national priority as the ballistic missile and satellite programs and is being placed on the Department of Defense master urgency list." The subsequent plan by June 1958 for a US Zone of the Interior facility for anti-ICBM fire control by ADC was for it to be "the heart of the entire ballistic missile defense system" with Nike Zeus SAMs. On 19 October 1959, HQ USAF assigned ADC the "planning responsibility" for eventual operation of the Missile Defense Alarm System to detect ICBM launches with infrared sensors in space.
1960 Ent AFB CC&DF
The BMEWS Central Computer and Display Facility (CC&DF) built as an austere facility instead of the planned AICBM control center became operational on September 30, 1960, at Ent AFB when BMEWS' Thule Site J became operational. Site J's computers (e.g., in the Sylvania AN/FSQ-28 Missile Impact Predictor Set) processed 4 RCA AN/FPS-50 Radar Sets' data, and alerts transferred via the BMEWS Rearward Communications System to the CC&DF for NORAD attack assessment and warning to RCA Display Information Processors (DIPs) at the NORAD/CONAD command center (also on Ent AFB), SAC's Offutt AFB nuclear bunker, and The Pentagon's new National Military Command Center. DIPs presented impact ellipses and drove a "threat summary display" with a count of incoming missiles and a countdown of "Minutes Until First Impact" (cf. later large screen displays such as the Iconorama.) In July 1961 separate from the CC&DF, the surveillance center in New Hampshire "was discontinued as the new SPADATS Center became operational at Ent AFB" with the 496L Space Detection and Tracking System (i.e., NORAD began aerospace operations). In 1962 the Army's LIM-49 Nike Zeus program was assigned the satellite intercept mission (Program 505's "Operation Mudflap" conducted a test), and the 1962 SECDEF assigned the USAF to develop the Satellite Intercept System which would use orbit data from a Space Defense Center. By December 15, 1964, NORAD had an implementation plan for a "Single Integrated Space Defense Center" for NORAD/CONAD to centralize both missile warning and space surveillance.
1967 Space Defense Center
On February 6, 1967, the 1st Aero moved operations to the Group III Space Defense Center, the integrated missile warning/space surveillance facility (496L Spacetrack system with Philco 212 primary processor) at the Cheyenne Mountain nuclear bunker (FOC of the new bunker's command center—a portion of the Burroughs 425L Command/Control and Missile Warning System—had been achieved on July 1, 1966). Interim operations of the Avco 474N SLBM Detection and Warning System began in July 1970 (IOC was 5 May 1972), and in 1972 20% of the Bendix AN/FPS-85 Phased Array Radar's surveillance capability "became dedicated to search for SLBMs" (the FPS-85 relayed SLBM data via the 474N network for SLBM warning to "SAC, the National Military Command Center, and the Alternate NMCC over BMEWS circuits").
1975 NORAD/ADCOM center
The NORAD/CONAD Missile Warning Center came under NORAD/ADCOM control in 1975, when the unified Continental Air Defense Command ended. In early 1972 the 427M improvement program had been planned (e.g., the NORAD Computer System to replace the 425L System). After SAC assumed control of ballistic missile warning and space surveillance facilities on December 1, 1979, the MWC was in the same room as HQ NORAD/ADCOM J31's Space Surveillance Center (separated by partitions). "NORAD Missile Warning and Space Surveillance System" was the general term for the entire network applied by the House Armed Services Committee in 1981—the Core Processing Segment (CPS) handled missile warning and space surveillance with three Honeywell H6080 computers: one NORAD Computer System (NCS) H6080 for command and control and for missile warning functions, a second for space surveillance, and a third as backup for both. Circa 1986, the "missile and space surveillance and warning system" consisted of a space computational center and 5 sensor systems:
Ballistic Missile Early Warning System
Defense Support Program (DSP satellites, ground systems, etc. of Project 647)
"OTH Forward Scatter Missile Detection System" (440L System of Program 673A with international AN/FRT-80 transmitters & AN/FSQ-76 receivers, Aviano AB Correlation Center, and Rome Laboratory processing center)
"Sea-Launched Ballistic Missile Warning System" (remaining 474N Fuzzy-7 radar(s), AN/FPS-85, and 2 PAVE PAWS stations)
Space Detection and Warning System
By 1981 Cheyenne Mountain was providing 6,700 messages per hour compiled via sensor inputs from the Joint Surveillance System, BMEWS, the SLBM Detection and Warning System, COBRA DANE, and PARCS, as well as SEWS and PAVE PAWS. During the 1991 Gulf War, the missile operations section that supported the MWC processed SCUD missile detections and interceptions for theater warning units. The Space and Warning Systems Center maintained "26 stovepipe systems" for USSPACECOM, NORAD, and AFSPC, and the Space Computational Center was replaced in 1992.
In February 1995, "the missile warning center at Cheyenne Mountain AS [was] undergoing a $450 million upgrade program as part of Cheyenne Mountain's $1.7 billion renovation package." At Cheyenne Mountain on September 11, 2001, Major Richard J. Hughes was the Missile Warning Center Commander and the Chief of the J7 Exercise Branch. In 2003, construction began for a new command center at Cheyenne Mountain to include Ground-Based Midcourse Defense—the "new Missile Correlation Center" (MCC) was to have new consoles, mission system connectivity and communications capabilities.
Missile Correlation Center
The Missile Correlation Center (MCC) and Space Control Center were in Cheyenne Mountain by March 4, 2005 when Patrick Mullin was the commander of the MCC, which by 2006 was receiving input from five Joint Tactical Ground Stations.
Missile Warning Operations Center
The 2006–8 Cheyenne Mountain Realignment divided MCC operations into NORAD/NORTHCOM's Missile and Space Domain at Peterson AFB and STRATCOM's facility in Cheyenne Mountain ("Missile Warning Operations Center" in 2007.) USSTRATCOM announced a 2007 plan to relocate the MWOC from Cheyenne Mountain to Schriever AFB (cf. the Space Control Center which AFSPC was moving from Cheyenne Mountain to Vandenberg.) In May 2010, USSTRATCOM decided to keep its missile warning center at Cheyenne Mountain, which had begun a $2.9 million renovation in January 2010 (a temporary MWOC facility had to be set up.)
List of directors
Capt Gervy Alota
Capt Jermaine Brooms, 28 June 2024
References
Military units and formations in Colorado
Cheyenne Mountain Complex
Space units and formations of the United States
United States warning systems
Centers of the U.S. Department of Defense | Missile Warning Center | Technology | 1,697 |
3,175,932 | https://en.wikipedia.org/wiki/Phosphorene | Phosphorene is a two-dimensional material consisting of phosphorus. It consists of a single layer of black phosphorus, the most stable allotrope of phosphorus. Phosphorene is analogous to graphene (single layer graphite). Among two-dimensional materials, phosphorene is a competitor to graphene because it has a nonzero fundamental band gap that can be modulated by strain and the number of layers in a stack. Phosphorene was first isolated in 2014 by mechanical exfoliation. Liquid exfoliation is a promising method for scalable phosphorene production.
History
In 1914 black phosphorus, a layered, semiconducting allotrope of phosphorus, was synthesized. This allotrope exhibits high carrier mobility. In 2014, several groups isolated single-layer phosphorene, a monolayer of black phosphorus. It attracted renewed attention because of its potential in optoelectronics and electronics, owing to its band gap, which can be tuned by modifying its thickness, its anisotropic photoelectronic properties, and its carrier mobility. Phosphorene was initially prepared by mechanical cleavage, a technique commonly used in graphene production.
In 2023, alloys of arsenic-phosphorene displayed higher hole mobility than pure phosphorene and were also magnetic.
Synthesis
Synthesis of phosphorene is a significant challenge. Currently there are two main routes of phosphorene production: scotch-tape-based microcleavage and liquid exfoliation, while several other methods are being developed as well. Phosphorene production by plasma etching has also been reported.
In scotch-tape-based microcleavage, phosphorene is mechanically exfoliated from a bulk black phosphorus crystal using scotch tape. It is then transferred onto a Si/SiO2 substrate, where it is cleaned with acetone, isopropyl alcohol and methanol to remove any scotch-tape residue. The sample is then heated to 180 °C to remove solvent residue.
In the liquid exfoliation method, first reported by Brent et al. in 2014 and modified by others, bulk black phosphorus is first ground in a mortar and pestle and then sonicated in deoxygenated, anhydrous organic liquids such as NMP under an inert atmosphere using low-power bath sonication. The suspensions are then centrifuged for 30 minutes to remove the unexfoliated black phosphorus. The resulting 2D monolayer and few-layer phosphorene have an unoxidized, crystalline structure, while exposure to air oxidizes the phosphorene and produces acid.
Another variation of liquid exfoliation is "basic N-methyl-2-pyrrolidone (NMP) liquid exfoliation". Bulk black phosphorus is added to a saturated NaOH/NMP solution, which is then sonicated for 4 hours to carry out the liquid exfoliation. The solution is centrifuged twice, first for 10 minutes to remove any unexfoliated black phosphorus and then for 20 minutes at a higher speed to separate thick layers of phosphorene (5–12 layers) from the NMP. The supernatant is then centrifuged again at higher speed for another 20 minutes to separate thinner layers of phosphorene (1–7 layers). The precipitate from centrifugation is redispersed in water and washed several times with deionized water. The phosphorene/water solution is dropped onto silicon with a 280-nm SiO2 surface and dried under vacuum. The NMP liquid exfoliation method was shown to yield phosphorene with controllable size and layer number, excellent water stability, and high yield.
The disadvantages of the current methods include long sonication times, high-boiling-point solvents, and low efficiency. Therefore, other physical methods for liquid exfoliation are still under development. A laser-assisted method developed by Zheng and co-workers showed a promising yield of up to 90% within 5 minutes. The laser photons interact with the surface of the bulk black phosphorus crystal, causing a plasma and solvent bubbles that weaken the interlayer interaction. Depending on the laser energy, solvent (ethanol, methanol, hexane, etc.) and irradiation time, the layer number and lateral size of the phosphorene can be controlled.
High-yield production of phosphorene in solvents has been demonstrated by many groups, but to realize the potential applications of this material it is crucial to deposit these free-standing nanosheets systematically on substrates. H. Kaur et al. demonstrated the synthesis, interface-driven alignment, and subsequent functional properties of few-layer semiconducting phosphorene using Langmuir–Blodgett assembly. This was the first study to provide a straightforward and versatile route to the challenge of assembling phosphorene nanosheets onto various supports and subsequently using these sheets in an electronic device. Wet-assembly techniques such as Langmuir–Blodgett therefore serve as a valuable entry point for exploring the electronic and optoelectronic properties of phosphorene and other 2D layered inorganic materials.
It remains a challenge to grow 2D phosphorene directly by epitaxy, because the stability of black phosphorene is highly sensitive to the substrate, as theoretical simulations have shown.
Properties
Structure
Phosphorene 2D materials are composed of individual layers held together by van der Waals forces in lieu of the covalent or ionic bonds found in most materials. There are three electrons within the 3p orbitals of the phosphorus atom, giving rise to sp3 hybridization of each phosphorus atom within the phosphorene structure. Monolayer phosphorene exhibits the structure of a quadrangular pyramid, because each P atom bonds covalently with three other P atoms at 2.18 Å, leaving one lone pair. Two of the neighboring phosphorus atoms are in the plane of the layer at 99° from one another, and the third phosphorus is between the layers at 103°, yielding an average bond angle of 102°.
According to density functional theory (DFT) calculations, phosphorene forms a honeycomb lattice structure with notable nonplanarity in the shape of structural ridges. It is predicted that the crystal structure of black phosphorus can be distinguished under high pressure, mostly because of the anisotropic compressibility arising from its asymmetrical crystal structure. The van der Waals bonds can be greatly compressed in the z-direction, while compressibility varies greatly across the orthogonal x-y plane.
It is reported that controlling the centrifugal speed of production may aid in regulating the thickness of a material. For example, centrifuging at 18,000 rpm during synthesis produced phosphorene with an average diameter of 210 nm and a thickness of 2.8 ± 1.5 nm (2–7 layers).
Band gap and conductivity
Phosphorene has a thickness-dependent direct band gap that increases from 0.3 eV in the bulk to 1.88 eV in a monolayer. The increase in the band gap of single-layer phosphorene is predicted to be caused by the absence of interlayer hybridization near the top of the valence band and the bottom of the conduction band. A pronounced peak centered at around 1.45 eV suggests that the band-gap structure of few- or single-layer phosphorene differs from that of bulk crystals.
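Such thickness dependence is often summarized with an empirical power-law fit of the form E(N) = E_bulk + A/N^γ. The sketch below is illustrative only: it is pinned to the two endpoint values quoted above (1.88 eV for a monolayer, 0.3 eV for the bulk), and the exponent γ is an assumed fitting parameter, not a measured quantity, so intermediate values are not experimental data.

```python
def band_gap_ev(n_layers, e_bulk=0.3, e_mono=1.88, gamma=0.7):
    """Illustrative power-law model of the layer-dependent band gap:
    E(N) = e_bulk + A / N**gamma, with A fixed so that E(1) = e_mono.
    gamma is an assumed fit exponent, not a measured value."""
    a = e_mono - e_bulk
    return e_bulk + a / n_layers ** gamma

# Gap shrinks monotonically toward the bulk value as layers are added:
for n in (1, 2, 5, 10):
    print(n, round(band_gap_ev(n), 2))
```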
In vacuum or on a weakly interacting substrate, a reconstruction with nanotube-like termination of the phosphorene edge occurs readily, transforming the edge from metallic to semiconducting.
Air stability
One major disadvantage of phosphorene is its limited air stability. Composed of hygroscopic phosphorus and having an extremely high surface-to-volume ratio, phosphorene reacts with water vapor and oxygen, assisted by visible light, and degrades within hours. During the degradation process, phosphorene (solid) reacts with oxygen and water to develop liquid-phase acid 'bubbles' on the surface, and finally evaporates (vapor) to fully vanish (S-B-V degradation), severely reducing overall quality.
Applications
Transistor
Researchers have fabricated phosphorene transistors to examine its performance in actual devices. A phosphorene-based transistor consists of a 1.0 μm channel and uses few-layer phosphorene with a thickness varying from 2.1 to over 20 nm. A reduction of the total resistance with decreasing gate voltage is observed, indicating the p-type character of phosphorene. The linear I–V relationship of the transistor at low drain bias suggests good contact properties at the phosphorene/metal interface. Good current saturation at high drain-bias values was observed. However, mobility is reduced in few-layer phosphorene compared with bulk black phosphorus. The field-effect mobility of phosphorene-based transistors shows a strong thickness dependence, peaking at a thickness of around 5 nm and decreasing steadily with further increases in crystal thickness.
An atomic layer deposition (ALD) dielectric layer and/or a hydrophobic polymer is used as an encapsulation layer to prevent device degradation and failure. Phosphorene devices are reported to maintain their function for weeks with an encapsulation layer, whereas they fail within a week when exposed to ambient conditions.
Battery electrode
Phosphorene is considered a promising anode material for rechargeable batteries, such as lithium-ion batteries. The interlayer space allows lithium storage and transfer. The layer number and lateral size of phosphorene affect the stability and capacity of the anode.
Inverter
Researchers have also constructed a CMOS inverter (logic circuit) by combining a phosphorene PMOS transistor with a MoS2 NMOS transistor, achieving heterogeneous integration of semiconducting phosphorene crystals as a new channel material for potential electronic applications. In the inverter, the power-supply voltage is set to 1 V. The output voltage shows a clear transition from VDD to 0 within the input-voltage range from −10 to −2 V. A maximum gain of ~1.4 is attained.
Solar-cell donor material (optoelectronics)
The potential application of mixed bilayer phosphorene as a solar-cell donor material has been examined as well.
Flexible circuits
Phosphorene is a promising candidate for flexible nanosystems due to its ultra-thin nature, ideal electrostatic control and superior mechanical flexibility. Researchers have demonstrated flexible transistors, circuits and an AM demodulator based on few-layer phosphorus, showing enhanced ambipolar transport with room-temperature carrier mobility as high as ~310 cm2/Vs and strong current saturation. Fundamental circuit units including a digital inverter, a voltage amplifier and a frequency doubler have been realized. Radio-frequency (RF) transistors with an intrinsic cutoff frequency as high as 20 GHz have been realized for potential applications in high-frequency flexible smart nanosystems.
See also
Borophene
Germanene
Graphene
Silicene
Stanene
References
Phosphorus
Semiconductor materials
Monolayers | Phosphorene | Physics,Chemistry | 2,408 |
45,307 | https://en.wikipedia.org/wiki/Atomic%20electron%20transition | In atomic physics and chemistry, an atomic electron transition (also called an atomic transition, quantum jump, or quantum leap) is an electron changing from one energy level to another within an atom or artificial atom. The time scale of a quantum jump has not been measured experimentally. However, the Franck–Condon principle binds the upper limit of this parameter to the order of attoseconds.
Electrons jumping to energy levels of smaller n emit electromagnetic radiation in the form of a photon. Electrons can also absorb passing photons, which drives a quantum jump to a level of higher n. The larger the energy separation between the electron's initial and final state, the shorter the photons' wavelength.
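This energy–wavelength relation is λ = hc/ΔE. A quick numerical illustration for the hydrogen n = 3 → n = 2 (Balmer-α) jump, whose energy separation is about 1.89 eV:

```python
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
C_M_S = 2.99792458e8       # speed of light, m/s

def photon_wavelength_nm(delta_e_ev):
    """Wavelength (nm) of the photon emitted in a jump of energy delta_e_ev (eV)."""
    return H_EV_S * C_M_S / delta_e_ev * 1e9

# Hydrogen n=3 -> n=2: 13.6 eV * (1/4 - 1/9) ~ 1.89 eV, the red Balmer-alpha line
print(round(photon_wavelength_nm(1.89)))  # ~656 nm
```

Larger energy separations give proportionally shorter wavelengths, as stated above.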
History
Danish physicist Niels Bohr first theorized that electrons can perform quantum jumps in 1913. Soon after, James Franck and Gustav Ludwig Hertz proved experimentally that atoms have quantized energy states.
The observability of quantum jumps was predicted by Hans Dehmelt in 1975, and they were first observed using trapped ions of barium at University of Hamburg and mercury at NIST in 1986.
Theory
An atom interacts with the oscillating electric field

E(t) = E₀ ε̂ cos(ωt − kz)

with amplitude E₀, angular frequency ω, and polarization vector ε̂. Note that the actual phase is ωt − k·r. However, in many cases the variation of k·r is small over the atom (or equivalently, the radiation wavelength is much greater than the size of an atom) and this term can be ignored. This is called the dipole approximation. The atom can also interact with the oscillating magnetic field produced by the radiation, although much more weakly.
The Hamiltonian for this interaction, analogous to the energy of a classical dipole in an electric field, is H′ = −d·E, where d is the electric dipole operator. The stimulated transition rate can be calculated using time-dependent perturbation theory; however, the result can be summarized using Fermi's golden rule:

Γᵢ→f = (2π/ħ) |⟨f|H′|i⟩|² ρ(E_f)

The dipole matrix element ⟨f|d·ε̂|i⟩ can be decomposed into the product of a radial integral and an angular integral. The angular integral is zero unless the selection rules for the atomic transition are satisfied.
Recent discoveries
In 2019, it was demonstrated in an experiment with a superconducting artificial atom consisting of two strongly-hybridized transmon qubits placed inside a readout resonator cavity at 15 mK, that the evolution of some jumps is continuous, coherent, deterministic, and reversible. On the other hand, other quantum jumps are inherently unpredictable.
See also
Burst noise
Ensemble interpretation
Fluorescence
Glowing pickle demonstration
Molecular electronic transition, for molecules
Phosphorescence
Quantum jump
Spontaneous emission
Stimulated emission
References
External links
Part 2
"There are no quantum jumps, nor are there particles!" by H. D. Zeh, Physics Letters A172, 189 (1993).
"Surface plasmon at a metal-dielectric interface with an epsilon-near-zero transition layer" by Kevin Roccapriore et al., Physical Review B 103, L161404 (2021).
Atomic physics
Electron states | Atomic electron transition | Physics,Chemistry | 612 |
52,416,821 | https://en.wikipedia.org/wiki/Gordon%20J.%20Stanley | Gordon J. Stanley (1 July 1921 – 17 December 2001) was a New Zealand-born radio astronomer who with John G. Bolton in 1947, discovered the first radio star, Cygnus A.
Stanley was born in Cambridge, New Zealand. By the 1940s he was working in radio astronomy with Bolton, with whom he discovered the first radio star.
In 1955 Stanley went to the California Institute of Technology (Caltech) where he became the director of the Owens Valley Radio Observatory.
References
Sources
"Gordon Stanley, 80; Built, Directed Radio Observatory at Caltech", LA Times, 31 December 2001
Ken Kellerman, et al. Gordon James Stanley and the early developments of Radio Astronomy in Australia.
World Book, 1967 edition, Vol. 1, p. 803.
New Zealand emigrants to Australia
20th-century Australian astronomers
Australian expatriates in the United States
1921 births
2001 deaths | Gordon J. Stanley | Astronomy | 181 |
41,333 | https://en.wikipedia.org/wiki/Long-haul%20communications | In telecommunications, the term long-haul communications has the following meanings:
1. In public switched networks, pertaining to circuits that span large distances, such as the circuits in inter-LATA, interstate, and international communications. See also Long line (telecommunications)
2. In the military community, communications among users on a national or worldwide basis.
Note 1: Compared to tactical communications, long-haul communications are characterized by (a) higher levels of users, such as the US National Command Authority, (b) more stringent performance requirements, such as higher quality circuits, (c) longer distances between users, including worldwide distances, (d) higher traffic volumes and densities, (e) larger switches and trunk cross sections, and (f) fixed and recoverable assets.
Note 2: "Long-haul communications" usually pertains to the U.S. Defense Communications System.
Note 3: "Long-haul telecommunications technician" skills translate to many fields of IT work in the corporate world (information technology, network technician, telecommunications specialist, IT support, and so on). In the military, career fields such as 3D1X2 – Cyber Transport Systems (renamed many times over the years — Network Infrastructure Technician, Systems Control Technician, Cyber Transport Systems — but essentially the same job) work in areas that provide the "in between" (cloud networking) for networks (MSPP, ATM, routers, switches), phones (VoIP, DS0–DS4 or higher, and so on), encryption (configuring or monitoring encryption devices), and video-support data transfers — the "bulk data transfer" or aggregation networking.
The long-haul telecommunications technician is considered a "jack of all trades", but it is very much in the technician's interest to pursue further education and certifications to qualify for jobs outside the military. The military provides an avenue but does not make the individual a master of the career field. Technicians will find that the job outlook outside the military requires many things not required of them in the military, so it is best to find a job similar to the AFSC, review the company's description of the qualifications for that job, hold at least an associate degree and over 5 years of experience, and earn the required certifications (Network+, Security+, CCNA, CCNP, and so on) to obtain the job or at least an interview. The best time to apply, or to secure a guaranteed job, is the last three months before leaving the military. Military personnel in the 3D1X2 career field require a Secret, TS, or TS/SCI clearance in order to do the job.
See also
Long-distance calling
Meteor burst communications
Communication circuits | Long-haul communications | Engineering | 590 |
19,570,879 | https://en.wikipedia.org/wiki/Piston%20effect | Piston effect refers to the forced-air flow inside a tunnel or shaft caused by moving vehicles. It is one of numerous phenomena that engineers and designers must consider when developing a range of structures.
Cause
In open air, when a vehicle travels along, the air pushed aside can move in any direction except into the ground. Inside a tunnel, air is confined by the tunnel walls to move along the tunnel. Behind the moving vehicle, where air has been pushed away, suction is created, and air is pulled into the tunnel. In addition, because of fluid viscosity, the surface of the vehicle drags the air to flow with it, a force experienced by the vehicle as skin drag. This movement of air by the vehicle is analogous to the operation of a mechanical piston, as inside a reciprocating gas compressor, hence the name "piston effect". The effect is also similar to the pressure fluctuations inside drainage pipes as waste water pushes air in front of it.
The piston effect is very pronounced in railway tunnels, because the cross sectional area of trains is large and in many cases almost completely fills the tunnel cross section. The wind felt by the passengers on underground railway platforms (that do not have platform screen doors installed) when a train is approaching is air flow from the piston effect. The effect is less pronounced in road vehicle tunnels, as the cross-sectional area of vehicle is small compared to the total cross-sectional area of the tunnel. Single track tunnels experience the maximum effect but clearance between rolling stock and the tunnel as well as the shape of the front of the train affect its strength.
Air flow caused by the piston effect can exert large forces on the installations inside the tunnel and so these installations have to be carefully designed and installed properly. Non-return dampers are sometimes needed to prevent stalling of ventilation fans caused by this air flow.
Applications
The piston effect has to be considered by building designers in relation to smoke movement within an elevator shaft. A moving elevator car forces the air in front of it out of the shaft and pulls air into the shaft behind it with the effect most apparent in elevator systems with a fast moving car in a single shaft. This means that in a fire a moving elevator may push smoke into lower floors.
The piston effect is used in tunnel ventilation. In railway tunnels, the train pushes out the air in front of it toward the closest ventilation shaft in front, and sucks air into the tunnel from the closest ventilation shaft behind it. The piston effect can also assist ventilation in road vehicle tunnels.
In underground rapid transit systems, the piston effect contributes to ventilation and in some cases provides enough air movement to make mechanical ventilation unnecessary. At wider stations with multiple tracks, air quality remains the same and can even improve when mechanical ventilation is disabled. At narrow platforms with a single tunnel, however, air quality worsens when relying on the piston effect alone for ventilation. This still allows for potential energy savings by taking advantage of the piston effect rather than mechanical ventilation where possible.
Tunnel boom
Tunnel boom is a loud boom sometimes generated by high-speed trains when they exit tunnels. These shock waves can disturb nearby residents and damage trains and nearby structures. People perceive this sound similarly to that of a sonic boom from supersonic aircraft. However, unlike a sonic boom, tunnel boom is not caused by trains exceeding the speed of sound. Instead, tunnel boom results from the structure of the tunnel preventing the air around the train from escaping in all directions. As a train passes through a tunnel, it creates compression waves in front of it. These waves coalesce into a shock wave that generates a loud boom when it reaches the tunnel exit. The strength of this wave is proportional to the cube of the train's speed, so the effect is much more pronounced with faster trains.
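The cubic scaling quoted above implies that modest speed increases sharply amplify the boom. A minimal illustration (relative comparison only; the reference speed is chosen arbitrarily for the example):

```python
# The micro-pressure wave ("tunnel boom") amplitude scales with the cube
# of train speed, so small speed increases have outsized effects.
# This is a relative comparison only - no absolute pressure units assumed.
def relative_boom(v, v_ref):
    """Boom strength at speed v relative to a reference speed v_ref."""
    return (v / v_ref) ** 3

print(relative_boom(300, 200))   # 1.5x speed -> ~3.4x wave strength
print(relative_boom(400, 200))   # 2x speed   -> 8x wave strength
```

This is why countermeasures such as tunnel-entrance hoods target the formation of the compression wave rather than simply reducing train speed.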
Tunnel boom can disturb residents near the mouths of tunnels, and it is exacerbated in mountain valleys where the sound echoes. Reducing these disturbances is a significant challenge for high-speed lines such as Japan's Shinkansen, France's TGV and Spain's AVE. Tunnel boom has become a principal limitation to increased train speeds in Japan where the mountainous terrain requires frequent tunnels. Japan has enacted a law limiting noise to 70 dB in residential areas, which include many tunnel exit zones.
Methods of reducing tunnel boom include making the train's profile highly aerodynamic, adding hoods to tunnel entrances, installing perforated walls at tunnel exits, and drilling vent holes in the tunnel (similar to fitting a silencer on a firearm, but on a far bigger scale). The HS2 project in the United Kingdom has developed "porous portal" tunnel hoods to mitigate tunnel boom for residents, as well as minimising aural discomfort for passengers that could arise from in-train air pressure changes.
Ear discomfort
Passengers and crew may experience ear discomfort as a train enters a tunnel because of rapid pressure changes.
See also
Plumbing drainage venting
Footnotes
References
Pistone
External links
Tunnel Boom by an AVE train in Buñol, Spain
Enhancing the piston effect in underground railway tunnels
Piston Effect Simulation Using Ansys CFX
Railway tunnels
Tunnels
Physical phenomena | Piston effect | Physics | 1,037 |
18,870,628 | https://en.wikipedia.org/wiki/Gianotti%E2%80%93Crosti%20syndrome | Gianotti–Crosti syndrome (), also known as infantile papular acrodermatitis, papular acrodermatitis of childhood, and papulovesicular acrolocated syndrome, is a reaction of the skin to a viral infection. Hepatitis B virus and Epstein–Barr virus are the most frequently reported pathogens. Other viruses implicated are hepatitis A virus, hepatitis C virus, cytomegalovirus, coxsackievirus, adenovirus, enterovirus, rotavirus, rubella virus, HIV, and parainfluenza virus.
It is named for Ferdinando Gianotti and Agostino Crosti.
Presentation
Gianotti–Crosti syndrome mainly affects infants and young children. Children as young as 1.5 months and up to 12 years of age are reported to be affected. It is generally recognized as a papular or papulovesicular skin rash occurring mainly on the face and distal aspects of the four limbs. Purpura is generally not seen but may develop upon tourniquet test. However, extensive purpura without any hemorrhagic disorder has been reported. The presence of less florid lesions on the trunk does not exclude the diagnosis. Lymphadenopathy and hepatomegaly are sometimes noted. Raised AST and ALT levels with no rise in conjugated and unconjugated bilirubin levels are sometimes detectable, although the absence of such does not exclude the diagnosis. Spontaneous disappearance of the rash usually occurs after 15 to 60 days.
Diagnosis
The diagnosis of Gianotti–Crosti syndrome is clinical. A validated diagnostic criterion is as follows:
A patient is diagnosed as having Gianotti–Crosti syndrome if:
On at least one occasion or clinical encounter, he/she exhibits all the positive clinical features,
On all occasions or clinical encounters related to the rash, he/she does not exhibit any of the negative clinical features,
None of the differential diagnoses is considered to be more likely than Gianotti–Crosti syndrome on clinical judgment, and
If lesional biopsy is performed, the histopathological findings are consistent with Gianotti–Crosti syndrome.
The positive clinical features are:
Monomorphous, flat-topped, pink-brown papules or papulovesicles 1-10mm in diameter.
At least three of the following four sites involved – (1) cheeks, (2) buttocks, (3) extensor surfaces of forearms, and (4) extensor surfaces of legs.
Being symmetrical, and
Lasting for at least ten days.
The negative clinical features are:
Extensive truncal lesions, and
Scaly lesions.
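As a sketch only, the validated criteria above can be encoded as a boolean check. The field names and encounter structure below are hypothetical, invented purely for illustration, and do not correspond to any clinical coding standard:

```python
# Hypothetical encoding of the Gianotti-Crosti diagnostic criteria.
# Each encounter is a dict of clinical features observed on that occasion.
def meets_gianotti_crosti_criteria(encounters, differential_more_likely,
                                   biopsy_consistent=True):
    # Positive features must ALL be present on at least one occasion.
    positive_on_some_occasion = any(
        e["monomorphous_papules_1_10mm"]
        and e["sites_involved"] >= 3   # of cheeks, buttocks, forearms, legs
        and e["symmetrical"]
        and e["duration_days"] >= 10
        for e in encounters
    )
    # Negative features must be absent on ALL occasions related to the rash.
    negative_always_absent = all(
        not e["extensive_truncal_lesions"] and not e["scaly_lesions"]
        for e in encounters
    )
    return (positive_on_some_occasion and negative_always_absent
            and not differential_more_likely and biopsy_consistent)

# Example encounter satisfying all positive features and no negative ones.
encounters = [{
    "monomorphous_papules_1_10mm": True,
    "sites_involved": 3,
    "symmetrical": True,
    "duration_days": 14,
    "extensive_truncal_lesions": False,
    "scaly_lesions": False,
}]
print(meets_gianotti_crosti_criteria(encounters, differential_more_likely=False))
```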
Differential diagnosis
The differential diagnoses are: acrodermatitis enteropathica, erythema infectiosum, erythema multiforme, hand-foot-and-mouth disease, Henoch–Schönlein purpura, Kawasaki disease, lichen planus, papular urticaria, papular purpuric gloves and socks syndrome, and scabies.
Treatment
Gianotti–Crosti syndrome is a harmless and self-limiting condition, so treatment may not be required. Management is mainly focused on controlling itching, providing symptomatic relief, and avoiding further complications. For symptomatic relief from itching, oral antihistamines or soothing lotions such as calamine lotion or zinc oxide may be used. If there are associated conditions such as streptococcal infections, antibiotics may be required.
See also
List of cutaneous conditions
References
External links
Virus-related cutaneous conditions
Epstein–Barr virus–associated diseases
Syndromes affecting the skin
Syndromes caused by microbes | Gianotti–Crosti syndrome | Biology | 787 |
292,265 | https://en.wikipedia.org/wiki/Behavioral%20ecology | Behavioral ecology, also spelled behavioural ecology, is the study of the evolutionary basis for animal behavior due to ecological pressures. Behavioral ecology emerged from ethology after Niko Tinbergen outlined four questions to address when studying animal behaviors: What are the proximate causes, ontogeny, survival value, and phylogeny of a behavior?
If an organism has a trait that provides a selective advantage (i.e., has adaptive significance) in its environment, then natural selection favors it. Adaptive significance refers to the expression of a trait that affects fitness, measured by an individual's reproductive success. Adaptive traits are those that produce more copies of the individual's genes in future generations. Maladaptive traits are those that leave fewer. For example, if a bird that can call more loudly attracts more mates, then a loud call is an adaptive trait for that species because a louder bird mates more frequently than less loud birds—thus sending more loud-calling genes into future generations. Conversely, loud calling birds may attract the attention of predators more often, decreasing their presence in the gene pool.
Individuals are always in competition with others for limited resources, including food, territories, and mates. Conflict occurs between predators and prey, between rivals for mates, between siblings, mates, and even between parents and offspring.
Competing for resources
The value of a social behavior depends in part on the social behavior of an animal's neighbors. For example, the more likely a rival male is to back down from a threat, the more value a male gets out of making the threat. The more likely, however, that a rival will attack if threatened, the less useful it is to threaten other males. When a population exhibits a number of interacting social behaviors such as this, it can evolve a stable pattern of behaviors known as an evolutionarily stable strategy (or ESS). This term, derived from economic game theory, became prominent after John Maynard Smith (1982) recognized the possible application of the concept of a Nash equilibrium to model the evolution of behavioral strategies.
Evolutionarily stable strategy
In short, evolutionary game theory asserts that only a strategy that, when common in the population, cannot be "invaded" by any alternative (mutant) strategy is an ESS, and thus maintained in the population. In other words, at equilibrium every player should play the best strategic response to every other player. When the game is two-player and symmetric, each player should play the strategy that provides the best response to itself.
Therefore, the ESS is considered the evolutionary end point of such interactions. Because the fitness conveyed by a strategy is influenced by what other individuals are doing (the relative frequency of each strategy in the population), behavior is governed not only by optimality but also by the frequencies of strategies adopted by others, and is therefore frequency dependent (frequency dependence).
Behavioral evolution is therefore influenced by both the physical environment and interactions between other individuals.
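The invasion logic above can be sketched with the classic Hawk–Dove game, a standard textbook model rather than an example from this article; the payoff values V and C below are illustrative assumptions. When the cost of injury C exceeds the resource value V, the ESS is a mixed strategy playing Hawk with probability V/C, against which no pure strategy does better:

```python
# Sketch of an evolutionarily stable strategy (ESS) in the Hawk-Dove game.
# V = value of the contested resource, C = cost of injury (assumed values).
V, C = 2.0, 4.0

def payoff(p_hawk_self, p_hawk_other):
    """Expected payoff of one mixed strategy played against another."""
    # Hawk vs Hawk: (V - C) / 2   Hawk vs Dove: V
    # Dove vs Hawk: 0             Dove vs Dove: V / 2
    h, d = p_hawk_self, 1 - p_hawk_self
    H, D = p_hawk_other, 1 - p_hawk_other
    return h * H * (V - C) / 2 + h * D * V + d * D * V / 2

# When C > V, the ESS is mixed: play Hawk with probability V / C.
p_ess = V / C

# At the ESS, pure Hawk and pure Dove earn identical payoffs against it,
# so neither mutant can invade - the frequency-dependent equilibrium.
assert abs(payoff(1.0, p_ess) - payoff(0.0, p_ess)) < 1e-12
print(p_ess, payoff(1.0, p_ess))
```

The equal-payoff check is exactly the "cannot be invaded" condition: any mutant strategy earns no more against the resident mix than the resident does against itself.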
An example of how changes in geography can make a strategy susceptible to alternative strategies is the parasitization of the African honey bee, A. m. scutellata.
Resource defense
The term economic defendability was first introduced by Jerram Brown in 1964. Economic defendability states that defense of a resource has costs, such as energy expenditure or risk of injury, as well as benefits of priority access to the resource. Territorial behavior arises when the benefits are greater than the costs.
Studies of the golden-winged sunbird have validated the concept of economic defendability. Comparing the energetic costs a sunbird expends in a day to the extra nectar gained by defending a territory, researchers showed that birds only became territorial when they were making a net energetic profit. When resources are at low density, the gains from excluding others may not be sufficient to pay for the cost of territorial defense. In contrast, when resource availability is high, there may be so many intruders that the defender would have no time to make use of the resources made available by defense.
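The net-profit logic of the sunbird studies can be sketched with a toy daily energy budget; all numbers below are illustrative assumptions, not measurements from the original work. Defense pays only when the foraging time saved by richer, undepleted flowers outweighs the energetic cost of defense:

```python
# Toy energy budget for territorial defense (all values assumed).
FORAGE_COST = 1000   # calories burned per hour of foraging
DEFENSE_COST = 3000  # calories burned per hour of territorial defense
DAILY_NEED = 2400    # calories of nectar needed per day

def daily_cost(nectar_per_hour, defense_hours):
    """Total calories spent meeting the daily need at a given nectar rate."""
    forage_hours = DAILY_NEED / nectar_per_hour
    return forage_hours * FORAGE_COST + defense_hours * DEFENSE_COST

# Undefended flowers are depleted by intruders (low nectar per hour);
# defense keeps nectar levels high but costs defense time.
cost_no_defense = daily_cost(nectar_per_hour=400, defense_hours=0)
cost_defending  = daily_cost(nectar_per_hour=800, defense_hours=0.5)
print(cost_no_defense, cost_defending)  # defending is cheaper in this case
```

With these assumed numbers the bird makes a net energetic profit by defending; halve the nectar advantage and the conclusion reverses, which is the prediction the sunbird data confirmed.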
Sometimes the economics of resource competition favors shared defense. An example is the feeding territories of the white wagtail. The white wagtails feed on insects washed up by the river onto the bank, which acts as a renewing food supply. If any intruders harvested their territory then the prey would quickly become depleted, but sometimes territory owners tolerate a second bird, known as a satellite. The two sharers would then move out of phase with one another, resulting in decreased feeding rate but also increased defense, illustrating advantages of group living.
Ideal free distribution
One of the major models used to predict the distribution of competing individuals amongst resource patches is the ideal free distribution model. Within this model, resource patches can be of variable quality, and there is no limit to the number of individuals that can occupy and extract resources from a particular patch. Competition within a particular patch means that the benefit each individual receives from exploiting a patch decreases logarithmically with increasing number of competitors sharing that resource patch. The model predicts that individuals will initially flock to higher-quality patches until the costs of crowding bring the benefits of exploiting them in line with the benefits of being the only individual on the lesser-quality resource patch. After this point has been reached, individuals will alternate between exploiting the higher-quality patches and the lower-quality patches in such a way that the average benefit for all individuals in both patches is the same. This model is ideal in that individuals have complete information about the quality of a resource patch and the number of individuals currently exploiting it, and free in that individuals are freely able to choose which resource patch to exploit.
An experiment by Manfred Milinski in 1979 demonstrated that feeding behavior in three-spined sticklebacks follows an ideal free distribution. Six fish were placed in a tank, and food items were dropped into opposite ends of the tank at different rates. The rate of food deposition at one end was set at twice that of the other end, and the fish distributed themselves with four individuals at the faster-depositing end and two individuals at the slower-depositing end. In this way, the average feeding rate was the same for all of the fish in the tank.
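The stickleback setup can be reproduced with a minimal simulation using the experiment's 2:1 feeding rates and six fish; the sequential joining rule below is an illustrative simplification of how the fish reach the equilibrium:

```python
# Ideal free distribution sketch: 6 fish, two food patches, 2:1 rates.
# Each fish in turn joins whichever patch offers the higher per-capita
# intake, counting itself among the occupants.
rates = [2.0, 1.0]   # food items per unit time at each end of the tank
counts = [0, 0]

for _ in range(6):
    intake_if_joined = [r / (n + 1) for r, n in zip(rates, counts)]
    counts[intake_if_joined.index(max(intake_if_joined))] += 1

print(counts)        # [4, 2] - matches the observed distribution
per_capita = [r / n for r, n in zip(rates, counts)]
print(per_capita)    # [0.5, 0.5] - equal average feeding rate everywhere
```

The equalized per-capita intake at the end is the defining prediction of the model: no fish can do better by switching patches.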
Mating strategies and tactics
As with any competition for resources, species across the animal kingdom may also engage in competitions for mating. If one considers mates or potential mates as a resource, these sexual partners can be randomly distributed amongst resource pools within a given environment. Following the ideal free distribution model, suitors distribute themselves amongst the potential mates in an effort to maximize their chances or the number of potential matings. Among competitors (in most cases, the males of a species), there are variations in both the strategies and tactics used to obtain matings. Strategies generally refer to the genetically determined behaviors that can be described as conditional. Tactics refer to the subset of behaviors within a given genetic strategy. Thus it is not difficult for a great many variations in mating strategies to exist in a given environment or species.
In an experiment conducted by Anthony Arak, in which playback of synthetic calls from male natterjack toads was used to manipulate the behavior of males in a chorus, the difference between strategies and tactics is clear. While small and immature, male natterjack toads adopted a satellite tactic to parasitize larger males. Though large males on average still retained greater reproductive success, smaller males were able to intercept matings. When the large males of the chorus were removed, smaller males adopted a calling behavior, no longer competing against the loud calls of larger males. When the smaller males grew larger and their calls more competitive, they began calling and competing directly for mates.
Sexual selection
Mate choice by resources
In many sexually reproducing species, such as mammals, birds, and amphibians, females are able to bear offspring for a certain time period, during which the males are free to mate with other available females, and therefore can father many more offspring to pass on their genes. The fundamental difference between male and female reproduction mechanisms determines the different strategies each sex employs to maximize their reproductive success. For males, their reproductive success is limited by access to females, while females are limited by their access to resources. In this sense, females can be much choosier than males because they have to bet on the resources provided by the males to ensure reproductive success.
Resources usually include nest sites, food and protection. In some cases, the males provide all of them (e.g. sedge warblers). The females dwell in their chosen males' territories for access to these resources. The males gain ownership of the territories through male–male competition that often involves physical aggression. Only the largest and strongest males manage to defend the best quality nest sites. Females choose males by inspecting the quality of different territories or by looking at some male traits that can indicate the quality of resources. One example is the grayling butterfly (Hipparchia semele), where males engage in complex flight patterns to decide who defends a particular territory. The female grayling butterfly chooses a male based on the optimal location for oviposition. Sometimes, males leave after mating. The only resource that a male provides is a nuptial gift, such as protection or food, as seen in Drosophila subobscura. The female can evaluate the quality of the protection or food provided by the male so as to decide whether to mate and how long she is willing to copulate.
Mate choice by genes
When males' only contribution to offspring is their sperm, females are particularly choosy. With this high level of female choice, sexual ornaments are seen in males, where the ornaments reflect the male's social status. Two hypotheses have been proposed to conceptualize the genetic benefits from female mate choice.
First, the good genes hypothesis suggests that female choice is for higher genetic quality and that this preference is favored because it increases fitness of the offspring. This includes Zahavi's handicap hypothesis and Hamilton and Zuk's host and parasite arms race. Zahavi's handicap hypothesis was proposed within the context of looking at elaborate male sexual displays. He suggested that females favor ornamented traits because they are handicaps and are indicators of the male's genetic quality. Since these ornamented traits are hazards, the male's survival must be indicative of his high genetic quality in other areas. In this way, the degree to which a male expresses his sexual display indicates to the female his genetic quality. Zuk and Hamilton proposed a hypothesis after observing disease as a powerful selective pressure on a rabbit population. They suggested that sexual displays were indicators of resistance to disease at the genetic level.
Such 'choosiness' from the female individuals can be seen in wasp species too, especially among Polistes dominula wasps. The females tend to prefer males with smaller, more elliptically shaped spots than those with larger and more irregularly shaped spots. Those males would have reproductive superiority over males with irregular spots.
In marbled newts, females show preference to mates with larger crests. This however, is not considered a handicap as it does not negatively affect males' chances of survival. It is simply a trait females show preference for when choosing their mate as it is an indication of health and fitness.
Fisher's hypothesis of runaway sexual selection suggests that female preference is genetically correlated with male traits and that the preference co-evolves with the evolution of that trait; thus the preference is under indirect selection. Fisher suggests that female preference began because the trait indicated the male's quality. As the preference spread, females' offspring benefited not only from the higher quality signaled by the specific trait but also from greater attractiveness to mates. Eventually, the trait only represents attractiveness to mates, and no longer represents increased survival.
An example of mate choice by genes is seen in the cichlid fish Tropheus moorii where males provide no parental care. An experiment found that a female T. moorii is more likely to choose a mate with the same color morph as her own. In another experiment, females have been shown to share preferences for the same males when given two to choose from, meaning some males get to reproduce more often than others.
Sensory bias
The sensory bias hypothesis states that the preference for a trait evolves in a non-mating context, and is then exploited by one sex to obtain more mating opportunities. The competitive sex evolves traits that exploit a pre-existing bias that the choosy sex already possesses. This mechanism is thought to explain remarkable trait differences in closely related species because it produces a divergence in signaling systems, which leads to reproductive isolation.
Sensory bias has been demonstrated in guppies, freshwater fish from Trinidad and Tobago. In this mating system, female guppies prefer to mate with males with more orange body coloration. However, outside of a mating context, both sexes prefer animate orange objects, which suggests that preference originally evolved in another context, like foraging. Orange fruits are a rare treat that fall into streams where the guppies live. The ability to find these fruits quickly is an adaptive quality that has evolved outside of a mating context. Sometime after the affinity for orange objects arose, male guppies exploited this preference by incorporating large orange spots to attract females.
Another example of sensory exploitation is in the water mite Neumania papillator, an ambush predator that hunts copepods (small crustaceans) passing by in the water column. When hunting, N. papillator adopts a characteristic stance termed the 'net stance' - their first four legs are held out into the water column, with their four hind legs resting on aquatic vegetation; this allows them to detect vibrational stimuli produced by swimming prey and use this to orient towards and clutch at prey. During courtship, males actively search for females - if a male finds a female, he slowly circles around the female whilst trembling his first and second legs near her. Male leg trembling caused females (who were in the 'net stance') to orient towards and often clutch the male. This did not damage the male or deter further courtship; the male then deposited spermatophores and began to vigorously fan and jerk his fourth pair of legs over the spermatophore, generating a current of water that passed over the spermatophores and towards the female. Sperm packet uptake by the female would sometimes follow. Heather Proctor hypothesised that the vibrations made by trembling male legs mimic the vibrations that females detect from swimming prey - this would trigger the female prey-detection responses, causing females to orient and then clutch at males, mediating courtship. If this was true and males were exploiting female predation responses, then hungry females should be more receptive to male trembling – Proctor found that unfed captive females did orient and clutch at males significantly more than fed captive females did, consistent with the sensory exploitation hypothesis.
Other examples for the sensory bias mechanism include traits in auklets, wolf spiders, and manakins. Further experimental work is required to reach a fuller understanding of the prevalence and mechanisms of sensory bias.
Sexual conflict
Sexual conflict, in some form or another, may very well be inherent in the ways most animals reproduce. Females invest more in offspring prior to mating, due to the differences in gametes in species that exhibit anisogamy, and often invest more in offspring after mating. This unequal investment leads, on one hand, to intense competition between males for mates and, on the other hand, to females choosing among males for better access to resources and good genes. Because of differences in mating goals, males and females may have very different preferred outcomes to mating.
Sexual conflict occurs whenever the preferred outcome of mating is different for the male and female. This difference, in theory, should lead to each sex evolving adaptations that bias the outcome of reproduction towards its own interests. This sexual competition leads to sexually antagonistic coevolution between males and females, resulting in what has been described as an evolutionary arms race between males and females.
Conflict over mating
Males' reproductive successes are often limited by access to mates, whereas females' reproductive successes are more often limited by access to resources. Thus, for a given sexual encounter, it benefits the male to mate, but benefits the female to be choosy and resist. For example, male small tortoiseshell butterflies compete to gain the best mating territory. Another example of this conflict can be found in the Eastern carpenter bee, Xylocopa virginica. Males of this species are limited in reproduction primarily by access to mates, so they claim a territory and wait for a female to pass through. Big males are, therefore, more successful in mating because they claim territories near the female nesting sites that are more sought after. Smaller males, on the other hand, monopolize less competitive sites in foraging areas so that they may mate with reduced conflict. Another example of this is Sepsis cynipsea, where males of the species mount females to guard them from other males and remain on the female, attempting to copulate, until the female either shakes them off or consents to mating. Similarly, the neriid fly Derocephalus angusticollis demonstrates mate guarding, using its long limbs to hold onto the female and push other males away during copulation. Extreme manifestations of this conflict are seen throughout nature. For example, male Panorpa scorpionflies attempt to force copulation. Male scorpionflies usually acquire mates by presenting them with edible nuptial gifts in the forms of salivary secretions or dead insects. However, some males attempt to force copulation by grabbing females with a specialized abdominal organ without offering a gift. Forced copulation is costly to the female as she does not receive the food from the male and has to search for food herself (costing time and energy), while it is beneficial for the male as he does not need to find a nuptial gift.
In other cases, however, it pays for the female to gain more matings and her social mate to prevent these so as to guard paternity. For example, in many socially monogamous birds, males follow females closely during their fertile periods and attempt to chase away any other males to prevent extra-pair matings. The female may attempt to sneak off to achieve these extra matings. In species where males are incapable of constant guarding, the social male may frequently copulate with the female so as to swamp rival males' sperm.
Sexual conflict after mating has also been shown to occur in both males and females. Males employ a diverse array of tactics to increase their success in sperm competition. These can include removing other males' sperm from females, displacing other males' sperm by flushing out prior inseminations with large amounts of their own sperm, creating copulatory plugs in females' reproductive tracts to prevent future matings with other males, spraying females with anti-aphrodisiacs to discourage other males from mating with the female, and producing sterile parasperm to protect fertile eusperm in the female's reproductive tract. For example, the male spruce bud moth (Zeiraphera canadensis) secretes an accessory gland protein during mating that makes females unattractive to other males and thus prevents them from copulating again. The Rocky Mountain parnassian also exhibits this type of sexual conflict when the male butterflies deposit a waxy genital plug onto the tip of the female's abdomen that physically prevents the female from mating again. Males can also prevent future mating by transferring an anti-aphrodisiac to the female during mating. This behavior is seen in butterfly species such as Heliconius melpomene, where males transfer a compound that causes the female to smell like a male butterfly and thus deters any future potential mates. Furthermore, males may control the strategic allocation of sperm, producing more sperm when females are more promiscuous. All these methods are meant to ensure that females are more likely to produce offspring belonging to the male employing them.
Females also control the outcomes of matings, and there exists the possibility that females choose sperm (cryptic female choice). A dramatic example of this is the feral fowl Gallus gallus. In this species, females prefer to copulate with dominant males, but subordinate males can force matings. In these cases, the female is able to eject the subordinate male's sperm using cloacal contractions.
Parental care and family conflicts
Parental care is the investment a parent puts into their offspring—which includes protecting and feeding the young, preparing burrows or nests, and providing eggs with yolk. There is great variation in parental care in the animal kingdom. In some species, the parents may not care for their offspring at all, while in others the parents exhibit single-parental or even bi-parental care. As with other topics in behavioral ecology, interactions within a family involve conflicts. These conflicts can be broken down into three general types: sexual (male–female) conflict, parent–offspring conflict, and sibling conflict.
Types of parental care
There are many different patterns of parental care in the animal kingdom. The patterns can be explained by physiological constraints or ecological conditions, such as mating opportunities. In invertebrates, there is no parental care in most species because it is more favorable for parents to produce a large number of eggs whose fate is left to chance than to protect a few individual young. In other cases, parental care is indirect, manifested via actions taken before the offspring is produced, but nonetheless essential for their survival; for example, female Lasioglossum figueresi sweat bees excavate a nest, construct brood cells, and stock the cells with pollen and nectar before they lay their eggs, so when the larvae hatch they are sheltered and fed, but the females die without ever interacting with their brood. In birds, biparental care is the most common, because reproductive success directly depends on the parents' ability to feed their chicks. Two parents can feed twice as many young, so it is more favorable for birds to have both parents delivering food. In mammals, female-only care is the most common. This is most likely because females are internally fertilized and so carry the young inside for a prolonged period of gestation, which provides males with the opportunity to desert. Females also feed the young through lactation after birth, so males are not required for feeding. Male parental care is only observed in species where they contribute to feeding or carrying of the young, such as in marmosets. Among fish, 79% of bony fish species show no parental care. In fish with parental care, it is usually limited to selecting, preparing, and defending a nest, as seen in sockeye salmon, for example. Also, parental care in fish, if any, is primarily provided by males, as seen in gobies and redlip blennies. The cichlid fish V. moorii exhibits biparental care. In species with internal fertilization, the female is usually the one to take care of the young.
In cases where fertilization is external the male becomes the main caretaker.
Familial conflict
Familial conflict is a result of trade-offs as a function of lifetime parental investment. Parental investment was defined by Robert Trivers in 1972 as "any investment by the parent in an individual offspring that increases the offspring's chance of surviving at the cost of the parent's ability to invest in other offspring". Parental investment includes behaviors like guarding and feeding. Each parent has a limited amount of parental investment over the course of its lifetime. Trade-offs between offspring quality and quantity within a brood, and between current and future broods, lead to conflict over how much parental investment to provide and in which offspring to invest. There are three major types of familial conflict: sexual conflict, parent–offspring conflict, and sibling–sibling conflict.
Sexual conflict
There is conflict among parents as to who should provide the care as well as how much care to provide. Each parent must decide whether to stay and care for their offspring or to desert them. This decision is best modeled by game-theoretic approaches to evolutionarily stable strategies (ESS), in which the best strategy for one parent depends on the strategy adopted by the other parent. Recent research has found response matching in parents who determine how much care to invest in their offspring. Studies found that parent great tits match their partner's increased care-giving efforts with increased provisioning rates of their own. This cued parental response is a type of behavioral negotiation between parents that leads to stabilized compensation. Sexual conflicts can give rise to antagonistic co-evolution between the sexes, with each sex trying to get the other to care more for offspring. For example, in the waltzing fly Prochyliza xanthostoma, ejaculate feeding maximizes female reproductive success and minimizes the female's chance of mating multiple times. Evidence suggests that the sperm evolved to prevent female waltzing flies from mating multiple times, ensuring the male's paternity.
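The desert-or-care decision can be sketched as a simple two-player game in the spirit of these ESS models. The payoff numbers below are purely hypothetical illustrations, not data from any study; the point is only that a strategy pair is stable when neither parent gains by unilaterally switching:

```python
# Strategies for each parent: 0 = care, 1 = desert.
CARE, DESERT = 0, 1

# Hypothetical payoffs (surviving offspring): entry [f][m] is the payoff
# when the female plays strategy f and the male plays strategy m.
female_payoff = [[5, 3],   # female cares; brood does best with both parents
                 [4, 1]]   # female deserts; she may remate, brood suffers
male_payoff   = [[5, 4],
                 [3, 1]]

def stable_pairs(fp, mp):
    """Pure-strategy pairs where neither sex gains by switching
    unilaterally (Nash equilibria, the candidates for an ESS)."""
    pairs = []
    for f in (CARE, DESERT):
        for m in (CARE, DESERT):
            if fp[f][m] >= fp[1 - f][m] and mp[f][m] >= mp[f][1 - m]:
                pairs.append((f, m))
    return pairs

print(stable_pairs(female_payoff, male_payoff))  # [(0, 0)]: biparental care
```

With these particular payoffs, biparental care is the only stable outcome; shifting the numbers (for example, raising a deserter's remating payoff) shifts the equilibrium, which is how such models capture the dependence of each parent's best strategy on the other's.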
Parent–offspring conflict
According to Robert Trivers's theory on relatedness, each offspring is related to itself by 1, but only by 0.5 to its parents and full siblings. Genetically, offspring are predisposed to behave in their own self-interest, while parents are predisposed to behave equally toward all their offspring, including both current and future ones. Offspring selfishly try to take more than their fair share of parental investment, while parents try to spread their parental investment evenly among present and future young.
There are many examples of parent–offspring conflict in nature. One manifestation of this is asynchronous hatching in birds. A behavioral ecology hypothesis known as Lack's brood reduction hypothesis (named after David Lack) posits an evolutionary and ecological explanation as to why birds lay a series of eggs with an asynchronous delay, leading to nestlings of mixed ages and weights. According to Lack, this brood behavior is an ecological insurance that allows the larger birds to survive in poor years and all birds to survive when food is plentiful. Sex-ratio conflict is also seen between the queen and her workers in social Hymenoptera. Because of haplodiploidy, the workers (offspring) prefer a 3:1 female-to-male sex allocation, while the queen prefers a 1:1 sex ratio. Both the queen and the workers try to bias the sex ratio in their favor. In some species the workers gain control of the sex ratio, while in other species, like B. terrestris, the queen has a considerable amount of control over the colony sex ratio. Lastly, recent evidence suggests that genomic imprinting is itself a result of parent–offspring conflict. Paternally derived genes in offspring demand more maternal resources than maternally derived genes in the same offspring, and vice versa. This has been shown in imprinted genes like insulin-like growth factor II.
Parent–offspring conflict resolution
Parents need an honest signal from their offspring that indicates their level of hunger or need, so that the parents can distribute resources accordingly. Offspring want more than their fair share of resources, so they exaggerate their signals to wheedle more parental investment. However, this conflict is countered by the cost of excessive begging. Not only does excessive begging attract predators, but it also retards chick growth if begging goes unrewarded. Thus, the cost of increased begging enforces offspring honesty.
Another resolution for parent–offspring conflict is that parental provisioning and offspring demand have actually coevolved, so that there is no obvious underlying conflict. Cross-fostering experiments in great tits (Parus major) have shown that offspring beg more when their biological mothers are more generous. Therefore, it seems that the willingness to invest in offspring is co-adapted to offspring demand.
Sibling–sibling conflict
The lifetime parental investment is the fixed amount of parental resources available for all of a parent's young, and an offspring wants as much of it as possible. Siblings in a brood often compete for parental resources by trying to gain more than their fair share of what their parents can offer. Nature provides numerous examples in which sibling rivalry escalates to such an extreme that one sibling tries to kill off broodmates to maximize parental investment (See Siblicide). In the Galápagos fur seal, the second pup of a female is usually born when the first pup is still suckling. This competition for the mother's milk is especially fierce during periods of food shortage such as an El Niño year, and this usually results in the older pup directly attacking and killing the younger one.
In some bird species, sibling rivalry is also abetted by the asynchronous hatching of eggs. In the blue-footed booby, for example, the first egg in a nest is hatched four days before the second one, resulting in the elder chick having a four-day head start in growth. When the elder chick falls 20-25% below its expected weight threshold, it attacks its younger sibling and drives it from the nest.
Sibling relatedness in a brood also influences the level of sibling–sibling conflict. In a study on passerine birds, it was found that chicks begged more loudly in species with higher levels of extra-pair paternity.
Brood parasitism
Some animals deceive other species into providing all parental care. These brood parasites selfishly exploit their hosts' parents and host offspring. The common cuckoo is a well known example of a brood parasite. Female cuckoos lay a single egg in the nest of the host species and when the cuckoo chick hatches, it ejects all the host eggs and young. Other examples of brood parasites include honeyguides, cowbirds, and the large blue butterfly.
Brood parasite offspring have many strategies to induce their host parents to invest parental care. Studies show that the common cuckoo uses vocal mimicry to reproduce the sound of multiple hungry host young to solicit more food. Other cuckoos use visual deception with their wings to exaggerate the begging display. False gapes from brood parasite offspring cause host parents to collect more food. Another example of a brood parasite is Phengaris butterflies such as Phengaris rebeli and Phengaris arion, which differ from the cuckoo in that the butterflies do not oviposit directly in the nest of the host, an ant species Myrmica schencki. Rather, the butterfly larvae release chemicals that deceive the ants into believing that they are ant larvae, causing the ants to bring the butterfly larvae back to their own nests to feed them. Other examples of brood parasites are Polistes sulcifer, a paper wasp that has lost the ability to build its own nests so females lay their eggs in the nest of a host species, Polistes dominula, and rely on the host workers to take care of their brood, as well as Bombus bohemicus, a bumblebee that relies on host workers of various other Bombus species. Similarly, in Eulaema meriana, some Leucospidae wasps exploit the brood cells and nest for shelter and food from the bees. Vespula austriaca is another wasp in which the females force the host workers to feed and take care of the brood. In particular, Bombus hyperboreus, an Arctic bee species, is also classified as a brood parasite in that it attacks and enslaves other species within their subgenus, Alpinobombus to propagate their population.
Mating systems
Various types of mating systems include monogamy, polygyny, polyandry, and promiscuity. Each is differentiated by the sexual behavior between mates, such as which males mate with certain females. An influential paper by Stephen Emlen and Lewis Oring (1977) argued that two main factors of animal behavior influence the diversity of mating systems: the relative accessibility that each sex has to mates, and the parental desertion by either sex.
Mating systems with no male parental care
In a system without male parental care, resource dispersion, predation, and the effects of social living primarily influence female dispersion, which in turn influences male dispersion. Since males' primary concern is female acquisition, they compete for females either directly or indirectly. In direct competition, the males are directly focused on the females. Blue-headed wrasse demonstrate this behavior: females follow resources, such as good nest sites, and males follow the females. Conversely, in species exemplifying indirect competition, males anticipate the resources desired by females and subsequently try to control or acquire those resources, which helps them achieve success with females. Grey-sided voles demonstrate indirect male competition for females: males were experimentally observed to home in on the sites with the best food in anticipation of females settling in these areas. Males of Euglossa imperialis, a non-social bee species, also demonstrate indirect competitive behavior by forming aggregations of territories, which can be considered leks, to defend fragrance-rich primary territories. The purpose of these aggregations is largely facultative, since the more suitable fragrance-rich sites there are, the more habitable territories there are to inhabit, giving females of this species a large selection of males with whom to potentially mate. Leks and choruses are another phenomenon of male competition for females. Due to the resource-poor nature of the territories that lekking males often defend, it is difficult to categorize them as indirect competitors. For example, ghost moth males display in leks to attract female mates.
Additionally, it is difficult to classify them as direct competitors, seeing as they put a great deal of effort into defending their territories before females arrive, and upon female arrival they put forth great mating displays to attract the females to their individual sites. These observations make it difficult to determine whether female or resource dispersion primarily influences male aggregation, especially in light of the apparent difficulty that males may have defending resources and females in such densely populated areas. Because the reason for male aggregation into leks is unclear, five hypotheses have been proposed. These postulates propose the following as reasons for male lekking: hotspot, predation reduction, increased female attraction, hotshot males, and facilitation of female choice. With all of the mating behaviors discussed, the primary factors influencing differences within and between species are ecology, social conflicts, and life history differences.
In some other instances, neither direct nor indirect competition is seen. Instead, in species like the Edith's checkerspot butterfly, males' efforts are directed at acquisition of females and they exhibit indiscriminate mate location behavior, where, given the low cost of mistakes, they blindly attempt to mate both correctly with females and incorrectly with other objects.
Mating systems with male parental care
Monogamy
Monogamy is the mating system in 90% of birds, possibly because each male and female has a greater number of offspring if they share in raising a brood. In obligate monogamy, males feed females on the nest, or share in incubation and chick-feeding. In some species, males and females form lifelong pair bonds. Monogamy may also arise from limited opportunities for polygamy, due to strong competition among males for mates, females suffering from loss of male help, and female–female aggression.
Polygyny
In birds, polygyny occurs when males indirectly monopolize females by controlling resources. In species where males normally do not contribute much to parental care, females suffer relatively little or not at all. In other species, however, females suffer through the loss of male contribution and the cost of having to share resources that the male controls, such as nest sites or food. In some cases, a polygynous male may control a high-quality territory, so for the female the benefits of polygyny may outweigh the costs.
Polyandry threshold
There also seems to be a "polyandry threshold" where males may do better by agreeing to share a female instead of maintaining a monogamous mating system. Situations that may lead to cooperation among males include when food is scarce, and when there is intense competition for territories or females. For example, male lions sometimes form coalitions to gain control of a pride of females. In some populations of Galapagos hawks, groups of males would cooperate to defend one breeding territory. The males would share matings with the female and share paternity with the offspring.
Female desertion and sex role reversal
In birds, desertion often happens when food is abundant, so the remaining partner is better able to raise the young unaided. Desertion also occurs if there is a good chance for a parent to gain another mate, which depends on environmental and population factors. Some birds, such as the phalaropes, have reversed sex roles, in which the female is larger and more brightly colored and competes for males to incubate her clutches. In jacanas, the female is larger than the male, and her territory can overlap the territories of up to four males. In the frog species P. bibronii, the female lays eggs in multiple nests, and the male is left to tend each nest while the female moves on.
Social behaviors
Animals cooperate with each other to increase their own fitness. These altruistic, and sometimes spiteful, behaviors can be explained by Hamilton's rule, which states that a behavior is favored when rB − C > 0, where r is the genetic relatedness between actor and recipient, B is the fitness benefit to the recipient, and C is the fitness cost to the actor.
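Since much of what follows turns on this inequality, it is worth making it concrete. A minimal sketch (the benefit and cost values are hypothetical, chosen only to illustrate the threshold):

```python
def hamilton_favored(r, B, C):
    """Return True if Hamilton's rule rB - C > 0 is satisfied.

    r: coefficient of relatedness between actor and recipient
    B: fitness benefit to the recipient
    C: fitness cost to the actor
    """
    return r * B - C > 0

# Helping a full sibling (r = 0.5): favored only if the benefit
# is more than twice the cost.
print(hamilton_favored(0.5, B=3.0, C=1.0))   # True  (0.5*3 - 1 = 0.5 > 0)
print(hamilton_favored(0.5, B=1.5, C=1.0))   # False (0.5*1.5 - 1 = -0.25)
# A first cousin (r = 0.125) needs a benefit more than 8x the cost.
print(hamilton_favored(0.125, B=10.0, C=1.0))  # True (1.25 - 1 = 0.25 > 0)
```

With r = 0.5 for a full sibling, helping pays only when the benefit exceeds twice the cost — the quantitative content of J. B. S. Haldane's quip about laying down his life for two brothers or eight cousins.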
Kin selection
Kin selection refers to evolutionary strategies in which an individual acts to favor the reproductive success of relatives, or kin, even if the action incurs some cost to the organism's own survival and ability to procreate. John Maynard Smith coined the term in 1964, although the concept had been referred to by Charles Darwin, who noted that helping relatives would be favored by group selection. Mathematical descriptions of kin selection were initially offered by R. A. Fisher in 1930 and J. B. S. Haldane in 1932 and 1955. W. D. Hamilton later popularized the concept and gave it a mathematical treatment in papers of 1963 and 1964, which George Price subsequently refined.
Kin selection predicts that individuals will bear personal costs in favor of one or more other individuals, because this can maximize their genetic contribution to future generations. For example, an organism may be inclined to expend great time and energy in parental investment to rear offspring, since this future generation may be better suited for propagating genes that are highly shared between parent and offspring. Ultimately, the initial actor performs apparently altruistic actions for kin to enhance its own reproductive fitness. In particular, organisms are hypothesized to act in favor of kin in proportion to their genetic relatedness: individuals are inclined to act altruistically toward siblings, grandparents, cousins, and other relatives, but to differing degrees.
Inclusive fitness
Inclusive fitness describes the component of reproductive success in both a focal individual and their relatives. Importantly, the measure embodies the sum of direct and indirect fitness and the change in their reproductive success based on the actor's behavior. That is, the effect an individual's behaviors have on: being personally better-suited to reproduce offspring, and aiding descendant and non-descendant relatives in their reproductive efforts. Natural selection is predicted to push individuals to behave in ways that maximize their inclusive fitness. Studying inclusive fitness is often done using predictions from Hamilton's rule.
Kin recognition
Genetic cues
One possible method of kin selection is based on genetic cues that can be recognized phenotypically. Genetic recognition has been exemplified in a species that is usually not thought of as a social creature: amoebae. Social amoebae form fruiting bodies when starved for food. These amoebae preferentially formed slugs and fruiting bodies with members of their own lineage, which is clonally related. The genetic cue comes from variable lag genes, which are involved in signaling and adhesion between cells.
Kin can also be recognized by a genetically determined odor, as studied in the primitively social sweat bee Lasioglossum zephyrus. These bees can even recognize relatives they have never met and roughly determine relatedness. The Brazilian stingless bee Schwarziana quadripunctata uses a distinct combination of chemical hydrocarbons to recognize and locate kin. Each chemical odor, emitted from the organism's epicuticle, is unique and varies according to age, sex, location, and hierarchical position. Similarly, individuals of the stingless bee species Trigona fulviventris can distinguish kin from non-kin through recognition of a number of compounds, including hydrocarbons and fatty acids present in their wax and in the floral oils of plants used to construct their nests. In the species Osmia rufa, kin selection has also been associated with mate selection: females select for mating those males to whom they are more genetically related.
Environmental cues
There are two simple rules that animals follow to determine who is kin. These rules can be exploited, but exist because they are generally successful.
The first rule is 'treat anyone in my home as kin.' This rule is readily seen in the reed warbler, a bird species that only focuses on chicks in their own nest. If its own kin is placed outside of the nest, a parent bird ignores that chick. This rule can sometimes lead to odd results, especially if there is a parasitic bird that lays eggs in the reed warbler nest. For example, an adult cuckoo may sneak its egg into the nest. Once the cuckoo hatches, the reed warbler parent feeds the invading bird like its own child. Even with the risk for exploitation, the rule generally proves successful.
The second rule, named 'imprinting' by Konrad Lorenz, states that those you grow up with are kin. Several species exhibit this behavior, including, but not limited to, the Belding's ground squirrel. Experimentation with these squirrels showed that, regardless of true genetic relatedness, those that were reared together rarely fought. Further research suggests that some genetic recognition is also involved, as siblings that were raised apart were less aggressive toward one another than non-relatives reared apart.
Another way animals may recognize their kin is through the interchange of unique signals. While singing is often considered a sexual trait between males and females, male–male singing also occurs. For example, male vinegar flies Zaprionus tuberculatus can recognize each other by song.
Cooperation
Cooperation is broadly defined as behavior that provides a benefit to another individual and that evolved specifically because of that benefit. This excludes behavior that has not been expressly selected to benefit another individual, because there are many commensal and parasitic relationships in which the behavior of one individual (which has evolved to benefit that individual and no others) is taken advantage of by other organisms. Stable cooperative behavior requires that it provide a benefit to both the actor and the recipient, though the benefit to the actor can take many different forms.
Within species
Within species cooperation occurs among members of the same species. Examples of intraspecific cooperation include cooperative breeding (such as in weeper capuchins) and cooperative foraging (such as in wolves). There are also forms of cooperative defense mechanisms, such as the "fighting swarm" behavior used by the stingless bee Tetragonula carbonaria. Much of this behavior occurs due to kin selection. Kin selection allows cooperative behavior to evolve where the actor receives no direct benefits from the cooperation.
Cooperation (without kin selection) must evolve to provide benefits to both the actor and recipient of the behavior. This includes reciprocity, where the recipient of the cooperative behavior repays the actor at a later time. This may occur in vampire bats but it is uncommon in non-human animals. Cooperation can occur willingly between individuals when both benefit directly as well. Cooperative breeding, where one individual cares for the offspring of another, occurs in several species, including wedge-capped capuchin monkeys.
Cooperative behavior may also be enforced, where their failure to cooperate results in negative consequences. One of the best examples of this is worker policing, which occurs in social insect colonies.
The cooperative pulling paradigm is a popular experimental design used to assess whether and under which conditions animals cooperate. It involves two or more animals pulling rewards toward themselves via an apparatus they cannot successfully operate alone.
Between species
Cooperation can occur between members of different species. For interspecific cooperation to be evolutionarily stable, it must benefit individuals in both species. Examples include pistol shrimp and goby fish, nitrogen-fixing microbes and legumes, and ants and aphids. In the last case, aphids secrete a sugary liquid called honeydew, which ants eat. The ants provide protection to the aphids against predators and, in some instances, raise the aphid eggs and larvae inside the ant colony. This behavior is analogous to human domestication. Goby fish of the genus Elacatinus also demonstrate cooperation by removing and feeding on ectoparasites of their clients. The wasp Polybia rejecta and the ant Azteca chartifex show cooperative behavior, protecting one another's nests from predators.
Market economics often govern the details of the cooperation: e.g. the amount exchanged between individual animals follow the rules of supply and demand.
Spite
Hamilton's rule can also predict spiteful behaviors between non-relatives. A spiteful behavior is one that is harmful to both the actor and the recipient. Spiteful behavior is favored if the actor is less related to the recipient than to the average member of the population, making r negative, and if rB − C is still greater than zero. Spite can also be thought of as a type of altruism, because harming a non-relative, by taking its resources for example, could also benefit a relative by allowing it access to those resources. Furthermore, certain spiteful behaviors may carry harmful short-term consequences for the actor but also confer long-term reproductive benefits. Many behaviors commonly thought of as spiteful are actually better explained as selfish, that is, benefiting the actor and harming the recipient; true spiteful behaviors are rare in the animal kingdom.
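The same inequality can be evaluated for spite by allowing both r and B to be negative. The numbers below are hypothetical, for illustration only:

```python
def spite_favored(r, b, c):
    """Hamilton's rule r*b - c > 0 applied to spite: r < 0 (the
    recipient is less related than the population average) and
    b < 0 (the behavior harms the recipient)."""
    return r * b - c > 0

# Hypothetical values: the actor pays a small cost (c = 0.1) to
# inflict harm (b = -2.0) on a negatively related recipient (r = -0.1).
print(spite_favored(-0.1, -2.0, 0.1))  # True:  (-0.1)(-2.0) - 0.1 = 0.1 > 0
print(spite_favored(-0.1, -2.0, 0.5))  # False: 0.2 - 0.5 = -0.3
```

The two negatives multiply to a positive indirect benefit, so spite is favored only when the harm inflicted on negatively related competitors outweighs the actor's own cost.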
An example of spite is the sterile soldiers of the polyembryonic parasitoid wasp. A female wasp lays a male and a female egg in a caterpillar. The eggs divide asexually, creating many genetically identical male and female larvae. Sterile soldier wasps also develop and attack the relatively unrelated brother larvae so that the genetically identical sisters have more access to food.
Another example is bacteria that release bacteriocins. The bacterium that releases the bacteriocin may have to die to do so, but most of the harm falls on unrelated individuals killed by the bacteriocin. Because the ability to produce and release the bacteriocin is linked to an immunity to it, close relatives of the releasing cell are less likely to die than non-relatives.
Altruism and conflict in social insects
Many insect species of the order Hymenoptera (bees, ants, wasps) are eusocial. Within the nests or hives of social insects, individuals engage in specialized tasks to ensure the survival of the colony. Dramatic examples of these specializations include changes in body morphology or unique behaviors, such as the engorged bodies of the honeypot ant Myrmecocystus mexicanus or the waggle dance of honey bees and a wasp species, Vespula vulgaris.
In many, but not all social insects, reproduction is monopolized by the queen of the colony. Due to the effects of a haplodiploid mating system, in which unfertilized eggs become male drones and fertilized eggs become worker females, average relatedness values between sister workers can be higher than those seen in humans or other eutherian mammals. This has led to the suggestion that kin selection may be a driving force in the evolution of eusociality, as individuals could provide cooperative care that establishes a favorable benefit-to-cost ratio (rB − C > 0). However, not all social insects follow this rule. In the social wasp Polistes dominula, 35% of nest mates are unrelated. In many other species, unrelated individuals help the queen only when no other options are present; in P. dominula, however, subordinates work for unrelated queens even when other options may be present. No other social insect submits to unrelated queens in this way. This seemingly unfavorable behavior parallels some vertebrate systems. It is thought that this unrelated assistance is evidence of altruism in P. dominula.
Cooperation in social organisms has numerous ecological factors that can determine the benefits and costs associated with this form of organization. One suggested benefit is a type of "life insurance" for individuals who participate in the care of the young. In this instance, individuals may have a greater likelihood of transmitting genes to the next generation when helping in a group compared to individual reproduction. Another suggested benefit is the possibility of "fortress defense", where soldier castes threaten or attack intruders, thus protecting related individuals inside the territory. Such behaviors are seen in the snapping shrimp Synalpheus regalis and gall-forming aphid Pemphigus spyrothecae. A third ecological factor that is posited to promote eusociality is the distribution of resources: when food is sparse and concentrated in patches, eusociality is favored. Evidence supporting this third factor comes from studies of naked mole-rats and Damaraland mole-rats, which have communities containing a single pair of reproductive individuals.
Conflicts in social insects
Although eusociality has been shown to offer many benefits to the colony, there is also potential for conflict. Examples include the sex-ratio conflict and worker policing seen in certain species of social Hymenoptera such as Dolichovespula media, Dolichovespula sylvestris, Dolichovespula norwegica and Vespula vulgaris. The queen and the worker wasps either indirectly kill the laying workers' offspring by neglecting them or directly kill them by cannibalizing and scavenging.
The sex-ratio conflict arises from a relatedness asymmetry caused by the haplodiploid nature of Hymenoptera. Workers are most closely related to each other because they inherit all of the father's genes (he is haploid) and share, on average, half of the genes from the queen. Their total relatedness to each other is therefore 0.5 + (0.5 × 0.5) = 0.75, so sisters are three-quarters related to each other. Males, on the other hand, arise from unfertilized eggs, meaning they inherit half of the queen's genes and none from the father. As a result, a female is related to her brother by only 0.25, because the 50% of her genes that come from her father have no chance of being shared with a brother: 0.5 × 0.5 = 0.25.
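These coefficients can be written out term by term (a sketch assuming a single, singly mated queen):

```python
# Haplodiploid relatedness, assuming one singly mated queen.
# Each coefficient sums, over the maternal and paternal halves of the
# genome, the probability that a gene copy is shared by descent.

# Sister -> sister: the paternal half is always shared (the father is
# haploid), and the maternal half is shared with probability 0.5.
r_sister_sister = 0.5 * 1.0 + 0.5 * 0.5   # = 0.75

# Sister -> brother: brothers carry only maternal genes, so only the
# sister's maternal half (weight 0.5) can match, with probability 0.5.
r_sister_brother = 0.5 * 0.5              # = 0.25

print(r_sister_sister, r_sister_brother)       # 0.75 0.25
print(r_sister_sister / r_sister_brother)      # 3.0
```

The 3.0 ratio in the last line is the source of the workers' preferred 3:1 female-to-male investment ratio discussed above.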
According to Trivers and Hare's population-level sex-investment ratio theory, the ratio of relatedness between sexes determines the sex investment ratios. As a result, it has been observed that there is a tug-of-war between the queen and the workers, where the queen would prefer a 1:1 female to male ratio because she is equally related to her sons and daughters (r=0.5 in each case). However, the workers would prefer a 3:1 female to male ratio because they are 0.75 related to each other and only 0.25 related to their brothers. Allozyme data of a colony may indicate who wins this conflict.
Conflict can also arise between workers in colonies of social insects. In some species, worker females retain their ability to mate and lay eggs. The colony's queen is related to her sons by half of her genes and to the sons of her worker daughters by a quarter. Workers, however, are related to their own sons by half of their genes and to their brothers by a quarter. Thus, the queen and her worker daughters compete for reproduction to maximize their own reproductive fitness. Worker reproduction is limited by other workers, who in many polyandrous hymenopteran species are more closely related to the queen's sons than to the sons of their half-sisters. Workers police egg-laying females by engaging in oophagy or directed acts of aggression.
The monogamy hypothesis
The monogamy hypothesis states that the presence of monogamy in insects is crucial for eusociality to occur. This is thought to follow from Hamilton's rule, rB − C > 0. With a monogamous mating system, all of the offspring have high relatedness to each other, so it is equally beneficial to help a sibling as to help an offspring. If there were many fathers, the relatedness of the colony would be lowered.
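A sketch of this dilution effect: under the standard haplodiploid accounting, with n fathers siring equal shares of the brood, average sister–sister relatedness is 0.25 + 0.5/n (equal paternity shares are assumed here for simplicity):

```python
def avg_sister_relatedness(n_fathers):
    """Average relatedness among a haplodiploid queen's daughters when
    she mates with n_fathers males, each siring an equal share.
    The maternal half always contributes 0.25; the paternal half
    contributes 0.5 only when two sisters happen to share the same
    (haploid) father, which has probability 1/n_fathers."""
    return 0.25 + 0.5 / n_fathers

for n in (1, 2, 5, 10):
    print(n, avg_sister_relatedness(n))
# 1 -> 0.75   2 -> 0.5   5 -> 0.35   10 -> 0.3
```

At n = 1 sisters are related by 0.75, above the 0.5 relatedness to one's own offspring; already at n = 2 that advantage disappears, which is why ancestral monogamy matters for the origin of eusociality.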
This monogamous mating system has been observed in insects such as termites, ants, bees and wasps. In termites, the queen commits to a single male when founding a nest. In ants, bees, and wasps, the queens have a functional equivalent to lifetime monogamy; the male may even die before the founding of the colony. The queen can store and use the sperm from a single male throughout her lifetime, sometimes for up to 30 years.
In a comparative study of mating in 267 hymenopteran species, the results were mapped onto a phylogeny. Monogamy was found to be the ancestral state in all independent transitions to eusociality, indicating that monogamy is the ancestral, and likely crucial, state for the development of eusociality. In species where queens mate with multiple males, sterile castes had already evolved in the lineage, so the multiple mating was secondary. In these cases, multiple mating is likely advantageous for reasons other than those important at the origin of eusociality; most likely, a more diverse worker pool attained by multiple mating by the queen increases disease resistance and may facilitate a division of labor among workers.
Communication and signaling
Communication is varied at all scales of life, from interactions between microscopic organisms to those among large groups of people. Nevertheless, the signals used in communication abide by a fundamental property: they must be a quality of the sender that can transfer information to a receiver capable of interpreting the signal and modifying its behavior accordingly. Signals are distinct from cues in that evolution has selected for signalling between both parties, whereas cues are merely informative to the observer and may not have originally served that purpose. The natural world is replete with examples of signals: the luminescent flashes of fireflies, chemical signaling in red harvester ants, the prominent mating displays of birds such as the Guianan cock-of-the-rock, which gather in leks, the pheromones released by the corn earworm moth, the dancing patterns of the blue-footed booby, and the alarm sound Synoeca cyanea makes by rubbing its mandibles against its nest. Yet other examples are the grizzled skipper and Spodoptera littoralis, in which pheromones are released as a sexual recognition mechanism that drives evolution. In a type of mating signal, male orb-weaving spiders of the species Zygiella x-notata pluck the signal thread of a female's web with their forelegs; this performance conveys vibratory signals informing the female spider of the male's presence.
The nature of communication poses evolutionary concerns, such as the potential for deceit or manipulation on the part of the sender. In this situation, the receiver must be able to anticipate the interests of the sender and act appropriately to a given signal. Should any side gain advantage in the short term, evolution would select against the signal or the response. The conflict of interests between the sender and the receiver results in an evolutionarily stable state only if both sides can derive an overall benefit.
Although the potential benefits of deceit could be great in terms of mating success, there are several possibilities for how dishonesty is controlled, which include indices, handicaps, and common interests. Indices are reliable indicators of a desirable quality, such as overall health, fertility, or fighting ability of the organism. Handicaps, as the term suggests, place a restrictive cost on the organisms that own them, and thus lower quality competitors experience a greater relative cost compared to their higher quality counterparts. In the common interest situation, it is beneficial to both sender and receiver to communicate honestly such that the benefit of the interaction is maximized.
Signals are often honest, but there are exceptions. Prime examples of dishonest signals include the luminescent lure of the anglerfish, which is used to attract prey, or the mimicry of non-poisonous butterfly species, like the Batesian mimic Papilio polyxenes of the poisonous model Battus philenor. Although evolution should normally favor selection against the dishonest signal, in these cases it appears that the receiver would benefit more on average by accepting the signal.
See also
Autonomous foraging
Behavioral plasticity
Evolutionary models of food sharing
Gene-centered view of evolution
Human behavioral ecology
Life history theory
Marginal value theorem
Optimization
Mating effort
Parental effort
Phylogenetic comparative methods
Selection
Balancing selection
Directional selection
Disruptive selection
Stabilizing selection
r/K selection theory
Somatic effort
References
Further reading
Alcock, J. (2009). Animal Behavior: An Evolutionary Approach (9th edition). Sinauer Associates Inc. Sunderland, MA.
Bateson, P. (2017) Behaviour, Development and Evolution. Open Book Publishers,
Danchin, É., Giraldeau, L.-A. and Cézilly, F. (2008). Behavioural Ecology: An Evolutionary Perspective on Behaviour. Oxford University Press, Oxford.
Krebs, J.R. and Davies, N. An Introduction to Behavioural Ecology,
Krebs, J.R. and Davies, N. Behavioural Ecology: An Evolutionary Approach,
Wajnberg, E., Bernstein E. and van Alphen, E. (2008). Behavioral Ecology of Insect Parasitoids – From Theoretical Approaches to Field Applications, Blackwell Publishing.
External links
Social media and suicide

Researchers study social media and suicide to determine whether the two are correlated; some research suggests that they are.
Background
Suicide is one of the leading causes of death worldwide and, as of 2020, the second leading cause of death in the United States for those aged 15–34. According to the Centers for Disease Control and Prevention, suicide was the third leading cause of death among adolescents in the US from 1999 to 2006.
In 2020, the US suicide rate was 13.5 per 100,000. Suicide was a leading cause of death in the United States, accounting for 48,183 deaths in 2021. Suicide rates increased by 30 percent from 2000 to 2018, then declined in 2019 and 2020.
Suicide remains a significant public health issue worldwide, despite prevention efforts and treatments. Suicide has been identified not only as an individual phenomenon but also as one influenced by social and environmental factors. There is growing evidence that online activity has influenced suicide-related behavior. The use of social media throughout the 21st century has grown exponentially, making a wide variety of platforms accessible to the public, especially social media sites such as Facebook, Instagram, Twitter, YouTube, Snapchat, and TikTok. Although these platforms were intended to let people connect virtually, they can lead to cyberbullying, insecurity, and emotional distress, and sometimes may influence a person to attempt suicide.
Bullying, whether on social media or elsewhere, physical or not, significantly increases victims' risk of suicidal behavior. Since the advent of social media, some people have taken their own lives as a result of cyberbullying. Furthermore, suicide rates among teenagers increased from 2010 to 2022 as social media became a larger part of people's day-to-day lives.
Media algorithms tend to popularize videos and posts about this rising problem, which may hold particular appeal for the young and immature minds of teenagers. Social media can thus pose heightened risks by promoting pro-suicide sites, message boards, chat rooms, and forums. Moreover, the Internet not only reports suicide incidents but documents suicide methods (for example, suicide pacts: agreements between two or more people to kill themselves at a particular time, often by the same lethal means). The role the Internet, particularly social media, plays in suicide-related behavior is therefore a topic of growing interest.
Cyberbullying
There is substantial evidence that the Internet and social media can influence suicide-related behavior. Such evidence includes an increase in exposure to graphic content. A research study conducted by Sameer Hinduja and Justin Patchin found a correlation between cyberbullying and suicide. According to their findings, cyberbullying increases suicidal thoughts by 14.5 percent and suicide attempts by 8.7 percent. Particularly alarming is the fact that children and young people under 25 who are victims of cyberbullying are more than twice as likely to self-harm and engage in suicidal behavior. Overall, teen suicide rates have increased within the past decade. This presents a significant public health concern, with over 40,000 suicides in the United States and nearly one million worldwide annually.
Recent data from the Centers for Disease Control and Prevention reveal that 14.9 percent of teenagers have experienced online bullying, while 13.6 percent of teenagers have seriously attempted suicide; both figures are rising in the United States. Furthermore, in numerous recent incidents, cyberbullying has led the victim to suicide, a phenomenon now known as cyberbullicide. Many parents and children are unaware of the dangers and potential legal consequences of cyberbullying. In response, anti-bullying regulations implemented by schools aim to prevent any form of bullying, including through technology, and to protect students from online harassment. While some states have enacted laws against cyberbullying, there are currently no federal regulations addressing this issue.
Social media's influence on suicide
The media may portray suicidal behavior or language which can potentially influence people to act on these suicidal tendencies. This may include news reports of actual suicides that have occurred or television shows and films that reenact suicides.
Some organizations have proposed guidelines for how the media should report suicide. There is evidence that compliance with the guidelines varies. Some research has found it unclear whether the guidelines have reduced the number of suicides, while other studies report that they have worked in some cases.
Impact of pro-suicidal sites, message boards, chat rooms and forums
Social media platforms have transformed traditional methods of communication by allowing the instantaneous and interactive sharing of information created and controlled by individuals, groups, organizations, and governments. As of the third quarter of 2022, Facebook had 266 million monthly active users in the United States and Canada combined. An immense quantity of information on the topic of suicide is available on the Internet and via social media, and this information can influence suicidal behavior both negatively and positively.
The social cognitive theory plays a vital role in suicide attempts influenced through social media. This theory is demonstrated when one is influenced by what they see through various processes that form into modeled behaviors. This can be shown when people post their suicide attempts online or promote suicidal behavior in general.
Contributors to these social media platforms may also exert peer pressure and encourage others to take their own lives, idolize those who have killed themselves, and facilitate suicide pacts. For example, in 2008 a Japanese message board shared that people could kill themselves using hydrogen sulfide gas; shortly afterward, 220 people attempted suicide in this way, and 208 died. Biddle et al. conducted a systematic Web search of 12 suicide-associated terms (e.g., suicide, suicide methods, how to kill yourself, and best suicide methods) and found that pro-suicide sites and chat rooms discussing general issues associated with suicide most often occurred within the first few hits of a search. In another study, 373 suicide-related websites were found using Internet search engines and examined; among them, 31% were suicide-neutral, 29% were anti-suicide, and 11% were pro-suicide. Together, these studies show that obtaining pro-suicide information on the Internet, including detailed information on suicide methods, is very easy.
While social media has been implicated in young adult suicide, some young adults find comfort and solace through these platforms, making connections with people in similar situations who help them feel less lonely. Although public opinion holds that message boards are harmful, the following studies suggest they can have positive influences and aid suicide prevention. A content analysis of all postings on the AOL Suicide Bulletin Board over 11 months concluded that most contributions were positive, empathetic, and supportive. A multi-method study demonstrated that users of such forums experience a great deal of social support and only a small amount of social strain. Finally, a survey asked participants to rate the extent of their suicidal thoughts on a 7-level scale (0, absolutely no suicidal thoughts, to 7, very strong suicidal thoughts), both for the time directly before their first forum visit and at the time of the survey; it found a significant reduction after forum use, though it could not conclude that the forum was the only reason for the decrease. Together, these studies show how forums can reduce the number of suicides.
An example of how social media can play a role in suicide is that of a male adolescent who arrived at the emergency department with his parents after a suspected intentional medication overdose. Beforehand, he had sent an ex-girlfriend a Snapchat picture of himself holding a bottle of acetaminophen, which was forwarded to his parents. Medical experts used this picture to establish the time of ingestion; oral N-acetylcysteine was administered, and he was brought to a pediatric care facility, where he had an uneventful recovery and a psychiatric evaluation.
In 2013, nine teen suicides were attributed to hateful anonymous messages on Ask.fm.
Cyberbullying and suicide
Cyberbullying has received considerable attention as a possible cause of suicide. With the rise of social media, the risk of falling victim to blackmail has also increased. Cyberbullying has been deemed a major health concern for affected teens and a major threat to those suffering the psychological trauma inflicted by perpetrators on social media. While there is no federal law specific to cyberbullying, 48 states have laws against cyberbullying or online harassment, 44 of which include criminal sanctions. Many states have expanded their harassment laws to include online harassment. Criminal harassment statutes often provide a basis for bringing charges in severe cases, and more serious criminal charges have been brought where evidence indicates a resultant suicide or other tragic consequences. Civil remedies have been sought in many cases where criminal liability was difficult to prove.
In 2006, 13-year-old Megan Meier hanged herself in her bedroom closet following a series of MySpace messages that came from a friend's mother and her 18-year-old associate, who posed as a 16-year-old boy named "Josh Evans" and encouraged Megan to commit suicide. The mother, Lori Drew, faced federal conspiracy charges related to computer fraud and abuse (see United States v. Drew), but was later acquitted.
In 2012, Canadian high school student Amanda Todd hanged herself after being blackmailed by a stalker, and suffering from repeated cyberbullying and harassment at school. On September 7, Todd posted a 9-minute YouTube video titled My story: Struggling, bullying, suicide, self-harm, which showed her using a series of flashcards to tell of her experiences being bullied. The video went viral after her death on October 10, 2012, receiving over 1,600,000 views by October 13, 2012, with news websites from around the world linking to it.
In 2014, Conrad Roy killed himself after exchanging numerous text messages with Michelle Carter, his long-distance girlfriend, who repeatedly encouraged him to commit suicide. She was found guilty of involuntary manslaughter, and sentenced to 15 months in prison. Carter was released in January 2020.
Sadie Riggs, a Pennsylvania teen, killed herself in 2015, allegedly because of online bullying and harassment at school about her appearance. Sadie's aunt, Sarah Smith, contacted various social media companies, police, and Sadie's school in hopes of making the bullying stop. In desperation, Smith went as far as breaking Sadie's phone in her presence in an attempt to stop the bullying. No charges were ever filed against any alleged suspect.
In 2016, Chien Chih-cheng, a Taiwanese animal shelter director, committed suicide after appearing in a television program about animal euthanasia. Chien, an animal lover, was charged with euthanizing stray pets as a result of overcrowding in Taiwan's shelters. After appearing on the program, she was branded as an "executioner" and "female butcher", and she and the shelter she operated were subject to intense cyberbullying and abuse. She later died by injecting herself with the same substance she used to euthanize pets, leaving a note communicating that "all lives are equal".
In a 2018 Florida case, two preteens were arrested and charged with cyberstalking after they were accused of cyberbullying another female middle school student, 12-year-old Gabriella Green. Online rumors were spread about her, and she hanged herself immediately after a call with one of the abusers, who told her that "If you're going to do it, just do it" and ended the call, according to police.
In 2019, Canadian Inuk pop singer Kelly Fraser, who was most popular for her Inuktitut language covers of pop songs, was found dead in her home near Winnipeg, Manitoba. Her death was ruled a suicide, which Fraser's family attributed to "childhood traumas, racism, and persistent cyberbullying."
Austrian doctor Lisa-Maria Kellermayr committed suicide in 2022 after a tweet she made criticizing opponents of Covid measures made her a target of death threats, intimidation and abuse.
Media contagion effect
Suicide contagion can be viewed within the larger context of behavioral contagion, which has been described as a situation in which the same behavior spreads quickly and spontaneously through a group. Suicide contagion refers to the phenomenon of indirect exposure to suicide or suicidal behaviors influencing others to attempt to kill themselves. The persons most susceptible to suicide contagion are those under 25 years of age. Media coverage of suicides has been shown to significantly increase the rate of suicide, and the magnitude of the increase is related to the amount, duration, and prominence of coverage. A recent study by Dunlop et al. specifically examined possible contagion effects on suicidal behavior via the Internet and social media. Of 719 individuals aged 14 to 24 years, 79% reported being exposed to suicide-related content through family, friends, and traditional news media such as newspapers, and 59% found such content through Internet sources. This information may pose a hazard for vulnerable groups by influencing decisions to die by suicide. In particular, interactions via chat rooms or discussion forums may foster peer pressure to die by suicide, encourage users to idolize those who have died by suicide, or facilitate suicide pacts. Recently there has been a trend of creating memorial social media pages in honor of a deceased person. In New Zealand, a memorial page created after a person died by suicide was followed by the suicides of eight other people, further illustrating the power of the media contagion effect. One South Korean study demonstrated that social media data can be used to predict national suicide numbers.
Suicide notes
It has generally been found that those who post suicide notes online tend not to receive help.
Several notable cases support this argument:
Kevin Whitrick and Abraham K. Biggs both webcast their suicides. "I am going to leave this for whoever stumbles across my bookmarks later on."
Paul Zolezzi indicated via a Facebook update his intent to commit suicide.
In 2010, John Patrick Bedell left a Wikipedia user page and YouTube videos interpreted by some as a suicide note; the former was deleted by Wikipedia administrators.
Joe Stack also posted a suicide note online.
Chris McKinstry, an AI researcher, died by suicide after posting a note to both his blog and the Joel on Software off-topic forum explaining the reasons for his demise.
A girl who attended a Louisville-area high school posted a video suicide note and then killed herself in 2014. The girl did not receive any help prior to her suicide, leading H. Eric Sparks, director of the American School Counselor Association, to say that troubled students should be directed to help hotlines or to trusted authorities to seek intervention as quickly as possible.
Suicide pacts
A suicide pact is an agreement between two or more people to die by suicide at a particular time and often by the same lethal means. Although suicide pacts are rare, traditional pacts have typically developed among individuals who know each other, such as a couple or friends. A suicide pact formed or developed in some way through the use of the Internet is known as a cybersuicide pact. A primary difference between cybersuicide pacts and traditional suicide pacts is that cybersuicide pacts are usually formed among strangers, who mostly use online chat rooms and virtual bulletin boards and forums as an unmediated avenue to share their feelings with other like-minded individuals, which can be easier than talking about such thoughts and feelings in person.
The first documented use of the Internet to form a suicide pact was reported in Japan in 2000. It has since become a more common form of suicide in Japan, where the number of such deaths increased from 34 in 2003 to 91 in 2005. South Korea now has one of the world's highest suicide rates (24.7 per 100,000 in 2005), and evidence exists that cybersuicide pacts may account for almost one-third of suicides in that country. Suicide pacts also occur in the United States. In April 2018, Macon Middle School, a middle school in North Carolina, became aware of a group on social media called "Edgy" or "Edgy Fan Page 101" whose members had formed a suicide pact and shared suicidal ideations. The school contacted parents and urged them to look into their children's social media pages and talk with them about the dangers of such a group.
Gerald Krein and William Francis Melchert-Dinkel were accused of arranging internet suicide pacts.
Interventions
Suicide interventions on social media have saved many lives on Twitter, Instagram, and Facebook. Each of these companies has a slightly different way to report posts that may seem suicidal.
Facebook
Facebook, assisted by, among a handful of other experts, Dr. Dan Reidenburg of Suicide Awareness Voices of Education, uses an algorithm that tracks buzzwords and phrases commonly associated with suicide; when these are detected, an alert is sent to Facebook's Safety Center. According to company reports, Facebook has intervened in over 3,500 cases.
"The technology itself isn't going to send somebody to their house. A person at Facebook would have to do that..."
–Dr. Dan Reidenburg
Twitter
Demi Moore and her followers intervened to stop a suicide that had been announced on Twitter.
Twitter followers of Chicago rapper CupcakKe alerted authorities after the rapper posted ominous phrases onto Twitter. She later thanked all of her followers after receiving help.
Forums
A man from southern Germany was prevented from killing himself after Spanish internet users saw him announce his decision online.
Discussion and support groups
The Defense Centers of Excellence have expressed interest in using social media for suicide prevention. Facebook groups have sometimes been set up for suicide prevention purposes, including one that attracted 47,000 members. Although many teens and preteens encounter suicide-related posts from peers on different social media apps, they also encounter suicide prevention hotlines and website links.
SAMHSA's Suicide Prevention Lifeline operates on Twitter, Facebook, and YouTube. The American Foundation for Suicide Prevention seeks to understand and prevent suicide through research, education, and advocacy.
See also
Blue Whale (game)
Death and the Internet
Instagram's impact on people
Momo Challenge
Suicide and the Internet
Virtual crime
alt.suicide.holiday
Sanctioned Suicide
References
Further reading
Cyberbullying
Social media
Recovery procedure

In telecommunications, a recovery procedure is a process that attempts to bring a system back to a normal operating state. Examples:
The actions necessary to restore an automated information system's data files and computational capability after a system failure.
In data communications, a process whereby a data station attempts to resolve conflicting or erroneous conditions arising during the data transfer.
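The second example, resuming a data transfer at the point of error rather than restarting it, can be sketched in code. This is an illustrative toy (all names here are invented, not from any standard), assuming the sender reports how many bytes each attempt actually moved:

```python
class TransferError(Exception):
    """A transient fault: the transfer stopped partway through."""

def transfer_with_recovery(data, send, max_attempts=3):
    # `send(chunk)` returns how many bytes it actually transferred and
    # may raise TransferError.  The recovery procedure keeps track of
    # the position reached, so each retry resumes rather than restarts.
    sent = 0
    for _ in range(max_attempts):
        try:
            while sent < len(data):
                sent += send(data[sent:])   # resume from the last good byte
            return sent                     # back to a normal operating state
        except TransferError:
            continue                        # erroneous condition: try again
    raise RuntimeError("could not recover after %d attempts" % max_attempts)
```

Because `sent` survives each fault, every retry resolves the erroneous condition by continuing the transfer rather than starting over.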
See also
Error detection and correction
Fault-tolerant design
Fault-tolerant system
References
Telecommunications techniques
Fault tolerance
Gautieria morchelliformis

Gautieria morchelliformis is a species of hypogeal fungus in the family Gomphaceae. It was first described scientifically by the Italian mycologist Carlo Vittadini in 1831. Three varieties have been described: var. globispora and var. stenospora by Albert Pilát in 1958, and var. microspora by Evžen Wichanský in 1962. None are considered to have independent taxonomic significance.
References
Fungi described in 1831
Fungi of Europe
Fungi of North America
Gomphaceae
Fungus species
PCLSRing

PCLSRing (also known as Program Counter Lusering) is the term used in the ITS operating system for a consistency principle in the way one process accesses the state of another process.
Problem scenario
This scenario presents particular complications:
Process A makes a time-consuming system call. By "time-consuming", it is meant that the system needs to put Process A into a wait queue and can schedule another process for execution if one is ready-to-run. A common example is an I/O operation.
While Process A is in this wait state, Process B tries to interact with or access Process A, for example, send it a signal.
What should be the visible state of the context of Process A at the time of the access by Process B? In fact, Process A is in the middle of a system call, but ITS enforces the appearance that system calls are not visible to other processes (or even to the same process).
ITS solution: transparent restart
If the system call cannot complete before the access, then it must be restartable. This means that the context is backed up to the point of entry to the system call, while the call arguments are updated to reflect whatever portion of the operation has already been completed. For an I/O operation, this means that the buffer start address must be advanced over the data already transferred, while the length of data to be transferred must be decremented accordingly. After the Process B interaction is complete, Process A can resume execution, and the system call resumes from where it left off.
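As a toy model (a Python sketch for illustration only; ITS itself was PDP-10 assembly, and every name here is invented), the key invariant is that the argument block always describes exactly the remaining work, so an interrupted call can simply be re-entered:

```python
class Interrupted(Exception):
    """Process B accessed Process A mid-call; the call must unwind."""

def pclsr_read(args, memory, device):
    # One (re)entry into the restartable "system call".  The argument
    # block is updated after every byte transferred, so if Interrupted
    # unwinds the call, `args` already describes only the remaining work.
    while args["length"] > 0:
        memory[args["buf_addr"]] = device.next_byte()  # may raise Interrupted
        args["buf_addr"] += 1   # advance buffer start past transferred data
        args["length"] -= 1     # decrement the remaining byte count

def run_until_done(args, memory, device):
    # The scheduler's view: after each Process-B interaction, transparently
    # re-enter the call with the updated arguments.
    while args["length"] > 0:
        try:
            pclsr_read(args, memory, device)
        except Interrupted:
            pass  # B saw A "about to enter" the call, never in mid-call
```

Any observer inspecting `args` between entries sees a process poised at the entry of a (smaller) system call, never one caught halfway through.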
This technique mirrors in software what the PDP-10 does in hardware. Some PDP-10 instructions like BLT may not run to completion, either due to an interrupt or a page fault. In the course of processing the instruction, the PDP-10 would modify the registers containing arguments to the instruction, so that later the instruction could be run again with new arguments that would complete any remaining work to be done. PCLSRing applies the same technique to system calls.
This requires some additional complexity. For example, memory pages in User space may not be paged out during a system call in ITS. If this were allowed, then when the system call is PCLSRed and tries to update the arguments so the call can be aborted, the page containing the arguments might not be present, and the system call would have to block, preventing the PCLSR from succeeding. To prevent this, ITS doesn't allow memory pages in User space to be paged out after they're first accessed during a system call, and system calls typically start by touching pages in User space they know they will need to access.
Unix solution: restart on request
Contrast this with the approach taken in the UNIX operating system, where there is restartability, but it is not transparent. Instead, an I/O operation returns the number of bytes actually transferred (or the EINTR error if the operation was interrupted before any bytes were actually transferred), and it is up to the application to check this and manage its own resumption of the operation until all the bytes have been transferred. In the philosophy of UNIX, this was given by Richard P. Gabriel as an example of the "worse is better" principle.
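The application-side loop that UNIX requires looks roughly like this (a Python sketch rather than C; note that CPython itself has retried EINTR internally since PEP 475, so the `InterruptedError` branch is shown mainly for illustration):

```python
import os

def write_all(fd, data):
    # UNIX-style recovery: the *application*, not the kernel, loops.
    # os.write may transfer fewer bytes than requested; EINTR means
    # nothing was transferred on that attempt.
    view = memoryview(data)
    while len(view) > 0:
        try:
            n = os.write(fd, view)
        except InterruptedError:   # EINTR: retry the remaining request
            continue
        view = view[n:]            # resume past bytes already transferred
```

The bookkeeping that ITS hides inside the kernel (advancing the buffer, shrinking the count) appears here explicitly in user code.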
Asynchronous approaches
A different approach is possible. It is apparent in the above that the system call has to be synchronous—that is, the calling process has to wait for the operation to complete. This is not inevitable: in the OpenVMS operating system, all I/O and other time-consuming operations are inherently asynchronous, which means the semantics of the system call is "start the operation, and perform one or more of these notifications when it completes" after which it returns immediately to the caller. There is a standard set of available notifications (such as set an event flag, or deliver an asynchronous system trap), as well as a set of system calls for explicitly suspending the process while waiting for these, which are a) fully restartable in the ITS sense, and b) much smaller in number than the set of actual time-consuming system calls.
OpenVMS provides alternative "start operation and wait for completion" synchronous versions of all time-consuming system calls. These are implemented as "perform the actual asynchronous operation" followed by "wait until the operation sets the event flag". Any access to the process context during this time will see it about to (re)enter the wait-for-event-flag call.
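A minimal sketch of this OpenVMS-style decomposition, using a Python `threading.Event` as a stand-in for an event flag (all names invented):

```python
import threading

def start_io(data, done_flag, result):
    # "Start the operation": run it elsewhere, then perform the
    # completion notification, here by setting an event flag.
    def worker():
        result.append(data.upper())   # stand-in for the real I/O transfer
        done_flag.set()               # completion notification
    threading.Thread(target=worker).start()

def synchronous_io(data):
    # The synchronous variant is just: start the asynchronous operation,
    # then wait for the event flag.  Only the small family of wait
    # primitives needs to be restartable in the ITS sense.
    done_flag, result = threading.Event(), []
    start_io(data, done_flag, result)
    done_flag.wait()
    return result[0]
```

Any access to the process context during the wait sees it parked in the small, restartable wait call rather than deep inside a time-consuming operation.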
Notes
References
Concurrent computing
Psi Crateris

Psi Crateris, Latinized from ψ Crateris, is the Bayer designation for a visual binary star system in the southern constellation of Crater. It is faintly visible to the naked eye with an apparent visual magnitude of 6.13. According to the Bortle scale, it requires dark suburban or rural skies to view. Based upon an annual parallax shift of 6.5 mas, the system is located approximately 500 light years away from the Sun.
The components in this star system have an orbital period of about 366 years with an eccentricity of 0.43. The angular size of the orbit's semimajor axis is about half an arc second. The primary member, component A, is an ordinary A-type main sequence star with a visual magnitude of 6.24 and a stellar classification of A0 V. It was a candidate λ Boötis star, but this was later rejected when the spectrum was found to be normal. Any peculiarities may have instead resulted from the overlapping spectra of the two stars. The star is radiating about 75 times the solar luminosity from its outer atmosphere at an effective temperature of 9,199 K. The fainter secondary, component B, has a visual magnitude of 8.34 and a class of A3.
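As a rough consistency check of these figures (my own arithmetic, not in the article, and ignoring projection effects of the orbit's inclination on the sky), the quoted parallax, angular semimajor axis, and period combine via Kepler's third law:

```python
# Quantities quoted in the article
parallax_arcsec = 0.0065       # 6.5 mas annual parallax
angular_a_arcsec = 0.5         # angular semimajor axis, about half an arcsecond
period_years = 366.0           # orbital period

distance_pc = 1.0 / parallax_arcsec          # ~154 parsecs
distance_ly = distance_pc * 3.2616           # ~500 light years, as stated
a_au = angular_a_arcsec / parallax_arcsec    # projected semimajor axis in AU
total_mass = a_au**3 / period_years**2       # Kepler's third law, solar masses
```

This gives a distance of about 500 light years, matching the article, and a total system mass of roughly 3.4 solar masses, a plausible order of magnitude for a pair of A-type stars.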
References
A-type main-sequence stars
Spectroscopic binaries
Crater (constellation)
Crateris, Psi
Durchmusterung objects
097411
054742
4347
Quotient stack

In algebraic geometry, a quotient stack is a stack that parametrizes equivariant objects. Geometrically, it generalizes a quotient of a scheme or a variety by a group: a quotient variety, say, would be a coarse approximation of a quotient stack.
The notion is of fundamental importance in the study of stacks: a stack that arises in nature is often either a quotient stack itself or admits a stratification by quotient stacks (e.g., a Deligne–Mumford stack.) A quotient stack is also used to construct other stacks like classifying stacks.
Definition
A quotient stack is defined as follows. Let G be an affine smooth group scheme over a scheme S and X an S-scheme on which G acts. Let the quotient stack [X/G] be the category over the category of S-schemes, where
an object over T is a principal G-bundle P → T together with a G-equivariant map P → X;
a morphism from (P → T, P → X) to (P′ → T′, P′ → X) is a bundle map P → P′ (i.e., one forming a commutative diagram) that is compatible with the two equivariant maps to X.
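In standard notation (the symbols T, P, π, α are supplied here for illustration, not quoted from this article), the groupoid of T-points of the quotient stack can be written:

```latex
[X/G](T) \;=\; \Big\{\, (\pi : P \to T,\ \alpha : P \to X) \ \Big|\
\pi \text{ a principal } G\text{-bundle},\ \alpha \text{ a } G\text{-equivariant map} \,\Big\}
```

Morphisms in this groupoid are G-bundle maps commuting with the equivariant maps to X.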
Suppose the quotient X/G exists as an algebraic space (for example, by the Keel–Mori theorem). The canonical map
[X/G] → X/G,
that sends a bundle P over T to a corresponding T-point, need not be an isomorphism of stacks; that is, the space "X/G" is usually coarser. The canonical map is an isomorphism if and only if the stabilizers are trivial (in which case X → X/G is a principal G-bundle).
In general, [X/G] is an Artin stack (also called an algebraic stack). If the stabilizers of the geometric points are finite and reduced, then it is a Deligne–Mumford stack.
Totaro has shown: let X be a normal Noetherian algebraic stack whose stabilizer groups at closed points are affine. Then X is a quotient stack if and only if it has the resolution property; i.e., every coherent sheaf is a quotient of a vector bundle. Earlier, Robert Wayne Thomason proved that a quotient stack has the resolution property.
Examples
An effective quotient orbifold, e.g., [M/G] where the G-action has only finite stabilizers on the smooth space M, is an example of a quotient stack.
If X = S with trivial action of G (often S is a point), then [S/G] is called the classifying stack of G (in analogy with the classifying space of G) and is usually denoted by BG. Borel's theorem describes the cohomology ring of the classifying stack.
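For example (a standard computation, not spelled out in this article), Borel's theorem identifies the cohomology of the classifying stack with the equivariant cohomology of a point, which for the multiplicative group gives a polynomial ring:

```latex
H^*(BG) \cong H^*_G(\mathrm{pt}), \qquad
H^*(B\mathbb{G}_m;\, \mathbb{Z}) \cong \mathbb{Z}[c_1], \quad \deg c_1 = 2 .
```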
Moduli of line bundles
One of the basic examples of quotient stacks comes from the moduli stack BGm of line bundles, realized as the quotient [pt/Gm] for the trivial Gm-action on a point. For any scheme (or S-scheme) T, the T-points of the moduli stack are the groupoid of principal Gm-bundles P → T.
Moduli of line bundles with n-sections
There is another closely related moduli stack, given by the quotient [A^n/Gm], which is the moduli stack of line bundles with n sections. This follows directly from the definition of quotient stacks evaluated on points. For a scheme T, the T-points are the groupoid whose objects are pairs consisting of a principal Gm-bundle P → T together with a Gm-equivariant map P → A^n. The equivariant map corresponds to the n sections of the associated line bundle over T. This can be found by noting that giving a Gm-equivariant map P → A^n and restricting it to a fiber of P → T gives the same data as a section of the bundle. This can be checked by looking at a chart and sending a point of the fiber to its image in A^n, noting that the set of Gm-equivariant maps from a fiber to A^n is isomorphic to A^n. This construction then globalizes by gluing affine charts together, giving a global section of the bundle. Since a Gm-equivariant map to A^n is equivalently an n-tuple of Gm-equivariant maps to A^1, the result holds.
Moduli of formal group laws
Example: Let L be the Lazard ring; i.e., L = Z[t1, t2, ...]. Then the quotient stack of Spec L by the group scheme G of coordinate changes,
G(R) = { g(t) = b0 t + b1 t^2 + ... in R[[t]] : b0 a unit of R },
acting by (g · f)(x, y) = g(f(g^(-1)(x), g^(-1)(y))),
is called the moduli stack of formal group laws, denoted by M_fg.
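For context on L itself, Lazard's theorem (a standard result, not recovered from this article) identifies the coefficients of the universal formal group law:

```latex
% Universal formal group law over the Lazard ring L:
F(x, y) \;=\; x + y + \sum_{i, j \ge 1} c_{ij}\, x^i y^j \;\in\; L[[x, y]],

% Lazard's theorem: L is polynomial on countably many generators:
L \;\cong\; \mathbb{Z}[t_1, t_2, t_3, \dots\,].
```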
See also
Homotopy quotient
Moduli stack of principal bundles (which, roughly, is an infinite product of classifying stacks.)
Group-scheme action
Moduli of algebraic curves
References
Algebraic geometry | Quotient stack | Mathematics | 895 |
311,433 | https://en.wikipedia.org/wiki/Obesity%20hypoventilation%20syndrome | Obesity hypoventilation syndrome (OHS) is a condition in which severely overweight people fail to breathe rapidly or deeply enough, resulting in low oxygen levels and high blood carbon dioxide (CO2) levels. The syndrome is often associated with obstructive sleep apnea (OSA), which causes periods of absent or reduced breathing in sleep, resulting in many partial awakenings during the night and sleepiness during the day. The disease puts strain on the heart, which may lead to heart failure and leg swelling.
Obesity hypoventilation syndrome is defined as the combination of obesity and an increased blood carbon dioxide level during the day that is not attributable to another cause of excessively slow or shallow breathing.
The most effective treatment is weight loss, but this may require bariatric surgery to achieve. Weight loss of 25 to 30% is usually required to resolve the disorder. The other first-line treatment is non-invasive positive airway pressure (PAP), usually in the form of continuous positive airway pressure (CPAP) at night. The condition was initially known, in the 1950s, as "Pickwickian syndrome" in reference to a Dickensian character.
Signs and symptoms
Most people with obesity hypoventilation syndrome have concurrent obstructive sleep apnea, a condition characterized by snoring, brief episodes of apnea (cessation of breathing) during the night, interrupted sleep and excessive daytime sleepiness. In OHS, sleepiness may be worsened by elevated blood levels of carbon dioxide, which causes drowsiness ("CO2 narcosis"). Other symptoms present in both conditions are depression, and hypertension (high blood pressure) which is difficult to control with medication. The high carbon dioxide can also cause headaches, which tend to be worsening in the morning.
The low oxygen level leads to physiologic constriction of the pulmonary arteries to correct ventilation-perfusion mismatching, which puts excessive strain on the right side of the heart. When this leads to right sided heart failure, it is known as cor pulmonale. Symptoms of this disorder occur because the heart has difficulty pumping blood from the body through the lungs. Fluid may, therefore, accumulate in the skin of the legs in the form of edema (swelling), and in the abdominal cavity in the form of ascites; decreased exercise tolerance and exertional chest pain may occur. On physical examination, characteristic findings are the presence of a raised jugular venous pressure, a palpable parasternal heave, a heart murmur due to blood leaking through the tricuspid valve, hepatomegaly (an enlarged liver), ascites and leg edema. Cor pulmonale occurs in about a third of all people with OHS.
Mechanism
It is not fully understood why some obese people develop obesity hypoventilation syndrome while others do not. It is likely the result of an interplay of various processes. Firstly, the work of breathing is increased: adipose tissue restricts the normal movement of the chest muscles and makes the chest wall less compliant, the diaphragm moves less effectively, respiratory muscles are fatigued more easily, and airflow in and out of the lung is impaired by excessive tissue in the head and neck area. Hence, people with obesity need to expend more energy to breathe effectively. These factors together lead to sleep-disordered breathing and inadequate removal of carbon dioxide from the circulation, and hence hypercapnia; since carbon dioxide in aqueous solution combines with water to form carbonic acid (CO2 + H2O ⇌ H2CO3), this causes acidosis (increased acidity of the blood). Under normal circumstances, central chemoreceptors in the brain stem detect the acidity and respond by increasing the respiratory rate; in OHS, this "ventilatory response" is blunted.
The blunted ventilatory response is attributed to several factors. Obese people tend to have raised levels of the hormone leptin, which is secreted by adipose tissue and under normal circumstances increases ventilation. In OHS, this effect is reduced. Furthermore, episodes of nighttime acidosis (e.g. due to sleep apnea) lead to compensation by the kidneys with retention of the alkali bicarbonate. This normalizes the acidity of the blood. However, bicarbonate stays around in the bloodstream for longer, and further episodes of hypercapnia lead to relatively mild acidosis and reduced ventilatory response in a vicious circle.
Low oxygen levels lead to hypoxic pulmonary vasoconstriction, the tightening of small blood vessels in the lung to create an optimal distribution of blood through the lung. Persistently low oxygen levels causing chronic vasoconstriction leads to increased pressure on the pulmonary artery (pulmonary hypertension), which in turn puts strain on the right ventricle, the part of the heart that pumps blood to the lungs. The right ventricle undergoes remodeling, becomes distended and is less able to remove blood from the veins. When this is the case, raised hydrostatic pressure leads to accumulation of fluid in the skin (edema), and in more severe cases the liver and the abdominal cavity.
The chronically low oxygen levels in the blood also lead to increased release of erythropoietin and the activation of erythropoiesis, the production of red blood cells. This results in polycythemia, abnormally increased numbers of circulating red blood cells and an elevated hematocrit.
Diagnosis
Formal criteria for diagnosis of OHS are:
Body mass index over 30 kg/m2 (a measure of obesity, obtained by taking one's weight in kilograms and dividing it by one's height in meters squared)
Arterial carbon dioxide level over 45 mmHg or 6.0 kPa as determined by arterial blood gas measurement
No alternative explanation for hypoventilation, such as use of narcotics, severe obstructive or interstitial lung disease, severe chest wall disorders such as kyphoscoliosis, severe hypothyroidism (underactive thyroid), neuromuscular disease or congenital central hypoventilation syndrome
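The three formal criteria above lend themselves to a simple illustrative check. The following sketch is this editor's own (the function and variable names are not from any clinical library or guideline); it simply encodes the thresholds stated in the text:

```python
# Illustrative check of the formal OHS diagnostic criteria described above.
# Function and variable names are this sketch's own, not from a clinical library.

MMHG_PER_KPA = 7.50062  # 1 kPa ≈ 7.5 mmHg, so 6.0 kPa ≈ 45 mmHg

def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def meets_ohs_criteria(weight_kg, height_m, paco2_mmhg, other_cause_of_hypoventilation):
    """Return True when all three formal criteria for OHS are satisfied."""
    return (
        bmi(weight_kg, height_m) > 30           # obesity criterion
        and paco2_mmhg > 45                     # daytime hypercapnia (arterial blood gas)
        and not other_cause_of_hypoventilation  # no alternative explanation
    )

# Example: BMI = 120 / 1.75**2 ≈ 39.2, PaCO2 = 52 mmHg, no alternative cause
print(meets_ohs_criteria(120, 1.75, 52, False))  # True
```

Note that all three conditions must hold at once; a raised BMI with a normal carbon dioxide level, or hypercapnia explained by another disorder, does not qualify.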
If OHS is suspected, various tests are required for its confirmation. The most important initial test is the demonstration of elevated carbon dioxide in the blood. This requires an arterial blood gas determination, which involves taking a blood sample from an artery, usually the radial artery. Given that it would be complicated to perform this test on every patient with sleep-related breathing problems, some suggest that measuring bicarbonate levels in normal (venous) blood would be a reasonable screening test. If this is elevated (27 mmol/L or higher), blood gases should be measured.
To distinguish various subtypes, polysomnography is required. This usually requires brief admission to a hospital with a specialized sleep medicine department where a number of different measurements are conducted while the subject is asleep; this includes electroencephalography (electronic registration of electrical activity in the brain), electrocardiography (same for electrical activity in the heart), pulse oximetry (measurement of oxygen levels) and often other modalities. Blood tests are also recommended for the identification of hypothyroidism and polycythemia.
To distinguish between OHS and various other lung diseases that can cause similar symptoms, medical imaging of the lungs (such as a chest X-ray or CT/CAT scan), spirometry, electrocardiography and echocardiography may be performed. Echo- and electrocardiography may also show strain on the right side of the heart caused by OHS, and spirometry may show a restrictive pattern related to obesity.
Classification
Obesity hypoventilation syndrome is a form of sleep disordered breathing. Two subtypes are recognized, depending on the nature of disordered breathing detected on further investigations. The first is OHS in the context of obstructive sleep apnea; this is confirmed by the occurrence of 5 or more episodes of apnea, hypopnea or respiratory-related arousals per hour (high apnea-hypopnea index) during sleep. The second is OHS primarily due to "sleep hypoventilation syndrome"; this requires a rise of CO2 levels by 10 mmHg (1.3 kPa) after sleep compared to awake measurements and overnight drops in oxygen levels without simultaneous apnea or hypopnea. Overall, 90% of all people with OHS fall into the first category, and 10% in the second.
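The subtype distinction above follows two numeric thresholds, which can be sketched as a small function (names are this sketch's own; real classification also requires the polysomnographic context described in the text):

```python
# Illustrative subtype classification using the thresholds stated above.

def ohs_subtype(apnea_hypopnea_index, overnight_co2_rise_mmhg):
    """Classify an OHS presentation into the two recognized subtypes."""
    if apnea_hypopnea_index >= 5:          # 5+ events per hour of sleep
        return "OHS with obstructive sleep apnea"
    if overnight_co2_rise_mmhg >= 10:      # CO2 rise of 10 mmHg (1.3 kPa) after sleep
        return "sleep hypoventilation syndrome"
    return "criteria for neither subtype met"

print(ohs_subtype(22, 4))   # OHS with obstructive sleep apnea
print(ohs_subtype(2, 12))   # sleep hypoventilation syndrome
```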
Treatment
In people with stable OHS, the most important treatment is weight loss—by diet, through exercise, with medication, or sometimes weight loss surgery (bariatric surgery). This has been shown to improve the symptoms of OHS and resolution of the high carbon dioxide levels. Weight loss may take a long time and is not always successful. If the symptoms are significant, nighttime positive airway pressure (PAP) treatment is tried; this involves the use of a machine to assist with breathing. PAP exists in various forms, and the ideal strategy is uncertain. Some medications have been tried to stimulate breathing or correct underlying abnormalities; their benefit is again uncertain.
While many people with obesity hypoventilation syndrome are cared for on an outpatient basis, some deteriorate suddenly and when admitted to the hospital may show severe abnormalities such as markedly deranged blood acidity (pH<7.25) or depressed level of consciousness due to very high carbon dioxide levels. On occasion, admission to an intensive care unit with intubation and mechanical ventilation is necessary. Otherwise, "bi-level" positive airway pressure (see the next section) is commonly used to stabilize the patient, followed by conventional treatment.
Positive airway pressure
Positive airway pressure, initially in the form of continuous positive airway pressure (CPAP), is a useful treatment for obesity hypoventilation syndrome, particularly when obstructive sleep apnea coexists. CPAP requires the use during sleep of a machine that delivers a continuous positive pressure to the airways, preventing the collapse of soft tissues in the throat during breathing; it is administered through a mask over the mouth and nose together or, if that is not tolerated, over the nose only (nasal CPAP). This relieves the features of obstructive sleep apnea and is often sufficient to remove the resultant accumulation of carbon dioxide. The pressure is increased until the obstructive symptoms (snoring and periods of apnea) have disappeared. CPAP alone is effective in more than 50% of people with OHS.
On some occasions, the oxygen levels are persistently too low (oxygen saturations below 90%). In that case, the hypoventilation itself may be improved by switching from CPAP treatment to an alternative device that delivers "bi-level" positive pressure: higher pressure during inspiration (breathing in) and lower pressure during expiration (breathing out). If this too is ineffective in increasing oxygen levels, the addition of oxygen therapy may be necessary. As a last resort, tracheostomy may be necessary; this involves making a surgical opening in the trachea to bypass obesity-related airway obstruction in the neck. This may be combined with mechanical ventilation with an assisted breathing device through the opening.
Other treatments
People who fail first-line treatments or have very severe, life-threatening disease may sometimes be treated with tracheotomy, which is a reversible procedure. Treatments without proven benefit, and with concern for harm, include oxygen alone and respiratory stimulant medications. Medroxyprogesterone acetate, a progestin, and acetazolamide are both associated with an increased risk of thrombosis and are not recommended.
Prognosis
Obesity hypoventilation syndrome is associated with a reduced quality of life, and people with the condition incur increased healthcare costs, largely due to hospital admissions including observation and treatment on intensive care units. OHS often occurs together with several other disabling medical conditions, such as asthma (in 18–24%) and type 2 diabetes (in 30–32%). Its main complication of heart failure affects 21–32% of patients.
Those with abnormalities severe enough to warrant treatment have an increased risk of death reported to be 23% over 18 months and 46% over 50 months. This risk is reduced to less than 10% in those receiving treatment with PAP. Treatment also reduces the need for hospital admissions and reduces healthcare costs.
Epidemiology
The exact prevalence of obesity hypoventilation syndrome is unknown, and it is thought that many people with symptoms of OHS have not been diagnosed. About a third of all people with morbid obesity (a body mass index exceeding 40 kg/m2) have elevated carbon dioxide levels in the blood.
When examining groups of people with obstructive sleep apnea, researchers have found that 10–20% of them meet the criteria for OHS as well. The risk of OHS is much higher in those with more severe obesity, i.e. a body mass index (BMI) of 40 kg/m2 or higher. It is twice as common in men compared to women. The average age at diagnosis is 52. American Black people are more likely to be obese than American whites, and are therefore more likely to develop OHS, but obese Asians are more likely than people of other ethnicities to have OHS at a lower BMI as a result of physical characteristics.
It is anticipated that rates of OHS will rise as the prevalence of obesity rises. This may also explain why OHS is more commonly reported in the United States, where obesity is more common than in other countries.
History
The discovery of obesity hypoventilation syndrome is generally attributed to the authors of a 1956 report of a professional poker player who, after gaining weight, became somnolent and fatigued and prone to fall asleep during the day, as well as developing edema of the legs suggesting heart failure. The authors named the condition "Pickwickian syndrome" after the character Joe from Dickens' The Posthumous Papers of the Pickwick Club (1837), who was markedly obese and tended to fall asleep uncontrollably during the day. This report, however, was preceded by other descriptions of hypoventilation in obesity. In the 1960s, various further discoveries were made that led to the distinction between obstructive sleep apnea and sleep hypoventilation.
The term "Pickwickian syndrome" has fallen out of favor because it does not distinguish obesity hypoventilation syndrome and sleep apnea as separate disorders (which may coexist).
References
Further reading
Medical conditions related to obesity
Sleep disorders
Respiratory diseases
Syndromes affecting the respiratory system | Obesity hypoventilation syndrome | Biology | 3,094 |
63,543,883 | https://en.wikipedia.org/wiki/Hypoid%20gearboxes | Hypoid gearboxes are gearboxes whose axes are non-intersecting and non-parallel. Hypoid gearboxes are a subcategory of spiral bevel gearboxes in which the axes of the gears are offset from one another. In comparison to the conical geometry of a spiral bevel gear, the basic geometry of a hypoid gear is hyperbolic. The spiral angle of the pinion is larger than the spiral angle of the gear in a hypoid gearbox, so the pinion diameter can be larger than that of a bevel gear pinion. This enlarges the contact surface and improves tooth strength, which allows higher gear ratios and higher torque transmission. Bearings can also be used on both sides of the gears for extra rigidity, as the offset between the axes leaves room for the additional support.
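The link between gear ratio and torque mentioned above is generic mechanics rather than anything specific to hypoid geometry; a minimal sketch (function names and numbers are this editor's own) is:

```python
# Illustrative relation between gear ratio and output torque for a reducing gear set.
# Generic mechanics: torque_out = torque_in * ratio * efficiency; numbers are made up.

def output_torque(input_torque_nm, gear_ratio, efficiency):
    """Output torque of a reducing gear set, ignoring dynamic effects."""
    return input_torque_nm * gear_ratio * efficiency

# A hypoid set can run a higher single-stage ratio than a comparable spiral bevel set,
# so for the same input torque the achievable output torque is higher.
print(output_torque(100.0, 4.5, 0.96))  # ≈ 432.0 N·m
```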
Applications
Hypoid gear sets have long been used in the differentials of rear-wheel drive cars and trucks, and in robotic arms. The offset between the centers of the two interlinking shafts permits the use of larger gears, which enhances the contact surface area and reduces wear on the gears, extending their life and power transmission capabilities. The reduced friction also cuts energy losses and improves the overall efficiency of power transmission, and it leads to a quieter-running gear set.
References
Mechanical power transmission | Hypoid gearboxes | Physics | 280 |
5,123,117 | https://en.wikipedia.org/wiki/Blowout%20%28well%20drilling%29 | A blowout is the uncontrolled release of crude oil and/or natural gas from an oil well or gas well after pressure control systems have failed. Modern wells have blowout preventers intended to prevent such an occurrence. An accidental spark during a blowout can lead to a catastrophic oil or gas fire.
Prior to the advent of pressure control equipment in the 1920s, the uncontrolled release of oil and gas from a well while drilling was common and was known as an oil gusher, gusher or wild well.
History
Gushers were an icon of oil exploration during the late 19th and early 20th centuries. During that era, the simple drilling techniques, such as cable-tool drilling, and the lack of blowout preventers meant that drillers could not control high-pressure reservoirs. When these high-pressure zones were breached, the oil or natural gas would travel up the well at a high rate, forcing out the drill string and creating a gusher. A well which began as a gusher was said to have "blown in": for instance, the Lakeview Gusher blew in in 1910. These uncapped wells could produce large amounts of oil, often shooting or higher into the air. A blowout primarily composed of natural gas was known as a gas gusher.
Despite being symbols of new-found wealth, gushers were dangerous and wasteful. They killed workmen involved in drilling, destroyed equipment, and coated the landscape with thousands of barrels of oil; additionally, the explosive concussion released when a well pierces an oil or gas reservoir has cost a number of oilmen their hearing entirely, and standing too near the drilling rig at the moment it drills into the reservoir is extremely hazardous. The impact on wildlife is very hard to quantify; only the most optimistic models estimate it as mild, and scientists broadly agree that the ecological impact is severe, profound, and lasting.
To complicate matters further, the free flowing oil was—and is—in danger of igniting. One dramatic account of a blowout and fire reads,
With a roar like a hundred express trains racing across the countryside, the well blew out, spewing oil in all directions. The derrick simply evaporated. Casings wilted like lettuce out of water, as heavy machinery writhed and twisted into grotesque shapes in the blazing inferno.
The development of rotary drilling techniques where the density of the drilling fluid is sufficient to overcome the downhole pressure of a newly penetrated zone meant that gushers became avoidable. However, if the fluid density was not adequate or fluids were lost to the formation, then there was still a significant risk of a well blowout.
In 1924 the first successful blowout preventer was brought to market. The BOP valve affixed to the wellhead could be closed in the event of drilling into a high pressure zone, and the well fluids contained. Well control techniques could be used to regain control of the well. As the technology developed, blowout preventers became standard equipment, and gushers became a thing of the past.
In the modern petroleum industry, uncontrollable wells became known as blowouts and are comparatively rare. There has been significant improvement in technology, well control techniques, and personnel training which has helped to prevent their occurring. From 1976 to 1981, only 21 blowouts occurred.
Notable gushers
A blowout in 1815 resulted from an attempt to drill for salt rather than for oil. Joseph Eichar and his team were digging west of the town of Wooster, Ohio, US along Killbuck Creek, when they struck oil. In a written retelling by Eichar's daughter, Eleanor, the strike produced "a spontaneous outburst, which shot up high as the tops of the highest trees!"
Oil drillers struck a number of gushers near Oil City, Pennsylvania, US in 1861. The most famous was the Little & Merrick well, which began gushing oil on 17 April 1861. The spectacle of the fountain of oil flowing out at about per day had drawn about 150 spectators by the time an hour later when the oil gusher burst into flames, raining fire down on the oil-soaked onlookers. Thirty people died. Other early gushers in northwest Pennsylvania were the Phillips #2 ( per day) in September 1861, and the Woodford well ( per day) in December 1861.
The Shaw Gusher in Oil Springs, Ontario, was Canada's first oil gusher. On January 16, 1862, it shot oil from over below ground to above the treetops at a rate of per day, triggering the oil boom in Lambton County.
Lucas Gusher at Spindletop in Beaumont, Texas, US in 1901 flowed at per day at its peak, but soon slowed and was capped within nine days. The well tripled U.S. oil production overnight and marked the start of the Texas oil industry.
Masjed Soleiman, Iran, in 1908 marked the first major oil strike recorded in the Middle East.
Dos Bocas in the State of Veracruz, Mexico, was a famous 1908 Mexican blowout that formed a large crater. It leaked oil from the main reservoir for many years, continuing even after 1938 (when Pemex nationalized the Mexican oil industry).
Lakeview Gusher on the Midway-Sunset Oil Field in Kern County, California, US of 1910 is believed to be the largest-ever U.S. gusher. At its peak, more than of oil per day flowed out, reaching as high as in the air. It remained uncapped for 18 months, spilling over of oil, less than half of which was recovered.
A short-lived gusher at Alamitos #1 in Signal Hill, California, US in 1921 marked the discovery of the Long Beach Oil Field, one of the most productive oil fields in the world.
The Barroso 2 well in Cabimas, Venezuela, in December 1922 flowed at around per day for nine days, plus a large amount of natural gas.
Baba Gurgur near Kirkuk, Iraq, an oilfield known since antiquity, erupted at a rate of a day in 1927.
The Yates #30-A in Pecos County, Texas, US gushing 80 feet through the fifteen-inch casing, produced a world record 204,682 barrels of oil a day from a depth of 1,070 feet on 23 September 1929.
The Wild Mary Sudik gusher in Oklahoma City, Oklahoma, US in 1930 flowed at a rate of per day.
The Daisy Bradford gusher in 1930 marked the discovery of the East Texas Oil Field, the largest oilfield in the contiguous United States.
The largest known 'wildcat' oil gusher blew near Qom, Iran, on 26 August 1956. The uncontrolled oil gushed to a height of , at a rate of per day. The gusher was closed after 90 days' work by Bagher Mostofi and Myron Kinley (USA).
On October 17, 1982, a sour gas well Amoco Dome Brazeau River, 13-12-48-12, being drilled 20 km west of Lodgepole, Alberta blew out. The burning well was finally capped 67 days later by the Texas well-control company Boots & Coots.
One of the most troublesome gushers happened on 23 June 1985, at well #37 at the Tengiz field in Atyrau, Kazakh SSR, Soviet Union, where the 4,209-metre deep well blew out and the 200-metre high gusher self-ignited two days later. Oil pressure up to 800 atm and high hydrogen sulfide content had led to the gusher being capped only on 27 July 1986. The total volume of erupted material measured at 4.3 million metric tons of oil and 1.7 billion m³ of natural gas, and the burning gusher resulted in 890 tons of various mercaptans and more than 900,000 tons of soot released into the atmosphere.
Deepwater Horizon explosion: The largest underwater blowout in U.S. history occurred on 20 April 2010, in the Gulf of Mexico at the Macondo Prospect oil field. The blowout caused the explosion of the Deepwater Horizon, a mobile offshore drilling platform owned by Transocean and under lease to BP at the time of the blowout. While the exact volume of oil spilled is unknown, the United States Geological Survey Flow Rate Technical Group has placed the estimate at between of crude oil per day.
Causes
Reservoir pressure
Petroleum or crude oil is a naturally occurring, flammable liquid consisting of a complex mixture of hydrocarbons of various molecular weights, and other organic compounds, found in geologic formations beneath the Earth's surface. Because most hydrocarbons are lighter than rock or water, they often migrate upward and occasionally laterally through adjacent rock layers until either reaching the surface or becoming trapped within porous rocks (known as reservoirs) by impermeable rocks above. When hydrocarbons are concentrated in a trap, an oil field forms, from which the liquid can be extracted by drilling and pumping. The downhole pressure in the rock structures changes depending upon the depth and the characteristics of the source rock. Natural gas (mostly methane) may be present also, usually above the oil within the reservoir, but sometimes dissolved in the oil at reservoir pressure and temperature. Dissolved gas typically comes out of solution as free gas as the pressure is reduced either under controlled production operations or in a kick, or in an uncontrolled blowout. The hydrocarbon in some reservoirs may be essentially all natural gas.
Formation kick
The downhole fluid pressures are controlled in modern wells through the balancing of the hydrostatic pressure provided by the mud column. Should the balance of the drilling mud pressure be incorrect (i.e., the mud pressure gradient is less than the formation pore pressure gradient), then formation fluids (oil, natural gas, and/or water) can begin to flow into the wellbore and up the annulus (the space between the outside of the drill string and the wall of the open hole or the inside of the casing), and/or inside the drill pipe. This is commonly called a kick. Ideally, mechanical barriers such as blowout preventers (BOPs) can be closed to isolate the well while the hydrostatic balance is regained through circulation of fluids in the well. But if the well is not shut in (common term for the closing of the blow-out preventer), a kick can quickly escalate into a blowout when the formation fluids reach the surface, especially when the influx contains gas that expands rapidly with the reduced pressure as it flows up the wellbore, further decreasing the effective weight of the fluid.
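The balance described above is usually computed with the standard oilfield relation P (psi) = 0.052 × mud weight (ppg) × true vertical depth (ft). The sketch below encodes that relation; the function names and the example numbers are this editor's own:

```python
# Illustrative hydrostatic-balance check behind the "kick" description above.
# Standard oilfield relation: P (psi) = 0.052 * mud weight (ppg) * TVD (ft).

def hydrostatic_pressure_psi(mud_weight_ppg, true_vertical_depth_ft):
    """Pressure exerted at depth by the mud column."""
    return 0.052 * mud_weight_ppg * true_vertical_depth_ft

def is_underbalanced(mud_weight_ppg, tvd_ft, pore_pressure_psi):
    """True when formation pore pressure exceeds the mud hydrostatic pressure,
    i.e., the condition under which formation fluids can enter the wellbore (a kick)."""
    return pore_pressure_psi > hydrostatic_pressure_psi(mud_weight_ppg, tvd_ft)

# 10 ppg mud at 10,000 ft exerts 0.052 * 10 * 10,000 ≈ 5200 psi
print(hydrostatic_pressure_psi(10.0, 10_000))  # ≈ 5200 psi
print(is_underbalanced(10.0, 10_000, 5500.0))  # True -> kick risk
```

In practice a small frictional circulating pressure adds to the static term while pumping, which is why a kick often first shows up when circulation stops.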
Early warning signs of an impending well kick while drilling are:
Sudden change in drilling rate;
Reduction in drillpipe weight;
Change in pump pressure;
Change in drilling fluid return rate.
Other warning signs during the drilling operation are:
Returning mud "cut" by (i.e., contaminated by) gas, oil or water;
Connection gases, high background gas units, and high bottoms-up gas units detected in the mudlogging unit.
The primary means of detecting a kick while drilling is a relative change in the circulation rate back up to the surface into the mud pits. The drilling crew or mud engineer keeps track of the level in the mud pits and closely monitors the rate of mud returns versus the rate that is being pumped down the drill pipe. Upon encountering a zone of higher pressure than is being exerted by the hydrostatic head of the drilling mud (including the small additional frictional head while circulating) at the bit, an increase in mud return rate would be noticed as the formation fluid influx blends in with the circulating drilling mud. Conversely, if the rate of returns is slower than expected, it means that a certain amount of the mud is being lost to a thief zone somewhere below the last casing shoe. This does not necessarily result in a kick (and may never become one); however, a drop in the mud level might allow influx of formation fluids from other zones if the hydrostatic head is reduced to less than that of a full column of mud.
Well control
The first response to detecting a kick would be to isolate the wellbore from the surface by activating the blow-out preventers and closing in the well. Then the drilling crew would attempt to circulate in a heavier kill fluid to increase the hydrostatic pressure (sometimes with the assistance of a well control company). In the process, the influx fluids will be slowly circulated out in a controlled manner, taking care not to allow any gas to accelerate up the wellbore too quickly by controlling casing pressure with chokes on a predetermined schedule.
This effect will be minor if the influx fluid is mainly salt water. And with an oil-based drilling fluid it can be masked in the early stages of controlling a kick because gas influx may dissolve into the oil under pressure at depth, only to come out of solution and expand rather rapidly as the influx nears the surface. Once all the contaminant has been circulated out, the shut-in casing pressure should have reached zero.
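The "heavier kill fluid" step above is commonly sized with the standard well-control relation kill mud weight = original mud weight + shut-in drillpipe pressure / (0.052 × TVD); the sketch below (names and numbers are this editor's own) applies it:

```python
# Illustrative kill-mud-weight calculation for circulating out a kick.
# Standard well-control relation:
#   kill mud weight (ppg) = original mud weight (ppg) + SIDPP (psi) / (0.052 * TVD (ft))

def kill_mud_weight_ppg(original_mud_weight_ppg, sidpp_psi, tvd_ft):
    """Mud weight needed to balance the formation pressure revealed by a kick,
    where SIDPP is the shut-in drillpipe pressure read after closing the BOP."""
    return original_mud_weight_ppg + sidpp_psi / (0.052 * tvd_ft)

# 10 ppg mud, 520 psi shut-in drillpipe pressure at 10,000 ft:
# 10 + 520 / (0.052 * 10,000) = 10 + 1 = 11 ppg
print(kill_mud_weight_ppg(10.0, 520.0, 10_000))  # 11.0
```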
Capping stacks are used for controlling blowouts. The cap is an open valve that is closed after being bolted on.
Types
Well blowouts can occur during the drilling phase, during well testing, during well completion, during production, or during workover activities.
Surface blowouts
Blowouts can eject the drill string out of the well, and the force of the escaping fluid can be strong enough to damage the drilling rig. In addition to oil, the output of a well blowout might include natural gas, water, drilling fluid, mud, sand, rocks, and other substances.
Blowouts will often be ignited from sparks from rocks being ejected, or simply from heat generated by friction. A well control company then will need to extinguish the well fire or cap the well, and replace the casing head and other surface equipment. If the flowing gas contains poisonous hydrogen sulfide, the oil operator might decide to ignite the stream to convert this to less hazardous substances.
Sometimes blowouts can be so forceful that they cannot be directly brought under control from the surface, particularly if there is so much energy in the flowing zone that it does not deplete significantly over time. In such cases, other wells (called relief wells) may be drilled to intersect the well or pocket, in order to allow kill-weight fluids to be introduced at depth. When first used in the 1930s, relief wells were drilled to inject water into the main well bore. Contrary to what might be inferred from the term, such wells generally are not used to help relieve pressure using multiple outlets from the blowout zone.
Subsea blowouts
The two main causes of a subsea blowout are equipment failures and imbalances with encountered subsurface reservoir pressure. Subsea wells have pressure control equipment located on the seabed or between the riser pipe and drilling platform. Blowout preventers (BOPs) are the primary safety devices designed to maintain control of geologically driven well pressures. They contain hydraulic-powered cut-off mechanisms to stop the flow of hydrocarbons in the event of a loss of well control.
Even with blowout prevention equipment and processes in place, operators must be prepared to respond to a blowout should one occur. Before drilling a well, a detailed well construction design plan, an Oil Spill Response Plan, and a Well Containment Plan must be submitted to, reviewed, and approved by BSEE; approval is contingent upon access to adequate well containment resources in accordance with NTL 2010-N10.
The Deepwater Horizon well blowout in the Gulf of Mexico in April 2010 occurred in deep water. Current blowout response capabilities in the U.S. Gulf of Mexico meet capture and process rates of 130,000 barrels of fluid per day and a gas handling capacity of 220 million cubic feet per day at depths through 10,000 feet.
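For reference, the containment capacities quoted above can be restated in SI units. The conversion factors below are standard values for the oil barrel and cubic foot, not figures taken from this article:

```python
# Restating the Gulf of Mexico containment capacities quoted above in SI
# units. Conversion factors are standard values (1 oil barrel = 0.158987 m^3;
# 1 cubic foot = 0.0283168 m^3), not figures from this text.

BARREL_M3 = 0.158987       # cubic metres per oil barrel
CUBIC_FOOT_M3 = 0.0283168  # cubic metres per cubic foot

fluid_barrels_per_day = 130_000        # barrels of fluid per day
gas_cubic_feet_per_day = 220_000_000   # cubic feet of gas per day

fluid_m3 = fluid_barrels_per_day * BARREL_M3
gas_m3 = gas_cubic_feet_per_day * CUBIC_FOOT_M3

print(f"Fluid capture: {fluid_m3:,.0f} m^3/day")  # about 20,668 m^3/day
print(f"Gas handling:  {gas_m3:,.0f} m^3/day")    # about 6,229,696 m^3/day
```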
Underground blowouts
An underground blowout is a special situation where fluids from high pressure zones flow uncontrolled to lower pressure zones within the wellbore. Usually this is from deeper higher pressure zones to shallower lower pressure formations. There may be no escaping fluid flow at the wellhead. However, the formation(s) receiving the influx can become overpressured, a possibility that future drilling plans in the vicinity must consider.
Blowout control companies
Myron M. Kinley was a pioneer in fighting oil well fires and blowouts. He developed many patents and designs for the tools and techniques of oil firefighting. His father, Karl T. Kinley, attempted to extinguish an oil well fire with the help of a massive explosion, a method still in common use for fighting oil fires. Myron and Karl Kinley first successfully used explosives to extinguish an oil well fire in 1913. Kinley would later form the M. M. Kinley Company in 1923. Asger "Boots" Hansen and Edward Owen "Coots" Matthews also began their careers under Kinley.
Paul N. "Red" Adair joined the M. M. Kinley Company in 1946, and worked 14 years with Myron Kinley before starting his own company, Red Adair Co., Inc., in 1959.
Red Adair Co. has helped in controlling offshore blowouts, including:
CATCO fire in the Gulf of Mexico in 1959.
"The Devil's Cigarette Lighter" in 1962 in Gassi Touil, Algeria, in the Sahara Desert.
The Ixtoc I oil spill in Mexico's Bay of Campeche in 1979.
The Piper Alpha disaster in the North Sea in 1988.
The Kuwaiti oil fires following the Gulf War in 1991.
The 1968 American film Hellfighters, which starred John Wayne, is about a group of oil well firefighters, based loosely on Adair's life; Adair, Hansen, and Matthews served as technical advisors on the film.
In 1994, Adair retired and sold his company to Global Industries. Management of Adair's company left and created International Well Control (IWC). In 1997, IWC bought Boots & Coots International Well Control, Inc., which had been founded by Hansen and Matthews in 1978.
Methods of quenching
Subsea well containment
After the Macondo-1 blowout on the Deepwater Horizon, the offshore industry collaborated with government regulators to develop a framework for responding to future subsea incidents. As a result, all energy companies operating in the deep-water U.S. Gulf of Mexico must submit an OPA 90-required Oil Spill Response Plan with the addition of a Regional Containment Demonstration Plan prior to any drilling activity. In the event of a subsea blowout, these plans are immediately activated, drawing on some of the equipment and processes effectively used to contain the Deepwater Horizon well, as well as others that have been developed in its aftermath.
In order to regain control of a subsea well, the Responsible Party would first secure the safety of all personnel on board the rig and then begin a detailed evaluation of the incident site. Remotely operated underwater vehicles (ROVs) would be dispatched to inspect the condition of the wellhead, blowout preventer (BOP) and other subsea well equipment. The debris removal process would begin immediately to provide clear access for a capping stack.
Once lowered and latched on the wellhead, a capping stack uses stored hydraulic pressure to close a hydraulic ram and stop the flow of hydrocarbons. If shutting in the well could introduce unstable geological conditions in the wellbore, a cap and flow procedure would be used to contain hydrocarbons and safely transport them to a surface vessel.
The Responsible Party works in collaboration with BSEE and the United States Coast Guard to oversee response efforts, including source control, recovering discharged oil and mitigating environmental impact.
Several not-for-profit organizations provide solutions for effectively containing a subsea blowout. HWCG LLC and Marine Well Containment Company operate within U.S. Gulf of Mexico waters, while cooperatives like Oil Spill Response Limited offer support for international operations.
Use of nuclear explosions
On September 30, 1966, the Soviet Union experienced blowouts on five natural gas wells in Urta-Bulak, an area about 80 kilometers from Bukhara, Uzbekistan. It was claimed in Komsomolskaya Pravda that, after the wells had burned uncontrollably for years, the Soviets were able to stop them entirely. They lowered a specially made 30-kiloton nuclear device into a borehole drilled away from the original (rapidly leaking) well. A nuclear explosive was deemed necessary because conventional explosives both lacked the necessary power and would have required a great deal more space underground. When the device was detonated, it crushed the original pipe that was carrying the gas from the deep reservoir to the surface and vitrified the surrounding rock. This caused the leak and fire at the surface to cease within approximately one minute of the explosion, and proved to be a permanent solution. An attempt on a similar well was not as successful. Other tests served experiments such as oil-extraction enhancement (Stavropol, 1969) and the creation of gas storage reservoirs (Orenburg, 1970).
Notable offshore well blowouts
Data from industry information.
See also
Drilling fluid
Drilling rig
List of oil spills
Oil platform
Oil well
Oil well control
Oil well fire
Petroleum geology
Underbalanced drilling
References
External links
San Joaquin Geological Society article on famous Californian gushers
Blowout
Petroleum geology
Oil wells
List of proposed future transport

Transport today is mostly powered by fossil fuel. The reason for this is the ease of use and the existence of mature technologies harnessing this fuel source. Fossil fuels represent a concentrated, relatively compact source of energy. The drawbacks are that they are heavily polluting and rely on limited natural resources. There are many proposals to harness renewable forms of energy, to use fossil fuel more efficiently, or to use human power, or some hybrid of these, to move people and things.
The list below contains some forms of transport not in general use, but considered as possibilities in the future.
Proposed future transport
Air-propelled train (abandoned in 19th century)
Bounce tube pneumatic travel (proposed by Robert A. Heinlein in 1956)
Vactrain also known as ET3
BiModal Glideway (Dual Mode Transportation System) travel (proposed by William D. Davis, Jr. in 1967)
TEV Project (proposed by Will Jones in Summer 2012)
Dual-mode vehicle
Hyperloop
Intelligent Transportation System
Jet pack
Backpack helicopter
Personal air vehicle (Flying car)
Personal rapid transit
Shweeb
Rolling road (proposed by Robert A. Heinlein in 1940)
Slidewalk (proposed by Robert A. Heinlein in 1948)
Teleportation
SkyTran
Spacecraft propulsion or Space transport
Launch loop
Orbital ring
Light sail (proposed by Jack Vance in 1962)
Space elevator (proposed by Russian scientist Konstantin Tsiolkovsky in 1895)
References
External links
Global Intelligent Transportation System (proposed by Vladimir Postnikov in 2010)
Future
Transport
Entropy unit

The entropy unit is a non-SI unit of thermodynamic entropy, usually denoted by "e.u." or "eU" and equal to one calorie per kelvin per mole, or 4.184 joules per kelvin per mole. Entropy units are primarily used in chemistry to describe entropy changes.
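The conversion stated above can be expressed as a small helper; the factor of 4.184 J per (thermochemical) calorie is the one given in the text, and the -30 e.u. example value is illustrative:

```python
# Conversion between entropy units (e.u. = cal/(K*mol)) and SI units
# (J/(K*mol)), using 1 cal = 4.184 J as stated in the text.

CAL_TO_J = 4.184  # joules per thermochemical calorie

def eu_to_si(entropy_eu):
    """Convert an entropy value from cal/(K*mol) to J/(K*mol)."""
    return entropy_eu * CAL_TO_J

def si_to_eu(entropy_si):
    """Convert an entropy value from J/(K*mol) to cal/(K*mol)."""
    return entropy_si / CAL_TO_J

# Illustrative example: a reaction entropy change of -30 e.u.
print(eu_to_si(-30))  # -125.52 J/(K*mol)
```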
Sources
Units of measurement
David A. Lucht

David Allen Lucht (born February 18, 1943) is an American engineer and fire safety expert. His career was devoted to public service in government, academia and the nonprofit sector. He served as the Ohio State Fire Marshal; the first presidential appointee to serve in the United States Fire Administration; and the inaugural head of the graduate degree fire protection engineering program at Worcester Polytechnic Institute, where he served for 25 years.
Early years
David Lucht was born and raised in the rural village of Middlefield, Ohio. In 1960, local Fire Chief Earl Warne invited him to join the first class of student volunteer firefighters in the Middlefield Volunteer Fire Department where he actively served until graduating from high school in 1961. Early in his service, he responded to a residential fire in which three young children died, leaving an indelible imprint on him.
He attended the Illinois Institute of Technology in Chicago under a four-year scholarship granted by the Western Actuarial Bureau. He received his Bachelor of Science degree in fire protection and safety engineering in 1965. After graduating from IIT, he moved to Columbus, Ohio to work for his scholarship sponsor for three years.
The Ohio State University
In 1968 Lucht moved on to the position of research associate at The Ohio State University Engineering Experiment Station, Building Research Laboratory, performing fire tests on building construction systems and materials.
An interest in home smoke alarms developed during his time at OSU, stimulated by the development of the first affordable devices by Duane Pearsall of Denver, CO. As chair of the Central Ohio Fire Prevention Association Household Fire Warning Study Committee, Lucht organized The Alton Road Tests aimed at demonstrating the effectiveness of home smoke detectors in actual dwellings.
After his career-long advocacy for home smoke alarms, he later described the early devices as "the most important technological [fire safety] breakthrough of the 20th century."
Ohio State Fire Marshal
In 1972, Lucht joined the Ohio Division of State Fire Marshal where he authored the first Ohio Fire Code. Ohio Governor John J. Gilligan appointed him as the Ohio State Fire Marshal in 1973.
During his tenure in the Fire Marshal Division, Ohio adopted the first statewide requirements for home smoke detectors, developed the digital Ohio Fire Incident Reporting System, and the Ohio Arson Laboratory, and completed plans for the Ohio Fire Academy.
United States Fire Administration
In 1975, President Gerald R. Ford appointed David Lucht Deputy Administrator of the United States Fire Administration (originally named the National Fire Prevention and Control Administration, NFPCA) after confirmation by the Senate. The new agency had been created by Congress in direct response to the landmark America Burning report of the National Commission on Fire Prevention and Control. He also served as acting head of the new agency until Howard Tipton was appointed to the Administrator post a few months later.
He played a key role in implementing the mandates of the Fire Prevention and Control Act. Areas of focus included the National Fire Academy, the National Fire Incident Reporting System, fire research and public education programs to support practitioners on the state and local level.
Firepro Incorporated
Lucht moved to Massachusetts in 1978 at which time he assumed a new position as executive vice president with the consulting firm Firepro Incorporated. While at Firepro he simultaneously worked on the startup of a new graduate degree program at nearby Worcester Polytechnic Institute.
Firepro was a full-service fire protection engineering consulting firm, offering a full range of services ranging from building fire safety and incident reconstruction to corporate fire safety management and fire department organization and deployment studies.
Worcester Polytechnic Institute
In 1978, David Lucht was recruited by Worcester Polytechnic Institute to start up the Center for Firesafety Studies. In the initial years as Professor and Director of the new Center, he worked in parallel as Executive Vice President of Firepro, Incorporated, a Boston area consulting engineering firm. He transitioned to full-time status at the university in 1985.
Starting “from scratch” at WPI, he assembled the resources, faculty, staff and laboratory facilities to support a first-of-its-kind program of graduate study in fire protection engineering. The Master of Science degree was first offered in 1979 and the PhD in 1991.
By the time he retired in 2005, WPI had graduated over 400 fire protection engineers from 26 countries. Graduates pursued careers in a host of employer settings ranging from consulting engineering firms, manufacturing industries and public utilities to product testing and research laboratories and codes and standards groups.
Nonprofit governance
Lucht served on several nonprofit boards of directors and boards of trustees.
1985 – 1991 Society of Fire Protection Engineers (SFPE)
1987 – 1989 New England Chapter SFPE
1989 – 1991 American Association of Engineering Societies
1990 – 1999 National Fire Protection Association (NFPA)
1992 – 2003 Underwriters Laboratories (UL)
1995 – 2005 Ecotarium
2004 – 2012 CTC, Inc., Public Safety Technology Center
2004 – 2009 Worcester Art Museum
2004 – 2009 Master Singers of Worcester
Honors, recognitions and awards
During his career, David Lucht was recognized for his leadership and contributions to fire safety.
1988 President's Award, SFPE Foundation
1988 Man of the Year Award, Automatic Fire Alarm Association
1989 Fellow, Society of Fire Protection Engineers
1993 Harold E. Nelson Service Award, Society of Fire Protection Engineers
2000 John J. Ahern President’s Award, Society of Fire Protection Engineers
2002 Arthur B. Guise Medal and Prize, SFPE Foundation
2004 Person of the Year Award, Automatic Fire Alarm Association
2004 William R. Grogan Award, WPI Alumni Association
2004 Person of the Year Award, New England Chapter, SFPE
2005 David A. Lucht Lamp of Knowledge Award (awarded annually by SFPE)
2006 David Rasbash Memorial Medal, Institution of Fire Engineers (London)
2013 John L. Bryan Mentor Award, Society of Fire Protection Engineers
2015 Cardinal High School Distinguished Alumni Hall of Fame, Middlefield, OH
2023 Doctor of Engineering honoris causa degree awarded by Worcester Polytechnic Institute
Selected publications
Selected publications are listed below. A full listing of 65 published and 49 unpublished works can be found in the WPI David Lucht Collection.
"Legal Requirements for Fire Alarms in Ohio Dwellings", Fire Journal, NFPA, March 1972.
"NFPCA Designed to Assist Local, State Governments", Fire Engineering, August 1976.
"The Federal Role Information, Training and Encouragement", Nation's Cities, March 1978.
"Fire Prevention Planning and Leadership for Small Communities", (book) published by NFPA, 1980.
"Fire Protection Engineering Graduate Program Takes Hold", Fire Journal, National Fire Protection Association, Vol. 78, No. 2, March 1984.
"Emerging Fire Technology: A Wolf in Sheep's Clothing?" Chief Fire Executive, Vol. 1, No. 1, April/May 1986.
"An Update on the WPI Graduate Program in Fire Protection Engineering", Fire Technology, Vol. 23, No. 3, August 1987.
"Coming of Age", Journal of Fire Protection Engineering, Society of Fire Protection Engineers, Vol. 1, No. 2, April, May, June 1989.
"Changing The Way We Do Business", Fire Technology, Vol. 28, No. 3, August 1992.
"Progress in Professional Practice”, Fire Protection Engineering, Society of Fire Protection Engineers, Issue No. 3, Summer 1999.
"Let’s be Intolerant of Fire Traps”, Op-Ed, Providence Journal, Providence, RI, August 5, 2003.
"Issues and Opportunities for the Future of Fire Engineering”, 2006 Rasbash Honors Lecture, IFE Fire Prevention Fire Engineers Journal, July 2006
"Millennials: The New Source of Young Talent”, Fire Protection Engineering Magazine, Society of Fire Protection Engineers, Fall 2007
"The WPI Program: Starting from Scratch”, Fire Protection Engineering Magazine, Society of Fire Protection Engineers, Issue No. 53, First Quarter, 2012.
"The Most Important Technological Breakthrough of the 20th Century”, Fire Protection Engineering Magazine, Society of Fire Protection Engineers, First Quarter, 2015.
"Symposium Review and Conclusion", Proceedings of the Society of Fire Protection Engineers Symposium on Systems Applications, University of Maryland, College Park, Maryland; March 1981.
"Report on the Conference on Firesafety Design in the 21st Century", WPI, Worcester, MA, June, 1999. Chairman and Editor.
"Proceedings of the Second Conference on Firesafety Design in the 21st Century", Worcester Polytechnic Institute, June 2000. Chairman and Editor.
"Making the Nation Safe from Fire Workshop: A Path Forward in Research”, National Research Council report, National Academy of Sciences, National Academies Press, Washington, D.C., 2003. (Editor and chair).
Artist
In the years following his retirement in 2005, David Lucht's focus shifted to the arts. He enrolled in a range of art courses at the Worcester Art Museum and understudied several local artists. He actively participated in the Princeton Arts Society Portrait Group for many years.
In 2016 Lucht was invited to paint the posthumous portrait of Philip J. DiNenno, who was President of Hughes Associates, and Fellow and Past President of SFPE when he died.
Parkinson's advocacy
David Lucht was diagnosed with Parkinson's disease in 2012 and, with time, became active in the Parkinson's health movement. He participated in Parkinson's clinical research studies at UMASS Amherst, Boston University, MIT, and Worcester State University.
As an outgrowth of the UMASS Parkinson's Voice Study, Lucht and several other clinical participants started the Parkinson's Chorus of Central Massachusetts.
External links
David Lucht Papers from the WPI Manuscript Collections.
WPI Fire Protection Engineering Program
The Society of Fire Protection Engineers
References
1943 births
American artists
Worcester Polytechnic Institute faculty
Fire protection
People from Warren, Ohio
Illinois Institute of Technology alumni
People from Shrewsbury, Massachusetts
Living people
Essential oil

An essential oil is a concentrated hydrophobic liquid containing volatile (easily evaporated at normal temperatures) chemical compounds from plants. Essential oils are also known as volatile oils, ethereal oils, aetheroleum, or simply as the oil of the plant from which they were extracted, such as oil of clove. An essential oil is essential in the sense that it contains the essence of the plant's fragrance—the characteristic fragrance of the plant from which it is derived. The term "essential" used here does not mean required or usable by the human body, as with the terms essential amino acid or essential fatty acid, which are so called because they are nutritionally required by a living organism.
Essential oils are generally extracted by distillation, often by using steam. Other processes include expression, solvent extraction, sfumatura, absolute oil extraction, resin tapping, wax embedding, and cold pressing. They are used in perfumes, cosmetics, soaps, air fresheners and other products, for flavoring food and drink, and for adding scents to incense and household cleaning products.
Essential oils are often used for aromatherapy, a form of alternative medicine in which healing effects are ascribed to aromatic compounds. Aromatherapy may be useful to induce relaxation, but there is not sufficient evidence that it can effectively treat any condition. Improper use of essential oils may cause harm including allergic reactions, inflammation and skin irritation. Children may be particularly susceptible to the toxic effects of improper use. Essential oils can be poisonous if ingested or absorbed through the skin.
Production
Distillation
Most common essential oils such as lavender, peppermint, tea tree oil, patchouli, and eucalyptus are distilled. Raw plant material, consisting of the flowers, leaves, wood, bark, roots, seeds, or peel, is put into an alembic (distillation apparatus) over water. As the water is heated, the steam passes through the plant material, vaporizing the volatile compounds. The vapors flow through a coil, where they condense back to liquid, which is then collected in the receiving vessel.
Most oils are distilled in a single process. One exception is ylang-ylang (Cananga odorata) which is purified through a fractional distillation.
The recondensed water is referred to as a hydrosol, hydrolat, herbal distillate, or plant water essence, which may be sold as another fragrant product. Hydrosols include rose water, lavender water, lemon balm, clary sage, and orange blossom water.
Expression
Most citrus peel oils are expressed mechanically or cold-pressed (similar to olive oil extraction). Due to the relatively large quantities of oil in citrus peel and low cost to grow and harvest the raw materials, citrus-fruit oils are cheaper than most other essential oils. Lemon or sweet orange oils are obtained as byproducts of the citrus industry.
Before the discovery of distillation, all essential oils were extracted by pressing.
Solvent extraction
Most flowers contain too little volatile oil to undergo expression, and their chemical components are too delicate and easily denatured by the high heat used in steam distillation. Instead, a solvent such as hexane or supercritical carbon dioxide is used to extract the oils. Extracts from hexane and other hydrophobic solvents are called concretes, which are a mixture of essential oil, waxes, resins, and other lipophilic (oil-soluble) plant material.
Although highly fragrant, concretes contain large quantities of non-fragrant waxes and resins. Often, another solvent, such as ethyl alcohol, is used to extract the fragrant oil from the concrete. The alcohol solution is chilled for more than 48 hours, which causes the waxes and lipids to precipitate out. The precipitates are then filtered out and the ethanol is removed from the remaining solution by evaporation, vacuum purge, or both, leaving behind the absolute.
Supercritical carbon dioxide is used as a solvent in supercritical fluid extraction. This method can avoid petrochemical residues in the product and the loss of some "top notes" when steam distillation is used. It does not yield an absolute directly. The supercritical carbon dioxide will extract both the waxes and the essential oils that make up the concrete. Subsequent processing with liquid carbon dioxide, achieved in the same extractor by merely lowering the extraction temperature, will separate the waxes from the essential oils. This lower temperature process prevents the decomposition and denaturing of compounds. When the extraction is complete, the pressure is reduced to ambient and the carbon dioxide reverts to a gas, leaving no residue.
Production quantities
Estimates of total production of essential oils are difficult to obtain. One estimate, compiled from data in 1989, 1990, and 1994 from various sources, gives the following total production, in tonnes, of essential oils for which more than 1,000 tonnes were produced.
{| class="wikitable"
! Oil !! Tonnes
|-
| Sweet orange || style="text-align:right;"| 12,000
|-
| Mentha arvensis || style="text-align:right;"| 4,800
|-
| Peppermint || style="text-align:right;"| 3,200
|-
| Cedarwood || style="text-align:right;"| 2,600
|-
| Lemon || style="text-align:right;"| 2,300
|-
| Eucalyptus globulus || style="text-align:right;"| 2,070
|-
| Litsea cubeba || style="text-align:right;"| 2,000
|-
| Clove (leaf) || style="text-align:right;"| 2,000
|-
| Spearmint || style="text-align:right;"| 1,300
|}
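Re-entering the tonnage table as data allows the figures to be tallied; the numbers are exactly those listed above, and the script is only an illustrative check:

```python
# The essential-oil production table above, re-entered as data so that
# totals and rankings can be checked. Figures are the tonnage estimates
# quoted in the article (compiled from 1989-1994 sources).

production_tonnes = {
    "Sweet orange": 12_000,
    "Mentha arvensis": 4_800,
    "Peppermint": 3_200,
    "Cedarwood": 2_600,
    "Lemon": 2_300,
    "Eucalyptus globulus": 2_070,
    "Litsea cubeba": 2_000,
    "Clove (leaf)": 2_000,
    "Spearmint": 1_300,
}

total = sum(production_tonnes.values())
largest = max(production_tonnes, key=production_tonnes.get)

print(f"Total listed production: {total:,} t")  # 32,270 t
print(f"Largest single oil: {largest}")         # Sweet orange
```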
Uses and cautions
Taken by mouth, many essential oils can be dangerous in high concentrations. Typical effects begin with a burning feeling, followed by salivation. Different essential oils may have drastically different pharmacology. Some act as local anesthetic counterirritants and, thereby, exert an antitussive (cough suppressing) effect. Many essential oils, particularly tea tree oil, may cause contact dermatitis. Menthol and some others produce a feeling of cold followed by a sense of burning.
In Australia, essential oils (mainly eucalyptus) have increasingly caused cases of poisoning, mostly of children. In the period 2014–2018, there were 4,412 poisoning incidents reported in New South Wales.
Use in aromatherapy
Aromatherapy is a form of alternative medicine in which healing effects are ascribed to the aromatic compounds in essential oils and other plant extracts. Aromatherapy may be useful to induce relaxation, but there is not sufficient evidence that essential oils can effectively treat any condition. Scientific research indicates that essential oils cannot treat or cure any chronic disease or other illnesses. Much of the research on the use of essential oils for health purposes has serious methodological errors. In a systematic review of 201 published studies on essential oils as alternative medicines, only 10 were found to be of acceptable methodological quality, and even these 10 were still weak in reference to scientific standards. Use of essential oils may cause harm, including allergic reactions and skin irritation. In one reported case, a person experienced severe skin irritation after receiving a facial at an all-natural salon, underscoring the misconception that "clean" beauty products made from natural ingredients are always safe and reflecting a growing awareness within the beauty industry of the risks associated with essential oils. There has been at least one case of death.
Use as pesticide
Research has shown that some essential oils have potential as natural pesticides. In case studies, certain oils have been shown to have a variety of deterring effects on pests, specifically insects and select arthropods. These effects may include repelling, inhibiting digestion, stunting growth, decreasing rate of reproduction, or death of pests that consume the oil. However, the molecules within the oils that cause these effects are normally non-toxic for mammals. These specific actions of the molecules allow for widespread use of these "green" pesticides without harmful effects on anything other than pests. Essential oils that have been investigated include rose, lemon grass, lavender, thyme, peppermint, basil, cedarwood, and eucalyptus.
Although they may not be a perfect replacement for all synthetic pesticides, essential oils have prospects for crop or indoor plant protection, urban pest control, and marketed insect repellents, such as bug spray. Certain essential oils have been shown in studies to be comparable in effectiveness to, if not more effective than, DEET, which is currently marketed as the most effective mosquito repellent. Although essential oils are effective as pesticides when first applied, as in mosquito repellent applied to the skin, they are only effective in the vapor stage. Since this stage is relatively short-lived, creams and polymer mixtures are used to prolong the period of effective repellency.
In any form, using essential oils as green pesticides rather than synthetic pesticides has ecological benefits such as decreased residual actions. In addition, increased use of essential oils as pest control could have not only ecological but also economic benefits, as the essential oil market diversifies and popularity increases among organic farmers and environmentally conscious consumers. Some essential oils are authorized, and in use, in the European Union: Melaleuca oil as a fungicide, citronella oil as a herbicide, Syzygium aromaticum oil as a fungicide and bactericide, and Mentha spicata oil as a plant growth regulator; Citrus sinensis oil is authorized (only in France) for Bemisia tabaci on Cucurbita pepo and Trialeurodes vaporariorum on Solanum lycopersicum; and approvals for oils of Thymus, C. sinensis, and Tagetes as insecticides are pending.
Use in food
In relation to their food applications, although these oils have been used throughout history as food preservatives, it was in the 20th century that essential oils came to be considered Generally Recognized as Safe (GRAS) by the United States Food and Drug Administration (FDA).
GRAS substances according to the FDA
As antimicrobials
The most commonly used essential oils with antimicrobial action are: β-caryophyllene, eugenol, eugenol acetate, carvacrol, linalool, thymol, geraniol, geranyl acetate, bicyclogermacrene, cinnamaldehyde, geranial, neral, 1,8-cineole, methyl chavicol, methyl cinnamate, methyl eugenol, camphor, α-thujone, viridiflorol, limonene, (Z)-linalool oxide, α-pinene, p-cymene, (E)-caryophyllene, γ-terpinene.
Some essential oils are effective antimicrobials and have been evaluated for food incorporation in vitro. However, actual deployment is rare because much higher concentrations are required in real foods. Some or all of this lower effectiveness is due to large differences between culture medium and foods in chemistry (especially lipid content), viscosity, and duration of inoculation/storage.
Dilution
Essential oils are usually lipophilic (literally: "oil-loving") compounds that are immiscible with water. They can be diluted in solvents like pure ethanol and polyethylene glycol.
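As a sketch of the dilution arithmetic, the snippet below computes how many drops of essential oil a given carrier volume needs for a target percent-by-volume concentration. The figure of roughly 20 drops per millilitre and the example values are illustrative assumptions, not data from this article:

```python
# Illustrative dilution arithmetic for essential oils. The drops-per-mL
# figure is a rough average (it varies with oil viscosity and dropper),
# and the example quantities are hypothetical.

DROPS_PER_ML = 20  # assumed approximate drop count per millilitre of oil

def drops_for_dilution(carrier_ml, percent):
    """Drops of essential oil needed to reach `percent` v/v in a carrier."""
    oil_ml = carrier_ml * percent / 100
    return round(oil_ml * DROPS_PER_ML)

# Example: a 2% v/v dilution in 30 mL of carrier (e.g. ethanol)
print(drops_for_dilution(30, 2))  # 12 drops
```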
Raw materials
Essential oils are derived from sections of plants. Some plants, like the bitter orange, are sources of several types of essential oil.
{| class="wikitable" |
| valign="top" |
Bark
Cassia
Cinnamon
Sassafras
Berries
Allspice
Juniper
Flowers
Cannabis
Chamomile
Clary sage
Clove
Hops
Hyssop
Jasmine
Lavender
Manuka
Marjoram
Orange
Pelargonium (Scented geranium)
Plumeria
Rose
Ylang-ylang
| valign="top" |
Leaves
Basil
Bay leaf
Buchu
Cinnamon
Common sage
Eucalyptus
Guava
Lemon grass
Melaleuca
Oregano
Rose
Bergamot
Patchouli
Peppermint
Pine
Rosemary
Spearmint
Tea tree
Thyme
Tsuga
Wintergreen
Peel
Bergamot
Grapefruit
Lemon
Lime
Orange
Tangerine
| valign="top" |
Resin
Benzoin
Copaiba
Frankincense
Labdanum
Myrrh
Rhizome
Galangal
Ginger
Roots
Valerian
Seeds
Anise
Buchu
Celery
Cumin
Flax
Nutmeg oil
Woods
Agarwood
Camphor
Cedar
Rosewood
Sandalwood
|}
Balsam of Peru
Balsam of Peru, an essential oil derived from Myroxylon plants, is used in food and drink for flavoring, in perfumes and toiletries for fragrance, and in animal care products. However, national and international surveys identified balsam of Peru among the "top five" allergens most commonly causing patch test allergic reactions in people referred to dermatology clinics.
Garlic oil
Garlic oil is an essential oil derived from garlic.
Eucalyptus oil
Most eucalyptus oil on the market is produced from the leaves of Eucalyptus globulus. Steam-distilled eucalyptus oil is used throughout Asia, Africa, Latin America and South America as a primary cleaning/disinfecting agent added to soaped mop and countertop cleaning solutions; it also possesses insect and limited vermin control properties. Note, however, that there are hundreds of species of eucalyptus, and perhaps some dozens are used to various extents as sources of essential oils. Not only do the products of different species differ greatly in characteristics and effects, but the products of the very same tree can also vary markedly.
Lavender oil
Lavender oil has long been used in the production of perfume. However, studies have shown it can be estrogenic and antiandrogenic, causing problems for prepubescent boys and pregnant women, in particular. Lavender essential oil is also used as an insect repellent.
Rose oil
Rose oil is produced from the petals of Rosa damascena and Rosa centifolia. Steam-distilled rose oil is known as "rose otto", while the solvent extracted product is known as "rose absolute".
Toxicity
The potential toxicity of essential oil is related to its level or grade of purity, and to the toxicity of specific chemical components of the oil. Many essential oils are designed exclusively for their aroma-therapeutic quality; these essential oils generally should not be applied directly to the skin in their undiluted form. Some can cause severe irritation, provoke an allergic reaction and, over time, prove toxic to the liver. If ingested or rubbed into the skin, essential oils can be highly poisonous, causing confusion, choking, loss of muscle coordination, difficulty in breathing, pneumonia, seizures, and possibly severe allergic reactions or coma.
Some essential oils, including many of the citrus peel oils, are photosensitizers, increasing vulnerability of the skin to sunlight.
Industrial users of essential oils should consult the safety data sheets to determine the hazards and handling requirements of particular oils. Even certain therapeutic-grade oils can pose potential threats to individuals with epilepsy or pregnant women.
Essential oil use in children can pose a danger because their thin skin and immature livers might make them more susceptible to toxic effects than adults.
Flammability
The flash point of each essential oil is different. Many of the common essential oils, such as tea tree, lavender, and citrus oils, are classed as Class 3 Flammable Liquids, as they have a flash point of 50–60 °C.
Gynecomastia
Estrogenic and antiandrogenic activity have been reported in in vitro studies of tea tree and lavender essential oils. Two published sets of case reports suggest that lavender oil may be implicated in some cases of gynecomastia, an abnormal breast tissue growth in prepubescent boys. The European Commission's Scientific Committee on Consumer Safety dismissed the claims against tea tree oil as implausible, but did not comment on lavender oil. In 2018, a BBC report on a study stated that tea tree and lavender oils contain eight substances that, when tested in tissue culture experiments, increased the level of estrogen and decreased the level of testosterone. Some of the substances are found in "at least 65 other essential oils". The study did not include animal or human testing.
Handling
Exposure to essential oils may cause contact dermatitis. Essential oils can be aggressive toward rubbers and plastics, so care must be taken in choosing the correct handling equipment. Glass syringes are often used, but have coarse volumetric graduations. Chemistry syringes are ideal, as they resist essential oils, are long enough to enter deep vessels, and have fine graduations, facilitating quality control. Unlike traditional pipettes, which have difficulty handling viscous fluids, the chemistry syringe, also known as a positive displacement pipette, has a seal and piston arrangement which slides inside the pipette, wiping the essential oil off the pipette wall.
Ingestion
Some essential oils qualify as GRAS flavoring agents for use in foods, beverages, and confectioneries according to strict good manufacturing practice and flavorist standards. Pharmacopoeia standards for medicinal oils should be heeded. Some oils can be toxic to some domestic animals, cats in particular. The internal use of essential oils can pose hazards to pregnant women, as some can be abortifacients in doses of 0.5–10 mL, and thus should not be used during pregnancy.
Pesticide residues
Concern about pesticide residues in essential oils, particularly those used therapeutically, means many practitioners of aromatherapy buy organically produced oils. Not only are pesticides present in trace quantities, but also the oils themselves are used in tiny quantities and usually in high dilutions. Where there is a concern about pesticide residues in food essential oils, such as mint or orange oils, the proper criterion is not solely whether the material is organically produced, but whether it meets the government standards based on actual analysis of its pesticide content.
Pregnancy
Some essential oils may contain impurities and additives that may be harmful to pregnant women. Certain essential oils are safe to use during pregnancy, but care must be taken when selecting quality and brand. Sensitivity to certain smells may cause pregnant women to have adverse side effects with essential oil use, such as headache, vertigo, and nausea. Pregnant women often report an abnormal sensitivity to smells and taste, and essential oils can cause irritation and nausea when ingested.
Toxicology
The following table lists the LD50, or median lethal dose, for common oils; this is the dose required to kill half the members of a tested animal population. LD50 is intended as a guideline only, and reported values can vary widely due to differences in tested species and testing conditions.
Standardization of derived products
In 2002, ISO published ISO 4720, in which the botanical names of the relevant plants are standardized. Other standards on this topic can be found under ICS 71.100.60.
History
In ancient Egypt, aromatic resins and plant extracts were used to produce traditional medicines and scented preparations, such as perfumes and incense; preparations including frankincense, myrrh, cedarwood, juniper berry and cinnamon may have contained essential oils. In 1923, when archaeologists opened Pharaoh Tutankhamun's tomb, they found 50 alabaster jars of essential oils.
Essential oils have been used in folk medicine over centuries. The Persian physician Ibn Sina, known as Avicenna in Europe, was the first to derive the fragrance of flowers by distillation, while the earliest recorded mention of the techniques and methods used to produce essential oils may be by Ibn al-Baitar (1188–1248), an Arab Al-Andalusian (Muslim Spain) physician, pharmacist and chemist.
Rather than refer to essential oils themselves, modern works typically discuss specific chemical compounds of which the essential oils are composed, such as referring to methyl salicylate rather than "oil of wintergreen".
Essential oils are used in aromatherapy, a branch of alternative medicine that uses essential oils and other aromatic compounds. Oils are volatilized, diluted in a carrier oil and used in massage, diffused in the air by a nebulizer or diffuser, heated over a candle flame, or burned as incense.
See also
Aroma lamp
Enfleurage
Fragrance oil
List of essential oils
Tincture
Volatility
References
Further reading
Log probability
In probability theory and computer science, a log probability is simply a logarithm of a probability. The use of log probabilities means representing probabilities on a logarithmic scale (−∞, 0], instead of the standard [0, 1] unit interval.
Since the probabilities of independent events multiply, and logarithms convert multiplication to addition, log probabilities of independent events add. Log probabilities are thus practical for computations, and have an intuitive interpretation in terms of information theory: the negative expected value of the log probabilities is the information entropy of an event. Similarly, likelihoods are often transformed to the log scale, and the corresponding log-likelihood can be interpreted as the degree to which an event supports a statistical model. The log probability is widely used in implementations of computations with probability, and is studied as a concept in its own right in some applications of information theory, such as natural language processing.
Motivation
Representing probabilities in this way has several practical advantages:
Speed. Since multiplication is more expensive than addition, taking the product of a high number of probabilities is often faster if they are represented in log form. (The conversion to log form is expensive, but is only incurred once.) Multiplication arises from calculating the probability that multiple independent events occur: the probability that all independent events of interest occur is the product of all these events' probabilities.
Accuracy. The use of log probabilities improves numerical stability, when the probabilities are very small, because of the way in which computers approximate real numbers.
Simplicity. Many probability distributions have an exponential form. Taking the log of these distributions eliminates the exponential function, unwrapping the exponent. For example, the log probability of the normal distribution's probability density function is −(x − μ)²/(2σ²) − log(σ√(2π)) instead of (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)). Log probabilities make some mathematical manipulations easier to perform.
Optimization. Since most common probability distributions—notably the exponential family—are only logarithmically concave, and concavity of the objective function plays a key role in the maximization of a function such as probability, optimizers work better with log probabilities.
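As a minimal illustration of the speed and accuracy points above, the following plain-Python sketch (an illustration written for this article, not part of the source) shows a product of many small probabilities underflowing to zero in double precision while the equivalent sum of log probabilities stays perfectly representable:

```python
import math

# Multiplying 1,000 probabilities of 0.01 underflows to 0.0 in IEEE double
# precision (the true value, 1e-2000, is far below the smallest subnormal),
# while the equivalent sum of log probabilities remains well behaved.
probs = [0.01] * 1000

direct = 1.0
for p in probs:
    direct *= p  # collapses to exactly 0.0 long before the loop ends

log_total = sum(math.log(p) for p in probs)

print(direct)     # 0.0
print(log_total)  # about -4605.17 (= 1000 * log(0.01))
```

The log-space sum also replaces 1,000 multiplications with 1,000 (cheaper) additions once the one-time conversion to log form has been paid.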
Representation issues
The logarithm function is not defined for zero, so log probabilities can only represent non-zero probabilities. Since the logarithm of a number in the interval (0, 1) is negative, often the negative log probabilities are used. In that case the log probabilities in the following formulas would be inverted.
Any base can be selected for the logarithm.
Basic manipulations
In this section we name probabilities in logarithmic space x′ and y′ for short:
x′ = log(x) and y′ = log(y).
The product of probabilities x · y corresponds to addition in logarithmic space:
log(x · y) = log(x) + log(y) = x′ + y′.
The sum of probabilities x + y is a bit more involved to compute in logarithmic space, requiring the computation of one exponent and one logarithm:
log(x + y) = x′ + log(1 + exp(y′ − x′)).
However, in many applications a multiplication of probabilities (giving the probability of all independent events occurring) is used more often than their addition (giving the probability of at least one of mutually exclusive events occurring). Additionally, the cost of computing the addition can be avoided in some situations by simply using the highest probability as an approximation. Since probabilities are non-negative this gives a lower bound. This approximation is used in reverse to get a continuous approximation of the max function.
Addition in log space
The formula above is more accurate than naively computing log(exp(x′) + exp(y′)), provided one takes advantage of the asymmetry in the addition formula: x′ should be the larger (least negative) of the two operands. This also produces the correct behavior if one of the operands is floating-point negative infinity, which corresponds to a probability of zero.
−∞ + log(1 + exp(x′ − (−∞))): this quantity is indeterminate, and will result in NaN.
x′ + log(1 + exp(−∞ − x′)) = x′ + 0 = x′: this is the desired answer.
The above formula alone will incorrectly produce an indeterminate result in the case where both arguments are −∞. This should be checked for separately to return −∞.
For numerical reasons, one should use a function that computes log(1 + x) (log1p) directly.
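The addition rule described above, including the separate check for two zero probabilities and the use of log1p, can be sketched in Python as follows (the function name log_add is illustrative, not from the source):

```python
import math

def log_add(x_log: float, y_log: float) -> float:
    """Return log(exp(x_log) + exp(y_log)): the log probability of the sum
    of two probabilities given their log probabilities."""
    neg_inf = float("-inf")
    # Both probabilities are zero; handle separately to avoid an
    # indeterminate (-inf) - (-inf) producing NaN.
    if x_log == neg_inf and y_log == neg_inf:
        return neg_inf
    # Make x_log the larger (least negative) operand, for accuracy and so
    # that a single -inf operand yields x_log + log1p(0) = x_log.
    if y_log > x_log:
        x_log, y_log = y_log, x_log
    return x_log + math.log1p(math.exp(y_log - x_log))
```

For example, log_add(math.log(0.1), math.log(0.2)) recovers math.log(0.3) to machine precision, and passing float("-inf") for one operand returns the other operand unchanged.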
See also
Information content
Log-likelihood
References
Probability
Mathematics of computing
Space command
A space command is a military organization with responsibility for space operations and warfare. A space command is typically a joint organization or organized within a larger military branch, and is distinct from a fully independent space force. The world's first space command, the United States' Air Force Space Command, was established in 1982 and later became the United States Space Force in 2019.
History
In the United States and Soviet Union, the early military space programs were managed by individual military services. In the United States, the Air Force and its various major commands were responsible for military space operations; however, Air Defense Command was responsible for the majority of space operations. In 1967, it was redesignated Aerospace Defense Command to emphasize its increased space role. Following the inactivation of Aerospace Defense Command in 1980, U.S. space forces were briefly organized under Strategic Air Command, before being organized into Space Command, which was activated in 1982. Space Command, which was the first space command in the world, was redesignated Air Force Space Command in 1985 to distinguish it from the joint U.S. Space Command. The Army and Navy, both possessing smaller space capabilities, had their own space commands, with Naval Space Command activated in 1983 and Army Space Command activated in 1988.
Soviet space forces were organized under the Strategic Rocket Forces' Central Directorate of Space Assets, which was activated in 1964, before being upgraded to the Main Directorate of Space Assets in 1970. The Soviet Air Defense Forces' Anti-Ballistic Missile and Anti-Space Defense Forces were activated in 1967 and remained separate from the Strategic Missile Forces' space forces.
In 1959, fearing U.S. Air Force dominance of the military space program, the United States Navy's chief of naval operations, Admiral Arleigh Burke, proposed the creation of a Defense Astronautical Agency to manage U.S. military space operations. The proposal of a joint space command did not come to pass until 1985, when United States Space Command was activated to manage U.S. military space activities, overseeing Air Force Space Command, Naval Space Command, and Army Space Command. The Soviet Union also raised the profile of its space forces, moving the Main Directorate of Space Assets from the Strategic Missile Forces to the Soviet Armed Forces General Staff in 1982, before upgrading it into the Chief Directorate of Space Assets and placing it directly in the Ministry of Defence in 1986. In 1981, the U.S.–Canadian North American Air Defense Command was redesignated as the North American Aerospace Defense Command, emphasizing its space role.
Following the collapse of the Soviet Union, the Soviet space forces were reorganized into Russia's Military Space Forces and the Russian Air Defence Forces' Rocket and Space Defence Troops. In 1997, both were merged into the Strategic Rocket Forces, before being split out in 2001 as the Russian Space Forces, which were an independent arm of troops, but not a fully independent service. U.S. Army space forces also underwent reorganization, with the Army Space Command being merged with its missile defense forces to form Army Space and Strategic Defense Command in 1992, which was redesignated as Army Space and Missile Defense Command in 1997.
After the September 11 attacks, U.S. space forces were sidelined as the focus shifted to the War on Terror. In 2002, U.S. Space Command was inactivated and its joint space responsibilities were transferred to United States Strategic Command, and Naval Space Command was inactivated, transferring most of its capabilities to Air Force Space Command. Starting in 2005, U.S. Strategic Command began to organize its space forces semi-independently, first as Joint Space Operations, then in 2006 as the Joint Functional Component Command for Space, and in 2017 the Joint Force Space Component Command. In 2019, the United States reestablished United States Space Command, and in 2020, reorganized Air Force Space Command into the United States Space Force, becoming a fully independent military branch, with Space Operations Command serving as its primary space command. To support U.S. Space Command, in 2020 the Navy created Navy Space Command, with United States Tenth Fleet as its operational arm, out of Fleet Cyber Command.
Recognizing the growing importance of space operations, France created the Joint Space Command within the French Air Force in 2010 to manage its space capabilities, reorganizing it into the French Space Command as part of a larger transformation of the French Air Force into the French Air and Space Force in 2019. Russia also reorganized their Space Forces, merging together their Space Forces and air defense elements of the Russian Air Force to form the Russian Aerospace Defense Forces in 2011, moving the space elements into the Aerospace Defense Forces' Russian Space Command. In 2015, it reorganized its space forces again, merging the Russian Air Force and Russian Aerospace Defense Forces to form the Russian Aerospace Forces and recreating the Russian Space Forces as a sub-branch, replacing the Russian Space Command. In 2015, the People's Liberation Army also centralized their space forces as part of the new Strategic Support Force's Space Systems Department. In 2018, India centralized its space forces in a tri-service Defence Space Agency, which is expected to become a full command in the coming years. In 2020, Iran also unveiled their own Space Command under the Islamic Revolutionary Guard Corps Aerospace Force. In 2020, NATO also established a Space Centre as part of Allied Air Command. In 2021, the British Armed Forces established United Kingdom Space Command as a joint command under the leadership of the Royal Air Force, taking over space responsibilities from United Kingdom Strategic Command. In 2021, the Royal Australian Air Force Chief of Air Force announced the intended creation of an Australian Space Command.
List of space commands
Space Command (Australian Defence Force integrated tri-service headquarters within Joint Capabilities Group (JCG))
Space Operations Group (part of the Japan Air Self-Defense Force)
Aerospace Operations Command (part of the Brazilian Air Force)
North American Aerospace Defense Command (multinational command)
Space Systems Department (part of the People's Liberation Army Strategic Support Force)
French Space Command (part of the French Air and Space Force)
Defence Space Agency (Indian Armed Forces joint command)
Iranian Space Command (part of the Islamic Revolutionary Guard Corps Aerospace Force)
Space Operations Command (Italian Armed Forces joint command)
NATO Space Centre (part of Allied Air Command)
Russian Space Forces (part of the Russian Aerospace Forces)
United Kingdom Space Command (British Armed Forces joint command)
United States Space Command (United States Armed Forces joint command)
Space Operations Command (part of the United States Space Force)
Army Space and Missile Defense Command (part of the United States Army)
Navy Space Command (part of the United States Navy)
See also
Space force
References
Types of military forces
Outer space
Space law
Space warfare
Hydrophobic mismatch
Hydrophobic mismatch is the difference between the thicknesses of hydrophobic regions of a transmembrane protein and of the biological membrane it spans. In order to avoid unfavorable exposure of hydrophobic surfaces to water, the hydrophobic regions of transmembrane proteins are expected to have approximately the same thickness as the hydrophobic (lipid acyl chain) region of the surrounding lipid bilayer. Nevertheless, the same membrane protein can be encountered in bilayers of different thickness. In eukaryotic cells, the plasma membrane is thicker than the membranes of the endoplasmic reticulum. Yet all proteins that are abundant in the plasma membrane are initially integrated into the endoplasmic reticulum upon synthesis on ribosomes. Transmembrane peptides or proteins and surrounding lipids can adapt to the hydrophobic mismatch by different means.
Possible adaptations to mismatch
In order to avoid unfavorable exposure of hydrophobic surfaces to a hydrophilic environment, biological membranes tend to adapt to such mismatch. For example, an integral membrane protein tends to surround itself with lipids of matching size and shape due to protein and lipid segregation. Since proteins are relatively rigid, whereas lipid hydrocarbon chains are flexible, the condition of hydrophobic matching can be fulfilled by stretching, squashing, and/or tilting of the lipid chains.
When the hydrophobic part of a transmembrane protein is too thick to match the hydrophobic bilayer thickness (left part of Figure), the protein can aggregate in the membrane to minimize the exposed hydrophobic area or tilt to reduce their effective hydrophobic thickness. They can also adopt by changing the orientation of hydrophobic and hydrophilic side chains near the interface. Lipids in turn can modulate the membrane thickness by stretching their acyl chains.
When the hydrophobic part of a transmembrane protein is too thin to match the hydrophobic bilayer thickness (right part of Figure), again this might result in protein aggregation, or changes in backbone conformation and/or side chain orientation. Too short peptides may adopt a surface localization. Lipids could decrease the local bilayer thickness by disordering their acyl chains.
Protein aggregation
Since Mouritsen and Bloom proposed their detailed thermodynamic "Mattress Model", which includes adaptation of the lipids and induction of protein segregation at more extreme mismatch, much additional insight into mismatch-induced protein aggregation has been obtained. Experimental evidence has also been found that a hydrophobic mismatch can lead to protein aggregation in fluid bilayers. Electron microscopy studies on bacteriorhodopsin, reconstituted in saturated and unsaturated fluid PC bilayers with varying chain length, showed that protein aggregation occurred only at a rather large mismatch, and that bilayer thicknesses 4 angstroms thicker and 10 angstroms thinner than the estimated hydrophobic thickness of the protein are tolerated without induction of significant aggregation.
Helix tilt
Tilt is also a possible result if the hydrophobic part of a peptide or protein is too long to span the membrane. A previous study on lactose permease of E. coli showed that upon reconstitution of the protein in PE/PG (3/1) lipid bilayer, an increase in helix tilt occurs at increasing protein content. This tilt was accompanied by a decrease in lipid order, which results in a decrease in bilayer thickness, suggesting that it is a mismatch related response.
In large proteins that span the membrane multiple times, changes in helical tilt may occur with little effect on lipid packing. However, for a single transmembrane helix, it is possible that a tilt would cause a strain on the surrounding lipids to accommodate the helix in the bilayer. Thus, a large degree of tilting can be a less favorable option for single transmembrane proteins.
Surface orientation
Relatively small hydrophobic peptides may not be able to integrate into the membrane, and in response adopt an orientation at the membrane surface. The experimental evidence was shown by a fluorescence study on an artificial peptide with a 19 amino acid long hydrophobic sequence of mainly leucines and flanked on both sides with lysines as anchoring residues. The results indicated that a conversion from a dominant transmembrane to parallel orientation of the peptide could be induced by modulating bilayer thickness via addition of cholesterol or by increasing lipid chain length.
Backbone conformation change
To obtain detailed information on the consequences of mismatch for the conformation of peptides and proteins in lipid bilayers, small membrane-spanning peptides are most suitable, but further studies are still needed.
Theories for the mismatch effects
Different theoretical approaches have been applied to describe the energy cost and thermodynamic effects of mismatch, including treatment of the membrane as an elastic sheet or a microscopic approach.
Mattress model
The mattress model was proposed as a phenomenological theory approach in 1984 by Mouritsen and Bloom. It is a two-component real solution theory based on the theory of nonideal solutions and hence allows for phase separation. In their model, they relate the energy stored in the undulations of the membrane surface caused by the mismatch to the elastic properties of the lipids and proteins. They do not include microscopic detail of the lipids, but use as input the known thermodynamic properties of the pure lipid system. They also include indirect lipid-protein interactions induced by the mismatch as well as direct lipid-protein van der Waals-like interactions between the hydrophobic parts of the lipid bilayer and the proteins. The model accounts for the excess "hydrophobic effect" associated with the lipid-protein hydrophobic mismatch and the elastic deformation free energy of the lipid chains near the protein. The interaction potentials are estimated based on experimental data derived from thermodynamic and mechanical measurements of membrane properties.
Monte Carlo simulation scheme
The mattress model was later replicated in a Monte Carlo simulation scheme by Sperotto and Mouritsen. They allowed for different microstates of the lipids, classified according to Pink's 10-state model, hence enabling a pure lipid bilayer phase transition. This version of the model provides a connection between the microscopic characteristics of the system and its thermodynamic behavior.
Molecular theory
In a molecular theory of the lipid chains of the membrane, peptides, with their hydrophobic length, were treated as providing a boundary condition on the configuration of the lipid chains. Molecular modeling was combined with phenomenological free energy contributions describing lipid head group repulsion and membrane solvent surface tension (Duque et al.).
Experimental studies of hydrophobic mismatch and helix tilt
Knowledge of the response of membrane proteins to mismatch has been obtained from a variety of experimental studies. Different types of experimental approaches provide different kinds of insight into the contributions from the abovementioned hypothetical molecular responses. For example, proteins or peptides outfitted with fluorescent or paramagnetic labeling groups can be employed in fluorescence spectroscopy and electron spin resonance studies. These can reveal the molecular details of both the protein-lipid interactions and protein-protein interactions (characteristic of an aggregation-style response) and how they are affected by (mis)match conditions. Studies of helix tilting as a function of membrane thickness have also benefited from the use of solid-state NMR techniques, in particular using oriented membranes that provide direct insight into the helix tilt angle. Early studies of model membrane-spanning peptides (such as the WALP peptide) have provided insight into the various factors that influence the response, including membrane composition, peptide sequence and in particular also the presence of interfacial anchoring residues. In recent years, great advances in X-ray crystallography and electron microscopy techniques have yielded new insights into the lipid interactions of larger proteins. This is exemplified by the insights into helix tilting in a crystallized calcium pump protein.
Biological significance of mismatch
The hydrophobic mismatch is important for the protein sorting and formation of lipid rafts.
Protein sorting
In eukaryotic cells, the level of cholesterol increases through the secretory pathway, from the endoplasmic reticulum to the Golgi to the plasma membrane, suggesting a concomitant increase in membrane thickness. In line with this, the average length of transmembrane segment of single-span plasma membrane proteins typically is five amino acids longer than the average length of proteins from the Golgi. Experimental evidence was obtained that protein sorting in the Golgi may be based on this length difference: for several proteins that normally reside in the Golgi, it was shown that increasing their hydrophobic length can reroute the proteins to the plasma membrane, or vice versa, that decreasing the hydrophobic length of proteins from the plasma membrane can cause their retention in the Golgi.
Lipid rafts
Rafts are membrane domains enriched in cholesterol, sphingomyelin (SM), and certain membrane proteins. Rafts have putative roles in many physiological processes, such as signal transduction, endocytosis, apoptosis, protein trafficking, and lipid regulation. Raft lipids typically have saturated hydrocarbon chains. Lipid rafts have a higher hydrophobic thickness than the rest of the lipid bilayer, which may lead to a preferential separation of transmembrane proteins with a higher hydrophobic thickness into the lipid rafts.
See also
Hydrophobicity scales
Cell membrane
Lipid raft
References
Membrane biology
Enzo Marinari
Enzo Marinari (born on July 7, 1957, in Avellino) is an Italian theoretical and computational physicist. He has contributed to introducing several new algorithms in computational physics, such as Parallel Tempering, the SU(N) updating method and Constraint Allocation Flux Balance Analysis (CAFBA). He is a professor at the Physics Department of the Sapienza University of Rome.
Education and career
Enzo Marinari received his physics degree from the Sapienza University of Rome in 1980. Until 1984 he worked as a staff scientist at the Theoretical Physics Institute of the CEA Saclay, in France. In 1988 he was appointed Associate Professor at the University of Rome Tor Vergata, and in 1994 he became a full professor at the University of Cagliari. Since 1999 he has been a full professor at the Physics Department of the Sapienza University of Rome in Italy.
From 1992 until 1994 he was concurrently the Physics Director for the Northeast Parallel Architecture Center (NPAC) in Syracuse, NY, USA. During the period 2004–2011, he was the Scientific Director for physics of the Institute for Biocomputation and Physics of Complex Systems (BIFI) at the University of Zaragoza, Spain.
During his career Enzo Marinari has done research in different fields of physics, such as particle physics (QCD, string theory), statistical physics (spin glasses, disordered and complex systems, phase transitions, temperature chaos) and biophysics (metabolic and neural networks).
He has been one of the founding members of the Spanish-Italian Janus collaboration and of the Italian APE collaboration, both promoting the use of computational methods in research in physics.
He has written and edited several books and plays an active role in explaining science and its applications on mainstream media channels.
Recognitions
In 1978 and 1979 Enzo Marinari received the Borsa Persico of the Accademia dei Lincei. In 1988 he was elected as best physicist under the age of 35 by the Accademia dei Lincei.
In 1992 he received an essay prize from the Gravity Research Foundation.
References
External links
Personal Homepage
Living people
1957 births
20th-century Italian physicists
21st-century Italian physicists
Academic staff of the Sapienza University of Rome
Statistical physicists
Display contrast
Contrast, in physics and digital imaging, is a quantifiable property used to describe the difference in appearance between elements within a visual field. It is closely linked with the perceived brightness of objects and is typically defined by specific formulas that involve the luminances of the stimuli. For example, contrast can be quantified as ΔL/L near the luminance threshold, known as Weber contrast, or as LH/LL at much higher luminances. Further, contrast can result from differences in chromaticity, which are specified by colorimetric characteristics such as the color difference ΔE in the CIE 1976 UCS (Uniform Colour Space).
Understanding contrast is crucial in fields such as imaging and display technologies, where it significantly affects the quality of visual content rendering. The contrast of electronic visual displays is influenced by the type of signal driving mechanism used, which can be either analog or digital. This mechanism directly influences how well the display renders images under varying conditions. Additionally, the contrast is affected by ambient illumination and the viewer's direction of observation, which can alter perceived brightness and color accuracy.
Luminance contrast
The "luminance contrast" is the ratio between the higher luminance, LH, and the lower luminance, LL, that define the feature to be detected. This ratio, often called contrast ratio, CR, (actually being a luminance ratio), is often used for high luminances and for specification of the contrast of electronic visual display devices. The luminance contrast (ratio), CR, is a dimensionless number, often indicated by adding ":1" to the value of the quotient (e.g. CR = 900:1).
CR = LH / LL, with 1 ≤ CR ≤ ∞.
A "contrast ratio" of CR = 1 means no contrast.
The contrast can also be specified by the contrast modulation (or Michelson contrast), CM, defined as:
CM = (LH − LL) / (LH + LL), with 0 ≤ CM ≤ 1.
CM = 0 means no contrast.
Another contrast definition, a practical application of Weber contrast sometimes used in the electronic displays field and denoted K or CW, is:
CW = (LH − LL) / LH, with 0 ≤ CW ≤ 1.
CW = 0 means no contrast, while the maximum contrast, CWmax, equals one; like the Michelson contrast, it is commonly expressed as a percentage (100%).
A modification of Weber contrast by Hwaung/Peli adds a glare offset, LG, to the denominator to more accurately model computer displays. The modified Weber contrast is thus:
CW′ = (LH − LL) / (LH + LG).
This more accurately models the loss of contrast that occurs on darker display luminance due to ambient light conditions.
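These definitions can be sketched in Python; the luminance values (450 and 0.5 cd/m²) and the glare offset below are illustrative assumptions, not data for any particular display:

```python
def contrast_metrics(lh, ll, glare=0.0):
    """Contrast metrics for two luminances lh >= ll > 0 (cd/m^2).
    `glare` is an optional veiling-glare luminance for the modified
    Weber formula (0 reduces it to the plain Weber form)."""
    cr = lh / ll                       # contrast ratio, 1 <= CR < infinity
    cm = (lh - ll) / (lh + ll)         # Michelson contrast, 0..1
    cw = (lh - ll) / lh                # Weber contrast (display form), 0..1
    cw_mod = (lh - ll) / (lh + glare)  # modified Weber with glare offset
    return cr, cm, cw, cw_mod

# Illustrative values only: 450 cd/m^2 white, 0.5 cd/m^2 black, 1 cd/m^2 glare
cr, cm, cw, cw_mod = contrast_metrics(450.0, 0.5, glare=1.0)
print(f"CR = {cr:.0f}:1, CM = {cm:.3f}, CW = {cw:.3f}, CW(mod) = {cw_mod:.3f}")
```

Note how the same pair of luminances yields a contrast ratio of 900:1 but Michelson and Weber values very close to their maximum of 1, which is why the different metrics are not interchangeable.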
Color contrast
Two parts of a visual field can be of equal luminance, but their color (chromaticity) is different. Such a color contrast can be described by a distance in a suitable chromaticity system (e.g. CIE 1976 UCS, CIELAB, CIELUV).
A metric for color contrast often used in the electronic displays field is the color difference ΔE*uv or ΔE*ab.
Full-screen contrast
During measurement of the luminance values used for evaluation of the contrast, the active area of the display screen is often completely set to one of the optical states for which the contrast is to be determined, e.g. completely white (R=G=B=100%) and completely black (R=G=B=0%) and the luminance is measured one after the other (time sequential).
This way of proceeding is suitable only when the display device does not exhibit loading effects, in which the luminance of a test pattern varies with its size. Such loading effects can be found in CRT displays and in PDPs. A small test pattern (e.g. a 4% window pattern) displayed on these devices can have significantly higher luminance than the corresponding full-screen pattern because the supply current may be limited by special electronic circuits.
Full-swing contrast
Any two test patterns that are not completely identical can be used to evaluate a contrast between them. When one test pattern comprises the completely bright state (full-white, R=G=B=100%) and the other one the completely dark state (full-black, R=G=B=0%) the resulting contrast is called full-swing contrast. This contrast is the highest (maximum) contrast the display can achieve. If no test pattern is specified in a data sheet together with a contrast statement, it will most probably refer to the full-swing contrast.
Static contrast
The standard procedure for contrast evaluation is as follows:
Apply the first test pattern to the electrical interface of the display under test and wait until the optical response has settled to a stable steady state,
Measure the luminance and/or the chromaticity of the first test pattern and record the result,
Apply the second test pattern to the electrical interface of the display under test and wait until the optical response has settled to a stable steady state,
Measure the luminance and/or the chromaticity of the second test pattern and record the result,
Calculate the resulting static contrast for the two test patterns using one of the metrics listed above (CR,CM or K).
When luminance and/or chromaticity are measured before the optical response has settled to a stable steady state, some kind of transient contrast has been measured instead of the static contrast.
Transient contrast
When the image content is changing rapidly, e.g. during the display of video or movie content, the optical state of the display may not reach the intended stable steady state because of slow response and thus the apparent contrast is reduced if compared to the static contrast.
Dynamic contrast
This is a technique for expanding the contrast of LCD-screens.
LCD-screens comprise a backlight unit, which permanently emits light, and an LCD-panel in front of it, which modulates the transmission of that light with respect to intensity and chromaticity. In order to increase the contrast of such LCD-screens, the backlight can be (globally) dimmed when the image to be displayed is dark (i.e. contains no high-intensity image data) while the image data is numerically corrected and adapted to the reduced backlight intensity. In this way the rendition of dark regions in dark images can be improved and the contrast between subsequent frames can be substantially increased. The contrast within one frame can also be expanded intentionally depending on the histogram of the image (some sporadic highlights in an image may be cut or suppressed). Considerable digital signal processing is required to implement dynamic contrast control in a way that is pleasing to the human visual system (e.g. without inducing flicker effects).
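The global-dimming idea can be sketched as follows; this is a minimal illustration assuming linear pixel intensities in the range 0..1 and an arbitrary backlight floor, not a description of any real controller:

```python
def global_dimming(frame, min_backlight=0.1):
    """Sketch of global dynamic-contrast control: dim the backlight to the
    brightest pixel of a dark frame, then rescale the pixel data so that
    backlight * pixel still reproduces the intended luminance."""
    peak = max(max(row) for row in frame)
    backlight = max(peak, min_backlight)   # never dim below an assumed floor
    # numerically compensate the image data for the reduced backlight
    compensated = [[min(v / backlight, 1.0) for v in row] for row in frame]
    return backlight, compensated

frame = [[0.02, 0.10], [0.25, 0.05]]  # a dark frame (peak intensity 0.25)
bl, comp = global_dimming(frame)
# The product bl * comp reproduces the original frame, while the panel's
# black-state leakage is reduced by the same dimming factor.
```

The benefit is that the panel's residual light leakage in the black state scales down with the backlight, lowering LL between dark and bright frames and thus raising the sequential contrast.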
The contrast within individual frames (simultaneous contrast) can be increased when the backlight can be locally dimmed. This can be achieved with backlight units realized as arrays of LEDs. High-dynamic-range (HDR) LCDs use this technique to realize (static) contrast values in the range of CR > 100,000.
Dark-room contrast
In order to measure the highest contrast possible, the dark state of the display under test must not be corrupted by light from the surroundings, since even small increments ΔL, which turn the ratio into (LH + ΔL) / (LL + ΔL), effect a considerable reduction of that quotient because LL is small. This is the reason why most contrast ratios used for advertising purposes are measured under dark-room conditions (illuminance EDR ≤ 1 lx).
All emissive electronic displays (e.g. CRTs, PDPs) theoretically emit no light in the black state (R=G=B=0%); under dark-room conditions, with no ambient light reflected from the display surface into the light measuring device, the luminance of the black state is therefore zero and the contrast becomes infinite.
When these display-screens are used outside a completely dark room, e.g. in the living room (illuminance approx. 100 lx) or in an office situation (illuminance 300 lx minimum), ambient light is reflected from the display surface, adding to the luminance of the dark state and thus reducing the contrast considerably.
A novel TV screen realized with OLED technology has been specified with a dark-room contrast ratio of CR = 1,000,000 (one million). In a realistic application situation with 100 lx illuminance, the contrast ratio drops to about 350; at 300 lx it is reduced to about 120.
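The collapse of a dark-room contrast ratio under ambient light can be sketched numerically. The white luminance (350 cd/m²) and the 3% diffuse (Lambertian) screen reflectance below are assumptions chosen to roughly reproduce the order of magnitude of the figures above, not published specifications:

```python
import math

def ambient_contrast(cr_darkroom, l_white, illuminance_lx, reflectance):
    """Contrast ratio in the presence of ambient illuminance E (lux),
    assuming a diffusely reflecting screen: L_refl = E * rho / pi."""
    l_black = l_white / cr_darkroom          # dark-room black luminance
    l_refl = illuminance_lx * reflectance / math.pi  # reflected luminance
    return (l_white + l_refl) / (l_black + l_refl)

# Hypothetical OLED screen: 350 cd/m^2 white, 1,000,000:1 dark-room CR,
# 3% diffuse reflectance
for lx in (0, 100, 300):
    print(lx, "lx ->", round(ambient_contrast(1_000_000, 350.0, lx, 0.03)))
```

Because the reflected luminance adds to both optical states, but is orders of magnitude larger than the near-zero black level, the ambient contrast is dominated almost entirely by LH / Lrefl.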
"Ambient contrast"
The contrast that can be experienced or measured in the presence of ambient illumination is called "ambient contrast" for short. A special kind of ambient contrast is the contrast under outdoor illumination conditions, when the illuminance can be very intense (up to 100,000 lx). The contrast apparent under such conditions is called "daylight contrast".
Since the dark areas of a display are always corrupted by reflected light, reasonable ambient contrast values can be maintained only when the display is provided with efficient measures to reduce reflections, such as anti-reflection and/or anti-glare coatings.
Concurrent contrast
When a test pattern is displayed that contains areas with different luminance and/or chromaticity (e.g. a checkerboard pattern), and an observer sees the different areas simultaneously, the apparent contrast is called concurrent contrast (the term simultaneous contrast is already taken for a different effect). Contrast values obtained from two subsequently displayed full-screen patterns may be different from the values evaluated from a checkerboard pattern with the same optical states. That discrepancy may be due to non-ideal properties of the display-screen (e.g. crosstalk, halation, etc.) and/or due to straylight problems in the light measuring device.
Successive contrast
When a contrast is established between two optical states that are perceived or measured one after the other, this contrast is called successive contrast. The contrast between two full-screen patterns (full-screen contrast) always is a successive contrast.
Methods of measurement
contrast of direct-view displays
contrast of projection displays
Depending on the nature of the display under test (direct-view or projection), the contrast is evaluated as a quotient of luminance values (direct-view displays) or as a quotient of illuminance values (projection displays), if the properties of the projection screen are separated from those of the projector. In the latter case, a checkerboard pattern with full-white and full-black rectangles is projected and the illuminance is measured at the center of each rectangle. The standard ANSI IT7.215-1992 defines test patterns, measurement locations, and a way to obtain the luminous flux from illuminance measurements; it does not, however, define a quantity named "ANSI lumen".
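The checkerboard evaluation can be sketched as the ratio of the mean white-rectangle illuminance to the mean black-rectangle illuminance; the readings below are illustrative values, not measurements from any standard or device:

```python
def ansi_contrast(white_lux, black_lux):
    """ANSI-style checkerboard contrast for a projector: the average
    illuminance at the centers of the white rectangles divided by the
    average at the centers of the black rectangles."""
    return (sum(white_lux) / len(white_lux)) / (sum(black_lux) / len(black_lux))

# 4x4 checkerboard: 8 white and 8 black measurement points (lux)
white = [510, 498, 505, 492, 500, 507, 495, 503]
black = [2.6, 2.4, 2.5, 2.5, 2.6, 2.4, 2.5, 2.5]
print(f"checkerboard contrast = {ansi_contrast(white, black):.1f}:1")
```

Averaging over all rectangles makes the result sensitive to intra-frame stray light (lens flare, screen halation), which is exactly what distinguishes this concurrent contrast from a full-screen sequential measurement.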
If the reflective properties of the projection screen (usually depending on direction) are included in the measurement, the luminance reflected from the centers of the rectangles has to be measured for a (set of) specific directions of observation.
Luminance, contrast and chromaticity of LCD-screens is usually varying with the direction of observation (i.e. viewing direction). The variation of electro-optical characteristics with viewing direction can be measured sequentially by mechanical scanning of the viewing cone (gonioscopic approach) or by simultaneous measurements based on conoscopy.
See also
Contrast (vision)
Interferometric visibility
References
External links
Charles Poynton: Reducing eyestrain from video and computer monitors
Contrast of Sony's XEL-1 OLED TV screen with ambient illumination
Display technology
Liquid crystal displays
Engineering ratios
Television technology | Display contrast | Mathematics,Technology,Engineering | 2,403 |
38,940,908 | https://en.wikipedia.org/wiki/List%20of%20botanists%20by%20author%20abbreviation%20%28T%E2%80%93V%29 |
T
T.A.Chr. – Tyge Ahrengot Christensen (1918–1996)
Täckh. – Vivi Täckholm (1898–1978)
Tagawa – Motozi Tagawa (1908–1977)
Tagg – Harry Frank Tagg (1874–1933)
Takeda – Hisayoshi Takeda (1883–1972)
Takeuchi – H.Takeuchi (fl. 1929)
Takht. – Armen Takhtajan (1910–2009)
Tali – Kadri Tali (born 1966)
Taliev – Valerij Ivanovich Taliev (1872–1932)
Tamamsch. – Sophia G. Tamamschjan (1901–1981)
T.Amano – Tetsuo Amano (1912–1985)
Tamayo – Francisco Tamayo (1902–1985)
Tamiya – Hiroshi Tamiya (1903–1984)
Tammes – Tine (Jantine) Tammes (1871–1947)
Tamura – (1927–2007)
Tanaka – Chōzaburō Tanaka (1885–1976)
Tandang – Danilo N. Tandang (fl. 2005)
T.Anderson – Thomas Anderson (1832–1870)
Tang – Tsin Tang (1897–1984)
Tangav. – A. C. Tangavelou (fl. 2003)
Tansley – Arthur Tansley (1871–1955)
Tao Chen – Tao Chen (born 1963)
Tardieu – Marie Laure Tardieu (1902–1998)
Tärnström – Christopher Tärnström (1703–1746)
Tartenova – M. A. Tartenova (fl. 1957)
T.A.Stephenson – Thomas Alan Stephenson (1898–1961)
Tat. – Alexander Alexejevitch Tatarinow (1817–1886)
Tate – Ralph Tate (1840–1901)
Tateoka – Tsuguo Tateoka (1931–1994)
Tatew. – Misao Tatewaki (1899–1976)
Taton – Auguste Taton (1914–1989)
Taub. – Paul Hermann Wilhelm Taubert (1862–1897)
Tausch – Ignaz Friedrich Tausch (1793–1848)
Tawan – Cheksum Supiah Tawan (born 1959)
T.A.Williams – Thomas Albert Williams (1865–1900)
Taylor – Thomas Taylor (1786–1848)
T.Baskerv. – Thomas Baskerville (1812–1840)
T.Bastard – Thomas Bastard (died 1815)
T.Baytop. – Turhan Baytop (1920–2002)
T.B.Lee – Tchang Bok Lee (1919–2003)
T.B.Moore – Thomas Bather Moore (1850–1919)
T.Cao – Tong Cao (born 1946)
T.C.Chen – Tê Chao Chen (born 1926)
T.C.E.Fr. – Thore Christian Elias Fries (1886–1930)
T.Chen – T. Chen (fl. 1985)
T.C.Hsu – Tian Chuan Hsu (fl. 2006)
T.C.Huang – Tseng Chieng Huang (born 1931)
T.Cooke – Theodore Cooke (1836–1910)
T.C.Palmer – Thomas Chalkley Palmer (1860–1934)
T.C.Pan – Ti Chang Pan (born 1937)
T.C.Scheff. – Theodore Comstock Scheffer (1904–2003)
T.C.Wilson – Trevor C. Wilson (fl. 2012)
T.D.Jacobsen – Terry Dale Jacobsen (born 1950)
T.D.Macfarl. – Terry Desmond Macfarlane (born 1953)
T.D.Penn. – Terence Dale Pennington (born 1938)
T.Duncan – Thomas Duncan (born 1948)
T.Durand – Théophile Alexis Durand (1855–1912)
T.E.Díaz – (born 1949)
T.E.Hunt – (1913–1970)
Teijsm. – Johannes Elias Teijsmann (1808–1882)
Temb. – Yakov Gustavovich Temberg (born 1914)
Temminck – Coenraad Jacob Temminck (1778–1858)
Temp. – Joannes Albert Tempère (1847–1926)
Templeton – John Templeton (1766–1825)
Temu – Ruwa-Aichi Pius Cosmos Temu (born 1955)
Ten. – Michele Tenore (1780–1861)
Ten.-Woods – Julian Edmund Tenison-Woods (1832–1889)
Teo – Stephen P. Teo (fl. 1997)
Teodor. – Emanoil Constantin Teodoresco (1866–1949)
Tepper – Johann Gottlieb Otto Tepper (1841–1923)
Teppner — Herwig Teppner (born 1941)
T.E.Raven – Tamra Engelhorn Raven (born 1945)
T.F.Andrews – Theodore Francis Andrews (born 1917)
T.F.Daniel – Thomas Franklin Daniel (born 1954)
T.F.Forst. – Thomas Furley Forster (1761–1825)
T.G.Gao – Tian Gang Gao
T.G.Hartley – Thomas Gordon Hartley (1931–2016)
T.G.J.Rayner – Timothy Guy Johnson Rayner (born 1963)
T.G.Pearson – Thomas Gilbert Pearson (1873–1943)
T.Green – Ted Green (born 1921)
T.G.White – Theodore Greely White (1872–1901)
T.Hall. – Tony Hall (fl. 2011)
T.Hammer – Timothy Andrew Hammer (born 1984)
T.Hanb. – Thomas Hanbury (1832–1907)
Tharp – (1885–1964)
T.H.Chung – Tai Hyun Chung (1882–1971)
Theilade – Ida Theilade (fl. 1995)
Thell. – Albert Thellung (1881–1928)
Theophr. – Theophrastus (Tyrtamus) (c. 371 – c. 287 BC)
Thér. – Irénée Thériot (1859–1947)
Therese – Princess Theresa of Bavaria (1850–1925)
Th.Fr. – Theodor Magnus Fries (1832–1913)
Thiede – Joachim Thiede (born 1963)
Thiele – Friedrich Leopold Thiele (died 1841)
Thieret – John William Thieret (1926–2005)
Thijsse – Jacobus Pieter Thijsse (1863–1945)
Thines – Marco Thines (born 1978)
T.H.Nguyên – Tiên Hiêp Nguyên (born 1947)
Thomé – Otto Wilhelm Thomé (1840–1925)
Thomson – Thomas Thomson (1817–1878)
T.Hong – Tao Hong (fl. 1963)
Thonn. – Peter Thonning (1775–1848)
Thonner – Franz Thonner (1863–1928)
Thorel – Clovis Thorel (1833–1911)
Thorne – Robert Folger Thorne (1920–2015)
Thoroddsen – Þorvaldur (Thorvaldur) Thoroddsen (1855–1921)
Thorsen – Mike Thorsen (fl. 2009)
Thory – (1759–1827)
Thoth. – Krishnamurthy Thothathri (born 1929)
Thouars – Louis-Marie Aubert du Petit-Thouars (1758–1831)
Thouin – André Thouin (1747–1824)
Threlfall – S. Threlfall (fl. 1983)
Threlkeld – Caleb Threlkeld (1676–1728)
Thuill. – (1757–1822)
Thulin – Mats Thulin (born 1948)
Thüm. – Felix von Thümen (1839–1892)
Thunb. – Carl Peter Thunberg (1743–1828)
Thur. – Gustave Adolphe Thuret (1817–1875)
Thurb. – George Thurber (1821–1890)
Thurm. – Jules Thurmann (1804–1855)
Thurn – Everard Ferdinand im Thurn (1852–1932)
Thwaites – George Henry Kendrick Thwaites (1811–1882)
Th.Wolf – Theodor Wolf (1841–1924)
Tich – Nguyen Thien Tich (fl. 2010)
Tidestr. – Ivar Tidestrom (1864–1956)
Tiegh. – Philippe Édouard Léon van Tieghem (1839–1914)
Tilesius – Wilhelm Gottlieb Tilesius von Tilenau (1769–1857)
Tiling – Heinrich Sylvester Theodor Tiling (1818–1871)
Timb.-Lagr. – Édouard Timbal-Lagrave (1819–1888)
Timeroy – Marc Antoine Timeroy (1793–1856)
Timler – Friedrich Karl Timler (born 1914)
Tindale – Mary Douglas Tindale (1920–2011)
Tineo – Vincenzo Tineo (1791–1856)
Titius – Johann Daniel Titius (Tietz) (1729–1796)
T.Itô – (1868–1941)
Tjaden – William Louis Tjaden (born 1913)
T.J.Ayers – Tina J. Ayers (born 1957)
T.J.Chester – Thomas Jay Chester (born 1951)
T.Jensen – Thomas Jensen (1824–1877)
T.J.Motley – Timothy J. Motley (1966–2013)
T.J.Sørensen – Thorvald (Thorwald) Julius Sørensen (1902–1973)
T.J.Wallace – Thomas Jennings Wallace (born 1912)
T.J.Zhang – Tie Jun Zhang (born 1962)
T.Knight – Thomas Andrew Knight (1759–1838)
T.Kop. – Timo Juhani Koponen (born 1939)
T.Lebel – Teresa Lebel (fl. 2000)
T.Lestib. – Thémistocle Gaspard Lestiboudois (1797–1876)
T.L.Ming – Tien Lu Ming (born 1937)
T.Lobb – Thomas Lobb (1820–1894)
T.MacDoug. – (1895–1973)
T.Marsson – Theodor Friedrich Marsson (1816–1892)
T.M.Barkley – Theodore Mitchell Barkley (1934–2004)
T.M.Harris – Thomas Maxwell Harris (1903–1983)
T.Miyake – Tsutomu Miyake (born 1880)
T.Moore – Thomas Moore (1821–1887)
T.M.Reeve – Thomas M. Reeve (fl. 1989)
T.M.Salter – Terence Macleane Salter (1883–1969)
T.M.Schust. – Tanja M. Schuster (fl. 2011)
T.M.Williams – Tanisha M. Williams (fl. 2022)
T.Nees – Theodor Friedrich Ludwig Nees von Esenbeck (1787–1837)
T.N.McCoy – Thomas Nevil McCoy (born 1905)
T.N.Nguyen – Thi Nhan Nguyen (born 1953)
Tod. – Agostino Todaro (1818–1892)
Todzia – Carol Ann Todzia (fl. 1986)
Toelken – Hellmut Richard Toelken (born 1939)
Toledo – Joaquim Franco de Toledo (1905–1952)
Tolm. – Alexandr Innokentevich Tolmatchew (1903–1979)
Tomas. – Ruggero Tomaselli (1920–1982)
Tomb – Andrew Spencer Tomb (born 1943)
Tomka – Pavol Tomka (fl. 2024)
Tomm. – Muzio Giuseppe Spirito de Tommasini (1794–1879)
Torén – (1718–1753)
Torr. – John Torrey (1796–1873)
Torre – Antonio Rocha da Torre (1904–1995)
Torres – Maria Amelia Torres (1934–2011)
Torrend – Camille Torrend (1875–1961)
T.Osborn – Theodore George Bentley Osborn (1887–1973)
Totten – Henry Roland Totten (1892–1974)
Tourlet – Ernest Henry Tourlet (1843–1907)
Tourn. – Joseph Pitton de Tournefort (1656–1708)
Touton – Karl Touton (1858–1934)
Tovey – James Richard Tovey (1873–1922)
Towle – Brian J. Towle (fl. 2022)
Towner – Howard Frost Towner (born 1943)
T.P.Boyle – T. P. Boyle (fl. 2002)
T.P.Lin – Tsan Piao Lin (born 1948)
T.Post – (1858–1912)
T.P.Yi – Tong Pei Yi (fl. 1980)
T.Q.Nguyen – To Quyen Nguyen (fl. 1965)
Trab. – Louis Charles Trabut (1853–1929)
Tracey – John Geoffrey Tracey (1930–2004)
Tracy – Samuel Mills Tracy (1847–1920)
Trad. – John Tradescant the younger (1608–1662)
Trail – James William Helenus Trail (1851–1919)
Transeau – Edgar Nelson Transeau (1875–1960)
Tratt. – Leopold Trattinnick (1764–1849)
Traub – Hamilton Paul Traub (1890–1983)
Trautv. – Ernst Rudolf von Trautvetter (1809–1889)
T.R.Dudley – Theodore Robert Dudley (1936–1994)
Treat – Mary Lua Adelia Davis Treat (1830–1923)
Trécul – Auguste Trécul (1818–1896)
T.Reeves – Timothy Reeves (born 1947)
Trel. – William Trelease (1857–1945)
Treub – Melchior Treub (1851–1910)
Trevelyan – Walter Calverley Trevelyan (1797–1879)
Trevir. – Ludolf Christian Treviranus (1779–1864)
Trevis. – Vittore Benedetto Antonio Trevisan de Saint-Léon (1818–1897)
Trew – Christoph Jakob Trew (1695–1769)
Triana – José Jerónimo Triana (1834–1890)
Tricker – Charles William Bret Tricker (1852–1916)
Trimen – Henry Trimen (1843–1896)
Trin. – Carl Bernhard von Trinius (1778–1844)
Tripp – Frances E. Tripp (1832–1890)
Tristram – Henry Baker Tristram (1822–1906)
Troll – Wilhelm Troll (1897–1978)
Trotter – Alessandro Trotter (1874–1967)
Troupin – Georges M.D.J. Troupin (born 1923)
Trovó – Marcelo Trovó (fl. 2009)
Trudell – Harry W. Trudell (1879–1964)
Trudgen – Malcolm Eric Trudgen (born 1951)
True – Rodney Howard True (1866–1940)
Trumbull – James Hammond Trumbull (1821–1897)
Tscherm.-Seys. – Erich von Tschermak-Seysenegg (1871–1962)
Tscherm.-Woess – Elisabeth Tschermak-Woess (1917–2001)
Tsering – Jambey Tsering (fl. 2020)
Tsiang – (1898–1982)
T.S.Liu – Tang Shui Liu (1911–1997)
T.S.Nayar – T.S. Nayar (fl. 1998)
T.S.Palmer – Theodore Sherman Palmer (1860–1962)
T.S.Patrick – Thomas Stewart Patrick (1944–2019)
T.Spratt – Thomas Abel Brimage Spratt (1811–1888)
T.Stephenson – Thomas Stephenson (1865–1948)
Tsukaya – Hirokazu Tsukaya (born 1964)
Tswett – Mikhail Tsvet (1872–1919)
T.S.Ying – Tsun Shen Ying (born 1933)
T.Taylor – Thomas Taylor (1820–1910)
T.T.Chang – Tun Tschu Chang (1927–2006)
T.T.McIntosh – Terry T. McIntosh (born 1948)
T.T.Yu – Tse Tsun Yu (1908–1986)
Tubergen – Cornelis Gerrit van Tubergen (1844–1919)
Tuck. – Edward Tuckerman (1817–1886)
Tuckey – James Hingston Tuckey (1776–1816)
Tul. – Louis René Tulasne (1815–1885)
Tullb. – Sven Axel Teodor Tullberg (1852–1886)
Tunmann – Otto Tunmann (1867–1919)
Tur – Nuncia María Tur (born 1940)
Turcz. – Nicolai Stepanovitch Turczaninow (1796–1863)
Turland – Nicholas J. Turland (born 1966)
Turner – Dawson Turner (1775–1858)
Turpin – Pierre Jean François Turpin (1775–1840)
Turra – Antonio Turra (1730–1796)
Turrill – William Bertram Turrill (1890–1961)
Tussac – François Richard de Tussac (1751–1837)
Tutin – Thomas Gaskell Tutin (1908–1987)
Tuyama – Takasi Tuyama (1910–2000)
T.V.Egorova – Tatiana Vladimirovna Egorova (1930–2007)
T.West – Tuffen West (1823–1891)
Twining – Elizabeth Twining (1805–1889)
T.W.Nelson – Thomas W. Nelson (1928–2006)
T.Yamaz. – Takasi Yamazaki (1921–2007)
T.Yukawa – Tomohisa Yukawa (fl. 1992)
Tzanoud. – Dimitris Tzanoudakis (born 1950)
T.Z.Hsu – Ting Zhi Hsu (born 1941)
Tzvelev – (1925–2015)
U
U.C.La – Ung Chil La (fl. 1966)
Ucria – Bernardino da Ucria (1739–1796)
Udachin – R. A. Udachin (fl. 1970)
Udar – Ram Udar (1926–1985)
Udovicic – Frank Udovicic (born 1966)
Ueki – Robert Ueki (fl. 1973)
U.Hamann – Ulrich Hamann (born 1931)
Ulbr. – Oskar Eberhard Ulbrich (1879–1952)
Ule – Ernst Heinrich Georg Ule (1854–1915)
Uline – Edwin Burton Uline (1867–1933)
Ulmer – Torsten Ulmer (born 1970)
Umber – Ray E. Umber (1948–2018)
U.Müll.-Doblies – Ute Müller-Doblies (born 1938)
Underw. – Lucien Marcus Underwood (1853–1907)
Unger – Franz Joseph Andreas Nicolaus Unger (1800–1870)
Unwin – William Charles Unwin (1811–1887)
Upham – Warren Upham (1850–1934)
U.P.Pratov – Uktam Pratovich Pratov (1934–2018)
Upton – Walter Thomas Upton (1922–2012)
Urb. – Ignatz Urban (1848–1931)
Urbatsch – Lowell Edward Urbatsch (born 1942)
Ursch – Eugène Ursch (1882–1962)
Urtubey – Estrella Urtubey (fl. 1999)
Urum. – Ivan Kroff Urumoff (1857–1937)
U.Schneid. – Ulrike Schneider (born 1936)
Usteri – Paul Usteri (1768–1831)
Utsch – Jacob Utsch (1824–1901)
Utteridge – Timothy Michael Arthur Utteridge (born 1970)
V
V.A.Albert – Victor Anthony Albert (born 1964)
Vachell – Eleanor Vachell (1879–1948)
V.A.Funk – Vicki Ann Funk (1947–2019)
Vaga – August Vaga (1893–1960)
Vahl – Martin Vahl (1749–1804)
Vail – Anna Murray Vail (1863–1955)
Vaill. – Sébastien Vaillant (1669–1722)
Vain. – Edvard (Edward) August Vainio (1853–1929)
Val. – see Valeton
Valck.Sur. – Jan Valckenier Suringar (1864–1932)
Valentine – David Henriques Valentine (1912–1987)
Valeton – Theodoric Valeton (1855–1929)
Vallentin – Elinor Frances Vallentin (1873–1924)
Vallès-Xirau – Joan Vallès-Xirau (born 1959)
Valls – José Francisco Montenegro Valls (born 1945)
V.A.Matthews – Victoria Ann Matthews (born 1941)
Vand. – Domenico Agostino Vandelli (1735–1816)
Vandas – (1861–1923)
Van der Byl – Paul Andries van der Bijl (1888–1939)
van der Werff – (born 1946)
Van Heurck – (1838–1909)
Vanhöffen – Ernst Vanhöffen (1858–1918)
Van Houtte – Louis Benoit Van Houtte (1810–1876)
Vanij. – Ongkarn Vanijajiva (born 1977)
V.A.Nikitin – Vladimir Alekseevich Nikitin (1906–1974)
Vaniot – Eugene Vaniot (1845–1913)
van Jaarsv. – Ernst van Jaarsveld (born 1953)
Van Scheepen – Johan Van Scheepen (fl. 1997)
Varapr. – K.S. Varaprasad (fl. 2009)
Vasey – George Vasey (1822–1893)
Vassilcz. – (1903–1995)
Vasudeva – R.S. Vasudeva (fl. 1953)
Vatke – Wilhelm Vatke (1849–1889)
Vattimo – Ítalo de Vattimo (born 1930)
Vaughan – John Vaughan (1855–1922)
Vaupel – Friedrich Karl Johann Vaupel (1876–1927)
Vauvel – Léopold Eugène Vauvel (1848–1915)
Vavilov – Nikolai Vavilov (1887–1943)
v.A.v.R. – Cornelis Rugier Willem Karel van Alderwerelt van Rosenburgh (1863–1936) (This has been replaced by the abbreviation Alderw. but still appears in older texts)
V.A.W.Graham – Victoria Anne Wassell Graham (born 1950) (birth name Victoria Anne Wassell Smith)
V.A.W.Sm. – Victoria Anne Wassell Smith (born 1950) (married name Victoria Anne Wassell Graham)
V.B.Heinrich – Volker B. Heinrich (fl. 2009)
V.Chandras. – Veerichetty Chandrasekaran (born 1941)
V.Cordus – Valerius Cordus (1515–1544)
V.C.Souza – Vinicius Castro Souza (born 1954)
V.D.Matthews – Velma Dare Matthews (1904–1958)
V.E.Avet. – Vandika Ervandovna Avetisyan (born 1928)
V.E.Grant – Verne Edwin Grant (1917–2007)
Veillon – Jean-Marie Veillon (fl. 1982)
Veitch – John Gould Veitch (1839–1870)
Veldk. – Jan Frederik Veldkamp (1941–2017)
Velen. – Josef Velenovský (1858–1949)
Vell. – José Mariano da Conceição Vellozo (1742–1811)
Velley – Thomas Velley (1749–1806)
Velloso – Joaquim Velloso de Miranda (1733–1815)
Vent. – Étienne Pierre Ventenat (1757–1808)
Vente – M. Vente (fl. 1982)
Verdc. – Bernard Verdcourt (1925–2011)
Verloove – Filip Verloove (fl. 2004)
Verschaff. – Ambroise Colette Alexandre Verschaffelt (1825–1886)
Vesque – Julien Joseph Vesque (1848–1895)
Vest – Lorenz Chrysanth von Vest (1776–1840)
Vězda – Antonín Vězda (1920–2008)
V.Gibbs – Vicary Gibbs (1853–1932)
V.Higgins – Vera Higgins (1892–1968)
Vickery – Joyce Winifred Vickery (1908–1979)
Vict. – Conrad Kirouac, Brother Marie-Victorin (1885–1944)
Vida – (born 1935)
Vidal – António José Vidal (1808–1879)
Vidal-Russ. – Romina Vidal-Russell (fl. 2010)
Vieill. – Eugène Vieillard (1819–1896)
Viera y Clavijo – José de Viera y Clavijo (1731–1813)
Vierh. – Friedrich Karl Max Vierhapper (1876–1932)
Vietz – Ferdinand Bernhard Vietz (1772–1815)
Vig. – Louis Guillaume Alexandre Viguier (1790–1867)
Vigo – Josep Vigo i Bonada (born 1937)
Vignolo – Ferdinando Vignolo-Lutati (1878–1965)
Vilh. – Jan Vilhelm (1876–1931)
Vill. – Dominique Villars (1745–1814)
Villada – Manuel Maria Villada (1841–1924)
Villar – (1871–1951)
Villarreal – José Angel Villarreal (born 1956)
Villarroel – Daniel Villarroel (born 1981)
Villar-Seoane – Liliana Mónica Villar de Seoane (born 1953)
Villaseñor – José Luis Villaseñor (born 1954)
Vilm. – Pierre Louis François Lévêque de Vilmorin (1816–1860)
Vink – Willem Vink (born 1931)
Virot – Robert Virot (1915–2002)
Vis. – Roberto de Visiani (1800–1878)
Vitman – Fulgenzio Vitman (1728–1806)
V.T.Pham – Van The Pham (born 1981)
Vitt – Dale Hadley Vitt (born 1944)
Vittad. – Carlo Vittadini (1800–1865)
Viv. – Domenico Viviani (1772–1840)
Viv.-Morel – Joseph Victor Viviand-Morel (1843–1915)
V.J.Chapm. – Valentine Jackson Chapman (1910–1980)
V.Kučera – Viktor Kučera (fl. 2005)
V.Kunca – Vladimír Kunca (fl. 2015)
Vlădescu – Mihai Vlădescu (1865–1944)
Vl.V.Nikitin – Vladimir Vladimirovich Nikitin (1962–2007)
V.M.Badillo – Victor Manuel Badillo (1920–2008)
V.M.Bates – Vernon M. Bates (fl. 1984)
Vöcht. – Hermann Vöchting (1847–1917)
Voeltzk. – Alfred Voeltzkow (1860–1947)
Vogel – Julius Rudolph Theodor Vogel (1812–1841)
Vogt – Robert M. Vogt (born 1957)
Voigt – Joachim Otto Voigt (1798–1843)
Volkart – (1873–1951)
Volkens – Georg Ludwig August Volkens (1855–1917)
Vollesen – Kaj Børge Vollesen (born 1946)
Voronts. – Maria Sergeevna Vorontsova (born 1979)
Voss – Andreas Voss (1857–1924)
V.P.Castro – Vitorino Paiva Castro (born 1942)
V.P.Prasad – Vadhyaruparambil Prabhakaran Prasad (born 1960)
V.Prakash – Ved Prakash (1957–2000)
Vrugtman – Freek Vrugtman (1927–2022)
V.Singh – Vijendra Singh (born 1947)
V.S.White – Violetta Susan Elizabeth White (1875–1949)
V.Ten. – Vincenzo Tenore (1825–1886)
Vugt – Rogier van Vugt (fl. 2009)
Vuk. – Ljudevit Farkaš Vukotinović (1813–1893)
Vural – Mecit Vural (fl. 1983)
Vved. – Alexei Ivanovich Vvedensky (1898–1972)
V.V.Byalt – Vyacheslav Vyacheslavovich Byalt (born 1966)
V.V.Nikitin – Vasilii Vasilevich Nikitin (1906–1988)
V.W.Steinm. – (fl. 1995)
| List of botanists by author abbreviation (T–V) | Biology | 5,918
20,656,228 | https://en.wikipedia.org/wiki/Maize | Maize (Zea mays), also known as corn in North American English, is a tall stout grass that produces cereal grain. It was domesticated by indigenous peoples in southern Mexico about 9,000 years ago from wild teosinte. Native Americans planted it alongside beans and squashes in the Three Sisters polyculture. The leafy stalk of the plant gives rise to male inflorescences or tassels which produce pollen, and female inflorescences called ears. The ears yield grain, known as kernels or seeds. In modern commercial varieties, these are usually yellow or white; other varieties can be of many colors.
Maize relies on humans for its propagation. Since the Columbian exchange, it has become a staple food in many parts of the world, with the total production of maize surpassing that of wheat and rice. Much maize is used for animal feed, whether as grain or as the whole plant, which can either be baled or made into the more palatable silage. Sugar-rich varieties called sweet corn are grown for human consumption, while field corn varieties are used for animal feed, for uses such as cornmeal or masa, corn starch, corn syrup, pressing into corn oil, alcoholic beverages like bourbon whiskey, and as chemical feedstocks including ethanol and other biofuels.
Maize is cultivated throughout the world; a greater weight of maize is produced each year than any other grain. In 2020, world production was 1.1 billion tonnes. It is afflicted by many pests and diseases; two major insect pests, European corn borer and corn rootworms, have each caused annual losses of a billion dollars in the US. Modern plant breeding has greatly increased output and qualities such as nutrition, drought tolerance, and tolerance of pests and diseases. Much maize is now genetically modified.
As a food, maize is used to make a wide variety of dishes including Mexican tortillas and tamales, Italian polenta, and American hominy grits. Maize protein is low in some essential amino acids, and the niacin it contains only becomes available if freed by alkali treatment. In Mesoamerica, maize is deified as a maize god and depicted in sculptures.
History
Pre-Columbian development
Maize requires human intervention for its propagation. The kernels of its naturally-propagating teosinte ancestor fall off the cob on their own, while those of domesticated maize do not. All maize arose from a single domestication in southern Mexico about 9,000 years ago. The oldest surviving maize types are those of the Mexican highlands. Maize spread from this region to the lowlands and over the Americas along two major paths. The centre of domestication was most likely the Balsas River valley of south-central Mexico. Maize reached highland Ecuador at least 8000 years ago. It reached lower Central America by 7600 years ago, and the valleys of the Colombian Andes between 7000 and 6000 years ago.
The earliest maize plants grew a single, small ear per plant. The Olmec and Maya cultivated maize in numerous varieties throughout Mesoamerica; they cooked, ground and processed it through nixtamalization. By 3000 years ago, maize was central to Olmec culture, including their calendar, language, and myths.
The Mapuche people of south-central Chile cultivated maize along with quinoa and potatoes in pre-Hispanic times. Before the expansion of the Inca Empire, maize was traded and transported as far south as 40° S in Melinquina, Lácar Department, Argentina, probably brought across the Andes from Chile.
Columbian exchange
After the arrival of Europeans in 1492, Spanish settlers consumed maize, and explorers and traders carried it back to Europe. Spanish settlers much preferred wheat bread to maize. Maize flour could not be substituted for wheat for communion bread, since in Christian belief at that time only wheat could undergo transubstantiation and be transformed into the body of Christ.
Maize spread to the rest of the world because of its ability to grow in diverse climates. It was cultivated in Spain just a few decades after Columbus's voyages and then spread to Italy, West Africa and elsewhere. By the 17th century, it was a common peasant food in Southern Europe. By the 18th century, it was the chief food of the southern French and Italian peasantry, especially as polenta in Italy.
When maize was introduced into Western farming systems, it was welcomed for its productivity. However, a widespread problem of malnutrition soon arose wherever it had become a staple food. Indigenous Americans had learned to soak maize in alkali-water — made with ashes and lime — since at least 1200–1500 BC, creating the process of nixtamalization. They did this to liberate the corn hulls, but coincidentally it also liberated the B-vitamin niacin, the lack of which caused pellagra. Once alkali processing and dietary variety were understood and applied, pellagra disappeared in the developed world. The development of high-lysine maize and the promotion of a more balanced diet have contributed to its demise. Pellagra still exists in food-poor areas and refugee camps where people survive on donated maize.
Names
The name maize derives from Spanish maíz, the Spanish form of a Taíno word. The Swedish botanist Carl Linnaeus used the common name maize as the species epithet in Zea mays. The name maize is preferred in formal, scientific, and international usage as a common name because it refers specifically to this one grain, unlike corn, which has a complex variety of meanings that vary by context and geographic region. Most countries primarily use the term maize, and the name corn is used mainly in the United States and a handful of other English-speaking countries. In countries that primarily use the term maize, the word corn may denote any cereal crop, varying geographically with the local staple, such as wheat in England and oats in Scotland or Ireland. The usage of corn for maize started as a shortening of "Indian corn" in 18th-century North America.
The historian of food Betty Fussell writes in an article on the history of the word corn in North America that "[t]o say the word corn is to plunge into the tragi-farcical mistranslations of language and history". Similar to the British usage, the Spanish referred to maize as , a generic term for cereal grains, as did Italians with the term . The British later referred to maize as Turkey wheat, Turkey corn, or Indian corn; Fussell comments that "they meant not a place but a condition, a savage rather than a civilized grain".
International groups such as the Centre for Agriculture and Bioscience International consider maize the preferred common name. The word maize is used by the UN's Food and Agriculture Organization, and in the names of the International Maize and Wheat Improvement Center of Mexico, the Indian Institute of Maize Research, the Maize Association of Australia, the National Maize Association of Nigeria, the National Maize Association of Ghana, the Maize Trust of South Africa, and the Zimbabwe Seed Maize Association.
Structure and physiology
Maize is a tall annual grass with a single stem, ranging in height from to . The long narrow leaves arise from the nodes or joints, alternately on opposite sides on the stalk. Maize is monoecious, with separate male and female flowers on the same plant. At the top of the stem is the tassel, an inflorescence of male flowers; their anthers release pollen, which is dispersed by wind. Like other pollen, it is an allergen, but most of it falls within a few meters of the tassel and the risk is largely restricted to farm workers.
The female inflorescence, some way down the stem from the tassel, is first seen as a silk, a bundle of soft tubular hairs, one for the carpel in each female flower. When pollinated, each carpel develops into a kernel, often called a seed; botanically, as in all grasses, it is a fruit, fused with the seed coat to form a caryopsis. A whole female inflorescence develops into an ear or corncob, enveloped by multiple leafy layers or husks.
The leaf most closely associated with a particular developing ear, together with those above it, contributes over three quarters of the carbohydrate (starch) that fills the grain.
The grains are usually yellow or white in modern varieties; other varieties have orange, red, brown, blue, purple, or black grains. They are arranged in 8 to 32 rows around the cob; there can be up to 1200 grains on a large cob. Yellow maizes derive their color from carotenoids; red maizes are colored by anthocyanins and phlobaphenes; and orange and green varieties may contain combinations of these pigments.
Maize has short-day photoperiodism, meaning that it requires nights of a certain length to flower. Flowering further requires enough warm days above . The control of flowering is set genetically; the physiological mechanism involves the phytochrome system. Tropical cultivars can be problematic if grown in higher latitudes, as the longer days can make the plants grow tall instead of setting seed before winter comes. On the other hand, growing tall rapidly could be convenient for producing biofuel.
Immature maize shoots accumulate a powerful antibiotic substance, 2,4-dihydroxy-7-methoxy-1,4-benzoxazin-3-one (DIMBOA), which provides a measure of protection against a wide range of pests. Because of its shallow roots, maize is susceptible to droughts, intolerant of nutrient-deficient soils, and prone to being uprooted by severe winds.
Genomics and genetics
Maize is diploid with 20 chromosomes. 83% of allelic variation within the genome derives from its teosinte ancestors, primarily due to the freedom of Zea species to outcross. Barbara McClintock used maize to validate her transposon theory of "jumping genes", for which she won the 1983 Nobel Prize in Physiology or Medicine. Maize remains an important model organism for genetics and developmental biology. The MADS-box motif is involved in the development of maize flowers.
The Maize Genetics and Genomics Database is funded by the US Department of Agriculture to support maize research. The International Maize and Wheat Improvement Center maintains a large collection of maize accessions tested and cataloged for insect resistance. In 2005, the US National Science Foundation, Department of Agriculture, and the Department of Energy formed a consortium to sequence the maize genome. The resulting DNA sequence data was deposited immediately into GenBank, a public repository for genome-sequence data. Sequencing of the maize genome was completed in 2008. In 2009, the consortium published results of its sequencing effort. The genome, 85% of which is composed of transposons, contains 32,540 genes. Much of it has been duplicated and reshuffled by helitrons, a group of transposable elements within maize's DNA.
Breeding
Conventional breeding
Maize breeding in prehistory resulted in large plants producing large ears. Modern breeding began with individuals who selected highly productive varieties in their fields and then sold seed to other farmers. James L. Reid was one of the earliest and most successful, developing Reid's Yellow Dent in the 1860s. These early efforts were based on mass selection (a row of plants is grown from seeds of one parent), and the choosing of plants after pollination (which means that only the female parents are known). Later breeding efforts included ear to row selection (C. G. Hopkins c. 1896), hybrids made from selected inbred lines (G. H. Shull, 1909), and the highly successful double cross hybrids using four inbred lines (D. F. Jones c. 1918, 1922). University-supported breeding programs were especially important in developing and introducing modern hybrids.
Since the 1940s, the best strains of maize have been first-generation hybrids made from inbred strains that have been optimized for specific traits, such as yield, nutrition, drought, pest and disease tolerance. Both conventional cross-breeding and genetic engineering have succeeded in increasing output and reducing the need for cropland, pesticides, water and fertilizer. Evidence is conflicting on whether maize yield potential per individual plant has increased over the past few decades; the gains in realized yield appear to be associated instead with leaf angle, lodging resistance, tolerance of high plant density, disease/pest tolerance, and other agronomic traits.
Certain varieties of maize have been bred to produce many ears; these are the source of the "baby corn" used as a vegetable in Asian cuisine. A fast-flowering variety named mini-maize was developed to aid scientific research, as multiple generations can be obtained in a single year. One strain called olotón has evolved a symbiotic relationship with nitrogen-fixing microbes, which provides the plant with 29%–82% of its nitrogen. The International Maize and Wheat Improvement Center (CIMMYT) operates a conventional breeding program to provide optimized strains. The program began in the 1980s. Hybrid seeds are distributed in Africa by its Drought Tolerant Maize for Africa project.
Tropical landraces remain an important and underused source of resistance alleles – both those for disease and for herbivores. Such alleles can then be introgressed into productive varieties. Rare alleles for this purpose were discovered by Dao and Sood, both in 2014. In 2018, Zerka Rashid of CIMMYT used its association mapping panel, developed for tropical drought tolerance traits, to find new genomic regions providing sorghum downy mildew resistance, and to further characterize known differentially methylated regions.
Genetic engineering
Genetically modified maize was one of the 26 genetically engineered food crops grown commercially in 2016. The vast majority of this is Bt maize. Genetically modified maize has been grown since 1997 in the United States and Canada; by 2016, 92% of the US maize crop was genetically modified. As of 2011, herbicide-tolerant maize and insect-resistant maize varieties were each grown in over 20 countries.
In September 2000, up to $50 million worth of food products were recalled due to the presence of Starlink genetically modified corn, which had been approved only for animal consumption.
Origin
External phylogeny
The maize genus Zea is relatively closely related to sorghum, both being in the PACMAD clade of grasses, and much more distantly to rice and wheat, which are in the other major group of grasses, the BOP clade. It is closely related to Tripsacum, gamagrass.
Maize and teosinte
Maize is the domesticated variant of the four species of teosintes, which are its crop wild relatives. The teosinte origin theory was proposed by the Russian botanist Nikolai Ivanovich Vavilov in 1931, and the American Nobel Prize-winner George Beadle in 1932. The two plants have dissimilar appearance, maize having a single tall stalk with multiple leaves and teosinte being a short, bushy plant. The difference between the two is largely controlled by differences in just two genes, called grassy tillers-1 (gt1) and teosinte branched-1 (tb1). In the late 1930s, Paul Mangelsdorf suggested that domesticated maize was the result of a hybridization event between an unknown wild maize and a species of Tripsacum, a related genus; this has been refuted by modern genetic testing.
In 2004, John Doebley identified Balsas teosinte, Zea mays subsp. parviglumis, native to the Balsas River valley in Mexico's southwestern highlands, as the crop wild relative genetically most similar to modern maize. The middle part of the short Balsas River valley is the likely location of early domestication. Stone milling tools with maize residue have been found in an 8,700 year old layer of deposits in a cave not far from Iguala, Guerrero. Doebley and colleagues showed in 2002 that maize had been domesticated only once, about 9,000 years ago, and then spread throughout the Americas.
Maize pollen dated to 7,300 years ago from San Andres, Tabasco has been found on the Caribbean coast. A primitive corn was being grown in southern Mexico, Central America, and northern South America 7,000 years ago. Archaeological remains of early maize ears, found at Guila Naquitz Cave in the Oaxaca Valley, are roughly 6,250 years old; the oldest ears from caves near Tehuacan, Puebla, are 5,450 years old.
Spreading to the north
Around 4,500 years ago, maize began to spread to the north. In the United States, maize was first cultivated at several sites in New Mexico and Arizona about 4,100 years ago. During the first millennium AD, maize cultivation spread more widely in the areas to the north. In particular, the large-scale adoption of maize agriculture and consumption in eastern North America took place about A.D. 900. Native Americans cleared large forest and grassland areas for the new crop. The rise in maize cultivation 500 to 1,000 years ago in what is now the southeastern United States corresponded with a decline of freshwater mussels, which are very sensitive to environmental changes.
Agronomy
Growing
Because it is cold-intolerant, in the temperate zones maize must be planted in the spring. Its root system is generally shallow, so the plant is dependent on soil moisture. As a plant that uses C4 carbon fixation, maize is a considerably more water-efficient crop than plants that use C3 carbon fixation such as alfalfa and soybeans. Maize is most sensitive to drought at the time of silk emergence, when the flowers are ready for pollination. In the United States, a good harvest was traditionally predicted if the maize was "knee-high by the Fourth of July", although modern hybrids generally exceed this growth rate. Maize used for silage is harvested while the plant is green and the fruit immature. Sweet corn is harvested in the "milk stage", after pollination but before starch has formed, between late summer and early to mid-autumn. Field maize is left in the field until very late in the autumn to thoroughly dry the grain, and may, in fact, sometimes not be harvested until winter or even early spring. The importance of sufficient soil moisture is shown in many parts of Africa, where periodic drought regularly causes maize crop failure and consequent famine. Although it is grown mainly in wet, hot climates, it can thrive in cold, hot, dry or wet conditions, meaning that it is an extremely versatile crop.
Maize was planted by the Native Americans in small hills of soil, in the polyculture system called the Three Sisters. Maize provided support for beans; the beans provided nitrogen derived from nitrogen-fixing rhizobia bacteria which live on the roots of beans and other legumes; and squashes provided ground cover to stop weeds and inhibit evaporation by providing shade over the soil.
Harvesting
Sweet corn, harvested earlier than maize grown for grain, grows to maturity in a period of from 60 to 100 days according to variety. An extended sweet corn harvest, picked at the milk stage, can be arranged either by planting a selection of varieties which ripen earlier and later, or by planting different areas at fortnightly intervals.
Maize harvested as a grain crop can be kept in the field a relatively long time, even months, after the crop is ready to harvest; it can be harvested and stored in the husk leaves if kept dry.
According to the U.S. Department of Agriculture, in the four decades from 1855 to 1894 the amount of labor required to produce one bushel of maize declined from four hours and thirty-four minutes to only forty-one minutes. Before 1940, most maize in North America was harvested by hand. This involved a large number of workers and associated social events (husking or shucking bees). From the 1850s onward, some machinery became available to partially mechanize the processes, such as one- and two-row mechanical pickers (picking the ear, leaving the stover) and corn binders, which are reaper-binders designed specifically for maize. The latter produce sheaves that can be shocked. By hand or mechanical picker, the entire ear is harvested, which requires a separate operation of a maize sheller to remove the kernels from the ear. Whole ears of maize were often stored in corn cribs, sufficient for some livestock feeding uses. Today corn cribs with whole ears, and corn binders, are less common because most modern farms harvest the grain from the field with a combine harvester and store it in bins. The combine with a corn head (with points and snap rolls instead of a reel) does not cut the stalk; it simply pulls the stalk down. The stalk continues downward and is crumpled into a mangled pile on the ground, where it usually is left to become organic matter for the soil. The ear of maize is too large to pass between slots in a plate as the snap rolls pull the stalk away, leaving only the ear and husk to enter the machinery. The combine separates the husk and the cob, keeping only the kernels.
Grain storage
Drying is vital to prevent or at least reduce damage by mould fungi, which contaminate the grain with mycotoxins. Aspergillus and Fusarium spp. are the most common mycotoxin sources, and accordingly important in agriculture. If the moisture content of the harvested grain is too high, grain dryers are used to reduce the moisture content by blowing heated air through the grain. This can require large amounts of energy in the form of combustible gases (propane or natural gas) and electricity to power the blowers.
Production
Maize is widely cultivated throughout the world, and a greater weight of maize is produced each year than any other grain. In 2020, total world production was 1.16 billion tonnes, led by the United States with 31.0% of the total (table). China produced 22.4% of the global total.
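The percentage shares quoted above translate directly into absolute tonnages. As an illustrative sketch (figures taken from the paragraph above, rounded to the nearest million tonnes):

```python
# Rough tonnage estimates implied by the 2020 production shares quoted above.
WORLD_TOTAL_TONNES = 1.16e9  # 2020 world maize production, in tonnes

shares = {"United States": 0.310, "China": 0.224}

for country, share in shares.items():
    tonnes = WORLD_TOTAL_TONNES * share
    print(f"{country}: {tonnes / 1e6:.0f} million tonnes")
# United States: 360 million tonnes
# China: 260 million tonnes
```

The two leading producers thus account for over half of world output between them.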
Pests
Many pests can affect maize growth and development, including invertebrates, weeds, and pathogens.
Maize is susceptible to a large number of fungal, bacterial, and viral plant diseases. Those of economic importance include diseases of the leaf, smuts such as corn smut, ear rots and stalk rots. Northern corn leaf blight damages maize throughout its range, whereas banded leaf and sheath blight is a problem in Asia. Some fungal diseases of maize produce potentially dangerous mycotoxins such as aflatoxin. In the United States, major diseases include tar spot, bacterial leaf streak, gray leaf spot, northern corn leaf blight, and Goss's wilt; in 2022, the most damaging disease was tar spot, which caused losses of 116.8 million bushels.
Maize sustains a billion dollars' worth of losses annually in the US from each of two major insect pests: the European corn borer or ECB (Ostrinia nubilalis) and the corn rootworms (Diabrotica spp.: the western, northern, and southern corn rootworms). Another serious pest is the fall armyworm (Spodoptera frugiperda).
The maize weevil (Sitophilus zeamais) is a serious pest of stored grain. The Northern armyworm, Oriental armyworm or Rice ear-cutting caterpillar (Mythimna separata) is a major pest of maize in Asia.
Nematodes too are pests of maize. It is likely that every maize plant harbors some nematode parasites, and populations of Pratylenchus lesion nematodes in the roots can be "enormous". The effects on the plants include stunting, sometimes of whole fields, sometimes in patches, especially when there is also water stress and poor control of weeds.
Many plants, both monocots (grasses) such as Echinochloa crus-galli (barnyard grass) and dicots (forbs) such as Chenopodium and Amaranthus may compete with maize and reduce crop yields. Control may involve mechanical weed removal, flame weeding, or herbicides.
Uses
Culinary
Maize and cornmeal (ground dried maize) constitute a staple food in many regions of the world. Maize is used to produce the food ingredient cornstarch. Maize starch can be hydrolyzed and enzymatically treated to produce high fructose corn syrup, a sweetener. Maize may be fermented and distilled to produce Bourbon whiskey. Corn oil is extracted from the germ of the grain.
In prehistoric times, Mesoamerican women used a metate quern to grind maize into cornmeal. After ceramic vessels were invented the Olmec people began to cook maize together with beans, improving the nutritional value of the staple meal. Although maize naturally contains niacin, an important nutrient, it is not bioavailable without the process of nixtamalization. The Maya used nixtamal meal to make porridges and tamales.
Maize is a staple of Mexican cuisine. Masa, the dough made from nixtamalized maize, is the main ingredient of corn tortillas, tamales, atole, and many other dishes of Mexican and Central American food.
The corn smut fungus, known as huitlacoche, which grows on maize, is a Mexican delicacy.
Coarse maize meal is made into a thick porridge in many cultures: from the polenta of Italy, the angu of Brazil, the mămăligă of Romania, to cornmeal mush in the US (or hominy grits in the Southern US) or the food called mieliepap in South Africa and sadza, nshima, ugali and other names in other parts of Africa. Introduced into Africa by the Portuguese in the 16th century, maize has become Africa's most important staple food crop.
Sweet corn, a genetic variety that is high in sugars and low in starch, is eaten in the unripe state as corn on the cob.
Nutritional value
Raw, yellow, sweet maize kernels are composed of 76% water, 19% carbohydrates, 3% protein, and 1% fat (table). In a 100-gram serving, maize kernels provide 86 calories and are a good source (10–19% of the Daily Value) of the B vitamins thiamin, niacin (if freed by nixtamalization), pantothenic acid (B5) and folate. Maize has suboptimal amounts of the essential amino acids tryptophan and lysine, which accounts for its lower status as a protein source. The proteins of beans and legumes complement those of maize.
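As a rough cross-check of the figures above, the general Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat) can be applied to the stated macronutrient content per 100 g. This is only an illustrative sketch: it comes out somewhat above the published 86-calorie figure, because general factors ignore fiber content and food-specific digestibility.

```python
# Energy estimate per 100 g of raw sweet maize kernels, using the
# general Atwater factors. Macronutrient grams are from the text above.
ATWATER = {"carbohydrate": 4, "protein": 4, "fat": 9}   # kcal per gram
per_100g = {"carbohydrate": 19, "protein": 3, "fat": 1}  # grams per 100 g

kcal = sum(per_100g[m] * ATWATER[m] for m in per_100g)
print(kcal)  # 97 -- vs. the published 86 kcal, which uses food-specific factors
```

The gap between the simple estimate and the published value is typical: nutrient databases use specific conversion factors rather than the general 4/4/9 rule.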
Animal feed
Maize is a major source of animal feed. As a grain crop, the dried kernels are used as feed. They are often kept on the cob for storage in a corn crib, or they may be shelled off for storage in a grain bin. When the grain is used for feed, the rest of the plant (the corn stover) can be used later as fodder, bedding (litter), or soil conditioner. When the whole maize plant (grain plus stalks and leaves) is used for fodder, it is usually chopped and made into silage, as this is more digestible and more palatable to ruminants than the dried form. Traditionally, maize was gathered into shocks after harvesting, where it dried further. It could then be stored for months until fed to livestock. Silage can be made in silos or in silage wrappers. In the tropics, maize is harvested year-round and fed as green forage to the animals. Baled cornstalks offer an alternative to hay for animal feed, alongside direct grazing of maize grown for this purpose.
Chemicals
Starch from maize can be made into plastics, fabrics, adhesives, and many other chemical products. Corn steep liquor, a plentiful watery byproduct of maize wet milling process, is used in the biochemical industry and research as a culture medium to grow microorganisms.
Biofuel
Feed maize is being used for heating; specialized corn stoves (similar to wood stoves) use either feed maize or wood pellets to generate heat. Maize cobs can be used as a biomass fuel source. Home-heating furnaces which use maize kernels as a fuel have a large hopper that feeds the kernels into the fire. Maize is used as a feedstock for the production of ethanol fuel. The price of food is indirectly affected by the use of maize for biofuel production: use of maize for biofuel production increases the demand, and therefore the price of maize. A pioneering biomass gasification power plant in Strem, Burgenland, Austria, started operating in 2005. It would be possible to create diesel from the biogas by the Fischer–Tropsch method.
In human culture
In Mesoamerica, maize is seen as a vital force, deified as a maize god, usually female. In the United States, maize ears are carved into column capitals in the United States Capitol building. The Corn Palace in Mitchell, South Dakota, uses cobs and ears of colored maize to implement a mural design that is recycled annually. The concrete Field of Corn sculpture in Dublin, Ohio depicts hundreds of ears of corn in a grassy field. A maize stalk with two ripe ears is depicted on the reverse of the Croatian 1 lipa coin, minted since 1993.
See also
Detasseling
Post-harvest losses (grains)
Push–pull technology, pest control strategy for maize and sorghum
Zein
References
Further reading
Byerlee, Derek. "The globalization of hybrid maize, 1921–70." Journal of Global History 15.1 (2020): 101–122.
Clampitt, Cynthia. Maize: How Corn Shaped the U.S. Heartland (2015)
External links
Maize Genetics and Genomics Database
Maize Genetics Cooperation Stock Center
Maize
Zea (plant)
Agriculture in Mesoamerica
Crops originating from Mexico
Demulcents
Energy crops
Flora of Mexico
Flora of Guatemala
Fruit vegetables
Grasses of Mexico
Plant models
Pre-Columbian Native American cuisine
Post-Columbian Native American cuisine
Pre-Columbian Southwest cuisine
Staple foods
Tropical agriculture
Taxa named by Carl Linnaeus
Plants described in 1753
Symbols of Illinois | Maize | Biology | 6,276 |
11,485,454 | https://en.wikipedia.org/wiki/YybP-ykoY%20leader | The yybP-ykoY leader RNA element was originally discovered in E. coli during a large scale screen and was named SraF. This family was later found to exist upstream of related families of protein genes in many bacteria, including the yybP and ykoY genes in B. subtilis. The specific functions of these proteins are unknown, but this structured RNA element may be involved in their genetic regulation as a riboswitch.
The yybP-ykoY element was later proposed to be manganese-responsive after another associated family of genes, YebN/MntP, was shown to encode Mn2+ efflux pumps in several bacteria. Genetic data and a crystal structure confirmed that yybP-ykoY is a manganese riboswitch that directly binds Mn2+.
References
External links
Cis-regulatory RNA elements | YybP-ykoY leader | Chemistry | 182 |
51,507,225 | https://en.wikipedia.org/wiki/All%20Tomorrows | All Tomorrows: A Billion Year Chronicle of the Myriad Species and Mixed Fortunes of Man is a 2006 work of science fiction and speculative evolution written and illustrated by the Turkish artist C. M. Kosemen under the pen name Nemo Ramjet. It explores a hypothetical future path of human evolution set from the near future to a billion years from the present. Several future human species evolve through natural means and through genetic engineering, conducted by both humans themselves and by a mysterious and superior alien species called the Qu.
Inspired by the science fiction works of Olaf Stapledon and Edward Gibbon's The History of the Decline and Fall of the Roman Empire, Kosemen worked on All Tomorrows from 2003 to the publication of the book as a free PDF file online in 2006. Kosemen intends to eventually publish a greatly expanded All Tomorrows in physical form, with new text and updated illustrations.
Summary
Centuries following humanity terraforming and colonizing Mars, a brief but catastrophic interplanetary war takes place between Mars and Earth costing both parties billions of lives. The two planets eventually make peace with each other, and a large-scale colonization initiative is carried out by genetically engineered humans called Star People throughout the galaxy.
Humans (now Star People) then encounter a malevolent and superior alien species called the Qu. The Qu's religion motivates them to remake the universe through genetic engineering. A short war follows in which humanity is defeated. The Qu bioengineer the surviving humans as punishment into a range of exotic forms, many of them unintelligent. After forty million years of domination, the Qu leave the galaxy, leaving the altered humans to evolve on their own. The bioengineered humans range from worm-like humans to insectivores and modular and cell-based species. The book follows the progress of these new humans as they either go extinct or regain sapience in wildly different forms and gradually discover that the Qu experimented on them.
One species, known as the Ruin Haunters, replaces its members' bodies with mechanical forms using technology the Qu had left behind, becoming the Gravitals. The Gravitals begin to colonize the rest of the galaxy while annihilating most life within it, including the other post-human species (except for the Bug Facers, whom the Gravitals genetically modify for their own gain, much as the Qu once did). The Gravitals are themselves destroyed by the Asteromorphs, the descendants of a human species who escaped experimentation by the Qu; the surviving Gravitals are re-modified into less sophisticated machines to serve the Asteromorphs as laborers. The final chapters of the book detail humanity's rebound as a posthuman species, their first contact with another galaxy's life, rediscovering and defeating the Qu after five-hundred million years, and concluding with the rediscovery of Earth 560 million years in the future.
All Tomorrows ends with a picture of the book's in-universe author, an alien researcher, holding a billion-year-old human skull and writing that all posthuman species disappeared a billion years in the future, for unknown reasons. The author goes on to state that mankind's story was always about the lives of humans themselves, not major wars and abstract ideals. The author ends by encouraging the reader to "Love Today, and seize All Tomorrows!"
Development
Kosemen worked on All Tomorrows from 2003 to 2006. The work of Olaf Stapledon, particularly Last and First Men (1930) and Star Maker (1937), served as the main inspiration for the work, alongside Edward Gibbon's The History of the Decline and Fall of the Roman Empire.
All Tomorrows is written in the style of a historical work, narrated by an alien creature recounting the history of humanity. According to Kosemen, the "tone of voice is a high school student fanboying on the Decline and Fall of the Roman Empire by Edward Gibbon". The artwork is also reflective of this "archaeological" approach, with faded and textured visual effects applied to the paintings. The original reason for adding the faded tint to the paintings was Kosemen wanting to avoid the paintings looking like "horrible racist caricatures".
The book was released for free online as a PDF on 4 October 2006 and has since then, per Kosemen himself, "had a life of its own as a PDF floating around the backwaters of the internet like a ghost ship". One of the common links which All Tomorrows has been shared through is a wiki site dedicated to speedrunning.
The first licensed physical edition of All Tomorrows was published by Time Publishing in March 2024, in the Thai language. This edition included the content of the original 2006 book, with a new chapter on the making of the book and some additional artwork by other artists.
All Tomorrows is yet to be physically published in English, however in July 2024 preorders on the crowdfunding site Unbound began for official hardback and e-book editions in the English language, including additional materials and artwork and the intent to publish in 2025.
Reception and legacy
Originally an obscure work, All Tomorrows slowly gained popularity online following its 2006 publication. In a 2021 podcast interview, Kosemen noted that the generation born right after him (Kosemen having been born in 1984) "really embraced" All Tomorrows, which he believes might partially be due to the "myriad disasters" that have happened in the world since then. The book has received some scholarly attention; in 2020, All Tomorrows was among the works discussed in Jörg Matthias Determann's book Islam, Science Fiction and Extraterrestrial Life, which explores astrobiology and science fiction in the Muslim world. Following the upload of an abridged version of the book's story by YouTuber Alt Shift X in June 2021, All Tomorrows saw a particular surge in popularity online during the summer of 2021. Among other things, there was a surge of internet memes based on the book, primarily on YouTube and Twitter as well as fan art based on the creatures in the book.
Readers have characterized All Tomorrows as "bizarre", "inexplicable", "interesting" and "fascinating", and as a work incorporating body horror. Ivan Farkas of Cracked.com called All Tomorrows "existentially freak-ay" in 2021 and described the artwork as "otherworldly". A 2022 article by Andrea Viscusi on the Italian media website Stay Nerd compared All Tomorrows to Man After Man (1990) by Dougal Dixon, also a work tackling future human evolution, but found the depictions in All Tomorrows to be "even more disturbing", yet still possible on an "almost subliminal level" to "recognize as our fellow men". In a 2022 article in the lifestyle magazine A Little Bit Human, Allia Luzong considered All Tomorrows to be a "fun exploration of what could be" but also a serious work with serious themes, particularly noting how humanity's social ills are present throughout the narrative.
Kosemen stated in 2021 that though the book had grown popular, he had almost "disowned" All Tomorrows, finding parts of it "a bit cringey". When designing his website and including his different books and projects, Kosemen purposefully left out All Tomorrows. Following the summer of 2021, he has since added the book to his website and intends to eventually publish All Tomorrows in physical form with new text and illustrations. By 16 October 2022, Kosemen had written the expanded version up until the Qu's conquest of the galaxy. Kosemen stated that the material up until that point amounted to 200 pages, almost twice the length of the entire original book. Kosemen continues to work on the expanded version as of 2024.
In April 2024, Kosemen announced the release of a physical copy of the book, available only in Thai. Although based on the original version of the book, it is stated to include a few illustrations by other artists and a new chapter with additional information about the species; this new chapter is only available in Thai.
At the same time, Kosemen also stated that he is continuing his work on the new version of the book, which has now grown to over 300 pages, with many species still to cover. Every species now has deeper lore, and major new plot twists have been added.
See also
Transhuman
Posthuman
Biopunk
All Yesterdays (2012) by John Conway, Darren Naish and Kosemen – a similarly titled book on paleoart, co-authored by Kosemen.
Man After Man (1990) by Dougal Dixon – a similar book about (human) speculative evolution
References
External links
All Tomorrows – original 2006 PDF version of the book
Все Грядущие дни – 2009/2010 Russian translation of All Tomorrows by Pavel Volkov
– by Alt Shift X, recommended by C. M. Kosemen himself (see pinned comment)
All Tomorrows – 2023 Italian translation of All Tomorrows by D. Lombardo
All Tomorrows – 2023 Czech translation of All Tomorrows by J. Dubánek
All Tomorrows – 2022 French translation of All Tomorrows by Lucas G. Blanchard
2006 novels
2006 science fiction novels
Turkish science fiction novels
Fictional species and races
Books about evolution
Human evolution books
Speculative evolution
Novels set on Mars
Novels set in the future
Novels about genetic engineering
Extinction in fiction
Fiction set in the 7th millennium or beyond
Fiction books about genocide
Evolution in popular culture
Internet memes introduced in 2021
Milky Way in fiction | All Tomorrows | Biology | 1,998 |
30,798,197 | https://en.wikipedia.org/wiki/Conway%20polynomial%20%28finite%20fields%29 | In mathematics, the Conway polynomial C_{p,n} for the finite field F_{p^n} is a particular irreducible polynomial of degree n over F_p that can be used to define a standard representation of F_{p^n} as a splitting field of C_{p,n}. Conway polynomials were named after John H. Conway by Richard A. Parker, who was the first to define them and compute examples. Conway polynomials satisfy a certain compatibility condition that had been proposed by Conway between the representation of a field and the representations of its subfields. They are important in computer algebra where they provide portability among different mathematical databases and computer algebra systems. Since Conway polynomials are expensive to compute, they must be stored to be used in practice. Databases of Conway polynomials are available in the computer algebra systems GAP, Macaulay2, Magma, SageMath, at the web site of Frank Lübeck,
and at the Online Encyclopedia of Integer Sequences.
Background
Elements of F_{p^n} may be represented as sums of the form a_{n-1}β^{n-1} + … + a_1β + a_0, where β is a root of an irreducible polynomial of degree n over F_p and the a_i are elements of F_p. Addition of field elements in this representation is simply vector addition. While there is a unique finite field of order p^n up to isomorphism, the representation of the field elements depends on the choice of irreducible polynomial. The Conway polynomial is a way of standardizing this choice.
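As a concrete sketch (the choice of F_8 and of the two irreducible cubics below is mine, for illustration): elements can be stored as coefficient vectors, addition is componentwise, and multiplication — unlike addition — depends on which irreducible polynomial was chosen, which is why a standard choice matters.

```python
# Elements of F_8 = F_2[x]/(m(x)) stored as ascending coefficient vectors (a0, a1, a2).
# Addition is plain vector addition mod 2; multiplication is reduced by the chosen cubic m.

def add(u, v, p=2):
    return tuple((a + b) % p for a, b in zip(u, v))

def mul(u, v, m, p=2):
    # schoolbook product, then reduce degrees 3 and 4 using the monic cubic m
    prod = [0] * 5
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            prod[i + j] = (prod[i + j] + a * b) % p
    for i in (4, 3):
        if prod[i]:
            for j in range(3):
                prod[i - 3 + j] = (prod[i - 3 + j] - prod[i] * m[j]) % p
            prod[i] = 0
    return tuple(prod[:3])

x = (0, 1, 0)
m1 = (1, 1, 0, 1)   # x^3 + x + 1 (this happens to be the Conway polynomial C_{2,3})
m2 = (1, 0, 1, 1)   # x^3 + x^2 + 1, the other irreducible cubic over F_2

print(add((1, 0, 1), (0, 1, 1)))   # (1, 1, 0): (x^2+1) + (x^2+x) = x + 1
print(mul(mul(x, x, m1), x, m1))   # (1, 1, 0): x^3 = x + 1 modulo m1
print(mul(mul(x, x, m2), x, m2))   # (1, 0, 1): x^3 = x^2 + 1 modulo m2
```

The same element name, "x cubed", thus denotes different coefficient vectors under different moduli — the portability problem that Conway polynomials address.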
The non-zero elements of a finite field form a cyclic group under multiplication, denoted F*_{p^n}. A primitive element, α, of F_{p^n} is an element that generates F*_{p^n}. Representing the non-zero field elements as powers of α allows multiplication in the field to be performed efficiently. The primitive polynomial for α is the monic polynomial of smallest possible degree with coefficients in F_p that has α as a root in F_{p^n} (the minimal polynomial for α). It is necessarily irreducible. The Conway polynomial is chosen to be primitive, so that each of its roots generates the multiplicative group of the associated finite field.
The field F_{p^n} contains a unique subfield isomorphic to F_{p^m} for each m dividing n, and this accounts for all the subfields of F_{p^n}. For any m dividing n the cyclic group F*_{p^n} contains a subgroup isomorphic to F*_{p^m}. If α generates F*_{p^n}, then the smallest power of α that generates this subgroup is α^r where r = (p^n − 1)/(p^m − 1). If f_n is a primitive polynomial for F_{p^n} with root α and f_m is a primitive polynomial for F_{p^m} then, by Conway's definition, f_n and f_m are compatible if α^r is a root of f_m. This necessitates that f_n(x) divide f_m(x^r). This notion of compatibility is called norm-compatibility by some authors. The Conway polynomial for a finite field is chosen so as to be compatible with the Conway polynomials of each of its subfields. That it is possible to make the choice in this way was proved by Werner Nickel.
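In symbols (standard notation, supplied here for clarity): if f_n and f_m are primitive polynomials for F_{p^n} and F_{p^m}, with m dividing n and f_n(α) = 0, the compatibility condition reads

```latex
f_m\!\left(\alpha^{r}\right) = 0,
\qquad
r = \frac{p^{n}-1}{p^{m}-1},
```

which in turn forces f_n(x) to divide f_m(x^r).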
Definition
The Conway polynomial C_{p,n} is defined as the lexicographically minimal monic primitive polynomial of degree n over F_p that is compatible with C_{p,m} for all m dividing n. This is an inductive definition on n: the base case is C_{p,1}(x) = x − α, where α is the lexicographically minimal primitive element of F_p. The notion of lexicographical ordering used is the following:
The elements of F_p are ordered 0 < 1 < 2 < … < p − 1.
A polynomial f(x) of degree d in F_p[x] is written a_d x^d − a_{d−1} x^{d−1} + a_{d−2} x^{d−2} − … (with terms alternately added and subtracted) and then expressed as the word a_d a_{d−1} a_{d−2} … a_0. Two polynomials of degree d are ordered according to the lexicographical ordering of their corresponding words.
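The word encoding and comparison can be sketched in a few lines of Python (an illustration under the conventions just stated; the function name and the ascending-coefficient-list representation are mine):

```python
# Word of a monic polynomial over F_p, using the alternating-sign convention:
# writing f = a_d x^d - a_{d-1} x^{d-1} + a_{d-2} x^{d-2} - ... gives the word
# (a_d, a_{d-1}, ..., a_0), where a_{d-k} = (-1)^k times the actual coefficient of x^{d-k}.

def word(coeffs, p):
    # coeffs lists the coefficients in ascending order, e.g. [2, 4, 1] = x^2 + 4x + 2
    d = len(coeffs) - 1
    return tuple(((-1) ** k * coeffs[d - k]) % p for k in range(d + 1))

# Over F_5: x^2 + 4x + 2 is written x^2 - x + 2, the word 112, while
# x^2 + x + 2 is written x^2 - 4x + 2, the word 142.
print(word([2, 4, 1], 5))                       # (1, 1, 2)
print(word([2, 1, 1], 5))                       # (1, 4, 2)
print(word([2, 4, 1], 5) < word([2, 1, 1], 5))  # True: tuples compare lexicographically
```

Comparing the word tuples with Python's built-in tuple ordering then reproduces the ordering on polynomials.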
Since there does not appear to be any natural mathematical criterion that would single out one monic primitive polynomial satisfying the compatibility conditions over all the others, the imposition of lexicographical ordering in the definition of the Conway polynomial should be regarded as a convention.
Table
Conway polynomials for the lowest values of p and n are tabulated below. All of these were first computed by Richard Parker and were taken from the tables of Frank Lübeck. The calculations can be verified using the basic methods of the next section with the assistance of algebra software.
Examples
To illustrate the definition, let us compute the first six Conway polynomials over F_5. By definition, a Conway polynomial is monic, primitive (which implies irreducible), and compatible with Conway polynomials of degree dividing its degree. The table below shows how imposing each of these conditions reduces the number of candidate polynomials.
Degree 1. The primitive elements of F_5 are 2 and 3. The two degree-1 polynomials with primitive roots are therefore x − 2 and x − 3, which correspond to the words 12 and 13. Since 12 is less than 13 in lexicographic ordering, C_{5,1} = x − 2 (that is, x + 3).
Degree 2. Since (5^2 − 1)/(5 − 1) = 6, compatibility requires that C_{5,2} be chosen so that it divides C_{5,1}(x^6) = x^6 − 2. The latter factorizes into three degree-2 polynomials, irreducible over F_5, namely x^2 + 2, x^2 + x + 2, and x^2 + 4x + 2. Of these, x^2 + 2 is not primitive since it divides x^8 − 1, implying that its roots have order at most 8, rather than the required 24. Both of the others are primitive and C_{5,2} is chosen to be the lexicographically lesser of the two. Now x^2 + x + 2 corresponds to the word 142 and x^2 + 4x + 2 corresponds to the word 112, the latter being lexicographically less than the former. Hence C_{5,2} = x^2 + 4x + 2.
Degree 3. Since (5^3 − 1)/(5 − 1) = 31, compatibility requires that C_{5,3} divide C_{5,1}(x^{31}) = x^{31} − 2, which factorizes as a degree-1 polynomial times the product of ten primitive degree-3 polynomials. Of these, two have no quadratic term, x^3 + 3x + 3 and x^3 + 4x + 3, which correspond to the words 1032 and 1042. As 1032 is lexicographically less than 1042, C_{5,3} = x^3 + 3x + 3.
Degree 4. The proper divisors of 4 are 1 and 2. Compute (5^4 − 1)/(5 − 1) = 156 and (5^4 − 1)/(5^2 − 1) = 26, and note that 156 = 26 × 6, with the same exponent 6 as appeared in the compatibility condition for degree 2. In degree 4, compatibility requires that C_{5,4} be chosen so that it divides both C_{5,2}(x^{26}) and C_{5,1}(x^{156}). The second condition is redundant, however, because of the compatibility condition imposed when choosing C_{5,2}, which implies that C_{5,2}(x^{26}) divides C_{5,1}(x^{156}). In general, for composite degree n, the same reasoning implies that only the maximal proper divisors of n need be considered, that is, divisors of the form n/q, where q is a prime divisor of n. There are 13 factors of C_{5,2}(x^{26}), all of degree 4. All but one are primitive. Of the primitive ones, x^4 + 4x^2 + 4x + 2 is lexicographically minimal, so C_{5,4} = x^4 + 4x^2 + 4x + 2.
Degree 5. The computation is similar to what was done in degrees 2 and 3: (5^5 − 1)/(5 − 1) = 781; x^{781} − 2 has one factor of degree 1 and 156 factors of degree 5, of which 140 are primitive. The lexicographically least of the primitive factors is C_{5,5} = x^5 + 4x + 3.
Degree 6. Taking into consideration the discussion above in connection with degree 4, the two compatibility conditions that need to be considered are that C_{5,6} must divide C_{5,3}(x^{126}) and C_{5,2}(x^{651}), where 126 = (5^6 − 1)/(5^3 − 1) and 651 = (5^6 − 1)/(5^2 − 1). It therefore must divide their greatest common divisor, which factorizes into 21 degree-6 polynomials, 18 of which are primitive. The lexicographically least of these primitive factors is taken as C_{5,6}.
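The degree-2 search above can be reproduced by brute force. The following is a minimal, illustrative Python sketch (the helper functions and the exhaustive-search strategy are mine, not from the article; it assumes ascending coefficient lists, the alternating-sign word ordering, and C_{5,1} = x − 2):

```python
# Brute-force search for the Conway polynomial C_{5,2}.
# Polynomials are ascending coefficient lists, e.g. [2, 4, 1] means x^2 + 4x + 2.

def polymulmod(a, b, f, p):
    """Multiply polynomials a and b modulo the monic polynomial f and the prime p."""
    n = len(f) - 1
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    for i in range(len(res) - 1, n - 1, -1):  # reduce degrees >= n using x^n = -(f mod x^n)
        c = res[i]
        if c:
            res[i] = 0
            for j in range(n):
                res[i - n + j] = (res[i - n + j] - c * f[j]) % p
    return res[:n] + [0] * (n - len(res[:n]))

def polypowmod(base, e, f, p):
    """Compute base**e modulo (f, p) by square-and-multiply."""
    n = len(f) - 1
    result = [1] + [0] * (n - 1)
    b = base[:]
    while e:
        if e & 1:
            result = polymulmod(result, b, f, p)
        b = polymulmod(b, b, f, p)
        e >>= 1
    return result

def word(coeffs, p):
    """The word of a monic polynomial under the alternating-sign convention."""
    d = len(coeffs) - 1
    return tuple(((-1) ** k * coeffs[d - k]) % p for k in range(d + 1))

p = 5
one = [1, 0]
best = None
for b in range(p):
    for c in range(p):
        f = [c, b, 1]                      # candidate x^2 + b x + c
        x = [0, 1]
        # primitivity: x must have multiplicative order exactly 24 = 5^2 - 1,
        # i.e. x^24 = 1 but x^(24/q) != 1 for the prime divisors q = 2, 3
        if polypowmod(x, 24, f, p) != one:
            continue
        if polypowmod(x, 12, f, p) == one or polypowmod(x, 8, f, p) == one:
            continue
        # compatibility with C_{5,1} = x - 2: the root must satisfy x^6 = 2
        if polypowmod(x, 6, f, p) != [2, 0]:
            continue
        if best is None or word(f, p) < word(best, p):
            best = f

print(best)  # [2, 4, 1], i.e. C_{5,2} = x^2 + 4x + 2
```

Running it prints [2, 4, 1], matching the degree-2 computation above; real systems use far more efficient algorithms, as noted in the Computation section.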
Computation
Algorithms for computing Conway polynomials that are more efficient than brute-force search have been developed by Heath and Loehr. Lübeck indicates that their algorithm is a rediscovery of the method of Parker.
Notes
References
Finite fields
Computer algebra
John Horton Conway | Conway polynomial (finite fields) | Mathematics,Technology | 1,330 |
10,118,978 | https://en.wikipedia.org/wiki/Constructability | Constructability (or buildability) is a concept that denotes ease of construction. It can be central to project management techniques to review construction processes from start to finish during pre-construction phase. Buildability assessment is employed to identify obstacles before a project is actually built to reduce or prevent errors, delays, and cost overruns.
CII defines constructability as “the optimal use of construction knowledge and experience in planning, design, procurement, and field operations to achieve overall project objectives”.
The term "constructability" can also define the ease and efficiency with which structures can be built. The more constructible a structure is, the more economical it will be. Constructability is in part a reflection of the quality of the design documents; that is, if the design documents are difficult to understand and interpret, the project will be difficult to build.
The term refers to:
the extent to which the design of the building facilitates ease of construction, subject to the overall requirements for the completed building (CIRIA definition).
the effective and timely integration of construction knowledge into the conceptual planning, design, construction, and field operations of a project to achieve the overall project objectives in the best possible time and accuracy at the most cost-effective levels (CII definition).
the integration of construction knowledge in the project delivery process and balancing the various project and environmental constraints to achieve the project goals and building performance at the optimal level (CIIA definition).
Principles
There are 12 principles of constructability which are mapped on to the procurement process:
Integration
Construction knowledge
Team skills
Corporate objectives
Available resources
External factors
Programme
Construction methodology
Accessibility
Specifications
Construction innovation
Feedback
References
Further reading
Construction management | Constructability | Engineering | 332 |
9,730,335 | https://en.wikipedia.org/wiki/Uniflow%20steam%20engine | The uniflow type of steam engine uses steam that flows in one direction only in each half of the cylinder. Thermal efficiency is increased by having a temperature gradient along the cylinder. Steam always enters at the hot ends of the cylinder and exhausts through ports at the cooler centre. By this means, the relative heating and cooling of the cylinder walls is reduced.
Design details
Steam entry is usually controlled by poppet valves (which act similarly to those used in internal combustion engines) that are operated by a camshaft. The inlet valves open to admit steam when minimum expansion volume has been reached at the start of the stroke. For a period of the crank cycle, steam is admitted, and the poppet inlet is then closed, allowing continued expansion of the steam during the stroke, driving the piston. Near the end of the stroke, the piston will uncover a ring of exhaust ports mounted radially around the centre of the cylinder. These ports are connected by a manifold and piping to the condenser, lowering the pressure in the chamber below that of the atmosphere causing rapid exhausting. Continued rotation of the crank moves the piston. From the animation, the features of a uniflow engine can be seen, with a large piston almost half the length of the cylinder, poppet inlet valves at either end, a camshaft (whose motion is derived from that of the driveshaft) and a central ring of exhaust ports.
Advantages
Uniflow engines potentially allow greater expansion in a single cylinder without the relatively cool exhaust steam flowing across the hot end of the working cylinder and steam ports of a conventional "counterflow" steam engine during the exhaust stroke. This condition allows higher thermal efficiency. The exhaust ports are open for only a small fraction of the piston stroke, with the exhaust ports closed just after the piston begins traveling toward the admission end of the cylinder. The steam remaining within the cylinder after the exhaust ports are closed is trapped, and this trapped steam is compressed by the returning piston. This is thermodynamically desirable as it preheats the hot end of the cylinder before the admission of steam. However, the risk of excessive compression often results in small auxiliary exhaust ports being included at the cylinder heads. Such a design is called a semi-uniflow engine.
Engines of this type usually have multiple cylinders in an in-line arrangement, and may be single- or double-acting. A particular advantage of this type is that the valves may be operated by the effect of multiple camshafts, and by changing the relative phase of these camshafts, the amount of steam admitted may be increased for high torque at low speed, and may be decreased at cruising speed for economy of operation. Alternatively, designs using a more-complex cam surface allowed the varying of timing by shifting the entire camshaft longitudinally compared to its follower, allowing the admission timing to be varied. (The camshaft could be shifted by mechanical or hydraulic devices.) And, by changing the absolute phase, the engine's direction of rotation may be changed. The uniflow design also maintains a constant temperature gradient through the cylinder, avoiding passing hot and cold steam through the same end of the cylinder.
Disadvantages
In practice, the uniflow engine has a number of operational shortcomings. The large expansion ratio requires a large cylinder volume. To gain the maximum potential work from the engine a high reciprocation rate is required, typically 80% faster than a double-acting counterflow type engine. This causes the opening times of the inlet valves to be very short, putting great strain on a delicate mechanical part. In order to withstand the huge mechanical forces encountered, engines have to be heavily built and a large flywheel is required both to smooth out the variations in torque as the steam pressure rapidly rises and falls in the cylinder and to compensate for the inertia of the heavy piston. Because there is a thermal gradient across the cylinder, the metal of the wall expands to different extents. This requires the cylinder bore to be machined wider in the cool center (sometimes described as "egg-shaped") than at the hot ends. If the cylinder is not heated correctly, or if water enters, the delicate balance can be upset causing seizure mid-stroke and, potentially, destruction.
History
The uniflow engine was first used in Britain in 1827 by Jacob Perkins and was patented in 1885 by Leonard Jennett Todd. It was popularised by German engineer Johann Stumpf in 1909, with the first commercial stationary engine produced a year previously in 1908.
Steam locomotives
The uniflow principle was mainly used for industrial power generation, but was also tried in a few railway locomotives in England, such as the North Eastern Railway uniflow locomotives No.825 of 1913, and No.2212 of 1918, and the Midland Railway Paget locomotive. Experiments were also made in France, Germany, the United States and Russia. In no case were the results encouraging enough for further development to be undertaken.
Steam wagons
The first large-scale utilization of a Uniflow engine was in Atkinson steam wagons, in 1918. Only one such steam wagon is known to be still in existence; it was built in 1918, spent its working life and a period of dereliction in Australia, and was then repatriated to England and restored by Tom Varley in 1976-77.
Skinner Unaflow
The final commercial evolution of the uniflow engine occurred in the United States during the late 1930s and 1940s, when the Skinner Engine Company developed the Compound Unaflow Marine Steam Engine. This engine operates in a steeple compound configuration and provides efficiencies approaching contemporary diesels. Many car ferries on the Great Lakes were so equipped, one of which, the SS Badger of 1952, is still operating. The Casablanca-class escort carriers, the most prolific aircraft carrier design in history, used two 5-cylinder Skinner Unaflow engines, but these were not steeple compounds. A non-compound Skinner Uniflow remained in service until 2013 in the Great Lakes cement carrier SS St. Marys Challenger, installed when the vessel was re-powered in 1950. The SS Prince George used two recycled six-cylinder Skinner uniflows; it was retired in 1984.
In small sizes (less than about ), reciprocating steam engines are much more efficient than steam turbines. White Cliffs Solar Power Station used a three-cylinder uniflow engine with "Bash"-type admission valves to generate about 25 kW electrical output.
Home-made conversions of two-stroke engines
The single-acting uniflow steam engine configuration closely resembles that of a two-stroke internal combustion engine, and it is possible to convert a two-stroke engine to a uniflow steam engine by feeding the cylinder with steam via a "bash valve" fitted in place of the spark plug. As the rising piston nears the top of its stroke, it knocks open the bash valve to admit a pulse of steam. The valve closes automatically as the piston descends, and the steam is exhausted through the existing cylinder porting. The inertia of the flywheel then carries the piston back to the top of its stroke against the compression, as it does in the original form of the engine. Also like the original, the conversion is not self-starting and must be turned over by an external power source to start. An example of such a conversion is the steam-powered moped, which is started by pedalling.
See also
Advanced steam technology
References
Sources
Teach yourself heat engines by E. de Ville, published by The English Universities Press Limited, London, 1960, pp 40–41
External links
The Museum of Retro Technology – Uniflow Steam Engines
Steam engines
Piston engines
Engine technology
History of the steam engine | Uniflow steam engine | Technology | 1,548 |
3,555,853 | https://en.wikipedia.org/wiki/Akoustolith | Akoustolith is a porous ceramic material resembling stone. Akoustolith was a patented product of a collaboration between Rafael Guastavino Jr. (the son of Rafael Guastavino) and Harvard professor Wallace Sabine over a period of years starting in 1911. It was used to limit acoustic reflection and noise in large vaulted ceilings. Akoustolith was bonded as an additional layer to the structural tile of the Tile Arch System ceilings built by the Rafael Guastavino Company of New Jersey. The most prevalent use was to aid speech intelligibility in cathedrals and churches prior to the widespread use of public address systems.
History
Akoustolith was first introduced by the Guastavino Fireproof Construction Company, in collaboration with Wallace Sabine of Harvard University, in 1915. The founder of the Guastavino Company, Rafael Guastavino Sr., had immigrated to the United States from Spain in 1881, bringing with him the method of timbrel-vault construction, also known as cohesive construction. The Rafael Guastavino Company's vaulting technique created monolithic assemblies by layering thin bricks and structural tiles with fast-drying mortar. The Guastavino Technique, as it came to be known, consisted of multiple layers of plaster and tile in the construction of masonry vaulting; the first course of tile was set in its position with quick setting mortar creating form-work for the subsequent layers. Tiles were placed in concentric circles in the construction of domes, while in ribbed vaults, ribs served as the general form-work. Upon Guastavino Sr.'s death in 1908, his son, Rafael Guastavino Jr. took over the Guastavino Fireproof Construction Company; he was largely responsible for the company's development of acoustical finishes, including the incorporation and development of Rumford and Akoustolith tiles.
Rafael Guastavino Jr. and Wallace Sabine patented Akoustolith in 1916, to be used as a facing for Guastavino's timbrel vaults. The two had previously collaborated in the development of the Rumford tile, a ceramic acoustical finish used in the construction of the St. Thomas Church in New York City. While initially a success, the cost to manufacture Rumford tile led the company to focus on the development of the cheaper and more durable Akoustolith. As a non-ceramic tile, Akoustolith had a rough and porous surface whose sound-absorption properties were an improvement on the Rumford tile. With the exception of the replacement of the first layer of tiles with the sound-absorbing Akoustolith, the Guastavino method of construction was unaltered. The effectiveness of Akoustolith in the reduction of reverberation led to its use in the construction of ecclesiastical spaces.
Following Sabine's death in 1919, Guastavino continued to patent acoustical building products. The Guastavino Fireproof Construction Company remained in business until 1962, its decline is attributed to the increased cost of hand labor in conjunction with the rise of concrete-shell construction. As timbrel-vault construction waned, the installation and production of acoustical materials helped sustain the company. By the late 1920s and early 1930s a considerable portion of the firm's business was related to these products. However, as other corporation began to mass-produce less-expensive acoustical building materials, Guastavino products ceased to be competitive.
Composition and properties
Akoustolith developed as an improvement on the earlier Rumford tile. Rumford tiles had previously been made with rich organic soil that burned off during the firing process and created pores, this procedure was ultimately irregular and difficult to control. Consequently, Akoustolith was produced by binding well-sorted pumice particles with Portland cement to create an artificial stone, a process which offered consistency and allowed for a variety of shapes and color. Although sand and Portland Cement were typically used in the production of Akoustolith, the tile patent states that crushed rock or brick could be used as the aggregate, while lime or Plaster of Paris could be used as the binding material.
Akoustolith's efficiency in absorbing different pitches was largely dependent upon the dimensions of its particles; its most imperative feature was its use of aggregate graded to a uniform size. Finer grades of aggregate were sieved out, leaving spaces between the particles, creating an intercommunicating pore structure that absorbed sound. According to Guastavino's and Sabine's 1916 patent Akoustolith absorbed "much in excess of 15% of sounds in the pitch between the middle C and the third octave above the middle C, which are the characteristic sounds which distinguish articulate speech."
Designed with a graded porosity to increase their range of absorption, the stone-like finish of Akoustolith tiles consisted of a mix of coarser aggregate to facilitate the absorption of low pitches. Similarly, the bedding face consisted of a mix of finer aggregate to absorb higher pitches. Eventually, different grades of the material were sold; these varied in size and sound-absorption coefficients.
Building with Akoustolith
Although the production of Akoustolith tile was short-lived, its effectiveness in reducing reverberation in ecclesiastical spaces led to its installation in a variety of building types, including commercial, industrial, and institutional structures. The acoustical and fireproof nature of Akoustolith was advertised, and to a lesser degree, its ability to resist the condensation of moisture. In addition, Akoustolith's aesthetic qualities were touted: the tiles were available in several shades of gray and buff intended to blend with the warm colors of adjacent stone.
Resembling a stone-like masonry material, Akoustolith tile was incorporated into several of the Guastavino Company's major building projects, including the 1929 construction of the Buffalo Central Terminal. Completed in the late 1920s, Fellheimer & Wagner's Buffalo Central Terminal was the largest installation of Akoustolith completed by the Guastavino Company.
Example projects
Fellheimer & Wagner's design of the Buffalo Central Terminal, in New York, was the largest installation of Akoustolith completed by the Guastavino Company.
New York architect Bertram Goodhue specified the use of the Guastavino tile in his 1920 design for the Nebraska Capitol. Consequently, the Nebraska Capitol features tiled vaults and domes, and meeting rooms constructed with Akoustolith tiles.
Ralph Adams Cram's 1921 design for the Princeton University Chapel employs the Guastavino Company's Akoustolith tile vaulting.
References
External links
A Tale of Two Physicists: mentions the collaboration between Sabine and Guastavino.
Further reading
G.F.S. "A Simple Method of Finding the Sound Absorbing Power of a Building Material." Journal of the Franklin Institute 206, no. 1 (1928): 130-31.
Liu, Yishi. "Building Guastavino Dome in China: A Historical Survey of the Dome of the Auditorium at Tsinghua University." Frontiers of Architectural Research 3, no. 2 (2014): 121-40.
Smilor, Raymond. "Confronting the Industrial Environment: The Noise Problem in America, 1893-1932., 1978, ProQuest Dissertations and Theses.
Thompson, Emily. "Dead Rooms and Live Wires: Harvard, Hollywood, and the Deconstruction of Architectural Acoustics, 1900-1930." Isis 88, no. 4 (1997): 597-626.
Building materials
Acoustics | Akoustolith | Physics,Engineering | 1,580 |
55,090,826 | https://en.wikipedia.org/wiki/Life%203.0 | Life 3.0: Being Human in the Age of Artificial Intelligence is a 2017 non-fiction book by Swedish-American cosmologist Max Tegmark. Life 3.0 discusses artificial intelligence (AI) and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology and combinations thereof.
Summary
The book begins by positing a scenario in which AI has exceeded human intelligence and become pervasive in society. Tegmark refers to different stages of human life since its inception: Life 1.0 referring to biological origins, Life 2.0 referring to cultural developments in humanity, and Life 3.0 referring to the technological age of humans. He characterizes these different classifications based on their ability to alter their hardware and software. The book focuses on "Life 3.0", and on emerging technology such as artificial general intelligence that may someday, in addition to being able to learn, be able to also redesign its own hardware and internal structure.
The first part of the book looks at the origin of intelligence billions of years ago and goes on to project the future development of intelligence. Tegmark considers short-term effects of the development of advanced technology, such as technological unemployment, AI weapons, and the quest for human-level AGI (Artificial General Intelligence). The book cites examples like Deepmind and OpenAI, self-driving cars, and AI players that can defeat humans in Chess, Jeopardy, and Go.
After reviewing existing issues in AI, Tegmark then considers a range of possible futures that involve intelligent machines or humans. The fifth chapter describes a number of potential outcomes, such as altered social structures, integration of humans and machines, and both positive and negative scenarios like Friendly AI or an AI apocalypse. Tegmark argues that the risks of AI come not from malevolence or conscious behavior per se, but rather from the misalignment of the goals of AI with those of humans. Many of the goals of the book align with those of the Future of Life Institute, of which Tegmark is a co-founder.
The remaining chapters explore concepts in physics, goals, consciousness and meaning, and investigate what society can do to help create a desirable future for humanity.
Reception
One criticism of the book by Kirkus Reviews is that some of the scenarios or solutions in the book are a stretch or somewhat prophetic: "Tegmark's solutions to inevitable mass unemployment are a stretch." AI researcher Stuart J. Russell, writing in Nature, said: "I am unlikely to disagree strongly with the premise of Life 3.0. Life, Tegmark argues, may or may not spread through the Universe and 'flourish for billions or trillions of years' because of decisions we make now — a possibility both seductive and overwhelming." Writing in Science, Haym Hirsh called it "a highly readable book that complements The Second Machine Age's economic perspective on the near-term implications of recent accomplishments in AI and the more detailed analysis of how we might get from where we are today to AGI and even the superhuman AI in Superintelligence." The Telegraph called it "One of the very best overviews of the arguments around artificial intelligence". The Christian Science Monitor said "Although it's probably not his intention, much of what Tegmark writes will quietly terrify his readers." Publishers Weekly gave a positive review, but also stated that Tegmark's call for researching how to maintain control over superintelligent machines "sits awkwardly beside his acknowledgment that controlling such godlike entities will be almost impossible." Library Journal called it a "must-read" for technologists, but stated the book was not for the casual reader. The Wall Street Journal called it "lucid and engaging"; however, it cautioned readers that the controversial notion that superintelligence could run amok has more credence than it did a few years ago, but is still fiercely opposed by many computer scientists.
Rather than endorse a specific future, the book invites readers to think about what future they would like to see, and to discuss their thoughts on the Future of Life Website. The Wall Street Journal review called this attitude noble but naive, and criticized the referenced Web site for being "chockablock with promo material for the book".
The hardcover edition was on the general New York Times Best Seller List for two weeks, and made the New York Times business bestseller list in September and October 2017.
Former President Barack Obama included the book in his "best of 2018" list.
Business magnate Elon Musk (who had previously endorsed the thesis that, under some scenarios, advanced AI could jeopardize human survival) recommended Life 3.0 as "worth reading".
References
External links
Excerpt from the book
(a video commissioned by Tegmark's FLI to explain the book)
Survey associated with the book
2017 non-fiction books
Existential risk from artificial general intelligence
Futurology books
Alfred A. Knopf books
Allen Lane (imprint) books
Non-fiction books about Artificial intelligence | Life 3.0 | Technology | 1,064 |
68,898,978 | https://en.wikipedia.org/wiki/Kimito%20Funatsu | is a Japanese chemist specializing in chemoinformatics and data-driven chemistry, a Professor Emeritus at University of Tokyo, and the research director of the Data Science Center at Nara Institute of Science and Technology.
Biography
He graduated from Kagoshima Prefectural Konan High School in 1974 and from the Department of Chemistry, School of Science, Kyushu University in 1978. He completed graduate study at the Department of Chemistry, Graduate School of Science, Kyushu University, obtaining a doctorate in science in 1983. After serving as an Associate Professor at Toyohashi University of Technology, he became a Professor at the Department of Chemical System Engineering, School of Engineering, University of Tokyo in 2004. Since 2017 he has concurrently held the posts of Professor and research director of the Data Science Center at Nara Institute of Science and Technology. He was also invited as a visiting professor at the University of Strasbourg in France in 2011.
The Division of Chemical Information of the American Chemical Society gave him the Herman Skolnik Award in 2019 for his contributions to structure elucidation, de novo structure generation and applications of cheminformatics methods to materials design and chemical process control. He also received the for 2020. In 2021, he retired from the University of Tokyo at the mandatory retirement age and was given the title of Professor Emeritus.
References
1955 births
Living people
20th-century Japanese chemists
21st-century Japanese chemists
Cheminformatics
Academic staff of the University of Tokyo
Academic staff of Nara Institute of Science and Technology
Kyushu University alumni | Kimito Funatsu | Chemistry | 293 |
37,580,247 | https://en.wikipedia.org/wiki/Wonderful%20compactification | In algebraic group theory, a wonderful compactification of a variety acted on by an algebraic group is a -equivariant compactification such that the closure of each orbit is smooth. constructed a wonderful compactification of any symmetric variety given by a quotient of an algebraic group by the subgroup fixed by some involution of over the complex numbers, sometimes called the De Concini–Procesi compactification, and generalized this construction to arbitrary characteristic. In particular, by writing a group itself as a symmetric homogeneous space, (modulo the diagonal subgroup), this gives a wonderful compactification of the group itself.
References
Algebraic groups
Compactification (mathematics) | Wonderful compactification | Mathematics | 137 |
14,769,219 | https://en.wikipedia.org/wiki/C.%20N.%20Yang%20Institute%20for%20Theoretical%20Physics |
The C. N. Yang Institute of Theoretical Physics (YITP) is a research center at Stony Brook University. In 1965, it was the vision of then University President J.S. Toll and Physics Department chair T.A. Pond to create an institute for theoretical physics and invite the famous physicist Chen Ning Yang from the Institute for Advanced Study to serve as its director with the Albert Einstein Professorship of Physics. While the center is often referred to as "YITP", this can be confusing as YITP also stands for the Yukawa Institute for Theoretical Physics in Japan.
The active research areas of the institute include: quantum field theory, string theory, conformal field theory, mathematical physics and statistical mechanics. The YITP is situated on top of the Math Tower, home to the Department of Mathematics which is connected to the Department of Physics and the Simons Center for Geometry and Physics—therefore the physicists enjoy intimate interactions with the mathematicians. This close relationship dates back to the friendship of C.N. Yang and the mathematician James Harris Simons.
Founded in 1967, YITP celebrated its 50th anniversary in 2017. Over that span, the YITP has produced significant results in different areas; most notable was the discovery of supergravity in 1976 by Peter van Nieuwenhuizen, Daniel Z. Freedman, and Sergio Ferrara, who were all working there at the time.
It houses two Breakthrough Prize in Fundamental Physics laureates; Peter Van Nieuwenhuizen (2019) and Alexander Zamolodchikov (2024). Former director Chen Ning Yang is a Nobel Prize in Physics laureate (1957).
Directors
Chen Ning Yang - First director (1967-1999) and 1957 Nobel Laureate.
Peter van Nieuwenhuizen - Second director (1999-2002) and co-discoverer of supergravity.
George Sterman - Third director (2002-) and noted field theorist
Notable tenants
Luis Álvarez-Gaumé - String theory
Gerald E. Brown - Nuclear physics, theoretical astrophysics
Michael Creutz - Lattice gauge theory, computational physics
Michael Douglas - String theory
Ephraim Fischbach - Nuclear physics
Zohar Komargodski - Conformal field theory
Vladimir Korepin - Mathematical physics, quantum information
Barry M. McCoy - Statistical mechanics, conformal field theory
Nikita Nekrasov - Mathematical physics
Peter van Nieuwenhuizen - Field theory, string theory, co-discoverer of supergravity
Martin Roček - Mathematical physics, string theory
Warren Siegel - Field theory, string theory
George Sterman - Field theory, quantum chromodynamics
Alexander Zamolodchikov - Quantum field theory, statistical mechanics, conformal field theory
See also
Institute for Theoretical Physics (disambiguation)
Center for Theoretical Physics (disambiguation)
References
External links
YITP website
8th Simons Workshop in Mathematics and Physics
Yang Chen-Ning
Physics research institutes
Stony Brook University
Brookhaven, New York
Research institutes in New York (state)
1967 establishments in New York (state)
Theoretical physics institutes | C. N. Yang Institute for Theoretical Physics | Physics | 621 |
37,808,838 | https://en.wikipedia.org/wiki/SolidOx | SolidOx was the brand name for welding equipment produced by Cleanweld Products for do-it-yourself welding enthusiasts
from 1965 until at least the early 1980s; the SOLIDOX name was registered as a trademark in 1968.
SolidOx commonly refers to SolidOx Pellets or SolidOx Sticks used to supply the oxygen for the welding equipment. The SolidOx Pellets were made of sodium chlorate and were burned to produce oxygen.
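The oxygen-generating chemistry is the thermal decomposition of sodium chlorate (chlorate "oxygen candles" of this kind generally also include a fuel, such as iron powder, to sustain the reaction; that detail is general chemistry knowledge rather than a SolidOx-specific claim):

```latex
\[
  2\,\mathrm{NaClO_3} \;\xrightarrow{\;\Delta\;}\; 2\,\mathrm{NaCl} + 3\,\mathrm{O_2}
\]
```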
SolidOx products are no longer made; production apparently continued until at least 1983. In 1984, Cleanweld Products (or at least a portion of the company) was sold to Cooper Industries to become part of their tool division, including the SolidOx product line.
See also
Oxygen storage
Chemical oxygen generator
References
External links
Armory.com
Youtube.com - SolidOx Welder - Uploaded on 24 Aug 2011
Youtube.com - SolidOx Burning - Uploaded on 23 May 2006
Ytforums.ytmag.com
Welding | SolidOx | Engineering | 195 |
11,306,520 | https://en.wikipedia.org/wiki/Dothiorella%20gregaria | Dothiorella gregaria is a fungal plant pathogen.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
gregaria
Fungi described in 1881
Fungus species | Dothiorella gregaria | Biology | 36 |
23,519,200 | https://en.wikipedia.org/wiki/SB-215505 | SB-215505 is a drug which acts as a potent and selective antagonist at the serotonin 5-HT2B receptor, with good selectivity over the related 5-HT2A and 5-HT2C receptors. It is used in scientific research into the function of the 5-HT2 family of receptors, especially to study the role of 5-HT2B receptors in the heart, and to distinguish 5-HT2B-mediated responses from those produced by 5-HT2A or 5-HT2C.
References
5-HT2B antagonists
Chloroarenes
Indolines
Ureas
Quinolines | SB-215505 | Chemistry | 142 |
54,087,171 | https://en.wikipedia.org/wiki/NL-Alert | NL-Alert is a Cell Broadcast alarm system in use by the Dutch government to quickly alert and inform citizens of hazardous or crisis situations. Using this system, authorities can send messages to users of mobile phones in specific areas by using specific cell towers to alert phones within their reach. NL-Alert is one of the first implementations of the EU-Alert or Reverse 1-1-2 legislation as defined by the binding European Electronic Communications Code (EECC) using the Cell Broadcast technology for the delivery of public warning messages to the general public.
Usage of the Service
The system was introduced nationally on 8 November 2012, and was first used in a large fire in Tolbert on 14 December 2012. The second use was in another fire in Meppel in 2013. NL-Alert has been used more than 200 times as of December 2017 for public warning purposes (e.g. large forest & industrial fires, severe weather conditions and gas leakages).
Background
NL-Alert is an addition to the existing emergency population warning system, which works using a large amount of sirens on masts throughout the country. A key difference between these systems is that users of NL-Alert are not only warned, but also immediately informed about the situation. NL-Alert messages include the location of an incident and advice on bringing oneself to safety. NL-Alert messages have a distinct alarm sound - which stops when the message is seen by the user.
Receiving NL-Alert is free. A user does not have to register to receive alerts, but may need to configure a device to receive cell broadcasts. Increasingly, mobile phones are pre-configured by their manufacturers to receive cell broadcasts, including NL-Alert, via 2G, 3G and 4G systems.
Adoption Rate
NL-Alert has been used in the Netherlands for several years, and every six months a test message is broadcast throughout the country. The reach of this control Cell Broadcast message has increased over the years: in June 2020 more than 13.6 million citizens aged 12 and over (90%) received the test warning directly on their mobile phone. There is also great willingness to pass an NL-Alert message on to others: of the 10% of people who did not receive the message (e.g. because they do not have a mobile phone), 4% heard about it through other people, so that this single Cell Broadcast message reached a total of 14.2 million people, 94% of the Dutch population aged 12 and over.
7 December 2015 - 7.1 million people of 12 years and older (49%) (full nationwide LTE coverage)
6 June 2016 - 8.3 million people of 12 years and older (57%)
5 December 2016 - 8.8 million people of 12 years and older (60%)
3 July 2017 - 9.2 million people of 12 years and older (63%)
4 December 2017 - 10.8 million people of 12 years and older (74%)
4 June 2018 - 11.3 million people of 12 years and older (76%)
3 December 2018 - 12.43 million people of 12 years and older (83%)
3 June 2019 - 13.18 million people of 12 years and older (88%)
2 December 2019 - 13.7 million people of 12 years and older (90.7%)
8 June 2020 - 14.2 million people of 12 years and older (94% of the population)
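As a quick arithmetic check on the figures above, dividing each absolute reach by its reported percentage recovers the implied size of the 12-and-over population (a hypothetical helper for illustration, assuming both numbers in each row refer to the same base population):

```python
def implied_population(reach_millions, percent):
    """Base population (in millions) implied by an absolute reach
    and the percentage of the population it represents."""
    return reach_millions / (percent / 100.0)

print(round(implied_population(7.1, 49), 1))   # Dec 2015 -> about 14.5 million
print(round(implied_population(14.2, 94), 1))  # Jun 2020 -> about 15.1 million
```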
Multi Channel Approach
NL-Alert uses Cell Broadcast as the primary channel to issue warnings and alerts.
As no channel will suit every situation or every person, multiple channels are used by NL-Alert to make sure as many people as possible receive the information they need. This includes, since December 2018, digital information screens at public transportation stops. Additional dissemination channels are expected to be added in the coming period.
Alerts and warnings are sent to the new channel both nationally and locally depending on the emergency.
3 December 2018 - 300,000 people aged 12 and over saw the NL-Alert on the digital information screens.
2 December 2019 - 300,000 people aged 12 and over saw the NL-Alert on the digital information screens and 150,000 on digital advertising screens.
See also
Cell Broadcast
Alert Ready (Canada)
Wireless Emergency Alerts (USA)
Emergency Mobile Alert (New Zealand)
EU-Alert (European Union)
Reverse 1-1-2
References
Emergency management in the Netherlands
Emergency population warning systems
2012 establishments in the Netherlands | NL-Alert | Technology | 903 |
70,065,445 | https://en.wikipedia.org/wiki/Boba%20liberal | Boba liberal is a term mostly used within the Asian diaspora communities in the West, especially in the United States. It describes someone of East or Southeast Asian descent living in the West who has a shallow, surface-level liberal outlook. It is also occasionally used to describe conservatives who weaponize their East or Southeast Asian identity. The neologism emerged among the Asian American leftist community on Twitter who accused "boba liberals" of only holding their liberal beliefs to appear more White adjacent, by engaging in progressive social movements or viewpoints, while at the same time disregarding and trivializing issues concerning Asians.
Mary Chao, writing for The North Jersey Record, said that "Asians call peers boba liberals when they aspire to liberal whiteness." An article in The Yale Herald described it as a term "used to describe the ethnocentric politics of Asian Americans, usually of East Asian descent, who exclusively advocate for issues that benefit themselves, without acknowledging problematic dimensions of their own history and working to support other people of color." The feminist magazine Fem said that "the faces of boba liberalism are Asian Americans that are part of the middle and upper economic class. As a result, boba liberals disregard the negative effects of capitalism because they profit from it. For instance, boba liberals tend to focus on advocating for Asian representation in white spaces, or discussing whether or not wearing chopsticks in one's hair is culture appropriation. These topics are popular within boba liberal circles, all while dialogue regarding inequality, globalization, and racial injustice are purposely neglected."
UnHerd notes that conservative Asian Americans have used the term not to critique capitalism, but to "aim at a small but influential group of progressive Asian-American activists who are supposedly selling out other Asians, especially working-class Asians, in order to win brownie points from elite, generally white liberals."
The Asian identity of boba liberals has often been accused of being shallow and superficial. Boba liberals are accused of using surface-level stereotypical Asian traits such as liking boba tea to bolster their Asian credentials.
Plan A Magazine, an Asian diaspora magazine, described the film Crazy Rich Asians and the sitcom Fresh Off the Boat as "boba liberal media", calling them the result of "a specific kind of atomized identity politics". Other media outlets have connected the Crazy Rich Asians film to boba liberalism.
Controversy
The term "boba liberal" was coined in 2019 by Vietnamese American Twitter user Redmond (@diaspora_is_red) to analyze a form of Asian American liberalism through a Marxist lens. Redmond has criticized the misappropriation of their neologism by stripping away the Marxist framework by failing to discuss "socialism, communism, the capitalist system, imperialism, and the diaspora bourgeoisie" and conflating "boba liberalism" with the flawed concept of "East Asian privilege". In 2024, Redmond criticized misuse of the term by conservatives and liberals, and said "The term boba liberalism can go away for all I care. It's corny and stale".
United States
One commentator described boba liberals as supporting policies that primarily benefit upper-income Asian-Americans, and not necessarily the Asian-American community as a whole. Therefore, while the word "liberal" appears in the term, it is not exclusive to one specific ideology: it may also extend to conservative-aligned Asians in some areas, who would often take advantage of the "model minority" label by defending such measures.
See also
Acting white
Baizuo
Chinilpa
Crab mentality
Inferiority complex
Internalized oppression
Internalized racism
Hanjian
Limousine liberal
Makapili
Model minority
Sarong party girl
Tall poppy syndrome
Race traitor
Uncle Tom
Liberal elite
References
Further reading
External links
Why I Hate Subtle Asian Traits by Sarah Mae Dizon (30 August 2020).
Asian-American culture
Asian-American history
Asian-American issues
Asian-American-related controversies
Canadian people of Asian descent
Cultural studies
Cultural assimilation
Liberalism in the United States
Political neologisms
Politics and race in the United States
2019 neologisms
Social inequality
Social media
Asian-Australian issues | Boba liberal | Technology | 848 |
76,345,372 | https://en.wikipedia.org/wiki/Tantalocene%20trihydride | Tantalocene trihydride, or bis(η5-cyclopentadienyl)trihydridotantalum, is an organotanalum compound in the family of bent metallocenes consisting of two cyclopentadienyl rings and three hydrides coordinated to a tantalum center. Its formula is TaCp2H3, and it is a white crystalline compound that is sensitive to air. It is the first example of a molecular trihydride of a transition metal.
Synthesis
The synthesis of tantalocene trihydride was first reported by Green, McCleverty, Pratt, and Wilkinson in 1961. Tantalum pentachloride was added to a solution of sodium cyclopentadienide in tetrahydrofuran and an excess of sodium borohydride with yields reaching 60%, although the authors report that the preparation does not always succeed.
A more reliable and reproducible method was reported by Green and Moreau in 1978. A suspension of tantalocene dichloride in toluene was reacted with NaAlH2(OCH2CH2OCH3)2 and then hydrolyzed to form tantalocene trihydride, though with a lower yield of 42%.
Characterization
The high-field signals in the 1H NMR spectrum corresponding to the hydrides appear at τ = 11.63 ppm (δ = -1.63 ppm, 1H, t, J = 9 Hz) and τ = 13.02 ppm (δ = -3.02 ppm, 2H, d, J = 9 Hz). The peak splitting pattern is characteristic of A2B groupings, which means that there are two equivalent hydrides, and one non-equivalent hydride. The signal for the hydrogen atoms on the cyclopentadienyl rings appear at τ = 5.24 ppm (δ = 4.76 ppm, 10H, s).
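The paired τ/δ values quoted above follow directly from the definition of the older tau scale, on which TMS sits at τ = 10 and δ = 10 − τ; a minimal sketch of the conversion:

```python
def tau_to_delta(tau_ppm):
    """Convert a chemical shift on the older tau scale to the modern
    delta scale; by definition tau = 10 - delta (TMS at tau = 10)."""
    return 10.0 - tau_ppm

# The three signals reported for TaCp2H3:
for tau in (11.63, 13.02, 5.24):
    print(f"tau = {tau} ppm  ->  delta = {tau_to_delta(tau):+.2f} ppm")
```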
A strong, sharp absorption band can be seen in the infrared spectra of TaCp2H3 at 1735 cm−1, which corresponds to the Ta-H bond stretching frequency.
As opposed to other metallocene hydrides, such as ReCp2H, MoCp2H2, and WCp2H2, TaCp2H3 does not behave as a base, even in trifluoroacetic acid. It is decomposed by aqueous acids. This is consistent with the fact that the tantalum center does not have any lone pairs, since all orbitals have been utilized in bonding with the ligands.
The two cyclopentadienyl rings are in a bent conformation as confirmed by neutron diffraction studies where the ring-to-tantalum-to-ring bending angle is 139.9°. The three hydrides lie in the same plane as the tantalum center with the three Ta-H bond distances being essentially equal (1.769(8) Å, 1.775(9) Å, and 1.777(9) Å).
Reactivity
Tantalocene trihydride has been found to be capable of activating C-H bonds by oxidative addition, as seen through hydrogen/deuterium exchange; it also undergoes insertion of phosphines and forms adducts with post-transition-metal ethyl compounds.
Catalysis of hydrogen–deuterium exchange
Barefield, Parshall, and Tebbe discovered that when TaCp2H3 was heated at 100 °C in benzene-d6 under a hydrogen atmosphere, HD and D2 were detected along with H2 in the vapor phase in a ratio of 41.1 to 41.6 to 17.0 (H2:HD:D2). This indicates that there is catalytic exchange, and that the complex is able to cleave the C-D bonds of the solvent.
In another study by Foust et al., when TaCp2H3 was photolyzed for 36 h at 15 °C in benzene-d6, analysis of the evolved gases revealed that there was a mixture of H2, HD, and D2. If carbon monoxide was present in the reaction with toluene as a solvent, the CO containing product TaCp2(CO)H was formed through the intermediate species TaCp2H.
Activation of Csp3-H bonds by oxidative addition
Neufeldt et al. explored the activation of aliphatic C-H bonds by TaCp2H3 and related monosubstituted cyclopentadienyl rings experimentally and computationally. In order to go through oxidative addition, there must be an initial loss of H2 from TaCp2H3. Then, the monohydride complex can form a π-complex with an unsaturated solvent, such as benzene. Finally, the complex oxidatively adds to the C-H bond. Intramolecular and intermolecular C-H activation was found to be possible.
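The sequence described above can be summarized schematically (ligand formulas abbreviated, Cp = η5-C5H5; this is a sketch of the proposed steps, not a balanced mechanism with energetics):

```latex
\begin{align*}
  \mathrm{Cp_2TaH_3}
    \;&\xrightarrow{\;-\,\mathrm{H_2}\;}\;
    \mathrm{Cp_2TaH}
    && \text{loss of } \mathrm{H_2} \\
  \mathrm{Cp_2TaH} + \mathrm{C_6H_6}
    \;&\longrightarrow\;
    \mathrm{Cp_2TaH}(\pi\text{-}\mathrm{C_6H_6})
    && \pi\text{-complexation of the arene} \\
  \mathrm{Cp_2TaH}(\pi\text{-}\mathrm{C_6H_6})
    \;&\longrightarrow\;
    \mathrm{Cp_2Ta(H)_2(C_6H_5)}
    && \text{oxidative addition into the C--H bond}
\end{align*}
```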
A σ-complex will form instead if the solvent used is aliphatic, such as octane. The authors observed a change in the hydride NMR signals due to H/D exchange when TaCp2H3 was heated to 120 °C for 48 h in octane-d18 and methylcyclohexane-d14.
The loss of another hydrogen molecule from the products can lead to β-hydride elimination, which forms complexes of the TaCp2(H)L, with L being an unsaturated π-ligand, having their own reactivity.
Phosphine insertion
The first phosphido derivative of tantalocene was obtained by the insertion of ClPPh2 into the Ta-H bond, resulting in the precipitation of the white ionic compound [TaCp2H2(PHPh2)]Cl. Deprotonation of this compound results in pale yellow crystals of the dihydride phosphido complex TaCp2H2PPh2. Through X-ray diffraction studies, the Ta-P bond distance was 2.595(3) Å, which is typical of a single bond between tantalum and phosphorus.
ClPPh2 has been shown to insert into the niobium analogue, NbCp2H3. However, the deprotonation step results in the monohydride phosphido complex NbCp2H(PHPh2) instead. The authors of this article theorize that stabilization of hydrido phosphide complexes of the third row transition metals is due to higher M-H bond energy when compared to those of the second row.
Lewis acid-base adducts
TaCp2H3 can form Lewis acid-base adducts with AlEt3, GaEt3, ZnEt2, and CdEt2 at the unique hydride. Unlike Lewis acids such as AlEt3 in Ziegler–Natta catalysts, which promote olefin reactions, triethylaluminium here seems to deactivate the hydride ligand toward ethylene insertion.
References
Organotantalum compounds
Cyclopentadienyl complexes
Hydrido complexes | Tantalocene trihydride | Chemistry | 1,516 |
18,079 | https://en.wikipedia.org/wiki/Leonardo%20da%20Vinci | Leonardo di ser Piero da Vinci (15 April 1452 – 2 May 1519) was an Italian polymath of the High Renaissance who was active as a painter, draughtsman, engineer, scientist, theorist, sculptor, and architect. While his fame initially rested on his achievements as a painter, he has also become known for his notebooks, in which he made drawings and notes on a variety of subjects, including anatomy, astronomy, botany, cartography, painting, and palaeontology. Leonardo is widely regarded to have been a genius who epitomised the Renaissance humanist ideal, and his collective works comprise a contribution to later generations of artists matched only by that of his younger contemporary Michelangelo.
Born out of wedlock to a successful notary and a lower-class woman in, or near, Vinci, he was educated in Florence by the Italian painter and sculptor Andrea del Verrocchio. He began his career in the city, but then spent much time in the service of Ludovico Sforza in Milan. Later, he worked in Florence and Milan again, as well as briefly in Rome, all while attracting a large following of imitators and students. Upon the invitation of Francis I, he spent his last three years in France, where he died in 1519. Since his death, there has not been a time where his achievements, diverse interests, personal life, and empirical thinking have failed to incite interest and admiration, making him a frequent namesake and subject in culture.
Leonardo is identified as one of the greatest painters in the history of Western art and is often credited as the founder of the High Renaissance. Despite having many lost works and fewer than 25 attributed major works – including numerous unfinished works – he created some of the most influential paintings in the Western canon. The Mona Lisa is his best known work and is the world's most famous individual painting. The Last Supper is the most reproduced religious painting of all time and his Vitruvian Man drawing is also regarded as a cultural icon. In 2017, Salvator Mundi, attributed in whole or part to Leonardo, was sold at auction for , setting a new record for the most expensive painting ever sold at public auction.
Revered for his technological ingenuity, he conceptualised flying machines, a type of armoured fighting vehicle, concentrated solar power, a ratio machine that could be used in an adding machine, and the double hull. Relatively few of his designs were constructed or were even feasible during his lifetime, as the modern scientific approaches to metallurgy and engineering were only in their infancy during the Renaissance. Some of his smaller inventions, however, entered the world of manufacturing unheralded, such as an automated bobbin winder and a machine for testing the tensile strength of wire. He made substantial discoveries in anatomy, civil engineering, hydrodynamics, geology, optics, and tribology, but he did not publish his findings and they had little to no direct influence on subsequent science.
Biography
Early life (1452–1472)
Birth and background
Leonardo da Vinci, properly named Leonardo di ser Piero da Vinci ("Leonardo, son of ser Piero from Vinci"), was born on 15 April 1452 in, or close to, the Tuscan hill town of Vinci, 20 miles from Florence. He was born out of wedlock to Piero da Vinci (Ser Piero da Vinci d'Antonio di ser Piero di ser Guido; 1426–1504), a Florentine legal notary, and Caterina di Meo Lippi (), from the lower class. It remains uncertain where Leonardo was born; the traditional account, from a local oral tradition recorded by the historian Emanuele Repetti, is that he was born in Anchiano, a country hamlet that would have offered sufficient privacy for the illegitimate birth, though it is still possible he was born in a house in Florence that Ser Piero almost certainly had. Leonardo's parents both married separately the year after his birth. Caterina – who later appears in Leonardo's notes as only "Caterina" or "Catelina" – is usually identified as the Caterina Buti del Vacca, who married the local artisan Antonio di Piero Buti del Vacca, nicknamed . Having been betrothed to her the previous year, Ser Piero married Albiera Amadori and after her death in 1464, went on to have three subsequent marriages. From all the marriages, Leonardo eventually had 16 half-siblings (of whom 11 survived infancy) who were much younger than he (the last was born when Leonardo was 46 years old) and with whom he had very little contact.
Very little is known about Leonardo's childhood and much is shrouded in myth, partially because of his biography in the frequently apocryphal Lives of the Most Excellent Painters, Sculptors, and Architects (1550) by 16th-century art historian Giorgio Vasari. Tax records indicate that by at least 1457 he lived in the household of his paternal grandfather, Antonio da Vinci, but it is possible that he spent the years before then in the care of his mother in Vinci, either Anchiano or Campo Zeppi in the parish of San Pantaleone. He is thought to have been close to his uncle, Francesco da Vinci, but his father was probably in Florence most of the time. Ser Piero, who was the descendant of a long line of notaries, established an official residence in Florence by at least 1469 and had a successful career. Despite his family history, Leonardo only received a basic and informal education in (vernacular) writing, reading, and mathematics; possibly because his artistic talents were recognised early, so his family decided to focus their attention there.
Later in life, Leonardo recorded his earliest memory, now in the Codex Atlanticus. While writing on the flight of birds, he recalled as an infant when a kite came to his cradle and opened his mouth with its tail; commentators still debate whether the anecdote was an actual memory or a fantasy.
Verrocchio's workshop
In the mid-1460s, Leonardo's family moved to Florence, which at the time was the centre of Christian Humanist thought and culture. Around the age of 14, he became a garzone (studio boy) in the workshop of Andrea del Verrocchio, who was the leading Florentine painter and sculptor of his time. This was about the time of the death of Verrocchio's master, the great sculptor Donatello. Leonardo became an apprentice by the age of 17 and remained in training for seven years. Other famous painters apprenticed in the workshop or associated with it include Ghirlandaio, Perugino, Botticelli, and Lorenzo di Credi. Leonardo was exposed to both theoretical training and a wide range of technical skills, including drafting, chemistry, metallurgy, metal working, plaster casting, leather working, mechanics, and woodwork, as well as the artistic skills of drawing, painting, sculpting, and modelling.
Leonardo was a contemporary of Botticelli, Ghirlandaio and Perugino, who were all slightly older than he was. He would have met them at the workshop of Verrocchio or at the Platonic Academy of the Medici. Florence was ornamented by the works of artists such as Donatello's contemporaries Masaccio, whose figurative frescoes were imbued with realism and emotion, and Ghiberti, whose Gates of Paradise, gleaming with gold leaf, displayed the art of combining complex figure compositions with detailed architectural backgrounds. Piero della Francesca had made a detailed study of perspective, and was the first painter to make a scientific study of light. These studies and Leon Battista Alberti's treatise De pictura were to have a profound effect on younger artists and in particular on Leonardo's own observations and artworks.
Much of the painting in Verrocchio's workshop was done by his assistants. According to Vasari, Leonardo collaborated with Verrocchio on his The Baptism of Christ (), painting the young angel holding Jesus's robe with skill so far superior to his master's that Verrocchio purportedly put down his brush and never painted again (the latter claim probably being apocryphal). The new technique of oil paint was applied to areas of the mostly tempera work, including the landscape, the rocks seen through the brown mountain stream, and much of Jesus's figure, indicating Leonardo's hand. Additionally, Leonardo may have been a model for two works by Verrocchio: the bronze statue of David in the Bargello and the archangel Raphael in Tobias and the Angel.
Vasari tells a story of Leonardo as a very young man: a local peasant made himself a round buckler shield and requested that Ser Piero have it painted for him. Leonardo, inspired by the story of Medusa, responded with a painting of a monster spitting fire that was so terrifying that his father bought a different shield to give to the peasant and sold Leonardo's to a Florentine art dealer for 100 ducats, who in turn sold it to the Duke of Milan.
First Florentine period (1472 – c. 1482)
By 1472, at the age of 20, Leonardo qualified as a master in the Guild of Saint Luke, the guild of artists and doctors of medicine, but even after his father set him up in his own workshop, his attachment to Verrocchio was such that he continued to collaborate and live with him. Leonardo's earliest known dated work is a 1473 pen-and-ink drawing of the Arno valley (see below). According to Vasari, the young Leonardo was the first to suggest making the Arno river a navigable channel between Florence and Pisa.
In January 1478, Leonardo received an independent commission to paint an altarpiece for the Chapel of Saint Bernard in the Palazzo Vecchio, an indication of his independence from Verrocchio's studio. An anonymous early biographer, known as Anonimo Gaddiano, claims that in 1480 Leonardo was living with the Medici and often worked in the garden of the Piazza San Marco, Florence, where a Neoplatonic academy of artists, poets and philosophers organised by the Medici met. In March 1481, he received a commission from the monks of San Donato in Scopeto for The Adoration of the Magi. Neither of these initial commissions were completed, being abandoned when Leonardo went to offer his services to Duke of Milan Ludovico Sforza. Leonardo wrote Sforza a letter which described the diverse things that he could achieve in the fields of engineering and weapon design, and mentioned that he could paint. He brought with him a silver string instrument – either a lute or lyre – in the form of a horse's head.
With Alberti, Leonardo visited the home of the Medici and through them came to know the older Humanist philosophers of whom Marsilio Ficino, proponent of Neoplatonism; Cristoforo Landino, writer of commentaries on Classical writings, and John Argyropoulos, teacher of Greek and translator of Aristotle were the foremost. Also associated with the Platonic Academy of the Medici was Leonardo's contemporary, the brilliant young poet and philosopher Pico della Mirandola. In 1482, Leonardo was sent as an ambassador by Lorenzo de' Medici to Ludovico il Moro, who ruled Milan between 1479 and 1499.
First Milanese period (c. 1482–1499)
Leonardo worked in Milan from 1482 until 1499. He was commissioned to paint the Virgin of the Rocks for the Confraternity of the Immaculate Conception and The Last Supper for the monastery of Santa Maria delle Grazie. In the spring of 1485, Leonardo travelled to Hungary (on behalf of Sforza) to meet King Matthias Corvinus, and was commissioned by him to paint a Madonna. In 1490 he was called as a consultant, together with Francesco di Giorgio Martini, for the building site of the cathedral of Pavia and was struck by the equestrian statue of Regisole, of which he left a sketch. Leonardo was employed on many other projects for Sforza, such as preparation of floats and pageants for special occasions; a drawing of, and wooden model for, a competition to design the cupola for Milan Cathedral; and a model for a huge equestrian monument to Ludovico's predecessor Francesco Sforza. This would have surpassed in size the only two large equestrian statues of the Renaissance, Donatello's Gattamelata in Padua and Verrocchio's Bartolomeo Colleoni in Venice, and became known as the Gran Cavallo. Leonardo completed a model for the horse and made detailed plans for its casting, but in November 1494, Ludovico gave the metal to his brother-in-law to be used for a cannon to defend the city from Charles VIII of France.
Contemporary correspondence records that Leonardo and his assistants were commissioned by the Duke of Milan to paint the Sala delle Asse in the Sforza Castle, 1498. The project became a trompe-l'œil decoration that made the great hall appear to be a pergola created by the interwoven limbs of sixteen mulberry trees, whose canopy included an intricate labyrinth of leaves and knots on the ceiling.
Second Florentine period (1500–1508)
When Ludovico Sforza was overthrown by France in 1500, Leonardo fled Milan for Venice, accompanied by his assistant Salaì and friend, the mathematician Luca Pacioli. In Venice, Leonardo was employed as a military architect and engineer, devising methods to defend the city from naval attack. On his return to Florence in 1500, he and his household were guests of the Servite monks at the monastery of Santissima Annunziata and were provided with a workshop where, according to Vasari, Leonardo created the cartoon of The Virgin and Child with Saint Anne and Saint John the Baptist, a work that won such admiration that "men [and] women, young and old" flocked to see it "as if they were going to a solemn festival."
In Cesena in 1502, Leonardo entered the service of Cesare Borgia, the son of Pope Alexander VI, acting as a military architect and engineer and travelling throughout Italy with his patron. Leonardo created a map of Cesare Borgia's stronghold, a town plan of Imola, in order to win his patronage. Upon seeing it, Cesare hired Leonardo as his chief military engineer and architect. Later in the year, Leonardo produced another map for his patron, one of Chiana Valley, Tuscany, so as to give his patron a better overlay of the land and greater strategic position. He created this map in conjunction with his other project of constructing a dam from the sea to Florence, in order to allow a supply of water to sustain the canal during all seasons.
Leonardo had left Borgia's service and returned to Florence by early 1503, where he rejoined the Guild of Saint Luke on 18 October of that year. By this same month, Leonardo had begun working on a portrait of Lisa del Giocondo, the model for the Mona Lisa, which he would continue working on until his twilight years. In January 1504, he was part of a committee formed to recommend where Michelangelo's statue of David should be placed. He then spent two years in Florence designing and painting a mural of The Battle of Anghiari for the Signoria, with Michelangelo designing its companion piece, The Battle of Cascina.
In 1506, Leonardo was summoned to Milan by Charles II d'Amboise, the acting French governor of the city. There, Leonardo took on another pupil, Count Francesco Melzi, the son of a Lombard aristocrat, who is considered to have been his favourite student. The Council of Florence wished Leonardo to return promptly to finish The Battle of Anghiari, but he was given leave at the behest of Louis XII, who considered commissioning the artist to make some portraits. Leonardo may have commenced a project for an equestrian figure of d'Amboise; a wax model attributed to him survives and would be the only extant example of Leonardo's sculpture, but the attribution is not widely accepted. Leonardo was otherwise free to pursue his scientific interests. Many of Leonardo's most prominent pupils either knew or worked with him in Milan, including Bernardino Luini, Giovanni Antonio Boltraffio, and Marco d'Oggiono. In 1507, Leonardo was in Florence sorting out a dispute with his brothers over the estate of his father, who had died in 1504.
Second Milanese period (1508–1513)
By 1508, Leonardo was back in Milan, living in his own house in Porta Orientale in the parish of Santa Babila.
In 1512, Leonardo was working on plans for an equestrian monument for Gian Giacomo Trivulzio, but this was prevented by an invasion of a confederation of Swiss, Spanish and Venetian forces, which drove the French from Milan. Leonardo stayed in the city, spending several months in 1513 at the Medici's Vaprio d'Adda villa.
Rome and France (1513–1519)
In March 1513, Lorenzo de' Medici's son Giovanni assumed the papacy (as Leo X); Leonardo went to Rome that September, where he was received by the pope's brother Giuliano. From September 1513 to 1516, Leonardo spent much of his time living in the Belvedere Courtyard in the Apostolic Palace, where Michelangelo and Raphael were both active. Leonardo was given an allowance of 33 ducats a month and, according to Vasari, decorated a lizard with scales dipped in quicksilver. The pope gave him a painting commission of unknown subject matter, but cancelled it when the artist set about developing a new kind of varnish. Leonardo became ill, in what may have been the first of multiple strokes leading to his death. He practised botany in the Vatican Gardens, and was commissioned to make plans for the Pope's proposed draining of the Pontine Marshes. He also dissected cadavers, making notes for a treatise on vocal cords; these he gave to an official in hopes of regaining the Pope's favour, but he was unsuccessful.
In October 1515, King Francis I of France recaptured Milan. On 21 March 1516 Antonio Maria Pallavicini, the French ambassador to the Holy See, received a letter sent from Lyon a week previously by the royal advisor Guillaume Gouffier, seigneur de Bonnivet, containing the French king's instructions to assist Leonardo in his relocation to France and to inform the artist that the King was eagerly awaiting his arrival. Pallavicini was also asked to reassure Leonardo that he would be well received at court, both by the King and by his mother, Louise of Savoy. Leonardo entered Francis's service later that year, and was given the use of the manor house Clos Lucé near the King's residence at the royal Château d'Amboise. He was frequently visited by Francis, and drew plans for an immense castle town the King intended to erect at Romorantin. He also made a mechanical lion, which during a pageant walked towards the King and – upon being struck by a wand – opened its chest to reveal a cluster of lilies.
Leonardo was accompanied during this time by his friend and apprentice Francesco Melzi, and was supported by a pension totalling 10,000 scudi. At some point, Melzi drew a portrait of Leonardo; the only others known from his lifetime were a sketch by an unknown assistant on the back of one of Leonardo's studies and a drawing by Giovanni Ambrogio Figino depicting an elderly Leonardo with his right arm wrapped in clothing. The latter, in addition to the record of an October 1517 visit by Louis d'Aragon, confirms an account of Leonardo's right hand being paralytic when he was 65, which may indicate why he left works such as the Mona Lisa unfinished. He continued to work at some capacity until eventually becoming ill and bedridden for several months.
Death
Leonardo died at Clos Lucé on 2 May 1519 at the age of 67, possibly of a stroke. Francis I had become a close friend. Vasari describes Leonardo as lamenting on his deathbed, full of repentance, that "he had offended against God and men by failing to practice his art as he should have done." Vasari states that in his last days, Leonardo sent for a priest to make his confession and to receive the Holy Sacrament. Vasari also records that the King held Leonardo's head in his arms as he died, although this story may be legend rather than fact. In accordance with his will, sixty beggars carrying tapers followed Leonardo's casket. Melzi was the principal heir and executor, receiving, as well as money, Leonardo's paintings, tools, library and personal effects. Leonardo's other long-time pupil and companion, Salaì, and his servant Baptista de Vilanis, each received half of Leonardo's vineyards. His brothers received land, and his serving woman received a fur-lined cloak. On 12 August 1519, Leonardo's remains were interred in the Collegiate Church of Saint Florentin at the Château d'Amboise.
Some 20 years after Leonardo's death, Francis was reported by the goldsmith and sculptor Benvenuto Cellini as saying: "There had never been another man born in the world who knew as much as Leonardo, not so much about painting, sculpture and architecture, as that he was a very great philosopher."
Salaì, or Il Salaino ("The Little Unclean One", i.e., the devil), entered Leonardo's household in 1490 as an assistant. After only a year, Leonardo made a list of his misdemeanours, calling him "a thief, a liar, stubborn, and a glutton," after he had made off with money and valuables on at least five occasions and spent a fortune on clothes. Nevertheless, Leonardo treated him with great indulgence, and he remained in Leonardo's household for the next thirty years. Salaì executed a number of paintings under the name of Andrea Salaì, but although Vasari claims that Leonardo "taught him many things about painting," his work is generally considered to be of less artistic merit than others among Leonardo's pupils, such as Marco d'Oggiono and Boltraffio.
At the time of his death in 1524, Salaì owned a painting referred to as Joconda in a posthumous inventory of his belongings; it was assessed at 505 lire, an exceptionally high valuation for a small panel portrait.
Personal life
Despite the thousands of pages Leonardo left in notebooks and manuscripts, he scarcely made reference to his personal life.
Within Leonardo's lifetime, his extraordinary powers of invention, his "great physical beauty" and "infinite grace," as described by Vasari, as well as all other aspects of his life, attracted the curiosity of others. One such aspect was his love of animals, which likely included vegetarianism and, according to Vasari, a habit of purchasing caged birds and releasing them.
Leonardo had many friends who are now notable either in their fields or for their historical significance, including mathematician Luca Pacioli, with whom he collaborated on the book Divina proportione in the 1490s. Leonardo appears to have had no close relationships with women except for his friendship with Cecilia Gallerani and the two Este sisters, Beatrice and Isabella. While on a journey that took him through Mantua, he drew a portrait of Isabella that appears to have been used to create a painted portrait, now lost.
Beyond friendship, Leonardo kept his private life secret. His sexuality has been the subject of satire, analysis, and speculation. This trend began in the mid-16th century and was revived in the 19th and 20th centuries, most notably by Sigmund Freud in his Leonardo da Vinci, A Memory of His Childhood. Leonardo's most intimate relationships were perhaps with his pupils Salaì and Melzi. Melzi, writing to inform Leonardo's brothers of his death, described Leonardo's feelings for his pupils as both loving and passionate. It has been claimed since the 16th century that these relationships were of a sexual or erotic nature. Walter Isaacson in his biography of Leonardo makes explicit his opinion that the relations with Salaì were intimate and homosexual.
Earlier in Leonardo's life, court records of 1476, when he was aged twenty-four, show that Leonardo and three other young men were charged with sodomy in an incident involving a known male prostitute. The charges were dismissed for lack of evidence, and there is speculation that since one of the accused, Lionardo de Tornabuoni, was related to Lorenzo de' Medici, the family exerted its influence to secure the dismissal. Since that date much has been written about his presumed homosexuality and its role in his art, particularly in the androgyny and eroticism manifested in Saint John the Baptist and Bacchus and more explicitly in a number of erotic drawings.
Paintings
Despite the recent awareness and admiration of Leonardo as a scientist and inventor, for the better part of four hundred years his fame rested on his achievements as a painter. A handful of works that are either authenticated or attributed to him have been regarded as among the great masterpieces. These paintings are famous for a variety of qualities that have been much imitated by students and discussed at great length by connoisseurs and critics. By the 1490s Leonardo had already been described as a "Divine" painter.
Among the qualities that make Leonardo's work unique are his innovative techniques for laying on the paint; his detailed knowledge of anatomy, light, botany and geology; his interest in physiognomy and the way humans register emotion in expression and gesture; his innovative use of the human form in figurative composition; and his use of subtle gradation of tone. All these qualities come together in his most famous painted works, the Mona Lisa, the Last Supper, and the Virgin of the Rocks.
Early works
Leonardo first gained attention for his work on the Baptism of Christ, painted in conjunction with Verrocchio. Two other paintings appear to date from his time at Verrocchio's workshop, both of which are Annunciations. One is small, in the long horizontal format of a "predella", made to go at the base of a larger composition, a painting by Lorenzo di Credi from which it has become separated. The other is a much larger work. In both Annunciations, Leonardo used a formal arrangement, like two well-known pictures by Fra Angelico of the same subject, of the Virgin Mary sitting or kneeling to the right of the picture, approached from the left by an angel in profile, with a rich flowing garment, raised wings and bearing a lily. Although previously attributed to Ghirlandaio, the larger work is now generally attributed to Leonardo.
In the smaller painting, Mary averts her eyes and folds her hands in a gesture that symbolised submission to God's will. Mary is not submissive, however, in the larger piece. The girl, interrupted in her reading by this unexpected messenger, puts a finger in her bible to mark the place and raises her hand in a formal gesture of greeting or surprise. This calm young woman appears to accept her role as the Mother of God, not with resignation but with confidence. In this painting, the young Leonardo presents the humanist face of the Virgin Mary, recognising humanity's role in God's incarnation.
Paintings of the 1480s
In the 1480s, Leonardo received two very important commissions and commenced another work that was of ground-breaking importance in terms of composition. Two of the three were never finished, and the third took so long that it was subject to lengthy negotiations over completion and payment.
One of these paintings was Saint Jerome in the Wilderness, which Bortolon associates with a difficult period of Leonardo's life, as evidenced in his diary: "I thought I was learning to live; I was only learning to die." Although the painting is barely begun, the composition can be seen and is very unusual. Jerome, as a penitent, occupies the middle of the picture, set on a slight diagonal and viewed somewhat from above. His kneeling form takes on a trapezoid shape, with one arm stretched to the outer edge of the painting and his gaze looking in the opposite direction. J. Wasserman points out the link between this painting and Leonardo's anatomical studies. Across the foreground sprawls his symbol, a great lion whose body and tail make a double spiral across the base of the picture space. The other remarkable feature is the sketchy landscape of craggy rocks against which the figure is silhouetted.
The daring display of figure composition, the landscape elements and personal drama also appear in the great unfinished masterpiece, the Adoration of the Magi, a commission from the Monks of San Donato a Scopeto. It is a complex composition, for which Leonardo did numerous drawings and preparatory studies, including a detailed one in linear perspective of the ruined classical architecture that forms part of the background. In 1482 Leonardo went to Milan at the behest of Lorenzo de' Medici in order to win favour with Ludovico il Moro, and the painting was abandoned.
The third important work of this period is the Virgin of the Rocks, commissioned in Milan for the Confraternity of the Immaculate Conception. The painting, to be done with the assistance of the de Predis brothers, was to fill a large complex altarpiece. Leonardo chose to paint an apocryphal moment of the infancy of Christ when the infant John the Baptist, in the protection of an angel, met the Holy Family on the road to Egypt. The painting demonstrates an eerie beauty as the graceful figures kneel in adoration around the infant Christ in a wild landscape of tumbling rock and whirling water. While the painting is quite large, it is not nearly as complex as the painting ordered by the monks of San Donato, having only four figures rather than about fifty and a rocky landscape rather than architectural details. The painting was eventually finished; in fact, two versions of the painting were finished: one remained at the chapel of the Confraternity, while Leonardo took the other to France. The Brothers did not get their painting, however, nor the de Predis their payment, until the next century.
Leonardo's most remarkable portrait of this period is the Lady with an Ermine, presumed to be Cecilia Gallerani, lover of Ludovico Sforza. The painting is characterised by the pose of the figure with the head turned at a very different angle to the torso, unusual at a date when many portraits were still rigidly in profile. The ermine plainly carries symbolic meaning, relating either to the sitter, or to Ludovico who belonged to the prestigious Order of the Ermine.
Paintings of the 1490s
Leonardo's most famous painting of the 1490s is The Last Supper, commissioned for the refectory of the Convent of Santa Maria delle Grazie in Milan. It represents the last meal shared by Jesus with his disciples before his capture and death, and shows the moment when Jesus has just said "one of you will betray me", and the consternation that this statement caused.
The writer Matteo Bandello observed Leonardo at work and wrote that some days he would paint from dawn till dusk without stopping to eat and then not paint for three or four days at a time. This was beyond the comprehension of the prior of the convent, who hounded him until Leonardo asked Ludovico to intervene. Vasari describes how Leonardo, troubled over his ability to adequately depict the faces of Christ and the traitor Judas, told the duke that he might be obliged to use the prior as his model.
The painting was acclaimed as a masterpiece of design and characterisation, but it deteriorated rapidly, so that within a hundred years it was described by one viewer as "completely ruined." Leonardo, instead of using the reliable technique of fresco, had used tempera over a ground that was mainly gesso, resulting in a surface subject to mould and to flaking. Despite this, the painting remains one of the most reproduced works of art; countless copies have been made in various mediums.
Toward the end of this period, in 1498 Leonardo's trompe-l'œil decoration of the Sala delle Asse was painted for the Duke of Milan in the Castello Sforzesco.
Paintings of the 1500s
In 1505, Leonardo was commissioned to paint The Battle of Anghiari in the Salone dei Cinquecento (Hall of the Five Hundred) in the Palazzo Vecchio, Florence. Leonardo devised a dynamic composition depicting four men riding raging war horses engaged in a battle for possession of a standard, at the Battle of Anghiari in 1440. Michelangelo was assigned the opposite wall to depict the Battle of Cascina. Leonardo's painting deteriorated rapidly and is now known from a copy by Rubens.
Among the works created by Leonardo in the 16th century is the small portrait known as the Mona Lisa or La Gioconda, the laughing one. In the present era, it is arguably the most famous painting in the world. Its fame rests, in particular, on the elusive smile on the woman's face, its mysterious quality perhaps due to the subtly shadowed corners of the mouth and eyes such that the exact nature of the smile cannot be determined. The shadowy quality for which the work is renowned came to be called "sfumato", or Leonardo's smoke. Vasari wrote that the smile was "so pleasing that it seems more divine than human, and it was considered a wondrous thing that it was as lively as the smile of the living original."
Other characteristics of the painting are the unadorned dress, in which the eyes and hands have no competition from other details; the dramatic landscape background, in which the world seems to be in a state of flux; the subdued colouring; and the extremely smooth nature of the painterly technique, employing oils laid on much like tempera, and blended on the surface so that the brushstrokes are indistinguishable. Vasari expressed that the painting's quality would make even "the most confident master ... despair and lose heart." The perfect state of preservation and the fact that there is no sign of repair or overpainting is rare in a panel painting of this date.
In the painting Virgin and Child with Saint Anne, the composition again picks up the theme of figures in a landscape, which Wasserman describes as "breathtakingly beautiful" and harkens back to the Saint Jerome with the figure set at an oblique angle. What makes this painting unusual is that there are two obliquely set figures superimposed. Mary is seated on the knee of her mother, Saint Anne. She leans forward to restrain the Christ Child as he plays roughly with a lamb, the sign of his own impending sacrifice. This painting, which was copied many times, influenced Michelangelo, Raphael, and Andrea del Sarto, and through them Pontormo and Correggio. The trends in composition were adopted in particular by the Venetian painters Tintoretto and Veronese.
Drawings
Leonardo was a prolific draughtsman, keeping journals full of small sketches and detailed drawings recording all manner of things that took his attention. As well as the journals there exist many studies for paintings, some of which can be identified as preparatory to particular works such as The Adoration of the Magi, The Virgin of the Rocks and The Last Supper. His earliest dated drawing is a Landscape of the Arno Valley, 1473, which shows the river, the mountains, Montelupo Castle and the farmlands beyond it in great detail.
Among his famous drawings are the Vitruvian Man, a study of the proportions of the human body; the Head of an Angel, for The Virgin of the Rocks in the Louvre; a botanical study of Star of Bethlehem; and a large drawing (160×100 cm) in black chalk on coloured paper of The Virgin and Child with Saint Anne and Saint John the Baptist in the National Gallery, London. This drawing employs the subtle sfumato technique of shading, in the manner of the Mona Lisa. It is thought that Leonardo never made a painting from it, the closest similarity being to The Virgin and Child with Saint Anne in the Louvre.
Other drawings of interest include numerous studies generally referred to as "caricatures" because, although exaggerated, they appear to be based upon observation of live models. Vasari relates that Leonardo would look for interesting faces in public to use as models for some of his work. There are numerous studies of beautiful young men, often associated with Salaì, with the rare and much admired facial feature, the so-called "Grecian profile". These faces are often contrasted with that of a warrior. Salaì is often depicted in fancy-dress costume. Leonardo is known to have designed sets for pageants with which these may be associated. Other, often meticulous, drawings show studies of drapery. A marked development in Leonardo's ability to draw drapery occurred in his early works. Another often-reproduced drawing is a macabre sketch that was done by Leonardo in Florence in 1479 showing the body of Bernardo Baroncelli, hanged in connection with the murder of Giuliano, brother of Lorenzo de' Medici, in the Pazzi conspiracy. In his notes, Leonardo recorded the colours of the robes that Baroncelli was wearing when he died.
Like the two contemporary architects Donato Bramante (who designed the Belvedere Courtyard) and Antonio da Sangallo the Elder, Leonardo experimented with designs for centrally planned churches, a number of which appear in his journals, as both plans and views, although none was ever realised.
Journals and notes
Renaissance humanism recognised no mutually exclusive polarities between the sciences and the arts, and Leonardo's studies in science and engineering are sometimes considered as impressive and innovative as his artistic work. These studies were recorded in 13,000 pages of notes and drawings, which fuse art and natural philosophy (the forerunner of modern science). They were made and maintained daily throughout Leonardo's life and travels, as he made continual observations of the world around him. Leonardo's notes and drawings display an enormous range of interests and preoccupations, some as mundane as lists of groceries and people who owed him money and some as intriguing as designs for wings and shoes for walking on water. There are compositions for paintings, studies of details and drapery, studies of faces and emotions, of animals, babies, dissections, plant studies, rock formations, whirlpools, war machines, flying machines and architecture.
These notebooks – originally loose papers of different types and sizes – were largely entrusted to Leonardo's pupil and heir Francesco Melzi after the master's death. These were to be published, a task of overwhelming difficulty because of its scope and Leonardo's idiosyncratic writing. Some of Leonardo's drawings were copied by an anonymous Milanese artist for a planned treatise on art. After Melzi's death in 1570, the collection passed to his son, the lawyer Orazio, who initially took little interest in the journals. In 1587, a Melzi household tutor named Lelio Gavardi took 13 of the manuscripts to Pisa; there, the architect Giovanni Magenta reproached Gavardi for having taken the manuscripts illicitly and returned them to Orazio. Having many more such works in his possession, Orazio gifted the volumes to Magenta. News spread of these lost works of Leonardo's, and Orazio retrieved seven of the 13 manuscripts, which he then gave to Pompeo Leoni for publication in two volumes; one of these was the Codex Atlanticus. The other six works had been distributed to a few others. After Orazio's death, his heirs sold the rest of Leonardo's possessions, and thus began their dispersal.
Some works have found their way into major collections such as the Royal Library at Windsor Castle, the Louvre, the Victoria and Albert Museum, the Biblioteca Ambrosiana in Milan, which holds the 12-volume Codex Atlanticus, and the British Library in London, which has put a selection from the Codex Arundel (BL Arundel MS 263) online. Works have also been at Holkham Hall, the Metropolitan Museum of Art, and in the private hands of John Nicholas Brown I and Robert Lehman. The Codex Leicester is the only privately owned major scientific work of Leonardo; it is owned by Bill Gates and displayed once a year in different cities around the world.
Most of Leonardo's writings are in mirror-image cursive. Since Leonardo wrote with his left hand, it was probably easier for him to write from right to left. Leonardo used a variety of shorthand and symbols, and states in his notes that he intended to prepare them for publication. In many cases a single topic is covered in detail in both words and pictures on a single sheet, together conveying information that would not be lost if the pages were published out of order. Why they were not published during Leonardo's lifetime is unknown.
Science and inventions
Leonardo's approach to science was observational: he tried to understand a phenomenon by describing and depicting it in utmost detail and did not emphasise experiments or theoretical explanation. Since he lacked formal education in Latin and mathematics, contemporary scholars mostly ignored Leonardo the scientist, although he did teach himself Latin. His keen observations in many areas were noted, such as when he wrote "Il sole non si move." ("The Sun does not move.")
In the 1490s he studied mathematics under Luca Pacioli and prepared a series of drawings of regular solids in a skeletal form to be engraved as plates for Pacioli's book Divina proportione, published in 1509. While living in Milan, he studied light from the summit of Monte Rosa. Scientific writings in his notebook on fossils have been considered as influential on early palaeontology.
The content of his journals suggests that he was planning a series of treatises on a variety of subjects. A coherent treatise on anatomy is said to have been observed during a visit by Cardinal Louis d'Aragon's secretary in 1517. Aspects of his work on the studies of anatomy, light and the landscape were assembled for publication by Melzi and eventually published as A Treatise on Painting in France and Italy in 1651 and Germany in 1724, with engravings based upon drawings by the Classical painter Nicolas Poussin. According to Arasse, the treatise, which in France went into 62 editions in fifty years, caused Leonardo to be seen as "the precursor of French academic thought on art."
While Leonardo's experimentation followed scientific methods, a recent and exhaustive analysis of Leonardo as a scientist by Fritjof Capra argues that Leonardo was a fundamentally different kind of scientist from Galileo, Newton and other scientists who followed him in that, as a "Renaissance Man", his theorising and hypothesising integrated the arts and particularly painting.
Anatomy and physiology
Leonardo started his study in the anatomy of the human body under the apprenticeship of Verrocchio, who demanded that his students develop a deep knowledge of the subject. As an artist, he quickly became master of topographic anatomy, drawing many studies of muscles, tendons and other visible anatomical features.
As a successful artist, Leonardo was given permission to dissect human corpses at the Hospital of Santa Maria Nuova in Florence and later at hospitals in Milan and Rome. From 1510 to 1511 he collaborated in his studies with the doctor Marcantonio della Torre, professor of Anatomy at the University of Pavia. Leonardo made over 240 detailed drawings and wrote about 13,000 words toward a treatise on anatomy. Only a small amount of the material on anatomy was published in Leonardo's Treatise on painting. During the time that Melzi was ordering the material into chapters for publication, they were examined by a number of anatomists and artists, including Vasari, Cellini and Albrecht Dürer, who made a number of drawings from them.
Leonardo's anatomical drawings include many studies of the human skeleton and its parts, and of muscles and sinews. He studied the mechanical functions of the skeleton and the muscular forces that are applied to it in a manner that prefigured the modern science of biomechanics. He drew the heart and vascular system, the sex organs and other internal organs, making one of the first scientific drawings of a fetus in utero. The drawings and notation are far ahead of their time, and if published would undoubtedly have made a major contribution to medical science.
Leonardo also closely observed and recorded the effects of age and of human emotion on the physiology, studying in particular the effects of rage. He drew many figures who had significant facial deformities or signs of illness. Leonardo also studied and drew the anatomy of many animals, dissecting cows, birds, monkeys, bears, and frogs, and comparing in his drawings their anatomical structure with that of humans. He also made a number of studies of horses.
Leonardo's dissections and documentation of muscles, nerves, and vessels helped to describe the physiology and mechanics of movement. He attempted to identify the source of 'emotions' and their expression. He found it difficult to incorporate the prevailing system and theories of bodily humours, but eventually he abandoned these physiological explanations of bodily functions. He made the observations that humours were not located in cerebral spaces or ventricles. He documented that the humours were not contained in the heart or the liver, and that it was the heart that defined the circulatory system. He was the first to define atherosclerosis and liver cirrhosis. He created models of the cerebral ventricles with the use of melted wax and constructed a glass aorta to observe the circulation of blood through the aortic valve by using water and grass seed to watch flow patterns.
Engineering and inventions
During his lifetime, Leonardo was also valued as an engineer. With the same rational and analytical approach that moved him to represent the human body and to investigate anatomy, Leonardo studied and designed many machines and devices. He drew their "anatomy" with unparalleled mastery, producing the first form of the modern technical drawing, including a perfected "exploded view" technique, to represent internal components. Those studies and projects collected in his codices fill more than 5,000 pages. In a letter of 1482 to the lord of Milan Ludovico il Moro, he wrote that he could create all sorts of machines both for the protection of a city and for siege. When he fled from Milan to Venice in 1499, he found employment as an engineer and devised a system of moveable barricades to protect the city from attack. In 1502, he created a scheme for diverting the flow of the Arno river, a project on which Niccolò Machiavelli also worked. He continued to contemplate the canalisation of Lombardy's plains while in Louis XII's company and of the Loire and its tributaries in the company of Francis I. Leonardo's journals include a vast number of inventions, both practical and impractical. They include musical instruments, a mechanical knight, hydraulic pumps, reversible crank mechanisms, finned mortar shells, and a steam cannon.
Leonardo was fascinated by the phenomenon of flight for much of his life, producing many studies, including Codex on the Flight of Birds (), as well as plans for several flying machines, such as a flapping ornithopter and a machine with a helical rotor. In a 2003 documentary by British television station Channel Four, titled Leonardo's Dream Machines, various designs by Leonardo, such as a parachute and a giant crossbow, were interpreted and constructed. Some of those designs proved successful, whilst others fared less well when tested. Similarly, a team of engineers built ten machines designed by Leonardo in the 2009 American television series Doing DaVinci, including a fighting vehicle and a self-propelled cart.
Research performed by Marc van den Broek revealed older prototypes for more than 100 inventions that are ascribed to Leonardo. Similarities between Leonardo's illustrations and drawings from the Middle Ages and from Ancient Greece and Rome, the Chinese and Persian Empires, and Egypt suggest that a large portion of Leonardo's inventions had been conceived before his lifetime. Leonardo's innovation was to combine different functions from existing drafts and set them into scenes that illustrated their utility. By reconstituting technical inventions he created something new.
In his notebooks, Leonardo first stated the 'laws' of sliding friction in 1493. His inspiration for investigating friction came about in part from his study of perpetual motion, which he correctly concluded was not possible. His results were never published and the friction laws were not rediscovered until 1699 by Guillaume Amontons, with whose name they are now usually associated. For this contribution, Leonardo was named as the first of the 23 "Men of Tribology" by Duncan Dowson.
Legacy
Although he had no formal academic training, many historians and scholars regard Leonardo as the prime exemplar of the "Universal Genius" or "Renaissance Man", an individual of "unquenchable curiosity" and "feverishly inventive imagination." He is widely considered one of the most diversely talented individuals ever to have lived. According to art historian Helen Gardner, the scope and depth of his interests were without precedent in recorded history, and "his mind and personality seem to us superhuman, while the man himself mysterious and remote." Scholars interpret his view of the world as being based in logic, though the empirical methods he used were unorthodox for his time.
Leonardo's fame within his own lifetime was such that the King of France carried him away like a trophy, and was claimed to have supported him in his old age and held him in his arms as he died. Interest in Leonardo and his work has never diminished. Crowds still queue to see his best-known artworks, T-shirts still bear his most famous drawing, and writers continue to hail him as a genius while speculating about his private life, as well as about what one so intelligent actually believed in.
The continued admiration that Leonardo commanded from painters, critics and historians is reflected in many other written tributes. Baldassare Castiglione, author of Il Cortegiano (The Courtier), wrote in 1528: "...Another of the greatest painters in this world looks down on this art in which he is unequalled..." while the biographer known as "Anonimo Gaddiano" wrote: "His genius was so rare and universal that it can be said that nature worked a miracle on his behalf..." Vasari, in his Lives of the Artists (1568), opens his chapter on Leonardo:
In the normal course of events many men and women are born with remarkable talents; but occasionally, in a way that transcends nature, a single person is marvellously endowed by Heaven with beauty, grace and talent in such abundance that he leaves other men far behind, all his actions seem inspired and indeed everything he does clearly comes from God rather than from human skill. Everyone acknowledged that this was true of Leonardo da Vinci, an artist of outstanding physical beauty, who displayed infinite grace in everything that he did and who cultivated his genius so brilliantly that all problems he studied he solved with ease.
The 19th century brought a particular admiration for Leonardo's genius, causing Henry Fuseli to write in 1801: "Such was the dawn of modern art, when Leonardo da Vinci broke forth with a splendour that distanced former excellence: made up of all the elements that constitute the essence of genius..." This is echoed by A. E. Rio who wrote in 1861: "He towered above all other artists through the strength and the nobility of his talents."
By the 19th century, the scope of Leonardo's notebooks was known, as well as his paintings. Hippolyte Taine wrote in 1866: "There may not be in the world an example of another genius so universal, so incapable of fulfilment, so full of yearning for the infinite, so naturally refined, so far ahead of his own century and the following centuries."
Art historian Bernard Berenson wrote in 1896:
The interest in Leonardo's genius has continued unabated; experts study and translate his writings, analyse his paintings using scientific techniques, argue over attributions and search for works which have been recorded but never found. Liana Bortolon, writing in 1967, said: The Elmer Belt Library of Vinciana is a special collection at the University of California, Los Angeles.
Twenty-first-century author Walter Isaacson based much of his biography of Leonardo on thousands of notebook entries, studying the personal notes, sketches, budget notations, and musings of the man whom he considers the greatest of innovators. Isaacson was surprised to discover a "fun, joyous" side of Leonardo in addition to his limitless curiosity and creative genius.
On the 500th anniversary of Leonardo's death, the Louvre in Paris arranged the largest ever single exhibit of his work, called Leonardo, between November 2019 and February 2020. The exhibit included over 100 paintings, drawings and notebooks, among them eleven of the paintings that Leonardo completed in his lifetime. Five of these are owned by the Louvre, but the Mona Lisa was not included because it is in such great demand among general visitors to the Louvre; it remained on display in its gallery. Vitruvian Man, however, was on display following a legal battle with its owner, the Gallerie dell'Accademia in Venice. Salvator Mundi was also not included because its Saudi owner did not agree to lease the work.
The Mona Lisa, considered Leonardo's magnum opus, is often regarded as the most famous portrait ever made. The Last Supper is the most reproduced religious painting of all time, and Leonardo's Vitruvian Man drawing is also considered a cultural icon.
More than a decade of analysis of Leonardo's genetic genealogy, conducted by Alessandro Vezzosi and Agnese Sabato, came to a conclusion in mid-2021. It was determined that the artist has 14 living male relatives. The work could also help determine the authenticity of remains thought to belong to Leonardo.
Location of remains
While Leonardo was certainly buried in the collegiate church of Saint Florentin at the Château d'Amboise on 12 August 1519, the current location of his remains is unclear. Much of Château d'Amboise was damaged during the French Revolution, leading to the church's demolition in 1802. Some of the graves were destroyed in the process, scattering the bones interred there and thereby leaving the whereabouts of Leonardo's remains subject to dispute; a gardener may have even buried some in the corner of the courtyard.
In 1863, fine-arts inspector general Arsène Houssaye received an imperial commission to excavate the site and discovered a partially complete skeleton with a bronze ring on one finger, white hair, and stone fragments bearing the inscriptions "EO", "AR", "DUS", and "VINC" interpreted as forming "Leonardus Vinci". The skull's eight teeth correspond to someone of approximately the appropriate age, and a silver shield found near the bones depicts a beardless Francis I, corresponding to the king's appearance during Leonardo's time in France.
Houssaye postulated that the unusually large skull was an indicator of Leonardo's intelligence; author Charles Nicholl describes this as a "dubious phrenological deduction". At the same time, Houssaye noted some issues with his observations, including that the feet were turned toward the high altar, a practice generally reserved for laymen, and that the skeleton seemed too short. Art historian Mary Margaret Heaton wrote in 1874 that the height would be appropriate for Leonardo. The skull was allegedly presented to Napoleon III before being returned to the Château d'Amboise, where the remains were placed in the chapel of Saint Hubert in 1874. A plaque above the tomb states that its contents are only presumed to be those of Leonardo.
It has since been theorised that the folding of the skeleton's right arm over the head may correspond to the paralysis of Leonardo's right hand. In 2016, it was announced that DNA tests would be conducted to determine whether the attribution is correct. The DNA of the remains will be compared to that of samples collected from Leonardo's work and his half-brother Domenico's descendants; it may also be sequenced.
In 2019, documents were published revealing that Houssaye had kept the ring and a lock of hair. In 1925, his great-grandson sold these to an American collector. Sixty years later, another American acquired them, leading to their being displayed at the Leonardo Museum in Vinci beginning on 2 May 2019, the 500th anniversary of the artist's death.
Notes
General
Dates of works
References
Citations
Early
Modern
Works cited
Early
Modern
Books
volume 2: a reprint of the original 1883 edition
Journals and encyclopedia articles
Further reading
See and for extensive bibliographies
External links
General
Universal Leonardo, a database of Leonardo's life and works maintained by Martin Kemp and Marina Wallace
Leonardo da Vinci on the National Gallery website
Works
Biblioteca Leonardiana, online bibliography (in Italian)
e-Leo: Archivio digitale di storia della tecnica e della scienza, archive of drawings, notes and manuscripts
Complete text and images of Richter's translation of the Notebooks
The Notebooks of Leonardo da Vinci
1452 births
1519 deaths
15th-century Italian mathematicians
15th-century Italian painters
15th-century Italian scientists
15th-century Italian sculptors
15th-century people from the Republic of Florence
16th-century Italian mathematicians
16th-century Italian painters
16th-century Italian scientists
16th-century Italian sculptors
16th-century people from the Republic of Florence
Ambassadors of the Republic of Florence
Ballistics experts
Fabulists
Painters from Florence
Italian botanical illustrators
Fluid dynamicists
History of anatomy
Italian anatomists
Italian caricaturists
Italian civil engineers
16th-century Italian inventors
Italian male painters
Italian male sculptors
Italian military engineers
Italian physiologists
Italian Renaissance humanists
Italian Renaissance painters
Italian Renaissance sculptors
Italian Roman Catholics
Italian LGBTQ painters
Italian LGBTQ sculptors
Mathematical artists
People prosecuted under anti-homosexuality laws
Philosophical theists
Physiognomists
Italian Renaissance architects
Writers who illustrated their own writing
Historical figures with ambiguous or disputed sexuality
In computer programming, one of the many ways that programming languages are colloquially classified is whether the language's type system makes it strongly typed or weakly typed (loosely typed). However, there is no precise technical definition of what the terms mean and different authors disagree about the implied meaning of the terms and the relative rankings of the "strength" of the type systems of mainstream programming languages. For this reason, writers who wish to write unambiguously about type systems often eschew the terms "strong typing" and "weak typing" in favor of specific expressions such as "type safety".
Generally, a strongly typed language has stricter typing rules at compile time, which implies that errors and exceptions are more likely to happen during compilation. Most of these rules affect variable assignment, function return values, procedure arguments and function calling. Dynamically typed languages (where type checking happens at run time) can also be strongly typed. In dynamically typed languages, values, rather than variables, have types.
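A brief illustrative sketch (not from the article) of this combination in Python, which is dynamically but strongly typed; the function name is invented for the example:

```python
# Python is dynamically typed (checks happen at run time) yet strongly
# typed: values carry their types, and incompatible operations fail.

def type_name(value):
    """The type belongs to the value, not to any variable holding it."""
    return type(value).__name__

assert type_name(42) == "int"
assert type_name("42") == "str"

# No implicit coercion between str and int: the mix raises TypeError.
try:
    outcome = "1" + 2
except TypeError:
    outcome = "rejected at run time"

assert outcome == "rejected at run time"
```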
A weakly typed language has looser typing rules and may produce unpredictable or even erroneous results or may perform implicit type conversion at runtime. A different but related concept is latent typing.
History
In 1974, Barbara Liskov and Stephen Zilles defined a strongly-typed language as one in which "whenever an object is passed from a calling function to a called function, its type must be compatible with the type declared in the called function."
In 1977, K. Jackson wrote, "In a strongly typed language each data area will have a distinct type and each process will state its communication requirements in terms of these types."
Definitions of "strong" or "weak"
A number of different language design decisions have been referred to as evidence of "strong" or "weak" typing. Many of these are more accurately understood as the presence or absence of type safety, memory safety, static type-checking, or dynamic type-checking.
"Strong typing" generally refers to use of programming language types in order to both capture invariants of the code, and ensure its correctness, and definitely exclude certain classes of programming errors. Thus there are many "strong typing" disciplines used to achieve these goals.
Implicit type conversions and "type punning"
Some programming languages make it easy to use a value of one type as if it were a value of another type. This is sometimes described as "weak typing".
For example, Aahz Maruch observes that "Coercion occurs when you have a statically typed language and you use the syntactic features of the language to force the usage of one type as if it were a different type (consider the common use of void* in C). Coercion is usually a symptom of weak typing. Conversion, on the other hand, creates a brand-new object of the appropriate type."
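The conversion side of that distinction can be sketched in Python, where crossing types requires an explicit constructor call that returns a new object (a minimal illustration, not drawn from the quoted author):

```python
# Conversion creates a brand-new object of the target type;
# the original value is left untouched.
text = "37"
number = int(text)          # explicit conversion, not coercion

assert number == 37
assert text == "37"                      # original string unchanged
assert type(number) is int and type(text) is str
```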
As another example, GCC describes this as type-punning and warns that it will break strict aliasing. Thiago Macieira discusses several problems that can arise when type-punning causes the compiler to make inappropriate optimizations.
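As an illustrative analogue (not GCC-specific), Python's struct module makes the byte-level effect of type punning visible while staying type-safe, because it copies bytes rather than aliasing memory:

```python
import struct

# Reinterpret the bit pattern of a 32-bit float as an unsigned integer.
# Unlike C-style punning through pointers, this copies bytes, so no
# strict-aliasing rules are broken.
bits = struct.unpack("<I", struct.pack("<f", 1.0))[0]

assert bits == 0x3F800000   # IEEE 754 single-precision encoding of 1.0
```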
There are many examples of languages that allow implicit type conversions, but in a type-safe manner. For example, both C++ and C# allow programs to define operators to convert a value from one type to another with well-defined semantics. When a C++ compiler encounters such a conversion, it treats the operation just like a function call. In contrast, converting a value to the C type void* is an unsafe operation that is invisible to the compiler.
Pointers
Some programming languages expose pointers as if they were numeric values, and allow users to perform arithmetic on them. These languages are sometimes referred to as "weakly typed", since pointer arithmetic can be used to bypass the language's type system.
Untagged unions
Some programming languages support untagged unions, which allow a value of one type to be viewed as if it were a value of another type.
Static type-checking
In Luca Cardelli's article Typeful Programming, a "strong type system" is described as one in which there is no possibility of an unchecked runtime type error. In other writing, the absence of unchecked run-time errors is referred to as safety or type safety; Tony Hoare's early papers call this property security.
Variation across programming languages
Some of these definitions are contradictory, others are merely conceptually independent, and still others are special cases (with additional constraints) of other, more "liberal" (less strong) definitions. Because of the wide divergence among these definitions, it is possible to defend claims about most programming languages that they are either strongly or weakly typed. For instance:
Java, Pascal, Ada, and C require variables to have a declared type, and support the use of explicit casts of arithmetic values to other arithmetic types. Java, C#, Ada, and Pascal are sometimes said to be more strongly typed than C, because C supports more kinds of implicit conversions, and allows pointer values to be explicitly cast while Java and Pascal do not. Java may be considered more strongly typed than Pascal as methods of evading the static type system in Java are controlled by the Java virtual machine's type system. C# and VB.NET are similar to Java in that respect, though they allow disabling of dynamic type checking by explicitly putting code segments in an "unsafe context". Pascal's type system has been described as "too strong", because the size of an array or string is part of its type, making some programming tasks very difficult. However, Delphi fixes this issue.
Smalltalk, Ruby, Python, and Self are all "strongly typed" in the sense that typing errors are prevented at runtime and they do little implicit type conversion, but these languages make no use of static type checking: the compiler does not check or enforce type constraint rules. The term duck typing is now used to describe the dynamic typing paradigm used by the languages in this group.
The Lisp family of languages are all "strongly typed" in the sense that typing errors are prevented at runtime. Some Lisp dialects like Common Lisp or Clojure do support various forms of type declarations and some compilers (CMU Common Lisp (CMUCL) and related) use these declarations together with type inference to enable various optimizations and limited forms of compile time type checks.
Standard ML, F#, OCaml, Haskell, Go and Rust are statically type-checked, but the compiler automatically infers a precise type for most values.
Assembly language and Forth can be characterized as untyped. There is no type checking; it is up to the programmer to ensure that data given to functions is of the appropriate type.
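The duck-typing style described above for the Smalltalk/Ruby/Python group can be sketched as follows (class and function names are invented for the example):

```python
# Duck typing: call sites check behaviour (does it have .speak()?),
# never a declared type.

class Duck:
    def speak(self):
        return "quack"

class Robot:
    def speak(self):
        return "beep"

def hear(thing):
    return thing.speak()    # any object with .speak() is acceptable

assert hear(Duck()) == "quack"
assert hear(Robot()) == "beep"

# An object lacking .speak() fails only when the call is attempted.
failed = False
try:
    hear(object())
except AttributeError:
    failed = True
assert failed
```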
See also
Comparison of programming languages
Data type includes a more thorough discussion of typing issues
Design by contract (strong typing as implicit contract form)
Latent typing
Memory safety
Type safety
Type system
Strongly-typed identifier
References
Type systems
In mathematics, a power of three is a number of the form 3^n where n is an integer, that is, the result of exponentiation with the number three as the base and the integer n as the exponent.
In base 10, every power of 3 has an even number as its second-last digit.
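This property can be spot-checked with a short script (an illustrative check, not part of the article):

```python
# Tens digit of 3**n for n = 0..49 (reading 1, 3, 9 as 01, 03, 09,
# so their tens digit is 0).
tens_digits = [(3 ** n // 10) % 10 for n in range(50)]

assert all(d % 2 == 0 for d in tens_digits)   # always even
assert tens_digits[3] == 2                    # 3**3 = 27
```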
Applications
The powers of three give the place values in the ternary numeral system.
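A minimal sketch of ternary place values (the helper name is invented for the example):

```python
def to_ternary(n):
    """Write a nonnegative integer in base 3; each place is a power of three."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 3)
        digits.append(str(r))
    return "".join(reversed(digits))

assert to_ternary(9) == "100"    # 9 = 3**2
assert to_ternary(17) == "122"   # 17 = 1*9 + 2*3 + 2*1
```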
Graph theory
In graph theory, powers of three appear in the Moon–Moser bound of 3^(n/3) on the number of maximal independent sets of an n-vertex graph, and in the time analysis of the Bron–Kerbosch algorithm for finding these sets. Several important strongly regular graphs also have a number of vertices that is a power of three, including the Brouwer–Haemers graph (81 vertices), Berlekamp–van Lint–Seidel graph (243 vertices), and Games graph (729 vertices).
Enumerative combinatorics
In enumerative combinatorics, there are 3^n signed subsets of a set of n elements. In polyhedral combinatorics, the hypercube and all other Hanner polytopes have a number of faces (not counting the empty set as a face) that is a power of three. For example, a 2-cube, or square, has 4 vertices, 4 edges and 1 face, and 4 + 4 + 1 = 9 = 3^2. Kalai's conjecture states that this is the minimum possible number of faces for a centrally symmetric polytope.
Inverse power of three lengths
In recreational mathematics and fractal geometry, inverse power-of-three lengths occur in the constructions leading to the Koch snowflake, Cantor set, Sierpinski carpet and Menger sponge, in the number of elements in the construction steps for a Sierpinski triangle, and in many formulas related to these sets. There are 3^n possible states in an n-disk Tower of Hanoi puzzle or vertices in its associated Hanoi graph. In a balance puzzle with w weighing steps, there are 3^w possible outcomes (sequences where the scale tilts left or right or stays balanced); powers of three often arise in the solutions to these puzzles, and it has been suggested that (for similar reasons) the powers of three would make an ideal system of coins.
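The count of balance-puzzle outcome sequences can be enumerated directly (an illustrative sketch):

```python
from itertools import product

# A balance puzzle with w weighing steps has 3**w outcome sequences:
# each weighing tilts left (L), tilts right (R), or balances (B).
w = 4
outcomes = list(product("LRB", repeat=w))

assert len(outcomes) == 3 ** w == 81
```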
Perfect totient numbers
In number theory, all powers of three are perfect totient numbers. The sums of distinct powers of three form a Stanley sequence, the lexicographically smallest sequence that does not contain an arithmetic progression of three elements. A conjecture of Paul Erdős states that this sequence contains no powers of two other than 1, 4, and 256.
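The perfect-totient property of powers of three (for exponents of at least 1) can be verified by brute force; the totient here is computed naively for illustration:

```python
from math import gcd

def totient(n):
    """Euler's totient, computed naively for illustration."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def is_perfect_totient(n):
    """True if n equals the sum of its iterated totients down to 1."""
    total, m = 0, n
    while m > 1:
        m = totient(m)
        total += m
    return total == n

# Every power of three (exponent >= 1) is a perfect totient number.
for e in range(1, 6):
    assert is_perfect_totient(3 ** e)

assert not is_perfect_totient(10)   # 4 + 2 + 1 = 7, not 10
```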
Graham's number
Graham's number, an enormous number arising from a proof in Ramsey theory, is (in the version popularized by Martin Gardner) a power of three.
However, the actual publication of the proof by Ronald Graham used a different number which is a power of two and much smaller.
See also
Power of 10
Power of two
Square root of 3
References
Integers
3 (number)
This is a list of muffler men, large molded fiberglass advertising icons.
Arizona
Louie the Lumberjack at Northern Arizona University in Flagstaff
Paul Bunyan at Leo's Auto and Home Supply/Don's Hot Rod Shop in Tucson
Big Ed at Cumming's Plumbing in Tucson. Originally Stamper Miner in Rapid City, South Dakota, he moved to Tucson and was refurbished in December 2015.
Big Johnson (B.J.) at Big Johnson's Store in Prescott
Arkansas
"Lee the Tow Man", custom-built for Driven Towing and Recovery in Hot Springs.
"Big John", Driven Towing and Recovery, Hot Springs. This muffler man was originally in St. George, Utah from 1964 to 2001. It was transported to Cheshire, Connecticut, then to Bache, Oklahoma in 2019, before being brought to Hot Springs in 2022.
"Muffler Mr. Spock", Driven Towing and Recovery, Hot Springs. Depicts Spock from Star Trek.
California
"Big Bert", River Bend Resort, Forestville.
"Big Josh" in Joshua Tree (formerly "Mecca Man" at El Tompa Mini Mart in Mecca).
"Big Mike", formerly at Big Mike's Muffler, Hayward.
"Babe Royer" at Babe's Lightning & Muffler in San Jose.
"Chicken Boy" in Highland Park.
"El Salsero" muffler man at 22800 Pacific Coast Highway, Malibu
"Edwin", a golfer, at El Monte Sign Company, 2710 Santa Anita Avenue, El Monte.
"The Big Man", originally a lumberjack with axe at the intersection of highways 88 and 104 (Lower Ridge Road), in Martell.
"The Guy," a race car driver holding a checkered flag, off the 405 freeway, at the west coast Porsche Experience Center in Carson. Originally a golfer for the Dominguez Golf Course.
Unnamed, King's Auto Repair in Compton.
"Sergio," at Automotive Alley in Boyle Heights.
"Tony" at Tony's Transmissions in City Terrace.
"Kevin" at Tuneup Masters in Van Nuys.
"Joor Muffler Man" at Joor Muffler in Escondido.
"The Indian Warrior" at Ethel's Old Corral Cafe in Bakersfield.
"Rodeo Man" in Livermore.
Colorado
"Trailer Park Cowboy" at Rustic Ranch Mobile Home Park, 5565 Federal Blvd., Denver.
"Ranch Cowboy" stands guard with a pitchfork at Lazy T Ranch, 12765 N. 63rd St., Longmont.
"Greeley Muffler Man" in front of Just Bob's Auto, 500 1st Ave, Greeley.
Connecticut
When the 26-foot "Muffler Man" Paul Bunyan was erected in front of a Cheshire lumber business in the 1980s, the town objected to the statue, citing that it was a violation of town codes given its substantial height. Finding no limitation on flagpole height on the books, the owners of the statue replaced Bunyan's axe with an American flag.
"Big Bob", Norwich. A cowboy muffler man with an American flag and cowboy hat that has been in Norwich since the mid-1960s. For his first 20 years he was on the other side of town. Previously belonged to amusement park owner Alex Cohen. The owners of Surplus Unlimited bought Big Bob in 1982.
Florida
In front of CSD Truck Repairs in Palm River-Clair Mel
In front of Auto Air Muffler & Brake City in Dade City
Kansas
Muffler Man in front of Brown's Tire in Wichita.
Indiana
Paul Bunyan, MacAllister Rental outside of Muncie. The Paul Bunyan stands outside the main building, facing towards Interstate 69.
Ralph's Muffler shop in Indianapolis.
Idaho
The World's Largest Janitor, "Big Don", at the Museum of Clean in Pocatello.
The "lumberjack" at Heyburn Elementary School in St. Maries.
Illinois
Gemini Giant in Wilmington
A Giant Hot Dog Statue on Route 66 in Atlanta, Illinois was relocated from Bunyon's in Cicero upon that restaurateur's retirement.
Lauterbach Man at Lauterbach Service Center in Springfield
Spartan at Southeast High School in Springfield.
BIG FAT in Evergreen Park, Paul Bunyan statue on top of Guardian Auto Re-builders.
Blind Without Glasses at 6300 S. Pulaski in Chicago, Indian on top of an eye clinic.
Paul Bunyan at Lamb's Farm in Libertyville.
Two muffler men, Paul Bunyan and Beach Boy, at the Pink Elephant Antique Mall in Livingston.
Stogie Man at Cigars and Stripes in Berwyn.
Iowa
Giant "Phil", Williamsburg. A Phillips 66 Cowboy, Muffler Man, once stood at the Landmark Truck Stop in Williamsburg, at the intersection of I-80 and IA-149. It appears on a vintage postcard and possibly now stands with a large fiberglass bull in Waukon.
Louisiana
In front of Topps Western World, 3003 Topps Trail, Bossier City (facing I-20, near exit 23).
Maine
Paul Bunyan and Babe the Blue Ox in J. Eugene Boivin Park, Rumford.
Maryland
Paul Bunyan aka Uncle Harve.
Located at the Anne Arundel County Fairgrounds in Crownsville, Md
Massachusetts
Plantation Man, Chicopee. It was originally made for a restaurant in Framingham, then as Uncle Sam for a car dealership in Springfield. As of 2013, it was being auctioned and its future was uncertain.
There is a Muffler Man with a large ax and lumberman's hard hat at the entrance to Valley Tree Service on the east side of Route 97 in Groveland.
A muffler man depicted in rural attire and baseball cap stands at Green Valley Equipment Company, on Route 43 in Hancock.
Michigan
Greg E. Normous: Golfer, stands at a putt putt golf course, named for Greg Norman, 23 Mile Road, New Baltimore.
Golf Giant: Golfer at driving range, 1/2 mile west of Greg E. Normous, New Baltimore.
Manistique: Paul Bunyan statue at Manistique Chamber of Commerce Visitor Center.
Mississippi
Muffler Man (held two bags of "groceries" at Giant Food Stores, painted as a clerk in the 1960s), now at Boom City Fireworks, 9199 HY 61, Walls.
Missouri
Bunyan (a Mr. Bendo): Stones Last Resort, Cleveland.
Bunyan (no job too big): Skyline Motors Tractor Trailer Repair, Foristell.
Injun Joe and Country Bumpkin: Bagnell Dam Strip, Lake Ozark. (One was gone from 2013 until 2024 but suffered damage a few weeks after returning.)
Muffler Man: Chief Wappalese: Chaonia Landing Resort and Marina, Lake Wappapello.
Cowboy Muffler Man (in pieces): Croft Automotive and Trailer, Valley Park.
Muffler Man (custom designed as a chef and erected in 2020) Route 66 Food-Truck Park (corner of St Louis St and Delaware Ave), Springfield.
Mega Major: Opposite Uranus Fudge Factory, 14400 Hwy Z, St. Robert.
Montana
There is a muffler man in front of the tire store on Montana Avenue in Billings.
Casino Dude Muffler Man at Fort Rockvale Restaurant and Casino in Joliet.
Muffler Man at Plentywood.
Nebraska
"Indian" in the stockade behind Fort Cody Trading Post has been a fixture along I-80 for decades at North Platte, exit 177.
New Jersey
Nitro Girl, Black Horse Pike, Blackwood.
Carpet Viking Statue, Route 77, Deerfield.
Muffler man collection - Halfwit, Halfwit head paintball target, Dracula head, Monmoth Road, Holmeston.
Carpet-clutching Muffler Man, Broadway, Jersey City.
Tire Man in Pink, White Horse Pike, Magnolia.
Pirate, Boardwalk, Ocean City.
Barnacle Bill's Amusements, Highway 35, Ortley Beach.
Muffler Man collection, Ocean Terrace, Seaside Heights.
Happy Halfwit, Highway 73, Winslow.
Cowtown Rodeo, Highway 40, Woodstown.
New Mexico
Sun Glass, Farmington.
Big Daddy's Flea Market, Las Cruces.
Franciscan RV Inc., Hatch.
Cowboy Muffler Man, John's Used Cars, Gallup.
New York
Gas station in Westchester County 135 N Saw Mill River Road, Elmsford (yellow shirt, green pants).
Mountain Air Campground 1265 Lake Ave, Lake Luzerne (red shirt, blue pants).
Camp Bullowa Boy Scout Camp, Rockland County (classic Paul Bunyan, red shirt with blue pants and ax).
North Carolina
Bradsher Landscape Supply, Raleigh (blue jeans and baseball cap).
Harry's On The Hill Cadillac GMC, Asheville (Chief Pontiac).
Paul Bunyan holding an axe, with Babe the Blue Ox, Original Log Cabin Company, Rocky Mount.
White's Tire Company, Wilson.
North Dakota
Chieftain Motel, Carrington (figure is a Native American with upraised arm)
Oklahoma
Buck Atom the Space Cowboy, Tulsa.
Native American with arm stretched out at Indian Trading Post in Calumet.
Oregon
Harvey the Giant Rabbit (originally The Texaco Big Friend) in Reedville.
Pennsylvania
Cadet Restaurant, Kittanning.
Lugnutz Tire Service, Greensburg.
The Inside Scoop ice cream shop in Coopersburg.
Muffler Man at Mr. Tire, Uniontown.
South Dakota
Automotive Brake & Exhaust, Sioux Falls - 'Mr. Bendo', arm upraised holding an exhaust pipe in one hand.
Full Throttle Saloon (new location), 'Windover Willie' originally from Winnemucca, Nevada. Located at 19942 Hwy 79, Vale - Cowboy holding a cigar in his left hand and a mug of beer in his right.
Texas
Red McCombs Big Chief in downtown San Antonio.
Mr. Bendo, Paul Bunyan, at San Angelo.
Glenn Goode and Mary Jean Goode's "Big People!": Cowboy, two Big Johns and a Uniroyal Gal in Gainesville.
Happy The Halfwit at Kenn's Muffler Shop in Beaumont.
2nd Amendment Cowboy at Cadillac RV Park in Amarillo.
Wine Garage, east of Fredericksburg on US-290.
A giant with an Alfred E. Neuman head holds an absurdly large muffler at Ken's Muffler and Brake in Dallas.
Tennessee
Native American in Chinos - Beside Sad Sam's at Exit 112 off of I-65, near Cross Plains.
Pal's, Kingsport - Man carrying a hamburger.
Muffler Man at Four Way Mufflers & Motors, 1368 E Broadway, Gallatin.
Utah
"Big John", a coal miner on South Main Street in front of the Helper Civic Auditorium and city library (formerly the city hall) in Helper.
"Mr. Spock," a muffler man repainted to resemble Mr. Spock sits atop a business in Salt Lake City.
Virginia
Chincoteague Viking, Chincoteague Island.
Auto Muffler King at 5835 Jefferson Avenue in Newport News.
Williamson Road Service Center at 3110 Williamson Road, Roanoke.
Coeburn Red Oak Trading Company.
Washington
Paul Bunyan, Shelton.
One on the roof of the SSA Marine building, 1105 Hewitt Avenue, Everett.
22-foot cowboy muffler man on Highway 12, lower east Pomeroy.
Wisconsin
Gus's Drive-In, East Troy - Gus's Giant.
Bulik's Amusement Center, Spooner - Cowboy.
Fasco Appliance, Oshkosh - Paul Bunyan.
Larry the Logroller, a logroller muffler man with a pike pole in Wabeno.
References
External links
"Muffler Men" at Roadside America website
"Muffler Men tracker" at Roadside America Website
Fiberglass sculptures
Lists of buildings and structures in the United States
Lists of public art in the United States
Roadside attractions in the United States
Transport culture
Advertising-related lists | List of muffler men | Physics | 2,508 |
3,912,962 | https://en.wikipedia.org/wiki/Midcourse%20Space%20Experiment | The Midcourse Space Experiment (MSX) is a Ballistic Missile Defense Organization (BMDO) satellite experiment (unmanned space mission) to map bright infrared sources in space. MSX offered the first system demonstration of technology in space to identify and track ballistic missiles during their midcourse flight phase.
History
On 24 April 1996, the US Air Force launched the MSX satellite on a Delta II launch vehicle from Vandenberg AFB, California. MSX was placed in a Sun-synchronous orbit at 898 km and an inclination of 99.16 degrees. MSX's mission was to gather data in three spectral bands (long wavelength infrared, visible, and ultraviolet).
From 13 May 1998, MSX became a contributing sensor to the Space Surveillance Network.
Launch debris incident
Lottie Williams was exercising in a park in Tulsa on January 22, 1997, when she was hit in the shoulder by a piece of blackened metallic material. U.S. Space Command confirmed that a used Delta II rocket from the April 1996 launch of the Midcourse Space Experiment had re-entered the atmosphere 30 minutes earlier. The object tapped her on the shoulder and fell harmlessly to the ground. Williams collected the item, and NASA tests later showed that the fragment was consistent with the materials of the rocket; Nicholas Johnson, the agency's chief scientist for orbital debris, believes that she was indeed hit by a piece of the rocket.
Operations
Operational from 1996 to 1997, MSX mapped the galactic plane and areas either missed or identified as particularly bright by the Infrared Astronomical Satellite (IRAS) at wavelengths of 4.29 μm, 4.35 μm, 8.28 μm, 12.13 μm, 14.65 μm, and 21.3 μm.
It carried the 33-cm SPIRIT III infrared telescope and interferometer–spectrometer with solid hydrogen-cooled five line-scanned infrared focal plane arrays.
Calibration of MSX posed a challenge for the experiment's designers, as no baselines existed for the bands it would observe in. Engineers solved the problem by having MSX fire projectiles of known composition in front of the detector, and calibrating the instruments to the known black-body curves of those objects. The MSX calibration serves as the basis for other satellites working in the same wavelength range, including AKARI (2006-2011) and the Spitzer Space Telescope (SST).
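The idea of calibrating against a known black-body curve can be illustrated with Planck's law. The sketch below is illustrative only — the band centers are the wavelengths listed above, but the 300 K reference temperature is an assumption for the example, not a documented MSX reference object:

```python
import math

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W / (m^2 * sr * m), from Planck's law."""
    h = 6.62607015e-34   # Planck constant, J*s
    c = 2.99792458e8     # speed of light, m/s
    kB = 1.380649e-23    # Boltzmann constant, J/K
    return (2 * h * c**2 / wavelength_m**5) / math.expm1(h * c / (wavelength_m * kB * temp_k))

# Radiance of a 300 K black body at the six MSX band centers (micrometres).
for um in (4.29, 4.35, 8.28, 12.13, 14.65, 21.3):
    print(f"{um:6.2f} um: {planck_radiance(um * 1e-6, 300.0):.3e} W m^-3 sr^-1")
```

Wien's displacement law puts the 300 K peak near 9.7 μm, so the 8.28 μm and 12.13 μm bands sit close to the radiance maximum for room-temperature objects.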
MSX data is currently available in the Infrared Science Archive (IRSA) provided by NASA's Infrared Processing and Analysis Center (IPAC). Collaborative efforts between the Air Force Research Laboratory and IPAC have resulted in an archive containing images for about 15 percent of the sky, including the entire Galactic Plane, the Large Magellanic Cloud, and regions of the sky missed by IRAS.
See also
List of largest infrared telescopes
Militarisation of space
Notes
External links
The Midcourse Space Experiment Point Source Catalog Version 2.3 Explanatory Guide From VizieR Catalogue Service
The Spatial Infrared Imaging Telescope III (SPIRIT III), an instrument for MSX
Welcome to the MSX Showcase
Space telescopes
Infrared telescopes
Spacecraft launched in 1996
Spacecraft launched by Delta II rockets | Midcourse Space Experiment | Astronomy | 656 |
12,661,891 | https://en.wikipedia.org/wiki/NMS-8250 | Philips NMS 8250 (NMS is short for "New Media Systems") was a professional MSX2 home computer for the high-end market, with a built-in floppy disk drive in a "pizza box" configuration, released in 1986. The machine was in fact manufactured by Sanyo and is essentially the MPC-25FS in a different color.
It featured professional video output possibilities, such as SCART for a better picture quality, and a detachable keyboard.
Three regional models were produced:
NMS 8250/00 for the Dutch and Belgian markets with a QWERTY keyboard;
NMS 8250/16 for the Spanish market with a QWERTY keyboard with ñ key;
NMS 8250/19 for the French market with an AZERTY keyboard.
The Philips NMS 8255 is a similar machine, but has two disk drives instead of one.
Specifications
The Philips NMS 8250/8255 have the following specifications:
CPU: Zilog Z80A with a clock speed of 3.56 MHz
ROM: 64 kB (MSX 2: 48 kB, Disk BASIC: 16 kB)
RAM: 256 kB
VRAM: 128 kB
Display: Yamaha V9938 (80×24, 40×24 and 32×24 character text in four colors - two foreground colors and two background colors; resolution of 512×212 pixels with 16 out of 512 colors, or 256×212 pixels with 256 out of 512 colors).
Controller chip: MSX-Engine (S-3527, real-time clock with rechargeable battery).
Sound: PSG (S-3527, 3 sound channels, one noise channel)
Floppy drive: 3.5'', 720 kB double sided.
Connectors: mains cable, RF-output, CVBS monitor, luminance video output connector (for monochrome monitors), tulip (RCA) connector audio output, SCART audio/video-output using RGB, data recorder, Centronics compatible parallel printer port, detachable keyboard connector, two joysticks, two cartridge slots.
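As a quick consistency check (my arithmetic, not from the source), both high-resolution bitmap modes listed above fit comfortably in the 128 kB of VRAM, assuming the standard V9938 packing of 4 bits per pixel for the 16-color mode and 8 bits per pixel for the 256-color mode:

```python
# Framebuffer sizes for the two V9938 bitmap modes mentioned in the spec list.
def framebuffer_bytes(width, height, bits_per_pixel):
    """Bytes needed for one packed-pixel frame."""
    return width * height * bits_per_pixel // 8

high_res = framebuffer_bytes(512, 212, 4)   # 512x212, 16 of 512 colors
many_col = framebuffer_bytes(256, 212, 8)   # 256x212, 256 of 512 colors
vram = 128 * 1024                           # 131,072 bytes

print(high_res, many_col, vram)  # 54272 54272 131072
```

Both modes need the same 54,272 bytes per frame, leaving the rest of VRAM for sprite and pattern data.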
Gallery
Philips NMS 8250
References | NMS-8250 | Technology | 450 |
33,301,698 | https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2027 | In molecular biology, glycoside hydrolase family 27 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Glycoside hydrolase family 27 together with family 31 and the family 36 alpha-galactosidases form the glycosyl hydrolase clan GH-D, a superfamily of alpha-galactosidases, alpha-N-acetylgalactosaminidases, and isomaltodextranases which are likely to share a common catalytic mechanism and structural topology.
Alpha-galactosidase () (melibiase) catalyzes the hydrolysis of melibiose into galactose and glucose. In man, the deficiency of this enzyme is the cause of Fabry's disease (X-linked sphingolipidosis). Alpha-galactosidase is present in a variety of organisms. There is a considerable degree of similarity in the sequence of alpha-galactosidase from various eukaryotic species. Escherichia coli alpha-galactosidase (gene melA), which requires NAD and magnesium as cofactors, is not structurally related to the eukaryotic enzymes; by contrast, an Escherichia coli plasmid-encoded alpha-galactosidase (gene rafA) contains a region of about 50 amino acids which is similar to a domain of the eukaryotic alpha-galactosidases. Alpha-N-acetylgalactosaminidase () catalyzes the hydrolysis of terminal non-reducing N-acetyl-D-galactosamine residues in N-acetyl-alpha-D-galactosaminides. In man, the deficiency of this enzyme is the cause of Schindler and Kanzaki diseases. The sequence of this enzyme is highly related to that of the eukaryotic alpha-galactosidases.
External links
GH27 in CAZypedia
References
EC 3.2.1
Glycoside hydrolase families
Protein families | Glycoside hydrolase family 27 | Biology | 545 |
2,685,825 | https://en.wikipedia.org/wiki/Z-Wave | Z-Wave is a wireless communications protocol used primarily for residential and commercial building automation. It is a mesh network using low-energy radio waves to communicate from device to device, allowing for wireless control of smart home devices, such as smart lights, security systems, thermostats, sensors, smart door locks, and garage door openers. The Z-Wave brand and technology are owned by Silicon Labs. Over 300 companies involved in this technology are gathered within the Z-Wave Alliance.
Like other protocols and systems aimed at the residential, commercial, MDU and building markets, a Z-Wave system can be controlled from a smart phone, tablet, or computer, and locally through a smart speaker, wireless keyfob, or wall-mounted panel, with a Z-Wave gateway or central control device serving as the hub or controller. Z-Wave provides the application layer interoperability between home control systems of different manufacturers that are a part of its alliance. There is a growing number of interoperable Z-Wave products; over 1,700 in 2017, over 2,600 by 2019, and over 4,000 by 2022.
History
The Z-Wave protocol was developed by Zensys, a Danish company based in Copenhagen, in 1999. That year, Zensys introduced a consumer light-control system, which evolved into Z-Wave as a proprietary system on a chip (SoC) home automation protocol on an unlicensed frequency band in the 900 MHz range. Its 100 series chip set was released in 2003, and its 200 series was released in May 2005, with the ZW0201 chip offering high performance at a low cost. Its 500 series chip, also known as Z-Wave Plus, was released in March 2013, with four times the memory, improved wireless range, improved battery life, an enhanced S2 security framework, and the SmartStart setup feature. Its 700 series chip was released in 2019, with the ability to communicate up to 100 meters directly from point-to-point, or 800 meters across an entire Z-Wave network, an extended battery life of up to 10 years, and comes with S2 and SmartStart technology. In July 2019, the Z-Wave Plus v2 certification was announced. It is designed for devices built on the 700 platform. The Z-Wave Long Range (LR) specification was announced in September 2020, a new specification with up to four-times the wireless range of standard Z-Wave. Z-Wave's 800 series chip was released in late 2021, with improved security and battery life over the 700 series.
The technology began to catch on in North America around 2005, when five companies, including Danfoss, Ingersoll-Rand and Leviton Manufacturing, adopted Z-Wave. They formed the Z-Wave Alliance, whose objective is to promote the use of Z-Wave technology, with all certified products by companies in the Alliance interoperable. In 2005, Bessemer Venture Partners led a $16 million third seed round for Zensys. In May 2006, Intel Capital announced that it was investing in Zensys, a few days after Intel joined the Z-Wave Alliance. In 2008, Zensys received investments from Panasonic, Cisco Systems, Palamon Capital Partners and Sunstone Capital.
Z-Wave was acquired by Sigma Designs in December 2008. Following the acquisition, Z-Wave's U.S. headquarters in Fremont, California were merged with Sigma's headquarters in Milpitas, California. As part of the changes, the trademark interests in Z-Wave were retained in the United States by Sigma Designs and acquired by a subsidiary of Aeotec Group in Europe.
On January 23, 2018, Sigma announced it planned to sell the Z-Wave technology and business assets to Silicon Labs for $240 million, and the sale was completed on April 18, 2018.
In 2005, there were six products on the market that used Z-Wave technology. By 2012, as smart home technology was becoming increasingly popular, there were approximately 600 products using Z-Wave technology available in the U.S. As of June 2022, there are over 4,000 Z-Wave certified interoperable products.
Interoperability
Z-Wave's interoperability at the application layer ensures that devices can share information and allows all Z-Wave hardware and software to work together. Its wireless mesh networking technology enables any node to talk to adjacent nodes directly or indirectly, controlling any additional nodes. Nodes that are within range communicate directly with one another. If they aren't within range, they can link with another node that is within range of both to access and exchange information. In September 2016, certain parts of the Z-Wave technology were made publicly available, when then-owner Sigma Designs released a public version of Z-Wave's interoperability layer, with the software added to Z-Wave's open-source library. The Z-Wave MAC/PHY is globally standardized by the International Telecommunication Union as ITU 9959 radio. The open-source availability allows software developers to integrate Z-Wave into devices with fewer restrictions. Z-Wave's S2 security, Z/IP for transporting Z-Wave signals over IP networks, and Z-Wave middleware are all open source as of 2016. In 2020, the Z-Wave Alliance ratified the Z-Wave specification, adding the application to open-source development. The Alliance Technical Working Group manages Z-Wave specification development and maintains a library of standard implementations for Z-Wave compliant products.
Standards and the Z-Wave Alliance
Established in 2005 and re-incorporated as a non-profit in 2020, the Z-Wave Alliance is a member-driven standards development organization dedicated to market development, technical Z-Wave specification and device certification, and education on Z-Wave technology. Z-Wave Alliance is a consortium of over 300 companies in the residential and commercial connected technology market. Z-Wave Alliance certifies devices to standards that guarantee interoperability with full backwards compatibility among all generations of Z-Wave devices. These standards include specifications for reliability, range, power consumption, and device interoperability.
In October 2013, a new protocol and interoperability certification program called Z-Wave Plus was announced, based upon new features and higher interoperability standards bundled together and required for the 500 series system on a chip (SoC), and including some features that had been available since 2012 for the 300/400 series SoCs. In February 2014, the first product was certified by Z-Wave Plus.
In 2016, the Alliance launched a Z-Wave Certified Installer Training program to give installers, integrators and dealers the tools to deploy Z-Wave networks and devices in their residential and commercial jobs. That year, the Alliance announced the Z-Wave Certified Installer Toolkit (Z-CIT), a diagnostics and troubleshooting device that can be used during network and device setup and can also function as a remote diagnostics tool.
Z-Wave Long Range (LR) was announced in September 2020, a new specification with an increased range over regular Z-Wave signals. The LR specification is managed and certified under the Z-Wave Plus v2 certification. On March 15, 2022, the Z-Wave Alliance announced that Ecolink, a security and home automation brand, was the first to complete Z-Wave LR certification, with the Ecolink 700 Series Garage Door Controller.
Z-Wave Alliance maintains the Z-Wave certification program. There are two components to Z-Wave certification: technical certification and market certification.
In December 2019, Z-Wave announced the Z-Wave Source Code Project, in which it would release the source code to its platform, for members to contribute to the advancement of the standard, under the supervision of the newly-established OS Work Group. The project is available to alliance members on GitHub.
In December 2019, the Z-Wave Alliance announced that the Z-Wave specification would become a ratified, multi-source wireless standard. It includes the ITU.G9959 PHY/MAC radio specification, the application layer, the network layer, and the host-device communication protocol. Instead of being a single-source specification, it will become a multi-source, wireless smart home standard developed by collective working group members of the Z-Wave Alliance. The Z-Wave Alliance would become a standards development organization (SDO), while continuing to manage the certification program. In August 2020, the Z-Wave Alliance officially became incorporated as an independent nonprofit standards development organization, with seven founding members under its new SDO structure: Alarm.com, Assa Abloy, Leedarson, Ring, Silicon Labs, StratIS, and Qolsys. Under the SDO, there are new membership levels, workgroups, and committees, including technical working groups specific to features, and certification, security, and marketing groups.
Technical characteristics
Radio frequencies
Z-Wave is designed to provide reliable, low-latency transmission of small data packets at data rates up to 100 kbit/s, and is suitable for control and sensor applications, unlike Wi-Fi and other IEEE 802.11-based wireless LAN systems that are designed primarily for high data rates. Communication distance between two nodes is 200 meters line of sight outdoors and 50 meters line of sight indoors, and with message ability to hop up to four times between nodes, this gives enough coverage for most residential houses. Modulation is frequency-shift keying (FSK) with Manchester encoding, and other supported modulation schemes include GFSK and DSSS-OQPSK.
Z-Wave uses the Part 15 unlicensed industrial, scientific, and medical (ISM) band, operating on varying frequencies globally. For instance, in Europe it operates in the 868-869 MHz band, while in North America the band varies from 908-916 MHz when Z-Wave is operating as a mesh network and 912-920 MHz when Z-Wave is operating with a star topology in Z-Wave LR mode. Z-Wave's mesh network band competes with some cordless telephones and other consumer electronics devices, but avoids interference with Wi-Fi, Bluetooth and other systems that operate on the crowded 2.4 GHz band. The lower layers, MAC and PHY, are described by ITU-T G.9959 and are fully backwards compatible. In 2012, the International Telecommunication Union (ITU) included the Z-Wave PHY and MAC layers as an option in its G.9959 standard for wireless devices under 1 GHz. Data rates include 9600 bit/s and 40 kbit/s, with output power at 1 mW or 0 dBm.
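Manchester coding, used with the FSK modulation mentioned above, represents every data bit as a mid-bit transition, which keeps the signal self-clocking. The toy encoder below uses the IEEE 802.3 convention (0 → high-to-low, 1 → low-to-high) as an illustrative assumption; G.9959 defines the actual line-coding details:

```python
def manchester_encode(bits):
    """Encode each data bit as a half-bit pair: 0 -> (1,0), 1 -> (0,1)."""
    out = []
    for b in bits:
        out.extend((0, 1) if b else (1, 0))
    return out

def manchester_decode(chips):
    """Invert the encoding; every valid pair contains exactly one transition."""
    bits = []
    for i in range(0, len(chips), 2):
        pair = (chips[i], chips[i + 1])
        if pair == (1, 0):
            bits.append(0)
        elif pair == (0, 1):
            bits.append(1)
        else:
            raise ValueError("invalid Manchester pair: no mid-bit transition")
    return bits

data = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(data)) == data
```

The cost is that the coded signal toggles at twice the data rate, which is one reason Manchester coding is paired with the slower 9.6 kbit/s mode.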
Z-Wave has been released for use in different sub-1 GHz frequency bands in various parts of the world.
Network setup, topology and routing
Traditional hub-and-spoke networks include one central hub or access point to which all devices are connected, such as a wireless device connecting to a router. Z-Wave devices create a mesh network, where devices can communicate with each other in addition to the central hub. Advantages to a mesh network include greater range and compatibility, and a stronger network.
Z-Wave LR devices operate on a star network topology that features the hub at a central point and then establishes a direct connection to each device, rather than sending signals from node to node until the intended destination is met, as in a mesh network. The key difference between a star network and a mesh network is the direct hub-to-device connection. Both Z-Wave LR and traditional Z-Wave nodes can coexist within the same network.
The simplest network is a single controllable device and a primary controller. Devices can communicate to one another by using intermediate nodes to actively route around and circumvent household obstacles or radio dead spots that might occur in the multipath environment of a house. A message from node A to node C can be successfully delivered even if the two nodes are not within range, providing that a third node B can communicate with nodes A and C. If the preferred route is unavailable, the message originator will attempt other routes until a path is found to the C node. Therefore, a Z-Wave network can span much farther than the radio range of a single unit; however, with several of these hops a slight delay may be introduced between the control command and the desired result.
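The routing behaviour described above — node A reaching node C through an intermediate node B, within the protocol's four-hop limit — can be sketched as a breadth-first search over a neighbour table. This is a simplification for illustration; real Z-Wave controllers keep precomputed source routes rather than searching on every message:

```python
from collections import deque

MAX_HOPS = 4  # a Z-Wave message may hop at most four times between nodes

def find_route(neighbours, src, dst):
    """Shortest path from src to dst, or None if unreachable within MAX_HOPS."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        if len(path) - 1 == MAX_HOPS:  # hops already used on this path
            continue
        for nxt in neighbours.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A cannot hear C directly, but B hears both.
net = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(find_route(net, "A", "C"))  # ['A', 'B', 'C']
```

A destination more than four hops away is reported as unreachable, matching the coverage limit described for a single Z-Wave network.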
Additional devices can be added at any time, as can secondary controllers, including traditional hand-held controllers, key-fob controllers, wall-switch controllers and PC applications designed for management and control of a Z-Wave network. A Z-Wave network can consist of up to 232 devices, or up to 4,000 nodes on a single smart-home network with Z-Wave LR. Both allow the option of bridging networks if more devices are required.
A device must be "included" to the Z-Wave network before it can be controlled via Z-Wave. This process (also known as "pairing" and "adding") is usually achieved by pressing a sequence of buttons on the controller and on the device being added to the network. This sequence only needs to be performed once, after which the device is always recognized by the controller. Devices can be removed from the Z-Wave network by a similar process. The controller learns the signal strength between the devices during the inclusion process and will utilize this information when calculating routes. In the event that devices have been moved and the previously stored signal strength is wrong, the controller may issue a new route resolution through one or more explore frames.
Each Z-Wave network is identified by a Network ID, and each device is further identified by a Node ID. The Network ID (also called Home ID) is the common identification of all nodes belonging to one logical Z-Wave network. The Network ID has a length of 4 bytes (32 bits) and is assigned to each device, by the primary controller, when the device is "included" into the Network. Nodes with different Network IDs cannot communicate with each other. The Node ID is the address of a single node in the network. The Node ID has a length of 1 byte (8 bits) and must be unique in its network.
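The 4-byte Home ID and 1-byte Node ID described above can be modelled with a small packing helper. The five-byte layout here is only an illustration of the field widths, not the actual G.9959 frame format:

```python
import struct

def pack_address(home_id: int, node_id: int) -> bytes:
    """Pack a 32-bit Home ID and an 8-bit Node ID into five big-endian bytes."""
    if not 0 <= home_id <= 0xFFFFFFFF:
        raise ValueError("Home ID must fit in 4 bytes")
    if not 0 <= node_id <= 0xFF:
        raise ValueError("Node ID must fit in 1 byte")
    return struct.pack(">IB", home_id, node_id)

def unpack_address(data: bytes):
    """Recover (home_id, node_id) from the packed form."""
    home_id, node_id = struct.unpack(">IB", data)
    return home_id, node_id

addr = pack_address(0xC0FFEE01, 42)
assert len(addr) == 5
assert unpack_address(addr) == (0xC0FFEE01, 42)
```

The field widths explain the network limits: one byte of Node ID yields at most a few hundred addressable nodes per Home ID, while the 32-bit Home ID keeps neighbouring networks from colliding.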
The Z-Wave chip is optimized for battery-powered devices, and most of the time remains in a power saving mode to consume less energy, waking up only to perform its function. With Z-Wave mesh networks, each device in the house bounces wireless signals around the house, which results in low power consumption, allowing devices to work for years without needing to replace batteries. For Z-Wave units to be able to route unsolicited messages, they cannot be in sleep mode. Therefore, battery-operated devices are not designed as repeater units. Mobile devices, such as remote controls, are also excluded since Z-Wave assumes that all repeater capable devices in the network remain in their original detected position.
Security
Z-Wave is based on a proprietary design, supported by Sigma Designs as its primary chip vendor, but the Z-Wave business unit was acquired by Silicon Labs in 2018. In December 2019, Silicon Labs announced that it would release the Z-Wave specification as an open wireless standard for development to be certified by the Z-Wave Alliance.
An early vulnerability was uncovered in AES-encrypted Z-Wave door locks that could be remotely exploited to unlock doors without the knowledge of the encryption keys, and due to the changed keys, subsequent network messages, as in "door is open", would be ignored by the established controller of the network. The vulnerability was not due to a flaw in the Z-Wave protocol specification but was an implementation error by the door-lock manufacturer.
On November 17, 2016, the Z-Wave Alliance announced stronger security standards for devices receiving Z-Wave Certification as of April 2, 2017. Known as Security 2 (or S2), it provides advanced security for smart home devices, gateways and hubs. It shores up encryption standards for transmissions between nodes, and mandates new pairing procedures for each device, with unique PIN or QR codes on each device. The new layer of authentication is intended to prevent hackers from taking control of unsecured or poorly-secured devices. According to the Z-Wave Alliance, the new security standard is the most advanced security available on the market for smart home devices and controllers, gateways and hubs. The 800 series chip, released in late 2021, continues to support standard S2 security capabilities, as well as Silicon Labs Secure Vault technology, enabling wireless devices with PSA Certification Level 3 security.
In 2022, researchers published several vulnerabilities in the Z-Wave chipsets up to the 700 series, based on an open-source protocol-specific fuzzer. As a result, depending on the chipset and device, an attacker within Z-Wave radio range can deny service, cause devices to crash, deplete batteries, intercept, observe, and replay traffic, and control vulnerable devices. The related CVEs (CVE-2020-9057, CVE-2020-9058, CVE-2020-9059, CVE-2020-9060, CVE-2020-9061, CVE-2020-10137) were published by CERT. Z-Wave devices with 100, 200, and 300 series chipsets cannot be updated to fix the vulnerabilities. For devices with 500 and 700 series chipsets, those vulnerabilities can be mitigated through firmware updates.
Hardware
The chip for Z-Wave nodes is the ZW0500, built around an Intel MCS-51 microcontroller with an internal system clock of 32 MHz. The RF part of the chip contains a GFSK transceiver for a software-selectable frequency. With a power supply of 2.2-3.6 volts, it consumes 23 mA in transmit mode. Its features include AES-128 encryption, a 100 kbit/s wireless channel, concurrent listening on multiple channels, and USB VCP support.
At the Consumer Electronics Show on January 8, 2018, Sigma Designs introduced its Z-Wave 700 platform. The 700 series chip was released in 2019. It enables a new class of smart home devices that can be used outdoors, with a range of up to 300 feet, and that can operate on a coin-cell battery for up to a decade. Though the 700 series uses a 32-bit ARM Cortex SoC, it remains backward compatible with all other Z-Wave devices. It includes enhanced S2 security framework as well as the SmartStart setup feature. In July 2019, the Z-Wave Alliance announced Z-Wave Plus v2 certification, designed for devices built on the 700 platform, for stronger interoperability and security, and an easier installation process.
Z-Wave Long Range (LR) was announced in September 2020, a new specification with an improved range over regular Z-Wave signals. The specification supports a maximum output power of 30 dBm, which can be used to bolster transmission range by up to several miles. In testing, Z-Wave LR had a transmission distance of 1-mile (1.6 km) direct line of sight utilizing +14-dBm output power. Z-Wave LR is an extra 100-kb/s DSSS OQPSK modulation addition to the Z-Wave protocol. The modulation is treated as a fourth channel, allowing gateways to add LR nodes to the existing Z-Wave channel scanning. Z-Wave LR also increases scalability on a single smart-home network by up to 4,000 nodes, a 20x increase compared to Z-Wave. Z-Wave LR operates on low power so that sensors can last for 10 years on a single coin cell. It is backwards compatible and interoperable with other Z-Wave devices.
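The decade-on-a-coin-cell claim can be sanity-checked with simple average-current arithmetic. The capacity and current figures below are illustrative assumptions for a mostly-sleeping sensor node, not vendor specifications, and real lifetimes are further limited by battery self-discharge:

```python
# Rough battery-life estimate for a node that sleeps almost all the time.
def battery_life_years(capacity_mah, sleep_ua, active_ma, active_s_per_day):
    """Years of operation given a duty cycle of active_s_per_day seconds per day."""
    sleep_s_per_day = 86_400 - active_s_per_day
    # Average current in mA, weighted by time spent in each state.
    avg_ma = (sleep_ua / 1000 * sleep_s_per_day + active_ma * active_s_per_day) / 86_400
    return capacity_mah / avg_ma / 24 / 365

# Assumed: 220 mAh coin cell, 1 uA sleep current, 10 mA active for 2 s/day.
print(round(battery_life_years(220, 1.0, 10.0, 2), 1))
```

With these numbers the ideal lifetime comfortably exceeds ten years, showing why sleep current, not transmit current, dominates the budget.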
In December 2021, Silicon Labs announced the availability of the Z-Wave 800 system-on-chips and modules for the Z-Wave smart home and automation ecosystem. It is described as secure, ultra-low powered, and wireless, for Internet of Things devices, with an improved battery life as compared to the 700 series.
Comparison to other protocols
For smart home wireless networking, numerous technologies work together. Z-Wave operates below 1 GHz (lower bandwidth) rather than at 2.4 GHz (higher bandwidth), to capitalize on the application-level benefits of low power, long range, and less RF interference. Wi-Fi and Bluetooth operate on the 2.4 GHz band, which carries heavy traffic among devices that consume a lot of power. Other network standards include Bluetooth LE and Thread. Z-Wave has better interoperability than ZigBee, but ZigBee has a faster data transmission rate. Thread and Zigbee operate on the busy Wi-Fi standard frequency of 2.4 GHz, while Z-Wave operates below 1 GHz, which has reduced noise and congestion, and a greater coverage area. All three are mesh networks.
The Z-Wave MAC/PHY is globally standardized by the International Telecommunication Union as ITU 9959 radio, and the Z-Wave Interoperability, Security (S2), Middleware and Z-Wave over IP specifications were all released into the public domain in 2016, and Z-Wave has become a fully-ratified open-source protocol for development.
OpenZWave is a C++ library, with wrappers and supporting projects for other languages, that allows anyone to create applications to control devices on a Z-Wave network without requiring in-depth knowledge of the Z-Wave protocol. This software is currently aimed at application developers who wish to incorporate Z-Wave functionality into their applications. As of November 17, 2022, OpenZWave is no longer being actively maintained.
Matter, a connectivity standard brought forth by the Connectivity Standards Alliance and founded as a project on December 19, 2019, aims to unify device communication so that connected devices will work together across both wireless technologies and smart home ecosystems. Z-Wave networks have IP at the gateway level, enabling cloud connectivity to Matter. They can also work together at the local network level.
See also
Bluetooth LE
Matter (connectivity protocol)
Thread (network protocol)
Wi-Fi
Zigbee
References
External links
Home automation
Building automation
2001 software
Wireless sensor network
Personal area networks
Mesh networking
Computer access control protocols
Network protocols
Computer network security
Internet of things
Wireless communication systems | Z-Wave | Technology,Engineering | 4,598 |
2,067,016 | https://en.wikipedia.org/wiki/Lithium%20tantalate | Lithium tantalate is the inorganic compound with the formula LiTaO3. It is a white, diamagnetic, water-insoluble solid. The compound has the perovskite structure. It has optical, piezoelectric, and pyroelectric properties. Considerable information is available from commercial sources about this material.
Synthesis and processing
Lithium tantalate is produced by treating tantalum(V) oxide with lithium oxide. The use of excess alkali gives water-soluble polyoxotantalates. Single crystals of lithium tantalate are pulled from the melt using the Czochralski method.
Applications
Lithium tantalate is used in nonlinear optics, passive infrared sensors such as motion detectors, terahertz generation and detection, surface acoustic wave devices, and cell phones.
Lithium tantalate is a standard detector element in infrared spectrophotometers.
Research
The phenomenon of pyroelectric fusion has been demonstrated using a lithium tantalate crystal producing a large enough charge to generate and accelerate a beam of deuterium nuclei into a deuterated target resulting in the production of a small flux of helium-3 and neutrons through nuclear fusion without extreme heat or pressure.
A difference in behavior between the positively and negatively charged faces of pyroelectric LiTaO3 crystals has been observed when water freezes on them.
See also
Lithium tantalate (data page)
References
Lithium salts
Tantalates
Nonlinear optical materials
Piezoelectric materials
Crystals | Lithium tantalate | Physics,Chemistry,Materials_science | 303 |
62,576,396 | https://en.wikipedia.org/wiki/National%20Union%20of%20Scalemakers | The National Union of Scalemakers was a trade union representing workers involved in making weighing scales in the United Kingdom and Ireland.
History
In 1909, a strike occurred among scalemakers at Messrs Hodgson and Stead, in Manchester. Following the strike, many employees decided to found a union, the Amalgamated Society of Scale Beam and Weighing Machine Makers. Initially very small, the union expanded steadily, opening branches in Liverpool and Sheffield in 1910, and expanding into Wales in 1911, Scotland in 1912, and Ireland in 1918. That year, membership reached 600, and in 1920 it peaked at 1,000. Wage reductions in the industry and poor organisation led to financial difficulties, which culminated in 1923 with the London branch splitting away.
The London branch claimed to represent the continuation of the union, and it was moderately successful, reaching 150 members by 1927. The remainder of the union struggled to survive, making its general and financial secretary post part-time, and renaming itself as the Society of Scale Beam and Weighing Machinists. It registered as a trade union in 1924 and affiliated to the Trades Union Congress (TUC), but declined to only 150 members.
The TUC was concerned about the conflict between the two unions, and brokered a merger, which took place at the start of 1928, although the union still had a membership of only 282. A ballot saw the union's headquarters move to London, and membership began increasing rapidly. In 1939, it was able to make the general and financial secretary position full-time again, and by 1949 it had a membership of 2,500.
In 1935, the union affiliated with the Scottish Trades Union Congress, with the Irish Trades Union Congress in 1945, and the Confederation of Shipbuilding and Engineering Unions in 1948. In 1938, it began describing itself as an industrial union, representing all workers connected with the scalemaking trade, and the first woman joined the union in 1941.
The union repeatedly considered merging into the Amalgamated Engineering Union, but feared that its members' interests would be neglected by the much larger union. In 1993, the union merged into Manufacturing, Science and Finance.
Leadership
General and Financial Secretary
1909: J. Cope
1915: J. P. Wadsworth
1924: G. Hatfield
1928: Harry Bending
1963: S. W. Parfitt
1980: A. F. Smith
President
1909: Andrew Leslie
1913: T. Richardson
1914: D. Donaldson
1918: Harry Walker
1920: J. A. Hodson
1921: J. Maxwell
1922: J. C. Turnbull
1925: J. Maxwell
1926: Andrew Leslie Jr
1928: Thomas Knight
1937: Albert Jackson
References
Business organisations based in the United Kingdom
1909 establishments in the United Kingdom
Engineering trade unions
Trade unions established in 1909
Trade unions disestablished in 1993
Weighing instruments
Trade unions based in the West Midlands (county) | National Union of Scalemakers | Physics,Technology,Engineering | 565 |
74,872,973 | https://en.wikipedia.org/wiki/Injury | Injury is physiological damage to the living tissue of any organism, whether in humans, in other animals, or in plants.
Injuries can be caused in many ways, including mechanically with penetration by sharp objects such as teeth or with blunt objects, by heat or cold, or by venoms and biotoxins. Injury prompts an inflammatory response in many taxa of animals; this prompts wound healing. In both plants and animals, substances are often released to help to occlude the wound, limiting loss of fluids and the entry of pathogens such as bacteria. Many organisms secrete antimicrobial chemicals which limit wound infection; in addition, animals have a variety of immune responses for the same purpose. Both plants and animals have regrowth mechanisms which may result in complete or partial healing over the injury. Cells too can repair damage to a certain degree.
Taxonomic range
Animals
Injury in animals is sometimes defined as mechanical damage to anatomical structure, but it has a wider connotation of physical damage with any cause, including drowning, burns, and poisoning. Such damage may result from attempted predation, territorial fights, falls, and abiotic factors.
Injury prompts an inflammatory response in animals of many different phyla; this prompts coagulation of the blood or body fluid, followed by wound healing, which may be rapid, as in the cnidaria. Arthropods are able to repair injuries to the cuticle that forms their exoskeleton to some extent.
Animals in several phyla, including annelids, arthropods, cnidaria, molluscs, nematodes, and vertebrates are able to produce antimicrobial peptides to fight off infection following an injury.
Humans
Injury in humans has been studied extensively for its importance in medicine. Much of medical practice, including emergency medicine and pain management, is dedicated to the treatment of injuries. The World Health Organization has developed a classification of injuries in humans by categories including mechanism, objects/substances producing injury, place of occurrence, activity when injured and the role of human intent. In addition to physical harm, injuries can cause psychological harm, including post-traumatic stress disorder.
Plants
In plants, injuries result from the eating of plant parts by herbivorous animals including insects and mammals, from damage to tissues by plant pathogens such as bacteria and fungi, which may gain entry after herbivore damage or in other ways, and from abiotic factors such as heat, freezing, flooding, lightning, and pollutants such as ozone. Plants respond to injury by signalling that damage has occurred, by secreting materials to seal off the damaged area, by producing antimicrobial chemicals, and in woody plants by regrowing over wounds.
Cell injury
Cell injury is a variety of changes of stress that a cell suffers due to external as well as internal environmental changes. Amongst other causes, this can be due to physical, chemical, infectious, biological, nutritional or immunological factors. Cell damage can be reversible or irreversible. Depending on the extent of injury, the cellular response may be adaptive and where possible, homeostasis is restored. Cell death occurs when the severity of the injury exceeds the cell's ability to repair itself. Cell death is relative to both the length of exposure to a harmful stimulus and the severity of the damage caused.
References
Biological concepts | Injury | Biology | 688 |
53,243,664 | https://en.wikipedia.org/wiki/Transmembrane%20protein%20255A | Transmembrane protein 255A is a protein that is encoded by the TMEM255A gene. TMEM255A is often referred to as family with sequence similarity 70, member A (FAM70A). The TMEM255A protein is transmembrane and is predicted to be located the nuclear envelope of eukaryote organisms.
Gene
The TMEM255A gene (often referred to as Family with Sequence Similarity 70 Member A; FAM70A) is located on Xq24, spanning 60,555 base pairs. TMEM255A is flanked by the genes ATPase Na+/K+ transporting family member beta 4 (ATP1B4) and NFKB activating protein pseudogene 1 (NKAPP1).
mRNA
There are three variants of the transcript seen, where isoform 1 is the longest. The 5’- and 3’- UTRs of the mRNA spans 227 and 2207 base pairs, respectively, and are predicted to contain several stem-loops. The mRNA is 3512 base pairs long and the gene consists of 9 exons.
Protein
The longest protein encoded is isoform 1, which spans 349 amino acids and is predicted to have a molecular weight of 38 kDa and an isoelectric point at pH 7.89. Compared to the average vertebrate protein, TMEM255A is rich in aspartic acid, isoleucine, proline and tyrosine, and relatively poor in glutamic acid and lysine. No charge clusters have been found in this protein.
The protein is predicted to be post-translationally modified by phosphorylation and glycosylation. The protein is predicted to have four transmembrane domains in the nuclear membrane. The structure of the protein is predicted to be helical in the transmembrane domains. Disulfide bonds are predicted to be found in the region in between transmembrane domains 3 and 4, which indicates that this particular region is located in the nucleoplasm.
Expression
TMEM255A is predicted to be most abundantly expressed in nerve, brain, testis, ovary, thymus and kidney. The protein is expressed in a variety of tissues, but at relatively moderate levels.
Regulation of expression
Both the 5' and 3' Untranslated Regions (UTRs) are predicted to consist of several stem-loops. The 3' UTR also contain a conserved miRNA target site (amino acids 22-29). Phosphorylation and glycosylation sites have also been predicted in TMEM255A.
Interacting proteins
Affinity capture-MS experiments predict that TMEM255A interacts with ten different proteins: Ankyrin repeat domain 13D (ANKRD13D), Collagen beta (1-O) galactosyltransferase 2 (COLGALT2), Grancalcin (GCA), Itchy E3 ubiquitin protein ligase (ITCH), Potassium channel tetramerization domain containing 2 (KCTD2), Neural precursor cell expressed developmentally down-regulated 4 (NEDD4), SEC24 family member B (SEC24D), Ubiquitin associated and SH3 domain containing B (UBASH3D), and WW domain containing E3 ubiquitin protein ligase 1 and 2 (WWP1, WWP2); most of these are involved in ubiquitination, transcription regulation and protein degradation.
Clinical significance
TMEM255A is predicted to be highly expressed in peroxisome proliferator-activated receptor γ coactivator 1α-upregulated glioblastoma multiforme cells (specific gene function not yet fully established). Ongoing research is investigating the possibility of TMEM255A to be used in personalized immunotherapy.
Homology
There is one known paralog of TMEM255A, called TMEM255B, which is found on chromosome 13 (position 13q34). TMEM255A is found only in the kingdom Animalia, and its most distant homologs are found in invertebrates (e.g. Saccoglossus kowalevskii).
References
Transmembrane proteins
Phylogenetics | Transmembrane protein 255A | Biology | 922 |
4,405,547 | https://en.wikipedia.org/wiki/Darzens%20reaction | The Darzens reaction (also known as the Darzens condensation or glycidic ester condensation) is the chemical reaction of a ketone or aldehyde with an α-haloester in the presence of a base to form an α,β-epoxy ester, also called a "glycidic ester". This reaction was discovered by the organic chemist Auguste Georges Darzens in 1904.
Reaction mechanism
The reaction process begins with deprotonation at the halogenated position. Because of the ester substituents, this carbanion is a resonance-stabilized enolate. This nucleophile next attacks the carbonyl reagent, forming a carbon–carbon bond. These two steps are similar to a base-catalyzed aldol reaction. The oxygen anion in this aldol-like product then attacks the formerly nucleophilic halide-bearing carbon in an intramolecular SN2 reaction, displacing the halide to form an epoxide. This reaction sequence is thus a condensation reaction, since there is a net loss of HCl when the two reactant molecules join.
If the starting halide is an α-halo amide, the product is an α,β-epoxy amide. If an α-halo ketone is used, the product is an α,β-epoxy ketone.
Any sufficiently strong base can be used for the initial deprotonation. However, if the starting material is an ester, the alkoxide corresponding to the ester side-chain is commonly chosen in order to prevent complications due to potential acyl exchange side reactions.
Stereochemistry
Depending on the specific structures involved, the epoxide may exist in cis and trans forms. A specific reaction may give only cis, only trans, or a mixture of the two. The specific stereochemical outcome of the reaction is affected by several aspects of the intermediate steps in the sequence.
The initial stereochemistry of the reaction sequence is established in the step where the carbanion attacks the carbonyl. Two sp3 (tetrahedral) carbons are created at this stage, which allows two different diastereomeric possibilities of the halohydrin intermediate. The most likely result is due to chemical kinetics: whichever product is easier and faster to form will be the major product of this reaction. The subsequent SN2 reaction step proceeds with stereochemical inversion, so the cis or trans form of the epoxide is controlled by the kinetics of an intermediate step. Alternately, the halohydrin can epimerize due to the basic nature of the reaction conditions prior to the SN2 reaction. In this case, the initially formed diastereomer can convert to a different one. This is an equilibrium process, so the cis or trans form of the epoxide is controlled by chemical thermodynamics—the product resulting from the more stable diastereomer, regardless of which one was the kinetic result.
Alternative reactions
Glycidic esters can also be obtained via nucleophilic epoxidation of an α,β-unsaturated ester, but that approach requires synthesis of the alkene substrate first whereas the Darzens condensation allows formation of the carbon–carbon connectivity and epoxide ring in a single reaction.
Subsequent reactions
The product of the Darzens reaction can be reacted further to form various types of compounds. Hydrolysis of the ester can lead to decarboxylation, which triggers a rearrangement of the epoxide into a carbonyl compound. Alternately, other epoxide rearrangements can be induced to form other structures.
See also
Johnson–Corey–Chaykovsky reaction
Reformatskii reaction
References
Addition reactions
Carbon-carbon bond forming reactions
Epoxidation reactions
Epoxides
Name reactions | Darzens reaction | Chemistry | 798 |
47,278,431 | https://en.wikipedia.org/wiki/Bullet%20Galaxy | The Bullet Galaxy (RXC J2359.3-6042 CC) is a galaxy in the galaxy cluster RXC J2359.3-6042 (Abell 4067 or ACO 4067). The Bullet Galaxy is the sole component of one half of a cluster merger between the bulk of the cluster and this galaxy, which is plowing through the cluster, similar to how merging clusters Bullet Cluster and Bullet Group have merged. Unlike those two mergers, the Bullet Galaxy's merger is between one galaxy and a galaxy cluster. The cluster merger is happening at a lower speed than the Bullet Cluster, thus allowing the core of the Bullet Galaxy to retain cool gas and remain relatively undisturbed by its passage through the larger cluster. This cluster merger is the first one observed between a single galaxy and a cluster. The galaxy and cluster lies at redshift z=0.0992, some away. The galaxy is traveling through the cluster at a speed of .
By studying this unique merger, researchers can gain insight into dark matter and how it interacts with other objects in space. According to astrophysicist James Bullock, speaking of dark matter and the Bullet Cluster, "Galaxy clusters that are merging with each other represent interesting laboratories for this kind of question."
Bullet Cluster
The Bullet Cluster (1E 0657-558) consists of two colliding clusters of galaxies. Strictly speaking, the name Bullet Cluster refers to the smaller sub cluster, moving away from the larger one. It is at a co-moving radial distance of 1.141 Gpc (3.7 billion light-years). Gravitational lensing studies of the Bullet Cluster are claimed to provide the best evidence to date for the existence of dark matter. Observations of other galaxy cluster collisions, such as MACS J0025.4-1222, are similarly claimed to support the existence of dark matter.
References
Galaxies
Tucana | Bullet Galaxy | Astronomy | 395 |
5,624,421 | https://en.wikipedia.org/wiki/Phosphoric%20monoester%20hydrolases | Phosphoric monoester hydrolases (or phosphomonoesterases) are enzymes that catalyse the hydrolysis of O-P bonds by nucleophilic attack of phosphorus by cysteine residues or coordinated metal ions.
They are categorized with the EC number 3.1.3.
Examples include:
acid phosphatase
alkaline phosphatase
fructose-bisphosphatase
glucose-6-phosphatase
phosphofructokinase-2
phosphoprotein phosphatase
calcineurin
6-phytase
See also
phosphodiesterase
phosphatase
External links
Metabolism | Phosphoric monoester hydrolases | Chemistry,Biology | 145 |
11,790,980 | https://en.wikipedia.org/wiki/Acrosporium%20tingitaninum | Acrosporium tingitaninum is an ascomycete fungus that is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Enigmatic Ascomycota taxa
Fungus species | Acrosporium tingitaninum | Biology | 51 |
4,870,290 | https://en.wikipedia.org/wiki/Complex%20logarithm | In mathematics, a complex logarithm is a generalization of the natural logarithm to nonzero complex numbers. The term refers to one of the following, which are strongly related:
A complex logarithm of a nonzero complex number z, defined to be any complex number w for which e^w = z. Such a number w is denoted by log z. If z is given in polar form as z = re^(iθ), where r and θ are real numbers with r > 0, then ln r + iθ is one logarithm of z, and all the complex logarithms of z are exactly the numbers of the form ln r + i(θ + 2πk) for integers k. These logarithms are equally spaced along a vertical line in the complex plane.
A complex-valued function log: U → C, defined on some subset U of the set C* of nonzero complex numbers, satisfying e^(log z) = z for all z in U. Such complex logarithm functions are analogous to the real logarithm function ln: R>0 → R, which is the inverse of the real exponential function and hence satisfies e^(ln x) = x for all positive real numbers x. Complex logarithm functions can be constructed by explicit formulas involving real-valued functions, by integration of 1/z, or by the process of analytic continuation.
There is no continuous complex logarithm function defined on all of C*. Ways of dealing with this include branches, the associated Riemann surface, and partial inverses of the complex exponential function. The principal value Log defines a particular complex logarithm function that is continuous except along the negative real axis; on the complex plane with the negative real numbers and 0 removed, it is the analytic continuation of the (real) natural logarithm.
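The infinitely many logarithm values of a single number, spaced 2π apart along a vertical line, can be checked numerically. A minimal Python sketch using only the standard library (the choice z = −3i is arbitrary):

```python
import cmath
import math

z = -3j                            # an arbitrary nonzero complex number
r, theta = abs(z), cmath.phase(z)  # polar form: z = r * e^(i*theta)

# Every complex logarithm of z has the form ln(r) + i*(theta + 2*pi*k).
logs = [complex(math.log(r), theta + 2 * math.pi * k) for k in range(-2, 3)]

for w in logs:
    assert abs(cmath.exp(w) - z) < 1e-9  # each w satisfies e^w = z
```

The k = 0 entry is the principal value returned by `cmath.log`; the others lie 2π apart on the same vertical line.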
Problems with inverting the complex exponential function
For a function to have an inverse, it must map distinct values to distinct values; that is, it must be injective. But the complex exponential function is not injective, because e^(w + 2πik) = e^w for any complex number w and integer k, since adding 2πik to w has the effect of rotating e^w counterclockwise by 2πk radians. So the points
..., w − 2πi, w, w + 2πi, w + 4πi, ...,
equally spaced along a vertical line, are all mapped to the same number by the exponential function. This means that the exponential function does not have an inverse function in the standard sense. There are two solutions to this problem.
One is to restrict the domain of the exponential function to a region that does not contain any two numbers differing by an integer multiple of 2πi: this leads naturally to the definition of branches of log z, which are certain functions that single out one logarithm of each number in their domains. This is analogous to the definition of arcsin x on [−1, 1] as the inverse of the restriction of sin θ to the interval [−π/2, π/2]: there are infinitely many real numbers θ with sin θ = x, but one arbitrarily chooses the one in [−π/2, π/2].
Another way to resolve the indeterminacy is to view the logarithm as a function whose domain is not a region in the complex plane, but a Riemann surface that covers the punctured complex plane in an infinite-to-1 way.
Branches have the advantage that they can be evaluated at complex numbers. On the other hand, the function on the Riemann surface is elegant in that it packages together all branches of the logarithm and does not require an arbitrary choice as part of its definition.
Principal value
Definition
For each nonzero complex number z, the principal value Log z is the logarithm whose imaginary part lies in the interval (−π, π]. The expression Log 0 is left undefined since there is no complex number w satisfying e^w = 0.
When the notation log z appears without any particular logarithm having been specified, it is generally best to assume that the principal value is intended. In particular, this gives a value consistent with the real value of ln z when z is a positive real number. The capitalization in the notation Log is used by some authors to distinguish the principal value from other logarithms of z.
Calculating the principal value
The polar form of a nonzero complex number z is z = re^(iθ), where r = |z| is the absolute value of z, and θ is its argument. The absolute value is real and positive. The argument is defined up to addition of an integer multiple of 2π. Its principal value is the value that belongs to the interval (−π, π], which is expressed as Arg z.
This leads to the following formula for the principal value of the complex logarithm:
Log z = ln |z| + i Arg z.
For example, Log(−3i) = ln 3 − iπ/2, and Log(−3) = ln 3 + iπ.
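As a numerical check, Python's `cmath.log` returns exactly this principal value, with imaginary part in (−π, π]; a minimal sketch:

```python
import cmath
import math

# cmath.log returns the principal value: Log z = ln|z| + i*Arg(z).
assert cmath.isclose(cmath.log(-3j), complex(math.log(3), -math.pi / 2))
assert cmath.isclose(cmath.log(-3), complex(math.log(3), math.pi))

# The imaginary part of the principal value always lies in (-pi, pi].
for z in (1 + 1j, -2 + 0.5j, -1 - 1e-9j):
    assert -math.pi < cmath.log(z).imag <= math.pi
```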
The principal value as an inverse function
Another way to describe Log z is as the inverse of a restriction of the complex exponential function, as in the previous section. The horizontal strip S consisting of complex numbers w = x + yi such that −π < y ≤ π is an example of a region not containing any two numbers differing by an integer multiple of 2πi, so the restriction of the exponential function to S has an inverse. In fact, the exponential function maps S bijectively to the punctured complex plane C*, and the inverse of this restriction is Log: C* → S. The conformal mapping section below explains the geometric properties of this map in more detail.
The principal value as an analytic continuation
On the region consisting of complex numbers that are not negative real numbers or 0, the function is the analytic continuation of the natural logarithm. The values on the negative real line can be obtained as limits of values at nearby complex numbers with positive imaginary parts.
Properties
Not all identities satisfied by ln extend to complex numbers. It is true that e^(Log z) = z for all z ≠ 0 (this is what it means for Log z to be a logarithm of z), but the identity Log e^z = z fails for z outside the strip −π < Im z ≤ π. For this reason, one cannot always apply Log to both sides of an identity e^z = e^w to deduce z = w. Also, the identity Log(z1 z2) = Log z1 + Log z2 can fail: the two sides can differ by an integer multiple of 2πi; for instance, Log((−1)·i) = Log(−i) = −iπ/2,
but Log(−1) + Log(i) = iπ + iπ/2 = 3iπ/2.
The function Log z is discontinuous at each negative real number, but continuous everywhere else in C*. To explain the discontinuity, consider what happens to Arg z as z approaches a negative real number a. If z approaches a from above, then Arg z approaches π, which is also the value of Arg a itself. But if z approaches a from below, then Arg z approaches −π. So Arg z "jumps" by 2π as z crosses the negative real axis, and similarly Log z jumps by 2πi.
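Both the failure of the product identity and the jump across the negative real axis are easy to observe numerically; a Python sketch:

```python
import cmath
import math

# Log(z1*z2) and Log(z1) + Log(z2) can differ by a multiple of 2*pi*i.
z1, z2 = -1, 1j
lhs = cmath.log(z1 * z2)             # Log(-i) = -i*pi/2
rhs = cmath.log(z1) + cmath.log(z2)  # i*pi + i*pi/2 = 3*i*pi/2
assert abs(lhs - rhs + 2j * math.pi) < 1e-12  # they differ by -2*pi*i

# Crossing the negative real axis flips the imaginary part from +pi to -pi.
above = cmath.log(complex(-3, +1e-9)).imag  # just above the axis
below = cmath.log(complex(-3, -1e-9)).imag  # just below the axis
assert abs(above - math.pi) < 1e-6
assert abs(below + math.pi) < 1e-6
```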
Branches of the complex logarithm
Is there a different way to choose a logarithm of each nonzero complex number so as to make a function L(z) that is continuous on all of C*? The answer is no. To see why, imagine tracking such a logarithm function along the unit circle, by evaluating L(e^(iθ)) as θ increases from 0 to 2π. If L is continuous, then so is L(e^(iθ)) − iθ, but the latter is a difference of two logarithms of e^(iθ), so it takes values in the discrete set 2πiZ, so it is constant. In particular, L(e^(2πi)) = L(1) + 2πi, which contradicts L(e^(2πi)) = L(1).
To obtain a continuous logarithm defined on complex numbers, it is hence necessary to restrict the domain to a smaller subset of the complex plane. Because one of the goals is to be able to differentiate the function, it is reasonable to assume that the function is defined on a neighborhood of each point of its domain; in other words, should be an open set. Also, it is reasonable to assume that is connected, since otherwise the function values on different components of could be unrelated to each other. All this motivates the following definition:
A branch of log z is a continuous function L(z) defined on a connected open subset U of the complex plane such that L(z) is a logarithm of z for each z in U.
For example, the principal value defines a branch on the open set where it is continuous, which is the set obtained by removing 0 and all negative real numbers from the complex plane.
Another example: the Mercator series
log(1 + u) = Σ_{n≥1} (−1)^(n+1) u^n / n
converges locally uniformly for |u| < 1, so setting z = 1 + u defines a branch of log z on the open disk of radius 1 centered at 1. (Actually, this is just a restriction of Log z, as can be shown by differentiating the difference and comparing values at 1.)
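A sketch of this branch obtained by truncating the series; the standard Mercator expansion log(1 + u) = Σ (−1)^(n+1) u^n / n is assumed, and the term count is an arbitrary choice:

```python
import cmath

def log_mercator(z, terms=80):
    """Branch of log z on |z - 1| < 1, via the Mercator series
    log(1 + u) = sum_{n>=1} (-1)**(n+1) * u**n / n with u = z - 1."""
    u = z - 1
    return sum((-1) ** (n + 1) * u ** n / n for n in range(1, terms + 1))

# On the open disk of radius 1 centered at 1 it agrees with the
# principal value (the disk avoids the negative real axis).
for z in (1.3 + 0.2j, 0.6 - 0.3j, 1.5):
    assert abs(log_mercator(z) - cmath.log(z)) < 1e-9
```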
Once a branch is fixed, it may be denoted "log z" if no confusion can result. Different branches can give different values for the logarithm of a particular complex number, however, so a branch must be fixed in advance (or else the principal branch must be understood) in order for "log z" to have a precise unambiguous meaning.
Branch cuts
The argument above involving the unit circle generalizes to show that no branch of log z exists on an open set U containing a closed curve that winds around 0. One says that "log z has a branch point at 0". To avoid containing closed curves winding around 0, U is typically chosen as the complement of a ray or curve in the complex plane going from 0 (inclusive) to infinity in some direction. In this case, the curve is known as a branch cut. For example, the principal branch has a branch cut along the negative real axis.
If the function log z is extended to be defined at a point of the branch cut, it will necessarily be discontinuous there; at best it will be continuous "on one side", like Log z at a negative real number.
The derivative of the complex logarithm
Each branch L(z) of log z on an open set U is the inverse of a restriction of the exponential function, namely the restriction to the image L(U). Since the exponential function is holomorphic (that is, complex differentiable) with nonvanishing derivative, the complex analogue of the inverse function theorem applies. It shows that L(z) is holomorphic on U, and L′(z) = 1/z for each z in U. Another way to prove this is to check the Cauchy–Riemann equations in polar coordinates.
Constructing branches via integration
The function ln x for real x > 0 can be constructed by the formula
ln x = ∫_1^x dt/t.
If the range of integration started at a positive number a other than 1, the formula would have to be
ln x = ln a + ∫_a^x dt/t
instead.
In developing the analogue for the complex logarithm, there is an additional complication: the definition of the complex integral requires a choice of path. Fortunately, if the integrand is holomorphic, then the value of the integral is unchanged by deforming the path (while holding the endpoints fixed), and in a simply connected region U (a region with "no holes"), any path from a to z inside U can be continuously deformed inside U into any other. All this leads to the following: if U is a simply connected open subset of C not containing 0, then a branch of log z defined on U can be constructed by choosing a starting point a in U, choosing a logarithm b of a, and defining
L(z) = b + ∫_a^z dw/w
for each z in U.
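This path-integral construction can be tested numerically. The sketch below integrates 1/w along the straight segment from 1 to z with a composite trapezoidal rule (valid whenever the segment avoids 0; the step count is an arbitrary choice):

```python
import cmath

def log_via_integration(z, n=20000):
    """Approximate log z as the integral of dw/w along the path
    gamma(t) = 1 + t*(z - 1) for 0 <= t <= 1, by the trapezoidal rule."""
    def integrand(t):
        return (z - 1) / (1 + t * (z - 1))  # gamma'(t) / gamma(t)
    h = 1.0 / n
    total = 0.5 * (integrand(0.0) + integrand(1.0))
    for k in range(1, n):
        total += integrand(k * h)
    return h * total

# Matches the principal value whenever the segment from 1 to z misses 0.
for z in (2 + 1j, 0.5 + 2j, 3):
    assert abs(log_via_integration(z) - cmath.log(z)) < 1e-6
```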
The complex logarithm as a conformal map
Any holomorphic map f: U → C satisfying f′(z) ≠ 0 for all z in U is a conformal map, which means that if two curves passing through a point a of U form an angle α (in the sense that the tangent lines to the curves at a form an angle α), then the images of the two curves form the same angle α at f(a).
Since a branch of log z is holomorphic, and since its derivative 1/z is never 0, it defines a conformal map.
For example, the principal branch w = Log z, viewed as a mapping from C* to the horizontal strip defined by −π < Im w ≤ π, has the following properties, which are direct consequences of the formula in terms of polar form:
Circles in the z-plane centered at 0 are mapped to vertical segments in the w-plane connecting a − πi to a + πi, where a is the real log of the radius of the circle.
Rays emanating from 0 in the z-plane are mapped to horizontal lines in the w-plane.
Each circle and ray in the z-plane as above meet at a right angle. Their images under Log are a vertical segment and a horizontal line (respectively) in the w-plane, and these too meet at a right angle. This is an illustration of the conformal property of Log.
The associated Riemann surface
Construction
The various branches of cannot be glued to give a single continuous function because two branches may give different values at a point where both are defined. Compare, for example, the principal branch on with imaginary part in and the branch on whose imaginary part lies in . These agree on the upper half plane, but not on the lower half plane. So it makes sense to glue the domains of these branches only along the copies of the upper half plane. The resulting glued domain is connected, but it has two copies of the lower half plane. Those two copies can be visualized as two levels of a parking garage, and one can get from the level of the lower half plane up to the level of the lower half plane by going radians counterclockwise around , first crossing the positive real axis (of the level) into the shared copy of the upper half plane and then crossing the negative real axis (of the level) into the level of the lower half plane.
One can continue by gluing branches with imaginary part in , in , and so on, and in the other direction, branches with imaginary part in , in , and so on. The final result is a connected surface that can be viewed as a spiraling parking garage with infinitely many levels extending both upward and downward. This is the Riemann surface associated to .
A point on can be thought of as a pair where is a possible value of the argument of . In this way, can be embedded in .
The logarithm function on the Riemann surface
Because the domains of the branches were glued only along open sets where their values agreed, the branches glue to give a single well-defined function . It maps each point on to . This process of extending the original branch by gluing compatible holomorphic functions is known as analytic continuation.
There is a "projection map" from down to that "flattens" the spiral, sending to . For any , if one takes all the points of lying "directly above" and evaluates at all these points, one gets all the logarithms of .
Gluing all branches of log z
Instead of gluing only the branches chosen above, one can start with all branches of , and simultaneously glue every pair of branches and along the largest open subset of on which and agree. This yields the same Riemann surface and function as before. This approach, although slightly harder to visualize, is more natural in that it does not require selecting any particular branches.
If is an open subset of projecting bijectively to its image in , then the restriction of to corresponds to a branch of defined on . Every branch of arises in this way.
The Riemann surface as a universal cover
The projection map realizes as a covering space of . In fact, it is a Galois covering with deck transformation group isomorphic to , generated by the homeomorphism sending to .
As a complex manifold, is biholomorphic with via . (The inverse map sends to .) This shows that is simply connected, so is the universal cover of .
Applications
The complex logarithm is needed to define exponentiation in which the base is a complex number. Namely, if z and w are complex numbers with z ≠ 0, one can use the principal value to define z^w = e^(w Log z). One can also replace Log z by other logarithms of z to obtain other values of z^w, differing by factors of the form e^(2πikw). The expression z^w has a single value if and only if w is an integer.
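A sketch of the principal-value definition of a complex power, exp(w · Log z), illustrated with the classic value i^i = e^(−π/2):

```python
import cmath
import math

def principal_power(z, w):
    """z**w via the principal value: exp(w * Log z)."""
    return cmath.exp(w * cmath.log(z))

# i**i = exp(i * (i*pi/2)) = e**(-pi/2), a real number.
assert abs(principal_power(1j, 1j) - math.exp(-math.pi / 2)) < 1e-12

# Replacing Log z by Log z + 2*pi*i*k multiplies the result by e^(2*pi*i*k*w).
k = 1
other = cmath.exp(1j * (cmath.log(1j) + 2j * math.pi * k))
factor = cmath.exp(2j * math.pi * k * 1j)
assert abs(other - principal_power(1j, 1j) * factor) < 1e-12
```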
Because trigonometric functions can be expressed as rational functions of e^(iz), the inverse trigonometric functions can be expressed in terms of complex logarithms.
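For instance, the principal arcsine can be written arcsin z = −i · Log(iz + √(1 − z²)); a sketch comparing this formula with the library implementation (agreement away from the branch cuts is assumed):

```python
import cmath

def asin_via_log(z):
    """Principal arcsine through the complex logarithm:
    arcsin z = -i * Log(i*z + sqrt(1 - z**2))."""
    return -1j * cmath.log(1j * z + cmath.sqrt(1 - z * z))

# Agrees with cmath.asin for points off the branch cuts
# (the cuts lie on the real axis where |Re z| > 1).
for z in (0.5, 0.5 + 0.5j, -0.3 + 2j):
    assert abs(asin_via_log(z) - cmath.asin(z)) < 1e-9
```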
In electrical engineering, the propagation constant involves a complex logarithm.
Generalizations
Logarithms to other bases
Just as for real numbers, one can define for complex numbers b and z

log_b z = (log z) / (log b),

with the only caveat that its value depends on the choice of a branch of log defined at b and z (with log b ≠ 0). For example, using the principal value gives

log_i e = (Log e) / (Log i) = 1 / (iπ/2) = −2i/π.
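The worked example above can be checked in a few lines of Python (illustrative only), using the principal branch that cmath.log provides:

```python
import cmath
import math

def log_base(b, z):
    """log_b(z) using the principal branch; requires Log b != 0, i.e. b != 1."""
    return cmath.log(z) / cmath.log(b)

# Log e = 1 and Log i = i*pi/2, so log_i(e) = 1 / (i*pi/2) = -2i/pi.
assert abs(log_base(1j, math.e) - (-2j / math.pi)) < 1e-12
# Consistency check: i raised to this "logarithm base i of e" recovers e.
assert abs(cmath.exp(log_base(1j, math.e) * cmath.log(1j)) - math.e) < 1e-12
```
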
Logarithms of holomorphic functions
If f is a holomorphic function on a connected open subset S of ℂ, then a branch of log f on S is a continuous function g on S such that e^(g(z)) = f(z) for all z in S. Such a function g is necessarily holomorphic with g′(z) = f′(z)/f(z) for all z in S.
If S is a simply connected open subset of ℂ, and f is a nowhere-vanishing holomorphic function on S, then a branch of log f defined on S can be constructed by choosing a starting point a in S, choosing a logarithm b of f(a), and defining

g(z) = b + ∫_a^z f′(w)/f(w) dw

for each z in S.
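The integral construction can be sketched numerically (an illustration under simplifying assumptions: integration along a straight segment from a to z, which must stay inside S and avoid zeros of f):

```python
import cmath

def branch_log_f(f, df, a, b, z, n=2000):
    """Approximate g(z) = b + integral from a to z of f'(w)/f(w) dw,
    integrating along the straight segment from a to z with the midpoint rule.
    b must be a logarithm of f(a), i.e. exp(b) == f(a)."""
    h = (z - a) / n
    total = 0j
    for k in range(n):
        w = a + (k + 0.5) * h  # midpoint of the k-th subsegment
        total += df(w) / f(w) * h
    return b + total

# Example: f(z) = z**2 + 2, whose zeros (+-i*sqrt(2)) lie off the segment used here.
f = lambda z: z * z + 2
df = lambda z: 2 * z
a, z = 0, 0.5 + 0.5j
b = cmath.log(f(a))          # one logarithm of f(0) = 2
g_z = branch_log_f(f, df, a, b, z)
# The defining property of a branch of log f: exp(g(z)) == f(z).
assert abs(cmath.exp(g_z) - f(z)) < 1e-6
```
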
Notes
References
Analytic functions
Logarithms | Complex logarithm | Mathematics | 3,149 |
53,738,672 | https://en.wikipedia.org/wiki/Gel%20wipe | Gel wipe is a moisturizing gel applied to dry toilet paper for cleaning purposes, like personal hygiene, or to reduce skin irritation from diarrhea. It was developed in the 21st century as an environmentally sensitive alternative to wet wipes.
History
Estonian Siim Saat is credited as the inventor of the gel wipe in 2011. In 2016, he was among seven entrepreneurs in the world nominated for an award by the Healthcare Startup Society in London at the Healthcare Startup Conference. Gel wipe is seen as a solution to wet wipe pollution.
Uses
Although marketed primarily for anal cleansing, gel wipe is also used against skin rash, in cases of diarrhea, or even as a substitute for water and soap on hiking trips.
Gel wipes began to be marketed as a complementary hygiene product for toilet paper by the SATU laboratory, as a luxury option by St Joseph's Toiletries, as a hipster product by Zum Bum, and by Zero Taboos, which makes Wipegel. Many adults now use gel wipe with toilet paper as an alternative to wet wipes, which cause environmental and sewer problems. All wet wipes sold as "flushable" in the UK have so far failed the water industry's disintegration tests, the BBC has found. A study by Ryerson University tested 23 wipes with the "flushable" label and found only two that partially disintegrated.
See also
Anal hygiene
References
Personal hygiene products
Paper products
Toilets
Babycare
Disposable products | Gel wipe | Biology | 297 |
54,316,454 | https://en.wikipedia.org/wiki/Tajug | Tajug is a pyramidal or square-pyramid roof form (i.e. an equilateral square base rising to a peak) which is usually used for sacred buildings in Southeast Asia, including Indonesia, such as mosques or cupola graveyards. It is considered to derive from Indian and Chinese architecture and has a history dating to the pre-Islamic era, although there is also an element of influence from Indian mosques. The term tajug is also used to refer to mosques or surau (Islamic assembly buildings) in some regions of Indonesia.
See also
Indonesian mosques
References
Islamic architectural elements
Islamic architecture in Asia
Architecture in Indonesia
Javanese architecture
Architectural elements
Ornaments (architecture)
Towers in Asia | Tajug | Technology,Engineering | 137 |
18,650,424 | https://en.wikipedia.org/wiki/Ernest%20L.%20Eliel | Ernest Ludwig Eliel (December 28, 1921 – September 18, 2008) was an organic chemist born in Cologne, Germany. Among his awards were the Priestley Medal in 1996 and the NAS Award for Chemistry in Service to Society in 1997.
When the Nazis came to power, he left Germany and moved to Scotland, then Canada, then Cuba. He received his B.S. from the University of Havana in 1946. He moved to the United States in 1946 and taught at the University of Notre Dame from 1948. In 1972 he moved to be the W.R. Kenan, Jr. Professor of Chemistry at the University of North Carolina at Chapel Hill until his retirement in 1993. Eliel was elected a Fellow of the American Academy of Arts and Sciences in 1980. In 1981, Eliel became a founding member of the World Cultural Council. He served as president of the American Chemical Society in 1992. In 1995 he received the George C. Pimentel Award in Chemical Education, and in 1996 he was awarded the Priestley Medal of the American Chemical Society. He died in Chapel Hill, North Carolina.
His research focussed on the stereochemistry and conformational analysis of flexible organic molecules, including derivatives of cyclohexane and saturated heterocyclic rings, using nuclear magnetic resonance spectroscopy (NMR) extensively. His 1962 textbook Stereochemistry of Carbon Compounds influenced generations of organic chemists. The most recent edition is Stereochemistry of Organic Compounds, co-authored in 1994 with Samuel H. Wilen.
References
External links
Jeffrey I. Seeman, "Ernest L. Eliel", Biographical Memoirs of the National Academy of Sciences (2014)
1921 births
2008 deaths
American organic chemists
Presidents of the American Chemical Society
University of Notre Dame faculty
University of North Carolina at Chapel Hill faculty
Fellows of the American Academy of Arts and Sciences
Founding members of the World Cultural Council
Members of the United States National Academy of Sciences
Scientists from Cologne
Jewish emigrants from Nazi Germany to the United States
Stereochemists
20th-century American chemists | Ernest L. Eliel | Chemistry | 413 |
3,758,415 | https://en.wikipedia.org/wiki/Nadcap | Nadcap (formerly NADCAP, the National Aerospace and Defense Contractors Accreditation Program) is a global cooperative accreditation program for aerospace engineering, defense and related industries.
History of Nadcap
The Nadcap program is administered by the Performance Review Institute (PRI). Nadcap was established in 1990 by SAE International. Nadcap's membership consists of "prime contractors" who coordinate with aerospace-accredited suppliers to develop industry-wide audit criteria for special processes and products. Through PRI, Nadcap provides independent certification of manufacturing processes for the industry. PRI has its headquarters in Warrendale, Pennsylvania, with branch offices for Nadcap located in London, Beijing, and Nagoya.
Fields of Nadcap activities
The Nadcap program provides accreditation for special processes in the aerospace and defense industry.
These include:
Aerospace Quality Systems (AQS)
Aero Structure Assembly (ASA)
Chemical Processing (CP)
Coatings (CT)
Composites (COMP)
Conventional Machining as a Special Process (CMSP)
Elastomer Seals (SEAL)
Electronics (ETG)
Fluids Distribution (FLU)
Heat Treating (HT)
Materials Testing Laboratories (MTL)
Measurement & Inspection (M&I)
Metallic Materials Manufacturing (MMM)
Nonconventional Machining and Surface Enhancement (NMSE)
Nondestructive Testing (NDT)
Non Metallic Materials Manufacturing (NMMM)
Non Metallic Materials Testing (NMMT)
Sealants (SLT)
Welding (WLD)
The Nadcap program and industry
PRI schedules an audit and assigns an industry-approved auditor, who conducts the audit using an industry-agreed checklist. At the end of the audit, any non-conformity issues are raised through a non-conformance report. PRI administers and closes out the non-conformance reports with the supplier. Upon completion, PRI presents the audit pack to a "special process Task Group" made up of members from industry, who review it and vote on its acceptability for approval.
The Nadcap subscribers include:
309th Maintenance Wing-Hill AFB
Aerojet Rocketdyne
Airbus Group - Airbus
Airbus Group - Airbus Defence and Space
Airbus Group - Airbus Helicopters
Airbus Group - Premium AEROTEC GmbH
Airbus Group - Stelia Aerospace
Air Force
BAE Systems Military Air Information (MAI)
BAE Systems
The Boeing Company
Bombardier Inc.
COMAC
Defense Contract Management Agency (DCMA)
Eaton, Aerospace Group
Embraer S.A.
GARDNER AEROSPACE Group
GE Aviation
GE Aviation - GE Avio S.r.l.
General Dynamics - Gulfstream
GKN Aerospace
GKN Aerospace Sweden AB
Harris Corporation
Heroux-Devtek Landing Gear Division Inc.
Honeywell Aerospace
Howmet Aerospace
Israel Aerospace Industries
Latécoère
Leonardo S.p.A. Divisione Velivoli
Leonardo S.p.A. – Helicopter Division
Liebherr-Aerospace SAS
Lockheed Martin Corporation
Lockheed Martin - Sikorsky Aircraft
Mitsubishi Aircraft Corporation
Mitsubishi Heavy Industries LTD
MTU Aero Engines AG
NASA
Northrop Grumman Corporation
Parker Aerospace Group
Raytheon Company
Raytheon Technologies - Goodrich
Raytheon Technologies - Collins Aerospace (Hamilton Sundstrand)
Raytheon Technologies - Pratt & Whitney
Raytheon Technologies - Pratt & Whitney Canada
Raytheon Technologies - Collins Aerospace (Rockwell Collins)
Rolls-Royce
SAFRAN Group
Singapore Technologies Aerospace
Sonaca
Spirit AeroSystems
Swift Engineering
Textron Inc. - Textron Aviation
Textron Inc. - Bell Helicopter
Thales Group
Triumph Group Inc.
Zodiac Aerospace (SAFRAN)
Nadcap Meetings
Nadcap meetings are held several times a year in different locations worldwide. For example, the 2017 meetings were held in New Orleans (Louisiana, USA) in February, Berlin (Germany) in June, and Pittsburgh (Pennsylvania, USA). During these meetings there are open Task Group meetings and other workshops (with participation of primes, suppliers, and PRI staff). These meetings are used to discuss program development and changes to audit criteria, among other topics. Agendas and minutes are posted on the PRI website.
Nadcap Training
During the Nadcap meetings, training classes are provided on different topics such as:
Root Cause Corrective Action - RCCA
Special processes, such as, NDT, chemical processing, etc.
Internal auditing
AS/EN/JISQ 9100
Problem Solving Tools
Nadcap Audit Preparation – Chemical Processing
Nadcap Audit Preparation – Heat Treating
Nadcap Audit Preparation – Metallic Material Testing Laboratories
Nadcap Audit Preparation – Non-Destructive Testing
Nadcap Audit Preparation – Welding
References
External links
Boeing official site
ADS Group official site
Aerospace Manufacturing
Quality Manufacturing Today
Aerospace engineering | Nadcap | Engineering | 911 |
423,740 | https://en.wikipedia.org/wiki/Grommet | A grommet is a ring or edge strip inserted into a hole through thin material, typically a sheet of textile fabric, sheet metal or composite of carbon fiber, wood or honeycomb. Grommets are generally flared or collared on each side to keep them in place, and are often made of metal, plastic, or rubber. They may be used to prevent tearing or abrasion of the pierced material, to protect the insulation of the wire, cable, or line being routed through the hole, to cover the sharp edges of the piercing, or all of the above.
A small grommet may also be called an eyelet, used for example on shoes, tarps and sails for lacing purposes.
Grommets in electrical applications are referred to as "insulating bushings". Most common are molded rubber bushings that are inserted into hole diameters up to 2″ (51 mm). There are many hole configurations from standard round to assorted U-shapes. Larger penetrations that are irregular in shape as well as long straight edges are often fitted with extruded or stamped strips of continuous length, referred to as "grommet edging". This type of protective bushing is quite common in applications that range from telecom switches and data center cabinets to complex and dense wire/cable and even hydraulic tubing in aircraft, transportation vehicles and medical equipment.
As reinforcement or crafting
Grommets are typically used to reinforce holes in leather, cloth, shoes, canvas and other fabrics. They can be made of metal, rubber, or plastic, and are easily used in common projects, requiring only the grommet itself and a means of setting it. A simple punch, a metal rod with a convex tip, is often sold with the grommets. It can be struck with a hammer to set the grommet. It can alternatively be set with an electronic, pneumatic, or gas-powered machine. There are also dedicated grommet presses with punch and anvil, as shown in the picture, ranging from inexpensive to better-quality tools, which are somewhat faster to use.
Typical applications are footwear for boot and shoe laces, in laced clothing such as corsets, in flags for hoisting, and in curtains and other household items that require hanging from hooks, as when they are used in conjunction with tensioner rods for shower curtains. The grommet prevents the cord from tearing through the hole, thereby providing structural integrity. Small grommets are also called eyelets, especially when used in clothing or crafting. Eyelets may be used purely decoratively for crafting. When used in sailing and various other applications, they are called cringles. Sometimes field workers refer to them as grunyons.
Maritime use
Traditionally, rope grommets have been widely used on sailing ships in a variety of ways. They have been utilized as chest handles or on row boats as a soft oar lock. They are a rope ring that is made by first disassembling the rope then re-weaving the strands to the desired size.
Use in electrical equipment
For cable protection
Holes in metal or another hard material will often have sharp edges. Electrical wires, cord, rope, lacings, or other soft vulnerable material passing through the hole can become abraded or cut, or electrical insulation may break due to repeated flexing at the exit point of the casing of a junction box for example. Rubber, plastic or plastic coated metal grommets are used to avoid this. Tight fitting rubber grommets can also prevent the entry of dirt, air, water, etc. The smooth and sometimes soft inner surface of the grommet shields the wire from damage.
Grommets are generally used whenever wires pass through punched or drilled sheet metal or plastic casings for this reason. Molded and continuous strip grommets, also known as edge grommets, are manufactured in a wide variety of sizes and lengths expressly for this purpose; they are usually a single piece which can be inserted by hand. Two-piece hard plastic devices are available which also grip the wire that passes through. These are called strain relief bushings and are often used to insulate, anchor, and protect power cords where they enter panels, preventing a tug or twist on the wire from stressing the electrical connections inside the connected equipment. Sleeved grommets have a flexible extension (sleeve), usually tapered or moulded to flex increasingly towards the free end in order to reduce fracturing of electrical insulation.
To minimize vibration
Grommets made of rubber or other elastic material are also used to minimize the transmission of vibration. They were widely used for mounting shock-sensitive computer disk drives, particularly in equipment subject to vibration or jarring, but are not usually used with more robust modern drives. The screws that hold the drive in place pass through grommets that decouple it acoustically from the chassis. Grommets are used in a similar way to acoustically isolate electronic circuit components that are susceptible to microphonism caused by mechanical vibration or jarring.
Surgical grommets
In chronic cases of otitis media with effusions present for months, surgery is sometimes performed to insert a grommet, called a "tympanostomy tube" into the eardrum to allow air to pass through into the middle ear, and thus release any pressure buildup and help clear excess fluid within.
This is also a correcting measure for a patulous Eustachian tube (when air moves to and from the middle ear with each breath making the eardrum flap).
Gallery
See also
Blind rivet
Cable grommet
Cringle
Shoulder washer
References
External links
Fasteners
Footwear components
Implants (medicine)
Textile closures | Grommet | Engineering | 1,194 |
70,171,813 | https://en.wikipedia.org/wiki/Mobility%20transition | Mobility transition is a set of social, technological and political processes of converting traffic (including freight transport) and mobility to sustainable transport with renewable energy resources, and an integration of several different modes of private transport and local public transport. It also includes social change, a redistribution of public spaces, and different ways of financing and spending money in urban planning. The main motivation for mobility transition is the reduction of the harm and damage that traffic causes to people (mostly but not solely due to collisions) and the environment (which also often directly or indirectly affects people) in order to make (urban) society more livable, as well as solving various interconnected logistical, social, economic and energy issues and inefficiencies.
Motivation
Environmental damage
An important goal is the reduction of greenhouse gas emissions such as CO2. To achieve the goal set in the Paris Agreement, that is, to restrict global warming to clearly below 2 °C, the burning of fossil fuels is to be discontinued around 2040. Because the CO2 emissions of traffic practically need to be reduced to zero, the measures taken so far in the transport sector are not sufficient in order to achieve the climate change mitigation goals that have been set.
Air pollution
A mobility transition also serves health purposes in metropolitan regions and large cities and is intended in particular to counteract massive air pollution. For example, in Germany in 2015, traffic caused about 38% of human-related nitrogen oxide emissions. According to Lelieveld et al. (2015), air pollution from land traffic killed around 164,000 people in 2010, over 6,900 of them in Germany. A 2017 study by the same lead author concluded that air pollution from road traffic in Germany causes 11,000 deaths every year that could potentially be avoided; this figure is 3.5 times the number of fatalities from traffic accidents.
To put the contribution of road traffic to air pollution in Germany into perspective: according to the Federal Statistical Office of Germany, 58 out of every 100 inhabitants owned a passenger car.
Accident fatalities, quality of life, aggressive behaviour
Further motives for the mobility transition are the desire for less noise, streets with quality of life and lower accident risks (see also Vision Zero). According to estimates by the European Environment Agency, 113 million people in Europe are affected by road noise at unhealthy levels. With increasing traffic and commuter numbers, many citizens also wished for more attractive places to spend time in public spaces. A mobility transition therefore also serves to increase the quality of life.
The mobility transition is also seen by some as a means of reducing aggressive behaviour in traffic (road rage) and in society. Studies indicate that people in large and expensive cars are more likely to behave more recklessly. According to the German Verkehrsklima 2020 (Traffic Mood 2020) study, women feel more insecure in traffic than men, and they want more controls and stricter laws. On the other hand, the "evil eye" design of vehicles is increasingly used by manufacturers to sell vehicles to drivers who want to feel strong and superior on the road. Accident reporting by the press and the police sometimes paints a distorted picture.
Traffic congestion
Traffic congestion on streets and roads has been increasing. Traditional traffic policy usually relies on expanding roads to solve the congestion problem. From a global perspective, there are two important factors behind the increasing traffic jams: urbanisation, and the purchase of more automobiles (often as status symbols) as prosperity increases. A return to more public and non-motorised transport is likely in the future.
Peak oil
Petroleum production is approaching its peak; by some estimates, the peak may already have been passed in the 2020s. The Earth's oil reserves are finite, and oil extraction will become inadequate to power as many petroleum-fueled vehicles. Sooner or later in the 21st century, mobility must rely on other energy sources.
Mobility transition concept
Origins
There has been criticism of automotive cities and car dependency since at least the 1960s. In the Netherlands, Provo Luud Schimmelpennink's 1965 White Bicycle Plan was an early attempt to stop the rising death toll due to car-related traffic accidents, and to stimulate cycling as a safer and healthier alternative for short-distance travel in the city of Amsterdam. Although the plan itself was a complete failure, it drew widespread publicity and influenced urban planning ideas around the world – with the white bicycle becoming 'an almost mythical worldwide symbol for a better world'. It inspired the emergence of both strongly anti-car movements such as Kabouter (Gnome), Amsterdam Autovrij ("Amsterdam Car-Free") and De Lastige Amsterdammer ("The Troubled/Troublesome Amsterdammer"), as well as pro-cycling movements in Amsterdam and elsewhere in the Netherlands in the early 1970s. A prominent example was protest group Stop de Kindermoord ("Stop the Child Murder"), founded in 1972 (formalised in 1973) by a journalist from Eindhoven whose young daughter was killed in a traffic accident, and shortly thereafter another daughter of his was almost killed as well. The movement highlighted how lethally dangerous traffic had become for children in particular, and that the authorities had failed to acknowledge and address the problem. It mobilised parents, teachers, journalists, other citizens and politicians; even right-wing politicians, who had traditionally promoted automobile interests, were influenced by the campaign and became more willing to adopt preventive measures. In Autokind vs Mankind (1971) and On the Nature of Cities (1979), American author Kenneth R. Schneider vehemently criticised the excesses of automobile dependence and called for a struggle to halt and partially reverse negative developments in transportation, although he was largely ignored at the time.
An early theorist on mobility transitions was American cultural geographer Wilbur Zelinsky, whose 1971 paper "The Hypothesis of the Mobility Transition" formed the basis of what has become known as the Zelinsky Model. In 1975, Austrian civil engineer and transportation planner Hermann Knoflacher sought to promote cycling traffic in Vienna. He caricatured the enormous spatial demands of automobiles with his self-invented Gehzeug ("walking gear/vehicle").
Definitions and scope
The German dictionary Duden defines 'mobility transition' (German: Verkehrswende) as "fundamental conversion of public transport [especially with ecological objectives]" (German: „grundlegende Umstellung des öffentlichen Verkehrs [besonders mit ökologischen Zielvorstellungen]"). Adey et al. (2021) defined 'mobility transition' as 'the necessary and inevitable transformation from a world in which mobility is dominated by the use of fossil fuels, the production of greenhouse gases and the dominance of automobility to one in which mobility entails reduced or eliminated fossil fuels and GHG emissions and is less dependent on the automobile.'
According to a 2016 thesis paper by Agora Verkehrswende – a joint initiative of Stiftung Mercator and the European Climate Foundation – the goal of a traffic transition (Verkehrswende) in Germany is ensuring climate neutrality in transport by 2050. It must be based on two pillars:
Mobility transition (Mobilitätswende): The goal is a significant reduction of energy consumption. The mobility transition is intended to bring about a qualitative change in traffic behaviour (Verkehrsverhalten), in particular avoiding and relocating traffic. An efficient design of the traffic systems without restricting mobility should be achieved.
Energy transition in traffic (Energiewende im Verkehr, see also phase-out of fossil fuel vehicles): In order to decarbonise traffic, the conversion of the energy supply of traffic towards renewable energy is considered a necessity.
A mobility transition also includes a cultural change, in particular a re-evaluation of "the street". Currently, the primary purpose of streets is to direct traffic through the city with as little disruption as possible. In the future, the dominance of the car should give way to equal rights for all modes of transport.
In an expanded definition, a distinction is made between a mere propulsion transition on the one hand and a fundamental mobility transition on the other:
Propulsion transition (Antriebswende): the gradual replacement of internal combustion engines by those powered by hydrogen, fuel cells or battery-electric power.
Traffic transition (Verkehrswende): private car traffic is reduced or replaced by other modes of transportation. In the large cities and metropolitan regions in particular, the focus is increasingly on establishing and spreading alternative means of transport - from the expansion of public transport to the promotion of so-called active transport (pedestrian and bicycle traffic), the approval of new electrified micro-vehicles such as e-scooters and the range of different mobility services (the so-called MaaS, "mobility as a service").
Mobility transition (Mobilitätswende): This perspective takes into account not only the distances travelled and the means of transport used for them, but also the socio-economic, cultural and spatial dynamics and constraints that cause the need to overcome distances. These include, for example, settlement and transport policies, housing and labour markets, social policy and migration. The need to quickly overcome distances is not understood as an invariant characteristic of people, but as part and prerequisite of the current, growth-oriented capitalist shape of society.
In some cases, a mobility transition is also presented as a paradigm shift in the 'understanding of ownership'. Collective use of means of transport makes it possible to use modes of transportation 'adapted to specific needs', such as carsharing, peer-to-peer carsharing and bicycle-sharing systems. It also enables different modes of transportation to be combined along a route. Electric vehicles could better exploit their advantages when networked with other means of transport: adapted to the respective use, they can be small or large depending on the application, and do not (always) have to be designed for long distances. A suitable charging infrastructure is required. Under certain circumstances, in such an environment it would no longer be necessary to own a private vehicle.
In Germany, the mobility transition can be contrasted with the Bundesverkehrswegeplan 2030 ('Federal Transport Routes Plan 2030'). The mobility transition is based on avoiding traffic and shifting to rail, whereas the Bundesverkehrswegeplan is based on the construction and expansion of trunk roads in Germany (including but not limited to the Autobahn). A transport scientist regards the transition as a "turning away from car subsidies through billions [of euros] in road network expansion", and sees a decisive change in the priorities of transport policy as a necessary condition for achieving this.
The Umweltbundesamt announced that in 2018, the sum of all environmentally harmful subsidies in Germany was 65.4 billion euros, almost half of them in the areas of traffic and transport. In traffic, such subsidies with harmful effects even increased from 2012 to 2018.
Changes in behaviour due to the COVID-19 pandemic
The COVID-19 pandemic made it clear that work and transport can be organised differently, even in a comparatively short time. An increased focus on working from home could save millions of tonnes of greenhouse gases.
Measures in passenger transport
Overview
Various measures have been proposed by different people and groups to achieve a mobility transition.
In a 2017 position paper, German think tank Agora Verkehrswende described how a climate-neutral conversion of transport would be possible by 2050 without sacrificing mobility. In addition to technological innovations, it discusses new traffic concepts, regulatory measures and cultural change, as well as multi-link transport chains (intermodal passenger transport). There were also studies on this in November 2019 by the Verkehrsclub Deutschland (VCD, "Traffic Club Germany") and the Heinrich Böll Foundation.
Mobility transition
Various measures have been proposed to achieve the mobility transition – in particular a significant reduction in energy requirements and a change in traffic behaviour:
Major changes can succeed with the help of traffic avoidance and a shift towards sustainable transport in the form of pedestrian traffic, cycling, rail transport and local public transport. According to a 2010 report, each person in Germany in 2008 made an average of 3.4 trips a day, with an average length of 11.5 kilometres. On average, private cars were parked for around 22.5 hours a day, because they were used for only between 1 hour 19 minutes and 1 hour 28 minutes a day. Electric cars with a short range, bicycles, electric bicycles (e-bikes), pedelecs, cargo bikes, and more recently e-scooters, are usually well suited for a majority of these routes. The joint use of automobiles through carsharing could increase the utilisation of the vehicles and lead to fewer cars being needed overall. This could also reduce the land consumption of parking spaces and free up space for other uses. In 2002 and 2008, vehicles in Germany were occupied by an average of 1.5 people. One method of efficient use of passenger cars is the formation of carpools and the operation of ridesharing companies. Needs-based use of various sorts of low-emission vehicles can also serve to reduce fuel consumption. The latter measures would lead to an increase in energy and vehicle efficiency. Another component in the future mobility mix could be Neighborhood Electric Vehicles.
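The figures quoted above are internally consistent, as a short sanity check (illustrative, not from the source) shows:

```python
# Daily usage of 1 h 19 min to 1 h 28 min leaves a car parked roughly 22.5 h a day.
low_use_h = 1 + 19 / 60    # 1 h 19 min, in hours
high_use_h = 1 + 28 / 60   # 1 h 28 min, in hours
assert 22.5 <= round(24 - high_use_h, 1) <= round(24 - low_use_h, 1) <= 22.7
# 3.4 trips of 11.5 km each imply roughly 39 km travelled per person per day.
assert abs(3.4 * 11.5 - 39.1) < 1e-6
```
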
Numerous regulatory control measures are possible, for example congestion charges, aviation taxation and subsidies (such as a jet fuel tax and a departure tax), a reform of company car taxation, parking space management (for example through pay and display), or an extension of emissions trading to road traffic. The introduction of speed limits, or lowering existing speed limits, would also have an impact on greenhouse gas emissions such as CO2 (carbon dioxide) and NOx (nitric oxide and nitrogen dioxide). Passenger cars consume a disproportionately large amount of fuel at high speeds. A speed limit can also have secondary emissions-reducing effects, about which there is still considerable uncertainty: lower maximum speeds and longer travel times can contribute to a shift in traffic to rail and to the promotion of vehicles with lower engine power.
The externalities of traffic, namely the impact that air pollution caused by motor vehicles has on society and the environment, must also be taken into account here.
The Dutch nitrogen crisis, which indirectly caused the Dutch farmers' protests, convinced the government in November 2019 to lower the speed limit in the Netherlands on national roads to 100 kilometres per hour during the day, from 6 am to 7 pm. In the evening and at night the old speed limits were maintained. Meanwhile, the State of the Netherlands v. Urgenda Foundation court case was decided in favour of its plaintiff Urgenda (initially in June 2015, upheld on appeal in October 2018, and finally confirmed by the Supreme Court of the Netherlands on 20 December 2019), which successfully forced the government to implement the necessary measures to reduce the Netherlands' CO2 emissions by 25% from 1990 levels by 2020. Although the government was free to choose which measures it would take to achieve this reduction, the plaintiff and other environmentalists had suggested throughout the legal process lowering the speed limit as one of several effective options. Similar environmental arguments for speed limits have been proposed in Germany.
As one of several methods to mitigate the environmental impact of aviation, a shift to other modes of transport or a switch from short-haul air traffic to high-speed trains has been proposed. In several countries in Europe, increasingly in the 2010s and early 2020s, some governments have even imposed a short-haul flight ban on all airlines, while many governmental agencies, commercial companies, universities, and NGOs have imposed restrictions or prohibitions on their employees to not take short-haul flights that can also be properly accomplished by train.
In the field of urban planning, there are concepts for walkability, the compact city (or 'city of short distances'), New Urbanism (or its variant New Pedestrianism), and car-free living. In research policy, there are demands to give more consideration to the consequences of motorised private transport in the form of practice- and solution-oriented research.
Further development of local public transport
According to a 2015 study by the Verkehrsclub Deutschland, local public transport in Germany was not customer-friendly enough. Cryptic route networks, opaque fare systems, ticket machines that were difficult to operate, draughty bus stops, and a lack of announcements about transfer and connection options were criticised. The club also called for better linking of local public transport with other modes of transportation, including bike racks at bus stops, information on taking bikes on buses and trains, and options for switching to carsharing providers. Furthermore, the poor synchronisation of timetables was criticised, because it led to unnecessarily long waiting times for connecting buses or trains. In 2012, several local public transport companies in Bavaria and Saxony reportedly had been making efforts to improve the usability of ticket machines. Against this background, Federal Transport Minister Alexander Dobrindt in 2017 called for electronic tickets and a uniform tariff system for all transport associations to be established by 2019.
Since the 2010s, there have been frequent discussions on whether local public transport should be free of charge. The best-known example of free public transport is the Estonian capital Tallinn, where buses and trains have been free since 2013. By 2021, most counties in Estonia had also introduced free buses and trains. Public transport is also free throughout Luxembourg. In Germany, the cities of Monheim am Rhein and Langenfeld, Rhineland were testing free public transport as of September 2021.
Some cities have introduced small electric buses, primarily in inner-city areas. The historic city centre of Aix-en-Provence, France is very narrow and closed to cars, taxis and normal bus traffic. To get people with restricted mobility to their destination, wheelchair-accessible electric minibuses operate there without a fixed timetable. Likewise, in the medieval old town of Regensburg, only electric minibuses still operate; in addition, two self-driving electric shuttles are in use in Regensburg's industrial park. Berlin and Göppingen also plan to supplement their local public transport with electric, highly automated minibuses.
In some cities, cableways are built as part of local public transit. Such cableways can be found in places such as Medellín (see Metrocable (Medellín)), La Paz (see Mi Teleférico), New York (see Roosevelt Island Tramway), Portland (see Portland Aerial Tram), Algiers, Lisbon, Brest, Bozen, London (see Emirates Air Line (cable car)) and Ankara. Cable cars are electrically operated and have very low CO2 emissions compared to other modes of transport. At 50% capacity, a cable car causes 27 grams of CO2 per person and kilometre, a train with an electric locomotive 30 grams, a bus with a diesel engine 38.5 grams, and a car with a combustion engine as much as 248 grams. Furthermore, cable cars cause practically no noise pollution along the route, since the individual gondolas do not have their own drive but are moved by a central motor housed in the station. In Germany, on the occasion of the Bundesgartenschau ('Federal Horticultural Show'), cable cars have been built in Berlin (see IGA Cable Car), Koblenz (see Koblenz cable car) and Cologne (see Cologne Cable Car). Compared to underground or suburban trains, cable cars are relatively cheap and can be built quickly. As of November 2021, there are projects to build more cable cars to supplement local public transit in Berlin, Bonn, Düsseldorf, Cologne, Munich, Stuttgart and Wuppertal.
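The per-person emissions comparison above can be expressed as a short calculation. The figures are those quoted in the paragraph (at 50% capacity); the 5 km trip length is a hypothetical illustration:

```python
# Grams of CO2 per person-kilometre at 50% capacity, as quoted above.
EMISSIONS_G_PER_PKM = {
    "cable car": 27.0,
    "electric train": 30.0,
    "diesel bus": 38.5,
    "combustion-engine car": 248.0,
}

def trip_emissions(mode: str, distance_km: float) -> float:
    """CO2 in grams emitted per passenger for one trip of the given length."""
    return EMISSIONS_G_PER_PKM[mode] * distance_km

# Compare the modes over a hypothetical 5 km inner-city trip:
for mode, g in sorted(EMISSIONS_G_PER_PKM.items(), key=lambda kv: kv[1]):
    print(f"{mode}: {trip_emissions(mode, 5):.1f} g CO2")
```

On these figures, a car trip emits roughly nine times as much CO2 per passenger as the same trip by cable car.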
Development is also affecting rural areas. There, integrated public transport systems have emerged as a solution and play an important role in rural development, especially in post-communist countries.
Propulsion and energy transition in transport
In order to achieve the energy transition in transport, it is considered necessary to refrain from burning petroleum-based fuel and to use more climate-friendly propulsion technologies or fuels. Electricity from renewable sources, or e-fuels or biofuels produced from green electricity, can serve as substitutes for petrol and diesel fuel.
Since the overall efficiency of e-fuels is far lower than direct electrification via electric cars, the German Advisory Council on the Environment has recommended restricting the use of electricity-based synthetic fuels to air and shipping traffic in particular, in order not to increase electricity consumption too much. For example, hydrogen-powered fuel cell vehicles (FCVs) require more than twice as much energy per kilometre as battery electric vehicles (BEVs), and vehicles with combustion engines powered by power-to-liquid fuels need between four and six times as much. Battery vehicles therefore have significantly better energy efficiency than vehicles that are operated with e-fuels. In general, electric cars consume around 12 to 15 kWh of electrical energy per 100 km, while conventionally powered cars use the equivalent of around 50 kWh per 100 km. At the same time, battery-electric driving eliminates the energy required for the production, transport and distribution of fuels such as petrol or diesel. In China in particular, the switch from internal combustion engines to electromobility is being promoted for health reasons, to counteract the massive air pollution (smog) in the cities.
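The efficiency comparison can be sketched numerically. The consumption figures (12 to 15 kWh/100 km for battery-electric cars, about 50 kWh/100 km for combustion cars) and the fuel-cell and power-to-liquid factors are taken from the paragraph above; the annual mileage is a hypothetical assumption:

```python
BEV_KWH_PER_100KM = 13.5   # midpoint of the 12-15 kWh/100 km range above
ICE_KWH_PER_100KM = 50.0   # fuel-energy equivalent for combustion cars

def annual_energy_kwh(kwh_per_100km: float, km_per_year: float = 15_000) -> float:
    """Energy for one year of driving, given consumption per 100 km."""
    return kwh_per_100km * km_per_year / 100

bev = annual_energy_kwh(BEV_KWH_PER_100KM)
print(f"battery-electric: {bev:.0f} kWh/year")
print(f"combustion (fuel equivalent): {annual_energy_kwh(ICE_KWH_PER_100KM):.0f} kWh/year")
# Per the ratios above: FCVs need more than twice, and power-to-liquid
# combustion four to six times, the energy of a battery-electric car.
print(f"fuel cell (>2x): over {2 * bev:.0f} kWh/year")
print(f"power-to-liquid (4-6x): {4 * bev:.0f} to {6 * bev:.0f} kWh/year")
```

The spread between the first two lines is the direct-electrification advantage the Advisory Council's recommendation rests on.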
According to Canzler & Wittowsky (2016), the propulsion transition could also become the central building block of Germany's Energiewende. While the switch to renewable energies is already underway worldwide, the energy transition in transport is proving more difficult, especially the switch from oil to sustainable energy sources. However, disruptive technologies (such as the development of more powerful and cheaper batteries or innovations in the field of autonomous driving) and new business models (especially in the field of digitalisation) can also lead to unpredictable, rapid and far-reaching changes in mobility.
New methods of getting around in urban traffic have also emerged:
Vienna
Vienna, the capital of Austria, has been consistently developing into a city that restructures public space and promotes local public transport. Viennese urban planner Hermann Knoflacher has stated: 'The money comes on foot or by bike.' Using scarce urban space for parking is economically inefficient, whereas a car-free street increases the turnover of restaurants, clothing stores and retailers, creating new jobs.
The attractiveness of public transport can be stimulated by lowering the price of an annual pass: in Vienna, an annual public transport pass costs the equivalent of 1 euro a day. Between 2012 and 2018 the number of annual ticket holders increased from 373,000 to 780,000. Alongside the price change, the city began to invest more heavily in local transport. In July 2018, some German cities announced that they would follow the Viennese model and lower the prices for annual tickets.
Luxembourg
Since 1 March 2020, local public transport across Luxembourg has been free of charge for everyone. The Grand Duchy thus became the first country in the world to introduce free local public transit. An exception to this is first class travel on the railways. A major reason for the overhaul was the increasingly problematic traffic jams on Luxembourg's roads.
Further examples
Several more significant examples of (potential) components and initiatives for mobility transition that have been proposed, studied, or put into practice include:
As an alternative to the Viennese model of the annual ticket, a citizen ticket is being discussed in some German municipalities as a new way of financing and using local public transport. It is to be financed by a levy for all citizens of a municipality and function as a kind of flat rate for buses and trains.
Phase-out of fossil fuel vehicles: In Germany, a ban on the sale of combustion engines from 2030 was adopted by the Bundesrat in October 2016. Norway wants no new petrol or diesel cars to be registered from as early as 2025, and ships and ferries to be registered only without fossil fuels from 2030, and is therefore considered a leading nation in electromobility. The Netherlands is also planning a ban on the registration of cars with conventional drives from 2025. In China, all automotive groups are obliged to meet a quota for the production and sale of purely electric or plug-in hybrid drives.
There are numerous electromobility projects in Germany, such as the Modellregionen Elektromobilität and BeMobility. The German Association of Towns and Municipalities (DStGB) sees towns and municipalities as drivers and designers of the mobility transition and also supports a number of projects.
Critical Mass is a form of direct action for promoting more and safer cycling in cities around the world. When riding together through inner cities, cyclists draw attention to cycling as a form of individual transport, advocate for mobility transition and, in particular, more rights for cyclists, better cycling traffic networks and infrastructure, and more room for non-motorised traffic. The first Critical Mass action took place in September 1992 in San Francisco.
To improve air quality, efforts across Europe are being stepped up to introduce low-emission zones. A progressive approach is the French Crit'air scheme, which provides for different restrictions depending on air pollution. The applicable prohibitions can be viewed on the Internet or via a phone app. Electric or hydrogen-powered vehicles receive category 0 (green vignette) and can always drive anywhere. Comparable environmental badges were also issued in Germany.
Instead of a company car, individual companies offer their employees a mobility budget that can be used to pay for different means of transport for business purposes.
The city-state of Singapore has not allowed additional private cars since 1 February 2018 under its Vehicular Quota System. This is intended to promote the switch to other means of transport. It is the only country in the world which requires prospective vehicle owners to bid for a Certificate of Entitlement before they are allowed to own a vehicle for up to 10 years. The state only gives permission for a new car if another has been de-registered. Singapore was also the first country in the world to implement congestion pricing in 1975.
Since 2003, there has been a London congestion charge which drivers have to pay in Central London. From October 2017 on, an additional, new fee for older and more polluting cars and vans is due with a toxicity charge.
In many cities in Germany there are citizens' initiatives which, following the example of the Initiative Volksentscheid Fahrrad ("Cycling Referendum Initiative") in Berlin, advocate for mobility transition and "bicycle laws". In June 2018, the Berlin Mobility Act to promote cycling was passed in Berlin, also due to a successful application for a referendum.
Traffic lights are being tested in Karlsruhe as part of a pilot project which, in contrast to conventional pedestrian traffic lights, display a permanent green light for pedestrians and cyclists, not for vehicles, and only interrupt this when a vehicle approaches.
In Japan, it is generally illegal to park a car on the street; a car buyer must provide evidence of owning a private parking space or renting a public one. As of 2019, monthly rents for public parking spaces in the more central districts of Tokyo were considerably higher than in residential areas on the outskirts. Only after the police have verified that the parking space exists and is large enough for the car in question does the car dealer approve the purchase and give the owner a parking sticker to put on the new car's front or rear window. The Japanese state has been using such regulations to discourage the sale of luxury cars and to encourage consumers to buy small, lightweight cars with small engines (see also: kei car) or to switch to local public transport.
In Spain, a lower general speed limit for built-up areas was introduced in 2021. On narrow streets with only one lane (often found in historic city centres), the permitted speed was reduced further, while for streets with more than one lane in both directions, the previously set speed limit of 50 km/h was maintained. A total of 509 people died in urban traffic accidents in Spain in 2019. The 2021 reduction of urban speed limits was intended to reduce the risk of pedestrians dying after being hit by a car by 80%.
With the educational motto Weniger Wagen wagen ("risk fewer cars"), the Roman Catholic Archdiocese of Cologne has sought to raise awareness, and has calculated: 'Due to mobility (journeys to work, committees, church services, etc.), around 16,370 tons of CO2 (as of 2012) are emitted annually in the Archdiocese of Cologne. This corresponds to a share of approx. 13 per cent of the archdiocese's total emissions.' In response, the Archdiocese stated it sought a 'strategic and practical reorientation of mobility', including stimulating cycling through the Pfarr-Rad initiative (a pun on Pfarrer "priest" and Fahrrad "bicycle") and the BistumsTicket ("diocesan ticket"), which offers reduced fees for public transport travel by groups of 50 people or more to Catholic events organised within the archdiocese.
Short-haul flight ban
By July 2019, most political parties in Germany, including the Left Party, the Social Democrats, the Green Party and the Christian Democrats, had come to agree on moving all governmental institutions remaining in Bonn (the former capital of West Germany) to Berlin (the official capital since German reunification in 1990), because ministers and civil servants were flying between the two cities about 230,000 times a year, which was considered impractical, expensive and environmentally damaging. The distance of 500 kilometres between Bonn and Berlin could only be travelled by train in 5.5 hours, so either the train connections required upgrading, or Bonn had to be abolished as the secondary capital.
Measures in freight transport
Sea freight
By far the largest part of the world's freight traffic is sea freight. In 2010, about 60,000 trillion tonne-kilometres were transported by sea, which was 85% of the world's total freight traffic. According to a 2015 forecast by Statista, by 2050 the volume of freight will have increased to four times the 2010 level, while the share of sea freight will remain about the same.
Transporting goods by container ship is very efficient. Relatively few carbon dioxide (CO2) emissions are caused per transported tonne and kilometre compared to transport by truck (lorry). According to the Naturschutzbund Deutschland (NABU), the latter emit 50 grams of carbon dioxide per tonne and kilometre, while container ships only emit 15 grams. However, the mineral oil-based ship fuel used by container ships is particularly polluting; 90 per cent of all large ships run on heavy fuel oil (bunker fuel). Among other things, this means that emissions of toxic sulfur oxide are many times higher. To counteract this problem, the International Maritime Organization (IMO) lowered the limit value for sulfur in fuel from 3.5% to 0.5% in 2020.
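Using the NABU figures above (50 grams of CO2 per tonne-kilometre by truck versus 15 grams by container ship), the difference for a single consignment can be illustrated; the 20-tonne load and 1,000 km distance below are hypothetical:

```python
TRUCK_G_PER_TKM = 50.0  # grams CO2 per tonne-kilometre (NABU figure above)
SHIP_G_PER_TKM = 15.0

def freight_co2_kg(grams_per_tkm: float, tonnes: float, km: float) -> float:
    """Total CO2 in kilograms for moving a load over a distance."""
    return grams_per_tkm * tonnes * km / 1000

# A hypothetical 20 t consignment moved 1,000 km:
print(f"truck: {freight_co2_kg(TRUCK_G_PER_TKM, 20, 1000):.0f} kg CO2")
print(f"ship:  {freight_co2_kg(SHIP_G_PER_TKM, 20, 1000):.0f} kg CO2")
```

Note that the CO2 advantage says nothing about the sulfur problem described above, which is why the IMO regulates fuel sulfur content separately.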
Efficiency can be further increased and fuel consumption reduced by building the ships even larger.
There are innovations to harness wind power for sea transportation. These include cylindrical sails that can be retrofitted to cargo ships (making them "rotor ships" or "Flettner ships") and can reduce fuel consumption. Another option is a towing kite construction, which was originally developed in 2001 by the Hamburg-based company SkySails and is now being sold by AirSeas. The sail has an area of 1,000 square metres and was developed to reduce fuel consumption on cargo ships by up to 20%. As of 2019, the aviation group Airbus was testing this idea on four of its own freighters with the aim of saving up to 8,000 tonnes of carbon dioxide emissions.
Inland navigation
As inland navigation (also known as 'inland waterway transport' (IWT) or 'inland shipping') is a relatively environmentally friendly option for freight transport (similar to rail freight transport), researchers and policy makers have been aiming to shift the volume of cargo transported by more pollutive means towards inland navigation (for example, as part of the 2019 European Green Deal). According to the Research Information System for Mobility and Traffic (FIS; an agency of the German Transport Ministry), deficits in the competitiveness of German inland navigation, especially in an international comparison, are responsible for its stagnating transport volume. An under-developed waterway infrastructure, with insufficient channel depths and bridge clearance heights, leads to low loading capacities and thus to high costs. An exception is the waterways of the Rhine area, which also have by far the highest transport volume. Furthermore, the German inland waterway fleet is quite old by international comparison (an average age of 45 years in 2013).
Inland navigation is closely related to seaport hinterland traffic. For example, in the modal split in hinterland traffic at the Dutch and Belgian seaports (Rotterdam, Amsterdam, Antwerp and Zeebrugge), inland shipping has a share of around 55%, while in Germany it usually remains below 10% of hinterland traffic. The reason for this is the better expansion of the Rhine waterways. Furthermore, the majority of the 250 important inland ports in Germany are owned by large companies that only handle transport goods from third-party companies to a small extent. Against this background, the FIS has called for the expansion and maintenance of German waterways. The number and carrying capacity of the German inland waterway vessels has remained constant in the early 21st century and was around 2.61 million tonnes in 2015.
Various approaches to energy efficiency and air pollution reduction are being tested and researched in inland shipping. These include propulsion configurations such as the father–son concept, diesel-electric hybrid drives, hydrodynamic optimisations, fuel-water emulsion injection, SCR catalysts, diesel particulate filters, gas-to-liquid fuels (GTL) and liquefied natural gas (LNG), some of which can also be used in combination and are suitable for retrofitting existing systems. With an engine funding programme, the German Transport Ministry supports inland navigation companies in the installation and retrofitting of low-emission engines or other emission-reducing technologies. The funding rate is up to 70%.
Road freight and modal share
In road freight transport, some transport companies are proposing in part new technologies such as trolleytrucks, electric trucks or electric cargo bikes, while package delivery services are experimenting with new concepts of smart logistics. Trolleytrucks with an auxiliary battery offer the possibility of lower-emission long-distance truck transport that is also more energy-efficient than battery-powered trucks. Equipping motorways with overhead lines for heavy goods vehicles (HGVs) has the advantage that HGVs would only have to carry small batteries, as only comparatively short distances would be covered in battery-only mode. At the same time, trolleytrucks would be a cost-effective way to make freight transport climate-friendly, as the electrification of motorways, at a cost of 3 million euros/km, does not represent an excessive financial outlay.
Another option to reduce CO2 emissions and environmental problems is to shift truck traffic to freight rail and inland waterway transport. This process is also known as modal shift. The German Environment Agency gives the climate impact of transport by truck in the reference year 2020 as 126 grams of CO2 equivalents per tonne-kilometre on average (g/tkm). According to the Environment Agency, transport by freight train has a climate impact of 33 g/tkm and transport by inland waterway vessel has a climate impact of 43 g/tkm, making rail and ship significantly more climate-friendly.
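The saving implied by these averages can be computed directly. The figures are the Environment Agency's 2020 values quoted above; the one million tonne-kilometres is a hypothetical freight volume:

```python
# German Environment Agency averages for 2020, in g CO2e per tonne-km.
G_PER_TKM = {"truck": 126.0, "rail": 33.0, "waterway": 43.0}

def modal_shift_saving_t(tonne_km: float, from_mode: str, to_mode: str) -> float:
    """Tonnes of CO2 equivalents saved by shifting freight between modes."""
    return (G_PER_TKM[from_mode] - G_PER_TKM[to_mode]) * tonne_km / 1e6

# Shifting one million tonne-kilometres from road to rail or waterway:
print(f"to rail:     {modal_shift_saving_t(1e6, 'truck', 'rail'):.0f} t CO2e saved")
print(f"to waterway: {modal_shift_saving_t(1e6, 'truck', 'waterway'):.0f} t CO2e saved")
```

On these averages, shifting freight to rail cuts emissions per tonne-kilometre by roughly three quarters.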
Although the European Union and its member states strongly promote the use of inland waterways and rail in combination with truck transport, in some cases financially, only road haulage developed positively in the 2010s, while shipping and rail stagnated or declined. For 2016, the Federal Statistical Office of Germany reported a decline in transport performance of 3.7% for inland waterways, a decline of 0.5% for rail and growth of 2.8% for trucks. In 2015, with the transport volume growing by 1.1%, there was a plus of 1.9% for road, a minus of 1% for rail and a minus of 3.2% for inland waterways. Overall, 71% of transport performance is accounted for by trucks.
With growing containerization however, a combination of different modes of transport (intermodal freight transport) becomes more efficient. In so-called multimodal transport or combined transport, the truck only has to cover the last mile between the port or rail terminal and the customer. Measures to promote combined transport are, for example:
The Port of Rotterdam has set a quota for the modal share of hinterland transport modes: the truck share is to drop from 47% to 35%, while rail is to provide 20% instead of 13% in the future, and the transport performance of inland waterways is to increase from 40% to 45%.
Instead of burdening trunk roads with the transport of heavy goods such as industrial plants or components for wind turbines, German transport companies have been required since 2010 to use the electronic portal Procedural Management of Large and Heavy Goods Transport (VEMAGS) to check whether alternative transport routes such as ship and rail are available, and if not, to explain this in their application for a permit to transport the goods by road.
With the promotion of handling facilities for combined transport, the German federal government supports the shift in traffic to inland waterways and freight trains.
The Lower Rhine Chamber of Commerce and Industry, the Schifferbörse and the Development Centre for Ship Technology and Transport Systems (DST) in Duisburg jointly offer an additional training course through which apprentice forwarding and logistics clerks learn about the advantages of the alternative modes of transport, rail and inland waterway, and can thus integrate them more easily into their everyday work. Frequently, the standard curriculum covers only road freight transport, supplemented at most by sea or air freight.
See also
Energy transition
Jet fuel tax
Phase-out of fossil fuel vehicles
Urban sprawl
References
Literature
Udo Becker: Grundwissen Verkehrsökologie: Grundlagen, Handlungsfelder und Maßnahmen für die Verkehrswende. München 2016, ISBN 978-3-86581-993-2.
Andrej Cacilo: Wege zu einer nachhaltigen Mobilität: Im Spannungsfeld kultureller Werte, ökonomischer Funktionslogik und diskursrationaler Wirtschafts- und Umweltethik. 2., durchges. Aufl., Metropolis, Marburg 2021, ISBN 978-3-7316-1473-9.
Weert Canzler, Andreas Knie: Schlaue Netze – Wie die Energie- und Verkehrswende gelingt. München 2013, ISBN 978-3-86581-440-1.
Weert Canzler, Andreas Knie, Lisa Ruhrort, Christian Scherf: Erloschene Liebe? Das Auto in der Verkehrswende. Soziologische Deutungen. transcript, Bielefeld 2018, ISBN 978-3-8376-4568-2.
Hermann Knoflacher: Zurück zur Mobilität! Anstöße zum Umdenken. Ueberreuter, Wien 2013, ISBN 978-3-8000-7557-7.
Markus Hesse: Verkehrswende. ökologisch-ökonomische Perspektiven für Stadt und Region. Marburg 1993, ISBN 978-3-926570-62-8.
External links
Long article on the mobility transition in Germany
Energy policy
Environmental policy
Sustainability
Transport and the environment
Urban planning | Mobility transition | Physics,Engineering,Environmental_science | 8,241 |
9,354,534 | https://en.wikipedia.org/wiki/Wollaston%20wire | Wollaston wire is a very fine (c. 0.001 mm thick) platinum wire clad in silver and used in electrical instruments. For most uses, the silver cladding is etched away by acid to expose the platinum core.
History
The wire is named after its inventor, William Hyde Wollaston, who first produced it in England in the early 19th century. Platinum wire is drawn through successively smaller dies until it is about in diameter. It is then embedded in the middle of a silver wire having a diameter of about . This composite wire is then drawn until the silver wire has a diameter of about , causing the embedded platinum wire to be reduced by the same 50:1 ratio to a final diameter of . Removal of the silver coating with an acid bath leaves the fine platinum wire as a product of the process.
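The drawing process scales the platinum core and the silver cladding by the same factor, so the final core diameter follows directly from the 50:1 ratio stated above. A minimal sketch, in which the 0.05 mm starting diameter is a hypothetical illustration consistent with the c. 0.001 mm finished product:

```python
def drawn_diameter_mm(initial_mm: float, ratio: float = 50.0) -> float:
    """Diameter after drawing the composite wire down by the given ratio."""
    return initial_mm / ratio

# A platinum core of 0.05 mm drawn at 50:1 ends near the 0.001 mm
# (one micrometre) thickness quoted for finished Wollaston wire.
print(f"{drawn_diameter_mm(0.05):.4f} mm")
```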
Uses
Wollaston wire was used in early radio detectors known as electrolytic detectors and the hot wire barretter. Other uses include suspension of delicate devices, sensing of temperature, and sensitive electrical power measurements.
It continues to be used for the fastest-responding hot-wire anemometers.
References
History of radio
Radio electronics
Wire | Wollaston wire | Engineering | 233 |
58,622,279 | https://en.wikipedia.org/wiki/Aspergillus%20udagawae | Aspergillus udagawae is a species of fungus in the genus Aspergillus. It is from the Fumigati section. Several fungi from this section produce heat-resistant ascospores, and the isolates from this section are frequently obtained from locations where natural fires have previously occurred. The species was first described in 1995. It has been reported to produce fumagillin, fumigaclavine A and C, fumigatins, fumiquinazolin F or G, helvolic acid, monomethylsulochrin, pyripyropene A, E, trypacidin, tryptoquivalines, and tryptoquivalones.
Growth and morphology
A. udagawae has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
udagawae
Fungi described in 1995
Fungus species | Aspergillus udagawae | Biology | 217 |
74,121,818 | https://en.wikipedia.org/wiki/Lifting%20boss | Lifting bosses or handling bosses are protrusions intentionally left on stones by masons to facilitate maneuvering the blocks with ropes and levers.
They are an important feature of ancient and classical construction, and were often not cut away, despite having fulfilled their purpose. Sometimes this was the result of a cost-saving measure or a construction halt. Other times bosses were left as a stylistic element, and even if dressed back, a remnant of them was kept to make their existence obvious.
See also
Boss (architecture)
Bossage
References
Further reading
Stonemasonry
Construction
Architecture | Lifting boss | Engineering | 115 |
21,566,557 | https://en.wikipedia.org/wiki/Voluntary%20action | Voluntary action is an anticipated goal-oriented movement. The concept of voluntary action arises in many areas of study, including cognitive psychology, operant conditioning, philosophy, neurology, criminology, and others. Additionally, voluntary action has various meanings depending on the context in which it is used. For example, operant psychology uses the term to refer to the actions that are modifiable by their consequences. A more cognitive account may refer to voluntary action as involving the identification of a desired outcome together with the action necessary to achieve that outcome. Voluntary action is often associated with consciousness and will. For example, Psychologist Charles Nuckolls holds that we control our voluntary behavior, and that it is not known how we come to plan what actions will be executed. Many psychologists, notably Tolman, apply the concept of voluntary action to both animal and human behavior, raising the issue of animal consciousness and its role in voluntary action.
History: William James on voluntary action
The concept of voluntary action was discussed by William James in his influential book The Principles of Psychology (1890). James states that for an act to be classified as voluntary, it must be foreseen, as opposed to involuntary action, which occurs without foresight. James suggests, for example, that the idea of a particular movement is a voluntary action; however, the movement itself, once the idea has been formed, is involuntary, provided the action requires no further thought. Voluntary action arises because humans and animals wish to fulfill desires. In order to fulfill these desires, humans and animals form goals, and voluntary actions are undertaken to achieve these goals. Some of the terms that James used to describe voluntary action – such as desire – are now outdated, and his introspective approach is out of favor, but many of his ideas still find a place in current thinking.
See also
Involuntary action
Cognitive psychology
William James
The Principles of Psychology
References
Animal physiology
Cognitive psychology | Voluntary action | Biology | 388 |
69,229,950 | https://en.wikipedia.org/wiki/Tramp%20species | In ecology, a tramp species is an organism that has been spread globally by human activities. The term was coined by William Morton Wheeler in the bulletin of the American Museum of Natural History in 1906, used to describe ants that “have made their way as well known tramps or stow-aways [sic] to many islands". The term has since widened to include non-ant organisms, but remains most popular in myrmecology. Tramp species have been noted in multiple phyla spanning both animal and plant kingdoms, including but not limited to arthropods, mollusca, bryophytes, and pteridophytes. The term "tramp species" was popularized and given a more set definition by Luc Passera in his chapter of David F William's 1994 book Exotic Ants: Biology, Impact, And Control Of Introduced Species.
Definition
Tramp species are organisms that have stable populations outside their native ranges. They are closely associated with human activities. They are disturbance specialists, characterized by their synanthropic associations with humans, as their primary mode of expansion is human-mediated dispersal. That being said, tramp species are not limited to anthropogenically disturbed habitats; they have the potential to invade pristine habitats, especially once established in a new area. For example, Anoplolepis gracilipes was able to invade undisturbed forest ecosystems in Australia after being introduced and establishing a population in northeast Arnhem Land. It is important to note that while some tramp species are invasive, the majority of them are not. Some can exist alongside native species without competing with them, simply occupying unfilled niches, as is the case with some populations of Tapinoma melanocephalum and Monomorium pharaonis, which rarely interfere with native species outside human settlement areas.
Ants
Ants must meet a more rigid list of criteria to be considered "true" tramp species. The most cited body of work outlining these traits comes from Luc Passera. His primary and most important criterion is that the distribution of the species must be linked to human activities, what he refers to as "anthropophilic tendency". He also lists the following traits as being likely common to all tramp species: small size, monomorphism of worker ants (worker ants having only one phenotype), high rates of polygyny, unicoloniality, strong interspecific aggressiveness, worker ant sterility, and colony reproduction by budding. These traits may appear with more or less intensity among considered tramp species, and in fact, the literature does not currently require a tramp species to possess every single one of these attributes. Ant tramp species in particular can be ecological indicators of an ecosystem's susceptibility to invasion or of its ecological instability.
Causes and distribution
All tramp species are distributed globally as a result of human transportation. As such, they are almost always present in urban or human-settled environments, and have colonizing mechanisms that are well adapted to human cohabitation, referred to as possessing "anthropogenically reinforced dispersal biology". The globalization of trade and travel has contributed significantly to the dispersal of tramp species worldwide. Trade activities involving the importation and exportation of cargoes on ships (often containing plants, soil, wood and other biological media) are noted as an especially important method of introduction. These often repeated introductions (as shipments will oftentimes come from the same place) fortify the genetic variability and initial population sizes of newly transplanted tramp species, which facilitates their establishment in novel environments. After their human-mediated introductions, tramp species can also benefit from human disturbance to the environment. Anthropogenic forces (such as construction and agriculture) can dramatically impact local fauna and flora, weakening the environment and making the area more susceptible to the encroachment of tramp species. This phenomenon is noted as a particularly tough issue in tropical Asia, where the monocropping practices of local rubber plantations have decimated indigenous species assemblages and habitat structures, allowing the establishment of many problematic tramp species. Another example is the Thousand Islands Archipelago in Indonesia, where the small tropical islands are especially vulnerable to human disturbance, which has facilitated the establishment of multiple tramp species.
The range expansion of tramp ants is projected to increase with weather pattern changes due to climate change. As many tramp species are well adapted to disturbances in their native habitat, they are particularly resilient to large-scale, unpredictable weather events (such as floods, wildfires and monsoons), which are set to increase in frequency as anthropogenic activity continues to affect global systems.
Effects on local environments
Tramp species can have similar effects to invasive species, and in some literature the term "tramp species" is used as a synonym for invasive. As such, they can outcompete and displace local fauna, decreasing species richness. They can also have direct impacts on human health, as is the case with Solenopsis geminata and Pachycondyla senaarensis. Both of these venomous species are known to sting humans, often causing severe anaphylactic reactions; this has made them recognized public health hazards in the regions where they are found. Tramp species can also be nuisance pests, damaging housing structures and crops. However, it is important to note that tramp species are not always invasive, and can cohabitate without harming local environments or species assemblages.
Control and eradication
As tramp species are so diverse in their ecology, there is no universal protocol to prevent their encroachment into new territories. However, certain strategies can be employed to mitigate tramp species. In some environments, maintaining the diversity of local species assemblages can deter certain tramp species. Currently, there is a deficiency in our ability to identify potential new tramp species quickly - a phenomenon dubbed "taxonomic impediment", a delay in identifying invasive species threats. As such, it is essential to improve identification tools for preventative action against tramp species. Interdepartmental cooperation for pest management can be very effective in tramp species management, as a collaborative effort between affected stakeholders increases the likelihood of success in mitigation. Direct pest management efforts have included baits with insect growth regulators to sterilize colonies, with varying degrees of success. One method that can be successful against urban infestations of tramp ants specifically (depending on their biology) in temperate zones is to shut off heat sources for two weeks or more, as many are heat-adapted species.
List of tramp species
Arthropods
Ants
Anoplolepis gracilipes
Brachyponera sennaarensis
Cardiocondyla emeryi
Cardiocondyla kagutsuchi
Cardiocondyla nuda
Cardiocondyla obscurior
Cardiocondyla wroughtonii
Hypoponera punctatissima
Iridomyrmex anceps
Lasius neglectus
Linepithema humile
Monomorium destructor
Monomorium floricola
Monomorium indicum
Monomorium monomorium
Monomorium pharaonis
Nylanderia spp.*
Paratrechina flavipes
Paratrechina jaegerskioeldi
Paratrechina longicornis
Pheidole fervens
Pheidole megacephala
Pheidole teneriffana
Solenopsis geminata
Solenopsis invicta
Tetramorium caespitum
Tetramorium bicarinatum
Tetramorium lanuginosum
Tetramorium pacificum
Tetramorium simillimum
Tapinoma melanocephalum
Tapinoma simrothi
Technomyrmex albipes
Technomyrmex brunneus
Trichomyrmex destructor
Wasmannia auropunctata
Millipedes
Chondromorpha xanthotricha
Glyphiulus granulatus
Orthomorpha coarctata
Oxidus gracilis
Pseudospirobolellus avernus
Trigoniulus corallinus
Silverfish
Ctenolepisma longicaudata
Termites
Cryptotermes sp.
Wasps
Calliscelio elegans
Platygastroidea superfamily
Mollusca
Land snails
Bradybaena similaris
Slugs
Deroceras panormitanum
Deroceras invadens
Plants
Bryophytes
Diplasiolejeunea ingekarolae
Daltonia marginata
Daltonia splachnoides
Pteridophytes
Nephrolepis biserrata
Williams and Lucky 2020 provide a thorough listing of all known Nylanderia species with established populations outside their native ranges.
See also
Lists of invasive species
Supertramp (ecology)
Climate change and invasive species
Attribution of recent climate change
References
Introduced species
Ecology terminology | Tramp species | Biology | 1,848 |
57,569,173 | https://en.wikipedia.org/wiki/Kim%20Jelfs | Kim E. Jelfs is a computational chemist based at Imperial College London who was one of the recipients of the Harrison-Meldola Memorial Prizes in 2018. She develops software to predict the structures and properties of molecular systems for renewable energy.
Early life and education
Jelfs studied chemistry at University College London. For her final year project, Jelfs worked at the Royal Institution. She earned her PhD in 2010, working with Ben Slater on modelling the growth of zeolitic materials.
Research and career
After completing her PhD Jelfs joined the University of Barcelona, working with Stefan Bromley. She moved to the University of Liverpool, working as a postdoctoral researcher with Matthew Rosseinsky and Andrew Ian Cooper. At the University of Liverpool Jelfs characterised the structure of porous materials. She was funded by an Engineering and Physical Sciences Research Council Programme Grant.
In 2013 she joined Imperial College London as a Royal Society University Research Fellow. In 2015 she was awarded a European Research Council Starting Grant, which provides €1.5 million of funding for five years of materials discovery. Her research considers porous molecules, organic small molecules and polymers. She uses computational models to predict the relationships between structure and properties. The models can also be used to predict the properties of amorphous frameworks and porous molecules. Her group identified the 20 most probable topologies for porous cage molecules, which can be synthesised through dynamic covalent chemistry.
In 2018 Jelfs was awarded the Harrison-Meldola Memorial Prize from the Royal Society of Chemistry. She was also awarded an Imperial College London President's Award for Outstanding Early Career Research. In 2019, she was awarded a Philip Leverhulme Prize in Chemistry.
References
Computational chemists
Living people
Year of birth missing (living people)
49,694,655 | https://en.wikipedia.org/wiki/Lanz%20Bulldog%20D%209506 | The Lanz Bulldog D 9506 is a tractor of the HR 8 series, produced by Heinrich Lanz AG in Mannheim from 1934 to 1955, with a production stop in 1945. In total, 3817 units were produced. The tractor was sold under the brand name Ackerluft (field-air). The Ursus C-45, produced in Poland from 1947 to 1959, was a copy of the D 9506.
Description
The D 9506 utilises a frameless block construction. It has a rear live axle and a dead front beam axle; the front axle was available with optional leaf springs. The tractor has air-filled tyres. The D 9506 does not have a lockable differential. The gearbox is a manual 3-speed Lanz gearbox with a reverse gear and an additional range, giving 6 forward gears and 2 reverse gears in total. The minimum speed is 3.3 km/h in first gear; the maximum speed is 16.7 km/h in sixth gear. The drum brakes at the rear wheels are foot-operated, while the handbrake locks the gearbox.
The standard Lanz hot-bulb engine with a displacement of 10.3 L was used; it has a thermosiphon cooler. Compared with the predecessor series HR 6 and HR 7, the engine has a better speed governor, and the rated engine speed was increased from 540 min−1 to 630 min−1. Many sorts of diesel oil can be used as fuel. The D 9506 has an electrical system. If a starter motor is fitted, the lead–acid battery has a capacity of 94 Ah; without a starter motor, the capacity is 75 Ah. The Lanz factory offered additional accessories, such as a cab or fenders.
Technical data
References
Tractors | Lanz Bulldog D 9506 | Engineering | 354 |
67,512,604 | https://en.wikipedia.org/wiki/Azumolene | Azumolene is an experimental drug which is a derivative of dantrolene. In animal studies, azumolene showed similar efficacy to dantrolene at controlling symptoms of malignant hyperthermia but with better water solubility and lower toxicity, albeit with lower potency.
References
Furans
Hydantoins
Muscle relaxants
4-Bromophenyl compounds | Azumolene | Chemistry | 78 |
33,329,948 | https://en.wikipedia.org/wiki/Transport%20coefficient | A transport coefficient measures how rapidly a perturbed system returns to equilibrium.
The transport coefficients occur in transport phenomena with transport laws of the form

$J_k = \gamma_k X_k$

where:
$J_k$ is a flux of the property $k$
$\gamma_k$ the transport coefficient of this property $k$
$X_k$, the gradient force which acts on the property $k$.
Transport coefficients can be expressed via a Green–Kubo relation:

$\gamma = \int_0^\infty \langle \dot{A}(t)\, \dot{A}(0) \rangle \, \mathrm{d}t$

where $A$ is an observable occurring in a perturbed Hamiltonian, $\langle \cdot \rangle$ is an ensemble average and the dot above the $A$ denotes the time derivative.
For times $t$ greater than the correlation time of the fluctuations of the observable, the transport coefficient obeys a generalized Einstein relation:

$2 t \gamma = \langle |A(t) - A(0)|^2 \rangle$
In general a transport coefficient is a tensor.
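As an illustration, the Einstein and Green–Kubo routes to a transport coefficient can be compared numerically for the simplest possible case: a one-dimensional random walk with uncorrelated unit steps, for which the diffusion constant is exactly D = Δx²/(2Δt) = 0.5. The following Python sketch is not from the article's sources; all parameter values are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, n_walkers = 1.0, 4000, 400

# Uncorrelated +/-1 steps: the velocity autocorrelation is a delta at lag 0,
# so the exact diffusion constant is D = <v^2> * dt / 2 = 0.5.
v = rng.choice([-1.0, 1.0], size=(n_steps, n_walkers)) / dt
x = np.cumsum(v * dt, axis=0)  # trajectories starting from x(0) = 0

# Einstein relation (1D): D = <|x(t) - x(0)|^2> / (2 t) for large t.
t_total = n_steps * dt
D_einstein = np.mean(x[-1] ** 2) / (2 * t_total)

# Green-Kubo (discrete trapezoid): D = dt * (C[0]/2 + sum_{k>=1} C[k]),
# where C[k] = <v(0) v(k dt)> is the velocity autocorrelation function.
max_lag = 20
C = np.empty(max_lag)
for k in range(max_lag):
    C[k] = np.mean(v[: n_steps - k] * v[k:])
D_green_kubo = dt * (C[0] / 2 + C[1:].sum())

print(D_einstein, D_green_kubo)  # both close to the exact value 0.5
```

Both estimators converge to the same value, reflecting the equivalence of the Green–Kubo and Einstein expressions for times longer than the correlation time.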
Examples
Diffusion constant $D$, relates the flux of particles with the negative gradient of the concentration (see Fick's laws of diffusion)
Thermal conductivity (see Fourier's law)
Ionic conductivity
Mass transport coefficient
Shear viscosity $\eta$, where $\tau$ is the viscous stress tensor (see Newtonian fluid)
Electrical conductivity
Transport coefficients of higher order
For strong gradients, the transport equation typically has to be modified with higher-order terms (and higher-order transport coefficients).
See also
Linear response theory
Onsager reciprocal relations
References
Thermodynamics
Statistical mechanics | Transport coefficient | Physics,Chemistry,Mathematics | 240 |
10,163,607 | https://en.wikipedia.org/wiki/Work-up | In chemistry, work-up refers to the series of manipulations required to isolate and purify the product(s) of a chemical reaction. The term is used colloquially to refer to these manipulations, which may include:
deactivating any unreacted reagents by quenching a reaction.
cooling the reaction mixture or adding an antisolvent to induce precipitation, and collecting or removing the solids by filtration, decantation, or centrifugation.
changing the protonation state of the products or impurities by adding an acid or base.
separating the reaction mixture into organic and aqueous layers by liquid-liquid extraction.
removal of solvents by evaporation.
purification by chromatography, distillation or recrystallization.
The work-up steps required for a given chemical reaction may require one or more of these manipulations. Work-up steps are not always explicitly shown in reaction schemes. Written experimental procedures will describe work-up steps but will usually not formally refer to them as a work-up.
Examples
Isolation of benzoic acid
The Grignard reaction between phenylmagnesium bromide (1) and carbon dioxide in the form of dry ice gives the conjugate base of benzoic acid (2). The desired product, benzoic acid (3), is obtained by the following work-up:
The reaction mixture containing the Grignard reagent is allowed to warm to room temperature in a water bath to allow excess dry ice to evaporate.
Any remaining Grignard reagent is quenched by the addition of water.
Dilute hydrochloric acid is added to the reaction mixture to protonate the benzoate salts, as well as to dissolve the magnesium salts. White solids of impure benzoic acid are obtained.
The benzoic acid is decanted to remove the aqueous solution of impurities, more water is added, and the mixture is brought to a boil, with additional water added as needed to give a homogeneous solution.
The solution is allowed to cool slowly to room temperature, then in an ice bath to recrystallize benzoic acid.
The recrystallized benzoic acid crystals are collected on a Buchner funnel and are allowed to air-dry to give pure benzoic acid.
Dehydration of 4-methylcyclohexanol
This dehydration reaction produces the desired alkene (3) from an alcohol (1). The reaction is performed in a distillation apparatus so the formed alkene product can be distilled off and collected as the reaction proceeds. The water produced by the reaction as well as some acid will co-distill, giving a distillate mixture (2). The product is isolated from the mixture by the following work-up:
A concentrated solution of sodium chloride in water, known as a brine solution, is added to the mixture and the layers are allowed to separate. The brine is used to remove any acid or water from the organic layer. In this example the organic layer is the product, which is a liquid at room temperature.
The bottom aqueous layer is removed with a pipette and discarded.
The top layer is transferred to an Erlenmeyer flask where it is treated with anhydrous sodium sulfate to remove any remaining water.
The sodium sulfate is filtered out leaving the pure liquid product.
Synthesis of an amide
The reaction between a secondary amine (1) and an acyl chloride (2) yields the desired amide (4) as shown below. The acyl chloride is added slowly to a solution of the amine and triethylamine in dichloromethane at 0 °C. The reaction is allowed to warm to room temperature and is stirred for 14 hours. The following manipulations are then performed on the crude reaction mixture (3) to isolate the desired product:
A concentrated solution of sodium bicarbonate is added to the reaction mixture. This will promote the migration of impurities and byproducts to the aqueous layer and leave the product in the dichloromethane (organic layer). The aqueous and organic layers are allowed to separate. This process is typically performed in a separatory funnel.
The aqueous layer is collected and extracted once with dichloromethane.
The organic phase is collected and dried with anhydrous sodium sulfate.
The solid is filtered off and the organic layer is concentrated under reduced pressure to yield the desired amide.
Further purification is achieved by flash column chromatography.
References
Chemical reactions | Work-up | Chemistry | 947 |
3,126,072 | https://en.wikipedia.org/wiki/Serotiny | Serotiny in botany simply means 'following' or 'later'.
In the case of serotinous flowers, it means flowers which grow following the growth of leaves, or even more simply, flowering later in the season than is customary with allied species. Having serotinous leaves is also possible; these follow the flowering.
Serotiny is contrasted with coetany. Coetaneous flowers or leaves appear together with each other.
In the case of serotinous fruit, the term is used in the more general sense of plants that release their seed over a long period of time, irrespective of whether release is spontaneous; in this sense the term is synonymous with bradyspory.
In the case of certain Australian, North American, South African or Californian plants which grow in areas subjected to regular wildfires, serotinous fruit can also mean an ecological adaptation exhibited by some seed plants, in which seed release occurs in response to an environmental trigger, rather than spontaneously at seed maturation. The most common and best studied trigger is fire, and the term serotiny is used to refer to this specific case.
Possible triggers include:
Death of the parent plant or branch (necriscence)
Wetting (hygriscence)
Warming by the sun (soliscence)
Drying atmospheric conditions (xyriscence)
Fire (pyriscence) – this is the most common and best studied case, and the term serotiny is often used where pyriscence is intended.
Fire followed by wetting (pyrohydriscence)
Some plants may respond to more than one of these triggers. For example, Pinus halepensis exhibits primarily fire-mediated serotiny, but responds weakly to drying atmospheric conditions. Similarly, Sierra Nevada sequoias and some Banksia species are strongly serotinous with respect to fire, but also release some seed in response to plant or branch death.
Serotiny can occur in various degrees. Plants that retain all of their seed indefinitely in the absence of a trigger event are strongly serotinous. Plants that eventually release some of their seed spontaneously in the absence of a trigger are weakly serotinous. Finally, some plants release all of their seed spontaneously after a period of seed storage, but the occurrence of a trigger event curtails the seed storage period, causing all seed to be released immediately; such plants are essentially non-serotinous, but may be termed facultatively serotinous.
Fire-mediated serotiny
In the southern hemisphere, fire-mediated serotiny is found in angiosperms in fire-prone parts of Australia and South Africa. It is extremely common in the Proteaceae of these areas, and also occurs in other taxa, such as Eucalyptus (Myrtaceae) and even exceptionally in Erica sessiliflora (Ericaceae). In the northern hemisphere, it is found in a range of conifer taxa, including species of Pinus, Cupressus, Sequoiadendron, and more rarely Picea.
Since even non-serotinous cones and woody fruits can provide protection from the heat of fire, the key adaptation of fire-induced serotiny is seed storage in a canopy seed bank, which can be released by fire. The fire-release mechanism is commonly a resin that seals the fruit or cone scales shut, but which melts when heated. This mechanism is refined in some Banksia by the presence inside the follicle of a winged seed separator which blocks the opening, preventing the seed from falling out. Thus, the follicles open after fire, but seed release does not occur. As the cone dries, wetting by rain or humidity causes the cone scales to expand and reflex, promoting seed release. The seed separator thus acts as a lever against the seeds, gradually prying them out of the follicle over the course of one or more wet-dry cycles. The effect of this adaptation is to ensure that seed release occurs not in response to fire, but in response to the onset of rains following fire.
The relative importance of serotiny can vary among populations of the same plant species. For example, North American populations of lodgepole pine (Pinus contorta) can vary from being highly serotinous to having no serotiny at all, opening annually to release seed. Different levels of cone serotiny have been linked to variations in the local fire regime: areas that experience more frequent crown-fire tend to have high rates of serotiny, while areas with infrequent crown-fire have low levels of serotiny. Additionally, herbivory of lodgepole pines can make fire-mediated serotiny less advantageous in a population. Red squirrels (Sciurus vulgaris) and red crossbills (Loxia curvirostra) will eat seeds, and so serotinous cones, which last in the canopy longer, are more likely to be chosen. Serotiny occurs less frequently in areas where this seed predation is common.
Pyriscence can be understood as an adaptation to an environment in which fires are regular and in which post-fire environments offer the best germination and seedling survival rates. In Australia, for example, fire-mediated serotiny occurs in areas that not only are prone to regular fires but also possess oligotrophic soils and a seasonally dry climate. This results in intense competition for nutrients and moisture, leading to very low seedling survival rates. The passage of fire, however, reduces competition by clearing out undergrowth, and results in an ash bed that temporarily increases soil nutrition; thus the survival rates of post-fire seedlings is greatly increased. Furthermore, releasing a large number of seeds at once, rather than gradually, increases the possibility that some of those seeds will escape predation. Similar pressures apply in Northern Hemisphere conifer forests, but in this case there is the further issue of allelopathic leaf litter, which suppresses seed germination. Fire clears out this litter, eliminating this obstacle to germination.
Evolution
Serotinous adaptations occur in at least 530 species in 40 genera, in multiple (paraphyletic) lineages. Serotiny likely evolved separately in these species, but may in some cases have been lost by the related non-serotinous species.
In the genus Pinus, serotiny likely evolved because of the atmospheric conditions during the Cretaceous period. The atmosphere during the Cretaceous had higher oxygen and carbon dioxide levels than our atmosphere. Fire occurred more frequently than it does currently, and plant growth was high enough to create an abundance of flammable material. Many Pinus species adapted to this fire-prone environment with serotinous pine cones.
A set of conditions must be met in order for long-term seed storage to be evolutionarily viable for a plant:
The plant must be phylogenetically able (pre-adapted) to develop the necessary reproductive structures
The seeds must remain viable until cued to release
Seed release must be cued by a trigger that indicates environmental conditions that are favorable to germination,
The cue must occur on an average timescale that is within the reproductive lifespan of the plant
The plant must have the capacity and opportunity to produce enough seeds prior to release to ensure population replacement
Serotiny must be heritable
References
Plant morphology
Plant physiology | Serotiny | Biology | 1,519 |
1,038,844 | https://en.wikipedia.org/wiki/Spring%20bloom | The spring bloom is a strong increase in phytoplankton abundance (i.e. stock) that typically occurs in the early spring and lasts until late spring or early summer. This seasonal event is characteristic of temperate North Atlantic, sub-polar, and coastal waters. Phytoplankton blooms occur when growth exceeds losses, however there is no universally accepted definition of the magnitude of change or the threshold of abundance that constitutes a bloom. The magnitude, spatial extent and duration of a bloom depends on a variety of abiotic and biotic factors. Abiotic factors include light availability, nutrients, temperature, and physical processes that influence light availability, and biotic factors include grazing, viral lysis, and phytoplankton physiology. The factors that lead to bloom initiation are still actively debated (see Critical depth).
Classical mechanism
In the spring, more light becomes available and stratification of the water column occurs as increasing temperatures warm the surface waters (referred to as thermal stratification). As a result, vertical mixing is inhibited and phytoplankton and nutrients are entrained in the euphotic zone. This creates a comparatively high nutrient and high light environment that allows rapid phytoplankton growth.
Along with thermal stratification, spring blooms can be triggered by salinity stratification due to freshwater input, from sources such as high river runoff. This type of stratification is normally limited to coastal areas and estuaries, including Chesapeake Bay. Freshwater influences primary productivity in two ways. First, because freshwater is less dense, it rests on top of seawater and creates a stratified water column. Second, freshwater often carries nutrients that phytoplankton need to carry out processes, including photosynthesis.
Rapid increases in phytoplankton growth, which typically occur during the spring bloom, arise because phytoplankton can reproduce rapidly under optimal growth conditions (i.e., high nutrient levels, ideal light and temperature, and minimal losses from grazing and vertical mixing). In terms of reproduction, many species of phytoplankton can double at least once per day, allowing for exponential increases in phytoplankton stock size. For example, the stock size of a population that doubles once per day will increase 1000-fold in just 10 days. In addition, there is a lag in the grazing response of herbivorous zooplankton at the start of blooms, which minimizes phytoplankton losses. This lag occurs because winter zooplankton abundance is low and many zooplankton, such as copepods, have longer generation times than phytoplankton.
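The 1000-fold figure follows directly from exponential doubling: a stock that doubles once per day grows by a factor of 2^10 = 1024 over 10 days. A quick check in Python (the initial stock size is an arbitrary illustrative value):

```python
# Exponential growth of a phytoplankton stock that doubles once per day.
n0 = 1.0                   # initial stock size (arbitrary units)
days = 10
growth_factor = 2 ** days  # one doubling per day for 10 days
n = n0 * growth_factor
print(growth_factor)       # 1024, i.e. roughly a 1000-fold increase
```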
Spring blooms typically last until late spring or early summer, at which time the bloom collapses due to nutrient depletion in the stratified water column and increased grazing pressure by zooplankton. The most limiting nutrient in the marine environment is typically nitrogen (N). This is because most organisms are unable to fix atmospheric nitrogen into usable forms (i.e. ammonium, nitrite, or nitrate). However, with the exception of coastal waters, it can be argued, that iron (Fe) is the most limiting nutrient because it is required to fix nitrogen, but is only available in small quantities in the marine environment, coming from dust storms and leaching from rocks. Phosphorus can also be limiting, particularly in freshwater environments and tropical coastal regions.
During winter, wind-driven turbulence and cooling water temperatures break down the stratified water column formed during the summer. This breakdown allows vertical mixing of the water column and replenishes nutrients from deep water to the surface waters and the rest of the euphotic zone. However, vertical mixing also causes high losses, as phytoplankton are carried below the euphotic zone (so their respiration exceeds primary production). In addition, reduced illumination (intensity and daily duration) during winter limits growth rates.
Alternative mechanisms
Historically, blooms have been explained by Sverdrup's critical depth hypothesis, which says blooms are caused by shoaling of the mixed layer. Similarly, Winder and Cloern (2010) described spring blooms as a response to increasing temperature and light availability. However, new explanations have been offered recently, including that blooms occur due to:
Coupling between phytoplankton growth and zooplankton grazing.
The onset of near surface stratification in the spring.
Mixing of the water column, rather than stratification
Low turbulence
Increasing light intensity (in shallow water environments).
Eddies (see ‘The role of eddies in the onset of the North Atlantic spring bloom’)
The role of eddies in the onset of the North Atlantic spring bloom
A 2012 study showed that the onset of the North Atlantic bloom is due to eddies. Eddies, or circular currents of water, are ubiquitous throughout the world’s ocean and play an important role in ocean mixing. In the North Atlantic, surface water is colder and denser farther north and warmer and lighter in the south. This sets up a horizontal density gradient. Earth’s rotation maintains this gradient by preventing the dense water from slipping underneath the light water. Eddies, however, can mix dense water underneath the lighter water, setting up a vertical stratification that limits the depth of vertical mixing (leading to a shallower mixed layer).
Mechanisms that limit the depth of vertical mixing can be referred to as ‘restratifying mechanisms’ (e.g. eddies, solar heating), which compete against mechanisms that increase vertical mixing (and deepen the mixed layer). This includes convection and down-front winds. Convection is strongest in the winter when surface cooling is strongest. Convection increases the depth of vertical mixing, which can move phytoplankton away from the light they need to grow.
When convection weakens and wind switches direction in the spring, the re-stratifying effect of eddies becomes dominant. Phytoplankton are trapped closer to the surface, increasing their exposure to light. This spurs phytoplankton growth, leading to the onset of the North Atlantic spring bloom 20-30 days earlier than would occur with thermal stratification alone.
Northward progression
At greater latitudes, spring blooms take place later in the year. This northward progression is because spring occurs later, delaying thermal stratification and increases in illumination that promote blooms. A study by Wolf and Woods (1988) showed evidence that spring blooms follow the northward migration of the 12 °C isotherm, suggesting that blooms may be controlled by temperature limitations, in addition to stratification.
At high latitudes, the shorter warm season commonly results in one mid-summer bloom. These blooms tend to be more intense than spring blooms of temperate areas because there is a longer duration of daylight for photosynthesis to take place. Also, grazing pressure tends to be lower because the generally cooler temperatures at higher latitudes slow zooplankton metabolism.
Species succession
The spring bloom often consists of a series of sequential blooms of different phytoplankton species. Succession occurs because different species have optimal nutrient uptake at different ambient concentrations and reach their growth peaks at different times. Shifts in the dominant phytoplankton species are likely caused by biological and physical (i.e. environmental) factors. For instance, diatom growth rate becomes limited when the supply of silicate is depleted. Since silicate is not required by other phytoplankton, such as dinoflagellates, their growth rates continue to increase.
For example, in oceanic environments, diatoms (cells diameter greater than 10 to 70 μm or larger) typically dominate first because they are capable of growing faster. Once silicate is depleted in the environment, diatoms are succeeded by smaller dinoflagellates. This scenario has been observed in Rhode Island, as well as Massachusetts and Cape Cod Bay. By the end of a spring bloom, when most nutrients have been depleted, the majority of the total phytoplankton biomass is very small phytoplankton, known as ultraphytoplankton (cell diameter <5 to 10 μm). Ultraphytoplankton can sustain low, but constant stocks, in nutrient depleted environments because they have a larger surface area to volume ratio, which offers a much more effective rate of diffusion. The types of phytoplankton comprising a bloom can be determined by examination of the varying photosynthetic pigments found in chloroplasts of each species.
Variability and the influence of climate change
Variability in the patterns (e.g., timing of onset, duration, magnitude, position, and spatial extent) of annual spring bloom events has been well documented. These variations occur due to fluctuations in environmental conditions, such as wind intensity, temperature, freshwater input, and light. Consequently, spring bloom patterns are likely sensitive to global climate change.
Links have been found between temperature and spring bloom patterns. For example, several studies have reported a correlation between earlier spring bloom onset and temperature increases over time. Furthermore, in Long Island Sound and the Gulf of Maine, blooms begin later in the year, are more productive, and last longer during colder years, while years that are warmer exhibit earlier, shorter blooms of greater magnitude.
Temperature may also regulate bloom sizes. In Narragansett Bay, Rhode Island, a study by Durbin et al. (1992) indicated that a 2 °C increase in water temperature resulted in a three-week shift in the maturation of the copepod, Acartia hudsonica, which could significantly increase zooplankton grazing intensity. Oviatt et al. (2002) noted a reduction in spring bloom intensity and duration in years when winter water temperatures were warmer. Oviatt et al. suggested that the reduction was due to increased grazing pressure, which could potentially become intense enough to prevent spring blooms from occurring altogether.
Miller and Harding (2007) suggested climate change (influencing winter weather patterns and freshwater influxes) was responsible for shifts in spring bloom patterns in the Chesapeake Bay. They found that during warm, wet years (as opposed to cool, dry years), the spatial extent of blooms was larger and was positioned more seaward. Also, during these same years, biomass was higher and peak biomass occurred later in the spring.
See also
Algal bloom
Critical depth
Gordon Arthur Riley
Plankton
References
Aquatic ecology
Biological oceanography
Marine biology
Oceanography
Fisheries science
Planktology
Barents Sea
Algal blooms
Prof Alexander Boyd Stewart CBE FRSE FRIC (1904–1981) was a 20th century Scottish organic chemist and agriculturalist. He was President of the British Society of Soil Science.
Life
He was born on 3 November 1904 at Tarland in Aberdeenshire, the son of Donald Stewart, a farmer. He was educated at Robert Gordon's College in Aberdeen, then studied science at Aberdeen University, graduating MA in 1925 and BSc in 1928. He continued as a postgraduate, gaining his doctorate (PhD) in 1932, and immediately obtained a post as Head of the Soil Fertility Department at the Macaulay Institute. Remaining at the institute, he became its deputy director in 1954.
In 1955 he was elected a Fellow of the Royal Society of Edinburgh. His proposers were Donald McArthur, David Cuthbertson, A. T. Phillipson, Thomas Phemister, James Robert Matthews and Murray Macgregor.
In 1958 he left to become Professor of Agriculture at Aberdeen University. He was created a Commander of the Order of the British Empire (CBE) in 1962. He returned to the Macaulay Institute in 1964 as its director.
He retired in 1968 and died at his home, 3 Woodburn Place (a 1980s bungalow) in Aberdeen on 27 February 1981.
Family
In 1939 he married Alice F. Bowman.
Publications
Soil Fertility Investigations in India (1946)
Agriculture in the University of Aberdeen (1959)
References
1904 births
1981 deaths
British organic chemists
People from Tarland
People educated at Robert Gordon's College
Scottish agriculturalists
Alumni of the University of Aberdeen
Academics of the University of Aberdeen
Commanders of the Order of the British Empire
Fellows of the Royal Society of Edinburgh
Undecane (also known as hendecane) is a liquid alkane hydrocarbon with the chemical formula CH3(CH2)9CH3. It is used as a mild sex attractant for various types of moths and cockroaches, and an alert signal for a variety of ants. It has 159 isomers.
Undecane may also be used as an internal standard in gas chromatography when working with other hydrocarbons. Since the boiling point of undecane (196 °C) is well known, it may be used as a comparison for retention times in a gas chromatograph for molecules whose structure has been freshly elucidated. For example, if one is working with a 50 m crosslinked methyl silicone capillary column with an oven temperature increasing slowly, beginning around 60 °C, an 11-carbon molecule like undecane may be used as an internal standard to be compared with the retention times of other 10-, 11-, or 12- carbon molecules, depending on their structures.
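The comparison of retention times described above is commonly formalised as a retention index computed against n-alkane standards such as undecane. A minimal sketch using the linear (temperature-programmed) form of the index; the retention times below are hypothetical, chosen only to illustrate the calculation:

```python
def kovats_index_tp(t_x, t_n, t_n1, n):
    """Linear retention index for a temperature-programmed GC run.

    t_x  : retention time of the unknown compound
    t_n  : retention time of the n-carbon alkane eluting before it
    t_n1 : retention time of the (n+1)-carbon alkane eluting after it
    """
    return 100 * (n + (t_x - t_n) / (t_n1 - t_n))

# Hypothetical retention times (minutes): decane (C10), undecane (C11),
# and an unknown eluting between them
ri = kovats_index_tp(t_x=14.2, t_n=12.0, t_n1=15.5, n=10)
print(f"retention index: {ri:.0f}")  # 1063
```

An unknown eluting between decane and undecane thus gets an index between 1000 and 1100, placing it on a column-independent scale anchored by the alkane standards.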
See also
Higher alkanes
List of isomers of undecane
Cycloundecane
References
External links
Undecane at Dr. Duke's Phytochemical and Ethnobotanical Databases
Alkanes
The Finite Volume Community Ocean Model (FVCOM; formerly Finite Volume Coastal Ocean Model) is a prognostic, unstructured-grid, free-surface, 3-D primitive equation coastal ocean circulation model. The model is developed primarily by researchers at the University of Massachusetts Dartmouth and Woods Hole Oceanographic Institution, and used by researchers worldwide. Originally developed for the estuarine flooding/drying process, FVCOM has been upgraded to the spherical coordinate system for basin and global applications.
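FVCOM's numerics are far more elaborate (unstructured triangular grids, 3-D primitive equations), but the core finite-volume idea, updating each cell from the fluxes across its faces so that the tracer is conserved by construction, can be sketched in one dimension. This is an illustrative first-order upwind scheme, not FVCOM's actual discretisation:

```python
import numpy as np

def fv_advect(c, u, dx, dt, steps):
    """First-order upwind finite-volume advection of a tracer c
    at constant speed u > 0 on a periodic 1-D grid of cell width dx."""
    c = c.astype(float).copy()
    for _ in range(steps):
        flux = u * c                              # flux through each cell's right face
        c += dt / dx * (np.roll(flux, 1) - flux)  # flux in minus flux out
    return c

c0 = np.zeros(100)
c0[40:50] = 1.0
c1 = fv_advect(c0, u=1.0, dx=1.0, dt=0.5, steps=20)
# Total tracer is conserved because each face flux leaves one cell
# and enters its neighbour:
print(np.isclose(c1.sum(), c0.sum()))  # True
```

The same bookkeeping generalises to unstructured grids: each control volume sums the fluxes over however many faces it has, which is what makes the finite-volume formulation attractive for irregular coastlines.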
References
External links
"About us". FVCOM website, by the University of Massachusetts Dartmouth
Physical oceanography
Numerical climate and weather models
A crystal model is a teaching aid used for understanding concepts in crystallography and the morphology of crystals. Models are ideal for learning to recognize symmetry elements in crystals.
Romé de l'Isle
The first real collections of crystal models were produced by Romé de l'Isle. He actually offered sets of small (ca 3 cm) models made of "terra cotta" in order to stimulate the sales of the expensive four-volume set of his book "Cristallographie" (1783). The models were manufactured by his co-workers Arnould Carangeot, Lhermina and Swebach-Desfontaines, who produced numerous large sets (up to 448 models in each set). In order to exactly transfer interplanar angles from natural crystals to the terra cotta models, Carangeot invented and designed a prototype of a contact goniometer. This instrument, that proved to be an invaluable tool in geometric crystallography, enabled the measurement of interplanar angles with a precision of about half a degree.
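The interplanar angles that Carangeot's contact goniometer measured can, for crystals of the cubic system, also be computed directly from Miller indices with the standard dot-product formula. This is a general crystallographic result, not something specific to the terra cotta models:

```python
import math

def cubic_interplanar_angle(p1, p2):
    """Angle (degrees) between the normals of (hkl) planes in a cubic crystal."""
    h1, k1, l1 = p1
    h2, k2, l2 = p2
    dot = h1 * h2 + k1 * k2 + l1 * l2
    norm = math.sqrt((h1**2 + k1**2 + l1**2) * (h2**2 + k2**2 + l2**2))
    return math.degrees(math.acos(dot / norm))

# Angle between a cube face (100) and an octahedron face (111):
print(f"{cubic_interplanar_angle((1, 0, 0), (1, 1, 1)):.2f}")  # 54.74
```

Values like this one (54.74°, well known for cube vs. octahedron faces) are exactly the kind of angle a half-degree-precision goniometer could verify on a model.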
Teylers Museum in Haarlem has a complete set of these terracotta models that were bought in Paris (in 1785) by Martin van Marum, the first director of the museum. After over 200 years, this collection is still complete and in perfect condition at Teylers Museum.
René Just Haüy
Almost two decades later, René Just Haüy introduced wooden crystal models to illustrate the two-dimensional drawings in the atlas volume of his "Traité de Minéralogie" (1801). For the production of crystal models, wood appeared to be much more convenient than clay. Especially pear wood permitted getting smooth faces, sharp edges and accurate dihedral angles required for the production of these three-dimensional objects. In general, the angular accuracy was very high and some models, especially those illustrating crystal twins and Haüy's figures of decrement, still appear as masterpieces of fine woodwork and carving. Skilful craftsmen such as Pleuvin, Beloeuf and Lambotin (to name only a few) became specialists in this field and the models they offered were highly esteemed.
Between 1802 and 1804, Martin van Marum bought 597 of these pear wood models; 550 of these are still present in the collection of Teylers Museum. Each model is labeled with a number and the name of the crystal form. This set is the most complete collection of Haüy crystal models that still survives. That Van Marum was able to acquire such a unique collection was due to his networking: Van Marum had Haüy elected a member of the Hollandsche Maatschappij, a nomination to which Haüy attached great value. Haüy mentioned this membership in all of his publications.
After their introduction by Romé de l'Isle and Haüy, crystal models were increasingly demanded both by scholars for teaching purposes as well as by mineral collectors. The quality of the models improved due to the technical progress in their production. Several mineralogists and crystallographers started designing their own series of models. Although pear wood kept a prominent place, models were also manufactured using materials like plaster, cast iron, lead, brass, glass, porcelain, cardboard, etc.
The Krantz Company
In 1833, Adam August Krantz (who studied pharmacy and later "Geognosie" at the Bergakademie Freiberg) founded the Krantz company in Bonn. Four years later, Krantz moved to Berlin and sold minerals, fossils, rocks and basically acquired a monopoly in the production of crystal models made of pear wood or walnut. Ever since its foundation, the firm was always in contact with renowned scientists and important collectors. Hence in 1880, Krantz proposed a series of 743 pear wood models compiled for teaching purposes by the crystallographer Paul Groth. Seven years later, a supplementary collection of 213 models was available.
At the onset of the 20th century, Friedrich Krantz (a nephew of August Krantz, with a degree in mineralogy) supported by his teacher the crystallographer Carl Hintze, offered a collection of 928 models including most of the Groth models. Later, and along with many other productions, a Dana collection of 282 models was manufactured. Krantz offered a choice of collections of wooden models in different sizes (5, 10, 15–25 cm). In addition, he sold a variety of glass models having the crystallographic axes illustrated by colored silk threads or with the holohedral form made of cardboard inside. Also available were models in massive cut and polished glass (colored and uncolored), cardboard models, wire crystal models, crystal lattice models, models with rotating parts, etc.
Over the years, Krantz published numerous detailed catalogues of the collections he offered; they constitute a precious documentation.
External links
Teylers Universum
Early Crystal Models
References
Mineralogy
Crystallography
Teylers Museum
CM.com (formerly called CM Telecom) is a mobile services company based in the Netherlands. The company was formed in 1999, and provides software for direct messaging, VoIP messaging, ecommerce payments and digital identification.
In recent years, CM has acquired a number of messaging and media startups. CM.com was the main sponsor of football club NAC Breda from 2015 to 2020.
History
CM.com was established in February 1999 by Jeroen van Glabbeek and Gilbert Gooijer. In its first year, the company was known as ClubMessage B.V. Its main product at the time was Group Text, used by event organisers to distribute information to clubbers and event participants. The software was used primarily throughout the Benelux by leading nightlife venues to promote their DJ nights and events.
As the company grew, they began to operate in other event sectors, such as music festivals. During the next couple of years, the company focused on the expansion of its SMS messaging service, and created and patented the software MailText. In recent years, CM has diversified into the mobile payments market, with the launch of CM Payments Worldwide.
In September 2013, CM.com acquired Dutch mobile app developer OneSixty Mobile B.V. with offices in 's-Hertogenbosch and London, enabling its initial physical presence in the UK.
In early 2015 it was announced that the company would be opening offices in both Paris and London. During the same year, CM received coverage on The Next Web for its innovations in media messaging, including a solution for managing push-message campaigns that lets companies and app developers measure and drive interaction with their users. Push messages are first sent using the CM solution; customers can then be contacted automatically by SMS if the push notification goes unread.
In March 2016, CM.com acquired Global Messaging, which was based in Peterborough, England. Around the same time, the company also acquired a mobile app developer.
In July 2017, CM.com took over payment institution Payments from its American owner Ingram Micro. In the same month, the company revealed a new visual identity, logo and name change.
Market & products
As part of the instant messaging market, CM has predominantly focused on backend solutions. Mobile services companies such as CM have been at the forefront of mobile messaging solutions over the last decade and of the rise of what is often referred to as the messaging economy.
CM has become known for its work in the market of hybrid messaging, in which the sender can reach its customers through more than one messaging format.
In 2016, the company launched a real-time analytics tool for mobile, a web tool that gives customers insight into messaging traffic and conversions.
References
Mobile telecommunications
Mobile technology companies
VoIP companies
Cloud communication platforms
Dutch companies established in 1999 | CM (commerce) | Technology | 661 |
Ouvrage Boussois is a petit ouvrage of the Maginot Line, built as part of the "New Fronts" program to address shortcomings in the Line's coverage of the border with Belgium. Like the other three ouvrages near Maubeuge, it is built on an old Séré de Rivières fortification, near the town of Boussois. The fortification surrendered to the Germans twice, in the First World War on 6 September 1914, and in the Second World War on 22 May 1940. The site is now abandoned.
Fort de Boussois
The Fort de Boussois, also known as the Fort de Kilmaine, was built between 1881 and 1883 as part of the Séré de Rivières system of fortifications. It overlooks the valley of the Sambre. The pentagonal fort is surrounded by a ditch defended by caponiers and counterscarps. The fort featured a Mougin turret with two 155 mm guns. A cavalier, or elevated surface for artillery, surmounts the reinforced barracks. Underground galleries link the salients, caponiers and counterscarps to the central portions of the fort.
World War I
The Fort de Boussois came under fire in 1914 during the opening phases of World War I, during the Siege of Maubeuge. On 31 August a shell killed 60 men when it hit the powder magazine. The Mougin turret jammed the same day. The fortifications of Maubeuge were by now far in the rear of the German lines. The fort surrendered to the Germans on 6 September, who blew up the caponiers and the turret at the end of the month.
Design and construction
The Maginot-era site was approved in 1934. Work by the contractor Caroni cost 8.26 million francs. A planned second phase was to add an artillery block and support facilities. The rise in tensions between France and Germany in the late 1930s prevented the second phase from being pursued.
Description
Boussois comprises three combat blocks, featuring a new combination 25mm gun/50mm mortar turret. The ouvrage was built within the walls of the old Fort de Boussois. A compact underground gallery links the three blocks and contains utility spaces, barracks and magazine space. Construction was complicated by the presence of old mines beneath the fort.
Block 1: infantry block with one automatic rifle cloche (GFM-B), one mixed-arms cloche (AM), one twin machine gun embrasure and one machine gun/47 mm anti-tank gun (JM/AC47) embrasure.
Block 2: infantry block with one GFM cloche and one retractable 25mm gun/50mm mortar mixed-arms turret.
Block 3: infantry block with two GFM cloches, one retractable mixed-arms turret, one twin machine gun embrasure and one machine gun/47 mm anti-tank gun (JM/AC47) embrasure.
The second phase was to add two blocks with a 75mm twin gun turret each, as well as separate munitions and personnel entries well beyond the walls of the old fort.
A number of small blockhouses are associated with Boussois, as well as a casemate:
Casemate de l'Épinette: Double machine gun block with two JM/AC47 embrasures, two JM embrasures, one AM cloche and two GFM-B cloches. It is not connected to the ouvrage.
Manning
The 1940 manning of the ouvrage under the command of Captain Bertain comprised 195 men and 5 officers of the 84th Fortress Infantry Regiment. The units were under the umbrella of the 101st Fortress Infantry Division, 1st Army, Army Group 1.
History of the Maginot ouvrage
See Fortified Sector of Maubeuge for a broader discussion of the events of 1940 in the Maubeuge sector of the Maginot Line.
During the Battle of France in 1940, the invading German forces approached Maubeuge from the south and east, to the rear of the defensive line. As the German 28th Infantry Division moved along the line of fortifications on 19 May they were fired upon by Boussois. The Germans replied with fire from 8.8cm and 15 cm guns, hitting blocks 1 and 3 at short range. Firing continued the next day and was extended to the other ouvrages of the sector, with aerial bombardment by Stukas. Late on the 21st an infantry attack on the ouvrage was repelled. By the next morning the ventilation system had failed, and ventilation had to be improvised using the drains and a portable fan. The turret was jammed, pointing in a useless direction. The fort finally surrendered at 1100 hours on the 22nd.
Current condition
The interiors of the Maubeuge fortifications were stripped of their equipment by the Germans in 1941. The surface of the Séré de Rivières fortifications is enveloped by weeds and thorns. The Maginot fortifications are closed to access.
See also
List of all works on Maginot Line
Siegfried Line
Atlantic Wall
Czechoslovak border fortifications
Notes
References
Bibliography
Allcorn, William. The Maginot Line 1928-45. Oxford: Osprey Publishing, 2003.
Kaufmann, J.E. and Kaufmann, H.W. Fortress France: The Maginot Line and French Defenses in World War II, Stackpole Books, 2006.
Kaufmann, J.E., Kaufmann, H.W., Jancovič-Potočnik, A. and Lang, P. The Maginot Line: History and Guide, Pen and Sword, 2011.
Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 1. Paris, Histoire & Collections, 2001.
Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 2. Paris, Histoire & Collections, 2003.
Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 3. Paris, Histoire & Collections, 2003.
Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 5. Paris, Histoire & Collections, 2009.
External links
Boussois (petit ouvrage de) at fortiff.be
Maginotlinie - Fort Boussois at TracesOfWar.com
L'ouvrage de Boussois at wikimaginot.eu
Petit ouvrage de Boussois at lignemaginot.com
BOUS
Maginot Line
Séré de Rivières system
Fortifications of Maubeuge
Matrin-3 is a protein that in humans is encoded by the MATR3 gene.
Function
The protein encoded by this gene is localized in the nuclear matrix. It may play a role in transcription or may interact with other nuclear matrix proteins to form the internal fibrogranular network. Two transcript variants encoding the same protein have been identified for this gene.
Pathology
Mutations in the Matrin 3 gene are associated with familial amyotrophic lateral sclerosis.
References
Further reading
Iota Tauri, Latinized from ι Tauri, is a white-hued star in the zodiac constellation Taurus and an outlying member of the Hyades star cluster. It is visible to the naked eye with an apparent visual magnitude of 4.62, and is located at an estimated distance of about 173 light years based upon parallax measurements. The star is moving away from the Sun with a radial velocity of +38 km/s.
This has been reported as a double star with two components at separation 0.1", both of type A7V and magnitude 5.4. The combined spectrum matches a stellar classification of A7 V, which would normally indicate an A-type main-sequence star that is generating energy through hydrogen fusion at its core. It has an estimated age of 717 million years.
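The parallax-based distance quoted above follows from the standard relation d(pc) = 1000/ϖ(mas), converted to light years. A minimal sketch; the parallax value used here is back-calculated for illustration rather than taken from a specific catalogue:

```python
LY_PER_PC = 3.2616  # light years per parsec

def distance_ly(parallax_mas: float) -> float:
    """Distance in light years from an annual parallax in milliarcseconds."""
    return (1000.0 / parallax_mas) * LY_PER_PC

# A parallax near 18.9 mas reproduces the quoted ~173 light years:
print(f"{distance_ly(18.9):.0f} ly")  # 173 ly
```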
References
A-type main-sequence stars
Hyades (star cluster)
Tauri, Iota
Taurus (constellation)
BD+21 0751
Tauri, 102
032301
023497
1620
Total mycosynthesis is the combination of a filamentous fungal host organism with a genetic expression system that allows the assembly and controlled expression of one or more biosynthetic genes. It involves the reconstruction and/or engineering of biosynthetic pathways for the production of secondary metabolites, and is competitive with chemical total synthesis. It can be used both for the production of known natural products and for the engineering of pathways to produce new compounds or pathway intermediates. Examples include the total mycosynthesis of tenellin, in which the tenS, tenC, tenA and tenB genes were transferred from Beauveria bassiana to the expression host Aspergillus oryzae. The expression system allows the engineering of TenS to control chain length and methylation pattern.
Examples
References
Mycology
AppyWay (formerly AppyParking and Yellow Line Parking) is a technology company that provides parking apps and services for drivers. It was founded in London in 2013 by Dan Hubert, initially under the name Yellow Line Parking. It produces software that shows on-street and off-street parking options in major cities in the UK. The app is available on both Android and iOS. There is also a paid-for enterprise app, AppyParking Pro, a software-as-a-service product aimed at businesses with fleets.
History
AppyParking was founded by Dan Hubert, a former advertising creative in 2012. Hubert started to contact every London borough and digitize their Controlled Parking Zones from basic PDF maps. Originally named Yellow Line Parking, dealing only with single yellow line parking restrictions, the company was expanded and rebranded as AppyParking in 2014.
In late 2014, the company was part of the Microsoft Ventures Accelerator programme in London, during which Eric Requena, the company's chief technology officer, was advised to revise much of the application code.
In December 2014, AppyParking launched an enterprise app aimed at commercial fleets, and at CES 2015 in January 2015 Ford announced a partnership with the company. Later in the year, the company ran a one-month trial in Westminster, London with Vodafone xone and Pimlico Plumbers. When drivers located a parking space with the help of the app, they tapped a button on arrival and simply drove away later, being billed only for the time parked. This was made possible by sensors already built into the parking bays.
Since January 2016, AppyParking provides a feature that shows the nearest and cheapest petrol stations anywhere in the UK.
In September 2016, founder Dan Hubert appeared on BBC business show Dragons' Den seeking investment to expand the service, valuing the company at £10m based on a 2% equity stake offered for a sum of £200,000. He was unsuccessful after declining two offers from Peter Jones and Nick Jenkins, who instead valued the company at £1m.
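The £10m figure follows directly from the equity arithmetic: offering a 2% stake for £200,000 implies a valuation of £200,000 / 0.02. A one-line sketch of that calculation:

```python
def implied_valuation(investment: float, equity_fraction: float) -> float:
    """Company valuation implied by selling a given equity fraction."""
    return investment / equity_fraction

print(f"£{implied_valuation(200_000, 0.02):,.0f}")  # £10,000,000
```

By the same arithmetic, the Dragons' £1m counter-valuation would have required a 20% stake for the same £200,000.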
In July 2019 AppyParking closed a Series A round of investment worth £7.6m from investors including Hyundai Motor Company and Sumitomo Corporation, led by London-based venture capital firm West Hill Capital. The company has raised a total of £11m as of 2019 and is now valued at £50m after its 2019 round.
As of 2019, AppyParking hosts the largest dataset of the UK's kerbside restrictions, with over 450 UK towns and cities mapped within the AppyParking mobile app and Kerbside API.
AppyParking rebranded as AppyWay in September 2019.
AppyWay launched its second Smart City Parking scheme in the town of Halifax in October 2019.
Technology
AppyWay uses Google Maps overlays to display areas, and projects its dataset in the form of pins. The app retrieves the user's location and the present date and time to provide a list of prices, and connects with routing apps like Maps and Waze to direct the user to the parking site.
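The lookup the app performs, filtering kerbside restrictions by the user's current day and time and ranking by price, can be sketched as follows. The record fields and values are illustrative only and do not reflect AppyWay's actual Kerbside API schema:

```python
from datetime import datetime

# Hypothetical restriction records; field names are illustrative.
restrictions = [
    {"bay": "A1", "days": {0, 1, 2, 3, 4}, "start": 8, "end": 18, "price_per_hour": 2.40},
    {"bay": "B2", "days": {0, 1, 2, 3, 4, 5}, "start": 7, "end": 22, "price_per_hour": 4.10},
]

def open_bays(now: datetime):
    """Bays whose paid-parking hours apply at the given time, cheapest first."""
    active = [r for r in restrictions
              if now.weekday() in r["days"] and r["start"] <= now.hour < r["end"]]
    return sorted(active, key=lambda r: r["price_per_hour"])

for bay in open_bays(datetime(2019, 9, 2, 9, 0)):  # a Monday morning
    print(bay["bay"], bay["price_per_hour"])       # A1 first (cheapest), then B2
```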
References
Online companies of the United Kingdom
IOS software
Parking companies
Internet properties established in 2013
2013 software
Companies based in the London Borough of Hackney
Mobile technology
Lactarius deterrimus, also known as false saffron milkcap or orange milkcap, is a species of fungus in the family Russulaceae. The fungus produces medium-sized fruit bodies (mushrooms) with orangish caps up to wide that develop green spots in old age or if injured. Its orange-coloured latex stains maroon within 30 minutes. Lactarius deterrimus is a mycorrhizal fungus that associates with Norway spruce and bearberry. The species is distributed in Europe, but has also been found in parts of Asia. A visually similar species in the United States and Mexico is not closely related to the European species. Fruit bodies appear between late June and November, usually in spruce forests. Although the fungus is edible—like all Lactarius mushrooms from the section Deliciosi—its taste is often bitter, and it is not highly valued. The fruit bodies are used as a source of food for the larvae of several insect species. Lactarius deterrimus can be distinguished from similar Lactarius species by differences in mycorrhizal host or latex colour.
Taxonomy and classification
Although the fungus is one of the most common in Central Europe, the species was not validly described until 1968 by German mycologist Frieder Gröger. Before this, L. deterrimus was regarded as a variety of L. deliciosus (L. deliciosus var. piceus, described by Miroslav Smotlacha in 1946). After Roger Heim and A. Leclair described L. semisanguifluus in 1950, this fungus was referred to as the latter. L. fennoscandicus was separated from L. deterrimus in 1998 by Annemieke T. Verbeken and Jan Vesterholt and was classified as a separate species.
The epithet deterrimus is Latin and was chosen by Gröger to highlight the poor gustatory properties of the mushroom, such as the bitter aftertaste and often heavy maggot infestations. It is the superlative of "dēterior" (meaning "less good") and means "the worst, the poorest". The mushroom is commonly known as the "false saffron milkcap".
Several molecular phylogenetic analyses show that L. deterrimus, L. sanguifluus, Lactarius vinosus and L. fennoscandicus form a group of related species, which might include the North American species L. paradoxus and L. miniatosporus. Although L. deliciosus var. deterrimus qualifies as synonym for L. deterrimus, the families that had been characterized in North America as Lactarius deliciosus var. deterrimus are not closely related with the European types. They also seem not to form a monophyletic group.
Lactarius deterrimus belongs to the section Deliciosi of the genus Lactarius. According to molecular phylogenetics studies, this section forms a definite phylogenetic group within the milk cap relatives. Deliciosi species mainly have an orange or reddish-coloured latex and taste mild to slightly bitter. They are strict mycorrhizal associates of conifers. The next closest relative of L. deterrimus is L. fennoscandicus.
Characteristics
Macroscopic characteristics
The cap is , rarely up to , wide and more or less round. At an early stage it is convex with a furled margin and a depressed centre, later becoming flat to funnel-shaped. The cap skin is bare, greasy in moist weather and slightly shiny when dry. The cap is tangerine to orange-brown, darker zoned towards the edges, and dulls to mainly yellow-brown. In old age, or after cold or frost, it turns more or less dirty greenish or green-spotted.
The dense, bow-shaped lamellae are pale orange to pale ochre, and are basifixed on the stipe or slightly decurrent. They are brittle, intermixed with shorter lamellulae (short gills that do not extend fully from the cap margin to the stem), and partly forking near the stem. In old age or where injured they develop first dark red, later grey-green spots. The spore print is pale ochre.
The mostly long, cylindrical stipe is reddish orange. It is (rarely ) long, wide, and barely foveate or blotchy. At the base it is often slightly thickened or bulbous, and it becomes hollow inside. A pruinose, ring-like zone is found where the lamellae attach.
The milk is first carrot-red and becomes maroon within 10 to 30 minutes. The brittle, pale-yellowish flesh is often infested with maggots. If cut or injured it, like the milk, first turns carrot-red, then maroon, and within hours dirty green. The fruit body smells harsh and fruit-like; it first tastes mild, then slightly resinous-bitter and nearly spicy or somewhat astringent.
Microscopic characteristics
The rotund to ellipsoid spores are 7.5–10 μm long and 6–7.6 μm wide. The surface ornamentation extends to 0.5 μm high and consists mainly of warts and short, wide ridges, which are linked by a few fine lines to form an incomplete net (reticulum). The suprahilar area, a distinctly limited zone above the apiculus, is weakly amyloid. Basidia (spore-bearing cells) are four-spored and measure 45–60 × 9.5–12 μm. They are roughly cylindrical to somewhat club-shaped and often contain an oil droplet or a granular body. The sterigmata are 4.5–5.5 μm long. The thin-walled pleurocystidia are sparse, but somewhat more common near the gill edge. They are protruding, 45–65 μm long and 5–8 μm wide, and sometimes smaller near the gill edge. Nearly spindle-shaped, they are often straight or constricted like a string of pearls at the apex, and their contents are often finely granular. Pseudocystidia are largely present. They are 4–6 μm wide and sometimes protruding, but often shorter than the basidioles (basidia in an early developmental stage). The basidioles are cylindrical to spiral and contain an ochre-coloured substance similar to that of the laticifers; near the top, however, they are almost hyaline (transparent). The gill edge is usually sterile and has few to many cheilocystidia. The thin-walled cheiloleptocystidia are 15–25 μm long and 5–10 μm wide. They are almost club-shaped or irregularly shaped and transparent, and often contain a granular material. The cheilomacrocystidia are also thin-walled and measure 25–50 μm long and 6–8 μm wide. They are slightly spindle-shaped and often have a tip resembling a string of pearls; their interior is hyaline or granular. Laticifers are abundant and conspicuous, with ochre-coloured contents. The cuticle of the cap is an ixocutis, in which the hyphae are embedded in a jelly-like matrix that can swell in moisture to become heavily slimy.
Similar species
The likewise very common Lactarius deliciosus is similar in appearance. Lactarius deterrimus differs chiefly in that its flesh becomes reddish within 10 minutes and dark maroon in about 30 minutes, caused by the discolouration of the milk, whereas the milk of L. deliciosus stays orange or only becomes reddish within 30 minutes. The milk of L. deliciosus also tastes mild, while that of L. deterrimus is distinctly bitter. The cap of L. deterrimus turns distinctly greenish in old age or when injured, and the species is found only under spruces, while L. deliciosus is native under pines.
Even more similar is the very rare Lactarius semisanguifluus, whose milk also discolours to maroon within 5 to 8 minutes. The cap of its older fruit bodies is almost completely greenish, and it too grows under pines. The most similar and also most closely related fungus is Lactarius fennoscandicus, a boreal to subalpine species. Its cap is distinctly zoned and brown-orange, sometimes with purple-grey tones. The stem is pale to dull orange-ochre.
Distribution
Lactarius deterrimus is mainly distributed in Europe, but the fungus has also been found in parts of Asia (Turkey, India, Pakistan). According to recent molecular research, the similar North American species from the United States and Mexico are not closely related to the European species. In Europe, the fungus is especially common in Northern, North-Eastern and Central Europe; in the UK, it may be found from July through to November. In the south and west it is common in mountainous areas. In the east, its range extends to Russia.
Ecology
Lactarius deterrimus has traditionally been considered to have a strict mycorrhizal host specificity with Norway spruce. In 2006, it was reported that the fungus can also form arbutoid mycorrhiza with bearberry (Arctostaphylos uva-ursi). Arbutoid mycorrhizal associations are variants of ectomycorrhiza found in certain plants in the Ericaceae, characterised by hyphal coils in epidermal cells. The mycorrhizas formed by L. deterrimus on both bearberry and Norway spruce show typical features such as a hyphal mantle and a Hartig net; the distinguishing characteristic between the symbioses with the two hosts is that the hyphae penetrate the epidermal cells of bearberry, although there are also some differences in the form of the Hartig net, branching pattern, and colour. Although bearberry has been shown to form mycorrhiza with a wide range of fungi both in the field and in laboratory experiments, it had never previously been known to form mycorrhiza with fungi thought to be strictly host-specific. Bearberry may function as a nurse plant to help re-establish Norway spruce in deforested areas.
The species is common in spruce-fir and spruce-moorland forests and in spruce forests and plantations. Wherever spruce occurs, the fungus can also be found in various European beech and oak-European hornbeam forests, as well as on forest edges, in clearings and clear-cut meadows, and even on juniper heaths and in parkland; there are scarcely any habitats where spruce is common but the fungus absent. The fungus is very common in young spruce forests 10 to 20 years old, where it occasionally appears in masses along forest path edges.
The fungus probably favours calcareous soil, although it has been found on nearly every soil type: sand, peat, limestone soils, rankers and Cambisols. It tolerates acidic as well as alkaline soils, and low-nutrient as well as relatively nutrient-rich ones. Heavily eutrophic soils, however, are unsuitable as habitat.
The fruit bodies appear from late June to November, but usually from August to October; overwintered specimens can be found even on frosty days as late as early February. The fungus prefers hilly and upland country, but is also not uncommon in lowlands.
Fruit body inhabitants
Many fungi can serve as a source of food for insect larvae, although most insects eat fungi only occasionally. Still, a whole range of insect species specialise in fungi. These are mainly beetle larvae, especially of hairy fungus beetles (Mycetophagidae) and rove beetles (Staphylinidae), and true flies (Diptera). Milk caps are especially attractive to true flies, while beetle larvae are comparatively rare in them. The most common insects found on the fungus are Mycetophilidae and Phoridae larvae, which colonise even the youngest fruit bodies. Drosophilidae and Psychodidae are also relatively common on mature or overmature fruit bodies. Species of the section Deliciosi are often infested by Diptera larvae.
The following species have been isolated from the fruit bodies of L. deterrimus:
Ula sylvatica (Pediciidae): This very common crane fly has been isolated from more than 70 different fungus species belonging to completely different genera and families. Its larvae spend an unusually long portion of their life cycle in the fruit body, usually three or four weeks.
Mycetophila blanda (Mycetophilidae): This fungus gnat usually develops in milk caps of the section Deliciosi.
Mycetophila estonica (Mycetophilidae): A rare species first described in 1992, which is closely related to Mycetophila blanda and is also common in milk caps.
Mycetophila evanida (Mycetophilidae): This species has been found in fungi including Lactarius fulvissimus and Russula luteotacta.
Culicoides scoticus (Ceratopogonidae): One of the most common biting midges found in fungi, it has been recorded in over 20 different fungus species.
Mydaea corni: This species belongs to the family Muscidae and has to date been found only in species of Lactarius and Russula.
Many different fruit flies have been recorded on L. deterrimus: Drosophila funebris, Drosophila phalerata, Drosophila transversa and Drosophila testacea.
Psychoda albipennis, Psychoda lobata and Tinearia alternata have been isolated from the fruit bodies of the fungus. The larvae of Psychoda lobata are known to develop in a wide range of fungus species from over 30 genera.
Parasitism
Abnormally developed milk caps infested by the parasitic sac fungus Hypomyces lateritius (syn. Peckiella lateritia) are occasionally found in summer and autumn. The infested fruit bodies are usually more or less heavily malformed, with harder and more solid flesh than typical fruit bodies, making them more resistant to rot; they can even survive the winter. They do not develop gills; instead, the underside of the cap is covered by an initially soft, white hyphal mat known as a subiculum. Soon the mycelium becomes denser and takes on a white-grey colour. The perithecia are formed after about 10–14 days. Perithecia are the fruit bodies of Hypomyces and other sac fungi, in which the spindle-shaped asci are produced. Besides L. deterrimus, L. deliciosus and L. sanguifluus can become infested, and more rarely other milk caps. Hypomyces lateritius, H. ochraceus, H. rosellus and H. odoratus, among other Hypomyces species, live parasitically on various milk caps and brittlegills as well as on the fruit bodies of species from other genera.
Importance
Edibility
Lactarius deterrimus is an edible mushroom, but is much less appreciated than the similar L. deliciosus. It tastes slightly bitter and is often infested by maggots. Like L. deliciosus, this fungus is mainly stir-fried in butter or oil; if it is cooked in water, the flesh becomes very soft. Young fruit bodies can also be pickled, or dried for later use. As the fungus is often heavily infested by maggots, experienced mushroom pickers prefer young fruit bodies. The urine turns red if a large quantity of these milk caps is eaten, but this is entirely harmless and is not evidence of any impairment to health: the red-coloured azulene compounds ingested with the mushrooms are more or less excreted with the urine.
Contents
The milk cap's fruit bodies have a characteristic orange milk juice (latex). Guaiane sesquiterpenes are responsible for the orange colour. Sesquiterpenes are terpenes composed of three isoprene units and therefore have 15 carbon atoms. They are widely distributed in nature and are found in plants as well as animals, for example in the juvenile hormone of insects. Plants use sesquiterpenes as defence compounds against insects. According to some studies, sesquiterpenes have antibiotic, anticarcinogenic, or immunostimulant effects.
Young, uninjured fruit bodies of L. deterrimus contain sesquiterpenoids in the form of fatty acid esters of a dihydroazulene. About 85% of the yellow-coloured dihydroazulene is esterified with stearic acid and about 15% with linoleic acid. If the fruit body is injured, the free sesquiterpene – a dihydroazulene alcohol – is released enzymatically. Several products are formed from it through oxidation: the yellow-coloured aldehyde delicial (1-formyl-6,7-dihydro-4-methyl-7-isopropenylazulene), the purple-coloured aldehyde lactarovioline (1-formyl-4-methyl-7-isopropenylazulene), and the blue-coloured alcohol deterrol (1-hydroxymethyl-4-methyl-7-isopropenylazulene). The mixture of these differently coloured compounds makes the milk at first maroon; it later discolours to green. The dihydroazulene alcohol and delicial are unstable compounds that react to form further products; delicial in particular polymerises very easily.
See also
List of Lactarius species
References
External links
Photographs and Latin original diagnosis
Bioactive compounds and medical characteristics of the fungus
Edible fungi
deterrimus
Fungi of Asia
Fungi of Europe
Fungi described in 1968
Fungus species | Lactarius deterrimus | Biology | 3,773 |