| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
2,814,939 | https://en.wikipedia.org/wiki/Solid%20oxygen | Solid oxygen forms at normal atmospheric pressure at a temperature below 54.36 K (−218.79 °C, −361.82 °F). Solid oxygen O2, like liquid oxygen, is a clear substance with a light sky-blue color caused by absorption in the red part of the visible light spectrum.
Oxygen molecules have attracted attention because of the relationship between the molecular magnetization and crystal structures, electronic structures, and superconductivity. Oxygen is the only simple diatomic molecule (and one of the few molecules in general) to carry a magnetic moment. This makes solid oxygen particularly interesting, as it is considered a "spin-controlled" crystal that displays antiferromagnetic magnetic order in the low temperature phases. The magnetic properties of oxygen have been studied extensively. At very high pressures, solid oxygen changes from an insulating to a metallic state; and at very low temperatures, it even transforms to a superconducting state. Structural investigations of solid oxygen began in the 1920s and, at present, six distinct crystallographic phases are established unambiguously.
The molar volume of solid oxygen ranges from 21 cm3/mol in the α-phase to 23.5 cm3/mol in the γ-phase.
Phases
Six different phases of solid oxygen are known to exist:
α-phase: light blue; forms at 1 atm below 23.8 K; monoclinic crystal structure, space group C2/m (no. 12).
β-phase: faint blue to pink; forms at 1 atm below 43.8 K; rhombohedral crystal structure, space group R-3m (no. 166). At room temperature and high pressure it begins to transform into tetraoxygen.
γ-phase: faint blue; forms at 1 atm below 54.36 K; cubic crystal structure, space group Pm-3n (no. 223).
δ-phase: orange; forms at room temperature at a pressure of 9 GPa.
ε-phase: dark red to black; forms at room temperature at pressures greater than 10 GPa.
ζ-phase: metallic; forms at pressures greater than 96 GPa.
It has been found that applying pressure to oxygen at room temperature solidifies it into the β-phase; with further increasing pressure, the β-phase undergoes phase transitions to the δ-phase at 9 GPa and the ε-phase at 10 GPa. Due to the increase in molecular interactions, the color changes from blue to pink, orange, then red (the stable octaoxygen phase), and the red further darkens to black with increasing pressure. A metallic ζ-phase appears at 96 GPa when ε-phase oxygen is further compressed.
Red oxygen
As the pressure of oxygen at room temperature is increased through about 10 GPa, it undergoes a dramatic phase transition. Its volume decreases significantly and it changes color from sky-blue to deep red. However, this is a different allotrope of oxygen, octaoxygen (O8), not merely a different crystalline phase of O2.
Metallic oxygen
A ζ-phase appears at 96 GPa when ε-phase oxygen is further compressed. This phase was discovered in 1990 by pressurizing oxygen to 132 GPa. The metallic ζ-phase exhibits superconductivity at pressures over 100 GPa and temperatures below 0.6 K.
References
Crystals in space group 12
Crystals in space group 166
Crystals in space group 221
Oxygen
Cryogenics
Ice | Solid oxygen | Physics | 705 |
22,308,837 | https://en.wikipedia.org/wiki/Niobium%20bromide | Niobium bromide may refer to
Niobium(III) bromide. NbBr3
Niobium(IV) bromide, NbBr4
Niobium(V) bromide, NbBr5
References
Niobium compounds
Bromides
Metal halides | Niobium bromide | Chemistry | 60 |
29,175,658 | https://en.wikipedia.org/wiki/List%20of%20Russian%20IT%20developers | This list of Russian IT developers includes the hardware engineers, computer scientists and programmers from the Russian Empire, the Soviet Union and the Russian Federation.
See also the categories Russian computer scientists and Russian computer programmers.
Alphabetical list
A
Georgy Adelson-Velsky, inventor of AVL tree algorithm, developer of Kaissa (the first World Computer Chess Champion)
Andrey Andreev, creator of Badoo, one of the world's largest dating sites, and the 10th largest social network in the world
Vladimir Arlazarov, DBS Ines, developer of Kaissa (the first World Computer Chess Champion)
B
Boris Babayan, developer of the Elbrus-series supercomputers, founder of Moscow Center of SPARC Technologies (MCST)
Alexander Brudno, described the alpha-beta (α-β) search algorithm
Nikolay Brusentsov, inventor of ternary computer (Setun)
C
Andrei Chernov, one of the founders of the Russian Internet and the creator of the KOI8-R character encoding
Alexey Chervonenkis, developed the Vapnik–Chervonenkis theory, also known as the "fundamental theory of learning", a key part of the computational learning theory
D
Mikhail Donskoy, a leading developer of Kaissa, the first computer chess champion
Pavel Durov, founded the VKontakte.ru social network, #35 on Alexa's Top 500 Most Visited Global Websites, the 6th largest social network in the world, and Telegram
E
Andrey Ershov, developed Rapira programming language, started the predecessor to the Russian National Corpus
G
Vadim Gerasimov, one of the original co-developers of the famous video game Tetris
Victor Glushkov, a founder of cybernetics, inventor of the first personal computer, MIR
K
Yevgeny Kaspersky, developer of Kaspersky anti-virus products
Anatoly Karatsuba, developed the Karatsuba algorithm (the first fast multiplication algorithm)
Leonid Khachiyan, developed the Ellipsoid algorithm for linear programming
Tigran Khudaverdyan, deputy CEO of Yandex
Lev Korolyov, co-developed the first Soviet computers
Semen Korsakov, first to use punched cards for information storage and search
Alexander Kronrod, developer of Gauss–Kronrod quadrature formula and Kaissa, the first world computer chess champion
L
Evgeny Landis, inventor of AVL tree algorithm
Sergey Lebedev, developer of the first Soviet and European electronic computers, MESM and BESM
Vladimir Levenshtein, developed the Levenshtein automaton, Levenshtein coding and Levenshtein distance
Leonid Levin, IT scientist, developed the Cook-Levin theorem (the foundation for computational complexity)
Oleg Lupanov, coined the term "Shannon effect"; developed the (k, s)-Lupanov representation of Boolean functions
M
Yuri Matiyasevich, solved Hilbert's tenth problem
Alexander Mikhailov, coined the term "informatics"
Anatoly Morozov, worked on automated control systems, problem-focused complexes, modelling, and situational management
N
Anton Nossik, a godfather of the Russian internet who pioneered Russian online news
P
Alexey Pajitnov, inventor of Tetris
Victor Pan, worked in the area of polynomial computations
Igor Pavlov, creator of the file archiver 7-Zip; creator of the 7z archive format
Svyatoslav Pestov, developer of jEdit text editor and Factor programming language
Vladimir Pokhilko, specialized in human-computer interaction
Yuriy Polyakov, developed an approximate method for nonlinear differential and integrodifferential equations
R
Bashir Rameyev, developer of Strela computer, the first mainframe computer manufactured serially in the Soviet Union
Alexander Razborov, won the Nevanlinna Prize for introducing the "approximation method" in proving Boolean circuit lower bounds of some essential algorithmic problems, and the Gödel Prize for the paper "Natural Proofs"
Eugene Roshal, developer of the FAR file manager, RAR file format, WinRAR file archiver
S
Ilya Segalovich, founder and one of the first programmers of Yandex, Russian search engine
Anatoly Shalyto, initiator of the Foundation for Open Project Documentation; developed Automata-based programming
Dmitry Sklyarov, computer programmer known for his 2001 arrest by American law enforcement; US v. ElcomSoft Sklyarov
Alexander Stepanov, created and implemented the C++ Standard Template Library
Igor Sysoev, creator of nginx, the popular high performance web server, and founder of NGINX, Inc.
T
Andrey Terekhov (Терехов, Андрей Николаевич), developer of Algol 68 LGU; telecommunication systems
Andrey Ternovskiy, creator of Chatroulette
Valentin Turchin, inventor of Refal programming language, introduced metasystem transition and supercompilation
V
Vladimir Vapnik, developed the theory of the support vector machine; demonstrated its performance on a number of problems of interest to the machine learning community, including handwriting recognition
Y
Sergey Yablonsky, founder of the Soviet school of mathematical cybernetics and discrete mathematics
See also
List of computer scientists
List of pioneers in computer science
List of programmers
Information technology
List of Russian inventors
Lists of computer scientists
It Developers | List of Russian IT developers | Technology | 1,134 |
33,641,350 | https://en.wikipedia.org/wiki/WildThings | WildThings is an urban fauna translocation program in the Australian state of New South Wales developed by Ku-ring-gai Council in 2004 to protect, promote and proliferate wildlife in the Ku-ring-gai local government area.
Program background
It was noticed in Ku-ring-gai that while bush regenerators intended to preserve habitat and wildlife, in practice some Bushcare Groups were responsible for the wholesale removal of weeds which often lessened the biodiversity value of the ecosystem that the volunteers were trying to protect.
In New South Wales, where Ku-ring-gai is located, the state government has many regulations that effectively deny people the opportunity to keep native animals as pets; the government's web page details its objections.
In an attempt to create positive relationships between people and wildlife, the program WildThings was created; however, due to the restrictive legislative environment around mammals, the program has concentrated on invertebrates, fish and reptiles.
Initiatives
There are two main components to WildThings:
The placement of Trigona carbonaria hives. This program places native bee hives on residential properties, assisting with pollination while increasing awareness of this insect. As of 2011, over 200 hives have been distributed.
The conversion of unwanted swimming pools into ponds. Baby boomers, in particular, have pools that are no longer being used. Studies conducted by the University of Western Sydney, on a random selection of converted ponds, showed they promote invertebrate biodiversity and their water quality is suitable for recreational use. An added bonus is that a converted pool is essentially a rainwater tank without a lid, making large amounts of water available for a variety of activities around the home, saving potable water, electricity and eliminating the need for chemicals.
References
Nature conservation in Australia
Ecological restoration
Environment of New South Wales | WildThings | Chemistry,Engineering | 369 |
7,675,689 | https://en.wikipedia.org/wiki/Catabiosis | Catabiosis is the process of growing older, aging and physical degradation.
The word comes from Greek "kata"—down, against, reverse and "biosis"—way of life and is generally used to describe senescence and degeneration in living organisms and biophysics of aging in general.
One of the popular catabiotic theories is the entropy theory of aging, where aging is characterized by a thermodynamically favourable increase in structural disorder. Living organisms are open systems that take free energy from the environment and offload their entropy as waste. However, the basic components of living systems—DNA, proteins, lipids and sugars—tend towards the state of maximum entropy, continuously accumulating damage and causing catabiosis of the living structure.
Catabiotic force, on the contrary, is the influence exerted by living structures on adjoining cells, by which the latter develop in harmony with the primary structures.
References
External links
Onpedia definition of catabiosis
Catabiotic force
Dictionary.com - Catabiosis
Medical aspects of death
Biology terminology
Senescence | Catabiosis | Chemistry,Biology | 217 |
983,400 | https://en.wikipedia.org/wiki/L%20ring | The L-ring of the bacterial flagellum is the ring in the lipid outer cell membrane through which the axial filament (rod, hook, and flagellum) passes. that l ring stands for lipopolysaccharide.
References
Bacteria | L ring | Biology | 56 |
70,347,258 | https://en.wikipedia.org/wiki/Eddy%20pumping | Eddy pumping is a component of mesoscale eddy-induced vertical motion in the ocean. It is a physical mechanism through which vertical motion is created from variations in an eddy's rotational strength. Cyclonic (Anticyclonic) eddies lead primarily to upwelling (downwelling). It is a key mechanism driving biological and biogeochemical processes in the ocean such as algal blooms and the carbon cycle.
The mechanism
Eddies have a re-stratifying effect, which means they tend to organise the water in layers of different density. These layers are separated by surfaces called isopycnals. The re-stratification of the mixed layer is strongest in regions with large horizontal density gradients, known also as “fronts”, where the geostrophic shear and potential energy provide an energy source from which baroclinic and symmetric instabilities can grow. Below the mixed layer, a region of rapid density change (or pycnocline) separates the upper and lower water, hindering vertical transport.
Eddy pumping is a component of mesoscale eddy-induced vertical motion. Such vertical motion is caused by the deformation of the pycnocline. It can be conceptualised by assuming that ocean water has a density surface with mean depth averaged over time and space. This surface separates the upper ocean, corresponding to the euphotic zone, from the lower, deep ocean. When an eddy transits through, this density surface is deformed. Depending on the phase of the eddy's lifespan, this creates vertical perturbations in different directions. Eddy lifespans are divided into formation, evolution and destruction. Eddy-pumping perturbations are of three types:
Cyclones
Anticyclones
Mode-water eddies
Eddy-centric approach
Mode-water eddies have a complex density structure. Due to their shape, they cannot be distinguished from regular anticyclones in an eddy-centric (focused on the core of the eddy) analysis based on sea level height. Nonetheless, eddy pumping induced vertical motion in the euphotic zone of mode-water eddies is comparable to cyclones. For this reason, only the cyclonic and anticyclonic mechanisms of eddy-pumping perturbations are explained.
Conceptual explanation based on sea-surface level
An intuitive description of this mechanism is given by an eddy-centric analysis based on sea-surface level. In the Northern hemisphere, anticlockwise rotation in cyclonic eddies creates a divergence of horizontal surface currents due to the Coriolis effect, leading to a depressed sea surface. To compensate for the inhomogeneity of surface elevation, isopycnal surfaces are uplifted toward the euphotic zone and incorporation of deep-ocean, nutrient-rich waters can occur.
Physical explanation
Conceptually, eddy pumping associates the vertical motion in the interior of eddies with temporal changes in eddy relative vorticity. The vertical motion created by the change in vorticity is understood from the characteristics of the water contained in the core of the eddy. Cyclonic eddies rotate anticlockwise (clockwise) in the Northern (Southern) hemisphere and have a cold core. Anticyclonic eddies rotate clockwise (anticlockwise) in the Northern (Southern) hemisphere and have a warm core. The temperature and salinity difference between the eddy core and the surrounding waters is the key element driving vertical motion. While propagating horizontally, cyclones and anticyclones "bend" the pycnocline upwards and downwards, respectively, driven by this temperature and salinity discrepancy. The extent of the vertical perturbation of the density surface inside the eddy (compared to the mean ocean density surface) is determined by the changes in rotational strength (relative vorticity) of the eddy.
Ignoring horizontal advection in the density conservation equation, the density changes due to changes in vorticity can be directly related to vertical transport. This assumption is consistent with the idea of vertical motion occurring at the eddy centre, corresponding to variations of a perfectly circular flow.
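As a sketch in standard notation (the symbols here are assumptions, not taken from the article), density conservation without horizontal advection ties the local density tendency directly to a vertical velocity w:

```latex
% Density conservation at the eddy centre, horizontal advection neglected:
%   D(rho)/Dt = 0   =>   d(rho)/dt + w * d(rho)/dz = 0
\frac{\partial \rho}{\partial t} + w\,\frac{\partial \rho}{\partial z} = 0
\qquad\Longrightarrow\qquad
w = -\,\frac{\partial \rho/\partial t}{\partial \rho/\partial z}
```

Since ∂ρ/∂z < 0 in a stably stratified ocean, an intensifying cyclone, which raises isopycnals and makes water at a fixed depth denser over time (∂ρ/∂t > 0), gives w > 0, i.e. upwelling; an intensifying anticyclone gives the opposite sign.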
Through this mechanism, eddy pumping generates upwelling of cold, nutrient-rich deep water in cyclonic eddies and downwelling of warm, nutrient-poor surface water in anticyclonic eddies.
Dependency on the phase of lifespan
Eddies weaken over time due to kinetic energy dissipation. As eddies form and intensify, the mechanisms mentioned above strengthen and, as an increase in relative vorticity generates perturbations of the isopycnal surfaces, the pycnocline deforms. On the other hand, when eddies have aged and carry low kinetic energy, their vorticity diminishes, leading to eddy destruction. This process opposes eddy formation and intensification, as the pycnocline returns to its position prior to the eddy-induced deformation. This means that the pycnocline uplifts in anticyclones and compresses in cyclones, leading to upwelling and downwelling, respectively.
Eddy pumping characteristics
The direction of vertical motion in cyclonic and anticyclonic eddies is independent of the hemisphere. Observed vertical velocities of eddy pumping are in the order of one meter per day. However, there are regional differences. In regions where kinetic energy is higher, such as in the Western boundary current, eddies are found to generate stronger vertical currents than eddies in open ocean.
Limitations
When describing vertical motion in eddies it is important to note that eddy pumping is only one component of a complex mechanism. Another important factor to take into account, especially when considering ocean-wind interaction, is the role played by eddy-induced Ekman pumping. Some other limitations of the explanation above are due to the idealised, quasi circular linear dynamical response to perturbations that neglects the vertical displacement that a particle can experience moving along a sloping neutral surface. Vertical motion in eddies is a fairly recent research topic that still presents limitations in the theory both due to complexity and lack of sufficient observations. Nonetheless, the one presented above is a simplification that helps explain partially the important role that eddies play in biological productivity, as well as their biogeochemical role in the carbon cycle.
Biological impact
Recent findings suggest that mesoscale eddies are likely to play a key role in nutrient transport, such as the spatial distribution of chlorophyll concentration, in the open ocean. Lack of knowledge on the impact of eddy activity is however still notable, as eddies' contribution has been argued not to be sufficient to maintain the observed primary production through nitrogen supply in parts of the subtropical gyre. Although the mechanisms through which eddies shape ecosystems are not yet fully understood, eddies transport nutrients through a combination of horizontal and vertical processes. Stirring and trapping contribute to horizontal nutrient transport, whereas eddy pumping, eddy-induced Ekman pumping, and eddy impacts on mixed-layer depth modulate the vertical nutrient supply. Here, the role played by eddy pumping is discussed.
Cyclonic eddy pumping drives new primary production by lifting nutrient-rich waters into the euphotic zone. Complete utilisation of the upwelled nutrients is guaranteed by two main factors. Firstly, biological uptake takes place in timescales that are much shorter than the average lifetime of eddies. Secondly, because the nutrient enhancement takes place in the eddy's interior, isolated from the surrounding waters, biomass can accumulate until upwelled nutrients are fully consumed.
Main examples
Evidence of the biological impacts of the eddy pumping mechanism is present in various publications based on observations and modelling of multiple locations worldwide. Eddy-centric chlorophyll anomalies have been observed in the Gulf Stream region and off the west coast of British Columbia (Haida eddies), as well as eddy-induced enhanced biological production in the Weddell-Scotia Confluence in the Southern Ocean, in the northern Gulf of Alaska, in the South China Sea, in the Bay of Bengal, in the Arabian Sea and in the north-western Alboran Sea, to name a few. Estimations of eddy pumping in the Sargasso Sea resulted in a nitrogen flux between 0.24 and 0.5 mol N m−2 yr−1. These quantities have been deemed sufficient to sustain a rate of new primary production consistent with estimates for this region.
On a wider ecological scale, eddy-driven variations in productivity influence the trade-off between phytoplankton larval survival and the abundance of predators. These concepts partially explain mesoscale variations in the distribution of larval bluefin tuna, sailfish, marlin, swordfish, and other species. Distributions of adult fishes have also been associated with the presence of cyclonic eddies. Particularly, higher abundances of bluefin tuna and cetaceans in the Gulf of Mexico and blue marlin in the proximity of Hawaii are linked to cyclonic eddy activities. Such spatial patterns extend to seabirds spotted in the vicinities of eddies, including great frigate birds in the Mozambique Channel and albatross, terns, and shearwaters in the South Indian Ocean.
North Atlantic Algal Bloom
The North Sea is an ideal basin for the formation of algal blooms or spring blooms due to the combination of abundant nutrients and intense Arctic winds that favour the mixing of waters. Blooms are important indicators of the health of a marine ecosystem.
Springtime phytoplankton blooms have been thought to be initiated by seasonal light increase and near-surface stratification. Recent observations from the sub-polar North Atlantic experiment and biophysical models suggest that the bloom may be instead resulting from an eddy-induced stratification, taking place 20 to 30 days earlier than it would occur by seasonal changes. These findings revolutionise the entire understanding of spring blooms. Moreover, eddy pumping and eddy-induced Ekman pumping have been shown to dominate late-bloom and post-bloom biological fields.
Biogeochemistry
Phytoplankton absorbs carbon dioxide through photosynthesis. When such organisms die and sink to the seafloor, the carbon they absorbed is stored in the deep ocean through what is known as the biological pump. Recent research has investigated the role of eddy pumping, and more generally of vertical motion in mesoscale eddies, in the carbon cycle. Evidence has shown that eddy pumping-induced upwelling and downwelling may play a significant role in shaping the way that carbon is stored in the ocean. Although research in this field has only recently developed, first results show that eddies contribute less than 5% of the total annual export of phytoplankton to the ocean interior.
Plastic pollution
Eddies play an important role in the sea surface distribution of microplastics in the ocean. Due to their convergent nature, anticyclonic eddies trap and transport microplastics at the sea surface, along with nutrients, chlorophyll and zooplankton. In the North Atlantic subtropical gyre, the first direct observation of sea surface concentrations of microplastics between a cyclonic and an anticyclonic mesoscale eddy has shown an increased accumulation in the latter. Accumulation of microplastics has environmental impacts through its interaction with the biota. Initially buoyant plastic particles (between 0.01 and 1 mm) are submerged below the climatological mixed layer depth mainly due to biofouling. In regions with very low productivity, particles remain within the upper part of the mixed layer and can only sink below it if a spring bloom occurs.
See also
Algal bloom - a rapid increase or accumulation in the population of algae in freshwater or marine water systems
Baroclinic instability - fluid dynamical instability of fundamental importance in the atmosphere and ocean
Ekman pumping - Ekman Pumping is the component of Ekman transport that results in areas of downwelling due to the convergence of water
Haida Eddies - episodic, clockwise rotating ocean eddies that form during the winter off the west coast of British Columbia
Mesoscale ocean eddies - Swirling in the ocean created by its turbulent nature
Spring bloom – Strong increase in phytoplankton abundance that typically occurs in the early spring
References
Water physics | Eddy pumping | Physics,Materials_science | 2,529 |
15,215,501 | https://en.wikipedia.org/wiki/KCNJ14 | Potassium inwardly-rectifying channel, subfamily J, member 14 (KCNJ14), also known as Kir2.4, is a human gene.
Potassium channels are present in most mammalian cells, where they participate in a wide range of physiologic responses. The protein encoded by this gene is an integral membrane protein and inward-rectifier type potassium channel, and probably has a role in controlling the excitability of motor neurons. Two transcript variants encoding the same protein have been found for this gene.
See also
Inward-rectifier potassium ion channel
References
Further reading
External links
Ion channels | KCNJ14 | Chemistry | 129 |
1,758,819 | https://en.wikipedia.org/wiki/Phosphorus%20trichloride | Phosphorus trichloride is an inorganic compound with the chemical formula PCl3. A colorless liquid when pure, it is an important industrial chemical, being used for the manufacture of phosphites and other organophosphorus compounds. It is toxic and reacts readily with water to release hydrogen chloride.
History
Phosphorus trichloride was first prepared in 1808 by the French chemists Joseph Louis Gay-Lussac and Louis Jacques Thénard by heating calomel (Hg2Cl2) with phosphorus. Later during the same year, the English chemist Humphry Davy produced phosphorus trichloride by burning phosphorus in chlorine gas.
Preparation
World production exceeds one-third of a million tonnes. Phosphorus trichloride is prepared industrially by the reaction of chlorine with white phosphorus, using phosphorus trichloride as the solvent. In this continuous process PCl3 is removed as it is formed in order to avoid the formation of PCl5.
P4 + 6 Cl2 → 4 PCl3
Structure and spectroscopy
It has a trigonal pyramidal shape. Its 31P NMR spectrum exhibits a singlet around +220 ppm with reference to a phosphoric acid standard.
Reactions
The phosphorus in PCl3 is often considered to have the +3 oxidation state and the chlorine atoms are considered to be in the −1 oxidation state. Most of its reactivity is consistent with this description.
Oxidation
PCl3 is a precursor to other phosphorus compounds, undergoing oxidation to phosphorus pentachloride (PCl5), thiophosphoryl chloride (PSCl3), or phosphorus oxychloride (POCl3).
PCl3 as an electrophile
PCl3 reacts vigorously with water to form phosphorous acid (H3PO3) and hydrochloric acid:
PCl3 + 3 H2O → H3PO3 + 3 HCl
Phosphorus trichloride is the precursor to organophosphorus compounds. It reacts with phenol to give triphenyl phosphite:
PCl3 + 3 PhOH → P(OPh)3 + 3 HCl
Alcohols such as ethanol react similarly in the presence of a base such as a tertiary amine:
PCl3 + 3 EtOH + 3 R3N → P(OEt)3 + 3 [R3NH]Cl
With one equivalent of alcohol and in the absence of base, the first product is the alkoxyphosphorodichloridite:
PCl3 + EtOH → PCl2(OEt) + HCl
In the absence of base, however, with excess alcohol, phosphorus trichloride converts to diethylphosphite:
PCl3 + 3 EtOH → (EtO)2P(O)H + 2 HCl + EtCl
Secondary amines (R2NH) form aminophosphines. For example, bis(diethylamino)chlorophosphine, (Et2N)2PCl, is obtained from diethylamine and PCl3. Thiols (RSH) form P(SR)3. An industrially relevant reaction of PCl3 with amines is phosphonomethylation, which employs formaldehyde:
R2NH + PCl3 + CH2O → (HO)2P(O)CH2NR2 + 3 HCl
The herbicide glyphosate is also produced this way.
The reaction of PCl3 with Grignard reagents and organolithium reagents is a useful method for the preparation of organic phosphines with the formula R3P (sometimes called phosphanes) such as triphenylphosphine, Ph3P.
Triphenylphosphine is produced industrially by the reaction between phosphorus trichloride, chlorobenzene, and sodium:
PCl3 + 3 PhCl + 6 Na → PPh3 + 6 NaCl, where Ph = C6H5
Under controlled conditions or especially with bulky R groups, similar reactions afford less substituted derivatives such as chlorodiisopropylphosphine.
Conversion of alcohols to alkyl chlorides
Phosphorus trichloride is commonly used to convert primary and secondary alcohols to the corresponding chlorides. As discussed above, the reaction of alcohols with phosphorus trichloride is sensitive to conditions. The mechanism for the ROH → RCl conversion involves the reaction of HCl with the intermediate phosphite esters:
P(OR)3 + HCl → (RO)2P(O)H + RCl
The first step proceeds with nearly ideal stereochemistry but the final step far less so owing to an SN1 pathway.
Redox reactions
Phosphorus trichloride undergoes a variety of redox reactions, the oxidations to PCl5, PSCl3 and POCl3 described above being representative examples.
PCl3 as a nucleophile
Phosphorus trichloride has a lone pair, and therefore can act as a Lewis base, e.g., forming a 1:1 adduct Br3B-PCl3. Metal complexes such as Ni(PCl3)4 are known, again demonstrating the ligand properties of PCl3.
This Lewis basicity is exploited in the Kinnear–Perren reaction to prepare alkylphosphonyl dichlorides (RP(O)Cl2) and alkylphosphonate esters (RP(O)(OR')2). Alkylation of phosphorus trichloride is effected in the presence of aluminium trichloride give the alkyltrichlorophosphonium salts, which are versatile intermediates:
PCl3 + RCl + AlCl3 → [RPCl3]+[AlCl4]−
The [RPCl3]+ product can then be decomposed with water to produce an alkylphosphonic dichloride RP(=O)Cl2.
PCl3 as a ligand
PCl3, like the more popular phosphorus trifluoride, is a ligand in coordination chemistry. One example is Mo(CO)5PCl3.
Uses
PCl3 is important indirectly as a precursor to PCl5, POCl3 and PSCl3, which are used in many applications, including herbicides, insecticides, plasticisers, oil additives, and flame retardants.
For example, oxidation of PCl3 gives POCl3, which is used for the manufacture of triphenyl phosphate and tricresyl phosphate, which find application as flame retardants and plasticisers for PVC. They are also used to make insecticides such as diazinon. Phosphonates include the herbicide glyphosate.
PCl3 is the precursor to triphenylphosphine for the Wittig reaction, and phosphite esters which may be used as industrial intermediates, or used in the Horner-Wadsworth-Emmons reaction, both important methods for making alkenes. It can be used to make trioctylphosphine oxide (TOPO), used as an extraction agent, although TOPO is usually made via the corresponding phosphine.
PCl3 is also used directly as a reagent in organic synthesis. It is used to convert primary and secondary alcohols into alkyl chlorides, or carboxylic acids into acyl chlorides, although thionyl chloride generally gives better yields than PCl3.
Safety
600 ppm is lethal in just a few minutes.
25 ppm is the US NIOSH "Immediately Dangerous to Life and Health" level
0.5 ppm is the US OSHA "permissible exposure limit" over a time-weighted average of 8 hours.
0.2 ppm is the US NIOSH "recommended exposure limit" over a time-weighted average of 8 hours.
Under EU Directive 67/548/EEC, PCl3 is classified as very toxic and corrosive, and the risk phrases R14, R26/28, R35 and R48/20 are obligatory.
Industrial production of phosphorus trichloride is controlled under the Chemical Weapons Convention, where it is listed in schedule 3, as it can be used to produce mustard agents.
See also
Phosphorus pentachloride
Phosphoryl chloride
Phosphorus trifluorodichloride
References
Inorganic phosphorus compounds
Phosphorus chlorides
Phosphorus(III) compounds
Pulmonary agents | Phosphorus trichloride | Chemistry | 1,647 |
59,894,017 | https://en.wikipedia.org/wiki/Malonoben | Malonoben (also known as tyrphostin A9, SF-6847, GCP5126, and AG-17) is an uncoupling agent/protonophore. As of 1974 when it was discovered, it was considered the most powerful agent of this type, with a potency over 1800 times that of 2,4-dinitrophenol - the prototypical uncoupling agent - and about 3 times the effectiveness of 5-chloro-3-tert-butyl-2'-chloro-4'-nitrosalicylanilide.
References
Uncouplers
Ionophores
Nitriles
Tert-butyl compounds
Phenols | Malonoben | Chemistry | 150 |
30,846,294 | https://en.wikipedia.org/wiki/Local%20elevation | Local elevation is a technique used in computational chemistry or physics, mainly in the field of molecular simulation (including molecular dynamics (MD) and Monte Carlo (MC) simulations). It was developed in 1994 by Huber, Torda and van Gunsteren
to enhance the searching of conformational space in molecular dynamics simulations and is available in the GROMOS software for molecular dynamics simulation (since GROMOS96). The method was, together with the conformational flooding method,
the first to introduce memory dependence into molecular simulations. Many recent methods build on the principles of the local elevation technique,
including the Engkvist-Karlström,
adaptive biasing force,
Wang–Landau, metadynamics,
adaptively biased molecular dynamics,
adaptive reaction coordinate forces,
and local elevation umbrella sampling
methods.
The basic principle of the method is to add a memory-dependent potential energy term in the simulation so as to prevent the simulation to revisit already sampled configurations, which leads to the increased probability of discovering new configurations. The method can be seen as a continuous variant of the Tabu search method.
Algorithm
Basic step
The basic step of the algorithm is to add a small, repulsive potential energy function to the current configuration of the molecule such as to penalize this configuration and increase the likelihood of discovering other configurations. This requires the selection of a subset of the degrees of freedom, which define the relevant conformational variables Q. These are typically a set of conformationally relevant dihedral angles, but can in principle be any differentiable function Q(r) of the Cartesian coordinates r.
The algorithm deforms the physical potential energy surface by introducing a bias energy, such that the total potential energy is defined as

V(r, t) = V_phys(r) + V_LE(Q(r), t).

The local elevation bias V_LE depends on the simulation time t, is set to zero at the start of the simulation (V_LE(Q, 0) = 0), and is gradually built as a sum of small, repulsive functions,

V_LE(Q, t) = k_LE Σ_i f(Q − Q(t_i)),

where k_LE is a scaling constant and f is a multidimensional, repulsive function peaked at zero, so that the resulting bias potential is the sum of all the repulsive functions added along the trajectory. To reduce the number of added repulsive functions, a common approach is to add the functions to grid points. The original choice of f is a multidimensional Gaussian function. However, due to the infinite range of the Gaussian as well as the artifacts that can occur with a sum of gridded Gaussians, a better choice is to apply multidimensional truncated polynomial functions.
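As a minimal illustration of this bookkeeping, the following Python sketch (all names and parameter values, such as k_le and sigma, are invented for the example and are not GROMOS settings) accumulates Gaussian penalties on a one-dimensional dihedral grid and evaluates the resulting memory-dependent bias:

```python
import numpy as np

# Minimal 1D local-elevation sketch: each visit deposits a repulsive
# Gaussian at the nearest grid point, discouraging revisits of that
# region of the dihedral angle phi. Names and values are illustrative.

n_bins = 36                                  # grid points covering [-pi, pi)
k_le = 0.1                                   # penalty height (assumed units)
sigma = 2 * np.pi / n_bins                   # penalty width
centers = -np.pi + (np.arange(n_bins) + 0.5) * (2 * np.pi / n_bins)
visits = np.zeros(n_bins)                    # memory: deposits per grid point

def record_visit(phi):
    """Deposit one repulsive function at the grid point nearest phi."""
    i = int((phi + np.pi) // (2 * np.pi / n_bins)) % n_bins
    visits[i] += 1

def bias_energy(phi):
    """Evaluate the accumulated bias: sum of all deposited Gaussians,
    using periodic (wrapped) distances along the dihedral."""
    d = np.angle(np.exp(1j * (phi - centers)))
    return float(k_le * np.sum(visits * np.exp(-0.5 * (d / sigma) ** 2)))

# Toy run: a trajectory stuck near phi = 0 builds up bias there, while
# the opposite side of the circle stays essentially unbiased.
rng = np.random.default_rng(0)
for _ in range(200):
    record_visit(rng.normal(0.0, 0.2))
print(bias_energy(0.0), bias_energy(np.pi))
```

In an actual simulation, the negative gradient of this bias with respect to the dihedral would be added to the physical forces at each step, and truncated polynomials would replace the Gaussian for the reasons given above.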
Applications
The local elevation method can be applied to free energy calculations as well as to conformational searching problems. In free energy calculations the local elevation technique is applied to level out the free energy surface along the selected set of variables. It has been shown by Engkvist and Karlström that the bias potential built by the local elevation method will approximate the negative of the free energy surface. The free energy surface can therefore be approximated directly from the bias potential (as done in the metadynamics method) or the bias potential can be used for umbrella sampling (as done in metadynamics with umbrella sampling corrections and local elevation umbrella sampling methods) to obtain more accurate free energies.
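In the notation of the sketch above (assumed here, not quoted from Engkvist and Karlström), this relation can be written compactly as:

```latex
% Long-time relation between the accumulated bias and the free energy
% along the conformational variables Q; C is an arbitrary constant.
F(Q) \approx -\lim_{t\to\infty} V_{\mathrm{LE}}(Q, t) + C
```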
References
Molecular dynamics
Computational chemistry
Theoretical chemistry | Local elevation | Physics,Chemistry | 643 |
313,845 | https://en.wikipedia.org/wiki/Formal%20concept%20analysis | In information science, formal concept analysis (FCA) is a principled way of deriving a concept hierarchy or formal ontology from a collection of objects and their properties. Each concept in the hierarchy represents the objects sharing some set of properties; and each sub-concept in the hierarchy represents a subset of the objects (as well as a superset of the properties) in the concepts above it. The term was introduced by Rudolf Wille in 1981, and builds on the mathematical theory of lattices and ordered sets that was developed by Garrett Birkhoff and others in the 1930s.
Formal concept analysis finds practical application in fields including data mining, text mining, machine learning, knowledge management, semantic web, software development, chemistry and biology.
Overview and history
The original motivation of formal concept analysis was the search for real-world meaning of mathematical order theory. One such possibility of very general nature is that data tables can be transformed into algebraic structures called complete lattices, and that these can be utilized for data visualization and interpretation. A data table that represents a heterogeneous relation between objects and attributes, tabulating pairs of the form "object g has attribute m", is considered as a basic data type. It is referred to as a formal context. In this theory, a formal concept is defined to be a pair (A, B), where A is a set of objects (called the extent) and B is a set of attributes (the intent) such that
the extent A consists of all objects that share the attributes in B, and dually
the intent B consists of all attributes shared by the objects in A.
In this way, formal concept analysis formalizes the semantic notions of extension and intension.
The formal concepts of any formal context can—as explained below—be ordered in a hierarchy called more formally the context's "concept lattice". The concept lattice can be graphically visualized as a "line diagram", which then may be helpful for understanding the data. Often however these lattices get too large for visualization. Then the mathematical theory of formal concept analysis may be helpful, e.g., for decomposing the lattice into smaller pieces without information loss, or for embedding it into another structure which is easier to interpret.
The theory in its present form goes back to the early 1980s and a research group led by Rudolf Wille, Bernhard Ganter and Peter Burmeister at the Technische Universität Darmstadt. Its basic mathematical definitions, however, were already introduced in the 1930s by Garrett Birkhoff as part of general lattice theory. Other previous approaches to the same idea arose from various French research groups, but the Darmstadt group normalised the field and systematically worked out both its mathematical theory and its philosophical foundations. The latter refer in particular to Charles S. Peirce, but also to the Port-Royal Logic.
Motivation and philosophical background
In his article "Restructuring Lattice Theory" (1982), initiating formal concept analysis as a mathematical discipline, Wille starts from a discontent with the current lattice theory and pure mathematics in general: the production of theoretical results—often achieved by "elaborate mental gymnastics"—was impressive, but the connections between neighboring domains, even parts of a theory, were getting weaker.
This aim traces back to the educationalist Hartmut von Hentig, who in 1972 pleaded for restructuring sciences in view of better teaching and in order to make sciences mutually available and more generally (i.e. also without specialized knowledge) critiqueable. Hence, by its origins formal concept analysis aims at interdisciplinarity and democratic control of research.
It corrects the starting point of lattice theory during the development of formal logic in the 19th century. Then—and later in model theory—a concept as unary predicate had been reduced to its extent. Now again, the philosophy of concepts should become less abstract by considering the intent. Hence, formal concept analysis is oriented towards the categories extension and intension of linguistics and classical conceptual logic.
Formal concept analysis aims at the clarity of concepts according to Charles S. Peirce's pragmatic maxim by unfolding observable, elementary properties of the subsumed objects. In his late philosophy, Peirce assumed that logical thinking aims at perceiving reality, by the triad of concept, judgement and conclusion. Mathematics is an abstraction of logic, develops patterns of possible realities and therefore may support rational communication. On this background, Wille gives his definition of formal concept analysis.
Example
The data in the example is taken from a semantic field study, where different kinds of bodies of water were systematically categorized by their attributes. For the purpose here it has been simplified.
The data table represents a formal context, the line diagram next to it shows its concept lattice. Formal definitions follow below.
The above line diagram consists of circles, connecting line segments, and labels. Circles represent formal concepts. The lines allow one to read off the subconcept-superconcept hierarchy. Each object and attribute name is used as a label exactly once in the diagram, with objects below and attributes above concept circles. This is done in such a way that an attribute can be reached from an object via an ascending path if and only if the object has the attribute.
In the diagram shown, e.g. the object reservoir has the attributes stagnant and constant, but not the attributes temporary, running, natural, maritime. Accordingly, puddle has exactly the characteristics temporary, stagnant and natural.
The original formal context can be reconstructed from the labelled diagram, as well as the formal concepts. The extent of a concept consists of those objects from which an ascending path leads to the circle representing the concept. The intent consists of those attributes to which there is an ascending path from that concept circle (in the diagram). In this diagram the concept immediately to the left of the label reservoir has the intent stagnant and natural and the extent puddle, maar, lake, pond, tarn, pool, lagoon, and sea.
Formal contexts and concepts
A formal context is a triple K = (G, M, I), where G is a set of objects, M is a set of attributes, and I ⊆ G × M is a binary relation called incidence that expresses which objects have which attributes. For subsets A ⊆ G of objects and subsets B ⊆ M of attributes, one defines two derivation operators as follows:
A′ = {m ∈ M | (g, m) ∈ I for all g ∈ A}, i.e., the set of all attributes shared by all objects from A, and dually
B′ = {g ∈ G | (g, m) ∈ I for all m ∈ B}, i.e., the set of all objects sharing all attributes from B.
Applying either derivation operator and then the other constitutes two closure operators:
A ↦ A″ = (A′)′ for A ⊆ G (extent closure), and
B ↦ B″ = (B′)′ for B ⊆ M (intent closure).
The derivation operators define a Galois connection between sets of objects and of attributes. This is why in French a concept lattice is sometimes called a treillis de Galois (Galois lattice).
With these derivation operators, Wille gave an elegant definition of a formal concept:
a pair (A, B) is a formal concept of a context provided that:
A ⊆ G, B ⊆ M, A′ = B, and B′ = A.
Equivalently and more intuitively, (A,B) is a formal concept precisely when:
every object in A has every attribute in B,
for every object in G that is not in A, there is some attribute in B that the object does not have,
for every attribute in M that is not in B, there is some object in A that does not have that attribute.
For computing purposes, a formal context may be naturally represented as a (0,1)-matrix K in which the rows correspond to the objects, the columns correspond to the attributes, and each entry k_i,j equals 1 if "object i has attribute j". In this matrix representation, each formal concept corresponds to a maximal submatrix (not necessarily contiguous) all of whose elements equal 1. It is however misleading to consider a formal context as boolean, because the negated incidence ("object g does not have attribute m") is not concept-forming in the same way as defined above. For this reason, the values 1 and 0 or TRUE and FALSE are usually avoided when representing formal contexts, and a symbol like × is used to express incidence.
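To make the definitions concrete, here is a small Python sketch (the toy context, loosely echoing the bodies-of-water example, and all function names are invented for illustration; this is not the API of any FCA tool) of the two derivation operators and a naive enumeration of all formal concepts:

```python
from itertools import combinations

# Toy formal context: objects, attributes, and incidence as a set of pairs.
G = ["puddle", "pond", "river"]
M = ["stagnant", "natural", "running"]
I = {("puddle", "stagnant"), ("puddle", "natural"),
     ("pond", "stagnant"), ("pond", "natural"),
     ("river", "running"), ("river", "natural")}

def prime_objects(A):
    """Derivation A': all attributes shared by every object in A."""
    return frozenset(m for m in M if all((g, m) in I for g in A))

def prime_attributes(B):
    """Derivation B': all objects having every attribute in B."""
    return frozenset(g for g in G if all((g, m) in I for m in B))

def concepts():
    """Naively enumerate all formal concepts (extent, intent) by closing
    every attribute subset; exponential in |M|, fine for toy contexts."""
    found = set()
    for r in range(len(M) + 1):
        for B in combinations(M, r):
            extent = prime_attributes(frozenset(B))   # B'
            intent = prime_objects(extent)            # B'' (intent closure)
            found.add((extent, intent))
    return found

for extent, intent in sorted(concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), "|", sorted(intent))
```

Closing every attribute subset is exponential in the number of attributes; practical tools rely on dedicated algorithms such as Ganter's NextClosure, as discussed in the Algorithms and tools section below.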
Concept lattice of a formal context
The concepts (Ai, Bi) of a context K can be (partially) ordered by the inclusion of extents, or, equivalently, by the dual inclusion of intents. An order ≤ on the concepts is defined as follows: for any two concepts (A1, B1) and (A2, B2) of K, we say that (A1, B1) ≤ (A2, B2) precisely when A1 ⊆ A2. Equivalently, (A1, B1) ≤ (A2, B2) whenever B1 ⊇ B2.
In this order, every set of formal concepts has a greatest common subconcept, or meet. Its extent consists of those objects that are common to all extents of the set. Dually, every set of formal concepts has a least common superconcept, the intent of which comprises all attributes which all objects of that set of concepts have.
These meet and join operations satisfy the axioms defining a lattice, in fact a complete lattice. Conversely, it can be shown that every complete lattice is the concept lattice of some formal context (up to isomorphism).
Attribute values and negation
Real-world data is often given in the form of an object-attribute table, where the attributes have "values". Formal concept analysis handles such data by transforming them into the basic type of a ("one-valued") formal context. The method is called conceptual scaling.
The negation of an attribute m is an attribute ¬m, the extent of which is just the complement of the extent of m, i.e., with (¬m)′ = G \ m′. It is in general not assumed that negated attributes are available for concept formation. But pairs of attributes which are negations of each other often naturally occur, for example in contexts derived from conceptual scaling.
For possible negations of formal concepts see the section concept algebras below.
Implications
An implication A → B relates two sets A and B of attributes and expresses that every object possessing each attribute from A also has each attribute from B. When (G, M, I) is a formal context and A, B are subsets of the set M of attributes (i.e., A, B ⊆ M), then the implication A → B is valid if A′ ⊆ B′. For each finite formal context, the set of all valid implications has a canonical basis, an irredundant set of implications from which all valid implications can be derived by the natural inference (Armstrong rules). This is used in attribute exploration, a knowledge acquisition method based on implications.
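Continuing the sketch above (same hypothetical toy context and helper functions), validity of an implication reduces to a single subset test on derived object sets:

```python
def implication_valid(A, B):
    """A -> B holds iff every object having all attributes in A
    also has all attributes in B, i.e. A' is a subset of B'."""
    return prime_attributes(frozenset(A)) <= prime_attributes(frozenset(B))

print(implication_valid({"stagnant"}, {"natural"}))  # True in the toy context
print(implication_valid({"natural"}, {"stagnant"}))  # False: river is running
```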
Arrow relations
Formal concept analysis has elaborate mathematical foundations, making the field versatile. As a basic example we mention the arrow relations, which are simple and easy to compute, but very useful. They are defined for non-incident pairs (g, m) ∉ I: the relation g ↗ m holds when g′ is maximal (with respect to set inclusion) among all object intents h′ with (h, m) ∉ I,
and dually g ↙ m holds when m′ is maximal among all attribute extents n′ with (g, n) ∉ I.
Since only non-incident object-attribute pairs can be related, these relations can conveniently be recorded in the table representing a formal context. Many lattice properties can be read off from the arrow relations, including distributivity and several of its generalizations. They also reveal structural information and can be used for determining, e.g., the congruence relations of the lattice.
Extensions of the theory
Triadic concept analysis replaces the binary incidence relation between objects and attributes by a ternary relation between objects, attributes, and conditions. An incidence then expresses that an object has an attribute under a certain condition. Although triadic concepts can be defined in analogy to the formal concepts above, the theory of the trilattices formed by them is much less developed than that of concept lattices, and seems to be difficult. Voutsadakis has studied the n-ary case.
Fuzzy concept analysis: Extensive work has been done on a fuzzy version of formal concept analysis.
Concept algebras: Modelling negation of formal concepts is somewhat problematic because the complement of a formal concept (A, B) is in general not a concept. However, since the concept lattice is complete one can consider the join (A, B)Δ of all concepts (C, D) that satisfy C ∩ A = ∅; or dually the meet (A, B)𝛁 of all concepts satisfying D ∩ B = ∅. These two operations are known as weak negation and weak opposition, respectively. This can be expressed in terms of the derivation operators. Weak negation can be written as (A, B)Δ = ((G \ A)″, (G \ A)′), and weak opposition can be written as (A, B)𝛁 = ((M \ B)′, (M \ B)″). The concept lattice equipped with the two additional operations Δ and 𝛁 is known as the concept algebra of a context. Concept algebras generalize power sets. Weak negation on a concept lattice L is a weak complementation, i.e. an order-reversing map which satisfies the axioms xΔΔ ≤ x and (x ∧ y) ∨ (x ∧ yΔ) = x. Weak opposition is a dual weak complementation. A (bounded) lattice such as a concept algebra, which is equipped with a weak complementation and a dual weak complementation, is called a weakly dicomplemented lattice. Weakly dicomplemented lattices generalize distributive orthocomplemented lattices, i.e. Boolean algebras.
Temporal concept analysis
Temporal concept analysis (TCA) is an extension of Formal Concept Analysis (FCA) aiming at a conceptual description of temporal phenomena. It provides animations in concept lattices obtained from data about changing objects. It offers a general way of understanding change of concrete or abstract objects in continuous, discrete or hybrid space and time. TCA applies conceptual scaling to temporal data bases.
In the simplest case TCA considers objects that change in time like a particle in physics, which, at each time, is at exactly one place. That happens in those temporal data where the attributes 'temporal object' and 'time' together form a key of the data base. Then the state (of a temporal object at a time in a view) is formalized as a certain object concept of the formal context describing the chosen view. In this simple case, a typical visualization of a temporal system is a line diagram of the concept lattice of the view into which trajectories of temporal objects are embedded.
TCA generalizes the above mentioned case by considering temporal data bases with an arbitrary key. That leads to the notion of distributed objects which are at any given time at possibly many places, as for example, a high pressure zone on a weather map. The notions of 'temporal objects', 'time' and 'place' are represented as formal concepts in scales. A state is formalized as a set of object concepts.
That leads to a conceptual interpretation of the ideas of particles and waves in physics.
Algorithms and tools
There are a number of simple and fast algorithms for generating formal concepts and for constructing and navigating concept lattices. For a survey, see Kuznetsov and Obiedkov or the book by Ganter and Obiedkov, where also some pseudo-code can be found. Since the number of formal concepts may be exponential in the size of the formal context, the complexity of the algorithms usually is given with respect to the output size. Concept lattices with a few million elements can be handled without problems.
Many FCA software applications are available today. The main purpose of these tools varies from formal context creation to formal concept mining and generating the concepts lattice of a given formal context and the corresponding implications and association rules. Most of these tools are academic open-source applications, such as:
ConExp
ToscanaJ
Lattice Miner
Coron
FcaBedrock
GALACTIC
Related analytical techniques
Bicliques
A formal context can naturally be interpreted as a bipartite graph. The formal concepts then correspond to the maximal bicliques in that graph. The mathematical and algorithmic results of formal concept analysis may thus be used for the theory of maximal bicliques. The notion of bipartite dimension (of the complemented bipartite graph) translates to that of Ferrers dimension (of the formal context) and of order dimension (of the concept lattice) and has applications e.g. for Boolean matrix factorization.
Biclustering and multidimensional clustering
Given an object-attribute numerical data-table, the goal of biclustering is to group together some objects having similar values of some attributes. For example, in gene expression data, it is known that genes (objects) may share a common behavior for a subset of biological situations (attributes) only: one should accordingly produce local patterns to characterize biological processes, the latter should possibly overlap, since a gene may be involved in several processes. The same remark applies for recommender systems where one is interested in local patterns characterizing groups of users that strongly share almost the same tastes for a subset of items.
A bicluster in a binary object-attribute data-table is a pair (A,B) consisting of an inclusion-maximal set of objects A and an inclusion-maximal set of attributes B such that almost all objects from A have almost all attributes from B and vice versa.
Of course, formal concepts can be considered as "rigid" biclusters where all objects have all attributes and vice versa. Hence, it is not surprising that some bicluster definitions coming from practice are just definitions of a formal concept. Relaxed FCA-based versions of biclustering and triclustering include OA-biclustering and OAC-triclustering (here O stands for object, A for attribute, C for condition); to generate patterns, these methods apply the prime operators only once, to a single entity (e.g. an object) or to a pair of entities (e.g. an attribute-condition pair), respectively.
A bicluster of similar values in a numerical object-attribute data-table is usually defined as a pair consisting of an inclusion-maximal set of objects and an inclusion-maximal set of attributes having similar values for the objects. Such a pair can be represented as an inclusion-maximal rectangle in the numerical table, modulo rows and columns permutations. It has been shown that biclusters of similar values correspond to triconcepts of a triadic context where the third dimension is given by a scale that represents numerical attribute values by binary attributes.
This fact can be generalized to n-dimensional case, where n-dimensional clusters of similar values in n-dimensional data are represented by n+1-dimensional concepts. This reduction allows one to use standard definitions and algorithms from multidimensional concept analysis for computing multidimensional clusters.
Knowledge spaces
In the theory of knowledge spaces it is assumed that in any knowledge space the family of knowledge states is union-closed. The complements of knowledge states therefore form a closure system and may be represented as the extents of some formal context.
Hands-on experience with formal concept analysis
Formal concept analysis can be used as a qualitative method for data analysis. Since the early beginnings of FCA in the early 1980s, the FCA research group at TU Darmstadt has gained experience from more than 200 projects using FCA (as of 2005), in fields including medicine and cell biology, genetics, ecology, software engineering, ontology, information and library sciences, office administration, law, linguistics, and political science.
Many more examples are e.g. described in: Formal Concept Analysis. Foundations and Applications, conference papers at regular conferences such as: International Conference on Formal Concept Analysis (ICFCA), Concept Lattices and their Applications (CLA), or International Conference on Conceptual Structures (ICCS).
See also
Association rule learning
Cluster analysis
Commonsense reasoning
Conceptual analysis
Conceptual clustering
Conceptual space
Concept learning
Correspondence analysis
Description logic
Factor analysis
Formal semantics (natural language)
General Concept Lattice
Graphical model
Grounded theory
Inductive logic programming
Pattern theory
Statistical relational learning
Schema (genetic algorithms)
Notes
References
External links
A Formal Concept Analysis Homepage
Demo
Formal Concept Analysis. ICFCA International Conference Proceedings
2007 5th
2008 6th
2009 7th
2010 8th
2011 9th
2012 10th
2013 11th
2014 12th
2015 13th
2017 14th
2019 15th
2021 16th
Machine learning
Lattice theory
Data mining
Formal semantics (natural language)
Ontology (information science)
Semantic relations | Formal concept analysis | Mathematics,Engineering | 4,130 |
2,976,760 | https://en.wikipedia.org/wiki/Baluster | A baluster () is an upright support, often a vertical moulded shaft, square, or lathe-turned form found in stairways, parapets, and other architectural features. In furniture construction it is known as a spindle. Common materials used in its construction are wood, stone, and less frequently metal and ceramic. A group of balusters supporting a handrail, coping, or ornamental detail is known as a balustrade.
The term baluster shaft is used to describe forms such as a candlestick, upright furniture support, and the stem of a brass chandelier.
The term banister (also bannister) refers to a baluster or to the system of balusters and handrail of a stairway. It may be used to include its supporting structures, such as a supporting newel post.
In the UK, there are different height requirements for domestic and commercial balustrades, as outlined in Approved Document K.
Etymology
According to the Oxford English Dictionary, "baluster" is derived through the French balustre, from the Italian balaustro, from balaustra, "pomegranate flower" [from a resemblance to the swelling form of the half-open flower], from Latin balaustrium, from Greek βαλαύστριον (balaustrion).
History
The earliest examples of balusters are those shown in the bas-reliefs representing the Assyrian palaces, where they were employed as functional window balustrades and apparently had Ionic capitals. As an architectural element alone the balustrade did not seem to have been known to either the Greeks or the Romans, but baluster forms are familiar in the legs of chairs and tables represented in Roman bas-reliefs, where the original legs or the models for cast bronze ones were shaped on the lathe, or in Antique marble candelabra, formed as a series of stacked bulbous and disc-shaped elements, both kinds of sources familiar to Quattrocento designers.
The application to architecture was a feature of the early Renaissance architecture: late fifteenth-century examples are found in the balconies of palaces at Venice and Verona. These quattrocento balustrades are likely to be following yet-unidentified Gothic precedents. They form balustrades of colonettes as an alternative to miniature arcading.
Rudolf Wittkower withheld judgement as to the inventor of the baluster (H. Siebenhüner, in tracing the baluster's career, found its origin in the profile of the round base of Donatello's Judith and Holofernes, c 1460; Siebenhüner, "Docke", in Reallexikon zur Deutsche Kunstgeschichte vol. 4, 1988:102-107) and credited Giuliano da Sangallo with using it consistently as early as the balustrade on the terrace and stairs at the Medici villa at Poggio a Caiano (c 1480), and with using balustrades in his reconstructions of antique structures. Sangallo passed the motif to Bramante (his Tempietto, 1502) and Michelangelo, through whom balustrades gained wide currency in the 16th century.
Wittkower distinguished two types, one symmetrical in profile that inverted one bulbous vase-shape over another, separating them with a cushionlike torus or a concave ring, and the other a simple vase shape, whose employment by Michelangelo at the Campidoglio steps (c 1546), noted by Wittkower, was preceded by very early vasiform balusters in a balustrade round the drum of Santa Maria delle Grazie (c 1482), by railings in the cathedrals of Aquileia (c 1495) and Parma and in the cortile of San Damaso, Vatican, and by Antonio da Sangallo's crowning balustrade on the Santa Casa at Loreto installed in 1535, and liberally in his model for the Basilica of Saint Peter. Because of its low center of gravity, this "vase-baluster" may be given the modern term "dropped baluster".
Materials used
Balusters may be made of carved stone, cast stone, plaster, polymer, polyurethane/polystyrene, polyvinyl chloride (PVC), precast concrete, wood, or wrought iron. Cast-stone balusters were a development of the 18th century in Great Britain (see Coade stone), and cast iron balusters a development largely of the 1840s. As balusters and balustrades have evolved, they can now be made from various materials with a few popular choices being timber, glass and stainless steel.
Profiles and style changes
The baluster, being a turned structure, tends to follow design precedents that were set in woodworking and ceramic practices, where the turner's lathe and the potter's wheel are ancient tools. The profile a baluster takes is often diagnostic of a particular style of architecture or furniture, and may offer a rough guide to date of a design, though not of a particular example.
Some complicated Mannerist baluster forms can be read as a vase set upon another vase. The high shoulders and bold, rhythmic shapes of the Baroque vase and baluster forms are distinctly different from the sober baluster forms of Neoclassicism, which look to other precedents, like Greek amphoras. The distinctive twist-turned designs of balusters in oak and walnut English and Dutch seventeenth-century furniture, which took as their prototype the Solomonic column that was given prominence by Bernini, fell out of style after the 1710s.
Once it had been taken from the lathe, a turned wood baluster could be split and applied to an architectural surface, or to one in which architectonic themes were more freely treated, as on cabinets made in Italy, Spain and Northern Europe from the sixteenth through the seventeenth centuries. Modern baluster design is also in use, for example in designs influenced by the Arts and Crafts movement in a 1905 row of houses in Etchingham Park Road, Finchley, London, England.
Outside Europe, the baluster column appeared as a new motif in Mughal architecture, introduced in Shah Jahan's interventions in two of the three great fortress-palaces, the Red Fort of Agra and Delhi, in the early seventeenth century. Foliate baluster columns with naturalistic foliate capitals, unexampled in previous Indo-Islamic architecture according to Ebba Koch, rapidly became one of the most widely used forms of supporting shaft in Northern and Central India in the eighteenth and nineteenth centuries.
The modern term baluster shaft is applied to the shaft dividing a window in Saxon architecture. In the south transept of the Abbey in St Albans, England, are some of these shafts, supposed to have been taken from the old Saxon church. Norman bases and capitals have been added, together with plain cylindrical Norman shafts.
Balusters are normally separated by at least the same measurement as the size of the square bottom section. Placing balusters too far apart diminishes their aesthetic appeal, and the structural integrity of the balustrade they form. Balustrades normally terminate in heavy newel posts, columns, and building walls for structural support.
Balusters may be formed in several ways. Wood and stone can be shaped on the lathe, wood can be cut from square or rectangular section boards, while concrete, plaster, iron, and plastics are usually formed by molding and casting. Turned patterns or old examples are used for the molds.
Gallery
See also
Bollard
Guard rail
Handrail
Citations
General and cited references
External links
Architectural elements
Garden features
Stairways
Pedestrian infrastructure
Architectural history
Ironmongery | Baluster | Technology,Engineering | 1,629 |
16,765,925 | https://en.wikipedia.org/wiki/WASP-10 | WASP-10 is a star in the constellation Pegasus. The SuperWASP project has observed and classified this star as a variable star, perhaps due to the eclipsing planet.
The star is likely older than the Sun, has a heavy-element fraction close to the solar abundance, and is rotating rapidly, being spun up by the tides raised by the giant planet on its close orbit.
Planetary system
WASP-10b
WASP-10b is an extrasolar planet discovered in 2008.
WASP-10c
WASP-10c is an extrasolar planet, still unconfirmed as of 2020, inferred from transit timing variations of WASP-10b's transits. It was discovered in 2010.
A high likelihood of another super-Jupiter planet in a wide orbit (at least 5 astronomical units) was reported in 2013.
See also
SuperWASP
WASP-10b
List of extrasolar planets
References
External links
Pegasus (constellation)
K-type main-sequence stars
Planetary transit variables
Planetary systems with one confirmed planet
J23155829+3127462
10 | WASP-10 | Astronomy | 209 |
2,665,882 | https://en.wikipedia.org/wiki/Opioid%20peptide | Opioid peptides or opiate peptides are peptides that bind to opioid receptors in the brain; opiates and opioids mimic the effect of these peptides. Such peptides may be produced by the body itself, for example endorphins. The effects of these peptides vary, but they all resemble those of opiates. Brain opioid peptide systems are known to play an important role in motivation, emotion, attachment behaviour, the response to stress and pain, control of food intake, and the rewarding effects of alcohol and nicotine.
Opioid-like peptides may also be absorbed from partially digested food (casomorphins, exorphins, and rubiscolins). Opioid peptides from food typically have lengths between 4–8 amino acids. Endogenous opioids are generally much longer.
Opioid peptides are released by post-translational proteolytic cleavage of precursor proteins. The precursors consist of the following components: a signal sequence that precedes a conserved region of about 50 residues; a variable-length region; and the sequence of the neuropeptides themselves. Sequence analysis reveals that the conserved N-terminal region of the precursors contains 6 cysteines, which are probably involved in disulfide bond formation. It is speculated that this region might be important for neuropeptide processing.
Endogenous
The human genome contains several homologous genes that are known to code for endogenous opioid peptides.
The nucleotide sequence of the human gene for proopiomelanocortin (POMC) was characterized in 1980. The POMC gene codes for endogenous opioids such as β-endorphin and γ-endorphin.
The human gene for the enkephalins was isolated and its sequence described in 1982.
The human gene for dynorphins (originally called the "Enkephalin B" gene because of sequence similarity to the enkephalin gene) was isolated and its sequence described in 1983.
The PNOC gene encoding prepronociceptin, which is cleaved into nociceptin and potentially two additional neuropeptides.
Adrenorphin, amidorphin, and leumorphin were discovered in the 1980s.
The endomorphins were discovered in the 1990s.
Opiorphin and spinorphin, enkephalinase inhibitors (i.e., prevent the metabolism of enkephalins).
Hemorphins, hemoglobin-derived opioid peptides, including hemorphin-4, valorphin, and spinorphin, among others.
While not peptides, codeine and morphine are also produced in the human body.
Exogenous
Exogenous opioid substances are called exorphins, as opposed to endorphins. Exorphins include opioid food peptides such as gluten exorphins and casomorphins, and are often contained in cereals and animal milk. Exorphins mimic the actions of endorphins by binding to and activating opioid receptors in the brain.
Common exorphins include:
Casomorphin (from casein found in milk of mammals, including cows)
Gluten exorphin (from gluten found in cereals wheat, rye, barley)
Gliadorphin/gluteomorphin (from gluten found in cereals wheat, rye, barley)
Soymorphin-5 (from soybean)
Rubiscolin (from spinach)
Amphibian
Deltorphin
Deltorphin I
Deltorphin II
Dermorphin
Synthetic
Zyklophin – semisynthetic KOR antagonist derived from dynorphin A
References
External links
Protein families | Opioid peptide | Biology | 825 |
36,888,768 | https://en.wikipedia.org/wiki/MT%20Vulcanus | MT Vulcanus, also known as Vulcanus I, Oragreen, Kotrando, and Erich Schröder, is a cargo ship first placed in service in 1956 that was used from 1972 to 1990 as an incinerator ship and later as a tanker.
History
Launch and use as a freighter
In 1955 the Richard Schröder shipping company of Hamburg ordered construction of a dry cargo freighter from Norderwerft Köser & Meyer, also of Hamburg. Built as hull number 818, the ship was launched on 10 November 1955 as the Erich Schröder. After sea trials that began on 29 December 1955, the ship was delivered to the owners on 17 January 1956. It was built as a triple-superstructure ship with the machine room aft and the bridge amidships. It was equipped with three cargo hatches and loading equipment consisting of one 25-metric ton derrick and ten 5-metric ton derricks. In August 1962, the ship was transferred to the Richard Schröder K.G. shipping company, and in February 1972 sold to Ocean Combustion Service N. V. in Rotterdam.
Use as an incinerator ship
Beginning on 11 February 1972, the new owner had the ship converted into a waste incinerator ship at the K. A. van Brink shipyard in Rotterdam. Tanks for transportation of the waste were added, plus two incinerators located aft, in which the waste would be combusted at temperatures between . On 15 September 1972, the shipyard delivered the completed ship to Vulcanus Shipping Pte. Ltd. in Singapore, which placed it in service as the Vulcanus. Management of the ship remained with Ocean Combustion Service; it was operated by Hansa Steamship Company of Bremen (Ocean Combustion Service and Vulcanus Shipping both being subsidiaries of Hansa). It was capable of incinerating 400–500 metric tons of waste a day, or approximately 100,000 metric tons a year. The ship primarily operated in the North Sea out of Rotterdam; in 1980 it and other incinerator ships were burning an estimated 80,000 metric tons of wastes including TCDD in the North Sea; but was also used on other routes. For example, in 1974, Shell Oil contracted to have liquid chlorinated hydrocarbon wastes from its Shell Chemical subsidiary incinerated in the Gulf of Mexico, and in 1977 in the South Pacific, Vulcanus disposed of more than 8 million liters of Agent Orange left over from the Vietnam War, in the U.S. Air Force Operation Pacer HO.
Following Hansa's declaration of bankruptcy on 18 August 1980, the Vulcanus continued to operate until 1983, when it was overhauled at the Jurong shipyard in Singapore and fitted with a totally new forecastle that equipped it to transport chemical waste. On 4 May 1983 the old forecastle was scrapped at Lien Ho Hsing Steel Enterprise Company in Kaohsiung, Taiwan. The rebuilt ship was again placed in service as an incinerator vessel, now under the name Vulcanus I. At the beginning of 1988, Waste Management, Inc., which had bought Vulcanus I and was then operating it and another incinerator ship named Vulcanus II, withdrew a longstanding application to provide offshore incineration of toxic wastes to the US market. Growing protests by environmental groups led to a decision by the Third International Conference on the Protection of the North Sea in 1990 to ban waste incineration in the North Sea from 31 December 1991. The decision was ratified on 23 June 1990 by the OSPAR Commission. Vulcanus I was then sold that year to the Danish shipping company M.H. Simonsen A.p.S. in Svendborg.
Later career
In 1990, Simonsen registered the ship with Simonsen Tankers Ltd. of Nassau, Bahamas, as the Oragreen, and had it converted into a bunkering tanker. It remained with Simonsen until 3 May 2004, when it was transferred, in Dakar, Senegal, to the Nigerian shipping company Kotram Nigeria Ltd. of Apapa. In 2009, the ship remained registered with Kotram as the Kotrando. She was lost in 2012.
References
Further reading
"Müllverbrennungsschiff 'Vulcanus'". Hansa 111.7. April 1974. pp. 519–20.
Cargo ships of the United Kingdom
1955 ships
Incinerators | MT Vulcanus | Chemistry | 893 |
14,639,061 | https://en.wikipedia.org/wiki/Intellifont | Intellifont is a scalable font technology developed by Tom Hawkins at Compugraphic in Wilmington, Massachusetts during the late 1980s, the patent for which was granted to Hawkins in 1987. Intellifont fonts were hinted on a Digital Equipment Corporation VAX mainframe computer using Ikarus software. In 1990, printer and computing system manufacturer Hewlett-Packard adopted Intellifont scaling as part of its PCL 5 printer control protocol, and Intellifont technology was shipped with HP LaserJet III and 4 printers. In 1991, Commodore released AmigaOS 2.04, which included a version of diskfont.library that contained the Bullet font scaling engine (which in Workbench 2.1 became a separate library called bullet.library), with native support for the format. Intellifont technology became part of Agfa-Gevaert's Universal Font Scaling Technology (UFST), which allows OEMs to produce printers capable of printing on either the Adobe systems PostScript or HP PCL language.
See also
PCL
References
Further reading
(NB. FAIS = Font Access Interchange Standard.)
External links
Font archive
Font engine's API
Method for construction of a scalable font database
Typesetting
Digital typography
Font formats
Wilmington, Massachusetts
AmigaOS | Intellifont | Technology | 266 |
51,997,860 | https://en.wikipedia.org/wiki/NGC%20276 | NGC 276 is a barred spiral galaxy located approximately 626 million light-years from the Solar System in the constellation Cetus. It was discovered in 1886 by Frank Muller and was later also observed by DeLisle Stewart.
John Dreyer, the creator of the New General Catalogue, describes the object as "extremely faint, pretty small, extended 265°, 11 magnitude star 3 arcmin to north". The galaxy's right ascension was later corrected in the Index Catalogue using Stewart's observation data.
See also
List of NGC objects (1–1000)
Cetus (constellation)
References
External links
SEDS
0276
03054
Barred spiral galaxies
Cetus
Discoveries by Frank Muller (astronomer) | NGC 276 | Astronomy | 142 |
46,802,192 | https://en.wikipedia.org/wiki/Water%20Science%20and%20Technology | Water Science and Technology is a monthly peer-reviewed scientific journal covering all aspects of the management of water quality. It was established in 1969 and is published by IWA Publishing. The editor-in-chief is Wolfgang Rauch (University of Innsbruck).
Abstracting and indexing
The journal is abstracted and indexed in the Science Citation Index Expanded, Current Contents/Agriculture, Biology & Environmental Sciences, Current Contents/Engineering, Computing & Technology, BIOSIS Previews, Elsevier Biobase, and Scopus.
References
External links
English-language journals
Monthly journals
Academic journals published by learned and professional societies
Academic journals established in 1969
Hydrology journals
Creative Commons Attribution-licensed journals | Water Science and Technology | Environmental_science | 138 |
31,405,403 | https://en.wikipedia.org/wiki/NGC%204444 | NGC 4444 is an intermediate spiral galaxy in the constellation Centaurus. The morphological classification places it midway on the continuum between a barred spiral (SB) and an unbarred spiral (SA), with an inner region that lies between a ring-like (r) and a purely spiral form (s), and medium- (b) to loosely wound (c) outer spiral arms. This makes it a hybrid ringed, barred spiral galaxy. It has an angular size of and the estimated mass M is given log M = 9.76, yielding solar masses.
References
External links
Intermediate spiral galaxies
4444
Astronomical objects discovered in 1836
Centaurus
041043 | NGC 4444 | Astronomy | 135 |
59,095,069 | https://en.wikipedia.org/wiki/Killer%20cell%20immunoglobulin-like%20receptor%202DL3 | KIR2DL3, Killer cell immunoglobulin-like receptor 2DL3 is a transmembrane glycoprotein expressed by the natural killer cells and the subsets of the T cells. The KIR genes are polymorphic, which means that they have many different alleles. The KIR genes are also extremely homologous, which means that they are similar in position, structure and evolutionary origin, but not necessarily in function.
Natural killer (NK) cells are an important component of the innate antiviral immune response. They have the ability to lyse target cells without prior antigen sensitization and regulate adaptive immune responses by secreting chemokines and cytokines. Activation of NK cells is determined by the integration of inhibitory and activating signals issued by several families of different receptors, including the killer cell immunoglobulin-like receptors (KIR), which predominantly recognize class I human leukocyte antigen (HLA) molecules.
Structure and location
The genes encoding the KIR proteins are found on chromosome 19, in section 19q13.4, within the 1 Mb leukocyte receptor complex (LRC). The subsets of the KIR proteins are classified by their number of extracellular Ig domains and by whether they have a long (L) or short (S) cytoplasmic domain-tail. The number at the end of a protein's name identifies the branch of the subset it belongs to.
Function
The KIR2DL3 protein has a long-tailed cytoplasmic domain and transduces an inhibitory signal upon ligand binding via an immunoreceptor tyrosine-based inhibitory motif (ITIM). The ligands of the protein are a subset of HLA-C molecules: HLA-Cw1, HLA-Cw3 and HLA-Cw7. The protein is thought to play an important role in regulating immune responses. The HLA-C molecules are human leukocyte antigens, gene complexes that encode major histocompatibility complex (MHC) proteins in humans. HLA are polymorphic, thus the MHCs of humans differ from one individual to another. KIR2DL3 is a protein complex of two extracellular domains and a long endo-cellular cytoplasmic tail, which makes it responsible for sending inhibitory signals throughout the cell.
Pathology
The protein KIR2DL3 transduces inhibitory signals upon ligand binding via an immunoreceptor tyrosine-based inhibitory motif (ITIM) on its long inner cytoplasmic tail. Tyrosine kinase based transduction is the enzymatic transfer of a phosphate group from an ATP molecule to a protein in the cell, functioning as an 'on' and 'off' switch in many cellular functions. Tyrosine kinases are a sub-class of the protein kinases. Phosphorylation of proteins is a necessary step in the transduction of signals within a cell in order to regulate cellular activity. Protein kinases might get stuck in the 'off' position and inhibit cell reproduction for good, or, on the contrary, might get stuck in the 'on' position, allowing the cell to reproduce without regulation, which is a necessary step in the development of cancer.
References
Transmembrane receptors | Killer cell immunoglobulin-like receptor 2DL3 | Chemistry | 703 |
39,697,193 | https://en.wikipedia.org/wiki/Damasonium%20minus | Damasonium minus is a species of flowering plant in the water-plantain family known by the common names starfruit and star-fruit (not to be confused with the cultivated starfruit). It is native to Australia, where it occurs everywhere except the Northern Territory. It is perhaps best known as an agricultural weed. It is a major weed of Australian rice crops.
This species is an emergent aquatic plant. It is an annual or short-lived perennial herb growing up to a meter tall. The floating or emergent leaves have blades up to 10 centimeters long by 4 wide and lance-shaped to heart-shaped. They are borne on petioles up to 30 centimeters long. The branching inflorescence has whorls of flowers. Each flower has tiny green sepals and white or pink petals a few millimeters long. The star-shaped aggregate fruit is made up of follicles containing seeds.
This plant grows in habitat with slow-moving and still water, such as swamps.
In agriculture, this plant has been called "the most important broadleaf weed in the Australian rice crop." Most rice is grown in Victoria and New South Wales. This weed has been controlled with the herbicide bensulfuron-methyl, but it has become less effective as herbicide-resistant strains have evolved. A pathogenic fungus, Rhynchosporium alismatis, was discovered on the plant, and it has become an option for biological control as a mycoherbicide. The fungus causes chlorosis and necrosis of the leaves on the mature plant and stunting of immature individuals. If immature weeds in a paddy are stunted, the rice plants may have a competitive advantage. The fungus can kill seedlings, and if it infects the inflorescence of the weed it can reduce seed weight and viability. The fungus can also help control another rice weed, Alisma lanceolatum.
References
Alismataceae
Agricultural pests
Freshwater plants | Damasonium minus | Biology | 404 |
51,600,966 | https://en.wikipedia.org/wiki/Dynamical%20energy%20analysis | Dynamical energy analysis (DEA) is a method for numerically modelling structure borne sound and vibration in complex structures. It is applicable in the mid-to-high frequency range and is in this regime computational more efficient than traditional deterministic approaches (such as finite element and boundary element methods). In comparison to conventional statistical approaches such as statistical energy analysis (SEA), DEA provides more structural details and is less problematic with respect to subsystem division. The DEA method predicts the flow of vibrational wave energy across complex structures in terms of (linear) transport equations. These equations are then discretized and solved on meshes.
Key point summary of DEA
High frequency method in numerical acoustics.
The flow of energy is tracked across a mesh. Can be thought of as ray tracing using density of rays instead of individual rays.
Can use existing FEM meshes. No remodelling necessary.
Computational time is independent of frequency.
The necessary mesh resolution does not depend on frequency and can be chosen coarser than in FEM; it only needs to resolve the geometry.
Fine structural details can be resolved, in contrast to SEA which gives only one number per subsystem.
Greater flexibility for the models usable by DEA. No implicit assumptions (equilibrium in weakly coupled subsystems) as in SEA.
Introduction
Simulations of the vibro-acoustic properties of complex structures (such as cars, ships and airplanes) are routinely carried out in various design stages. For low frequencies, the established method of choice is the finite element method (FEM). But high-frequency analysis using FEM requires very fine meshes of the body structure to capture the shorter wavelengths and is therefore computationally extremely costly. Furthermore, the structural response at high frequencies is very sensitive to small variations in material properties, geometry and boundary conditions. This makes the output of a single FEM calculation less reliable and makes ensemble averages necessary, further increasing the computational cost. Therefore, at high frequencies other numerical methods with better computational efficiency are preferable.
The statistical energy analysis (SEA) has been developed to deal with high frequency problems and leads to relatively small and simple models. However, SEA is based on a set of often hard to verify assumptions, which effectively require diffuse wave fields and quasi-equilibrium of wave energy within weakly coupled (and weakly damped) sub-systems.
One alternative to SEA is to instead consider the original vibrational wave problem in the high frequency limit, leading to a ray tracing model of the structural vibrations. The tracking of individual rays across multiple reflections is not computationally feasible because of the proliferation of trajectories. Instead, a better approach is tracking densities of rays propagated by a transfer operator. This forms the basis of the dynamical energy analysis (DEA) method introduced in the references. DEA can be seen as an improvement over SEA in which one lifts the diffuse-field and well-separated-subsystem assumptions. One uses an energy density which depends both on position and momentum. DEA can work with relatively fine meshes where energy can flow freely between neighboring mesh cells. This allows far greater flexibility for the models used by DEA in comparison to the restrictions imposed by SEA. No remodeling is necessary, as DEA can use meshes created for an FE analysis. As a result, finer structural details than in SEA can be resolved by DEA.
Method
The implementation of DEA on meshes is called discrete flow mapping (DFM). We will here briefly describe the idea behind DFM; for details see the references below. Using DFM it is possible to compute vibro-acoustic energy densities in complex structures at high frequencies, including multi-modal propagation and curved surfaces. DFM is a mesh-based technique where a transfer operator is used to describe the flow of energy through the boundaries of subsystems of the structure; the energy flow is represented in terms of a density of rays ρ(s, p), that is, the energy flux through a given surface is given through the density of rays passing through the surface at point s with direction p. Here, s parametrises the surface and p is the direction component tangential to the surface. In what follows, the surface is represented by the union of all boundaries of the mesh cells of the FE mesh describing the car floor. The density ρ(X), with phase space coordinate X = (s, p), is transported from one boundary to the adjacent boundary via the boundary integral operator

(Bρ)(X′) = ∫ w(X) δ(X′ − φ(X)) ρ(X) dX,      (1)

where φ is the map determining where a ray starting on a boundary segment at point s with direction p passes through another boundary segment, and w is a factor containing damping and reflection/transmission coefficients (akin to the coupling loss factors in SEA). It also governs the mode conversion probabilities in the case of both in-plane and flexural waves, which are derived from wave scattering theory. This allows DEA to take curvature and varying material parameters into account. Equation (1) is a way to write ray tracing across one single mesh cell in terms of an integral equation transferring an energy density from one surface to an adjacent surface.
In a next step, the transfer operator (1) is discretised using a set of basis functions on phase space. Once the matrix B has been constructed, the final energy density ρ on the boundary phase-space of each element is given in terms of the initial density ρ₀ by the solution of a linear system of the form

(I − B) ρ = ρ₀,

that is, ρ = (I − B)⁻¹ ρ₀ = ρ₀ + B ρ₀ + B² ρ₀ + ..., where the powers of B correspond to ray contributions undergoing more and more reflections.
The initial density ρ₀ models some source distribution for vibrational excitations, for example the engine in a ship. Once the final density (describing the energy density on all cell boundaries) has been computed, the energy density at any location inside the structure may be computed as a post-processing step.
Concerning the terminology, there is some ambiguity between the terms "discrete flow mapping" (DFM) and "dynamical energy analysis". To some extent, one can use one term in place of the other. For example, consider a plate. In DFM, one would subdivide the plate into many small triangles and propagate the flow of energy from triangle to (neighbouring) triangle. In DEA, one would not subdivide the plate, but use high-order basis functions (both in position and momentum) on the boundary of the plate. But in principle it would be admissible to describe both procedures as either DFM or DEA.
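The following Python toy model illustrates the discrete flow mapping step on a one-dimensional ring of mesh-cell boundaries. The geometry and the damping factor are made-up assumptions, and the model is far simpler than the multi-dimensional operator used in actual DEA computations; it only demonstrates how a transfer matrix B propagates ray energy between neighbouring boundaries and how the stationary density solves (I − B)ρ = ρ₀.

```python
import numpy as np

# Toy discrete flow mapping on a ring of N mesh-cell boundaries.
N = 50
damping = 0.9          # fraction of ray energy surviving one traversal (assumption)

B = np.zeros((N, N))   # transfer matrix: rays split to the two neighbours
for i in range(N):
    B[i, (i - 1) % N] = 0.5 * damping
    B[i, (i + 1) % N] = 0.5 * damping

rho0 = np.zeros(N)
rho0[N // 2] = 1.0     # point-like vibrational source

# Stationary boundary energy density: (I - B) rho = rho0
rho = np.linalg.solve(np.eye(N) - B, rho0)
print(rho.max(), rho.sum())
```

Because the damping keeps the spectral radius of B below one, the linear system is well posed and its solution sums the contributions of rays with any number of reflections.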
Examples
As an example application, a simulation of a car floor panel is shown here. A point excitation at 2500 Hz with 0.04 hysteretic damping was applied. The results from a frequency-averaged FEM simulation are compared with a DEA simulation (for DEA, no frequency averaging is necessary), and they show good quantitative agreement. In particular, we see the directional dependence of the energy flow, which is predominantly in the horizontal direction as plotted. This is caused by several horizontally extended out-of-plane bulges. It is only in the lower right part of the panel, with negligible energy content, that deviations between the FEM and DFM predictions are visible. The total kinetic energy given by the DFM prediction is within 12% of the FEM prediction.
As a more applied example, the result of a DEA simulation on a Yanmar tractor model (body in blue: chassis/cabin steel frame and windows) is shown here. The numerical DEA results are compared with experimental measurements at frequencies between 400 Hz and 4000 Hz for an excitation on the back of the gear casing. The two results agree favorably. The DEA simulation can be extended to predict the sound pressure level at the driver's ear.
Notes
References
External links
University of Nottingham Wave Modelling Research Group . One of the foci of this research group is on DEA.
Mechanical vibrations
Acoustics | Dynamical energy analysis | Physics,Engineering | 1,547 |
4,064,391 | https://en.wikipedia.org/wiki/Advanced%20Digital%20Information%20Corporation | Advanced Digital Information Corporation (ADIC) was an American manufacturer of tape libraries and storage management software which is now part of Quantum Corp. Their product line included both hardware, such as the Scalar line of robotic tape libraries, and software, such as the StorNext File System and the StorNext Storage Manager, a Hierarchical Storage Management system. Partners and resellers included Apple, Dell, EMC, Fujitsu-Siemens, HP, IBM and Sun.
ADIC was acquired by Quantum in August 2006.
References
1983 establishments in Washington (state)
2006 disestablishments in Washington (state)
2006 mergers and acquisitions
American companies established in 1983
American companies disestablished in 2006
Computer companies established in 1983
Computer companies disestablished in 2006
Defunct companies based in Redmond, Washington
Defunct computer companies of the United States
Defunct computer hardware companies | Advanced Digital Information Corporation | Technology | 173 |
22,280,219 | https://en.wikipedia.org/wiki/HD%20139357%20b | HD 139357 b is a very massive extrasolar planet or brown dwarf located approximately 390 light years away, orbiting the 6th magnitude K-type giant star HD 139357 in the constellation of Draco. The detection occurred on March 20, 2009, which was the first day of spring.
The actual mass and radius of this body remain uncertain, but it has a minimum mass of nearly 10 times that of Jupiter and a radius of probably no more than 1.2 times Jupiter's. Most likely this is a brown dwarf rather than a planet. The object's true mass was initially unknown because the inclination of its orbital plane is undetermined. Follow-up observations via direct imaging may determine its radius and orbital inclination, thereby giving its density and surface gravity, which will allow a determination as to whether this object is a brown dwarf or a supermassive planet.
A 2022 study estimated the true mass of HD 139357 b at about via astrometry, although this estimate is poorly constrained. If this is the true mass, the object would be a brown dwarf.
As is typical for supermassive planets, this object orbits farther from its host star than Earth is from the Sun, and its year lasts over three Earth years. However, the orbital eccentricity of this object is much greater than Earth's: 0.1 vs. 0.017. Like most known extrasolar planets, it was detected by the wobble (radial velocity) method, which detects planets through the wobbling motion of the star caused by the gravity of the orbiting body; a rough estimate of the size of this wobble is sketched below.
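As an illustration, the standard radial-velocity semi-amplitude formula can be evaluated with plausible parameters for this system. The stellar mass (about 1.35 solar masses) and the orbital period (about 1126 days) are assumptions chosen for illustration, not values quoted in this article.

```python
import math

# Radial-velocity semi-amplitude K of the host star (SI units).
G = 6.674e-11                      # gravitational constant
M_SUN, M_JUP = 1.989e30, 1.898e27  # solar and Jovian masses in kg

def rv_semi_amplitude(m_p_sini, m_star, period_s, e):
    """K = (2*pi*G/P)**(1/3) * m_p*sin(i) / ((M* + m_p)**(2/3) * sqrt(1 - e**2))."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_p_sini / ((m_star + m_p_sini) ** (2 / 3)
                          * math.sqrt(1 - e ** 2)))

# Illustrative values: ~10 Jupiter masses, ~1.35 solar masses, ~3.1-year orbit
K = rv_semi_amplitude(9.8 * M_JUP, 1.35 * M_SUN, 1126 * 86400, 0.1)
print(f"{K:.0f} m/s")   # on the order of 150-160 m/s
```

A wobble of this size is comfortably above the few-m/s precision of modern spectrographs, which is why such massive companions are among the easiest to detect.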
See also
42 Draconis b
Iota Draconis b
References
Exoplanets discovered in 2009
Giant planets
Draco (constellation)
Exoplanets detected by radial velocity
Exoplanets detected by astrometry | HD 139357 b | Astronomy | 366 |
26,658,925 | https://en.wikipedia.org/wiki/Eobacterium | Eobacterium is an extinct genus of bacteria from the Fig Tree Formation in Africa. It is about 3 billion years old, one of the oldest known organisms. The discovery of Eobacterium and other Fig Tree organisms in the 1960s helped prove that life existed over three billion years ago.
References
K.N. Prasad: An Introduction to Paleobotany.
Prehistoric bacteria | Eobacterium | Biology | 80 |
14,553,158 | https://en.wikipedia.org/wiki/Cylindrical%20harmonics | In mathematics, the cylindrical harmonics are a set of linearly independent functions that are solutions to Laplace's differential equation, , expressed in cylindrical coordinates, ρ (radial coordinate), φ (polar angle), and z (height). Each function Vn(k) is the product of three terms, each depending on one coordinate alone. The ρ-dependent term is given by Bessel functions (which occasionally are also called cylindrical harmonics).
Definition
Each function of this basis consists of the product of three functions:

Vn(k; ρ, φ, z) = Pn(k, ρ) Φn(φ) Z(k, z)

where (ρ, φ, z) are the cylindrical coordinates, and n and k are constants that differentiate the members of the set from each other. As a result of the superposition principle applied to Laplace's equation, very general solutions to Laplace's equation can be obtained by linear combinations of these functions.
Since all surfaces with constant ρ, φ and z are conicoid, Laplace's equation is separable in cylindrical coordinates. Using the technique of the separation of variables, a separated solution to Laplace's equation can be expressed as:

V = P(ρ) Φ(φ) Z(z)

and Laplace's equation, divided by V, is written:

P″/P + P′/(ρP) + Φ″/(ρ²Φ) + Z″/Z = 0
The Z part of the equation is a function of z alone, and must therefore be equal to a constant:

Z″/Z = k²
where k is, in general, a complex number. For a particular k, the Z(z) function has two linearly independent solutions. If k is real they are:

Z(k, z) = cosh(kz)  or  Z(k, z) = sinh(kz)

or by their behavior at infinity:

Z(k, z) = e^(kz)  or  Z(k, z) = e^(−kz)

If k is imaginary:

Z(k, z) = cos(|k|z)  or  Z(k, z) = sin(|k|z)

or:

Z(k, z) = e^(i|k|z)  or  Z(k, z) = e^(−i|k|z)
It can be seen that the Z(k,z) functions are the kernels of the Fourier transform or Laplace transform of the Z(z) function and so k may be a discrete variable for periodic boundary conditions, or it may be a continuous variable for non-periodic boundary conditions.
Substituting k² for Z″/Z, Laplace's equation may now be written:

P″/P + P′/(ρP) + Φ″/(ρ²Φ) + k² = 0

Multiplying by ρ², we may now separate the P and Φ functions and introduce another constant (n) to obtain:

Φ″/Φ = −n²

ρ²P″/P + ρP′/P + k²ρ² = n²

Since Φ(φ) is periodic, we may take n to be a non-negative integer and, accordingly, the constants are subscripted. Real solutions for Φ(φ) are

Φn = cos(nφ)  or  Φn = sin(nφ)

or, equivalently:

Φn = e^(inφ)  or  Φn = e^(−inφ)

The differential equation for Pn(k, ρ) is a form of Bessel's equation.
If k is zero, but n is not, the solutions are:

Pn(0, ρ) = ρ^n  or  Pn(0, ρ) = ρ^(−n)

If both k and n are zero, the solutions are:

P0(0, ρ) = ln ρ  or  P0(0, ρ) = 1

If k is a real number we may write a real solution as:

Pn(k, ρ) = c₁ Jn(kρ) + c₂ Yn(kρ)

where Jn(kρ) and Yn(kρ) are ordinary Bessel functions.

If k is an imaginary number, we may write a real solution as:

Pn(k, ρ) = c₁ In(|k|ρ) + c₂ Kn(|k|ρ)

where In(|k|ρ) and Kn(|k|ρ) are modified Bessel functions.

The cylindrical harmonics for (k,n) are now the product of these solutions and the general solution to Laplace's equation is given by a linear combination of these solutions:

V(ρ, φ, z) = Σn ∫ dk An(k) Pn(k, ρ) Φn(φ) Z(k, z)

where the An(k) are constants with respect to the cylindrical coordinates and the limits of the summation and integration are determined by the boundary conditions of the problem. Note that the integral may be replaced by a sum for appropriate boundary conditions. The orthogonality of the Jn(x) is often very useful when finding a solution to a particular problem. The Φn(φ) and Z(k,z) functions are essentially Fourier or Laplace expansions, and form sets of orthogonal functions. When Pn(k,ρ) is simply Jn(kρ), the orthogonality of Jn, along with the orthogonality relationships of Φn(φ) and Z(k,z), allows the constants to be determined.
If x(n,r), r = 1, 2, ..., is the sequence of the positive zeros of Jn, then:

∫₀¹ Jn(x(n,r) t) Jn(x(n,s) t) t dt = (1/2) [Jn+1(x(n,r))]² δ(r,s)
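This orthogonality relation can be checked numerically; the following Python snippet (with the order and the number of zeros chosen arbitrarily) verifies it with SciPy:

```python
from scipy.integrate import quad
from scipy.special import jn_zeros, jv

# Numerical check of the Bessel orthogonality relation.
n = 2
zeros = jn_zeros(n, 3)          # first three positive zeros of J_n

for r, xr in enumerate(zeros):
    for s, xs in enumerate(zeros):
        # integral of t * J_n(x_r t) * J_n(x_s t) over [0, 1]
        val, _ = quad(lambda t: t * jv(n, xr * t) * jv(n, xs * t), 0.0, 1.0)
        expected = 0.5 * jv(n + 1, xr) ** 2 if r == s else 0.0
        assert abs(val - expected) < 1e-6
print("orthogonality verified")
```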
In solving problems, the space may be divided into any number of pieces, as long as the values of the potential and its derivative match across a boundary which contains no sources.
Example: Point source inside a conducting cylindrical tube
As an example, consider the problem of determining the potential of a unit source located at (ρ₀, φ₀, z₀) inside a conducting cylindrical tube (e.g. an empty tin can) which is bounded above and below by the planes z = 0 and z = L and on the sides by the cylinder ρ = a. (In MKS units, we will assume 4πε₀ = 1.) Since the potential is bounded by the planes on the z axis, the Z(k,z) function can be taken to be periodic. Since the potential must be zero at the origin, we take the P function to be the ordinary Bessel function Jn(kρ), and k must be chosen so that one of its zeroes lands on the bounding cylinder. For the measurement point below the source point on the z axis, the potential will be:
where x(n,r), the r-th zero of Jn, satisfies k a = x(n,r) and, from the orthogonality relationships for each of the functions:
Above the source point:
It is clear that when z = 0 or z = L, the above function is zero. It can also be easily shown that the two functions match in value and in the value of their first derivatives at z = z₀.
Point source inside cylinder
Removing the plane ends (i.e. taking the limit as L approaches infinity) gives the field of the point source inside a conducting cylinder:
Point source in open space
As the radius of the cylinder (a) approaches infinity, the sum over the zeroes of Jn becomes an integral, and we have the field of a point source in infinite space:
and R is the distance from the point source to the measurement point:

R = √(ρ² + ρ₀² − 2ρρ₀ cos(φ − φ₀) + (z − z₀)²)
Point source in open space at origin
Finally, when the point source is at the origin, ρ₀ = z₀ = 0,

V(ρ, φ, z) = 1/√(ρ² + z²)
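This closed form is equivalent to the classical Lipschitz integral, ∫₀^∞ J₀(kρ) e^(−k|z|) dk = 1/√(ρ² + z²), which can be checked numerically (the sample point below is arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Check: integral over k of J0(k*rho)*exp(-k*|z|) equals 1/sqrt(rho^2 + z^2).
rho, z = 0.7, 1.3   # arbitrary test point
val, err = quad(lambda k: j0(k * rho) * np.exp(-k * abs(z)), 0.0, np.inf)
print(val, 1.0 / np.hypot(rho, z))   # both ≈ 0.677
```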
See also
Spherical harmonics
Notes
References
Differential equations | Cylindrical harmonics | Mathematics | 1,044 |
72,937,498 | https://en.wikipedia.org/wiki/Flags%20of%20international%20organizations | This article contains a list of flags of international organizations.
Political, cultural and military organizations
Global
Transcontinental
Africa
Americas
Asia
Europe
Oceania
International sports federations
Global
Multi-sport
Single sport
Cultural and linguistic
Continental
Regional
Former organizations
See also
Flag of Earth
Anthems of international organizations
Flags
Vexillology
International organizations
Flags of international organizations | Flags of international organizations | Mathematics | 67 |
21,784 | https://en.wikipedia.org/wiki/Nova | A nova ( novae or novas) is a transient astronomical event that causes the sudden appearance of a bright, apparently "new" star (hence the name "nova", Latin for "new") that slowly fades over weeks or months. All observed novae involve white dwarfs in close binary systems, but causes of the dramatic appearance of a nova vary, depending on the circumstances of the two progenitor stars. The main sub-classes of novae are classical novae, recurrent novae (RNe), and dwarf novae. They are all considered to be cataclysmic variable stars.
Classical nova eruptions are the most common type. This type is usually created in a close binary star system consisting of a white dwarf and either a main sequence, subgiant, or red giant star. If the orbital period of the system is a few days or less, the white dwarf is close enough to its companion star to draw accreted matter onto its surface, creating a dense but shallow atmosphere. This atmosphere, mostly consisting of hydrogen, is heated by the hot white dwarf and eventually reaches a critical temperature, causing ignition of rapid runaway fusion. The sudden increase in energy expels the atmosphere into interstellar space, creating the envelope seen as visible light during the nova event. In past centuries such an event was thought to be a new star. A few novae produce short-lived nova remnants, lasting for perhaps several centuries.
A recurrent nova involves the same processes as a classical nova, except that the nova event repeats in cycles of a few decades or less as the companion star again feeds the dense atmosphere of the white dwarf after each ignition, as in the star T Coronae Borealis.
Under certain conditions, mass accretion can eventually trigger runaway fusion that destroys the white dwarf rather than merely expelling its atmosphere. In this case, the event is usually classified as a Type Ia supernova.
Novae most often occur in the sky along the path of the Milky Way, especially near the observed Galactic Center in Sagittarius; however, they can appear anywhere in the sky. They occur far more frequently than galactic supernovae, averaging about ten per year in the Milky Way. Most are found telescopically, perhaps only one every 12–18 months reaching naked-eye visibility. Novae reaching first or second magnitude occur only a few times per century. The last bright nova was V1369 Centauri, which reached 3.3 magnitude on 14 December 2013.
Etymology
During the sixteenth century, astronomer Tycho Brahe observed the supernova SN 1572 in the constellation Cassiopeia. He described it in his book De nova stella (Latin for "concerning the new star"), giving rise to the adoption of the name nova. In this work he argued that a nearby object should be seen to move relative to the fixed stars, and thus the nova had to be very far away. Although SN 1572 was later found to be a supernova and not a nova, the terms were considered interchangeable until the 1930s. After this, novae were called classical novae to distinguish them from supernovae, as their causes and energies were thought to be different, based solely on the observational evidence.
Although the term "stella nova" means "new star", novae most often take place on white dwarfs, which are remnants of extremely old stars.
Stellar evolution of novae
Evolution of potential novae begins with two main sequence stars in a binary system. One of the two evolves into a red giant, leaving its remnant white dwarf core in orbit with the remaining star. The second star—which may be either a main-sequence star or an aging giant—begins to shed its envelope onto its white dwarf companion when it overflows its Roche lobe. As a result, the white dwarf steadily captures matter from the companion's outer atmosphere in an accretion disk, and in turn, the accreted matter falls into the atmosphere. As the white dwarf consists of degenerate matter, the accreted hydrogen is unable to expand even though its temperature increases. Runaway fusion occurs when the temperature of this atmospheric layer reaches ~20 million K, initiating nuclear burning via the CNO cycle.
If the accretion rate is just right, hydrogen fusion may occur in a stable manner on the surface of the white dwarf, giving rise to a supersoft X-ray source, but for most binary system parameters, the hydrogen burning is thermally unstable and rapidly converts a large amount of the hydrogen into other, heavier chemical elements in a runaway reaction, liberating an enormous amount of energy. This blows the remaining gases away from the surface of the white dwarf and produces an extremely bright outburst of light.
The rise to peak brightness may be very rapid, or gradual; after the peak, the brightness declines steadily. The time taken for a nova to decay by 2 or 3 magnitudes from maximum optical brightness is used for grouping novae into speed classes. Fast novae typically will take less than 25 days to decay by 2 magnitudes, while slow novae will take more than 80 days.
Despite its violence, usually the amount of material ejected in a nova is only about of a solar mass, quite small relative to the mass of the white dwarf. Furthermore, only five percent of the accreted mass is fused during the power outburst. Nonetheless, this is enough energy to accelerate nova ejecta to velocities as high as several thousand kilometers per second—higher for fast novae than slow ones—with a concurrent rise in luminosity from a few times solar to 50,000–100,000 times solar. In 2010 scientists using NASA's Fermi Gamma-ray Space Telescope discovered that a nova also can emit gamma rays (>100 MeV).
Potentially, a white dwarf can generate multiple novae over time as additional hydrogen continues to accrete onto its surface from its companion star. Where this repeated flaring is observed, the object is called a recurrent nova. An example is RS Ophiuchi, which is known to have flared seven times (in 1898, 1933, 1958, 1967, 1985, 2006, and 2021). Eventually, the white dwarf can explode as a Type Ia supernova if it approaches the Chandrasekhar limit.
Occasionally, novae are bright enough and close enough to Earth to be conspicuous to the unaided eye. The brightest recent example was Nova Cygni 1975. This nova appeared on 29 August 1975, in the constellation Cygnus about 5 degrees north of Deneb, and reached magnitude 2.0 (nearly as bright as Deneb). The most recent were V1280 Scorpii, which reached magnitude 3.7 on 17 February 2007, and Nova Delphini 2013. Nova Centauri 2013 was discovered 2 December 2013 and so far is the brightest nova of this millennium, reaching magnitude 3.3.
Helium novae
A helium nova (undergoing a helium flash) is a proposed category of nova event that lacks hydrogen lines in its spectrum. The absence of hydrogen lines may be caused by the explosion of a helium shell on a white dwarf. The theory was first proposed in 1989, and the first candidate helium nova to be observed was V445 Puppis, in 2000. Since then, four other novae have been proposed as helium novae.
Occurrence rate and astrophysical significance
Astronomers have estimated that the Milky Way experiences roughly 25 to 75 novae per year. The number of novae actually observed in the Milky Way each year is much lower, about 10, probably because distant novae are obscured by gas and dust absorption. As of 2019, 407 probable novae had been recorded in the Milky Way. In the Andromeda Galaxy, roughly 25 novae brighter than about 20th magnitude are discovered each year, and smaller numbers are seen in other nearby galaxies.
Spectroscopic observation of nova ejecta nebulae has shown that they are enriched in elements such as helium, carbon, nitrogen, oxygen, neon, and magnesium. Classical nova explosions are galactic producers of the element lithium. The contribution of novae to the interstellar medium is not great; novae supply only as much material to the galaxy as do supernovae, and only as much as red giant and supergiant stars.
Observed recurrent novae such as RS Ophiuchi (those with periods on the order of decades) are rare. Astronomers theorize, however, that most, if not all, novae recur, albeit on time scales ranging from 1,000 to 100,000 years. The recurrence interval for a nova is less dependent on the accretion rate of the white dwarf than on its mass; with their powerful gravity, massive white dwarfs require less accretion to fuel an eruption than lower-mass ones. Consequently, the interval is shorter for high-mass white dwarfs.
V Sagittae is unusual in that the time of its next eruption can be predicted fairly accurately; it is expected to recur in approximately 2083, plus or minus about 11 years.
Subtypes
Novae are classified according to the light curve decay speed, referred to as types A, B, C and R, or using the prefix "N":
NA: fast novae, with a rapid brightness increase, followed by a brightness decline of 3 magnitudes—to about brightness—within 100 days.
NB: slow novae, with a brightness decline of 3 magnitudes in 150 days or more.
NC: very slow novae, also known as symbiotic novae, staying at maximum light for a decade or more and then fading very slowly.
NR/RN: recurrent novae, where two or more eruptions separated by 80 years or less have been observed. These are generally also fast.
Remnants
Some novae leave behind visible nebulosity, material expelled in the nova explosion or in multiple explosions.
Novae as distance indicators
Novae have some promise for use as standard candles for measuring distances, as sketched in the example below. For instance, the distribution of their absolute magnitude is bimodal, with a main peak at magnitude −8.8, and a lesser one at −7.5. Novae also have roughly the same absolute magnitude 15 days after their peak (−5.5). Nova-based distance estimates to various nearby galaxies and galaxy clusters have been shown to be of comparable accuracy to those measured with Cepheid variable stars.
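A minimal sketch of the resulting distance estimate, assuming the quoted calibration of −5.5 at 15 days past peak, neglecting interstellar extinction, and using a made-up apparent magnitude:

```python
def nova_distance_pc(m_apparent, m_absolute=-5.5):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((m_apparent - m_absolute + 5) / 5)

# A nova observed at apparent magnitude 9.5 fifteen days after peak:
print(f"{nova_distance_pc(9.5):.0f} pc")  # 10000 pc, i.e. about 10 kpc
```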
Recurrent novae
A recurrent nova (RN) is an object that has been seen to experience repeated nova eruptions. The recurrent nova typically brightens by about 9 magnitudes, whereas a classical nova may brighten by more than 12 magnitudes.
Although it is estimated that as many as a quarter of nova systems experience multiple eruptions, only ten recurrent novae have been observed in the Milky Way.
Several extragalactic recurrent novae have been observed in the Andromeda Galaxy (M31) and the Large Magellanic Cloud. One of these extragalactic novae, M31N 2008-12a, erupts as frequently as once every 12 months.
On 20 April 2016, the Sky & Telescope website reported a sustained brightening of T Coronae Borealis from magnitude 10.5 to about 9.2 starting in February 2015. A similar event had been reported in 1938, followed by another outburst in 1946. By June 2018, the star had dimmed slightly but still remained at an unusually high level of activity. In March or April 2023, it dimmed to magnitude 12.3. A similar dimming occurred in the year before the 1945 outburst, indicating that it would likely erupt between March and September 2024. This predicted outburst has not yet occurred.
Extragalactic novae
Novae are relatively common in the Andromeda Galaxy (M31); several dozen novae (brighter than apparent magnitude +20) are discovered in M31 each year. The Central Bureau for Astronomical Telegrams (CBAT) has tracked novae in M31, M33, and M81.
See also
References
Further reading
External links
General Catalog of Variable Stars, Sternberg Astronomical Institute, Moscow
AAVSO Variable Star of the Month. Novae: May 2001
Extragalactic Novae
Astronomical events
Stellar phenomena | Nova | Physics,Astronomy | 2,508 |
25,912,881 | https://en.wikipedia.org/wiki/Thematic%20coherence | In developmental psychology, thematic coherence is an organization of a set of meanings in and through an event. In education, for example, the thematic coherence happens when a child during a classroom session understands what all the talking is about.
This term was coined by Habermas and Bluck (2000), along with other terms such as temporal coherence, biographical coherence, and causal coherence, to describe the coherence that people produce while narrating their own personal experiences (the many different episodes of their life, most especially in childhood and adolescence), which need to be structured within a context.
In conversation (although this technique can also be found in literature), thematic coherence is achieved when a person (or character) "is able to derive a general theme or principle about the self based on a narrated sequence of events."
See also
Child development
Developmental psychology
Centration
Private speech
Speech perception
Speech repetition
References
Developmental psychology
Education theory
Pedagogy | Thematic coherence | Biology | 202 |
18,996,403 | https://en.wikipedia.org/wiki/Neuregulin%203 | Neuregulin 3, also known as NRG3, is a neural-enriched member of the neuregulin protein family which in humans is encoded by the NRG3 gene. The NRGs are a group of signaling proteins part of the superfamily of epidermal growth factor, EGF like polypeptide growth factor. These groups of proteins possess an 'EGF-like domain' that consists of six cysteine residues and three disulfide bridges predicted by the consensus sequence of the cysteine residues.
The neuregulins are a diverse family of proteins formed through alternative splicing from a single gene; they play crucial roles in regulating the growth and differentiation of epithelial, glial and muscle cells. These proteins also aid cell-cell associations in the breast, heart and skeletal muscles. Four different kinds of neuregulin genes have been identified, namely NRG1, NRG2, NRG3 and NRG4. While the NRG1 isoforms have been extensively studied, there is little information available about the other genes of the family. NRGs bind to the ERBB3 and ERBB4 tyrosine kinase receptors; these then form homodimers or heterodimers, often involving ERBB2, which is thought to function as a co-receptor as it has not been observed to bind any ligand. NRGs bind to the ERBB receptors to promote phosphorylation of specific tyrosine residues on the C-terminal tail of the receptor and the interactions of intracellular signaling proteins.
NRGs also play significant roles in the development, maintenance, and repair of the nervous system, as NRG1, NRG2 and NRG3 are widely expressed in the central nervous system and also in the olfactory system. Studies in mice have observed that NRG3 expression is limited to the developing and adult central nervous system; previous studies also highlight the roles of NRG1, ERBB2, and ERBB4 in the development of the heart. Mice deficient in ERBB2, ERBB4, or NRG1 were observed to die at mid-embryogenesis from the termination of myocardial trabeculae development in the ventricle. These results confirm that NRG1 expressed in the endocardium is a significant ligand required to activate ERBB2 and ERBB4 signaling in the myocardium.
Function
Neuregulins are ligands of the ERBB-family receptors. While NRG1 and NRG2 are able to bind and activate both ERBB3 and ERBB4, NRG3 can bind only to the extracellular domain of the ERBB4 receptor tyrosine kinase, where its binding stimulates tyrosine phosphorylation; it does not bind the other members of the ERBB receptor family, ERBB2 and ERBB3.
NRG1 plays critical roles in the development of the embryonic cerebral cortex, where it controls the migration and positioning of cortical cells. In contrast to NRG1, there is limited information on pre-mRNA splicing of the NRG3 gene, together with its transcriptional profile and function in the brain. The recent discovery of hFBNRG3 (human fetal brain NRG3; DQ857894), an alternatively cloned isoform of NRG3 from human fetal brain that promotes oligodendrocyte survival via the ERBB4/PI3K/AKT1 pathway, implicates NRG3-ERBB4 signaling in neurodevelopment and brain function.
Even though studies have revealed that NRG1 and NRG3 are paralogues, the EGF domain of NRG3 is only 31% identical to that of NRG1. The N-terminal domain of NRG3 resembles that of the sensory and motor neuron-derived factor (SMDF), because it lacks the Ig-like and kringle-like domains that are attributed to many NRG1 isoforms. Hydropathy profile studies have shown that NRG3 lacks a hydrophobic N-terminal signal sequence common in secreted proteins, but contains a region of non-polar or uncharged amino acids at positions W66–V91. An amino acid region found in SMDF is similar to this non-polar site of NRG3 and has been proposed to act as an internal, uncleaved signal sequence that mediates translocation across the endoplasmic reticulum membrane.
Clinical significance
Recent human genetic studies reveal the neuregulin 3 gene (NRG3) as a potential risk gene for several kinds of neurodevelopmental disorders, including schizophrenia, stunted development, attention-deficit-related disorders and bipolar disorder, arising when structural and genetic variations occur within the gene.
Most importantly, variants of the NRG3 gene have been linked to susceptibility to schizophrenia. Isoform-specific increases in NRG3 expression have been reported in schizophrenia, and observed to interact with rs10748842, an NRG3 risk polymorphism, indicating that NRG3 transcriptional dysregulation is a molecular risk mechanism.
These isoforms have also been linked to Hirschsprung's disease.
Schizophrenia
Several genes in the NRG-ERBB signaling pathway have been implicated in genetic predisposition to schizophrenia. Neuregulin 3 (NRG3) encodes a protein similar to its paralog NRG1, and both play important roles in the developing nervous system. As observed with other pathologies like autism and schizophrenia, several members of any given protein family have a high chance of association with the same phenotype, individually or together.
A recent study of the temporal, diagnostic, and tissue-specific modulation of NRG3 isoform expression in human brain development used quantitative real-time PCR (qRT-PCR) to quantify four classes of NRG3 in human postmortem dorsolateral prefrontal cortex from 286 unaffected and affected (bipolar disorder or major depressive disorder) individuals ranging in age from 14 gestational weeks to 85 years. The researchers observed that each of the four NRG3 isoform classes (I–IV) showed unique expression trajectories across human neocortical development and aging.
NRG3 class I was increased in bipolar and major depressive disorder, in agreement with observations in schizophrenia.
NRG3 class II was increased in bipolar disorder, and class III was increased in major depression cases.
NRG3 classes I, II and IV were actively involved in the developmental stages.
The rs10748842 risk genotype predicted elevated class II and III expression, consistent with previous reports in the brain, with tissue-specific analyses suggesting that classes II and III are brain-specific isoforms of NRG3.
References
Further reading
Neurotrophic factors | Neuregulin 3 | Chemistry | 1,428 |
13,541,266 | https://en.wikipedia.org/wiki/Demarcation%20line | A political demarcation line is a geopolitical border, often agreed upon as part of an armistice or ceasefire.
Africa
Moroccan Wall, delimiting the Moroccan-controlled part of Western Sahara from the Sahrawi-controlled part
Americas
During European imperialism overseas, the lines of amity were drawn to differentiate Europe from the rest of the world. The Line of Demarcation was one specific line drawn along a meridian in the Atlantic Ocean as part of the Treaty of Tordesillas in 1494 to divide new lands claimed by Portugal from those of Spain. This line was drawn in 1493 after Christopher Columbus returned from his maiden voyage to the Americas.
The Mason–Dixon line (or "Mason and Dixon's Line") is a demarcation line between four U.S. states, forming part of the borders of Pennsylvania, Maryland, Delaware, and West Virginia (then part of Virginia). It was surveyed between 1763 and 1767 by Charles Mason and Jeremiah Dixon in the resolution of a border dispute between British colonies in Colonial America.
Asia
Middle East
The Blue Line is a border demarcation between Lebanon and Israel published by the United Nations on 7 June 2001 for the purposes of determining whether Israel had fully withdrawn from Lebanon.
The term Green Line is used to refer to the 1949 Armistice lines established between Israel and its neighbours (Egypt, Jordan, Lebanon and Syria) after the 1948 Arab–Israeli War.
The Purple Line was the ceasefire line between Israel and Syria after the 1967 Six-Day War.
The Green Line (Lebanon) refers to a line of demarcation in Beirut, Lebanon during the Lebanese Civil War from 1975 to 1990. It separated the mainly Muslim factions in West Beirut from the predominantly Christian East Beirut controlled by the Lebanese Front.
South and East Asia
The McMahon Line is a line dividing China and India, drawn on a map attached to the Simla Convention, a treaty negotiated between the British Empire, China, and Tibet in 1914.
The Military Demarcation Line, sometimes referred to as the Armistice Line, is the border between North Korea and South Korea. The Military Demarcation Line was established by the Korean Armistice Agreement as the line between the two Koreas at the end of Korean War in 1953.
The Northern Limit Line or North Limit Line (NLL) is a disputed maritime demarcation line in the Yellow Sea between North Korea and South Korea.
The Line of Actual Control established by India and the People's Republic of China between Aksai Chin and Ladakh after the Sino-Indian War of 1962.
The Line of Control established by India and Pakistan over the disputed region of Kashmir.
The nine-dash line appears on maps used by the People's Republic of China and the Republic of China (Taiwan) accompanying their South China Sea claims, which are challenged by Malaysia, the Philippines, and Vietnam.
Europe
The Curzon Line was a demarcation line proposed in 1920 by British Foreign Secretary Lord Curzon as a possible armistice line between Poland to the west and the Soviet republics to the east during the Polish-Soviet War of 1919–21. The modern Poland–Belarus and Poland–Ukraine borders mostly follow the Curzon line.
The Foch Line was a temporary demarcation line between Poland and Lithuania proposed by the Entente in the aftermath of World War I.
The demarcation line in France during the Vichy regime was imposed by Nazi Germany from 1940 to 1942, separating the German-occupied zone in the north from the free zone in the south.
The Line of Contact was a demarcation line between Soviet-aligned forces and forces aligned with the Western allies, marking where Soviet-aligned forces and Western-aligned forces met as they advanced into Germany and Austria at the end of World War II in Europe.
The Bosnian Inter-Entity Boundary Line is an ethno-administrative border established by the Dayton Agreement that followed the end of the Bosnian War.
See also
Demilitarized zone
Borders | Demarcation line | Physics | 795 |
627,772 | https://en.wikipedia.org/wiki/Sotades | Sotades (; 3rd century BC) was an Ancient Greek poet.
Sotades was born in Maroneia, either the one in Thrace, or in Crete. He lived in Alexandria during the reign of Ptolemy II Philadelphus (285–246 BC). The city was at that time a remarkable center of learning, with a great deal of artistic and literary activity, including epic poetry and the Great Library. Only a few genuine fragments of his work have been preserved; those in Stobaeus are generally considered spurious. Ennius translated some poems of this kind, included in his book of satires under the name of Sota. He had a son named Apollonius. He has been credited with the invention of the palindrome.
Sotades was the chief representative of the writers of obscene and even satirical poems, called "kinaidoi" (), composed in the Ionic dialect and in the metre named after him. One of his poems attacked Ptolemy II Philadelphus's marriage to his own sister Arsinoe II, from which came the infamous line: "You're sticking your prick in an unholy hole." For this, Sotades was imprisoned, but he escaped to the city of Caunus, where he was afterwards captured by the admiral Patroclus, shut up in a leaden chest, and thrown into the sea.
British Orientalist and explorer Sir Richard Francis Burton (1821–1890) hypothesised the existence of a "Sotadic zone". He asserted that there exists a geographic zone in which pederasty is prevalent and celebrated among the indigenous inhabitants, and named it after Sotades.
See also
Sotadean metre
References
External links
Sotades from the Wiki Classical Dictionary
Sotades (2) from Smith, Dictionary of Greek and Roman Biography and Mythology (1867)
Ancient Greek poets
Ancient Thracian Greeks
Erotic poetry
Greek erotica writers
Greek male writers
Obscenity
Palindromists
Pederasty in ancient Greece
Pornography
3rd-century BC Greek people
3rd-century BC poets
People from the Ptolemaic Kingdom
People from Maroneia | Sotades | Physics | 441 |
14,605,106 | https://en.wikipedia.org/wiki/Maltoporin | Maltoporins (or LamB porins) are bacterial outer membrane proteins of the porin family. Maltoporin forms a trimeric structure which facilitates the diffusion of maltodextrins across the outer membrane of Gram-negative bacteria. The membrane channel is formed by an antiparallel beta-barrel.
Most pores used for diffusion contain only 16 antiparallel strands, but maltoporin has 18. The structure of maltoporin contains long loops and short turns. The long loops are in contact with the cell exterior and the turns are in contact with the periplasm. This channel is involved in sugar transport. The sugar initially binds to the first greasy residue with van der Waals forces. The sugar continues through the channel by guided diffusion of the sugar along the greasy residues which form a "slide".
Maltoporin's original name was LamB because it is a bacteriophage lambda receptor. This channel is specific for maltosaccharides, whose affinity for the channel increases as the length of the chain increases.
References
Protein domains
Outer membrane proteins | Maltoporin | Biology | 225 |
48,140,171 | https://en.wikipedia.org/wiki/Bubble%20sensor | Bubble sensors are used to detect the presence of bubbles in fluid-filled tubes. They play a vital role in many fields, including medical technology, process control, pharmaceuticals, and the petroleum industry. The most common types of sensor are ultrasonic and capacitive.
Ultrasonic sensors
Ultrasonic sensors use two techniques to detect bubbles. One method involves transmitting sound waves from a transducer through the fluid to a second transducer that detects the waves. The second method is pulse-echo: sound waves are transmitted into the fluid, then reflected and received by the same transducer that sent them. In both methods bubbles affect the velocity, attenuation, and scattering of the sound, so they are easily detected.
Capacitive sensors
Capacitive sensors, due to their ease of fabrication and capacity for miniaturization, have found uses in a number of industries; they can be efficiently designed on a printed circuit board. A capacitor consists of two parallel electrodes with a capacitance computed as
C = εA/d    (1)
where ε is the permittivity of the dielectric medium, A is the cross-sectional area of the electrodes, and d is the distance between them. Liquids have a higher dielectric constant than gases; when an air bubble is in a fluid-filled tube the capacitance is reduced and the output voltage rises. The size of the bubble is inversely related to the measured capacitance. Table 1 shows an example of the characteristics of a particular capacitive sensor being researched.
Table 1 Example Characteristics of a Particular Capacitive Sensor
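To make the capacitance relationship in equation (1) concrete, the following minimal sketch models the fluid gap as two dielectrics side by side (bubble plus fluid). The geometry, permittivity values, and the parallel-dielectric simplification are illustrative assumptions, not the design of any particular sensor:

```python
# Minimal sketch: effect of an air bubble on a parallel-plate capacitive
# bubble sensor. All values below are illustrative assumptions.
EPS0 = 8.854e-12           # vacuum permittivity, F/m
EPS_WATER = 80 * EPS0      # water, relative permittivity ~80
EPS_AIR = 1 * EPS0         # air, relative permittivity ~1

def capacitance(eps, area_m2, gap_m):
    """C = eps * A / d for a parallel-plate capacitor."""
    return eps * area_m2 / gap_m

def sensor_capacitance(bubble_fraction, area_m2=1e-4, gap_m=2e-3):
    """Treat the gap as two capacitors in parallel: an air bubble covering
    `bubble_fraction` of the electrode area, with fluid under the rest."""
    c_fluid = capacitance(EPS_WATER, area_m2 * (1 - bubble_fraction), gap_m)
    c_bubble = capacitance(EPS_AIR, area_m2 * bubble_fraction, gap_m)
    return c_fluid + c_bubble

print(sensor_capacitance(0.0))   # no bubble:  ~3.5e-11 F
print(sensor_capacitance(0.3))   # 30% bubble: noticeably lower capacitance
```

Since the liquid has the higher permittivity, a larger bubble displaces more high-ε fluid and lowers the capacitance further, matching the inverse bubble-size relationship described above.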
Uses in extracorporeal blood circuits
In various medical treatments that use extracorporeal blood circuits (ECBC), the detection of air bubbles in the blood is vital for patient safety. An air bubble in an artery that supplies blood to the heart or brain can cause serious injury or death. In an ECBC the bubble sensor is placed on the arterial pump that is supplying the blood to the heart. Depending on the size of the bubble detected, the pump will respond in different ways. The sensors allow the operator to set a threshold for the size of the bubbles to detect. If the bubbles are below the threshold, they are displayed to the user as microbubble activity. When a bubble equal to or greater than the threshold is detected, an audio and visual alarm is generated and the arterial pump is stopped, effectively terminating the cardiopulmonary bypass. The operator must identify and remove the bubble before bypass can be re-instated.
References
Medical equipment
Sensors
Bubbles (physics) | Bubble sensor | Chemistry,Technology,Engineering,Biology | 532 |
45,019,534 | https://en.wikipedia.org/wiki/Hydnellum%20cristatum | Hydnellum cristatum is a tooth fungus in the family Bankeraceae found in Europe and North America. It was originally described as a species of Hydnum by Italian mycologist Giacomo Bresadola in 1902. Joost Stalpers transferred it to the genus Hydnellum in 1993.
References
External links
Fungi described in 1902
Fungi of Europe
Fungi of North America
Inedible fungi
cristatum
Fungus species | Hydnellum cristatum | Biology | 89 |
56,043,631 | https://en.wikipedia.org/wiki/NGC%205114 | NGC 5114 is a lenticular galaxy located about 170 million light-years away in the constellation Centaurus. The galaxy was discovered by astronomer John Herschel on June 3, 1836.
See also
List of NGC objects (5001–6000)
References
External links
Centaurus
Lenticular galaxies
5114
46828
Astronomical objects discovered in 1836
Discoveries by John Herschel | NGC 5114 | Astronomy | 74 |
7,654,389 | https://en.wikipedia.org/wiki/Metallic%20path%20facilities | Metallic path facilities (MPF) are the unshielded twisted pairs of copper wires that run from a main distribution frame (MDF) at a local telephone exchange to the customer. In this variant, both broadband and voice (baseband) services, together potentially with a video on demand service, are provided to the end user by a single communications provider. MPF services are typically delivered through use of an MSAN.
Shared metallic path facility (SMPF) is based on the same technology as MPF, but denotes a variant whereby an Internet Service Provider (ISP) provides a broadband service to the end user but hands the voice (baseband) service back to the PTT/ILEC. Hence the provision of services over the end user's copper wires might be shared between two providers. With SMPF, the non-incumbent service provider could purchase the voice service provision wholesale from the PTT/ILEC to allow the former to control the customer relationship for both broadband and voice services. In the UK at least, this service is called Wholesale Line Rental (WLR). SMPF services are typically delivered through use of a DSLAM.
Both terms are commonly used, for example by Ofcom and Openreach in the UK, to denote a local-loop unbundling service, designed to ensure a former monopoly player (deemed to have Significant Market Power, or SMP) allows a level playing field or Equivalence of Inputs.
See also
Local loop
Local-loop unbundling
Main distribution frame
Telephone exchange
References
Broadband
Networking hardware
Local loop | Metallic path facilities | Engineering | 314 |
38,343,676 | https://en.wikipedia.org/wiki/Arumberia | Arumberia is an enigmatic fossil from the Ediacaran period originally described from the Arumbera Sandstone, Northern Territory, Australia but also found in the Urals, East Siberia, England and Wales, Northern France, the Avalon Peninsula, India and Brazil. Several morphologically distinct species are recognized.
Description
Initially discovered by Glaessner and Walter (1975), Arumberia was described as a problematic cupped-body fossil of an Ediacaran soft-bodied organism characterized by hollow compressible ribbed bodies composed of flexible tissue. Brasier (1979) deemed it a pseudofossil arising from turbid water flow in shallow marine or deltaic environments, due in part to physical and morphological similarities to flume-induced structures previously observed by Dzulynski and Walton (1965). Arumberia appears as a poorly-delimited series of fine parallel grooves arising from a single region or point. Arumberia banksii consists of an array of straight to gently curved parallel to subparallel ridges (rugae) about 1 – 3 mm wide and separated by flat to gently concave furrows of 1 – 7 mm in width. Relief from ridge top to furrow bottom is less than 1 mm. Ridge ranges in length from 1.5 cm to 14.5 cm. Generally the ridges are parallel, but they also bifurcate. Ridges are developed on plane and rippled surfaces.
List of species
There are four species of Arumberia that have been formally recognized. Arumberia banksi occurs in thin siliciclastic biolaminites as rugose structures, including those with subparallel or fanning-out series of rugae (Arumberia banksi s.str.) and sub-parallel series of branching rugae (Arumberia vindhyanensis). The Arumberia multykensis variety is found in greenish gray siltstones as series of near-parallel ridges with positive hyporelief up to 1 mm wide and 0.5 mm high, with 10 mm spacing between the ridges, while A. usavaensis occurs as a series of near-parallel ridges on the wavy surfaces of linguoid ripple marks which stretch along the paleoflow direction and flatten out as microterraces on the leeward side of the ripples. Arumberia usavensis is found on both upper and lower surfaces of sandstone beds as well as inside sandstones and siltstones. Arumberia beckeri and Arumberia ollii are morphologically distinct from A. banksi; they are filamentous, ribbon-shaped compressed macrofossils which host authigenic clay minerals and are most likely unrelated to Arumberia.
Locations
Arumberia was first described in Neoproterozoic red sandstones of the lower Arumbera sandstone formation of the Amadeus Basin in Northern Territory, Australia. It has since been found in Argentina, Newfoundland, England, Wales, northeastern Europe, the eastern Sayan Mountains in Russia, Central India, and Rajasthan. In addition, Arumberia has been reported from lower Palaeozoic strata in Brittany, France and from the Upper Ediacaran Cerro Negro Formation in Argentina.
Identity
The identity of Arumberia is controversial. Arumberia was originally interpreted as a 5–20 cm high cup-like organism, apparently composed of flexible tissue, attached to the sea bottom by a blunt apex or, later, as a colonial organism made of flexible, thin-walled tubes tightly joined through their length. Affinities with Ernietta, Conostichus, Pteridinium, Palaeoplatoda, Phyllozoon, Bergaueria and Chuaria have been conjectured. Spheroidal objects found along with Arumberia have also been interpreted as a "dispersable stage" of Arumberia itself. Arumberia has been interpreted as a microbial mat morphotype developed in response to environmental perturbations in terminal Ediacaran shallow marine basins.
Conversely, a non-biological interpretation has also been put forward. Past experiments reproduced Arumberia-like traces in flume experiments and from the flux of water around small objects. The absence of Arumberia-like structures after the Ediacaran period could be due to the unique properties of the microbial mats that covered the sea floor at the time. However, there is still debate, with recent analysis of Urals' Arumberia-like structures leaning towards a biological interpretation as an organism adapted to shallow water environments. The rugae of Arumberia are considered to form from exclusively biological processes as observed in modern microbial mats and not from sediment desiccation, cracking or other abiotic processes. Fine wisps of organic material have been observed in thin sections of Arumberia cut perpendicular to bedding planes, further suggesting that Arumberia was a living organism.
Recent work describes Arumberia as the fossilized remains of highly organized shallow marine microbial colonies; a microbially induced sedimentary structure (MISS); a series of slide marks underneath tough biomats that were exposed to tractional currents carrying sediment; or a biopolymer-bearing lichenized fungus. As such, Arumberia structures remain an enigmatic Ediacaran fossil.
Taphonomy
Facies
The Ediacaran facies where Arumberia has been found are interpreted as terrigenous sedimentary rocks in shallow marine or fluviolacustrine (intertidal or delta plain) settings that may have been affected by desiccation and salinity. However, an alternative interpretation of the Ediacaran facies where Arumberia is found is that they were coastal gypsiferous paleosols of intertidal to supratidal settings. Terminal Ediacaran (560 Ma) facies in Baltica dominantly contain biomarkers (hopane and sterane ratios) characteristic of bacterially-dominated communities in shallow marine oligotrophic settings. These values reflect a high ratio of bacterial to eukaryotic biomass and suggest these ecosystems were nutrient-limited and dominated by bacterial communities, which may have imposed growth constraints and evolutionary modifications to Ediacaran organisms like Arumberia.
Burial compaction and diagenesis
The original interpretation of Arumberia as a conical or cup-shaped depression structure is contrary to modern interpretations of Arumberia as a bulge or impression. Extensive burial from overburden, typical in areas where Arumberia is found, would greatly compact any bulge or depression of a typical soft-bodied marine metazoan; however, it has been suggested that diagenetic silicification, ferrugination or pyritization can provide critical fossil rigidity during burial, which supports the soft-bodied marine metazoan description of Arumberia. The presence of pyrite may impart significant resistance to burial compaction in Arumberia, but to date no study has demonstrated a thick pyritic film of sufficient strength to withstand burial compaction above an unpyritized Ediacaran fossil, which is thought to be a requirement for the preservational models that involve the pyritization of soft-bodied metazoans like Arumberia.
Analysis of thin sections of Arumberia shows remarkable resistance to compaction, which may have been due to the presence of a pyrite sole-veneer. The diagenetic oxidation of pyrite to hematite can remove all traces of the pyrite sole-veneer, so it is difficult to determine the true influence of pyritization on burial compaction in Arumberia. Alternatively, the remarkable resistance to burial compaction may be due to the presence of a resistant biopolymer like chitin, which is typical in marine and terrestrial lichens alike. Exactly how the fossils are preserved remains controversial, and more research into the taphonomy of Arumberia and other Ediacarans is needed.
See also
List of Ediacaran organisms
Ediacaran biota
References
Ediacaran life
Trace fossils
Incertae sedis
Fossils of Australia | Arumberia | Biology | 1,646 |
31,333,132 | https://en.wikipedia.org/wiki/BauNetz | BauNetz Media is a German online platform offering services for architects, planners, and designers. The online magazine BauNetz is dedicated to daily news in international architecture. The magazine was started in 1996 and is located in Berlin-Charlottenburg. BauNetz Media is part of the Paris-based group Infopro Digital. In 2018, BauNetz recorded a monthly average of more than 1 million visits and over 10 million page impressions, according to IVW-audited data.
References
External links
Architecture magazines
Architecture websites
Visual arts magazines published in Germany
Magazines established in 1996
Magazines published in Berlin
Online magazines
1996 establishments in Germany
German-language magazines | BauNetz | Engineering | 130 |
5,183,323 | https://en.wikipedia.org/wiki/Macro%20recorder | A macro recorder is software that records macros for playback at a later time.
The main advantage of using a macro recorder is that it allows a user to easily perform complex operations much faster and with less effort without requiring custom computer programming or scripting.
Built-in macro recorders
Most word processors, text editors, and other office programs have a built-in macro recorder to automate the user's actions.
Standalone macro recorders
Not all software comes with a built-in macro recorder. A standalone macro-recorder program allows a user to "record" mouse and keyboard functions for "playback" at a later time. This allows automating any activity in any software application: from copy-pasting spreadsheet data to operating system maintenance actions.
Most macro recorders do not attempt to analyze or interpret what the user did when the macro was recorded. This can cause problems when trying to play back a macro if the user's desktop environment has changed. For example, if the user has changed their desktop resolution, moved icons, or moved the task bar, the mouse macro may not perform the way the user intended. That's one of the reasons for preferring keyboard macros over the mouse-oriented ones.
However, some recorders do attempt to analyze user actions, trying to record mouse activity in window-related, not screen-related coordinates, for instance, or to detect exactly what widget a user selected.
Possible features of standalone macro recorders include:
a built-in editor that allows a macro to be composed rather than recorded. This includes adding conditional statements, custom commands such as "open file", "launch website" or "shutdown computer".
conversion of a macro to a compressed executable file (".exe") that can run independently, without the need for the software that generated the macro to be present on the user's computer.
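To illustrate the core record/playback idea in a self-contained way, here is a minimal sketch in Python. It is a conceptual model only: real macro recorders hook OS-level mouse and keyboard events, whereas the "actions" here are plain callables, and the class name is hypothetical:

```python
import time

class MacroRecorder:
    """Toy macro recorder: stores (delay, action, args) tuples and
    replays them later, preserving the original timing."""

    def __init__(self):
        self._events = []          # recorded (delay_s, action, args) tuples
        self._last_time = None

    def start(self):
        self._events.clear()
        self._last_time = time.monotonic()

    def record(self, action, *args):
        """Perform an action and log it with the delay since the last one."""
        now = time.monotonic()
        self._events.append((now - self._last_time, action, args))
        self._last_time = now
        action(*args)

    def play(self, speed=1.0):
        """Replay all recorded actions; speed > 1 plays back faster."""
        for delay, action, args in self._events:
            time.sleep(delay / speed)
            action(*args)

# Usage: record two simulated keystrokes, then replay them at double speed.
rec = MacroRecorder()
rec.start()
rec.record(print, "key: H")
rec.record(print, "key: i")
rec.play(speed=2.0)
```

Storing delays rather than absolute timestamps is what lets a macro replay at a different speed, a feature many standalone recorders expose.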
History
The emacs text editor is well known for its macro-recording ability, whose name is an acronym for Editing MACroS.
See also
Mouse tracking
Programming by demonstration
Session replay
Keystroke logging
References
Automation software | Macro recorder | Engineering | 427 |
28,408,840 | https://en.wikipedia.org/wiki/Pytkeev%20space | In mathematics, and especially topology, a Pytkeev space is a topological space that satisfies a convergence property subtler than the convergence of sequences. They are named after E. G. Pytkeev, who proved in 1983 that sequential spaces have this property.
Definitions
Let X be a topological space. For a subset S of X, let cl(S) denote the closure of S. Then a point x is called a Pytkeev point if for every set A with x ∈ cl(A \ {x}), there is a countable π-net of infinite subsets of A, that is, a countable family of infinite subsets of A such that every neighbourhood of x contains a member of the family. A Pytkeev space is a space in which every point is a Pytkeev point.
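Restated symbolically (this is only a transcription of the definition above, with the π-net condition spelled out):

```latex
% x is a Pytkeev point of the topological space X:
\forall A \subseteq X :\quad
x \in \overline{A \setminus \{x\}}
\;\Longrightarrow\;
\exists\, \{A_n\}_{n \in \mathbb{N}},\ \text{each } A_n \subseteq A \text{ infinite},
\ \text{such that } \forall U \ni x \text{ open},\ \exists n : A_n \subseteq U.
```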
Examples
Every sequential space is also a Pytkeev space. This is because, if x ∈ cl(A \ {x}), then there exists a sequence {ak} in A \ {x} that converges to x. So take the countable π-net of infinite subsets of A to be {{ak : k ≥ n} : n = 1, 2, 3, ...}.
If X is a Pytkeev space, then it is also a Weakly Fréchet–Urysohn space.
References
Further reading
Topology | Pytkeev space | Physics,Mathematics | 213 |
45,299,768 | https://en.wikipedia.org/wiki/DiY%20networking | DIY networking is an umbrella term for different types of grassroots networking, such as wireless community networks, mesh networks, and ad-hoc networks, stressing the possibility that wireless technology offers to create "offline" or "off-the-cloud" local area networks (LANs), which can operate outside the Internet. Do-it-yourself (DiY) networking is based on such wireless LAN networks that are created organically through the interconnection of nodes owned and deployed by individuals or small organizations. Even when the Internet is easily accessible, such DiY networks form an alternative, autonomous option for communication and services, which (1) ensures that all connected devices are in de facto physical proximity, (2) offers opportunities and novel capabilities for creative combinations of virtual and physical contact, (3) enables free, anonymous and easy access, without the need for pre-installed applications or any credentials, and (4) can create feelings of ownership and independence, and lead to the appropriation of the hybrid space in the long run.
DiY networks follow the Do-It-Yourself subculture, and provide the technological means for more participatory processes, benefiting from the grassroots engagement of citizens in the design of hybrid, digital and physical, space through novel forms of social networking, crowd sourcing, and citizen science. But for these possibilities to be materialized there are many practical, social, political, and economic challenges that need to be addressed.
Although DiY could also be used for illegal purposes, the DiY concept has become more and more popular in mainstream academic literature, activism, art, popular media, and everyday practice, and especially in the case of communications networks there are more and more related scientific papers, books, and online articles. There is a large potential for new, novel, and free locality-aware services and opportunities that demand anonymous and easy access, such as online social networking (OSN) via DiY-based sites. Single-board computers such as Arduino or Raspberry Pi are commonly used for DiY networking purposes, since such computers are open-source, relatively cheap, have low power demands, support multiple protocols, and are portable.
In 2016, the EU Horizon2020 research funding framework, and more specifically CAPS (Collective Awareness Platforms for Sustainability and social innovation) has funded two 3-year projects on DIY networking: 1) project MAZI, "A DIY networking toolkit for location-based collective awareness", focusing on small-scale networks and aiming to provide tools and interdisciplinary knowledge for individual or small groups to create their own off-the-cloud networks, and 2) project netCommons, "Network Infrastructure as Commons", focusing on existing large-scale community networks like Guifi.net, Freifunk, Ninux and combining research from different disciplines in close collaboration with key actors to address important economic, social, and political challenges that these networks face today.
Regarding terminology, there is often criticism on the use of the term “Do It Yourself” to characterize collective action projects, such as the creation of a network. Alternative terms, more “collaborative”, include “Do It With Others”, “Do It Together”, or “Do It Ourselves”. The preference for the term DIY is first practical, since it is a common abbreviation that does not need explanation. But it also stresses the fact that although it is not possible to build a whole network by yourself, you can indeed build by yourself, or yourselves, one of its nodes. And even if this node is often built using off-the-shelf commercial equipment, it is still placed on your space, owned, installed, and maintained by you.
See also
List of wireless community networks by region
Freifunk
Guifi.net
Ninux
Wireless community network
References
Do it yourself
Wireless networking | DiY networking | Technology,Engineering | 784 |
2,379,782 | https://en.wikipedia.org/wiki/Integrated%20logistics%20support | Integrated logistics support (ILS) is a discipline within systems engineering intended to lower a product's life-cycle cost and decrease the demand for logistics by optimizing the maintenance system and easing product support. Although originally developed for military purposes, it is also widely used in commercial customer service organisations.
ILS defined
In general, ILS plans and directs the identification and development of logistics support and system requirements for military systems, with the goal of creating systems that last longer and require less support, thereby reducing costs and increasing return on investments. ILS therefore addresses these aspects of supportability not only during acquisition, but also throughout the operational life cycle of the system. The impact of ILS is often measured in terms of metrics such as reliability, availability, maintainability and testability (RAMT), and sometimes System Safety (RAMS).
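As a small concrete illustration of the availability metric mentioned above, inherent availability is a standard textbook ratio of mean time between failures (MTBF) to total cycle time; the numbers below are illustrative, not drawn from any program:

```python
def inherent_availability(mtbf_hours, mttr_hours):
    """Inherent availability A_i = MTBF / (MTBF + MTTR): the long-run
    fraction of time a system is operable, counting only corrective
    (unscheduled) maintenance downtime."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A system that fails every 500 hours on average and takes 10 hours to fix:
print(inherent_availability(500, 10))  # ~0.98
```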
ILS is the integrated planning and action of a number of disciplines in concert with one another to assure system availability. The planning of each element of ILS is ideally developed in coordination with the system engineering effort and with each other. Tradeoffs may be required between elements in order to acquire a system that is: affordable (lowest life cycle cost), operable, supportable, sustainable, transportable, and environmentally sound. In some cases, a deliberate process of Logistics Support Analysis will be used to identify tasks within each logistics support element.
The most widely accepted list of ILS activities includes:
Reliability engineering, maintainability engineering and maintenance (preventive, predictive and corrective) planning
Supply (spare part) support acquire resources
Support and test equipment/equipment support
Manpower and personnel
Training and training support
Technical data/publications
Computer resources support
Facilities
Packaging, handling, storage and transportation
Design interface
Decisions are documented in a life cycle sustainment plan (LCSP), a Supportability Strategy, or (most commonly) an Integrated Logistics Support Plan (ILSP). ILS planning activities coincide with development of the system acquisition strategy, and the program will be tailored accordingly. A properly executed ILS strategy will ensure that the requirements for each of the elements of ILS are properly planned, resourced, and implemented. These actions will enable the system to achieve the operational readiness levels required by the warfighter at the time of fielding and throughout the life cycle. ILS can be also used for civilian projects, as highlighted by the ASD/AIA ILS Guide.
It is considered common practice within some industries - primarily Defence - for ILS practitioners to take a leave of absence to undertake an ILS Sabbatical; furthering their knowledge of the logistics engineering disciplines. ILS Sabbaticals are normally taken in developing nations - allowing the practitioner an insight into sustainment practices in an environment of limited materiel resources.
Adoption
ILS is a technique introduced by the US Army to ensure that the supportability of an equipment item is considered during its design and development. The technique was adopted by the UK MoD in 1993 and made compulsory for the procurement of the majority of MOD equipment.
Influence on Design. Integrated Logistic Support provides important means to identify, as early as possible, reliability issues, and can initiate system or part design improvements based on reliability, maintainability, testability, or system availability analysis.
Design of the Support Solution for minimum cost. Ensuring that the Support Solution considers and integrates the elements considered by ILS. This is discussed fully below.
Initial Support Package. These tasks include calculation of requirements for spare parts, special tools, and documentation. Quantities required for a specified initial period are calculated, procured, and delivered to support delivery, installation (in some cases), and operation of the equipment.
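As a rough sketch of how an initial spares quantity can be derived, a common textbook approach models demand over the initial support period as Poisson-distributed and stocks enough parts to meet a target fill rate. The failure rate, support period, and fill-rate target below are illustrative assumptions:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for Poisson-distributed demand with mean lam."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

def initial_spares(failure_rate_per_hour, operating_hours, fill_rate=0.95):
    """Smallest stock level whose probability of covering all demand
    during the initial support period meets the target fill rate."""
    expected_demand = failure_rate_per_hour * operating_hours
    k = 0
    while poisson_cdf(k, expected_demand) < fill_rate:
        k += 1
    return k

# A part failing on average once per 2,000 hours, supported for 10,000
# operating hours (expected demand = 5), 95% confidence of no stock-out:
print(initial_spares(1 / 2000, 10_000))  # -> 9 spares
```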
The ILS management process facilitates specification, design, development, acquisition, test, fielding, and support of systems.
Maintenance planning
Maintenance planning begins early in the acquisition process with development of the maintenance concept. It is conducted to evolve and establish requirements and tasks to be accomplished for achieving, restoring, and maintaining the operational capability for the life of the system. Maintenance planning also involves Level Of Repair Analysis (LORA) as a function of the system acquisition process. Maintenance planning will:
Define the actions and support necessary to ensure that the system attains the specified system readiness objectives with minimum Life Cycle Cost (LCC).
Set up specific criteria for repair, including Built-In Test Equipment (BITE) requirements, testability, reliability, and maintainability; support equipment requirements; automatic test equipment; and manpower skills and facility requirements.
State specific maintenance tasks, to be performed on the system.
Define actions and support required for fielding and marketing the system.
Address warranty considerations.
The maintenance concept must ensure prudent use of manpower and resources. When formulating the maintenance concept, the impact of the proposed work environment on the health and safety of maintenance personnel must be considered.
Conduct a LORA to optimize the support system, in terms of LCC, readiness objectives, design for discard, maintenance task distribution, support equipment and ATE, and manpower and personnel requirements.
Minimize the use of hazardous materials and the generation of waste.
Supply support
Supply support encompasses all management actions, procedures, and techniques used to determine requirements to:
Acquire support items and spare parts.
Catalog the items.
Receive the items.
Store and warehouse the items.
Transfer the items to where they are needed.
Issue the items.
Dispose of secondary items.
Provide for initial support of the system.
Acquire, distribute, and replenish inventory
Support and test equipment
Support and test equipment includes all equipment, mobile and fixed, that is required to perform the support functions, except that equipment which is an integral part of the system. Support equipment categories include:
Handling and Maintenance Equipment.
Tools (hand tools as well as power tools).
Metrology and measurement devices.
Calibration equipment.
Test equipment.
Automatic test equipment.
Support equipment for on- and off-equipment maintenance.
Special inspection equipment and depot maintenance plant equipment, which includes all equipment and tools required to assemble, disassemble, test, maintain, and support the production and/or depot repair of end items or components.
This also encompasses planning and acquisition of logistic support for this equipment.
Manpower and personnel
Manpower and personnel involves identification and acquisition of personnel with skills and grades required to operate and maintain a system over its lifetime. Manpower requirements are developed and personnel assignments are made to meet support demands throughout the life cycle of the system. Manpower requirements are based on related ILS elements and other considerations. Human factors engineering (HFE) or behavioral research is frequently applied to ensure a good man-machine interface. Manpower requirements are predicated on accomplishing the logistics support mission in the most efficient and economical way. This element includes requirements during the planning and decision process to optimize numbers, skills, and positions. This area considers:
Man-machine and environmental interface
Special skills
Human factors considerations during the planning and decision process
Training and training devices
Training and training devices support encompasses the processes, procedures, techniques, training devices, and equipment used to train personnel to operate and support a system. This element defines qualitative and quantitative requirements for the training of operating and support personnel throughout the life cycle of the system. It includes requirements for:
Competencies management
Factory training
Instructor and key personnel training
New equipment training team
Resident training
Sustainment training
User training
HAZMAT disposal and safe procedures training
Embedded training devices, features, and components are designed and built into a specific system to provide training or assistance in the use of the system. (One example of this is the HELP files of many software programs.) The design, development, delivery, installation, and logistic support of required embedded training features, mockups, simulators, and training aids are also included.
Technical data
Technical Data and Technical Publications consists of scientific or technical information necessary to translate system requirements into discrete engineering and logistic support documentation. Technical data is used in the development of repair manuals, maintenance manuals, user manuals, and other documents that are used to operate or support the system. Technical data includes, but may not be limited to:
Technical manuals
Technical and supply bulletins
Transportability guidance technical manuals
Maintenance expenditure limits and calibration procedures
Repair parts and tools lists
Maintenance allocation charts
Corrective maintenance instructions
Preventive maintenance and Predictive maintenance instructions
Drawings/specifications/technical data packages
Software documentation
Provisioning documentation
Depot maintenance work requirements
Identification lists
Component lists
Product support data
Flight safety critical parts list for aircraft
Lifting and tie down pamphlet/references
Hazardous Material documentation
Computer resources support
Computer Resources Support includes the facilities, hardware, software, documentation, manpower, and personnel needed to operate and support computer systems and the software within those systems. Computer resources include both stand-alone and embedded systems. This element is usually planned, developed, implemented, and monitored by a Computer Resources Working Group (CRWG) or Computer Resources Integrated Product Team (CR-IPT) that documents the approach and tracks progress via a Computer Resources Life-Cycle Management Plan (CRLCMP). Developers will need to ensure that planning actions and strategies contained in the ILSP and CRLCMP are complementary and that computer resources support for the operational software, ATE software, and support software is available where and when needed.
Packaging, handling, storage, and transportation (PHS&T)
This element includes resources and procedures to ensure that all equipment and support items are preserved, packaged, packed, marked, handled, transported, and stored properly for short- and long-term requirements. It includes material-handling equipment and packaging, handling and storage requirements, and pre-positioning of material and parts. It also includes preservation and packaging level requirements and storage requirements (for example, sensitive, proprietary, and controlled items). This element includes planning and programming the details associated with movement of the system in its shipping configuration to the ultimate destination via transportation modes and networks available and authorized for use. It further encompasses establishment of critical engineering design parameters and constraints (e.g., width, length, height, component and system rating, and weight) that must be considered during system development. Customs requirements, air shipping requirements, rail shipping requirements, container considerations, special movement precautions, mobility, and transportation asset impact of the shipping mode or the contract shipper must be carefully assessed. PHS&T planning must consider:
System constraints (such as design specifications, item configuration, and safety precautions for hazardous material)
Special security requirements
Geographic and environmental restrictions
Special handling equipment and procedures
Impact on spare or repair parts storage requirements
Emerging PHS&T technologies, methods, or procedures and resource-intensive PHS&T procedures
Environmental impacts and constraints
Facilities
The Facilities logistics element is composed of a variety of planning activities, all of which are directed toward ensuring that all required permanent or semi-permanent operating and support facilities (for instance, training, field and depot maintenance, storage, operational, and testing) are available concurrently with system fielding. Planning must be comprehensive and include the need for new construction as well as modifications to existing facilities. It also includes studies to define and establish impacts on life cycle cost, funding requirements, facility locations and improvements, space requirements, environmental impacts, duration or frequency of use, safety and health standards requirements, and security restrictions. Also included are any utility requirements, for both fixed and mobile facilities, with emphasis on limiting requirements of scarce or unique resources.
Design interface
Design interface is the relationship of logistics-related design parameters of the system to its projected or actual support resource requirements. These design parameters are expressed in operational terms rather than as inherent values and specifically relate to system requirements and support costs of the system. Programs such as "design for testability" and "design for discard" must be considered during system design. The basic requirements that need to be considered as part of design interface include:
Reliability
Maintainability
Standardization
Interoperability
Safety
Security
Usability
Environmental and HAZMAT
Privacy, particularly for computer systems
Legal
See also
Reliability, availability and serviceability (computer hardware)
References
The references below cover many relevant standards and handbooks related to Integrated logistics support.
Standards
Army Regulation 700-127 Integrated Logistics Support, 27 September 2007
British Defence Standard 00-600 Integrated Logistics Support for MOD Projects
Federal Standard 1037C in support of MIL-STD-188
IEEE 1332, IEEE Standard Reliability Program for the Development and Production of Electronic Systems and Equipment, Institute of Electrical and Electronics Engineers.
MIL-STD-785, Reliability Program for Systems and Equipment Development and Production, U.S. Department of Defense.
MIL-STD 1388-1A Logistic Support Analysis (LSA)
MIL-STD 1388-2B Requirements for a Logistic Support Analysis Record
MIL-STD-1629A, Procedures for Performing a Failure Mode, Effects and Criticality Analysis (FMECA)
MIL-STD-2173, Reliability Centered Maintenance Requirements, U.S. Department of Defense (superseded by NAVAIR 00-25-403)
OPNAVINST 4130.2A
DEF(AUST)5691 Logistic Support Analysis
DEF(AUST)5692 Logistic Support Analysis Record Requirements for the Australian Defence Organisation
Specifications - not standards
The ASD/AIA Suite of S-Series ILS specifications
SX000i - International guide for integrated logistic support (under development)
S1000D - International specification for technical publications using a common source database
S2000M - International specification for materiel management - Integrated data processing
S3000L - International specification for Logistics Support Analysis - LSA
S4000P - International specification for developing and continuously improving preventive maintenance
S5000F - International specification for operational and maintenance data feedback (under development)
S6000T - International specification for training needs analysis - TNA (definition on-going)
SX001G - Glossary for the Suite of S-specifications
SX002D - Common Data Model
AECMA 1000D (Technical Publications) - Refer to S1000D above
AECMA 2000M (initial provisioning) - Refer to S2000M above
DI-ILSS-80095, Data Item Description: Integrated Logistics Support Plan (ILSP) (17 Dec 1985)
Handbooks
Integrated Logistics Support Handbook, third edition - James V. Jones
MIL-HDBK-217F, Reliability Prediction of Electronic Equipment, U.S. Department of Defense .
MIL-HDBK-338B, Electronic Reliability Design Handbook, U.S. Department of Defense.
MIL-HDBK-781A, Reliability Test Methods, Plans, and Environments for Engineering Development, Qualification, and Production, U.S. Department of Defense.
NASA Probabilistic Risk Assessment Handbook
NASA Fault Tree Assessment handbook
MIL-HDBK-2155, Failure Reporting, Analysis and Corrective Action Taken, U.S. Department of Defense
MIL-HDBK-502A, Product Support Analysis, U.S. Department of Defense
Resources
Systems Assessments, Integrated Logistics and COOP Support Services, 26 August 2008
AeroSpace and Defence (ASD) Industries Association of Europe
Integrated Logistics Support, The Design Engineering Link by Walter Finkelstein, J.A. Richard Guertin, 1989,
Article References
Military logistics
Systems engineering | Integrated logistics support | Engineering | 3,059 |
5,289,467 | https://en.wikipedia.org/wiki/Brown%20powder | Brown powder or prismatic powder, sometimes referred to as "cocoa powder" due to its color, was a propellant used in large artillery and ship's guns from the 1870s to the 1890s. While similar to black powder, it was chemically formulated and formed hydraulically into a specific grain shape to provide slower burn rates with neutral or progressive burning, as opposed to the faster and regressive burn typical of randomly shaped grains of black powder produced by crushing and screening powder formed into sheets in a press box, as was typical for cannon powder previously.
Characteristics
For pure explosive damage, high burn rates or detonation speeds (and accompanying brisance) are generally preferable, but in guns and especially cannons, slower-burning powder decreases firing stresses. This allows for lighter, longer (and more accurate) barrels with associated decreases in production and maintenance costs.
They became obsolete as propellants with the introduction of nitro-explosive propellants such as Poudre B in France, and later Nobel's ballistite and, in Britain, cordite. These new propellants produced less smoke, particularly less black smoke.
Composition
The differences in burning rate were achieved by several means. Changes to the formulation included altering the ingredients' relative percentages by weight and using differently processed charcoals for fuel than those of a standard 75:15:10 (potassium nitrate : charcoal : sulfur) black cannon powder.
Typically, sulfur was either not used in brown powders, or its content was reduced to around 1% by weight from the usual 10%. The reduction or outright removal of sulfur slowed the burn rate, while replacement of the higher-molecular-weight sulfur dioxide by carbon dioxide or monoxide in the propellant gas mixture gave a higher specific impulse.
Differently processed charcoals were used. Fully carbonized charcoal (mostly composed of elemental carbon) in black powder provides its distinctive black color, while its replacement with an incompletely carbonized, brownish colored charcoal produces a dark brown appearance, hence the names "brown powder" or "cocoa powder". The less carbonized charcoal was more reactive than fully carbonized charcoal, somewhat making up for the easy ignition characteristics usually provided by sulfur. The brown charcoal also helped to produce sturdier grains and replaced sulfur in the role of a binder.
Further modifications of burn rate were achieved by shaping the individual powder grains, often into prismatic shapes such as single-perforated hexagonal or octagonal prisms.
History
Large-grained powder, made in the traditional way as flat sheets but screened to larger sizes, was introduced in the 1850s by U.S. Army Major Thomas Rodman for his large-calibre cannon. In 1875 Lammot du Pont invented Hexagonal powder for large artillery, which was pressed using shaped plates with a small center core; about diameter, like a wagon wheel nut, the center hole widened as the grain burned. By 1880 naval guns were using Hexagonal grains, in height. Very large grain powders, being subject to defects in manufacturing, did not completely remove the danger of overpressure, as demonstrated in the 1880 accident on the Italian ironclad Duilio, which involved powder made at the chemical works at Fossano.
In 1882 the German Rottweil Company developed Prismatic Brown Powder (PBC), which was also adopted by the Royal Navy in 1884. It retarded burning even further by using only 2 percent sulfur and using charcoal made from rye straw that had not been completely charred. It was pressed into prisms with a central hole, similar to the DuPont Hexagonal.
The French Navy instead developed the Slow Burning Cocoa (SBC) powder, which had grains of about ; still, only 40% of it burned; the rest was ejected as heavy black smoke.
The first smokeless propellant, the guncotton-based Poudre B was introduced by the French Navy in 1886, triggering rapid development of smokeless compounds which replaced brown powder.
References
Explosives
Firearm propellants
Powders | Brown powder | Physics,Chemistry | 843 |
11,820,466 | https://en.wikipedia.org/wiki/Leptographium%20microsporum | Leptographium microsporum is a species of fungus in the family Ophiostomataceae. It is a plant pathogen.
References
Fungal plant pathogens and diseases
Fungi described in 1935
Ophiostomatales
Fungus species | Leptographium microsporum | Biology | 50 |
63,165,844 | https://en.wikipedia.org/wiki/Cumulus%20Association | The Cumulus Association is a global association of higher education institutions in the fields of art, design, and media. Currently, there are 350 members from 60 countries.
Cumulus was founded in 1990 by the Aalto University School of Arts, Design and Architecture in Finland and the Royal College of Art in London in cooperation with the Danish Design School, Gerrit Rietvelt Academy, University of Duisburg-Essen and University of Applied Arts Vienna. The network was established to coordinate collaboration between schools, and to facilitate student and teacher exchange within the European Union Erasmus programme. The network was transferred to Cumulus Association in 2001.
In the last 30 years, Cumulus has become a global association that organizes biannual conferences and initiates projects and workshops with member institutions. Their aim is to improve the quality of art, design, and media education and to help students, professors, and other faculty members work internationally. In addition to academic collaboration, Cumulus facilitates collaboration with businesses, public institutions, and governments with an interest in art and design education and research.
To stimulate design actions, projects, and research leading to a more sustainable society, Cumulus representatives signed the Kyoto Design Declaration in March 2008. To implement the ideals of the Declaration, the Cumulus Green Award was established. Cumulus Green is an international award focused on cultivating and leading global cultures, societies, and industries towards more ecological and responsible solutions.
References
Arts organizations established in 1990
Art and design organizations
Educational institutions established in 1990 | Cumulus Association | Engineering | 302 |
2,443,873 | https://en.wikipedia.org/wiki/Jean-Raymond%20Abrial | Jean-Raymond Abrial (born 6 November 1938) is a French computer scientist and inventor of the Z and B formal methods.
Abrial was a student at the École Polytechnique (class of 1958).
Abrial's 1974 paper Data Semantics laid the foundation for a formal approach to Data Models; although not adopted directly by practitioners, it directly influenced all subsequent models from the Entity-Relationship Model through to RDF.
J.-R. Abrial is the father of the Z notation (typically used for formal specification of software), during his time at the Programming Research Group under Prof. Tony Hoare within the Oxford University Computing Laboratory (now Oxford University Department of Computer Science), arriving in 1979 and sharing an office and collaborating with Cliff Jones. He later initiated the B-Method, with better tool-based software development support for refinement from a high-level specification to an executable program, including the Rodin tool. These are two important formal methods approaches for software engineering. He is the author of The B-Book: Assigning Programs to Meanings. For much of his career he has been an independent consultant. He was an invited professor at ETH Zurich from 2004 to 2009.
Abrial was elected to be a Member of the Academia Europaea in 2006.
See also
Rodin tool
References
External links
by Jonathan Bowen
Managing the Construction of Large Computerized Systems — article
Have we learned from the Wasa disaster (video) — talk by Jean-Raymond Abrial
1938 births
Living people
École Polytechnique alumni
French computer scientists
Members of the Department of Computer Science, University of Oxford
Formal methods people
Z notation
Computer science writers
Software engineers
Software engineering researchers
Academic staff of ETH Zurich
Members of Academia Europaea | Jean-Raymond Abrial | Mathematics | 352 |
61,097 | https://en.wikipedia.org/wiki/Roche%20limit | In celestial mechanics, the Roche limit, also called Roche radius, is the distance from a celestial body within which a second celestial body, held together only by its own force of gravity, will disintegrate because the first body's tidal forces exceed the second body's self-gravitation. Inside the Roche limit, orbiting material disperses and forms rings, whereas outside the limit, material tends to coalesce. The Roche radius depends on the radius of the first body and on the ratio of the bodies' densities.
The term is named after Édouard Roche, the French astronomer who first calculated this theoretical limit in 1848.
Explanation
The Roche limit typically applies to a satellite's disintegrating due to tidal forces induced by its primary, the body around which it orbits. Parts of the satellite that are closer to the primary are attracted more strongly by gravity from the primary than parts that are farther away; this disparity effectively pulls the near and far parts of the satellite apart from each other, and if the disparity (combined with any centrifugal effects due to the object's spin) is larger than the force of gravity holding the satellite together, it can pull the satellite apart. Some real satellites, both natural and artificial, can orbit within their Roche limits because they are held together by forces other than gravitation. Objects resting on the surface of such a satellite would be lifted away by tidal forces. A weaker satellite, such as a comet, could be broken up when it passes within its Roche limit.
Since, within the Roche limit, tidal forces overwhelm the gravitational forces that might otherwise hold the satellite together, no satellite can gravitationally coalesce out of smaller particles within that limit. Indeed, almost all known planetary rings are located within their Roche limit. (Notable exceptions are Saturn's E-Ring and Phoebe ring. These two rings could possibly be remnants from the planet's proto-planetary accretion disc that failed to coalesce into moonlets, or conversely have formed when a moon passed within its Roche limit and broke apart.)
The gravitational effect occurring below the Roche limit is not the only factor that causes comets to break apart. Splitting by thermal stress, internal gas pressure, and rotational splitting are other ways for a comet to split under stress.
Determination
The limiting distance to which a satellite can approach without breaking up depends on the rigidity of the satellite. At one extreme, a completely rigid satellite will maintain its shape until tidal forces break it apart. At the other extreme, a highly fluid satellite gradually deforms leading to increased tidal forces, causing the satellite to elongate, further compounding the tidal forces and causing it to break apart more readily.
Most real satellites would lie somewhere between these two extremes, with tensile strength rendering the satellite neither perfectly rigid nor perfectly fluid. For example, a rubble-pile asteroid will behave more like a fluid than a solid rocky one; an icy body will behave quite rigidly at first but become more fluid as tidal heating accumulates and its ices begin to melt.
But note that, as defined above, the Roche limit refers to a body held together solely by the gravitational forces which cause otherwise unconnected particles to coalesce, thus forming the body in question. The Roche limit is also usually calculated for the case of a circular orbit, although it is straightforward to modify the calculation to apply to the case (for example) of a body passing the primary on a parabolic or hyperbolic trajectory.
Rigid satellites
The rigid-body Roche limit is a simplified calculation for a spherical satellite. Irregular shapes such as those of tidal deformation on the body or the primary it orbits are neglected. It is assumed to be in hydrostatic equilibrium. These assumptions, although unrealistic, greatly simplify calculations.
The Roche limit for a rigid spherical satellite is the distance, $d$, from the primary at which the gravitational force on a test mass at the surface of the object is exactly equal to the tidal force pulling the mass away from the object:

$d = R_M \left( 2 \frac{\rho_M}{\rho_m} \right)^{1/3}$

where $R_M$ is the radius of the primary, $\rho_M$ is the density of the primary, and $\rho_m$ is the density of the satellite. This can be equivalently written as

$d = R_m \left( 2 \frac{M_M}{M_m} \right)^{1/3}$

where $R_m$ is the radius of the secondary, $M_M$ is the mass of the primary, and $M_m$ is the mass of the secondary. A third equivalent form, which uses only one property for each of the two bodies (the mass of the primary and the density of the secondary), is

$d = \left( \frac{3 M_M}{2 \pi \rho_m} \right)^{1/3} \approx 0.7816 \left( \frac{M_M}{\rho_m} \right)^{1/3}$
These all represent the orbital distance inside of which loose material (e.g. regolith) on the side of the satellite closest to the primary would be pulled away, and likewise material on the side opposite the primary will also move away from, rather than toward, the satellite.
Fluid satellites
A more accurate approach for calculating the Roche limit takes the deformation of the satellite into account. An extreme example would be a tidally locked liquid satellite orbiting a planet, where any force acting upon the satellite would deform it into a prolate spheroid.
The calculation is complex and its result cannot be represented in an exact algebraic formula. Roche himself derived the following approximate solution for the Roche limit:

$d \approx 2.44\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3}$

However, a better approximation that takes into account the primary's oblateness and the satellite's mass is:

$d \approx 2.423\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3} \left( \frac{\left(1 + \frac{m}{3M}\right) + \frac{c}{3R_M}\left(1 + \frac{m}{M}\right)}{1 - c/R_M} \right)^{1/3}$

where $c/R_M$ is the oblateness of the primary.
The fluid solution is appropriate for bodies that are only loosely held together, such as a comet. For instance, comet Shoemaker–Levy 9's decaying orbit around Jupiter passed within its Roche limit in July 1992, causing it to fragment into a number of smaller pieces. On its next approach in 1994 the fragments crashed into the planet. Shoemaker–Levy 9 was first observed in 1993, but its orbit indicated that it had been captured by Jupiter a few decades prior.
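The rigid- and fluid-body formulas above are straightforward to evaluate numerically. The following Python sketch (the density and radius figures are rough illustrative values for an Earth-like primary and a Moon-like satellite, not authoritative data) estimates both limits:

```python
# Roche limit estimates from the rigid- and fluid-body formulas.
# Input values are approximate illustrative figures.

R_primary = 6.371e6      # radius of the primary in metres (Earth-like)
rho_primary = 5514.0     # mean density of the primary in kg/m^3
rho_satellite = 3344.0   # mean density of the satellite in kg/m^3 (Moon-like)

# Rigid-body limit: d = R_M * (2 * rho_M / rho_m)**(1/3)
d_rigid = R_primary * (2.0 * rho_primary / rho_satellite) ** (1.0 / 3.0)

# Roche's fluid approximation: d ~ 2.44 * R_M * (rho_M / rho_m)**(1/3)
d_fluid = 2.44 * R_primary * (rho_primary / rho_satellite) ** (1.0 / 3.0)

print(f"Rigid-body Roche limit: {d_rigid / 1e3:.0f} km")
print(f"Fluid-body Roche limit: {d_fluid / 1e3:.0f} km")
```

For these inputs the rigid limit comes out near 9,500 km and the fluid limit near 18,400 km, consistent with the commonly quoted Earth-Moon figures.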
See also
Roche lobe
Chandrasekhar limit
Spaghettification (the extreme case of tidal distortion)
Hill sphere
Sphere of influence (black hole)
Black hole
Triton (moon) (Neptune's satellite)
Comet Shoemaker–Levy 9
References
Sources
2.44 is mentioned on page 258.
External links
Discussion of the Roche Limit;
Audio: Cain/Gay – Astronomy Cast Tidal Forces Across the Universe – August 2007
Roche Limit Description from NASA
Concepts in astrophysics
Equations of astronomy
Gravity
Planetary rings
Space science
Tidal forces
Solar System | Roche limit | Physics,Astronomy | 1,273 |
20,777,128 | https://en.wikipedia.org/wiki/Kunitz%20STI%20protease%20inhibitor | Kunitz soybean trypsin inhibitor is a type of protein contained in legume seeds which functions as a protease inhibitor. Kunitz-type Soybean Trypsin Inhibitors are usually specific for either trypsin or chymotrypsin. They are thought to protect seeds against consumption by animal predators.
Background
Two types of trypsin inhibitors are found in soy: the Kunitz-type soybean trypsin inhibitor (STI, discovered by Moses Kunitz and sometimes abbreviated as KTI) and the Bowman-Birk inhibitor (BBI). STI is a large (20,100 daltons), strong inhibitor of trypsin, while BBI is much smaller (8,000 daltons) and inhibits both trypsin and chymotrypsin. Both inhibitors have significant anti-nutritive effects in the body, affecting digestion by hindering protein hydrolysis and activation of other enzymes in the gut. STI is found in much larger concentrations than BBI in soy, however, to achieve the highest nutritional value from soy, both of these inhibitors must be denatured in some way. Whole soybeans have been reported to contain 17–27 mg of trypsin inhibitor per gram.
Protease inhibitory activity is decreased by cooking soybeans, leading to low levels in soy products such as tofu and soy milk.
Structure
Proteins from the Kunitz family contain from 170 to 200 amino acid residues and one or two intra-chain disulfide bonds. The best conserved region is found in their N-terminal section. The crystal structures of soybean trypsin inhibitor (STI), trypsin inhibitor DE-3 from the coral tree Erythrina caffra (ETI) and the bifunctional proteinase K/alpha-amylase inhibitor from wheat (PK13) have been solved, showing them to share the same beta trefoil fold structure as those of interleukin 1 and heparin-binding growth factors.
Despite the structural similarity, STI shows no interleukin-1 bioactivity, presumably as a result of their primary sequence disparities. The active inhibitory site containing the scissile bond is located in the loop between beta-strands 4 and 5 in STI and ETI.
Action and consequences of trypsin inhibitors
Trypsin inhibitors require a specific three-dimensional structure in order to inactivate trypsin in the body. They bind strongly to trypsin, blocking its active site and instantly forming a highly stable adduct and halting digestion of certain proteins. Trypsin, a serine protease, is responsible for cleaving the polypeptide backbone following arginine or lysine.
After a meal, trypsinogen release is stimulated by cholecystokinin and undergoes specific proteolysis for activation. Free trypsin is then able to activate other serine proteases, such as chymotrypsin, elastase, and more trypsin (by autocatalysis), or continue breaking down proteins. However, if trypsin inhibitors (specifically STI) are present, the majority of trypsin in the cycle of digestion is inactivated and ingested proteins remain whole. Effects of this occurrence include gastric distress, and pancreatic hyperplasia (proliferation of cells) or hypertrophy (enlargement of cells).
The amount of soy inhibitors is directly related to the amount of trypsin it will inhibit, therefore a product with high concentration of soy is likely to produce large values of inhibition. In a rat model, animals were fed either soy protein concentrate or direct concentrate of STI. In both instances, after a week the rats showed a dose-related increase in pancreas weight due to both hyperplasia and hypertrophy. This indicates that long-term consumption of a diet high in soy with strong trypsin inhibitor activity may produce unwanted effects in humans as well.
Inactivation of Trypsin Inhibitors
A significant amount of research is being done to determine the best method of inhibitor inactivation. The most successful methods found so far include:
Heat
Freezing
Addition of Sulfites
Gastrobodies
STI is highly resistant to pepsin, enabling STI to avoid degradation in the stomach and then inhibit trypsin. Hence STI was engineered into an antibody mimetic called a gastrobody, aiming to address the problems of antibody degradation in the gut following oral delivery. Loops of STI were randomized and selected by phage display for binding to a target of interest (a toxin from Clostridioides difficile).
Cancer Research
While trypsin inhibitors have been widely regarded as anti-nutritive factors in soy, research is currently being done on the inhibitors' possible anti-carcinogenic characteristics. Some research has shown that protease inhibitors can exert an irreversible suppressive effect on carcinogenic cell growth; however, the mechanism is still unknown. The cancers showing positive results in this line of research are colon, oral, lung, liver, and esophageal cancers. Further research is still necessary to determine things such as the method of delivery for this natural anti-carcinogen, as well as to perform extensive clinical trials in this area.
References
External links
Antibody mimetics
Protein domains | Kunitz STI protease inhibitor | Chemistry,Biology | 1,115 |
3,854,461 | https://en.wikipedia.org/wiki/41P/Tuttle%E2%80%93Giacobini%E2%80%93Kres%C3%A1k | 41P/Tuttle–Giacobini–Kresák is a periodic comet in the Solar System. The comet nucleus is estimated to be 1.4 kilometers in diameter.
Discovered by Horace Parnell Tuttle on May 3, 1858, and re-discovered independently by Michel Giacobini and Ľubor Kresák in 1907 and 1951 respectively, it is a member of the Jupiter family of comets.
2006 apparition
As of June 1, 2006, Comet 41P was a 10th magnitude object for telescopes, located on the Cancer-Leo border, with a predicted maximum brightness of about magnitude 10 at perihelion on June 11. This comet is of interest as it has been noted to flare dramatically. In 1973 the flare was 10 magnitudes brighter than predicted, reaching easy naked-eye visibility at apparent magnitude 4. However, by June 22, the comet had diminished to about magnitude 11, having produced no flare of note.
2011 apparition
The comet was not observed during the 2011 unfavorable apparition since the perihelion passage occurred when the comet was on the far side of the Sun.
2017 apparition
41P was recovered on November 10, 2016, at apparent magnitude 21 by Pan-STARRS. On April 1, 2017, the comet made a close approach to the Earth. The comet was expected to brighten to around magnitude 7 and be visible in binoculars.
Proposed exploration
In the 1960s European Space Research Organisation investigated sending a probe to the comet.
References
External links
41P/Tuttle-Giacobini-Kresak at the Minor Planet Center's Database
41P at Kronk's Cometography
Periodic comets
041P
041P
0041
041P
041P
041P
18580503 | 41P/Tuttle–Giacobini–Kresák | Astronomy | 359 |
53,102,285 | https://en.wikipedia.org/wiki/Mass-spring-damper%20model | The mass-spring-damper model consists of discrete mass nodes distributed throughout an object and interconnected via a network of springs and dampers. This model is well-suited for modelling objects with complex material properties such as nonlinearity and viscoelasticity.
Packages such as MATLAB may be used to run simulations of such models. As well as engineering simulation, these systems have applications in computer graphics and computer animation.
Derivation (Single Mass)
Deriving the equations of motion for this model is usually done by summing the forces on the mass (including any applied external force $F(t)$):

$m\ddot{x} = -kx - c\dot{x} + F(t)$

By rearranging this equation, we can derive the standard form:

$\ddot{x} + 2\zeta\omega_n\dot{x} + \omega_n^2 x = u(t)$

where $\omega_n = \sqrt{k/m}$ is the undamped natural frequency, $\zeta = \frac{c}{2\sqrt{mk}}$ is the damping ratio, and $u(t) = F(t)/m$. The homogeneous equation for the mass-spring system is:

$\ddot{x} + 2\zeta\omega_n\dot{x} + \omega_n^2 x = 0$

This has the solution:

$x(t) = A e^{-\omega_n t \left(\zeta + \sqrt{\zeta^2 - 1}\right)} + B e^{-\omega_n t \left(\zeta - \sqrt{\zeta^2 - 1}\right)}$

If $\zeta < 1$, then $\zeta^2 - 1$ is negative, meaning the square root will be imaginary and therefore the solution will have an oscillatory component.
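As a concrete illustration, the sketch below (a minimal example assuming arbitrary illustrative values for m, c, and k) integrates the unforced equation of motion with SciPy; since the damping ratio here is well below 1, the response is a decaying oscillation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (arbitrary units)
m, c, k = 1.0, 0.4, 25.0           # mass, damping coefficient, spring stiffness
omega_n = np.sqrt(k / m)           # undamped natural frequency
zeta = c / (2.0 * np.sqrt(m * k))  # damping ratio (< 1 here, so underdamped)

def rhs(t, y):
    """State y = [x, v]; returns [dx/dt, dv/dt] for m*x'' + c*x' + k*x = 0."""
    x, v = y
    return [v, -(c * v + k * x) / m]

# Release the mass from x = 1 with zero initial velocity
sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[1.0, 0.0], max_step=0.01)

print(f"omega_n = {omega_n:.3f} rad/s, zeta = {zeta:.3f}")
print(f"displacement at t = 10 s: {sol.y[0][-1]:+.4f}")
```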
See also
Numerical methods
Soft body dynamics#Spring/mass models
Finite element analysis
References
Classical mechanics
Mechanical vibrations | Mass-spring-damper model | Physics,Engineering | 207 |
72,348,357 | https://en.wikipedia.org/wiki/Pandoravirus%20yedoma | Pandoravirus yedoma is a virus, estimated to be about 48,500 years old, that was discovered in the deep Siberian permafrost in 2022. The scientists who recovered it revived 13 such viruses in total, characterizing them as 'zombie viruses'. It has been shown to infect amoeba cells (particularly A. castellanii), killing them in the process.
References
Bamfordvirae
Unaccepted virus taxa | Pandoravirus yedoma | Biology | 85 |
4,031,767 | https://en.wikipedia.org/wiki/Cross-site%20cooking | Cross-site cooking is a type of browser exploit that allows an attacking site to set a cookie in a visitor's browser under the cookie domain of another site's server.
Cross-site cooking can be used to perform session fixation attacks, as a malicious site can fixate the session identifier cookie of another site.
Other attack scenarios may also be possible. For example, an attacker may know of a security vulnerability in a server that is exploitable using a cookie. If the vulnerability requires, for example, an administrator password that the attacker does not know, cross-site cooking could be used to fool innocent users into unintentionally performing the attack.
Cross-site cooking is similar in concept to cross-site scripting, cross-site request forgery, cross-site tracing, cross-zone scripting, etc., in that it involves the ability to move data or code between different web sites (or in some cases, between e-mail / instant messages and sites). These problems are linked to the fact that a web browser is a shared platform for different information, applications, and sites. Only logical security boundaries maintained by browsers ensure that one site cannot corrupt or steal data from another. However, a browser exploit such as cross-site cooking can be used to move things across the logical security boundaries.
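The boundary at issue is the cookie's Domain attribute. The following Python sketch models, in heavily simplified form, the domain check a browser is expected to apply before accepting a cookie; the two-entry suffix set is a toy stand-in for the full Public Suffix List that real browsers consult:

```python
# Toy model of cookie-domain scoping. Real browsers use the full
# Public Suffix List; this two-entry set is illustrative only.
PUBLIC_SUFFIXES = {"com", "co.uk"}

def may_set_cookie(page_host: str, cookie_domain: str) -> bool:
    """Return True if a page on page_host may set a cookie for cookie_domain."""
    domain = cookie_domain.lstrip(".")
    # Reject cookies scoped to an entire public suffix (e.g. ".com"),
    # which would otherwise be sent to every site under that suffix.
    if domain in PUBLIC_SUFFIXES:
        return False
    # The cookie domain must be the page's host or one of its parent domains.
    return page_host == domain or page_host.endswith("." + domain)

print(may_set_cookie("shop.example.com", ".example.com"))  # True: parent domain
print(may_set_cookie("attacker.com", ".victim.com"))       # False: unrelated site
print(may_set_cookie("attacker.com", ".com"))              # False: public suffix
```

Cross-site cooking exploited browsers whose implementations of checks like these were missing or flawed.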
Origins
The name cross-site cooking and concept was presented by Michał Zalewski in 2006. The name is a mix of "cookie" and "cross-site", attempting to describe the nature of cookies being set across sites.
In Michał Zalewski's 2006 article, Benjamin Franz was credited with the underlying discovery: in May 1998 he had reported a cookie domain related vulnerability to vendors. Benjamin Franz published the vulnerability and discussed it mainly as a way to circumvent "privacy protection" mechanisms in popular browsers. Michał Zalewski concluded that the bug, 8 years later, was still present (unresolved) in some browsers and could be exploited for cross-site cooking. Various remarks, such as "vendors [...] certainly are not in a hurry to fix this", were made by Zalewski and others.
References
External links
Cross-Site Cooking article by Michal Zalewski. Details concept, 3 bugs which enables Cross Site Cooking. One of these bugs is the age old bug originally found by Benjamin Franz.
Web security exploits | Cross-site cooking | Technology | 489 |
7,331,570 | https://en.wikipedia.org/wiki/Pressure%20switch | A pressure switch is a form of switch that operates an electrical contact when a certain set fluid pressure has been reached on its input. The switch may be designed to make contact either on pressure rise or on pressure fall. Pressure switches are widely used in industry to automatically supervise and control systems that use pressurized fluids.
Another type of pressure switch detects mechanical force; for example, a pressure-sensitive mat is used to automatically open doors on commercial buildings. Such sensors are also used in security alarm applications such as pressure sensitive floors.
Construction and types
A pressure switch for sensing fluid pressure contains a capsule, bellows, Bourdon tube, diaphragm or piston element that deforms or displaces proportionally to the applied pressure. The resulting motion is applied, either directly or through amplifying levers, to a set of switch contacts. Since pressure may be changing slowly and contacts should operate quickly, some kind of over-center mechanism such as a miniature snap-action switch is used to ensure quick operation of the contacts. One sensitive type of pressure switch uses mercury switches mounted on a Bourdon tube; the shifting weight of the mercury provides a useful over-center characteristic.
The pressure switch may be adjustable, by moving the contacts or adjusting tension in a counterbalance spring. Industrial pressure switches may have a calibrated scale and pointer to show the set point of the switch. A pressure switch will have a hysteresis, that is, a differential range around its setpoint, known as the switch's deadband, inside which small changes of pressure do not influence the state of the contacts. Some types allow adjustment of the differential.
The pressure-sensing element of a pressure switch may be arranged to respond to the difference of two pressures. Such switches are useful when the difference is significant, for example, to detect a clogged filter in a water supply system. The switches must be designed to respond only to the difference and not to false-operate for changes in the common mode pressure.
The contacts of the pressure switch may be rated from a few tenths of an ampere to around 15 amperes, with smaller ratings found on more sensitive switches. Often a pressure switch will operate a relay or other control device, but some types can directly control small electric motors or other loads.
Since the internal parts of the switch are exposed to the process fluid, they must be chosen to balance strength and life expectancy against compatibility with process fluids. For example, rubber diaphragms are commonly used in contact with water, but would quickly degrade if used in a system containing mineral oil.
Switches designed for use in hazardous areas with flammable gas have enclosure to prevent an arc at the contacts from igniting the surrounding gas. Switch enclosures may also be required to be weatherproof, corrosion resistant, or submersible.
An electronic pressure switch incorporates some variety of pressure transducer (strain gauge, capacitive element, or other) and an internal circuit to compare the measured pressure to a set point. Such devices may provide improved repeatability, accuracy and precision over a mechanical switch.
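The setpoint-plus-deadband behaviour can be stated precisely in a few lines. The Python sketch below (threshold values are arbitrary illustrative choices, not taken from any particular device) models a switch that operates on rising pressure and resets only once pressure falls below the bottom of the deadband:

```python
class PressureSwitch:
    """Setpoint comparator with a deadband (hysteresis), as in an
    electronic pressure switch. Units are arbitrary (e.g. kPa)."""

    def __init__(self, setpoint: float, deadband: float):
        self.setpoint = setpoint   # pressure at which contacts operate
        self.deadband = deadband   # width of the hysteresis band
        self.tripped = False       # current contact state

    def update(self, pressure: float) -> bool:
        if not self.tripped and pressure >= self.setpoint:
            self.tripped = True    # operate on rising pressure
        elif self.tripped and pressure <= self.setpoint - self.deadband:
            self.tripped = False   # reset only below the deadband
        return self.tripped

switch = PressureSwitch(setpoint=300.0, deadband=40.0)
for p in [250, 290, 305, 295, 270, 255, 310]:
    print(f"pressure {p:>3} -> contacts {'closed' if switch.update(p) else 'open'}")
```

Note how the reading of 295 leaves the contacts closed: small changes of pressure inside the deadband do not influence the state, exactly as described above.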
Pneumatic
Uses of pneumatic pressure switches include:
Switch a household well water pump automatically when water is drawn from the pressure tank.
Switching off an electrically driven gas compressor when a set pressure is achieved in the reservoir
Switching off a gas compressor, whenever there is no feed in the suction stage.
in-cell charge control in a battery
Switching on an alarm light in the cockpit of an aircraft if cabin pressure (based on altitude) is critically low.
Air filled hoses that activate switches when vehicles drive over them. Common for counting traffic and at gas stations.
Hydraulic
Hydraulic pressure switches have various uses in automobiles, for example, to warn if the engine's oil pressure falls below a safe level, or to control automatic transmission torque converter lock-up. Prior to the 1960s, a pressure switch was used in the hydraulic braking circuit to control power to the brake lights; more recent automobiles use a switch directly activated by the brake pedal.
In dust control systems (bag filter), a pressure switch is mounted on the header which will raise an alarm when air pressure in the header is less than necessary. A differential pressure switch may be installed across a filter element to sense increased pressure drop, indicating the need for filter cleaning or replacement.
Examples
Pressure sensitive mat
A pressure sensitive mat provides a contact signal when force is applied anywhere within the area of the mat. Some mats provide a single signal, while others can resolve the position of the applied force within the mat. Pressure sensitive mats can be used to operate electrically operated doors, or as part of an interlock system to ensure machine operators are clear of dangerous areas of a process before it operates. Pressure sensitive mats can be used to detect persons walking over a particular point, as part of a security alarm system or to count attendance, or for other purposes.
See also
Dynamic pressure
List of sensors
Pressure sensor
References
External links
Pneumatic tools
Hydraulic tools
Security technology | Pressure switch | Physics | 1,013 |
77,368,450 | https://en.wikipedia.org/wiki/Arcs%20%28board%20game%29 | Arcs: Conflict & Collapse in the Reach is a space opera board game designed by Cole Wehrle, illustrated by Kyle Ferrin, and published by Leder Games in 2024 alongside Arcs: The Blighted Reach Expansion, a large expansion which significantly modifies the base game into a three-act legacy campaign. In Arcs, players compete to gain the most points by fulfilling variable objectives, taking actions through a trick-taking system and using different dice to attack enemy starships, with each player possessing variable powers. Following initial releases to Kickstarter backers, the base game and expansion were released to retail on October 1, 2024.
Gameplay
The base game of Arcs is a fast-paced strategy game set in the "Reach", an area of outer space. Players portray spacefaring societies, and attempt to win by obtaining galactic supremacy, gaining points by fulfilling various objectives called "ambitions", which include constructing buildings, upgrading their respective spaceships, gathering resources from the various planets depicted on the board, and waging war. Each game is played over five "chapters", or rounds, and begins with a randomized setup.
A large amount of gameplay revolves around trick-taking using action cards. Each round begins with one player, who has the 'initiative', leading by placing a card onto the table from their hand and taking actions based on the card's suit, with other players following based on the number on that card, sometimes being restricted into copying the same action as a result. Some cards allow more actions to be taken, with a base allowance of one action per turn and extra actions allowed if resources, which are also used to score points, are spent. The player who leads is able to declare an ambition, determining how players score points in that round, but in doing so drops the number on their played card to zero, meaning that others can easily overtake them in the turn order. Available court cards enable players to customise their personal gameplay through individual powers, and can be stolen by other players.
Players fight with starship pieces which can be in three states; healthy, damaged or off the board, effectively allowing them two points of damage. Three types of dice are used for different types of attack on the board, with blue dice offering low damage but no risk, red dice being more aggressive but coming with significant risk, and orange dice which enable the theft of resources but at a high risk. The attacking player can choose to distribute damage among their own ships while concentrating damage on individual enemy ships, thus encouraging players to play aggressively.
The Blighted Reach Expansion
Arcs: The Blighted Reach Expansion is a day-one expansion of the base game which turns it into a three-session campaign with changing rules. At the beginning of the campaign, players are given a choice between two "fates", some of which introduce new systems to the game which affect all players and which can change between each game of the campaign if that player chooses to do so. The board state is retained between each game in the campaign.
Development
Wehrle has stated that after the completion of the design of his 2021 game Oath: Chronicles of Empire and Exile, he "was filled with all sorts of odd ideas that didn't fit into that game", and "wanted to stay in the space but design something new," making a more "narratively chunky" game. He was inspired by roguelike games.
Initially marketed during development as Arcs: Collapse and Conflict in the Void, the game was announced on October 3, 2021. At this point, Wehrle described it as a "short campaign game" playable in 2 to 4 sessions for 3 to 4 players and with a total run time of five hours maximum. In February 2022, Wehrle stated that the game would have "40 to 50" different objective cards to pursue, leading to "tens of thousands of different possible game states", including secret objectives, with the result of each game having a direct impact on the setup of the next.
On May 3, 2022, A Kickstarter campaign was scheduled for May 24 until June 14 that year, with shipment to backers estimated to begin in December 2023. Unlike Wehrle's 2018 game Root, players of Arcs were expected to have starting identities rather than asymmetric factions, and unlike Oath, the game world would be reset after three games of the campaign.
On May 17, 2022, Leder Games announced that it planned to separate the campaign section of Arcs from its replayable base game, instead marketing the campaign as an expansion; Wehrle cited potential future struggles in marketing Arcs as a big-box experience, as well as the need to provide an "arcade mode" for players to understand the game better before beginning an "overwhelming" campaign as the main reasons for this change. In doing this, the game could also be designed with the intention to add further expansions in the form of add-on modules by their other designers. He designed the campaign expansion as a "three-act structure", in which the acts were individual playthroughs that each flowed into the next game through analog "procedural generation". Wehrle intended for the game to operate in a similar way to games such as Twilight Imperium and Eclipse, though with significantly quicker games of 60-90 minutes.
The Kickstarter campaign earned over $532,000 in its first five hours when launched. In March 2023, while Arcs was in its early access development stage, Wehrle stated that the game would feature a two-player mode, requiring a restriction of the size of the map, alteration of how cards were drawn and change to how players gained resources and scored points. The game's full retail release was expected in September 2024, though it was eventually released on October 1 that year.
Reception
Arcs received high praise from critics. Rob Wieland of Forbes praised the game for its speed compared to Twilight Imperium, Eclipse and Star Wars: Rebellion. He remarked that the base game was "one that's stayed in the conversation with my friends long after we’ve tried it out," and compared the start of each game to "a Star Wars cold open", with players feeling as though they were "the head of a big space bureaucracy". Bell of Lost Souls compared the game to "if Warhammer 40,000 stopped pretending it wasn't as goofy and silly as it is" and called it a "space opera". In July 2024, Luis Aguasvivas of NPR listed Arcs as one of the best games of the year thus far. Polygon awarded it the Polygon Recommends badge, stating that its expansion was "completely over the top in all the best ways, and there’s nothing yet released quite like it", and it was "a magnificent design that deserves recognition as one of 2024’s best releases." Matt Thrower of IGN gave the game a 10/10 "masterpiece" rating, writing that the game was successful in its attempt to "balance challenging strategic elements with the classic fun of negotiation and dice-rolling", and that it was "an awesome thing to behold, carving a story arc of its own right through the annals of board game design".
See also
Root, Pax Pamir, John Company and Oath: Chronicles of Empire and Exile, other board games designed by Wehrle.
References
External links
Arcs page on the Leder Games website
American board games
Asymmetric board games
Science fiction board wargames
Kickstarter-funded tabletop games
Legacy games | Arcs (board game) | Physics | 1,545 |
32,017,763 | https://en.wikipedia.org/wiki/Particle%20damping | Particle damping is the use of particles moving freely in a cavity to produce a damping effect.
Introduction
Active and passive damping techniques are common methods of attenuating the resonant vibrations excited in a structure. Active damping techniques are not applicable under all circumstances due, for example, to power requirements, cost, environment, etc. Under such circumstances, passive damping techniques are a viable alternative. Various forms of passive damping exist, including viscous damping, viscoelastic damping, friction damping, and impact damping. Viscous and viscoelastic damping usually have a relatively strong dependence on temperature. Friction dampers, while applicable over wide temperature ranges, may degrade with wear. Due to these limitations, attention has been focused on impact dampers, particularly for application in cryogenic environments or at elevated temperatures.
Particle damping technology is a derivative of impact damping with several advantages. Impact damping refers to only a single (somewhat larger) auxiliary mass in a cavity, whereas particle damping is used to imply multiple auxiliary masses of small size in a cavity. The principle behind particle damping is the removal of vibratory energy through losses that occur during impact of granular particles which move freely within the boundaries of a cavity attached to a primary system. In practice, particle dampers are highly nonlinear dampers whose energy dissipation, or damping, is derived from a combination of loss mechanisms, including friction and momentum exchange. Because of the ability of particle dampers to perform through a wide range of temperatures and frequencies and survive for a longer life, they have been used in applications such as the weightless environments of outer space, in aircraft structures, to attenuate vibrations of civil structures, and even in tennis rackets.
Advantages of particle dampers
They can perform over a large range of temperatures without loss of effectiveness.
They can survive for a long life.
They can perform in a very wide range of frequencies, unlike viscoelastic dampers, which are highly frequency dependent.
The particles placed inside a cavity in a structure can be less in weight than the mass they replace.
Through analyses, one can find the right kind, size and consistency of particles for the given application.
Therefore, they are suited for applications where there is a need for long service in harsh environments.
Analysis of particle damping
The analysis of particle dampers is mainly conducted by experimental testing, simulations by the discrete element method or finite element method, and by analytical calculations. The discrete element method makes use of particle mechanics, whereby individual particles are modeled with 6-degrees-of-freedom dynamics and their interactions determine the amount of energy absorbed or dissipated. Although this approach requires high-power computing to resolve the dynamic interactions of millions of particles, it is promising and may be used to estimate the effects of various mechanisms on damping. For instance, a study was performed using a model that simulated 10,000 particles in a cavity and studied the damping under various gravitational force effects.
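As a much-reduced illustration of such simulations, the following sketch is a one-dimensional toy model rather than a full discrete element simulation: a single particle flies freely inside a cavity attached to a lightly damped oscillator, and energy is removed through inelastic wall impacts. All parameter values are arbitrary illustrative choices:

```python
import numpy as np

# One particle bouncing inside a cavity attached to a vibrating mass.
# Illustrative 1-D toy model, not a full discrete-element simulation.
M, k, c = 1.0, 100.0, 0.05   # primary mass, spring stiffness, light damping
m = 0.1                      # particle mass
gap = 0.02                   # half-width of the cavity
e = 0.5                      # coefficient of restitution at impact
dt = 1e-4                    # time step

X, V = 0.05, 0.0             # primary: initial displacement, velocity
x, v = X, 0.0                # particle starts centred in the cavity, at rest

peak = 0.0
steps = int(5.0 / dt)
for step in range(steps):
    # Integrate both bodies (particle is free-flying between impacts)
    A = (-k * X - c * V) / M
    V += A * dt
    X += V * dt
    x += v * dt
    # Impact when the particle reaches a cavity wall (relative coordinate)
    rel = x - X
    if abs(rel) >= gap:
        x = X + np.sign(rel) * gap                     # place on the wall
        u = v - V                                      # relative velocity
        v_new = (m * v + M * V - M * e * u) / (M + m)  # restitution impact
        V_new = (m * v + M * V + m * e * u) / (M + m)  # momentum conserved
        v, V = v_new, V_new
    if step > 0.8 * steps:
        peak = max(peak, abs(X))                       # late-time envelope

print(f"late-time peak displacement: {peak:.4f} (initial displacement 0.0500)")
```

The decay of the late-time peak relative to the initial displacement reflects the energy removed by the impacts on top of the light structural damping, which is the basic mechanism the full multi-particle simulations quantify.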
Research literature review
A significant amount of research has been carried out in the area of analysis of particle dampers.
Olson presented a mathematical model that allows particle damper designs to be evaluated analytically. The model utilized the particle dynamics method and took into account the physics involved in particle damping, including frictional contact interactions and energy dissipation due to viscoelasticity of the particle material.
Fowler et al. discussed results of studies into the effectiveness and predictability of particle damping. Efforts were concentrated on characterizing and predicting the behaviour of a range of potential particle materials, shapes, and sizes in the laboratory environment, as well as at elevated temperature. Methodologies used to generate data and extract the characteristics of the nonlinear damping phenomena were illustrated with test results.
Fowler et al. developed an analytical method, based on the particle dynamics method, that used characterized particle damping data to predict damping in structural systems. A methodology to design particle damping for dynamic structures was discussed. The design methodology was correlated with tests on a structural component in the laboratory.
Mao et al. utilized DEM for computer simulation of particle damping. By considering thousands of particles as Hertz balls, the discrete element model was used to describe the motions of these multi-bodies and determine the energy dissipation.
Prasad et al. have investigated the damping performance of twenty different granular materials, which can be used to design particle dampers for different industries. They have also introduced the hybrid particle damper concept in which two different types of granular materials are mixed in order to achieve significantly higher vibration reduction in comparison to the particle dampers with a single type of granular materials.
Prasad et al. have developed a honeycomb damping plate concept, based on particle damping technique, to reduce low-frequency vibration amplitude from an onshore wind turbine generator.
Prasad et al. have suggested three different strategies to implement particle dampers in a wind turbine blade to reduce the vibration amplitude.
References
External links
Particle damping DEM simulation video
Powder cavity under harmonic base excitation
Mechanical engineering
Mechanical vibrations | Particle damping | Physics,Engineering | 1,023 |
68,394,738 | https://en.wikipedia.org/wiki/Thrust%20%28particle%20physics%29 | In high energy physics, thrust is a property (one of the event shape observables) used to characterize the collision of high energy particles in a collider.
When two high energy particles collide, they typically produce jets of secondary particles. This happens when one or several quark-antiquark pairs are produced during the collision. Each colored quark/antiquark pair travels its separate way and subsequently hadronizes. Many new particles are created by the hadronization process and travel in approximately the same direction as the original pair. This set of particles constitutes a jet.
The thrust quantifies the coherence, or "jettiness", of the group of particles resulting from one collision. It is defined as:

$T = \max_{\hat{n}} \frac{\sum_i \left| \vec{p}_i \cdot \hat{n} \right|}{\sum_i \left| \vec{p}_i \right|}$

where $\vec{p}_i$ is the momentum of particle $i$, and $\hat{n}$ is a unit vector that maximizes the sum and defines the thrust axis. The sum is over all the final particles resulting from the collision. In practice, the sum may be carried over the detected particles only.
The thrust is stable under collinear splitting of particles, and therefore it is a robust observable, largely insensitive to the details of the specific hadronization process.
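A brute-force way to evaluate the observable is to scan candidate thrust axes directly. The Python sketch below (the four three-momenta are made-up values for a toy two-jet event, and the grid scan is a crude substitute for the exact maximization used in real analyses) computes the thrust as defined above:

```python
import numpy as np

def thrust(momenta: np.ndarray, n_grid: int = 200) -> float:
    """Approximate T = max_n sum_i |p_i . n| / sum_i |p_i| by scanning
    unit vectors n over a grid of spherical angles."""
    norm = np.sum(np.linalg.norm(momenta, axis=1))
    best = 0.0
    for theta in np.linspace(0.0, np.pi, n_grid):
        for phi in np.linspace(0.0, 2.0 * np.pi, n_grid):
            n = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
            best = max(best, np.sum(np.abs(momenta @ n)) / norm)
    return best

# Toy two-jet event: two nearly back-to-back groups of particles (made up)
event = np.array([[ 10.0,  1.0,  0.5],
                  [  8.0, -0.8,  0.2],
                  [ -9.5,  0.3, -0.4],
                  [ -8.8, -0.5,  0.1]])
print(f"thrust T = {thrust(event):.3f}")  # close to 1 for a pencil-like event
```

A perfectly pencil-like two-jet event gives T near 1, while an isotropic distribution of momenta pushes T down towards 1/2.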
References
Experimental particle physics
Quantum chromodynamics | Thrust (particle physics) | Physics | 249 |
14,684,866 | https://en.wikipedia.org/wiki/Illinois%20Soil%20Nitrogen%20Test | The Illinois Soil Nitrogen Test ("ISNT") is a method for measuring the amount of nitrogen in soil that is available for use by plants as a nutrient. The test predicts whether the addition of nitrogen fertilizer to agricultural land will result in increased crop yields.
Nitrogen is essential for plant development. Indeed, for crops that are destined to be food for farm animal or human consumption, incorporation of nitrogen into the crop is an important goal, since this forms the basis for protein in the human diet.
Nitrogen is commonly present in soils in many forms, and there are many ways to measure this nitrogen. None of these are completely satisfactory as a measure of the nitrogen that is available for use by crops. The ISNT is a new (2007) method for measuring nitrogen available for plant uptake.
ISNT estimates the amount of nitrogen present in the soil as amino sugar nitrogen. With respect to corn and soybeans, the optimal range for plant growth appears to be around 225 to 240 mg/kg. Some form of nitrogen fertilizer is needed if levels are below this range. On the other hand, if levels are above this range, addition of nitrogen fertilizer will not increase crop yield.
In the corn belt, since about 1975, the predominant method of estimating the amount of nitrogen needed for corn has been the "yield-based" method. A farmer first estimates the yield of corn he intends to produce. He then applies 1.1 to 1.4 lbs of nitrogen per bushel of expected yield. ISNT represents an alternative approach to managing nitrogen application. However, ISNT does not offer a simple answer as to the amount of nitrogen fertilizer that is needed, or as to the optimal form of that fertilizer.
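For a concrete sense of the yield-based method, the calculation is a single multiplication of expected yield by the per-bushel factor. The Python sketch below uses a made-up 200-bushel target purely for illustration:

```python
# Yield-based nitrogen recommendation: expected yield times a per-bushel factor.
# The 200 bu/acre target is an illustrative figure, not a recommendation.
expected_yield = 200            # bushels of corn per acre
for factor in (1.1, 1.4):       # lb of N per bushel, the range cited above
    print(f"{factor} lb N/bu -> {expected_yield * factor:.0f} lb N per acre")
```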
In field trials in Illinois, some fields have been found to be under-fertilized when managed according to the "yield-based" method, as judged by the ISNT. In the majority of trials, however, the yield-based method calls for the addition of nitrogen far in excess of the levels needed for optimal crop production. This nitrogen, which is applied by farmers at great cost, does not find its way into the crop, but is lost to the atmosphere or leaches into waterways.
Within the corn belt, stalks and other crop residues are left in the field with the intention of enhancing the amount of organic material in the soil. Excessive nitrogen application, however, appears to promote the rapid decomposition of organic matter in the soil, resulting in release of carbon dioxide. As a result, the amount of organic material in soils managed according to the yield-based method in the corn belt appears to be decreasing in spite of the large amounts of crop residues left in the fields.
See also
Agriculture
Agronomy
Soil science
References
Agricultural chemicals
Agricultural soil science
Agronomy
Nitrogen cycle
Soil chemistry | Illinois Soil Nitrogen Test | Chemistry | 574 |
21,027,442 | https://en.wikipedia.org/wiki/Snogo%20Snow%20Plow | The SnoGo Snow Blower was used on the Trail Ridge Road in Rocky Mountain National Park, United States. Manufactured in 1932 by the Klauer Engineering Company of Dubuque, Iowa, the plow was actually a snowblower and featured advanced features such as an enclosed cab, four wheel drive and roll-up windows. It was used in the park until 1952.
The blower used a Climax Blue Streak six-cylinder gasoline engine (6-inch bore by 7-inch stroke, 1,188 cu in), developing 175 horsepower at 1200 RPM. Through the use of an eight-speed gearbox, speed could be varied from 1/4 mph up to a maximum of 25 mph when not moving snow. It was claimed to be capable of throwing snow 100 feet to the side.
Similar blowers, used at Crater Lake and Yosemite National Parks, no longer exist. In 1943 the plow was loaned to Rapid City Air Base for use in keeping the airfield's runways clear, then returned in the spring. The National Park Service gave the plow to the city of Estes Park, Colorado in 1952, which used it until 1979, when it was damaged by water entering through the exhaust. The plow was returned to the park and put on display at the Beaver Meadows Visitor Center. The park plans to restore the plow to operating condition.
See also
National Register of Historic Places listings in Larimer County, Colorado
References
External links
Current manufacturer's website
Transportation in Larimer County, Colorado
National Register of Historic Places in Rocky Mountain National Park
Snowplows
Vehicles introduced in 1932
Industrial equipment on the National Register of Historic Places
Buildings and structures in Larimer County, Colorado
Road transportation on the National Register of Historic Places
Transportation on the National Register of Historic Places in Colorado | Snogo Snow Plow | Engineering | 360 |
66,285,578 | https://en.wikipedia.org/wiki/N-Desmethyltamoxifen | N-Desmethyltamoxifen (developmental code name ICI-55,548) is a major metabolite of tamoxifen, a selective estrogen receptor modulator (SERM). N-Desmethyltamoxifen is further metabolized into endoxifen (4-hydroxy-N-desmethyltamoxifen), which is thought to be the major active form of tamoxifen in the body. In one study, N-desmethyltamoxifen had an affinity for the estrogen receptor of 2.4% relative to estradiol. For comparison, tamoxifen, endoxifen, and afimoxifene (4-hydroxytamoxifen) had relative binding affinities of 2.8%, 181%, and 181%, respectively.
References
Amines
Hormonal antineoplastic drugs
Human drug metabolites
Prodrugs
Selective estrogen receptor modulators
Triphenylethylenes | N-Desmethyltamoxifen | Chemistry | 216 |
6,239,965 | https://en.wikipedia.org/wiki/NAPQI | NAPQI, also known as NAPBQI or N-acetyl-p-benzoquinone imine, is a toxic byproduct produced during the xenobiotic metabolism of the analgesic paracetamol (acetaminophen). It is normally produced only in small amounts, and then almost immediately detoxified in the liver.
However, under some conditions in which NAPQI is not effectively detoxified (usually in the case of paracetamol overdose), it causes severe damage to the liver. This becomes apparent 3–4 days after ingestion and may result in death from fulminant liver failure several days after the overdose.
Metabolism
In adults, the primary metabolic pathway for paracetamol is glucuronidation. This yields a relatively non-toxic metabolite, which is excreted into bile and passed out of the body. A small amount of the drug is metabolized via the cytochrome P-450 pathway (to be specific, CYP3A4 and CYP2E1) into NAPQI, which is extremely toxic to liver tissue, as well as being a strong biochemical oxidizer. In an average adult, only a small amount (approximately 10% of a therapeutic paracetamol dose) of NAPQI is produced, which is inactivated by conjugation with glutathione (GSH). The amount of NAPQI produced differs in certain populations.
The minimum dosage at which paracetamol causes toxicity is usually 7.5 to 10 g in the average person. The lethal dose is usually between 10 g and 15 g. Concurrent alcohol intake lowers these thresholds significantly. Chronic alcoholics may be more susceptible to adverse effects due to reduced glutathione levels. Other populations may experience effects at lower or higher dosages depending on differences in P-450 enzyme activity and other factors which affect the amount of NAPQI produced. In general, however, the primary concern is accidental or intentional paracetamol overdose.
When a toxic dose of paracetamol is ingested, the normal glucuronide pathway is saturated and large amounts of NAPQI are produced. Liver reserves of glutathione are depleted by conjugation with this excess NAPQI. The mechanism by which toxicity results is complex, but is believed to involve reaction between unconjugated NAPQI and critical proteins as well as increased susceptibility to oxidative stress caused by the depletion of glutathione.
Poisoning
The prognosis is good for paracetamol overdoses if treatment is initiated up to 8 hours after the drug has been taken. Most hospitals stock the antidote (acetylcysteine), which replenishes the liver's supply of glutathione, allowing the NAPQI to be metabolized safely. Without early administration of the antidote, fulminant liver failure follows, often in combination with kidney failure, and death generally occurs within several days.
Mechanism and antidote
NAPQI becomes toxic when GSH is depleted by an overdose of acetaminophen; glutathione is an essential antidote to overdose. Glutathione conjugates to NAPQI and helps to detoxify it. In this capacity, it protects cellular protein thiol groups, which would otherwise become covalently modified; when all GSH has been spent, NAPQI begins to bind to certain enzymes like N-10-formyltetrahydrofolate dehydrogenase and glutamate dehydrogenase, reducing their activity and killing the cells in the process. This, along with the depletion of GSH, which significantly impairs the function of mitochondria, plays a significant role in the development of paracetamol toxicity.
The preferred treatment for an overdose of this painkiller is the administration of N-acetyl-L-cysteine (either via oral or IV administration), which is processed by cells to L-cysteine and used in the de novo synthesis of GSH.
See also
Cytochrome P450 oxidase
Liver failure
Centrilobular necrosis
References
Further reading
Human drug metabolites
Chemical pathology
Hepatotoxins
Imines
Toxins | NAPQI | Chemistry,Biology,Environmental_science | 893 |
15,086,544 | https://en.wikipedia.org/wiki/Coccidioides%20posadasii | Coccidioides posadasii is a pathogenic fungus that, along with Coccidioides immitis, is the causative agent of coccidioidomycosis, or valley fever in humans. It resides in the soil in certain parts of the Southwestern United States, northern Mexico, and some other areas in the Americas, but its evolution was connected to its animal hosts.
Coccidioides posadasii and C. immitis are morphologically identical, but genetically and epidemiologically distinct. C. posadasii was identified as a separate species other than C. immitis in 2002 after a phylogenetic analysis. The two species can be distinguished by DNA polymorphisms and different rates of growth in the presence of high salt concentrations: C. posadasii grows more slowly. It also differs epidemiologically, since it is found outside the San Joaquin Valley. Unlike C. immitis, which is geographically largely limited to California, C. posadasii can also be found in northern Mexico and South America.
Early history
As an intern in Buenos Aires in 1892, Alejandro Posadas described an Argentine soldier who had had a dermatological problem since 1889. Posadas had seen the patient while a medical student in 1891, and skin biopsies revealed organisms resembling the protozoan Coccidia. The patient died in 1898, but during the interim Posadas successfully transmitted the infection to a dog, a cat, and a monkey by inoculating them with material from his patient.
In 1894 a 40-year-old manual laborer from the San Joaquin Valley, a native of the Azores, entered a San Francisco hospital with fungating lesions similar to those of Posadas' patient. Dr. Emmet Rixford, a surgeon at San Francisco's Cooper Medical College, in attempting to determine the cause, concluded it was not from inadvertent self-inoculation. Further research produced a chronic ulcer in a rabbit and a lesion in a dog, both excreting pus containing the same organisms. Rixford issued a report, co-authored by Dr. Thomas Caspar Gilchrist (1862–1927), that was printed in 1896, one year after the patient died. Gilchrist, a pathologist at Johns Hopkins Medical School, studied the material and determined the microbe was not a fungus but a protozoan resembling Coccidia. With the help of parasitologist C.W. Stiles, the organism was named Coccidioides ("resembling Coccidia") immitis ("not mild"). Four years later William Ophüls and Herbert C. Moffitt proved that C. immitis was not a protozoan but a fungus that existed in two forms. In 1905 Ophüls called the infections "coccidioidal granuloma" and noted that they could develop from inhalation of the organism. Also in 1905 Samuel Darling studied a case and, referring to the misnamed organism as a protozoan, named it Histoplasma capsulatum, meaning three major endemic fungi in the United States were all initially misidentified as protozoa.
Cooke studied the immunology of the disease, and in 1927 a filtrate of culture specimens, later named coccidioidin, began to be used in skin testing to delineate the epidemiology of infection. In 1929 a second-year medical student, Harold Chope, was studying C. immitis in the laboratory of Ernest Dickson at Stanford University Medical School when he breathed in spores and became infected, but he later recovered. In 1934 Myrnie Gifford, a physician at San Francisco General Hospital, joined the Health Department of Kern County, California. She had observed that San Joaquin Valley Fever patients often suffered from erythema nodosum, and all tested positive for coccidioidomycosis. She met Ernest Dickson when he visited her in Kern County, California, and together they presented evidence to the California Medical Association. The two determined that San Joaquin fever represented C. immitis infection. The Kern County Health Department began obtaining epidemiologic histories and skin testing all cases involving Valley Fever. The investigations revealed, among other things, that a majority of the cases described a history of dust exposure, that coccidioidomycosis was common in the area, and that racial differences determined the host's response to the fungus.
Chope left Stanford Medical School and Dickson recruited a classmate, Charles E. Smith, to replace him. Smith began an extensive 17-month study of coccidioidomycosis in Kern and Tulare County, which also began a lifelong professional focus on C. immitis and coccidioidomycosis, continuing even after he became Dean of the School of Public Health at the University of California at Berkeley in 1951, until his death in 1967. Smith's research resulted in more than a few discoveries, including serologic testing; the finding that chlamydospores of the fungus C. immitis could be dispersed by wind when hot weather converted the soil to dust; and studies of military personnel in the southern San Joaquin Valley before and during WWII, as well as of people of Japanese descent (many US citizens) interned in camps, prisoners of war, and agricultural workers. Diagnoses of active disease and skin testing showed that the fungus was also found in southern Nevada and Utah, western Texas, as well as Arizona, where the southern and central areas appeared to impose the highest risk of infection in the United States. Smith's research added to the fundamental discoveries of microbiology, epidemiology, clinical findings, and diagnosis that had emerged since Posadas' initial case report in 1892.
Later history
Progress in studies from 1997 to 2007, including genomic restriction fragment length polymorphism (RFLP) concluded that there were two separate species. Earlier the two were referred to as types I and II, and later as Non-California and California distributions, determined as clades through microsatellite analyses. Genealogical Concordance Phylogenetic Species Recognition (GCPSR) criteria were met, so the two entities were proposed and generally recognized as two separate species: Coccidioides immitis, and the novel species Coccidioides posadasii.
References
External links
Coccidioides posadasii overview, life cycle image at MetaPathogen resource
Onygenales
Fungal pathogens of humans
Fungus species | Coccidioides posadasii | Biology | 1,339 |
24,503,261 | https://en.wikipedia.org/wiki/Crataegus%20%C3%97%20macrocarpa | Crataegus × macrocarpa, is a hybrid between two species of Crataegus (hawthorn), C. laevigata and C. rhipidophylla, both in series Crataegus. A chemotaxonomic investigation comparing flavonoid patterns in C. × macrocarpa and its putative parent species corroborated their supposed relationship. It is sometimes confused with C. × media, the hybrid between C. monogyna and C. laevigata.
Under the rules of botanical nomenclature the name C. × macrocarpa covers all intermediate forms between the two parent species, including backcrosses.
References
macrocarpa
Hybrid plants | Crataegus × macrocarpa | Biology | 144 |
3,525,304 | https://en.wikipedia.org/wiki/Right-to-left%20script | In a right-to-left, top-to-bottom script (commonly shortened to right to left or abbreviated RTL or RL-TB), writing starts from the right of the page and continues to the left, proceeding from top to bottom for new lines. Arabic script is the most widespread RTL writing system in modern times, being used as an official script in 29 sovereign states. Hebrew and Thaana scripts are other RTL writing systems that are official in Israel and the Maldives respectively.
Right-to-left can also refer to top-to-bottom, right-to-left (TB-RL or vertical) scripts of East Asian tradition, such as Chinese, Japanese, and Korean, though in modern times they are also commonly written left to right (with lines going from top to bottom). Books designed for predominantly vertical TB-RL text open in the same direction as those for RTL horizontal text: the spine is on the right and pages are numbered from right to left.
These scripts can be contrasted with many common modern writing systems, where writing starts from the left of the page and continues to the right.
The Arabic script is mostly but not exclusively right-to-left; mathematical expressions, numeric dates and numbers bearing units are embedded from left to right.
Uses
As usage of the Arabic script spread, the repertoire of 28 characters used to write the Arabic language was supplemented to accommodate the sounds of many other languages such as Kashmiri, Kurdish, Pashto, Persian etc. While the Hebrew alphabet is used to write the Hebrew language, it is also used to write other Jewish languages such as Yiddish and Ladino.
Syriac and Mandaean (Mandaic) scripts are derived from Aramaic and are written RTL. Samaritan is similar, but developed from Proto-Hebrew rather than Aramaic. Many other ancient and historic scripts derived from Aramaic inherited its right-to-left direction.
Several languages have both Arabic RTL and non-Arabic LTR writing systems. For example, Sindhi is commonly written in Arabic and Devanagari scripts, and a number of others have been used. Kurdish may be written in the Arabic or Latin script.
Thaana appeared around 1600 CE. Most modern scripts are LTR, but N'Ko (1949), Mende Kikakui (19th century), Adlam (1980s) and Hanifi Rohingya (1980s) were created in modern times and are RTL.
Ancient examples of text using alphabets such as Phoenician, Greek, or Old Italic may exist variously in left-to-right, right-to-left, or boustrophedon order; therefore, it is not always possible to classify some ancient writing systems as purely RTL or LTR.
Computing support
Right-to-left, top-to-bottom text is supported in common computer software. Often, this support must be explicitly enabled. Right-to-left text can be mixed with left-to-right text in bi-directional text.
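For example, a character's direction class can be inspected programmatically. The short Python sketch below uses the standard library's unicodedata module to print the Unicode bidirectional category of a few characters:

```python
import unicodedata

# Bidirectional categories: 'L' = left-to-right, 'R' = right-to-left,
# 'AL' = right-to-left Arabic, 'EN' = European number, etc.
samples = {
    "Latin A": "A",
    "Hebrew alef": "\u05d0",   # א
    "Arabic alif": "\u0627",   # ا
    "Digit 7": "7",
}
for name, ch in samples.items():
    print(f"{name}: bidirectional category {unicodedata.bidirectional(ch)!r}")
```

A bidirectional layout engine combines these per-character categories, following the Unicode Bidirectional Algorithm, to order mixed RTL and LTR runs on a line.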
List of RTL scripts
Examples of right-to-left scripts (with ISO 15924 numeric codes in brackets) are:
Current scripts
Arabic script (160, 161) – used for Arabic, Persian, Urdu, Kashmiri, Punjabi (Shahmukhi) and many other languages.
Hebrew alphabet (125) – used for Hebrew, Yiddish and some other Jewish languages.
Thaana (170) – used for Dhivehi.
Syriac alphabet (135; variants 136–138) – used for varieties of the Syriac language.
Mandaic alphabet (140) – closely related to Syriac, used for the Mandaic language.
Samaritan script (123) – closely related to Paleo-Hebrew, used for the Samaritans' writings.
Mende Kikakui (438) – for Mende in Sierra Leone. Devised by Mohammed Turay and Kisimi Kamara in the late 19th century. Still used, but only by about 500 people.
N'Ko script (165) – devised in 1949 for the Manding languages of West Africa.
Garay alphabet – designed in 1961 for the Wolof language.
Adlam (166) – devised in the 1980s for writing the Fula languages of West and Central Africa.
Hanifi Rohingya (167) – developed in the 1980s for the Rohingya language.
Yezidi (192) – used for two 12th- or 13th-century Yazidi Kurdish texts; attempts have been made to revive it since 2013.
Ancient scripts
Indus script
Egyptian hieroglyphs
Cypriot syllabary (403) – predates Phoenician influence.
Phoenician alphabet (115) – ancient, precursor to Hebrew, Imperial Aramaic, and Greek.
Imperial Aramaic alphabet (124) – ancient, closely related to Hebrew and Phoenician. Spread widely by the Neo-Assyrian and Achaemenid empires. The later Palmyrene form (126) was also used to write Aramaic.
Old South Arabian
Old North Arabian
Pahlavi scripts (130–133) – derived from Aramaic.
Avestan alphabet (134) – from Pahlavi, with added letters. Used for recording the Zoroastrian sacred texts during the Sassanid era.
Hatran alphabet (127) – used to write the Aramaic of Hatra.
Sogdian (141 and 142) and Manichaean (139, associated with the Manichaean religion) – derived from Syriac. Sogdian eventually rotated from RTL to top-to-bottom, giving rise to the Old Uyghur, Mongolian, and Manchu vertical scripts.
Nabatean alphabet – intermediate between Syriac and Arabic.
Old Ge'ez alphabet (495)
Kharosthi (305) – an ancient script of India, derived from Aramaic.
Old Turkic runes (also called Orkhon runes; 175)
Old Hungarian runes (176)
Old Italic alphabets (210) – Early Etruscan was RTL but LTR examples later became more common. Umbrian, Oscan, and Faliscan were written right-to-left. Unicode treats Old Italic as left-to-right, to match modern usage. Some texts are boustrophedon.
Old Latin could be written from right to left (as were Etruscan and early Greek) or boustrophedon.
Lydian alphabet (116) – ancient; some texts are left-to-right or boustrophedon.
See also
Bidirectional text
Complex text layout (CTL)
Script (Unicode)
Writing system
References
External links
Everson, Michael (2001-01-08) Roadmapping early Semitic scripts https://www.unicode.org/L2/L2001/01024-n2311.pdf
Buntz, Carl-Martin (2000-12-21) L2/01-007, Iranianist Meeting Report: Encoding Iranian Scripts in Unicode https://www.unicode.org/L2/L2001/01007-iran.txt
Character encoding
Writing direction | Right-to-left script | Technology | 1,435 |
46,631,677 | https://en.wikipedia.org/wiki/Farley%E2%80%93Buneman%20instability | The Farley–Buneman instability, or FB instability, is a microscopic plasma instability named after Donald T. Farley and Oscar Buneman. It is similar to the ionospheric Rayleigh-Taylor instability.
It occurs in a collisional plasma with a neutral component and is driven by drift currents. It can be thought of as a modified two-stream instability, arising when the difference between the drifts of electrons and ions exceeds the ion acoustic speed; in the limit of an unmagnetized plasma it reduces to the Buneman instability.
It is present in the equatorial and polar ionospheric E-regions. In particular, it occurs in the equatorial electrojet due to the drift of electrons relative to ions, and also in the trails behind ablating meteoroids.
Since the FB fluctuations can scatter electromagnetic waves, the instability can be used to diagnose the state of the ionosphere by the use of electromagnetic pulses.
Conditions
To derive the dispersion relation below, we make the following assumptions. First, quasi-neutrality is assumed. This is appropriate if we restrict ourselves to wavelengths longer than the Debye length. Second, the collision frequency between ions and background neutral particles is assumed to be much greater than the ion cyclotron frequency, allowing the ions to be treated as unmagnetized. Third, the collision frequency between electrons and background neutrals is assumed to be much less than the electron cyclotron frequency. Finally, we only analyze low frequency waves so that we can neglect electron inertia. Because the Buneman instability is electrostatic in nature, only electrostatic perturbations are considered.
Dispersion relation
We use linearized fluid equations (equation of motion, equation of continuity) for electrons and ions with Lorentz force and collisional terms. The equation of motion for each species is:
Electrons:

$$0 = -e n \left( \mathbf{E} + \mathbf{v}_e \times \mathbf{B} \right) - k_B T_e \, \nabla n - m_e n \, \nu_e \mathbf{v}_e$$

Ions:

$$m_i n \, \frac{\partial \mathbf{v}_i}{\partial t} = e n \, \mathbf{E} - k_B T_i \, \nabla n - m_i n \, \nu_i \mathbf{v}_i$$
where
$m_s$ is the mass of species $s$
$\mathbf{v}_s$ is the velocity of species $s$
$T_s$ is the temperature of species $s$
$\nu_s$ is the frequency of collisions between species $s$ and neutral particles
$e$ is the charge of an electron
$n$ is the electron number density
$k_B$ is the Boltzmann constant
Note that electron inertia has been neglected, and that both species are assumed to have the same number density at every point in space ($n_e = n_i = n$). The collisional term describes the momentum loss frequency of each fluid due to collisions of charged particles with neutral particles in the plasma. We denote $\nu_e$ as the frequency of collisions between electrons and neutrals, and $\nu_i$ as the frequency of collisions between ions and neutrals. We also assume that all perturbed properties, such as species velocity, density, and the electric field, behave as plane waves. In other words, all physical quantities will behave as an exponential function of time and position (where $\mathbf{k}$ is the wave vector and $\omega$ the angular frequency):

$$\exp\left[ i \left( \mathbf{k} \cdot \mathbf{r} - \omega t \right) \right]$$
This can lead to oscillations if the frequency $\omega$ is a real number, or to either exponential growth or exponential decay if $\omega$ is complex. If we assume that the ambient electric and magnetic fields are perpendicular to one another and only analyze waves propagating perpendicular to both of these fields, the dispersion relation takes the form of:

$$\omega \left( 1 + \psi \right) = \mathbf{k} \cdot \mathbf{v}_d + i \, \frac{\psi}{\nu_i} \left( \omega^2 - k^2 c_s^2 \right)$$
where $\mathbf{v}_d$ is the electron drift velocity and $c_s$ is the acoustic speed of ions. The coefficient $\psi$ describes the combined effect of electron and ion collisions ($\nu_e$, $\nu_i$) as well as their cyclotron frequencies $\Omega_e$ and $\Omega_i$:

$$\psi = \frac{\nu_e \, \nu_i}{\Omega_e \, \Omega_i}$$
Growth rate
Solving the dispersion relation we arrive at a frequency given as:

$$\omega = \omega_r + i \gamma$$

where $\gamma$ describes the growth rate of the instability. For FB we have the following:

$$\omega_r = \frac{\mathbf{k} \cdot \mathbf{v}_d}{1 + \psi}, \qquad \gamma = \frac{\psi}{\nu_i \left( 1 + \psi \right)} \left( \omega_r^2 - k^2 c_s^2 \right)$$
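The sign of the growth rate gives a simple threshold: $\gamma > 0$ requires the electron drift to exceed $(1+\psi)\,c_s$ along the wave vector. The Python sketch below evaluates this threshold; the E-region parameter values are order-of-magnitude assumptions chosen for illustration, not measurements:

```python
# Illustrative E-region parameters (order-of-magnitude assumptions).
nu_e = 4.0e4      # electron-neutral collision frequency, 1/s
nu_i = 2.5e3      # ion-neutral collision frequency, 1/s
Omega_e = 8.8e6   # electron cyclotron frequency, rad/s
Omega_i = 1.8e2   # ion cyclotron frequency, rad/s
c_s = 360.0       # ion acoustic speed, m/s

psi = (nu_e * nu_i) / (Omega_e * Omega_i)

# gamma > 0 requires omega_r^2 > k^2 c_s^2, i.e. a drift above
# (1 + psi) * c_s for waves propagating along the drift.
v_threshold = (1.0 + psi) * c_s
print(f"psi = {psi:.3f}, threshold drift = {v_threshold:.0f} m/s")
```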
Buneman instability
The dispersion relation is

$$1 = \frac{\omega_{pi}^2}{\omega^2} + \frac{\omega_{pe}^2}{\left( \omega - \mathbf{k} \cdot \mathbf{v}_0 \right)^2}$$

where $\omega_{pe}$ and $\omega_{pi}$ are the electron and ion plasma frequencies and $\mathbf{v}_0$ is the electron drift velocity, and the maximum growth rate is approximately

$$\gamma \approx \frac{\sqrt{3}}{2} \left( \frac{m_e}{2 m_i} \right)^{1/3} \omega_{pe}$$
See also
Plasma stability
Plasma Instabilities
References
Plasma instabilities | Farley–Buneman instability | Physics | 727 |
29,077,039 | https://en.wikipedia.org/wiki/Boletellus%20ananas | Boletellus ananas, commonly known as the pineapple bolete, is a mushroom in the family Boletaceae, and the type species of the genus Boletellus. It is distributed in southeastern North America, northeastern South America, Asia, and New Zealand, where it grows scattered or in groups on the ground, often at the base of oak and pine trees. The fruit body is characterized by the reddish-pink (or pinkish-tan to yellowish if an older specimen) scales on the cap that are often found hanging from the edge. The pore surface on the underside of the cap is made of irregular or angular pores up to 2 mm wide that bruise a blue color. It is yellow when young but ages to a deep olive-brown color. Microscopically, B. ananas is distinguished by large spores with cross striae on the ridges and spirally encrusted hyphae in the marginal appendiculae and flesh of the stem. Previously known as Boletus ananas and Boletus coccinea (among other synonyms), the species was given its current name by William Alphonso Murrill in 1909. Two varieties of Boletellus ananas have been described. Like many other boletes, this species is considered edible, but it is not recommended for consumption.
Taxonomy
The species was first named by Moses Ashley Curtis as Boletus ananas in 1848, based on specimens he found near the Santee River, in South Carolina. In 1909, William Murrill described the new genus Boletellus and made Boletellus ananas the type species. According to Murrill, the taxon Boletus isabellinus, described by Charles Horton Peck in 1897 from specimens collected in Ocean Springs, Mississippi, is a synonym of B. ananas; Peck described this species from undeveloped specimens. Wally Snell later doubted Murrill's conclusion in a 1933 publication; he considered the differences in the spore structure too great to consider the species conspecific with B. ananas, although he admitted it was impossible to come to any definitive conclusions until mature fruit bodies and spore prints were available for study. Rolf Singer and colleagues (1992) suggested the name Boletellus coccineus for Boletellus ananas. Singer created this name, however, in the mistaken belief that the earliest available name for the taxon was Boletus coccineus, proposed by Elias Magnus Fries in 1838. However, Fries's name is an illegitimate later homonym (compare with Boletus coccineus, named by Bulliard in 1791), and Singer's combination is actually based on Strobilomyces coccineus, named by Pier Andrea Saccardo in 1888. The earliest available name for the species is therefore Boletus ananas M.A. Curtis 1848, the basionym of Boletellus ananas.
Boletellus ananas, as the type species of the genus Boletellus, is in section Boletellus that Singer based on the scaly, dry cap with red-pink tones, a marginal veil that clasps the stem when immature, and longitudinally ridged spores that are greater than 16 μm long. The genus name Boletellus means "small boletus", while the specific epithet ananas alludes to the name for pineapple, referring to the pineapple-like pattern of scales on the cap surface. The mushroom is commonly known as the "pineapple bolete".
Description
The cap of B. ananas is wide and plano-convex (flat on one side and rounded on the other). It is covered with squamules (small scales) that can be either pressed against the cap or curved back on themselves. The squamules range in color from reddish brown to red-tan, to pink to pinkish gray, and they are more concentrated and more scaly in the center of the cap, extending out of cream to light orange-pink to light pink-red floccose ground. The margin clasps the stem when young; at maturity it separates into triangular veil remnants (appendiculae) that measure 6–12 by 3–10 mm. The color of these appendiculae ranges from buff-white to faint pink. The flesh is 2–3 mm thick at the edge of the cap, 7–10 mm over the tubes, and 11–18 mm centrally. It is buff white to light yellow, and quickly turns bluish upon exposure to air. The tubes are 1–5 mm long at the margin, 10–20 mm in the center, and 4–6 mm at the stem. They are broadly and deeply depressed around the stem, of irregular lengths, bright yellow to olive-yellow to mustard-yellow, and also rapidly turn blue upon exposure. The pores are the same color as the tubes, and rapidly turn blue-green with pressure; they are angular, and there are about 0.5–1.5 pores per mm. The stem gradually becomes larger towards the base, to 10–19 mm wide. The top part of the stem is cream to pink, the middle finely longitudinally striate, with the striations darkening with handling, red-lavender to brown-red, lighter with age. Immediately above the basal tomentum the stem surface is cream-colored with few striations. The basal tomentum is made of stiff, coarse white hairs over the lower 6–50 mm. The flesh of the stem is solid (i.e., not hollow) white to buff-tan to light yellow, and turns slightly blue with exposure. The odor is not distinctive (although it has been described as "musty") and the taste is mild.
Microscopic characteristics
The spores are olivaceous-brown in medium to heavy deposit. They are inamyloid, almond-shaped, contain one or more oil droplets, and measure 17.5–22.2 by 6.4–8 μm. The spore wall is 0.5–1 μm thick, with 12–14 longitudinal ridges. These ridges are less than 1 μm tall, occasionally bifurcating, converging at poles, with minute cross-striae. Although these cross-striae are visible when observed with light microscopy, they are not evident when viewed with scanning electron microscopy. The hilar appendage (the region of a spore which attaches to the basidium via the sterigma) is 0.3–1 μm long. The basidia are four-spored, club-shaped, and have numerous refractive globules; they measure 39–57 by 11–15 μm. The pleurocystidia (cystidia on the face of a gill) are 42–47 by 8–12 μm, swollen and beaked, slightly capitate. They are abundant, arising from the subhymenium, projecting 19.3–29.6 μm above the hymenial palisade, thin-walled, hyaline, and devoid of refractive contents. The cheilocystidia (cystidia on the edge of a gill) are 19–42 by 5–11 μm, swollen, cylindrical to narrowly club-shaped, thin-walled, and infrequent. The flesh of the hymenium is boletoid and strongly divergent (composed of different tissue layers). The mediostratum (middle tissue layer) is 24.7–45.7 μm wide, and made of many parallel, slightly interwoven hyphae. The lateral stratum hyphae are 4.4–8.4 μm wide, hyaline, gelatinized in a dilute solution of potassium hydroxide (KOH), and regularly septate. The cap cuticle is a densely interwoven trichodermial palisade (erect, roughly parallel chains of closely packed cells) of cylindrical elements with inflated terminal cells. The terminal cells are 23.5–51.9 by 9.4–16.8 μm, inamyloid, cylindrical to club-shaped, interwoven, and concentrated on the squamules. The marginal appendiculae are composed of wefts of interwoven inflated hyphae, some with faint golden spirally arranged encrusting pigments that are evident when mounted in water, KOH, and Melzer's reagent. The flesh of the cap is composed of highly interwoven hyphae measuring 7.4–11.1 μm wide that are hyaline in water, gelatinized and hyaline in KOH, and regularly septate. The stipitipellis (stem cuticle) is a trichodermial palisade of cylindrical elements with inflated terminal cells. The terminal cells project 30.4–63 μm, and they are cylindrical to club-shaped, occasionally with an abrupt tapering point. The flesh of the stem is made of densely interwoven hyphae that are 4.9–7.2 μm wide, with spirally arranged, faint golden encrusting pigments that can be seen in KOH, Melzer's reagent, and water. Clamp connections are absent in this species.
Varieties
The typical variety of Boletellus ananas has consistently larger fruit bodies than B. ananas var. minor Singer from Brazil and Nicaragua, and lacks the thick-walled cheilocystidia of B. ananas var. crassotunicatus Singer from Nicaragua and Panama.
Edibility
Although the mushroom is used as a food in Mexico, field guides list it as "inedible" or "not recommended".
Similar species
Strobilomyces strobilaceus is roughly similar in appearance because of its rough scaly cap and lacerated margin, but may be distinguished from B. ananas by smooth stem without a ring, different spores, and flesh that is less tough. The Australian species Boletellus ananiceps has spores with narrow longitudinal ribs that do not have cross-striae. B. dissiliens has colors that are not red as in B. ananas, and pores that can become reddish in maturity. Further, the cap flesh of B. dissiliens turns blue upon exposure to air.
Ecology, habitat and distribution
The fruit bodies of B. ananas typically grow scattered or in groups under oak and pine trees, often on their bases. In Guyana, the mushroom typically fruits singly or in pairs above ground level on the trunks of the tropical tree Dicymbe corymbosa (subfamily Caesalpinioideae), associated with ectomycorrhizas within humic accumulations. It is rarely found fruiting on the ground on heavily decayed, root-penetrated wood. Rolf Singer suggested that the fungus was not mycorrhizal, noting that as well as occurring under or on the bases of both pine and oaks, it occurred in scanty humus and debris accumulated on rock walls. Singer concluded that the species prefers to grow on hard surfaces. Harry D. Thiers, in his study of the bolete flora of Texas, wrote that B. ananas was a rare species that often fruited abundantly following an extended period of rain and high humidity.
Some varieties of B. ananas from southeastern North America, Costa Rica, Brazil, Panama, Nicaragua, and Guyana have been noted to fruit on tree trunks, although terrestrial fruiting has been reported in Malaysia and Central America. Due to the typically elevated fruiting habit and occurrence on dead wood, the ectomycorrhizal status of B. ananas has been debated; in the protolog Murrill noted "it always occurs either as a wound parasite on pine trunks or about the base of living pine trees". All collections have been made in association with ectotrophic host trees including Pinus and Quercus species in southeastern North America and Central America, Quercus humboldtii in Colombia, various Fagaceae and Dipterocarpaceae species in Malaysia, and Leptospermum and Pinus species in New Zealand. In Guyana, the humic deposits on Dicymbe trunks bearing B. ananas are consistently permeated with abundant ectomycorrhizas. The fungus was reported as forming mycorrhizal associations with eucalypts in Australia, based on fruit body association with trees.
Its North American distribution encompasses a range extending north from North Carolina to Florida, west to Texas and south to Mexico, and Central America. In 2008, it was reported for the first time in the Upper Potaro and Upper Ireng River Basins in Guyana. It has also been collected from New Zealand, Asia (including China, Korea, Malaysia, and Taiwan), and possibly Australia.
See also
List of North American boletes
References
External links
Mushroom Observer Images
ananas
Fungi described in 1848
Fungi of Australia
Fungi of New Zealand
Fungi of North America
Fungi of South America
Fungus species
Taxa named by Moses Ashley Curtis | Boletellus ananas | Biology | 2,718 |
10,923,250 | https://en.wikipedia.org/wiki/Southern%20marsupial%20mole | The southern marsupial mole (Notoryctes typhlops), also known as the itjaritjari or itjari-itjari, is a mole-like marsupial found in the western central deserts of Australia. It is highly adapted to a burrowing way of life. It has large, shovel-like forepaws and silky fur, which helps it move easily. It also lacks complete eyes as it has little need for them. It feeds on earthworms and larvae.
History of discovery
Although the southern marsupial mole was probably known by Aboriginal Australians for thousands of years, the first specimen examined by the scientific community was collected in 1888. Stockman W. Coulthard made the discovery on Idracowra Pastoral Lease in the Northern Territory by following some unusual prints that led him to the animal lying under a tussock. Not knowing what to do with the strange creature, he wrapped it in a kerosene soaked rag, placed it in a revolver cartridge box and forwarded it to E. C. Stirling, the Director of the South Australian Museum. Due to the poor transportation conditions of the time, the specimen reached its destination in a badly decomposed state. Hence, Stirling was unable to find any evidence of the pouch or epipubic bones and decided the creature was not a marsupial.
Nineteenth century scientists believed that marsupials and eutherians had evolved from the same primitive ancestor and were looking for a living specimen that would serve as the missing link. Because the marsupial mole closely resembled the golden moles of Africa, some scientists concluded that the two were related and that they had found the proof. This, however, is not the case, as became obvious by examining better-preserved specimens that had a marsupial pouch. The striking similarities of the two species are, in fact, the result of convergent evolution.
Taxonomy and phylogeny
Although the family Notoryctidae is poorly represented in the fossil record there is evidence of at least one distinct genus Yalkaparidon, in the early Miocene sediments in the Riversleigh deposit in northern Australia.
Due to their highly specialized morphology and the fact that notoryctids share many common characteristics with other marsupials, there has been much debate surrounding their phylogeny. However, recent molecular studies indicate that notoryctids are not closely related to any of the other marsupial families and should be placed in an order of their own, Notoryctemorphia.
Furthermore, molecular data suggests that Notoryctemorphia separated from other marsupials around 64 million years ago. Although at this time South America, Antarctica and Australia were still joined, the order evolved in Australia for at least 40–50 million years. The Riversleigh fossil material suggests that Notoryctes was already well adapted for burrowing and probably lived in the rainforest that covered much of Australia at that time. The increase in aridity at the end of the Tertiary was likely one of the key contributing factors to the development of the current highly specialized form of marsupial mole. The marsupial mole had been burrowing long before the Australian deserts came into being.
Morphology
The southern marsupial mole is small in size, with a head and body length of , a tail length of and a weight of . The body is covered with short, dense, silky fur with a pale cream to white color often tinted by the iron oxides from the soil which gives it a reddish chestnut brown tint. It has a light brownish pink nose and mouth and no vibrissae.
The cone shaped head merges directly with the body, and there is no obvious neck region. The limbs are short and powerful, and digits III and IV of the manus have large spade-like claws. The dentition varies with individuals and, because the molars have a root of only one third of the length, it has been assumed that moles cannot deal with hard food substances.
The dorsal surface of the rostrum and the back of the tail have no fur and the skin is heavily keratinized. There is no external evidence of the eyes, and the optic nerve is absent. It does, however, have a pigment layer where the eyes should be, probably a vestige of the retina. Both lachrymal glands and Jacobson's organ are well developed, and it has been suggested that the former plays a role in lubricating the nasal passages and Jacobson's organ.
The external ear openings are covered with fur and do not have pinnae. The nostrils are small vertical slits right below the shield-like rostrum. Although the brain has been regarded as very primitive and represents the "lowliest marsupial brain", the olfactory bulbs and the tubercula olfactoria are very well developed. This seems to suggest that the olfactory sense plays an important role in the marsupial mole’s life, as it would be expected for a creature living in an environment lacking visual stimuli. The middle ear seems to be adapted for the reception of low-frequency sounds.
Adaptations
In an example of convergent evolution, the southern marsupial mole resembles the Namib Desert golden mole (Eremitalpa granti namibensis) and other specialised fossorial animals in having a low and unstable body temperature. It does not have an unusually low resting metabolic rate, and the metabolic rate of burrowing is 60 times higher than that of walking or running. Because it lives underground, where the temperature is considerably lower than at the surface, the southern marsupial mole does not seem to have any special adaptations to desert life. It is not known whether it drinks water or not, but due to the irregularity of rainfall it is assumed that it does not.
Habitat and distribution
The habitat of the southern marsupial mole is not well known, and is generally based on scattered records. It has been often recorded in sandy dunes or flats, usually where spinifex is present. Its habitat seems to be restricted to areas where the sand is soft, as it cannot tunnel through harder materials. Although little is known about its exact distribution, sightings, aboriginal informants and museum records indicate that it lives in the central sandy desert regions of Western Australia, northern South Australia and the Northern Territory. Recent studies indicate that its habitat also includes Great Victoria and Gibson Deserts.
Behavior
Due to the lack of any field studies regarding the marsupial moles, there is little known about their behavior. Observations of captive animals are limited since most of the moles do not survive much longer than a month after capture.
Surface behavior
It sometimes wanders above the surface where traces of several animals have been found. While most evidence indicates that it does this seldom and moves just a few meters before burrowing back underground, on some occasions multiple tracks were found suggesting that one or more animals have moved above ground for several hours. According to Aboriginal sources, marsupial moles may surface at any time of day, but seem to prefer to do so after rain and in the cooler season.
Captive animals have been observed to feed above ground and then return underground to sleep. Occasionally it has been recorded to suddenly "faint" on the surface without waking up for several hours until disturbed.
Above the ground it moves in a sinuous fashion, using its powerful forelimbs to haul the body over the surface and its hind limbs to push forward. The forelimbs are extended forward in unison with the opposite hind limb. Moles move about the surface with frantic haste but little speed; one observer likened the motion to a "Volkswagen Beetle heaving its way through the sand".
Burrowing behavior
While burrowing, the southern marsupial mole does not make permanent tunnels, but the sand caves in and tunnels back-fill as the animal moves along. For this reason its burrowing style has been compared to "swimming through the sand". The only way its tunnels can be identified is as a small oval shape of loose sand. Although it spends most of its active time 20–100 cm below the surface, tunneling horizontally or at shallow angles, it sometimes for no apparent reason turns suddenly and burrows vertically to depths of up to 2.5 meters.
Although most food sources are likely to occur at depths of approximately 50 cm from the surface, the temperature of these environments varies greatly from less than 15°C during winter to over 35°C during summer. While one of the captive moles was observed shivering when the temperature dropped under 16°C, it seems probable that moles can select the temperature of their environment by burrowing at different depths.
Diet
Little is known about the southern marsupial mole's diet, and all information is based on the gut content of preserved animals and on observations made on captive specimens. All evidence seems to suggest that the mole is mainly insectivorous, preferring insect eggs, larvae and pupae to the adults. Based on observations made on captive animals, it seems that one of the favorite food choices was beetle larvae, especially Scarabaeidae. Because burrowing requires high energy expenditure it seems unlikely that the mole searches for its food in this prey impoverished environment, and suggests that it probably feeds within nests. It has been also recorded to eat adult insects, seeds and lizards. Below the desert sands of Australia, the marsupial mole searches for burrowing insects and small reptiles. Instead of building a tunnel, it "swims" through the ground, allowing the sand to collapse behind it.
Social behavior
There is little known about the social and reproductive behavior of these animals, but all evidence seems to suggest that it leads a solitary life. There are no traces of large burrows where more than one individual might meet and communicate. Although it is not known how the male locates the female, it is assumed that they do so using their highly developed olfactory sense.
The fact that the middle ear seems to be morphologically suited for capturing low frequency sounds, and that moles produce high pitched vocalizations when handled, indicates that this kind of sound that propagates more easily underground may be used as a form of communication.
Human interactions
The southern marsupial mole was known for thousands of years to Australia’s Indigenous people and was part of their mythology. It was associated with certain sites and dreaming trails such as Uluru and the Anangu-Pitjantjatjara Lands. They were regarded with sympathy, probably due to its harmless nature, and were only eaten during hard times.
Aboriginal people have good tracking skills and generally cooperate with researchers in teaching them these skills and help finding specimens. Their involvement is instrumental in gathering information about the species’ habitat and behavior.
Historical records suggest that the southern marsupial mole was relatively common in the late 19th century and early 20th century. There was a large trade in marsupial mole skins in the Finke River region between 1900 and 1920. Large numbers of aborigines arrived at the trading post with 5-6 pelts each for sale to trade for food and other commodities. It is estimated that hundreds to several thousand skins were traded at these meetings, and that at the time the mole was relatively common.
Conservation status
So little is known about the southern marsupial mole that it is difficult to assess its exact distribution and how it varied over the last decades. However circumstantial evidence suggests that their numbers are dwindling. Although the decreasing acquisition rate is difficult to interpret due to the chance nature of the findings, there are reasons for concern. About 90% of medium-sized marsupials in arid Australia have become threatened due to cat and fox predation. A recent study indicates that remains of marsupial moles have been found in 5% of the cats and foxes faecal pellets examined. Moles are also sensitive to changes in the availability of their food caused by changing fire regimes and the impact of herbivores. The southern marsupial mole is currently listed as endangered by the IUCN. Efforts to protect this species focus on advocating for maintaining a healthy population of moles to better understand their biology and behavior, and for conducting field studies to monitor the species distribution and abundance with the help of Aborigines.
References
External links
Facts and Status from Arkive
Marsupial Mole from marsupialsociety.org
Southern Marsupial Mole from environment.gov.au
Notoryctidae
Marsupials of Australia
Mammals of South Australia
Mammals of the Northern Territory
EDGE species
Endangered fauna of Australia
Mammals described in 1889
Species that are or were threatened by invasive species
Taxa named by Edward Charles Stirling | Southern marsupial mole | Biology | 2,571 |
25,477,746 | https://en.wikipedia.org/wiki/MTConnect | MTConnect is a manufacturing technical standard to retrieve process information from numerically controlled machine tools. As explained by a member of the team that developed it, "This standard specifies the open-source, royalty-free communications protocol based on XML and HTTP Internet technology for real-time data sharing between shopfloor equipment such as machine tools and computer systems. MTConnect provides a common vocabulary with standardized definitions for the meaning of data that machine tools generate, making the data interpretable by software applications." A simple, real-world example of how this tool is used to improve shop management is given by the same author.
History
The initiative began as a result of lectures given by David Edstrom of Sun Microsystems and David Patterson, professor of Computer Science at the University of California, Berkeley (UCB) at the 2006 annual meeting of the Association for Manufacturing Technology (AMT). The two lectures promoted an open communication standard to enable Internet connectivity to manufacturing equipment.
Initial development was carried out by a joint effort between the UCB Electrical Engineering and Computer Sciences (EECS) department, the UCB Mechanical Engineering (ME) department (both in the College of Engineering) and the Georgia Institute of Technology, using input from industry representatives. The resulting standard is available under royalty-free licensing terms.
Description
MTConnect is a protocol designed for the exchange of data between shop floor equipment and software applications used for monitoring and data analysis. MTConnect is referred to as a read-only standard, meaning that it only defines the extraction (reading) of data from control devices, not the writing of data to a control device. Freely available, open standards are used for all aspects of MTConnect. Data from shop floor devices is presented in XML format, and is retrieved from information providers, called Agents, using Hypertext Transfer Protocol (HTTP) as the underlying transport protocol. MTConnect provides a RESTful interface, which means the interface is stateless. No session must be established to retrieve data from an MTConnect Agent, and no logon or logoff sequence is required (unless overlying security protocols are added which do). Lightweight Directory Access Protocol (LDAP) is recommended for discovery services.
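Because an Agent is an ordinary HTTP server returning XML, a polling client can be very small. The sketch below assumes a hypothetical Agent address and filters on two illustrative data-item element names; it is meant as an outline of the request/parse cycle, not as code taken from the standard:

```python
import urllib.request
import xml.etree.ElementTree as ET

AGENT = "http://localhost:5000"  # hypothetical Agent address

# A stateless GET on /current returns an MTConnectStreams document with
# the most recent value of every data item the Agent publishes.
with urllib.request.urlopen(AGENT + "/current") as response:
    root = ET.fromstring(response.read())

# Element tags are namespace-qualified, so compare only the local name.
for element in root.iter():
    name = element.tag.rsplit("}", 1)[-1]
    if name in ("Availability", "Execution"):  # illustrative data items
        print(name, element.get("timestamp"), element.text)
```

A continuously collecting client would instead poll the /sample request with a sequence number to retrieve the stream of changes since its previous request.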
Version 1.0 was released in December 2008.
The first public demonstration of MTConnect occurred at the International Manufacturing Technology Show (IMTS) held in Chicago, Illinois September 2008. There, 25 industrial equipment manufacturers networked their machinery control systems, providing process information that could be retrieved from any web-enabled client connected to the network.
Subsequent demonstrations occurred at EMO (the European machine tool show) in Milan, Italy in October 2009, and the 2010 IMTS in Chicago.
Standard
The MTConnect standard has three sections. The first section provides information on the protocol and structure of the XML documents via XML schemas. The second section specifies the machine tool components and the description of the available data. The third and last section specifies the organization of the data streams that can be provided from a manufacturing device. The MTConnect Institute is considering adding a fourth section to support mobile assets that include tools and work-holdings.
MTConnect took an incremental approach to defining the requirements for manufacturing device communications. It did not exhaustively define every possible piece of data an application can collect from a manufacturing device, but it works forward from business and research objectives to define the required elements to meet those needs. The standard catalogued important components and data items for metal cutting devices. MTConnect provides an extensible XML schema to allow implementors to add custom data to meet their specific needs, while providing as much commonality as possible.
On September 16, 2010, The MTConnect Institute and the OPC Foundation announced cooperation between the respective organizations.
Applications
The maintenance cost and losses in productivity with unplanned downtime for machine tool components such as spindle bearings and ball screws could be reduced if one could proactively take action prior to failure. In addition, cutting tools and inserts are expensive to replace when they are still in good condition, but replacing the tools too late can be costly due to scrap and re-work. The proposed health monitoring application will use MTConnect to extract controller data and pattern recognition algorithms to assess the health condition of the spindle and machine tool axes. The health assessment approach is based on running a routine program each shift in which the most recent data patterns are compared to the baseline data patterns. An online tool condition monitoring module is also proposed and uses controller data such as the spindle motor current, with other add on sensors (vibration, acoustic emission) to accurately estimate and predict tool wear. With the added transparency of the machine tool health information, one can take proactive actions before significant downtime or productivity losses occur.
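A minimal version of the baseline-comparison idea described above can be sketched in a few lines. Everything here is illustrative: the numbers are made up, and a real system would compare richer data patterns than a mean load:

```python
import statistics

# Toy health check: compare recent spindle-load samples to a baseline
# run and flag a shift of more than 3 standard deviations.
baseline = [4.1, 4.3, 4.0, 4.2, 4.4, 4.1, 4.2, 4.3]  # assumed healthy run
recent   = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1]  # latest shift's data

mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
z = (statistics.mean(recent) - mu) / sigma

print(f"z = {z:.1f}", "-> investigate spindle" if abs(z) > 3 else "-> OK")
```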
References
External links
MTConnect - What is it? (introduction with information videos)
MTConnect Institute Home Page
Modern Machine Shop magazine article 'MTConnect Is For Real'
Control Design magazine article: MTConnect Standardizes Data, Lets Machines and Users Talk Same Language
Industrial computing
Industry-specific XML-based standards
Computer-aided manufacturing | MTConnect | Technology,Engineering | 1,053 |
769,148 | https://en.wikipedia.org/wiki/Ludwig%20Prandtl | Ludwig Prandtl (4 February 1875 – 15 August 1953) was a German fluid dynamicist, physicist and aerospace scientist. He was a pioneer in the development of rigorous, systematic mathematical analyses, which he used as the underpinning of the science of aerodynamics; these have come to form the basis of the applied science of aeronautical engineering. In the 1920s, he developed the mathematical basis for the fundamental principles of subsonic aerodynamics in particular, and in general up to and including transonic velocities. His studies identified the boundary layer, thin-airfoil, and lifting-line theories. The Prandtl number was named after him.
Early years
Prandtl was born in Freising, near Munich, on 4 February 1875. His mother suffered from a lengthy illness and, as a result, Ludwig spent more time with his father, a professor of engineering. His father also encouraged him to observe nature and think about his observations.
Prandtl entered the Technische Hochschule Munich in 1894 and graduated with a Ph.D. under the guidance of Professor August Foeppl in six years. His thesis was "On Tilting Phenomena, an Example of Unstable Elastic Equilibrium" (1900).
After university, Prandtl went to work in the Maschinenfabrik Augsburg-Nürnberg to improve a suction device for shavings removal in the manufacturing process. While working there, he discovered that the suction tube did not work because the lines of flow separated from the walls of the tube, so the expected pressure rise in the sharply-divergent tube never occurred. This phenomenon had been previously noted by Daniel Bernoulli in a similar hydraulic case. Prandtl recalled that this discovery led to the reasoning behind his boundary-layer approach to resistance in slightly-viscous fluids.
Later years
In 1901 Prandtl became a professor of fluid mechanics at the technical school in Hannover, later the Technical University Hannover and then the University of Hannover. It was here that he developed many of his most important theories. On August 8, 1904, he delivered a groundbreaking paper, Über Flüssigkeitsbewegung bei sehr kleiner Reibung (On the Motion of Fluids in Very Little Friction), at the Third International Mathematics Congress in Heidelberg. In this paper, he described the boundary layer and its importance for drag and streamlining. The paper also described flow separation as a result of the boundary layer, clearly explaining the concept of stall for the first time. Several of his students made attempts at closed-form solutions, but failed, and in the end the approximation contained in his original paper remains in widespread use.
The effect of the paper was so great that Prandtl would succeed Hans Lorenz as director of the Institute for Technical Physics at the University of Göttingen later in the year. In 1907, during his time at Göttingen, Prandtl was tasked with establishing a new facility for model studies of motorized airships called Motorluftschiffmodell-Versuchsanstalt (MVA), later the Aerodynamische Versuchsanstalt (AVA) in 1919. The facility was focused on wind tunnel measurements of airship models with the goal of finding shapes with minimal air resistance. During WWI, it was used as a large research establishment with many tasks including lift and drag on airfoils, aerodynamics of bombs, and cavitation on submarine propeller blades. In 1925, the university spun off his research arm to create the Kaiser Wilhelm Institute for Flow Research (now the Max Planck Institute for Dynamics and Self-Organization).
Due to the complexity of Prandtl's boundary layer ideas in his 1904 paper, the spread of the concept was initially slow. Many people failed to adopt the idea due to lack of understanding. There was a halt on new boundary layer discoveries until 1908, when two of his students at Göttingen, Blasius and Boltze, released their dissertations on the boundary layer. Blasius' dissertation explained what happened with the boundary layer when a flat plate comes in parallel contact with a uniform stream. Boltze's research was similar to Blasius' but applied Prandtl's theory to spherical shapes instead of flat objects. Prandtl expanded upon the ideas in his students' dissertations to include a thermal boundary layer associated with heat transfer.
There would be three more papers from Göttingen researchers regarding the boundary layer released by 1914. For similar reasons to Prandtl's 1904 paper, these first seven papers on the boundary layer would be slow to spread outside of Göttingen. Partially due to World War I, there would be a lack of papers published regarding the boundary layer until another of Prandtl's students, Theodore von Kármán, published a paper in 1921 on the momentum integral equation across the boundary layer.
Following earlier leads by Frederick Lanchester from 1902–1907, Prandtl worked with Albert Betz and Max Munk on the problem of a useful mathematical tool for examining lift from "real world" wings. The results were published in 1918–1919, known as the Lanchester–Prandtl wing theory. He also made specific additions to study cambered airfoils, like those on World War I aircraft, and published a simplified thin-airfoil theory for these designs. This work led to the realization that on any wing of finite length, wing-tip effects became very important to the overall performance and characterization of the wing. Considerable work was included on the nature of induced drag and wingtip vortices, which had previously been ignored. Prandtl showed that an elliptical spanwise lift distribution is the most efficient, giving the minimum induced drag for the given span. These tools enabled aircraft designers to make meaningful theoretical studies of their aircraft before they were built.
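In the lifting-line framework, the optimality of the elliptical distribution can be stated compactly. The following is the standard modern form of the induced-drag result, not a quotation from Prandtl's own papers:

$$C_{D,i} = \frac{C_L^2}{\pi \, e \, AR}$$

where $C_L$ is the lift coefficient, $AR$ the wing aspect ratio, and $e \le 1$ the span efficiency factor, which equals exactly 1 for the elliptical lift distribution, making it the minimum-induced-drag case.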
Prandtl later extended his theory to describe a bell-like lift distribution, reducing the loads near the tip of the wings by washing out the wing tips until negative downwash was obtained, which gave the minimum induced drag for any given wing structural weight. However, this new lift distribution drew less interest than the elliptical distribution and was initially ignored in most practical aircraft designs. This concept has been rediscovered by other researchers and has become increasingly important (see also the Prandtl-D experimental aircraft).
Prandtl and his student Theodor Meyer developed the first theories of supersonic shock waves and flow in 1908. The Prandtl–Meyer expansion fans allowed for the construction of supersonic wind tunnels. He had little time to work on the problem further until the 1920s, when he worked with Adolf Busemann and created a method for designing a supersonic nozzle in 1929. Today, all supersonic wind tunnels and rocket nozzles are designed using the same method. A full development of supersonics would have to wait for the work of Theodore von Kármán, a student of Prandtl at Göttingen.
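The quantitative core of that work is the Prandtl–Meyer function, which gives the angle through which a sonic flow must turn to expand isentropically to Mach number $M$. In its standard modern form, with $\gamma$ the ratio of specific heats:

$$\nu(M) = \sqrt{\frac{\gamma+1}{\gamma-1}} \arctan\sqrt{\frac{\gamma-1}{\gamma+1}\left(M^2-1\right)} - \arctan\sqrt{M^2-1}$$

Supersonic nozzle contours are designed by turning the flow through prescribed increments of $\nu$.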
Prandtl developed the concept of "circulation" which proved to be particularly important for the hydrodynamics of ship propellers. He did most of the experimental work at his lab in Göttingen from 1910-1918 with his assistant Albert Betz and student Max Munk. Most of his discoveries related to circulation would be kept secret from the western world until after World War I.
Prior to World War I, the Society of German Natural Scientists and Physicians (GDNÄ) was the only forum in which applied mathematicians, physicists, and engineers in German-speaking countries could meet and discuss their work. In 1920, they met in Bad Nauheim and came to the conclusion that their wartime experience showed the need for a new umbrella organization for the applied sciences. In the same year, physicists primarily from industrial laboratories formed a new society, the German Society for Technical Physics (DGTP). In September 1921, the two societies held a meeting with the German Mathematical Society (DMV) in Jena. In its first volume, ZAMM (Journal of Applied Mathematics and Mechanics) stated that at this meeting, "for the first time, applied mathematics and mechanics was coming to its own to a larger extent". The journal advertised the common goals of Prandtl, Theodore von Kármán, Richard von Mises, and Hans Reissner.
On top of the foundation of ZAMM, the GAMM (International Association of Applied Mathematics and Mechanics) was also formed due to the joint efforts of Prandtl and his peers. After these initial meetings of GAMM, it became clear that there was now a new international community of mathematicians, "scientific engineers", and physicists.
Other work examined the problem of compressibility at high subsonic speeds, known as the Prandtl–Glauert correction. This became very useful during World War II as aircraft began approaching supersonic speeds for the first time. He also worked on meteorology, plasticity and structural mechanics. He also made significant contributions to the field of tribology.
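In its usual textbook statement, the Prandtl–Glauert correction rescales an incompressible-flow pressure coefficient $C_{p,0}$ by a factor that depends on the freestream Mach number $M_\infty$:

$$C_p = \frac{C_{p,0}}{\sqrt{1 - M_\infty^2}}$$

The factor diverges as $M_\infty \to 1$, which marks the subsonic limit of the correction's validity.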
Following Prandtl's investigation into instabilities from 1921 to 1929, he then moved to exploring developed turbulence. This was also being investigated by Kármán, resulting in a race to formulate a solution for the velocity profile in developed turbulence. Regarding the professional rivalry that started between the two, Kármán commented: “I came to realize that ever since I had come to Aachen my old professor and I were in a kind of world competition. The competition was gentlemanly, of course. But it was first-class rivalry nonetheless, a kind of Olympic games, between Prandtl and me, and beyond that between Göttingen and Aachen. The ‘playing field’ was the Congress of Applied Mechanics. Our ‘ball’ was the search for a universal law of turbulence.” Around 1930, the race ended in a draw: both men concluded that the inverse square root of the skin-friction coefficient varies with the logarithm of the product of the Reynolds number and the skin-friction coefficient, as seen below, where k and C are constants:

$$\frac{1}{\sqrt{c_f}} = k \, \log\left( Re \, c_f \right) + C$$
Prandtl and von Kármán's work on the boundary layer was influential and adopted by aerodynamic and hydrodynamic experts around the world after WWI. In May 1932, the International Conference on Hydromechanical Problems of Ship Propulsion was held in Hamburg. Günther Kempf showcased a number of experiments at the conference which confirmed many of the theoretical discoveries of von Kármán and Prandtl.
Prandtl and the Third Reich
After Hitler's rise to power and the establishment of the Third Reich, Prandtl continued his role as director of the Kaiser Wilhelm Society. During this period, the Nazi air ministry, led by Hermann Göring, often used Prandtl's international reputation as a scientist to promote Germany's scientific agenda. Prandtl appears to have happily served as an ambassador for the Nazi regime, writing in 1937 to a NACA representative "I believe that Fascism in Italy and National Socialism in Germany represent very good beginnings of new thinking and economics." Prandtl's support for the regime is apparent in his letters to G. I. Taylor and his wife in 1938 and 1939. Referring to Nazi Germany's treatment of Jews, Prandtl wrote "The struggle, which Germany unfortunately had to fight against the Jews, was necessary for its self-preservation." Prandtl also claimed that "If there will be war, the guilt to have caused it by political measures is this time unequivocally on the side of England."
As a member of the German Physical Society (DPG), Prandtl assisted Carl Ramsauer in drafting the DPG Petition in 1941. The petition, published in 1942, argued that physics in Germany was falling behind that of the United States due to the rejection of "Jewish Physics" (relativity and quantum theory) by German physicists. After its publication, the insistence on the superiority of "German Physics" weakened enough to allow German students to study these new fields.
Publications
Paul Peter Ewald, Theodor Pöschl, Ludwig Prandtl; authorized translation by J. Dougall and W.M. Deans The Physics of Solids and Fluids: With Recent Developments Blackie and Son (1930).
Death and afterwards
Prandtl worked at Göttingen until he died on 15 August 1953. His work in fluid dynamics is still used today in many areas of aerodynamics and chemical engineering. He is often referred to as the father of modern aerodynamics.
The crater Prandtl on the far side of the Moon is named in his honor.
The Ludwig-Prandtl-Ring is awarded by Deutsche Gesellschaft für Luft- und Raumfahrt in his honor for outstanding contribution in the field of aerospace engineering.
In 1992, Prandtl was inducted into the International Air & Space Hall of Fame at the San Diego Air & Space Museum.
Notable students
Jakob Ackeret
Albert Betz
Paul Richard Heinrich Blasius
Adolf Busemann
Kurt Hohenemser
Theodore von Kármán
Lu Shijia (Hsiu-Chen Chang-Lu)
Hubert Ludwieg
Hilda M. Lyon (1932–33)
Hans Multhopp
Max Munk
Johann Nikuradse
Reinhold Rudenberg
Hermann Schlichting
Walter Tollmien
Victor Vâlcovici
Vishnu Madav Ghatage
Karl Wieghardt
Theodor Meyer
See also
Tesla turbine
Particle image velocimetry
Wind tunnel
Subsonic and transonic wind tunnel
Pitot tube
Prandtl's one-seventh-power law
NASA research aircraft, Prandtl-D (Preliminary Research Aerodynamic Design to Lower Drag) and Prandtl-M (Preliminary Research Aerodynamic Design to Land on Mars), both backronyms honoring Prandtl
References
External links
Ludwig Prandtl's Biography in German, , 258 pages
Ludwig Prandtl's Biography in English, , 265 pages
Ludwig Prandtl's Boundary Layer
Video recording of the E. Bodenschatz's lecture on life and work of Ludwig Prandtl
1875 births
1953 deaths
Aerodynamicists
Commanders Crosses of the Order of Merit of the Federal Republic of Germany
German fluid dynamicists
German theoretical physicists
20th-century German physicists
People from Freising
Recipients of the Knights Cross of the War Merit Cross
Technical University of Munich alumni
Academic staff of the University of Göttingen
Foreign members of the Royal Society
Tribologists
Max Planck Institute directors
RWTH Aachen University alumni | Ludwig Prandtl | Materials_science | 2,894 |
3,282,143 | https://en.wikipedia.org/wiki/Robust%20control | In control theory, robust control is an approach to controller design that explicitly deals with uncertainty. Robust control methods are designed to function properly provided that uncertain parameters or disturbances are found within some (typically compact) set. Robust methods aim to achieve robust performance and/or stability in the presence of bounded modelling errors.
The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness, prompting research to improve them. This was the start of the theory of robust control, which took shape in the 1980s and 1990s and is still active today.
In contrast with an adaptive control policy, a robust control policy is static: rather than adapting to measurements of variations, the controller is designed to work on the assumption that certain variables will be unknown but bounded.
Criteria for robustness
Informally, a controller designed for a particular set of parameters is said to be robust if it also works well under a different set of assumptions. High-gain feedback is a simple example of a robust control method; with sufficiently high gain, the effect of any parameter variations will be negligible. From the closed-loop transfer function perspective, high open-loop gain leads to substantial disturbance rejection in the face of system parameter uncertainty. Other examples of robust control include sliding mode and terminal sliding mode control.
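The robustness of high-gain feedback can be illustrated with a toy calculation. The sketch below, in Python with arbitrary illustrative numbers, evaluates the closed-loop DC gain Kk/(1 + Kk) of a unity-feedback loop for an uncertain plant gain k and several controller gains K:

```python
# Closed-loop DC gain of a unity-feedback loop: y/r = K*k / (1 + K*k).
# As the controller gain K grows, the closed-loop gain approaches 1
# regardless of the (uncertain) plant gain k -- the essence of
# high-gain robustness.
for K in (1.0, 10.0, 100.0):
    gains = [K * k / (1.0 + K * k) for k in (0.5, 1.0, 2.0)]  # k varies 4x
    print(f"K = {K:6.1f}:", [round(g, 3) for g in gains])
```

For K = 100 the closed-loop gain stays within about 2% of unity even as the plant gain varies by a factor of four.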
The major obstacle to achieving high loop gains is the need to maintain system closed-loop stability. Loop shaping which allows stable closed-loop operation can be a technical challenge.
Robust control systems often incorporate advanced topologies which include multiple feedback loops and feed-forward paths. The control laws may be represented by high-order transfer functions required to simultaneously accomplish desired disturbance rejection performance with robust closed-loop operation.
High-gain feedback is the principle that allows simplified models of operational amplifiers and emitter-degenerated bipolar transistors to be used in a variety of different settings. This idea was already well understood by Bode and Black in 1927.
The modern theory of robust control
The theory of robust control began in the late 1970s and early 1980s and soon developed a number of techniques for dealing with bounded system uncertainty.
Probably the most important example of a robust control technique is H-infinity loop-shaping, which was developed by Duncan McFarlane and Keith Glover of Cambridge University; this method minimizes the sensitivity of a system over its frequency spectrum, and this guarantees that the system will not greatly deviate from expected trajectories when disturbances enter the system.
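The quantity minimized in such designs has a precise definition; for a stable transfer matrix $G$, the $H_\infty$ norm is the worst-case gain over frequency:

$$\|G\|_\infty = \sup_{\omega} \bar{\sigma}\left(G(j\omega)\right)$$

where $\bar{\sigma}$ denotes the largest singular value. Loop-shaping then seeks a controller that keeps this worst-case gain of the relevant closed-loop transfer functions below a prescribed bound.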
An emerging area of robust control from application point of view is sliding mode control (SMC), which is a variation of variable structure control (VSC). The robustness properties of SMC with respect to matched uncertainty as well as the simplicity in design attracted a variety of applications.
While robust control has traditionally been handled with deterministic approaches, in the last two decades this approach has been criticized as too rigid to describe real uncertainty and as often leading to overly conservative solutions. Probabilistic robust control has been introduced as an alternative, interpreting robust control within the so-called scenario optimization theory.
Another example is loop transfer recovery (LQG/LTR), which was developed to overcome the robustness problems of linear-quadratic-Gaussian (LQG) control.
Other robust techniques include quantitative feedback theory (QFT), passivity-based control, Lyapunov-based control, etc.
When system behavior varies considerably in normal operation, multiple control laws may have to be devised. Each distinct control law addresses a specific system behavior mode. An example is a computer hard disk drive. Separate robust control system modes are designed in order to address the rapid magnetic head traversal operation, known as the seek, a transitional settle operation as the magnetic head approaches its destination, and a track following mode during which the disk drive performs its data access operation.
One of the challenges is to design a control system that addresses these diverse system operating modes and enables smooth transition from one mode to the next as quickly as possible.
Such a state machine-driven composite control system is an extension of the gain scheduling idea, where the entire control strategy changes based upon changes in system behavior.
See also
Control theory
Control engineering
Fractional-order control
H-infinity control
H-infinity loop-shaping
Sliding mode control
Intelligent control
Process control
Robust decision making
Root locus
Servomechanism
Stable polynomial
State space (controls)
System identification
Stability radius
Iso-damping
Active disturbance rejection control
Quantitative feedback theory
References
Further reading
Control theory
Stochastic control | Robust control | Mathematics | 924 |
16,693,506 | https://en.wikipedia.org/wiki/Pathogenic%20fungus | Pathogenic fungi are fungi that cause disease in humans or other organisms. Although fungi are eukaryotic, many pathogenic fungi are microorganisms. Approximately 300 fungi are known to be pathogenic to humans; their study is called "medical mycology". Fungal infections are estimated to kill more people than either tuberculosis or malaria—about two million people per year.
In 2022 the World Health Organization (WHO) published a list of fungal pathogens which should be a priority for public health action.
Markedly more fungi are known to be pathogenic to plant life than those of the animal kingdom. The study of fungi and other organisms pathogenic to plants is called plant pathology.
Pathogens of particular concern
According to the World Health Organization (WHO) in 2022 pathogens of particular concern are:
Critical priority: Cryptococcus neoformans, Candida auris, Aspergillus fumigatus, Candida albicans.
High priority: Nakaseomyces glabrata (Candida glabrata), Histoplasma spp., eumycetoma causative agents, Mucorales, Fusarium spp., Candida tropicalis, Candida parapsilosis.
Medium priority: Scedosporium spp., Lomentospora prolificans, Coccidioides spp., Pichia kudriavzevii (Candida krusei), Cryptococcus gattii, Talaromyces marneffei, Pneumocystis jirovecii, Paracoccidioides spp.
Candida
Candida species cause infections in individuals with deficient immune systems. Candida species tend to be the culprits in most fungal infections and can cause both systemic and superficial infection. Th1-type cell-mediated immunity (CMI) is required for clearance of a fungal infection.
Candida albicans is a kind of diploid yeast that commonly occurs among the human gut microflora. C. albicans is an opportunistic pathogen in humans. Abnormal over-growth of this fungus can occur, particularly in immunocompromised individuals. C. albicans has a parasexual cycle that appears to be stimulated by environmental stress.
C. auris, first described in 2009, is resistant to many frontline antifungal drugs, disinfectants, and heat, which makes it extremely difficult to eradicate. Like many fungal pathogens, it mostly affects immunocompromised people; when infection reaches the blood or other organs and tissues, mortality is about 50%.
Other species of Candida may be pathogenic as well, including Candida stellatoidea, C. tropicalis, C. pseudotropicalis, C. krusei, C. parapsilosis, and C. guilliermondii.
Aspergillus
The most common pathogenic species are Aspergillus fumigatus and Aspergillus flavus. Aspergillus flavus produces aflatoxin which is both a toxin and a carcinogen and which can potentially contaminate foods such as nuts. Aspergillus fumigatus and Aspergillus clavatus can cause allergic disease. Some Aspergillus species cause disease on grain crops, especially maize, and synthesize mycotoxins including aflatoxin. Aspergillosis is the group of diseases caused by Aspergillus. The symptoms include fever, cough, chest pain or breathlessness. Usually, only patients with weakened immune systems or with other lung conditions are susceptible.
The spores of Aspergillus fumigatus are ubiquitous in the atmosphere. A. fumigatus is an opportunistic pathogen. It can cause potentially lethal invasive infection in immunocompromised individuals. A. fumigatus has a fully functional sexual cycle that produces cleistothecia and ascospores.
Cryptococcus
Cryptococcus neoformans can cause a severe form of meningitis and meningo-encephalitis in patients with HIV infection and AIDS. The majority of Cryptococcus species live in the soil and do not cause disease in humans. Cryptococcus neoformans is the major human and animal pathogen. Papiliotrema laurentii and Naganishia albida, both formerly referred to Cryptococcus, have been known to occasionally cause moderate-to-severe disease in human patients with compromised immunity. Cryptococcus gattii is endemic to tropical parts of the continent of Africa and Australia and can cause disease in non-immunocompromised people.
Infecting C. neoformans cells are usually phagocytosed by alveolar macrophages in the lung. The invading C. neoformans cells may be killed by the release of oxidative and nitrosative molecules by these macrophages. However some C. neoformans cells may survive within the macrophages. The ability of the pathogen to survive within the macrophages probably determines latency of the disease, dissemination and resistance to antifungal agents. In order to survive in the hostile intracellular environment of the macrophage, one of the responses of C. neoformans is to upregulate genes employed in responses to oxidative stress.
The haploid nuclei of C. neoformans can undergo nuclear fusion (karyogamy) to become diploid. These diploid nuclei may then undergo meiosis, including recombination, resulting in the formation of haploid basidiospores that are able to disperse. Meiosis may facilitate repair of C. neoformans DNA in response to macrophage challenge.
Histoplasma
Histoplasma capsulatum can cause histoplasmosis in humans, dogs and cats. The fungus is most prevalent in the Americas, India and southeastern Asia. It is endemic in certain areas of the United States. Infection is usually due to inhaling contaminated air.
Pneumocystis
Pneumocystis jirovecii (or Pneumocystis carinii) can cause a form of pneumonia in people with weakened immune systems, such as premature children, patients on immunosuppressive treatment, the elderly and AIDS patients.
Stachybotrys
Stachybotrys chartarum or "black mold" can cause respiratory damage and severe headaches. It frequently occurs in houses and in regions that are chronically damp.
Host defense mechanisms
Endothermy
Mammalian endothermy and homeothermy are potent nonspecific defenses against most fungi. A comparative genomic study found that opportunistic fungi have few if any specialised virulence traits consistently linked to opportunistic pathogenicity in humans, apart from the ability to grow at 37 °C.
Barrier tissues
The skin, respiratory tract, gastrointestinal tract, and genitourinary tract are common bodily regions of fungal infection, where infection typically induces inflammation.
Immune response
Studies have shown that hosts with higher levels of immune response cells such as monocytes/macrophages, dendritic cells, and invariant natural killer (iNK) T-cells exhibited greater control of fungal growth and protection against systemic infection. Pattern recognition receptors (PRRs) play an important role in inducing an immune response by recognizing specific fungal pathogens and initiating an immune response.
In the case of mucosal candidiasis, the cells that produce cytokine IL-17 are extremely important in maintaining innate immunity.
Link to extremotolerance
A comprehensive comparison of distribution of opportunistic pathogens and stress-tolerant fungi in the fungal tree of life showed that polyextremotolerance and opportunistic pathogenicity consistently appear in the same fungal orders and that the co-occurrence of opportunism and extremotolerance (e.g. osmotolerance and psychrotolerance) is statistically significant. This suggests that some adaptations to stressful environments may also promote fungal survival during the infection.
See also
List of human diseases associated with infectious pathogens
Microbiology
Microsporidia
Mycology
Plant pathology
Plague Inc.
References
Further reading
External links
Ecmm.eu: Official European Confederation of Medical Mycology website
Fungi and humans
Fungus common names
Mycology | Pathogenic fungus | Biology | 1,718 |
76,397,416 | https://en.wikipedia.org/wiki/Europium%28III%29%20iodate | Europium(III) iodate is an inorganic compound with the chemical formula Eu(IO3)3. It can be produced by hydrothermal reaction of europium(III) nitrate or europium(III) oxide and iodic acid in water at 230 °C. It decomposes on heating.
It reacts hydrothermally with iodine pentoxide and molybdenum trioxide at 200 °C to obtain Eu(MoO2)(IO3)4(OH).
References
Europium compounds
Iodates | Europium(III) iodate | Chemistry | 116 |
33,942,954 | https://en.wikipedia.org/wiki/Genetically%20modified%20food%20in%20the%20European%20Union | Genetic engineering in the European Union has varying degrees of regulation.
Regulation
History
Until the 1990s, Europe's regulation was less strict than in the United States, one turning point being cited as the export of the United States' first GM-containing soy harvest in 1996. The GM soy made up about 2% of the total harvest at the time, and Eurocommerce and European food retailers required that it be separated. Although the European Commission (EC) did eventually relent, approval was conditioned on sale as processed products and never as seed, and the episode sparked American concerns that Europe would soon become a tighter regulatory environment. The Clinton Administration was widely urged to harmonize standards in its impending second term to guarantee an open European market. In 1998, the use of MON810, a Bt expressing maize conferring resistance to the European corn borer, was approved for commercial cultivation in Europe. Shortly thereafter, the EU enacted a de facto moratorium on new approvals of GMOs pending new regulatory laws passed in 2003.
Those new laws provided the EU with possibly the most stringent GMO regulations in the world. The European Food Safety Authority (EFSA) was created in 2002 with the primary goal of preventing future food crises in Europe. All GMOs, along with irradiated food, are considered "new food" and subject to extensive, case-by-case, science-based food evaluation by the EFSA. The criteria for authorization fall into four broad categories: "safety", "freedom of choice", "labelling" and "traceability". The EFSA reports to the European Commission (EC), which then drafts a proposal for granting or refusing the authorisation. This proposal is submitted to the Section on GM Food and Feed of the Standing Committee on the Food Chain and Animal Health; if accepted, it will be adopted by the EC or passed on to the Council of Agricultural Ministers. Once in the Council it has three months to reach a qualified majority for or against the proposal; if no majority is reached, the proposal is passed back to the EC, which will then adopt the proposal. However, even after authorization, individual EU member states can ban individual varieties under a 'safeguard clause' if there are "justifiable reasons" that the variety might cause harm to humans or the environment. The member state must then supply sufficient evidence that this is the case. The commission is obliged to investigate these cases and either overturn the original registrations or request the country to withdraw its temporary restriction. The laws of the EU also required that member nations establish coexistence regulations. In many cases, national coexistence regulations include minimum distances between fields of GM crops and non-GM crops. The distances for GM maize from non-GM maize for the six largest biotechnology countries are: France – 50 metres, Britain – 110 metres for grain maize and 80 for silage maize, Netherlands – 25 metres in general and 250 for organic or GM-free fields, Sweden – 15–50 metres, Finland – data not available, and Germany – 150 metres and 300 from organic fields. Larger minimum distance requirements discriminate against adoption of GM crops by smaller farms.
In 2006, the World Trade Organization concluded that the EU moratorium, which had been in effect from 1999 to 2004, had violated international trade rules. The moratorium had not affected previously approved crops. The only crop authorised for cultivation before the moratorium was Monsanto's MON 810. The next approval for cultivation was the Amflora potato for industrial applications in 2010 which was grown in Germany, Sweden and the Czech Republic that year.
The slow pace of approval was criticized as endangering European food safety although as of 2012, the EU had authorized the use of 48 genetically modified organisms. Most of these were for use in animal feed (it was reported in 2012 that the EU imports about 30 million tons a year of GM crops for animal consumption.), food or food additives. Of these, 26 were varieties of maize. In July 2012, the EU gave approval for an Irish trial cultivation of potatoes resistant to the blight that caused the Great Irish Famine.
The safeguard clause mentioned above has been applied by many member states in various circumstances, and in April 2011 there were 22 active bans in place across six member states: Austria, France, Germany, Luxembourg, Greece, and Hungary. However, on review many of these have been considered scientifically unjustified.
In January 2005, the Hungarian government announced a ban on importing and planting of genetic modified maize seeds, which was subsequently authorized by the EU.
In February 2008, the French government used the safeguard clause to ban the cultivation of MON810 after Senator Jean-François Le Grand, chairman of a committee set up to evaluate biotechnology, said there were "serious doubts" about the safety of the product (although this ban was declared illegal in 2011 by the European Court of Justice and the French Conseil d'État). The French farm ministry reinstated the ban in 2012, but this was rejected by the EFSA.
In 2009 German Federal Minister Ilse Aigner announced an immediate halt to cultivation and marketing of MON810 maize under the safeguard clause.
In March 2010, Bulgaria imposed a complete ban on genetically modified crop growing either commercially or for trials. The cabinet of Boyko Borisov initially imposed a five-year moratorium, but later extended this to a permanent ban after widespread public protests against the introduction of genetically modified crops in the country.
In January 2013, Poland's government placed a ban on Monsanto's GM corn, MON 810. It launched a communication campaign with farmers, announcing they will now be strictly monitoring farms for GM corn crops. Poland is the eighth EU member to ban the production of GMOs even though they have been approved by European Food Safety Authority. Europe is not officially against the use of GM crops when it comes to laboratory research, and they are working to regulate the field.
In 2012, the European Food Safety Authority (EFSA) Panel on Genetically Modified Organisms (GMO) released a "Scientific opinion addressing the safety assessment of plants developed through cisgenesis and intragenesis" in a response to a request from the European Commission. The opinion was that while "the frequency of unintended changes may differ between breeding techniques and their occurrence cannot be predicted and needs to be assessed case by case", "similar hazards can be associated with cisgenic and conventionally bred plants, while novel hazards can be associated with intragenic and transgenic plants." In other words, cisgenic approaches, which introduce genes from the same species, should be considered similar in risk to conventional breeding approaches, whilst transgenic plants can come with new hazards.
In 2014, a panel of experts set up by the UK Biotechnology and Biological Sciences Research Council argued that "A regulatory system based on the characteristics of a novel crop, by whatever method it has been produced, would provide a more effective and robust regulation than current EU processes, which consider new crop varieties differently depending on the method used to produce them." They said that new forms of "genome editing" allow targeting specific sites and making precise changes in the DNA of crops. In the future it would become increasingly difficult if not impossible to tell which method has been used (conventional breeding or genetic engineering) to produce a novel crop. They proposed that the existing EU regulatory system should be replaced with a more logical system like that used for new medicines.
In 2015, Germany, Poland, France, Scotland and several other member states opted out of cultivating GMO crops in their territory.
A Eurobarometer survey has indicated that "level of concern" about genetically engineered food in Europe has decreased significantly, from 69% in 2010 to 27% in 2019.
Around one quarter (26%) of EU citizens indicated the presence of genetically modified ingredients in food or drinks as a concern in 2022, while only a smaller proportion (8%) indicated the use of new biotechnology in food production, i.e. genome editing, as a concern.
Labeling and traceability
The regulations concerning the import and sale of GMOs for human and animal consumption grown outside the EU involve providing freedom of choice to the farmers and consumers. All food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labelled. On two occasions, GMOs unapproved by the EC have arrived in the EU and been forced to return to their port of origin. The first was in 2006 when a shipment of rice from America containing an experimental GMO variety (LLRice601) not meant for commercialisation arrived at Rotterdam. The second in 2009 when trace amounts of a GMO maize approved in the US were found in a "non-GM" soy flour cargo.
The coexistence of GM and non-GM crops has raised significant concern in many European countries, and so EU law also requires that all GM food be traceable to its origin, and that all food with GM content greater than 0.9% be labelled. Due to high demand from European consumers for freedom of choice between GM and non-GM foods, EU regulations require measures to avoid mixing of foods and feed produced from GM crops and conventional or organic crops, which can be done via isolation distances or biological containment strategies. (Unlike the US, European countries require labeling of GM food.) European research programs such as Co-Extra, Transcontainer, and SIGMEA are investigating appropriate tools and rules for traceability. The OECD has introduced a "unique identifier" which is given to any GMO when it is approved and which must be forwarded at every stage of processing. Such measures are generally not used in North America because they are very costly and the industry admits of no safety-related reasons to employ them. The EC has issued guidelines to allow the co-existence of GM and non-GM crops through buffer zones (where no GM crops are grown). These are regulated by individual countries, and vary from 15 metres in Sweden to 800 metres in Luxembourg.
Scope
In its regulations the European Union considers genetically modified organisms only to be food and feed for all intents and practical purposes, in difference to the definition of genetically modified organisms which encompasses animals.
Approach
The EU uses the precautionary principle, demanding a pre-market authorisation for any GMO to enter the market and a post-market environmental monitoring. Both the European Food Safety Authority (EFSA) and the member states author a risk assessment. This assessment must show that the food or feed is safe for human and animal health and the environment "under its intended conditions of use".
As of 2010, the EU treats all genetically modified crops (GMO crops), along with irradiated food as "new food". They are subject to extensive, case-by-case, science-based food evaluation by the European Food Safety Authority (EFSA). This agency reports to the European Commission, which then drafts proposals for granting or refusing authorisation. Each proposal is submitted to the "Section on GM Food and Feed of the Standing Committee on the Food Chain and Animal Health". If accepted, it is either adopted by the EC or passed on to the Council of Agricultural Ministers. The council has three months to reach a qualified majority for or against the proposal. If no majority is reached, the proposal is passed back to the EC, which then adopts the proposal.
The EFSA uses independent scientific research to advise the European Commission on how to regulate different foods in order to protect consumers and the environment. For GMOs, the EFSA's risk assessment includes molecular characterization, potential toxicity and potential environmental impact. Each GMO must be reassessed every 10 years. In addition, applicants who wish to cultivate or process GMOs must provide a detailed surveillance plan for after authorization. This ensures that the EFSA will know if risk to consumers or the environment heightens and that they can then act to lower the risk or deauthorize the GMO.
49 GMO crops, consisting of
eight GM cottons,
28 GM maizes,
three GM oilseed rapes,
seven GM soybeans,
one GM sugar beet,
one GM bacterial biomass, and
one GM yeast biomass
have been authorised.
Review of authorisation
Member States of the EU may invoke a safeguard clause to temporarily restrict or prohibit use and/or sale of a GMO crop within their territory if they have justifiable reasons to consider that an approved GMO crop may be a risk to human health or the environment. The EC is obliged to investigate, and either overturn the original registrations or ask the country to withdraw its temporary restriction. By 2012, seven countries had submitted safeguard clauses. The EC investigated and rejected those from six countries ("...the scientific evidence currently available did not invalidate the original risk assessments for the products in question...") and one, the UK, withdrew.
Import rules
The EC Directorate-general for agriculture and rural development states that the regulations concerning the import and sale of GMOs for human and animal consumption grown outside the EU provide freedom of choice to farmers and consumers. All food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labelled.
As of 2010, GMOs unapproved by the EC had been found twice and returned to their port of origin: First in 2006 when a shipment of rice from the U.S. containing an experimental GMO variety (LLRice601) not meant for commercialisation arrived at Rotterdam, the second time in 2009, when trace amounts of a GMO maize approved in the US were found in a non-GM soy flour cargo. In 2012, the EU imported about 30 million tons of GM crops for animal consumption.
Adoption of GMO crops
Spain has been the largest producer of GM crops in Europe; the GM maize planted there in 2013 equaled 20% of Spain's maize production.
Smaller amounts were produced in the Czech Republic, Slovakia, Portugal, Romania and Poland. France and Germany are the major opponents of genetically modified food in Europe, although Germany has approved Amflora, a potato modified with higher levels of starch, for industrial purposes. In addition to France and Germany, other European countries that placed bans on the cultivation and sale of GMOs include Austria, Hungary, Greece, and Luxembourg. Poland has also tried to institute a ban, with backlash from the European Commission. Bulgaria effectively banned cultivation of genetically modified organisms on 18 March 2010.
In 2010, Austria, Bulgaria, Cyprus, Hungary, Ireland, Latvia, Lithuania, Malta, Slovenia and the Netherlands wrote a joint paper requesting that individual countries should have the right to decide whether to cultivate GM crops. By the year 2010, the only GMO food crop with approval for cultivation in Europe was MON 810, a Bt expressing maize conferring resistance to the European corn borer that gained approval in 1998.
In March 2010 a second GMO, a potato called Amflora, was approved for cultivation for industrial applications in the EU by the European Commission and was grown in Germany, Sweden and the Czech Republic that year. Amflora was withdrawn from the EU market in 2012, and in 2013 its approval was annulled by an EU court.
Fearing that gene flow could occur between related crops, the EC issued new guidelines in 2010 regarding the co-existence of GM and non-GM crops.
Co-existence is regulated by the use of buffer zones and isolation distances between the GM and non-GM crops. The guidelines are not binding and each Member State can implement its own regulations, which has resulted in buffer zones ranging from 15 metres (Sweden) to 800 metres (Luxembourg). Member States may also designate GM-free zones, effectively allowing them to ban cultivation of GM crops in their territory without invoking a safeguard clause.
Implementation in the Member States and in Switzerland
Bulgaria
In October 2015, Bulgaria announced it has opted out of growing genetically modified crops, effectively banning the cultivation of different types of GMO corn and soybeans.
France
France adopted the EU laws on growing GMOs in 2007 and was fined €10 million by the European Court of Justice for the six-year delay in implementing the laws. In February 2008, the French government used the safeguard clause to ban the cultivation of MON 810 after Senator Jean-François Le Grand, chairman of a committee to evaluate biotechnology, said there were "serious doubts" about the safety of the product. Twelve scientists and two economists on the committee accused Le Grand of misrepresenting the report and said they did not have "serious doubts", although questions remained concerning the impact of Bt-maize on health and the environment. The EFSA reviewed studies the French government had submitted to back up its claim, and concluded that there was no new evidence to undermine its prior safety findings and considered the decision "scientifically unfounded". The High Council for Biotechnology subcommittee dealing with economic, ethical and social aspects recommended an additional "GMO-free" label for anything containing less than 0.1% GMO which is due to come in late 2010. In 2011, the European Court of Justice and the French Conseil d'État ruled that the French farm ministry ban of MON 810 was illegal, as it failed "to give proof of the existence of a particularly high level of risk for the health and the environment".
On 17 September 2015, the French government announced it would effectively continue to ban GMO crops by enacting an "opt-out" provision, previously agreed to for the 28 EU member states in March 2015, by asking the European Commission for France to extend the GMO ban on nine additional strains of maize. The policy announcement was made simultaneously by the French farm and environment ministries.
Germany
In April 2009, German Federal Minister Ilse Aigner announced an immediate halt to cultivation and marketing of MON 810 maize under the safeguard clause. The ban was based on "expert opinion" that suggested there were reasonable grounds to believe that MON 810 maize presents a danger to the environment. Three French scientists reviewing the scientific evidence used to justify the ban concluded that it did not use a case-by-case approach, confused potential hazards with proven risks and ignored the meta-knowledge on Bt expressing maize, instead focusing on selected individual studies.
In August 2015, Germany announced its intention to ban genetically modified crops.
Northern Ireland
In September 2015, Northern Ireland announced a ban on genetically modified crops.
Romania
Romania grew GM soybeans in 1999, increasing the crop's yield by 30%, permitting the export of excess product. When the country joined the European Union in 2007 it was no longer allowed to grow the GM crop, resulting in the total area planted in soybeans dropping by 70%. The next year, this produced a trade deficit of €117.4m for purchase of replacement products. Romanian farmers have been very much in favour of relegalisation of GM soy.
Switzerland
In 1992, Switzerland voted in favour of the introduction of an article about assisted reproductive technologies and genetic engineering in the Swiss Federal Constitution. In 1995, Switzerland introduced regulations requiring labelling of food containing genetically modified organisms. It was one of the first countries to introduce labelling requirements for GMOs. In 2003, the Federal Assembly adopted the "Federal Act on Non-Human Gene Technology".
A federal popular initiative introduced a moratorium on genetically modified organisms in Swiss agriculture, in force from 2005 to 2010. Later, the Swiss parliament extended this moratorium to 2013. Between 2007 and 2011, the Swiss Government funded thirty projects to investigate the risks and benefits of GMOs. These projects concluded that there were no clear health or environmental dangers associated with planting GMOs. However, they also concluded that there was little economic incentive for farmers to adopt GMOs in Switzerland. The Swiss parliament then extended the moratorium to 2017, and then to 2021.
As of 2016, six cantons (Bern, Fribourg, Geneva, Jura, Ticino and Vaud) have introduced laws against genetically modified organisms in agriculture. More than one hundred communes have declared themselves free of genetically modified organisms. The cantons of Switzerland perform tests to assess the presence of genetically modified organisms in foodstuffs. In 2008, 3% of the tested samples contained detectable amounts of GMOs. In 2012, 12.1% of the samples analysed contained detectable amounts of GMOs (including 2.4% of GMOs forbidden in Switzerland). All the samples tested (except one) contained less than 0.9% of GMOs, which is the threshold that imposes labelling indicating the presence of GMOs in food.
Scotland
In August 2015, the Scottish government announced that it would "shortly submit a request that Scotland is excluded from any European consents for the cultivation of GM crops, including the variety of genetically modified maize already approved and six other GM crops that are awaiting authorisation".
See also
Regulation of genetic engineering
European Food Safety Authority
Notes and references
External links
European Food and Safety Authority
EU Register of authorised GMOs – European Commission
Regulation of genetically modified organisms
European Union and agriculture
Food safety in the European Union
European Union regulations | Genetically modified food in the European Union | Engineering,Biology | 4,323 |
75,476,697 | https://en.wikipedia.org/wiki/1%2C4-Butanedithiol | 1,4-Butanedithiol is an organosulfur compound with the formula . It is a malodorous, colorless liquid that is highly soluble in organic solvents. The compound has found applications in biodegradable polymers.
Reactions
Alkylation with geminal dihalides gives 1,3-dithiepanes. Oxidation gives the cyclic disulfide 1,2-dithiane.
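A minimal sketch of this oxidative ring closure, assuming iodine as the oxidant (other mild oxidants behave analogously):

$$\mathrm{HS(CH_2)_4SH + I_2 \longrightarrow C_4H_8S_2\ (\text{1,2-dithiane}) + 2\,HI}$$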
It forms self-assembled monolayers on gold.
It is also used, together with 1,4-butanediol, in polyadditions with diisocyanates to form sulfur-containing polyesters and polyurethanes. Several of these polymers are considered biodegradable, and many of their components are sourced from non-petroleum oils.
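As a sketch of the polyaddition referred to here, the step-growth reaction of the dithiol with a generic diisocyanate yields a poly(thiourethane); R below is a schematic placeholder for the diisocyanate core, and the repeat unit shown is idealized:

$$n\,\mathrm{HS(CH_2)_4SH} \;+\; n\,\mathrm{OCN{-}R{-}NCO} \;\longrightarrow\; \mathrm{[\,{-}S(CH_2)_4S{-}CO{-}NH{-}R{-}NH{-}CO{-}\,]_n}$$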
Related compounds
Dithiothreitol
1,3-Propanedithiol
References
Reagents for organic chemistry
Thiols
Foul-smelling chemicals | 1,4-Butanedithiol | Chemistry | 192 |
66,845,609 | https://en.wikipedia.org/wiki/Wood-pasture%20hypothesis | The wood-pasture hypothesis (also known as the Vera hypothesis and the megaherbivore theory) is a scientific hypothesis positing that open and semi-open pastures and wood-pastures formed the predominant type of landscape in post-glacial temperate Europe, rather than the common belief of primeval forests. The hypothesis proposes that such a landscape would be formed and maintained by large wild herbivores. Although others, including landscape ecologist Oliver Rackham, had previously expressed similar ideas, it was the Dutch researcher Frans Vera, who, in his 2000 book Grazing Ecology and Forest History, first developed a comprehensive framework for such ideas and formulated them into a theorem. Vera's proposals, although highly controversial, came at a time when the role grazers played in woodlands was increasingly being reconsidered, and are credited for ushering in a period of increased reassessment and interdisciplinary research in European conservation theory and practice. Although Vera largely focused his research on the European situation, his findings could also be applied to other temperate ecological regions worldwide, especially the broadleaved ones.
Vera's ideas have met with both rejection and approval in the scientific community, and continue to lay an important foundation for the rewilding-movement. While his proposals for widespread semi-open savanna as the predominant landscape of temperate Europe in the early to mid-Holocene have at large been rejected, they do partially agree with the established wisdom about vegetation structure during previous interglacials. Moreover, modern research has shown that, under the current climate, free-roaming large grazers can indeed influence and even temporarily halt vegetation succession. Whether the Holocene prior to the rise of agriculture provides an adequate approximation to a state of "pristine nature" at all has also been questioned, since by that time anatomically modern humans had already been omnipresent in Europe for millennia, with in all likelihood profound effects on the environment.
The severe loss of megafauna at the end of the Pleistocene and beginning of the Holocene known as the Quaternary extinction event, which is frequently linked to human activities, did not leave Europe unscathed and brought about a profound change in the European large mammal assemblage and thus ecosystems as a whole, which probably also affected vegetation patterns. The assumption, however, that the pre-Neolithic represents pristine conditions is a prerequisite for both the "high-forest theory" and the Vera hypothesis in their respective original forms. Whether or not the hypothesis is supported may thus further depend on whether or not the pre-Neolithic Holocene is accepted as a baseline for pristine nature, and thus also on whether the Quaternary extinction of megafauna is considered (primarily) natural or man-made.
Vera's hypothesis has important repercussions for nature conservation especially, because it advocates for a reorientation of emphasis away from the protection of old-growth forest (as per the competing high forest theory) and towards the conservation of open and semi-open grasslands and wood pastures, through extensive grazing. This aspect in particular has attracted considerable attention, and has made Vera's hypothesis an important point of reference for conservation grazing and rewilding initiatives. The wood-pasture hypothesis also has points of contact with traditional agricultural practices in Europe, which may conserve biodiversity in a similar way to wild herbivore herds.
Names and definitions
Frans Vera's hypothesis has many names, since Vera himself did not provide a distinct name for it. Instead, he simply referred to it as the alternative hypothesis, alternative to the high-forest theory, which he called the null hypothesis. As a result, it has been called by many names over the years, including the wood-pasture hypothesis, the wooded pasture hypothesis, the Vera hypothesis, the temperate savanna hypothesis and the open woodland hypothesis. Especially in Continental Europe, it is commonly known as the megaherbivore hypothesis and by literal translations of that name.
Vera limited the geographic area of his ideas to Western and Central Europe between 45°N and 58°N latitude and 5°W and 25°E longitude. This includes most of the British Isles and everything between France (except the Southern third) and Poland and Southern Scandinavia to the Alps. Furthermore, he confined it to altitudes below . By extension, the North American East Coast is also addressed as an analogy with a comparable climate.
High-forest theory
Heinrich Cotta: high-forest theory
In his 1817 work Anweisungen zum Waldbau (Directions for Silviculture), Heinrich Cotta posited that if humans abandoned his native Germany, in the space of 100 years it would be "covered with wood". This assumption laid the foundation for what is now called the high-forest theory, which assumes that deciduous forests are the naturally predominant ecosystem type in the temperate, broad-leaved regions.
Frederic Clements: linear succession
Later, this position was accompanied by Clements' formulation of the theory of linear succession, meaning that under the right conditions bare ground would, over time, invariably become colonised by a succession of plant communities eventually leading to closed stands dominated by the tallest plant species. Because in most of the temperate hemisphere the potentially tallest plants are trees, the final product would therefore chiefly be forest. Albeit with changes in conceptualisation and some modifications, this concept remains the one favoured by most, and provides the conceptual framework for many forest-related methods and customs in forestry and conservation. This includes the doctrine advocated by German forest-ecologist Knut Sturm, which highlights the importance of non-intervention and space of time for forest protection, as it is implemented in forest reserves such as Białowieża.
Further refinements
Clements' notion of stable climax communities was later challenged and refined by authorities such as Arthur Tansley, Alexander Watt and Robert Whittaker, who championed the inclusion of dynamic processes, like temporary collapse of canopy cover because of windthrow, fire or calamities, into Clements' framework. This, however, did not change anything about the status of the "high-forest theory" as the commonly accepted view; that without human intervention closed-canopy forest would dominate the global temperate regions as the potential natural vegetation. This is also the concept that was advocated by European plant experts like Heinz Ellenberg, Johannes Iversen and Franz Firbas.
The reconstruction of vegetation history
Apart from theoretical considerations, this concept has relied and continues to rely heavily on both field observations and, more recently, on findings from pollen analysis, which allow inferences about the vegetation structure of past epochs. For example, vegetation trends can be reconstructed from the ratio of tree pollen to pollen associated with grassland. Pollen analysis is the most widely used means of generating historic vegetation data and the analysis of pollen data has provided a solid database from which a predominance of forest throughout the early stages of the Holocene of temperate Europe, especially the Atlantic, is generally inferred, although the possibility of regional differences remains open. On that basis, the history of vegetation in Europe is generally reconstructed as a history of forest.
Pollen analysis, however, has been criticized for its inherent bias towards wind-pollinated plant species and, importantly, wind-pollinated trees, and has been shown to overestimate forest cover. To account for this bias, a corrective model (REVEALS) is used, whose application leads to results that differ substantially from those drawn from the traditional comparison of pollen percentages alone. Alternatively to or in combination with pollen, fossil indicator organisms – such as beetles and molluscs – can be used to reconstruct vegetation structure.
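A toy sketch of the productivity-correction idea behind such models (a strong simplification of REVEALS, ignoring its dispersal and deposition terms; the taxa, counts, and productivity values are invented for illustration):

```python
# Raw pollen percentages overweight prolific wind-pollinated trees; dividing
# each count by a relative pollen productivity estimate (PPE) and
# renormalizing gives a crude estimate of actual vegetation cover.
counts = {"oak": 300, "hazel": 250, "grasses": 150}   # invented pollen counts
ppe    = {"oak": 7.5, "hazel": 1.4, "grasses": 1.0}   # invented productivities

corrected = {taxon: counts[taxon] / ppe[taxon] for taxon in counts}
total_raw, total_cor = sum(counts.values()), sum(corrected.values())

for taxon in counts:
    raw = 100 * counts[taxon] / total_raw
    cor = 100 * corrected[taxon] / total_cor
    print(f"{taxon:8s} raw {raw:5.1f}%  ->  corrected {cor:5.1f}%")
```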
Large herbivores and high-forest theory
There is no general agreement on herbivores and their influence on succession in natural ecosystems in the temperate hemisphere. In the high-forest theory framework, wild herbivores are mostly considered as minor factors, derived from the assumption that the natural vegetation was forest. Therefore, wild herbivores were characterised by Tansley as followers of succession, not as actively influencing it, because otherwise Europe would not have been forested. From this assumption the principle was developed that the natural abundance of herbivores does not hinder forest succession, which means that herbivore numbers are necessarily considered too high as soon as they impede natural forest regeneration. For example, WWF Russia considers five to seven animals the optimal density of bison per 1000 ha (10 km²), because if the population exceeds 13 animals per 1000 ha, first signs of vegetation suppression are observed. Consequently, the bison population in Białowieża is controlled by culling. Similarly, it is widely believed that two to seven deer per is a sustainable number based on the assumption that if deer numbers exceed this bar, they start having a negative impact on woodland regeneration. Consequently, culling is commonly seen as necessary to reduce a perceived overabundance of deer to sustainable levels and mimic natural predation.
Others, however, have criticised this view. In a 2023 publication, Brice B. Hanberry and Edward K. Faison argued that in the eastern United States, where white-tailed deer are commonly considered overabundant due to the extirpation of wolves and cougars, there are currently no more deer than there were historically when these predators were present. Furthermore, they found that even at densities that are perceived as too high, the influence of deer may be ecologically beneficial. The assumption that population control through hunting is necessary in order to mimic the effect of natural predators is also not entirely supported by scientific analyses of natural predator-prey dynamics. Instead, the control of herbivore numbers in nature probably depends on other factors. A perhaps more important influence predators may have on prey animals is the landscape of fear their presence can create, promoting landscape heterogeneity. However, in the presence of megafauna over , which are largely immune to predation, even this ability is limited. Overall, how ungulate populations are controlled in nature is controversial, and food availability is an important constraint, even in the presence of apex predators.
In regions with relatively intact large-mammal assemblages in Africa and Asia, as well as in European rewilding areas where "naturalistic grazing" is practised, herbivore biomass exceeds the values commonly deemed appropriate for temperate forests many times over. Here, herbivore biomass reaches a maximum of per , while the mammoth steppe with an estimated per km2 falls within a similar range. The herbivore biomass of Britain during the Eemian interglacial has been estimated as more than per km2, which is equivalent to more than 2.5 fallow deer per ha. Hence, the ecologist Christopher Sandom and others have suggested that the comparatively high forest cover of the pre-Neolithic European Holocene may be a consequence of megaherbivore extinctions during the Quaternary extinction event, as compared to the last interglacial in Europe with a pristine megafauna, the Eemian, the early stages of the Holocene appear to have been much more forested. According to the authors, this is unlikely to be the result of the latter's only slightly cooler climate as compared to the Eemian. However, this is also subject to debate.
Background: grazers and browsers
The impact herbivores have on the landscape level depends on their way of feeding. Namely, browsers like roe deer, elk and the black rhino focus on woody vegetation, while the diet of grazers like horse, cattle and the white rhino is dominated by grasses and forbs. Intermediate feeders, like the wisent and the red deer, fall in between. Generally, grazers tend to be more social, less selective in their food choices and forage more intensively. Therefore, their impact on vegetation composition tends to be higher, as well as their ability to maintain open spaces.
Since the extinction of the aurochs in 1627 and the wild horse around 1900, none of the remaining large wild herbivores in Europe is an obligate grazer. Similarly, domesticated descendants of aurochs and wild horse, cattle and horse, are now largely kept in stables, factory farms and close to settlements, making them effectively extinct in the landscape. What remains are browsers and mixed feeders – roe deer, red deer, elk, wild boar, wisent and beaver, often in low densities. Backbreeding-projects, such as the German Taurus project and the Dutch Tauros programme are addressing this issue by breeding domestic cattle that can be released into the landscape as hardy and sufficiently similar proxies to act as ecological replacements for the aurochs. Similarly, primitive horse breeds such as the Konik, Exmoor pony and the Sorraia are being used as proxies for the tarpan.
Frans Vera
Vera argued that the dominating landscape-type of the early to mid-Holocene was not closed forest, but a semi-open, park-like one. This semi-open landscape, he proposed, was created and maintained by large herbivores. During the Holocene, these herbivores included aurochs, European bison, red deer and tarpan. Up to the Quaternary extinctions, many other megafaunal mammals like the straight-tusked elephant or Merck's rhinoceros existed in Europe as well, which probably kept the forests open during warm interglacial periods like the Eemian interglacial. Vera also postulated that lowland forest did not emerge on a large scale before the onset of the Neolithic period and subsequent local extinctions of herbivores, which in turn allowed forests to thrive more unhindered. Indeed, investigations point to at least locally open circumstances, for example in floodplains, on infertile soils, chalklands and in submediterranean and continental areas, but maintain that forest largely dominated.
In his book Vera also discussed the decline of ancient oak-hickory-forest communities in Eastern North America. Many forests that stem from Pre-Columbian times (old-growth forests) feature light-demanding oaks and hickories prominently. However, these do not readily regenerate in modern forests, a phenomenon commonly referred to as oak regeneration failure. Instead, shade-tolerant species such as red maple and American beech dominate increasingly. While the cause is still poorly understood, a lack of natural fire is commonly presumed to play a role. Vera instead suggested that the grazing and browsing of wild herbivores, most importantly American bison, created the conditions oaks and hickories need for successful regeneration to happen, and attributed the modern lack of regeneration of these species in forests to the mass slaughter of bison by European settlers.
Paleoecological evidence drawn from fossil Coleoptera deposits has also shown that, albeit rare, beetle species associated with grasslands and other open landscapes were present throughout the Holocene of Western Europe, which points to open habitats being present, but restricted. However, paleoecological data from previous interglacials when the larger megafauna was still present indicate widespread warm temperate savannah. This could mean that elephants and rhinos were more effective creators of open landscapes than the herbivores left after the Quaternary extinction event. On the other hand, traditional animal husbandry may have mitigated the effects of possibly human-induced megafaunal die-off, allowing the survival of species of the open landscape previously created and maintained by megafauna.
Frans Vera was not the first to question the high-forest paradigm. Botanist Francis Rose had expressed doubts already in the 1960s, knowing about British plant and lichen species and their light requirements. The relationship between large grazers and landscape openness, and the significance of the Quaternary extinctions of megafauna in this regard, had also been recognized prior to Vera. In 1992, for example, the archaeologist Wilhelm Schüle theorized that the genesis of closed forest in temperate Europe was the result of prehistoric man-made megafauna extinctions. Landscape ecologist Oliver Rackham, in a 1998 article entitled "Savanna in Europe", envisaged a kind of savanna as the original predominant landscape type of northwestern Europe. Vera, however, was the first to develop a comprehensive theorem to explain why forest did not dominate even in the Holocene, and to thus propose a real alternative to the high-forest theory.
In some of its aspects, the wood-pasture hypothesis bears similarity to the Steppenheide ("steppe-heath") theory proposed by Robert Gradmann, which was challenged and refuted by scholars such as Reinhold Tüxen.
Main arguments
Oak and hazel
Vera relies on several lines of argument based on experiments, ecology, evolutionary ecology, palynology, history and etymology. One of his main arguments is of an ecological nature: the widespread lack of successful regeneration of light-demanding tree species in modern forests, especially pedunculate oak, sessile oak (together hereafter addressed as "oak") and common hazel in Europe. He contrasts this reality with European pollen deposits from previous ages, where oak and hazel often form a dominant amount of pollen, making a dominance of these species at the time conceivable. Especially in regard to hazel, sufficient flowering is only achieved when enough sunlight is available, i.e. the plant grows outside of a closed canopy. He argues that the only explanation for the great abundance of oak and hazel pollen in previous ages is that the primeval landscape was open, and this contrast forms the principal theorem of his hypothesis. It has also been suggested that oak requires disturbances for successful establishment, disturbances that large herbivores may provide.
However, pollen records from islands that lacked many of the large grazers and browsers that, according to Vera, were essential for the maintenance of landscapes with an open character in temperate Europe show almost no differences in comparison to mainland Europe. More specifically, pollen records from Holocene Ireland, which during the early Holocene was apparently, owing to a lack of fossils, devoid of any big herbivores except for abundant wild boar and rare red deer, show almost equally high percentages of oak and hazel pollen. Thus it could be concluded that large herbivores were not a required factor for the degree of openness in a landscape, and that the abundance of pollen from species that are unable to reproduce and regenerate sufficiently under a closed canopy, such as hazel and oak, can only be explained by other factors like windthrow and natural fires.
Vera's notion may be supported by observations over the course of 20 years forest regeneration in forest gaps created by windthrow, which showed that hornbeam and beech dominate the emerging stands and largely displace oaks on fertile, nutrient-rich soil. However, after the last Ice Age oak returned earlier to Central and Western Europe than beech or hornbeam, which may have contributed to its commonness, at least during the early Holocene. Still, other shade-tolerant tree species like lime and elm were equally fast returnees, and do not seem to have limited oak abundance.
On the other hand, substantial natural oak-regeneration commonly takes place outside of forests in fringe and transitional habitats, suggesting that a focus on regeneration in forests in an attempt to explain oak regeneration failure may be insufficient in regard to the ecology of Central European oak species. Rather, an underestimated reason for widespread failure of oak regeneration may be found in the direct effects of land-use changes since the early modern period, which has led to a more simplistic, homogeneous landscape, as spontaneous regeneration of both oak and hazel does frequently occur in margins, thickets, and low-grazing-intensity or abandoned pasture/arable land. Overall, oak is an adept coloniser of open areas and especially of transitional zones between vegetation zones such as forest and open grassland. Looking for regeneration within forests may therefore be futile from the outset. There is, therefore, no general "failure" in oak regeneration, but only a failure of oak regeneration within closed forests. This, however, may be expectable and natural given oak's colonising nature.
Furthermore, a new species of oak mildew (Erysiphe alphitoides), observed on European oaks for the first time at the beginning of the 20th century, has been cited as a possible reason for the modern lack of oak regeneration in forests, since it affects the shade tolerance, particularly of young pedunculate and sessile oaks. Although the origin of this new oak pathogen remains obscure, it seems to be an invasive species from the tropics, possibly conspecific with a pathogen found on mangos.
Ecological anachronisms
Vera prominently argued that since other light-demanding and often thorny woody species exist in Europe—species such as common hawthorn, midland hawthorn, blackthorn, Crataegus rhipidophylla, wild pear and crab apple—their ecology can only be explained under the influence of large herbivores, and that in the absence of these they represent an anachronism.
Shortcomings of pollen analysis
Vera further contested that pollen diagrams can adequately display past species occurrences since, inherently, pollen deposits tend to overrepresent species that are wind-pollinated and notoriously underrepresent species that are pollinated by insects. Furthermore, he proposed that an absence of grass pollen in pollen diagrams can be explained by high grazing pressure, which would prevent the grasses from flowering. Under such conditions, he claimed, open environments with only scattered mature trees may appear as closed forests in pollen deposits. He consequently proposed that the conspicuous scarcity of grass pollen in pollen deposits dating from the pre-Neolithic Holocene might not necessarily speak against the existence of open environments dominated by grasses. However, it is generally considered that over 60% tree pollen in pollen deposits indicates a closed forest canopy, which is true for the vast majority of European early to mid-Holocene deposits. Sites with less than 50% arboreal pollen, on the other hand, are consistently associated with human activities.
Circular reasoning
Vera stressed that the prevailing high-forest theory was born out of observations of spontaneous regeneration in the absence of grazing animals. He argued that the presupposition that these animals do not exert a significant influence on natural regeneration, and thus on the vegetation structure as a whole, has been made without comparative confirmation, and is therefore a circular argument. Indeed, modern forestry and forest theory arose largely in the modern era and went hand in hand with the ongoing inclosure of common land throughout Europe. A consequence thereof was in many cases a ban of livestock from the forests, which had previously largely been open woodland pastures, often dominated by oaks. These were multifunctional and used for a range of purposes, from pannage and livestock grazing to the harvest of tree hay, coppice, timber and oak galls for the manufacture of ink, as well as for the production of charcoal, crops and fruit. This former usage of forests is often still revealed by a big age gap between tree generations, particularly if the oldest trees are mainly oaks, and many Central European forest reserves originated as common wood-pastures.
Shifted baselines
In nature conservation, a shifted baseline is a baseline for conservation targets and desired population sizes that is based on non-pristine conditions. In this sense, the term was coined by marine biologist Daniel Pauly when he observed that some fisheries scientists used the population sizes of fish at the beginning of their own careers to assess a desired baseline, notwithstanding whether the fishing stocks they used as baselines had already been diminished by human exploitation. He noticed that the estimations these scientists took for reference markedly differed from historical accounts. Consequently, he concluded that over generations the perception of what is considered to be normal would change, and so may what is considered a depleted population. Pauly called this the shifting baseline syndrome. In line with this, it may be argued that the prevalence of closed-canopy forest as the prevailing conservation narrative in Europe similarly arises from multiple shifted baselines:
While it is plausible that lions (Panthera spelaea, P. leo leo), leopards (Panthera pardus spelaea, P. pardus tulliana), hyenas (Hyaena hyaena prisca, Crocuta crocuta spelaea), dholes (Cuon alpinus europeus), wild ass (Equus hydruntinus, E. hemionus kulan) and moon bears (Ursus thibetanus mediterraneus, U. t. permjak), among other victims of European Quaternary and Holocene extinctions, would still be native to Europe, had they not been evicted by humans, none of these species are listed as such in the EU's Habitats Directive's annexes. Likewise, globally extinct megafauna such as straight-tusked elephants and rhinos would likely be native to Europe without human interference, and they would in all probability have a strong positive impact on biodiversity and ecosystem functions. It is therefore very likely that the megafauna extinctions of the late Pleistocene and early Holocene had profound implications for European and worldwide ecosystems, especially given the paramount importance comparable animals have for modern ecosystems.
Vera pointed out that words like wold and forest used to have different connotations than they do today. While today a forest is a dense and reasonably large tract of trees, the medieval Latin forestis, from which it derives, designated open stands of trees and wild, uncultivated land that was home also to aurochs and wild horses. According to historical sources, these forestis included hawthorn, blackthorn, wild cherry, wild apple and wild pear, as well as oaks, all of which are light-demanding species that cannot regenerate successfully in closed-canopy forest. From this Vera concluded that original wildwoods still existed in Europe during the Medieval period. Thus, when scholars of the 19th and 20th century assumed that grazing animals had destroyed the original European closed-canopy wildwoods, they were misinterpreting these terms. Instead, these forests, he found, had been destroyed following the industrial revolution and the population growth it caused, which in turn caused overexploitation.
He further argued that this initial misinterpretation gave rise to another: that forest regeneration would naturally take place inside the forest. Thus, scholars of the 19th and 20th centuries interpreted medieval grazing regulations, which were meant to allow tree regeneration in coppiced mantle and fringe vegetation, as intended to allow regeneration inside a forest. In their time, solid firewood was preferred to the medieval coppice bundles, e.g. faggots. However, the production of solid firewood required the felling of trees at an age when they could no longer produce suckers, an ability that trees commonly lose with progressing age. This led to a different management system: replacement by saplings, planted or naturally regenerated via, for example, shelterwood cuttings. Initially, these trees regenerated inside the forests were differentiated from wild growth outside the forests. In German, the former were referred to as natural regeneration (Naturverjüngung) while the latter had a different name: Holzwildwuchse. Thus, natural regeneration was not synonymous with the natural regeneration of trees in a natural situation. It was not until the 19th and 20th centuries that this distinction was abandoned in German. However, in the absence of thorny nurse bushes, which disappeared in the shade under the trees, the planted trees then had to be protected manually. The "natural regeneration" therefore still depended on work like ploughing, removal of browsing pressure and the suppression of weeds, making it not "natural" in the conventional sense. Instead, according to Vera, the original meaning of the word "natural" in this context was that a seed fell from a tree and then grew by itself, as opposed to being planted. This shift in where the regeneration of trees was expected to take place, from the thorny fringes of groves in wood-pastures to the interior of closed tree stands, then led to the notion that herbivores were detrimental to forest regeneration, and necessitated fenced-out areas, tree shelters and population control via hunting.
Considered "alien" to the landscape, akin to invasive species, cattle and horses were now also removed from the forests, as it happened in former wood-pastures like Białowieża, because they were seen as harmful to the creation of a new old-growth forest. At the same time, the introduction of the potato made pannage, the fattening of pigs on acorns, obsolete, and grass species specifically bred for a high yield superseded the traditional pasturing, mostly of cattle, in wood-pastures. Together, these mechanisms created the spatial separation between livestock rearing and forestry, grassland and forest enshrined into modern law and practice.
Finally, the biodiversity losses associated with the conversion of open grassland, mantle and fringe vegetation and open-grown trees into closed-canopy forests were legitimised by the assumption that the forest was the only natural ecosystem, and hence species losses were casualties of a natural cause.
However, a strong argument that may put Vera's etymological evidence into perspective altogether is that the composition of medieval woodlands may not be relevant to their naturalness. Since by the medieval period agricultural traditions had already been ubiquitous in most of Europe for millennia, it may be unrealistic to assume that what people of the time perceived and labelled as wilderness was indeed pristine. Rather, it is doubtful that pristine conditions had survived at all in the Central and Western European lowlands, Vera's area of study, up to this point.
Succession in grazed ecosystems
There are several ecological processes at work in herbivore grazing systems, namely associational resistance, shifting mosaics, cyclic succession, and gap dynamics. These processes would collectively transform the surrounding landscape, as per Vera's model.
Associational resistance
The term associational resistance describes facilitating relationships between plants that grow close to each other, against both biotic and abiotic stresses like browsing, drought, or salinity. In relation to grazed ecosystems, it can allow for the recruitment of trees and other palatable woody species, via thorny nurse bushes, in these environments. It has been proposed and demonstrated that associational resistance can be a key process in grazed environments, ensuring natural succession.
In temperate Europe, succession on pastures commonly starts with so-called "islets" ("Geilstellen"), patches of dung which are avoided by the herbivores for a time after deposition sufficient to allow the establishment of relatively unpalatable species such as rushes, nettles and hummocks of tall grasses like tussock grass. These swards, in turn, provide protection for thorny shrubs such as blackthorn, roses, hawthorn, juniper, bramble, holly and barberry during their early years, when they do not yet have protective thorns and are therefore vulnerable. Once the thorny saplings are fully established, they grow bigger over time and subsequently allow other, less resilient species to establish in their thorn protection, forming mantle and fringe vegetation together with species such as guelder rose, wild privet and dogwood. Other species such as mazzard, checker tree, rowan and whitebeam, which are distributed by fruit-eating birds through their faeces, would also frequently be placed within these shrubs by resting birds leaving their droppings.
On the other hand, nut-bearing species such as hazel, beech, chestnut, and pedunculate and sessile oak would be "planted" somewhat deliberately in the vicinity of those shrubs by rodents such as the red squirrel and wood mouse, by the nuthatch, and by corvids such as crows, magpies, ravens and especially jays, which store them as winter supply. In Europe, the Eurasian jay represents the most important seed disperser of oak, burying acorns individually or in small groups. Eurasian jays not only bury acorns at depths favoured by oak saplings, but seemingly also prefer spots with sufficient light availability, i.e. open grassland and transitions between grassland and shrubland, seeking out vertical structures such as shrubs in the near surroundings. Since oak is relatively light-demanding while lacking the ability to regenerate on its own under high browsing pressure, these habits of the jay presumably benefit oak, since they provide the conditions oak requires for optimal growth and health. On a similar note, the nuthatch seems to assume a prominent role in hazel dispersal.
In addition, species such as wild pear, crab apple and whitty pear, which bear relatively large fruit, would find propagators in herbivores such as roe deer, red deer and cattle, or in omnivores such as the wild boar, red fox, the European badger and the raccoon, while wind-dispersed species such as maple, elm, lime or ash would land within these shrubs by chance.
Thorny bushes play an important role in tree regeneration in the European lowlands, and evidence is emerging that similar processes can also ensure the survival of browsing-sensitive species like rowan in browsed boreal forests.
Shifting mosaics and cyclic succession
A natural pasture ecosystem would therefore undergo various stages of succession. It starts with unpalatable perennial plants, which provide shelter for thorny woody plants; these in turn form thickets and enable the establishment of larger, palatable shrubs and trees. Over time, these would then overshadow the unpalatable but light-demanding thickets and emerge as big solitary trees, in the case of single-standing shrubs like hawthorn, or as groups of trees in the case of expanding blackthorn shrubs. Because of herbivore disturbance (browsing, trampling, wallowing, dust bathing), not even shade-tolerant tree saplings would be able to grow under the established trees. Therefore, once the established trees start to decay, whether from old age or from other factors like pathogens, illness, lightning strike or windbreak, open, bare land is left behind for grasses and unpalatable species to colonise, closing the cycle.
On a large scale, different successional stages would thus combine into an ecosystem where open grassland, scrubland, emerging tree growth, groves of trees and solitary trees exist next to each other, and the alternation between these various successional stages would create dynamic shifting mosaics of vegetation. This in turn stimulates high biodiversity. Consequently, Vera's counter-proposal to the linear succession and Watt's gap-phase model of closed-canopy forest, to which it has been compared, is a model of successional cycles known as the shifting mosaics model.
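The cyclic dynamic described above can be illustrated with a minimal toy simulation in Python. The sketch below is illustrative only: its four states and transition probabilities are hypothetical placeholders, not empirical parameters from the wood-pasture literature. Each patch independently cycles from grassland to scrub to grove to break-up and back, and after many steps the landscape holds all stages at once, i.e. a shifting mosaic.
import random

STATES = ["grassland", "scrub", "grove", "break-up"]

# Hypothetical per-step probabilities that a patch advances to the next stage.
ADVANCE = {"grassland": 0.05, "scrub": 0.03, "grove": 0.01, "break-up": 0.5}

def step(patches):
    """Advance each patch independently; 'break-up' wraps back to grassland."""
    out = []
    for state in patches:
        if random.random() < ADVANCE[state]:
            state = STATES[(STATES.index(state) + 1) % len(STATES)]
        out.append(state)
    return out

patches = ["grassland"] * 500  # start from open pasture
for _ in range(1000):
    patches = step(patches)

# All successional stages coexist: the proportions below form the mosaic.
for s in STATES:
    print(f"{s:>10}: {patches.count(s) / len(patches):.2f}")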
In effect however, not all areas would have necessarily been subject to this permanent change. Since grazing animals generally prefer to spend time in grasslands rather than in closed stands of trees, it would practically be possible for three different landscape types to coexist over longer periods in the same spots: permanently open areas, permanently closed groves and areas subject to constant shifting mosaics.
The prehistoric baseline
The Eemian landscape
Although Vera himself limited his argument to the Holocene and the fauna present into historical times, research better supports his claims in regard to earlier interglacials. Modern humans have likely exerted a strong influence in Europe since their first appearance during the Weichselian glaciation, which has led some researchers to criticize Vera's choice of the early to mid Holocene as his benchmark for pristine nature. Instead, they argue that pristine nature only existed in Europe before the arrival of Homo sapiens. They argue that the best model for a truly natural landscape during a warm period in Europe is the Eemian interglacial, the last warm period before the current Holocene, approximately 130,000 to 115,000 years ago, and the last warm period before the arrival of Homo sapiens. While archaic humans existed in the form of Neanderthals, their influence was probably only localised, due to their low population density. During this warm period, paleoecological data indeed suggest that semi-open landscapes, as postulated by Vera, were widespread and common, most likely maintained by large herbivores. Next to these semi-open landscapes, however, the researchers also found evidence for closed-canopy forest. Overall, the Eemian landscape appears to have been very dynamic and probably consisted of varying degrees of openness, including open grasslands, wood pastures, light-open woodland and closed-canopy forest.
The European megafauna
The Eemian interglacial was one of many warm interglacials during the Quaternary, of which the Holocene (or Flandrian interglacial) is the most recent. These alternating glacial and interglacial periods, triggered by the Milankovitch cycles, in turn had a profound influence on life. During the Middle to Late Pleistocene, the result of this cycling was that two very different faunal and floral assemblages took turns in Central Europe. The warm-temperate Palaeoloxodon faunal assemblage, consisting of the straight-tusked elephant, Merck's rhinoceros, the narrow-nosed rhinoceros, hippopotamuses, the European water buffalo, the aurochs, and several species of deer, among others (including most of today's European fauna), had its core area in the Mediterranean. The warm-temperate assemblage periodically expanded from there into the rest of Europe during warm interglacials, and receded into refugia in the Mediterranean during glacial periods. Meanwhile, the cold-temperate faunal assemblage of the mammoth steppe, consisting of the woolly mammoth, woolly rhinoceros, reindeer, saiga, muskox, steppe bison, arctic fox and lemming, among others, was spread across vast areas of Northern Eurasia as well as North America, and during periodic cold glacials advanced deep into Europe. Other animals, such as horses, steppe lions, the scimitar cat, the Ice Age spotted hyena and wolves, were part of both faunal assemblages. Both groups of animals spread and retreated cyclically, depending on whether the climate favoured one or the other, but essentially remained intact in refugia that continued to provide the conditions they preferred.
The Quaternary extinction event
Prior to the Last Glacial Maximum, however, elements of the warm-temperate Palaeoloxodon fauna (hippopotamus, straight-tusked elephant, the two Stephanorhinus species and Neanderthals, for example) as well as the steppe species Elasmotherium sibiricum started to disappear and eventually went extinct. At the onset of the Last Glacial Maximum, populations of the Ice Age spotted hyena and the cave bear complex (Ursus spelaeus, Ursus ingressus) seem to have collapsed on a large scale, and these became extinct next. After the Last Glacial Maximum and towards the Holocene, extinctions continued, with many emblematic "Ice Age species" of the mammoth steppe and adjacent habitats, such as the woolly rhinoceros, the steppe lion, the giant deer and the woolly mammoth, falling victim, although small regional populations of woolly mammoth and steppe bison held out well into the Holocene, and the giant deer was present in the southern Ural region into historical times. These extinctions have been variously credited to human impact, climate change, or a combination of the two.
These extinctions were not limited to Europe or the Palearctic, but rather occurred on all continents except Antarctica, in temporal connection with the migration of Homo sapiens. Together, these extinctions are commonly known as the Quaternary extinction event. Whereas today megafaunal proboscideans, rhinoceroses (Rhinocerotidae) and hippopotamuses (Hippopotamidae) of very large body mass exist exclusively in the global south, notably Sub-Saharan Africa and South and Southeast Asia, land mammals of comparable or greater size used to roam the northern hemisphere and South America until relatively recently. By 10,000 BC, the megafauna of the global north had either died out or been severely geographically restricted. Notable examples include various proboscideans, rhinocerotids and ground sloths, as well as the native South American ungulates, glyptodontines and diprotodontids.
In addition, many large mammals that were spread across all continents except Antarctica prior to the Quaternary extinction event have since declined across their range, or become locally or globally extinct. Modern taxa with a once wider distribution include the Eurasian saiga, the wapiti, the Asian black bear, the bisons, the dhole, lions, the leopard, the jaguar, and the giant anteater. Research has also shown that the extant megafaunal species that survived the extinction event experienced a sharp population decline starting at the same time and continuing to the present day. While the exact cause of these events remains debated, it seems clear that ecological niches in Europe, the Middle East, big parts of Asia, and the Americas were left unoccupied.
The impact of megafauna extinctions
The effects of the global extinction of megafauna are likely to have been far-reaching and damaging to ecosystems, and continue to be. The late Quaternary extinction event is unprecedented in the Cenozoic (i.e. since the extinction of the non-avian dinosaurs) in its selectivity for large animals. Accordingly, the modern European megafauna-extirpated ecosystems deviate strongly from the megafauna-rich evolutionary norm. Similar to how herds of herbivores like wildebeest, zebra, impala, buffalo, and elephants drive African savanna vegetation patterns, and not vice versa (i.e. the vegetation dictates the activities of these herbivores), it now seems likely that herbivore herds could have provided similar ecosystem functions in the temperate regions before the Quaternary extinctions.
In Europe, where many species such as the straight-tusked elephant, two species of Stephanorhinus and the hippopotamus among many others were lost, this meant that their ecosystem functions – such as plant matter consumption and seed dispersal – were lost as well. Without the disturbance these animals provide, it is argued, forests could develop unhindered and landscapes became more uniform. As this is detrimental to species adapted to the presence of megafauna, some scholars advocate for the reintroduction of these animals where possible, or the introduction of modern proxy species to replace extinct species and their ecological impact, an advocacy known as Pleistocene rewilding.
Towards a resolution
Vera's ideas have been called a "challenge to orthodox thinking" and his book has been widely acclaimed by colleagues. It is credited as the spark of much debate about the character of historic and prehistoric landscapes in Europe. However, testing using pollen data generally does not support Vera's claims for widespread semi-open savanna during early stages of the Holocene, but rather lends support to the competing and more widely accepted high-forest theory. Similarly, modelling approaches and the use of beetle diversity as an indicator for landscape openness also support the view of a predominance of forest throughout the early and middle Holocene in most of Europe. Consequently, the botanist John Birks has argued for the rejection of the wood-pasture hypothesis. He did, however, acknowledge that the role grazing animals played in forest composition is being reevaluated, and was formerly largely ignored by Quaternary paleoecologists.
On the other hand, consensus is building that while forest did most likely dominate throughout the early stages of the Holocene, it was never as dense and overarching as previously assumed. Studies also indicate that forest cover varied considerably between regions, and was comparatively high in Central Europe and lower in the Atlantic regions. Besides climate, topography must also have played a significant role. The aurochs at least seems to have favoured fertile, low-lying riverine areas and plains, which may have led to locally open conditions, while the hill and mountain ranges were more heavily forested. Overall, dense closed-canopy forest probably covered no more than 60% of most areas, with the remainder divided between open woodlands, savannas and open areas. This made early to mid-Holocene Europe more forested than either today or during earlier interglacials, but not a continuous woodland.
In a 2005 response to Vera, Kathy Hodder et al. highlighted the importance of disturbance factors other than herbivory, particularly fire, to prehistoric landscapes, pointing out that both the high-forest theory and Vera's model have largely ignored this possibility. This is connected to the discovery of fire-loving beetle species and charcoal deposits from the European pre-Neolithic Holocene. In the same paper, they also argued that the influence of large herbivores can be acknowledged without this necessarily implying that they created the open, park-like landscapes described by Vera.
At the same time, research has shown that under the current climate free-roaming large grazers can indeed influence and even temporarily halt vegetation succession, as proposed by Vera. Vera's choice of the Mesolithic as his benchmark for pristine nature has also been criticized, because the role people played during this period is unclear. Anatomically modern humans have been present in Europe since 50-40 kya, and studies indicate that already in the early Holocene, human impact on the environment was second in importance only to climate, surpassing herbivore disturbance. However, the late-Pleistocene expansion of modern humans out of Africa is frequently cited as cause for the simultaneous global extinction of primarily large mammals. In a 2014 paper, rewilding ecologist Christopher Sandom et al. found that the depauperate megafauna that remained in Europe after these extinctions may be the reason for the reduced landscape openness. They reached this conclusion by comparing beetle deposits from the Holocene and Eemian of Britain as indicators for the degree of openness. These beetles, they found, indicated that during the Eemian interglacial, the last interglacial with a pristine megafauna, landscape openness was associated with high megafauna densities. In contrast, closed forest predominated in the early Holocene in the absence of megafauna. The importance of the impact of large herbivores on vegetation and the significance of megafauna extinctions in this regard has also been highlighted in other studies.
Implications and tangents
Implications for conservation practice
Vera's hypothesis has important implications for conservation theory and practice, because it puts emphasis on the importance of grasslands in temperate Europe and their legitimacy as natural landscapes with intrinsic conservation value. Under the high forest framework, these and related landscape types such as heathland were viewed as purely or mostly anthropogenic landscapes, naturally confined to areas marginal enough to prevent woodland formation. Instead it was believed that the broadleaved regions were dominated by climax communities of shade-tolerant species, interrupted only occasionally by collapses of forest cover and disturbances through fire, storm or browsing. Examples of this school of thought include Białowieża on the Polish-Belarusian border as well as the Hainich in Central Germany.
The logical consequence of this was that species associated with grasslands, forest fringes and old, open-grown trees disappeared on a large scale, since many ecosystems in Europe, including highly species-rich grasslands in Romania, strictly depend on some management and are negatively impacted if the areas are left fallow and overgrown by forest vegetation. Similarly, the displacement of aspen in boreal forests seems to be driven more by increasing competition in the increasingly closed stands than by browsing.
In Europe, grasslands were maintained by large herbivores over the last 1.8 million years, resulting in an exceptional diversity of species in many European grasslands. For example, on a wooded meadow in Estonia, 76 species of plant per square metre were counted in 2000, making it one of the world's record sites. Similarly high numbers were counted at other locations in Eastern Europe, making the region one of the worldwide hotspots for plant species richness on a small scale. However, grasslands in Europe and elsewhere are increasingly under threat, including from forest encroachment following abandonment, ill-conceived forest restoration schemes, overgrazing and agricultural intensification. In particular, the notion that most grasslands derive from human management, and as such are essentially degraded former woodlands suitable for reforestation, has been called into question more recently and is threatening native grassland ecosystems worldwide. For Europe, studies have demonstrated the local persistence of grasslands throughout the Holocene as natural ecosystems, the important role they play for insects, for example, and the potential for biodiversity enhancement that lies in their maintenance by reintroduced large herbivores. At the same time, up to 90% of European semi-natural grasslands, meaning grasslands that were formerly maintained by humans and their livestock, have disappeared during the 20th century, with losses especially high in Western, Northern and Central Europe.
Given the significant importance oaks have as habitat for wood-eating insect communities in Europe, it has been pointed out that traditional forest management may not deliver all the benefits dead oak wood has for these species, since these often depend on surrounding circumstances such as sun exposure. Instead, conservation of the highly species-rich plant communities of open oak woodlands may best be achieved through traditional grazing management.
In the traditional framework of closed-canopy forest as the aspirational ideal, the losses of species dependent on open areas were seen as collateral damage necessary for the creation of this ideal, and had to be accepted because species associated with open areas were regarded as hemerophiles anyway, which would have followed human clearings into Central and Western Europe only in the Holocene and would originally have been restricted to Southern and Eastern Europe. Taking into account that this results in overall biodiversity loss, traditional agricultural landscapes were then in turn recognised as important refuges for species groups associated with open landscapes, seen either as a by-product of post-Neolithic agricultural traditions or as relics of Pleistocene assemblages that formed alongside the now-extinct Pleistocene megafauna, for which introduced domestic animals were partial substitutes. In both cases, their continued survival would largely depend on the continued execution of traditional agricultural practices.
Vera's hypothesis implies both that the model of primeval forest and the resulting rhetoric are the result of a major fallacy in nature conservation, paleoecology and forestry, and that the preservation of open and half-open landscapes and their associated biodiversity does not depend on agricultural practices, but rather on maintenance by large herbivores, whether wild or domesticated.
Rewilding and practical implementation
The validity of Vera's hypothesis remains debated among ecologists and conservationists, but it is often considered a fruitful approach for conservation, and thus has been widely implemented in daily practice. The resulting rewilding-advocacy differs from more traditional conservation primarily in that it emphasises a hands-off approach. Instead of intervening to preserve or revive specific species or ecosystem types, the principle is to reduce human intervention to a minimum and instead reintroduce natural ecosystem dynamics, with emphasis being put on returning large mammals to the landscape.
Examples of such projects include the Dutch conservation area Oostvaardersplassen, which was initiated by Vera, as well as the Knepp estate in Sussex. Isabella Tree, co-owner of the latter, has named Vera and his ideas as important reasons for her and her husband to consider rewilding their private estate with fallow deer, red deer, English Longhorn cattle (as ecological proxies for the extinct aurochs) and Tamworth pigs (as proxies for the wild boar).
Furthermore, Rewilding Europe, a pan-European organization that aims to create wild spaces in Europe by re-establishing food chains and reintroducing missing species, has identified Vera's proposals as key to complex, biodiverse ecosystems. Taking them into account, it works to establish free-moving herds of European bison, aurochs proxies (e.g. Tauros cattle), proxies for the wild tarpan (e.g. Konik, Exmoor pony) as well as water buffalo and kulan (which were present in Europe until the early Holocene) to create dynamic ecosystems maintained by the grazing and browsing activity of these herbivores.
Ecology of wood-pastures
Grazed woodlands, wood-pastures and pastures in Europe harbour high biodiversity. Rare perennial plant species commonly or exclusively associated with these ecosystems in Europe include hellebores, peonies, asphodels, dittany, black false hellebore and bastard balm. The tree layer is often dominated by a number of oak species and many rare, local and threatened species such as Florentine wild apple, Lebanese wild apple, medlar, sorb tree, pears and wild plums are more often found in European silvopastoral systems than in commercial forest. Rare or declining bird species such as the European roller, hoopoe, several species of shrike, owls (scops owl, little owl) as well as wrynecks and middle spotted woodpeckers are attracted by wood-pastures in particular. In Iberia, the semi-natural oak-woodlands known as dehesa/montado are home to endemic species such as the Spanish imperial eagle and the Iberian lynx. Wood-pastures also provide important habitat for many species of invertebrates. Due to the abundance of large, old trees, wood-pastures are especially important for saproxylic beetles. This includes spectacular and rare species such as capricorn beetle, stag beetles (such as Lucanus cervus), variable chafer and click beetles. In the British Isles alone nearly 1800 species of invertebrates depend on decaying wood, including 700 species of beetles and about 730 species of flies.
Traditional land use
Many aspects of Vera's theory resonate well with traditional pastoral systems and agricultural practices across Europe and other parts of the world. This is especially true for regions where the pasturing of grazing animals has been carried out for hundreds or thousands of years. The old English saying "The thorn is the mother of the oak", referring to the recruitment of oaks inside thorny shrubs, attests that knowledge of processes such as associational resistance was part of traditional farming lore in rural communities well before the theory itself was proposed in its current form. The phrase is commonly attributed to Humphry Repton, but was used by the writer Arthur Standish as early as 1613 and probably has even earlier origins. Following Vera's argumentation, wood-pastures and related farming systems, as ancient land-use systems, can also be viewed as essentially mimicking the primaeval European wilderness. This goes hand in hand with the fact that, for instance, 63 of the ecosystems listed in Annex I of the Habitats Directive of the European Union strictly depend on low-intensity use and maintenance work, mostly in the form of grazing and mowing. These habitats are labelled high nature value farmland (HNV farmland), and the fact that traditional farming in particular can potentially harbour exceptional biodiversity values may in part be due to the mimicking effect that some forms of human use (such as grazing, pollarding, coppicing and hedgelaying) have in analogy to ecosystem services formerly exercised by the megafauna.
Sergey Zimov's megaherbivore decline model
While Vera's hypothesis focuses on temperate regions and especially temperate Europe, an argumentatively related model has more recently been proposed for high-latitude regions of the modern taiga and tundra biomes, where mammoth steppe formerly predominated. It essentially challenges the widespread view that the Pleistocene megafauna of the northern steppe vanished as a consequence of the warming climate at the advent of the Holocene and the consequent turnover of cold-adapted grassland and herb ecosystems into expanding forests and tundra dominated by mosses, lichens and dwarf trees. Instead, it argues that, conversely, the declining megafauna was the precondition for the vegetational turnover, and that healthy megafauna populations could have maintained their preferred environment, the mammoth steppe, even under the stresses of the warming climate, had human-induced extinctions not occurred. Consequently, Sergey Zimov, one of the main supporters of this model, proposes that ecosystems functionally similar to the mammoth steppe of the Pleistocene could also function under modern circumstances, and seeks to prove this in the form of Pleistocene Park. He and his son have since begun to reintroduce species that are now locally extinct in Yakutia, and to introduce ecological analogues of species that were present in the region during the Pleistocene but have since become globally extinct. These include wild species like reindeer, muskox, bison and wisent, as well as hardy domestic breeds like Bactrian camels, Kalmyk cattle, domestic yaks and Orenburg goats. With these, the project hopes to revive the mammoth steppe, at least in fractions of its former expanse.
See also
New Forest – An ancient common with a significant proportion of wood pasture
Moccas Park
Hatfield Forest
Windsor Great Park
Epping Forest
Zuid-Kennemerland National Park
Marcescence – possibly an adaptation to prevent browsing of twigs and buds in winter
Globally Important Agricultural Heritage Systems
Notes
References
Sources
Further reading
Bunzel-Drüke, Margret; Luick, Rainer (2024): "Master builders of biodiversity". Naturschutz und Landschaftsplanung.
Jepson, Paul and Blythe, Cain (2020). Rewilding: The Radical New Science of Ecological Recovery, Icon Books Ltd. ISBN 978-1-78578-627-3
2000 introductions
Ecology
Ecological theories
Rewilding
Agroforestry
Hypotheses | Wood-pasture hypothesis | Biology | 11,864 |
5,765,458 | https://en.wikipedia.org/wiki/Lo%C3%A8ve%20Prize | The Line and Michel Loève International Prize in Probability (known as the Loève Prize) is an American mathematical award. It is awarded every two years, and is intended to recognize outstanding contributions by researchers in mathematical probability who are under 45 years old.
History
The Line and Michel Loève International Prize in Probability, usually referred to as the Loève Prize, was created in 1992 in honor of Michel Loève from a bequest to UC Berkeley by his widow Line.
Description
It is awarded every two years, and is intended to recognize outstanding contributions by researchers in mathematical probability who are under 45 years old.
With a prize value of around $30,000, this is one of the most generous awards in any specific mathematical subdiscipline.
Winners
Past winners of the prize are:
2023 – Jian Ding
2021 – Ivan Corwin
2019 – Allan Sly
2017 – Hugo Duminil-Copin
2015 – Alexei Borodin
2013 – Sourav Chatterjee
2011 – Scott Sheffield
2009 – Alice Guionnet
2007 – Richard Kenyon
2005 – Wendelin Werner
2003 – Oded Schramm
2001 – Yuval Peres
1999 – Alain-Sol Sznitman
1997 – Jean-François Le Gall
1995 – Michel Talagrand
1993 – David Aldous
See also
List of mathematics awards
References
External links
Information about the prize maintained by David Aldous
Information about the prize on the UC Berkeley website
Mathematics awards
Awards established in 1992 | Loève Prize | Technology | 288 |
38,399,847 | https://en.wikipedia.org/wiki/Terex%20THS15%20Motorscraper | The Terex THS15 Motorscraper was a concept machine scraper displayed for the first time at Minexpo 2000. This machine featured some unusual design concepts, including an adjustable cutting edge on the bowl to reduce friction when loading. Other notable features were the rear-mounted drivetrain (there was no engine on the front module) and a hydrostatic transmission with hydraulic wheel motors. At least two prototypes were made, and these featured noticeable differences in front-end styling. A digital copy of the brochure for this machine is available through ozebooks. Both THS15 scrapers were spotted for sale at used machinery dealers by 2011; their fate is unknown. Terex never went ahead with production and subsequently abandoned motor scraper manufacture altogether.
Transmission
Hydrostatic transmissions have many benefits; however, they are not usually suited to machines that travel at higher speeds over longer distances. The hydrostatic drive may have been a contributing factor in abandoning the project. Changes in the way earthmoving is done, including the use of excavators and dumptrucks, have also eroded the market for scrapers. Therefore, this machine exhibited many revolutionary design concepts but was probably too costly to put into production in a declining market sector.
References
THS 15 original manufacturer's brochure, dated 2001
Ozebooks
Haddock, Keith. "The earthmoving Encyclopedia" Motorbooks
Terex vehicles | Terex THS15 Motorscraper | Engineering | 288 |
788,567 | https://en.wikipedia.org/wiki/Thermopile | A thermopile is an electronic device that converts thermal energy into electrical energy. It is composed of several thermocouples connected usually in series or, less commonly, in parallel. Such a device works on the principle of the thermoelectric effect, i.e., generating a voltage when its dissimilar metals (thermocouples) are exposed to a temperature difference.
Operation
Thermocouples operate by measuring the temperature differential from their junction point to the point in which the thermocouple output voltage is measured. Once a closed circuit is made up of more than one metal and there is a difference in temperature between junctions and points of transition from one metal to another, a current is produced as if generated by a difference of potential between the hot and cold junction.
Thermocouples can be connected in series as thermocouple pairs with a junction located on either side of a thermal resistance layer. The output from the thermocouple pair will be a voltage directly proportional to the temperature difference across the thermal resistance layer and also to the heat flux through the thermal resistance layer. Adding more thermocouple pairs in series increases the magnitude of the voltage output.
Thermopiles can be constructed with a single thermocouple pair, composed of two thermocouple junctions, or multiple thermocouple pairs.
Thermopiles do not respond to absolute temperature, but generate an output voltage proportional to a local temperature difference or temperature gradient. The voltage and power involved are very small, typically measured in millivolts and milliwatts, using instruments specifically designed for the purpose.
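The proportionality described above can be sketched in a few lines of Python. The Seebeck coefficient used here is an illustrative assumption (real thermocouple pairs vary widely), not a datasheet value.
SEEBECK_PER_PAIR = 40e-6  # assumed ~40 µV/K for a single thermocouple pair

def thermopile_voltage(n_pairs, delta_t):
    """Open-circuit output of n_pairs series-connected pairs across delta_t kelvin."""
    return n_pairs * SEEBECK_PER_PAIR * delta_t

print(thermopile_voltage(1, 2.0))   # one pair, 2 K gradient: 8e-05 V (80 µV)
print(thermopile_voltage(72, 2.0))  # 72 pairs, same gradient: ~5.8 mV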
Applications
Thermopiles are used to provide an output in response to temperature as part of a temperature measuring device, such as the infrared thermometers widely used by medical professionals to measure body temperature, or in thermal accelerometers to measure the temperature profile inside the sealed cavity of the sensor. They are also used widely in heat flux sensors, pyrheliometers and gas burner safety controls. The output of a thermopile is usually in the range of tens or hundreds of millivolts. As well as increasing the signal level, the device may be used to provide spatial temperature averaging.
Thermopiles are also used to generate electrical energy from, for instance, heat from electrical components, solar wind, radioactive materials, laser radiation or combustion. The process is also an example of the Peltier effect (electric current transferring heat energy), as the process transfers heat from the hot to the cold junctions.
There are also the so-called thermopile sensors, which are power meters based on the principle that the optical or laser power is converted to heat and the resulting increase in temperature is measured by a thermopile.
See also
Seebeck effect, the physical effect responsible for the generation of voltage in a thermopile
Thermoelectric materials, high-performance materials that can be used to construct a compact thermopile that delivers high power
References
External links
TPA81 Thermopile detector Array Technical Specification
Electrical components
Thermoelectricity | Thermopile | Technology,Engineering | 661 |
20,759,965 | https://en.wikipedia.org/wiki/Anipamil | Anipamil is a calcium channel blocker, specifically of the phenylalkylamine type, a class distinct from the more common dihydropyridines. Anipamil is an analog of verapamil, the best-known phenylalkylamine-type calcium channel blocker. Anipamil has been shown to be a more effective antiarrhythmic medication than verapamil because it does not cause the hypotension seen with verapamil. It is able to do this by binding to the myocardium more tightly than verapamil.
Study of Effects in Rabbits
Anipamil is used to prevent the thickening of aortic muscles in rabbits with hypertension. A study was published in the American Journal of Hypertension to understand the effects anipamil may have on the smooth muscle cell in hypertensive rabbits. After using monoclonal antimyosin antibodies to recognize smooth muscle and nonmuscle myosin heavy chains and to identify various aortic smooth muscle types, twenty-four rabbits with hypertension were studied over a period of 2.5-4 months. The rabbits were split into two groups of 12, and six rabbits from each group received a 40 mg oral dose of anipamil daily while the remaining received an oral placebo daily. Cryosections of primary and secondary smooth aortic muscle were then taken for morphometry and immunocytochemical studies to understand the potential change in phenotype after the addition of anipamil. The study revealed that smooth aortic muscle treated with anipamil demonstrates less thickening of the muscle and an increase in differentiated cell phenotype. The results of this study showed anipamil has a direct effect on smooth muscle cell phenotypes.
Another study, at the Institute of Pharmacological Sciences at the University of Milan, examined the antiatherosclerotic effects of anipamil in cholesterol-fed rabbits. In this study, there were three groups of 18 rabbits: a control group, a cholesterol-fed group, and a cholesterol-fed group receiving the drug. The control group received a standard pelleted chow of 120 grams daily. The cholesterol-fed group received 1.6 grams of cholesterol per day. The cholesterol-fed group receiving the drug received 1.6 grams of cholesterol per day along with a daily dose of anipamil. At the two-week, four-week, and eight-week marks, the amounts of plasma cholesterol, triglycerides, and high-density lipoprotein (HDL) cholesterol were measured. The results showed that HDL cholesterol increased 2-fold and plasma cholesterol increased 25-fold. Anipamil had no effect on plasma cholesterol or HDL cholesterol levels, and the group receiving the drug showed a decrease in the amount of aortic surface covered by plaque. The study showed that the consumption of anipamil at 10 milligrams per kilogram reduced the amount of plaque covering the aortic surface and the amount of cholesterol in the aorta in cholesterol-fed rabbits. This study also showed that the antiatherosclerotic action of anipamil is independent of plasma cholesterol levels.
References
Amines
Calcium channel blockers
Nitriles
3-Methoxyphenyl compounds | Anipamil | Chemistry | 737 |
585,308 | https://en.wikipedia.org/wiki/Chroot | chroot is an operation on Unix and Unix-like operating systems that changes the apparent root directory for the current running process and its children. A program that is run in such a modified environment cannot name (and therefore normally cannot access) files outside the designated directory tree. The term "chroot" may refer to the system call or the wrapper program. The modified environment is called a chroot jail.
History
The chroot system call was introduced during development of Version 7 Unix in 1979. One source suggests that Bill Joy added it on 18 March 1982 – 17 months before 4.2BSD was released – in order to test its installation and build system. All versions of BSD that had a kernel have chroot(2). An early use of the term "jail" as applied to chroot comes from Bill Cheswick creating a honeypot to monitor a hacker in 1991.
The first article about a chroot jailbreak appeared in Carole Fennelly's security column in SunWorld Online; the January and August 1999 editions cover most of the chroot() topics.
To make it useful for virtualization, FreeBSD expanded the concept and in its 4.0 release in 2000 introduced the jail command.
By 2002, an article written by Nicolas Boiteux described how to create a jail on Linux.
By 2003, the first internet microservice providers were using Linux jails to provide SaaS/PaaS services (shell containers, proxies, ircd, bots, ...), billed by usage within the jail.
By 2005, Sun released Solaris Containers (also known as Solaris Zones), described as "chroot on steroids."
By 2008, LXC (upon which Docker was later built) adopted the "container" terminology, and gained popularity in 2013 due to the inclusion of user namespaces in Linux kernel 3.8.
Uses
A chroot environment can be used to create and host a separate virtualized copy of the software system. This can be useful for:
Testing and development A test environment can be set up in the chroot for software that would otherwise be too risky to deploy on a production system.
Dependency control Software can be developed, built and tested in a chroot populated only with its expected dependencies. This can prevent some kinds of linkage skew that can result from developers building projects with different sets of program libraries installed.
Compatibility Legacy software or software using a different ABI must sometimes be run in a chroot because their supporting libraries or data files may otherwise clash in name or linkage with those of the host system.
Recovery Should a system be rendered unbootable, a chroot can be used to move back into the damaged environment after bootstrapping from an alternate root file system (such as from installation media, or a Live CD).
Privilege separation Programs are allowed to carry open file descriptors (for files, pipelines and network connections) into the chroot, which can simplify jail design by making it unnecessary to leave working files inside the chroot directory. This also simplifies the common arrangement of running the potentially vulnerable parts of a privileged program in a sandbox, in order to pre-emptively contain a security breach. Note that chroot is not necessarily enough to contain a process with root privileges.
Limitations
The chroot mechanism is not intended to defend against intentional tampering by privileged (root) users. A notable exception is NetBSD, on which chroot is considered a security mechanism and no escapes are known. On most systems, chroot contexts do not stack properly, and chrooted programs with sufficient privileges may perform a second chroot to break out. To mitigate the risk of this security weakness, chrooted programs should relinquish root privileges as soon as practical after chrooting, or other mechanisms – such as FreeBSD jails – should be used instead. Note that some systems, such as FreeBSD, take precautions to prevent a second chroot attack.
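The recommended sequence (enter the jail, change the working directory into it, then drop privileges) can be sketched as follows in Python; the script is Unix-only, must start as root, and the jail path and unprivileged uid/gid are illustrative assumptions.
import os

JAIL = "/var/jail/demo"  # hypothetical, pre-populated jail directory
UNPRIV_UID = 65534       # commonly "nobody" on many systems
UNPRIV_GID = 65534

os.chroot(JAIL)        # change the apparent root directory
os.chdir("/")          # without this, "." may still point outside the jail
os.setgroups([])       # drop supplementary groups first
os.setgid(UNPRIV_GID)  # drop the group before the user, or setgid() will fail
os.setuid(UNPRIV_UID)  # after this, a second chroot(2) is no longer permitted
# From here on, the process can only name files below JAIL.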
On systems that support device nodes on ordinary filesystems, a chrooted root user can still create device nodes and mount the file systems on them; thus, the chroot mechanism is not intended by itself to be used to block low-level access to system devices by privileged users. It is not intended to restrict the use of resources like I/O, bandwidth, disk space or CPU time. Most Unixes are not completely file system-oriented and leave potentially disruptive functionality like networking and process control available through the system call interface to a chrooted program.
At startup, programs expect to find scratch space, configuration files, device nodes and shared libraries at certain preset locations. For a chrooted program to successfully start, the chroot directory must be populated with a minimum set of these files. This can make chroot difficult to use as a general sandboxing mechanism. Tools such as Jailkit can help to ease and automate this process.
Only the root user can perform a chroot. This is intended to prevent users from putting a setuid program inside a specially crafted chroot jail (for example, with fake /etc/passwd and /etc/shadow files) that would fool it into a privilege escalation.
Some Unixes offer extensions of the chroot mechanism to address at least some of these limitations (see Implementations of operating system-level virtualization technology).
Graphical applications on chroot
It is possible to run graphical applications on a chrooted environment, using methods such as:
Use xhost (or copy the secret from .Xauthority)
Nested X servers like Xnest or the more modern Xephyr (or start a real X server from inside the jail)
Accessing the chroot via SSH using the X11 forwarding (ssh -X) feature
xchroot an extended version of chroot for users and Xorg/X11 forwarding (socat/mount)
An X11 VNC server and connecting a VNC client outside the environment.
Atoms is a Linux Chroot Management Tool with a User-Friendly GUI.
Notable applications
The Postfix mail transfer agent operates as a pipeline of individually chrooted helper programs.
Like 4.2BSD before it, the Debian and Ubuntu internal package-building farms use chroots extensively to catch unintentional build dependencies between packages. SUSE uses a similar method with its build program. Fedora, Red Hat, and various other RPM-based distributions build all RPMs using a chroot tool such as mock.
Many FTP servers for POSIX systems use the chroot mechanism to sandbox untrusted FTP clients. This may be done by forking a process to handle an incoming connection, then chrooting the child (to avoid having to populate the chroot with libraries required for program startup).
If privilege separation is enabled, the OpenSSH daemon will chroot an unprivileged helper process into an empty directory to handle pre-authentication network traffic for each client. The daemon can also sandbox SFTP and shell sessions in a chroot (from version 4.9p1 onwards).
ChromeOS can use a chroot to run a Linux instance using Crouton, providing an otherwise thin OS with access to hardware resources. The security implications related in this article apply here.
Linux host kernel virtual file systems and configuration files
To have a functional chroot environment in Linux, the kernel virtual file systems and configuration files also have to be mounted/copied from host to chroot.
# Mount Kernel Virtual File Systems
TARGETDIR="/mnt/chroot"
mount -t proc proc $TARGETDIR/proc
mount -t sysfs sysfs $TARGETDIR/sys
mount -t devtmpfs devtmpfs $TARGETDIR/dev
mount -t tmpfs tmpfs $TARGETDIR/dev/shm
mount -t devpts devpts $TARGETDIR/dev/pts
# Copy /etc/hosts
/bin/cp -f /etc/hosts $TARGETDIR/etc/
# Copy /etc/resolv.conf
/bin/cp -f /etc/resolv.conf $TARGETDIR/etc/resolv.conf
# Link /etc/mtab
chroot $TARGETDIR rm /etc/mtab 2> /dev/null
chroot $TARGETDIR ln -s /proc/mounts /etc/mtab
See also
List of Unix commands
Operating system-level virtualization
Sandbox (computer security)
sudo
References
External links
Integrating GNU/Linux with Android using chroot
Computer security procedures
Free virtualization software
Unix process- and task-management-related software
Virtualization software
Linux kernel features
System calls | Chroot | Engineering | 1,848 |
1,077,450 | https://en.wikipedia.org/wiki/G.%20N.%20Watson | George Neville Watson (31 January 1886 – 2 February 1965) was an English mathematician, who applied complex analysis to the theory of special functions. His collaboration on the 1915 second edition of E. T. Whittaker's A Course of Modern Analysis (1902) produced the classic "Whittaker and Watson" text. In 1918 he proved a significant result known as Watson's lemma, which has many applications in the theory of the asymptotic behaviour of exponential integrals.
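For orientation, one common modern formulation of the lemma (hypotheses vary slightly between textbooks) is the following: if $g$ is smooth on $[0,T]$ and $\alpha > -1$, then as $x \to \infty$,
\[
\int_0^T e^{-xt}\, t^{\alpha}\, g(t)\, \mathrm{d}t \;\sim\; \sum_{n=0}^{\infty} \frac{g^{(n)}(0)}{n!}\, \frac{\Gamma(\alpha + n + 1)}{x^{\alpha + n + 1}},
\]
so the asymptotic behaviour of the integral for large $x$ can be read off term by term from the Taylor expansion of $g$ at $0$.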
Life
He was born in Westward Ho! in Devon the son of George Wentworth Watson, a schoolmaster and genealogist, and his wife, Mary Justina Griffith.
He was educated at St Paul's School in London, as a pupil of F. S. Macaulay. He then studied Mathematics at Trinity College, Cambridge. There he encountered E. T. Whittaker, though their overlap was only two years.
From 1914 to 1918 he lectured in Mathematics at University College, London. He became Professor of Pure Mathematics at the University of Birmingham in 1918, replacing Prof R S Heath, and remained in this role until 1951.
He was awarded an honorary MSc Pure Science in 1919 by Birmingham University.
He was President of the London Mathematical Society 1933/35.
He died at Leamington Spa on 2 February 1965.
Works
His Treatise on the theory of Bessel functions (1922) also became a classic, in particular in regard to the asymptotic expansions of Bessel functions.
He subsequently spent many years on Ramanujan's formulae in the area of modular equations, mock theta functions and q-series, and for some time looked after Ramanujan's lost notebook.
Ramanujan discovered many more modular equations than all of his mathematical predecessors combined. Watson provided proofs for most of Ramanujan's modular equations. Bruce C. Berndt completed the project begun by Watson and Wilson. Much of Berndt's book Ramanujan's Notebooks, Part 3 (1998) is based upon the prior work of Watson.
Watson's interests included solvable cases of the quintic equation. He introduced Watson's quintuple product identity.
Honours and awards
In 1919 Watson was elected a Fellow of the Royal Society, and in 1946, he received the Sylvester Medal from the Society. He was president of the London Mathematical Society from 1933 to 1935.
He is sometimes confused with the mathematician G. L. Watson, who worked on quadratic forms, and G. Watson, a statistician.
Family
In 1925 he married Elfrida Gwenfil Lane daughter of Thomas Wright Lane.
References
1886 births
1965 deaths
People from Bideford
20th-century English mathematicians
Mathematical analysts
Mathematics education in the United Kingdom
People educated at St Paul's School, London
Academics of the University of Birmingham
Senior Wranglers
Fellows of the Royal Society
De Morgan Medallists
Alumni of Trinity College, Cambridge | G. N. Watson | Mathematics | 595 |
18,228,545 | https://en.wikipedia.org/wiki/GAI%20%28Arabidopsis%20thaliana%20gene%29 | GAI or Gibberellic-Acid Insensitive is a gene in Arabidopsis thaliana which is involved in regulation of plant growth. GAI represses the pathway of gibberellin-sensitive plant growth. It does this by way of its conserved DELLA motif.
References
Further reading
External links
PubMed Search
Transcription factors
Signal transduction
Arabidopsis thaliana genes
Gene expression | GAI (Arabidopsis thaliana gene) | Chemistry,Biology | 82 |
55,983 | https://en.wikipedia.org/wiki/Black%20rat | The black rat (Rattus rattus), also known as the roof rat, ship rat, or house rat, is a common long-tailed rodent of the stereotypical rat genus Rattus, in the subfamily Murinae. It likely originated in the Indian subcontinent, but is now found worldwide.
The black rat is black to light brown in colour with a lighter underside. It is a generalist omnivore and a serious pest to farmers because it feeds on a wide range of agricultural crops. It is sometimes kept as a pet. In parts of India, it is considered sacred and respected in the Karni Mata Temple in Deshnoke.
Taxonomy
Mus rattus was the scientific name proposed by Carl Linnaeus in 1758 for the black rat.
Three subspecies were once recognized, but these are now considered invalid and are known to actually be color morphs:
Rattus rattus rattus – roof rat
Rattus rattus alexandrinus – Alexandrine rat
Rattus rattus frugivorus – fruit rat
Characteristics
A typical adult black rat is long, not including a tail, and weighs , depending on the subspecies. Black rats typically live for about one year in the wild and up to four years in captivity. Despite its name, the black rat exhibits several colour forms. It is usually black to light brown in colour with a lighter underside. In England during the 1920s, several variations were bred and shown alongside domesticated brown rats. This included an unusual green-tinted variety.
Origin
The black rat was present in prehistoric Europe and in the Levant during postglacial periods. The black rat in the Mediterranean region differs genetically from its South Asian ancestor by having 38 instead of 42 chromosomes. Its closest relative is the Asian house rat (R. tanezumi) from Southeast Asia. The two diverged about 120,000 years ago in southwestern Asia. It is unclear how the rat made its way to Europe due to insufficient data, although a land route seems more likely based on the distribution of European haplogroup "A". The black rat spread throughout Europe with the Roman conquest, but declined around the 6th century, possibly due to collapse of the Roman grain trade, climate cooling, or the Justinianic Plague. A genetically different rat population of haplogroup A replaced the Roman population in the medieval times in Europe.
It is a resilient vector for many diseases because of its ability to hold so many infectious bacteria in its blood. It was formerly thought to have played a primary role in spreading bacteria contained in fleas on its body, such as the plague bacterium (Yersinia pestis) which is responsible for the Plague of Justinian and the Black Death. However, recent studies have called this theory into question and instead posit humans themselves as the vector, as the movements of the epidemics and the black rat populations do not show historical or geographical correspondence. A study published in 2015 indicates that other Asiatic rodents served as plague reservoirs, from which infections spread as far west as Europe via trade routes, both overland and maritime. Although the black rat was certainly a plague vector in European ports, the spread of the plague beyond areas colonized by rats suggests that the plague was also circulated by humans after reaching Europe.
Distribution and habitat
The black rat originated in India and Southeast Asia, and spread to the Near East and Egypt, and then throughout the Roman Empire, reaching Great Britain as early as the 1st century AD. Europeans subsequently spread it throughout the world. The black rat is again largely confined to warmer areas, having been supplanted by the brown rat (Rattus norvegicus) in cooler regions and urban areas. In addition to the brown rat being larger and more aggressive, the change from wooden structures and thatched roofs to bricked and tiled buildings favored the burrowing brown rats over the arboreal black rats. In addition, brown rats eat a wider variety of foods, and are more resistant to weather extremes.
Black rat populations can increase exponentially under certain circumstances, perhaps having to do with the timing of the fruiting of the bamboo plant, and cause devastation to the plantings of subsistence farmers; this phenomenon is known as mautam in parts of India.
Black rats are thought to have arrived in Australia with the First Fleet, and subsequently spread to many coastal regions in the country.
Black rats adapt to a wide range of habitats. In urban areas they are found around warehouses, residential buildings, and other human settlements. They are also found in agricultural areas, such as in barns and crop fields. In urban areas, they prefer to live in dry upper levels of buildings, so they are commonly found in wall cavities and false ceilings. In the wild, black rats live in cliffs, rocks, the ground, and trees. They are great climbers and prefer to live in palms and trees, such as pine trees. Their nests are typically spherical and made of shredded material, including sticks, leaves, other vegetation and cloth. In the absence of palms or trees, they can burrow into the ground. Black rats are also found around fences, ponds, riverbanks, streams, and reservoirs.
Behaviour and ecology
It is thought that male and female rats have similarly sized home ranges during the winter, but male rats increase the size of their home range during the breeding season. Along with differing between rats of different sex, home range also differs depending on the type of forest in which the black rat inhabits. For example, home ranges in the southern beech forests of the South Island, New Zealand appear to be much larger than the non-beech forests of the North Island. Due to the limited number of rats that are studied in home range studies, the estimated sizes of rat home ranges in different rat demographic groups are inconclusive.
Diet and foraging
Black rats are considered omnivores and eat a wide range of foods, including seeds, fruit, stems, leaves, fungi, and a variety of invertebrates and vertebrates. They are generalists, and thus not very specific in their food preferences, which is indicated by their tendency to feed on any meal provided for cows, swine, chickens, cats and dogs. They are similar to the tree squirrel in their preference of fruits and nuts. They eat about per day and drink about per day. Their diet is high in water content. They are a threat to many natural habitats because they feed on birds and insects. They are also a threat to many farmers, since they feed on a variety of agricultural-based crops, such as cereals, sugar cane, coconuts, cocoa, oranges, and coffee beans.
The black rat displays flexibility in its foraging behaviour. It is a predatory species and adapts to different micro-habitats. It often meets and forages in close proximity with other rats, both within and between the sexes. It tends to forage after sunset. If food cannot be eaten quickly, it searches for a place to carry and hoard it for a later time. Although it eats a broad range of foods, it is a highly selective feeder; only a restricted selection of foods dominates its diet. When offered a wide diversity of foods, it eats only a small sample of each. This allows it to monitor the quality of foods that are present year round, such as leaves, as well as seasonal foods, such as herbs and insects. This method of operating on a set of foraging standards ultimately determines the final composition of its meals. Also, by sampling the available food in an area, it maintains a dynamic food supply, balances its nutrient intake, and avoids intoxication by secondary compounds.
Nesting behaviour
Through the use of tracking devices such as radio transmitters, rats have been found to occupy dens located in trees, as well as on the ground. In Puketi Forest in the Northland Region of New Zealand, rats have been found to form dens together. Rats appear to den and forage in separate areas in their home range depending on the availability of food resources. Research shows that, in New South Wales, the black rat prefers to inhabit lower leaf litter of forest habitat. There is also an apparent correlation between canopy height, the presence of logs, and the presence of black rats. This correlation may be a result of the distribution of prey abundance as well as of refuges available for rats to avoid predators. As found in North Head, New South Wales, there is a positive correlation between rat abundance, leaf litter cover, canopy height, and litter depth. All other habitat variables showed little to no correlation. While this species' relative, the brown (Norway) rat, prefers to nest near the ground of a building, the black rat prefers the upper floors and roof. Because of this habit it has been given the common name roof rat.
Diseases
Black rats (or their ectoparasites) can carry a number of pathogens, of which bubonic plague (via the Oriental rat flea), typhus, Weil's disease, toxoplasmosis and trichinosis are the best known. It has been hypothesized that the displacement of black rats by brown rats led to the decline of the Black Death. This theory has, however, been deprecated, as the dates of these displacements do not match the increases and decreases in plague outbreaks.
Rats serve as outstanding vectors for the transmission of diseases because they can carry bacteria and viruses in their systems. A number of disease-causing bacteria are commonly carried by rats, including Streptococcus pneumoniae, Corynebacterium kutsheri, Bacillus piliformis, Pasteurella pneumotropica, and Streptobacillus moniliformis. All of these bacteria cause disease in humans. In some cases, these diseases are incurable.
Predators
The black rat is prey to cats and owls in domestic settings. In less urban settings, rats are preyed on by weasels, foxes and coyotes. These predators have little effect on the control of the black rat population because black rats are agile and fast climbers. In addition to agility, the black rat also uses its keen sense of hearing to detect danger and quickly evade mammalian and avian predators.
As an invasive species
Damage caused
After Rattus rattus was introduced to the northern islands of New Zealand, the rats fed on seedlings, adversely affecting the ecology of the islands. Even after eradication of R. rattus, the negative effects may take decades to reverse. By preying on seabirds and seabird eggs, these rats also reduce the pH of the soil. This harms plant species by reducing nutrient availability in soil, thus decreasing the probability of seed germination. For example, research conducted by Hoffman et al. indicates a large impact on 16 indigenous plant species directly preyed on by R. rattus. These plants showed reduced germination and growth in the presence of black rats.
Rats prefer to forage in forest habitats. In the Ogasawara islands, they prey on the indigenous snails and seedlings. Snails that inhabit the leaf litter of these islands showed a significant decline in population on the introduction of Rattus rattus. The black rat shows a preference for snails with larger shells (greater than 10 mm), and this led to a great decline in the population of snails with larger shells. A lack of prey refuges makes it more difficult for the snail to avoid the rat.
Complex pest
The black rat is a complex pest, defined as one that influences the environment in both harmful and beneficial ways. In many cases, after the black rat is introduced into a new area, the population size of some native species declines or goes extinct. This is because the black rat is a good generalist with a wide dietary niche and a preference for complex habitats; this causes strong competition for resources among small animals. This has led to the black rat completely displacing many native species in Madagascar, the Galapagos, and the Florida Keys. In a study by Stokes et al., habitats suitable for the native bush rat, Rattus fuscipes, of Australia are often invaded by the black rat and are eventually occupied by only the black rat. When the abundances of these two rat species were compared in different micro-habitats, both were found to be affected by micro-habitat disturbances, but the black rat was most abundant in areas of high disturbance; this indicates it has a better dispersal ability.
Despite the black rat's tendency to displace native species, it can also aid in increasing species population numbers and maintaining species diversity. The bush rat, a common vector for spore dispersal of truffles, has been extirpated from many micro-habitats of Australia. In the absence of a vector, the diversity of truffle species would be expected to decline. In a study in New South Wales, Australia it was found that, although the bush rat consumes a diversity of truffle species, the black rat consumes as much of the diverse fungi as the natives and is an effective vector for spore dispersal. Since the black rat now occupies many of the micro-habitats that were previously inhabited by the bush rat, the black rat plays an important ecological role in the dispersal of fungal spores. By eradicating the black rat populations in Australia, the diversity of fungi would decline, potentially doing more harm than good.
Control methods
Large-scale rat control programs have been undertaken to keep the invasive predators at a steady level in order to conserve native New Zealand species such as the kokako and mohua. Pesticides, such as pindone and 1080 (sodium fluoroacetate), are commonly distributed by helicopter as an aerial-spray method of mass control on islands infested with invasive rat populations. Bait, such as brodifacoum, is also used along with coloured dyes (which deter birds from eating the baits) to kill and identify rats for experimental and tracking purposes. Another method to track rats is the use of wired cage traps, which are baited with foods such as rolled oats and peanut butter so that rats can be tagged and tracked to determine population sizes through methods like mark-recapture and radio-tracking. Tracking tunnels (coreflute tunnels containing an inked card) are also commonly used monitoring devices, as are chew-cards containing peanut butter. Poison control methods are effective in reducing rat populations to non-threatening sizes, but the populations often rebound to normal size within months. Beyond the rats' highly adaptive foraging behaviour and fast reproduction, the exact mechanisms of this rebound are unclear and still being studied.
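For the mark-recapture method mentioned above, the classic Lincoln–Petersen estimator (with Chapman's correction) is one standard way to turn trap counts into a population estimate. This is a minimal sketch, not a method stated in the article, and the trap counts below are hypothetical:

def lincoln_petersen(marked, caught, recaptured):
    """Chapman-corrected Lincoln-Petersen estimate of population size.
    marked     - rats tagged in the first trapping session
    caught     - rats caught in the second session
    recaptured - tagged rats among those caught in the second session
    """
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# e.g. 40 rats tagged, 50 later caught, of which 10 already carry a tag
print(round(lincoln_petersen(40, 50, 10)))  # ~189 rats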
In 2010, the Sociedad Ornitológica Puertorriqueña (Puerto Rican Bird Society) and the Ponce Yacht and Fishing Club launched a campaign to eradicate the black rat from the Isla Ratones (Mice Island) and Isla Cardona (Cardona Island) islands off the municipality of Ponce, Puerto Rico.
Decline in population
Eradication projects have eliminated black rats from Lundy in the Bristol Channel (2006) and from the Shiant Islands in the Outer Hebrides (2016). Populations probably survive on other islands (e.g. Inchcolm) and in localised areas of the British mainland. Recent National Biodiversity Network data show populations around the U.K., particularly in ports and port towns.
See also
Karni Mata Temple, Deshnoke, Rajasthan, India.
Polynesian rat
Urban plague
References
Further reading
List of books and articles about rats
External links
Photos and video at ARKive
Rattus
Rodents of Asia
Rodents of Europe
Mammals of Azerbaijan
Mammals of Nepal
Stored-product pests
Mammals described in 1758
Taxa named by Carl Linnaeus
Rodents of Borneo | Black rat | Biology | 3,188 |
47,962,742 | https://en.wikipedia.org/wiki/Tally%20marks | Tally marks, also called hash marks, are a form of numeral used for counting. They can be thought of as a unary numeral system.
They are most useful in counting or tallying ongoing results, such as the score in a game or sport, as no intermediate results need to be erased or discarded. However, because of the length of large numbers, tallies are not commonly used for static text. Notched sticks, known as tally sticks, were also historically used for this purpose.
Early history
Counting aids other than body parts appear in the Upper Paleolithic. The oldest tally sticks date to between 35,000 and 25,000 years ago, in the form of notched bones found in the context of the European Aurignacian to Gravettian and in Africa's Late Stone Age.
The so-called Wolf bone is a prehistoric artifact discovered in 1937 in Czechoslovakia during excavations at Dolní Věstonice, Moravia, led by Karl Absolon. Dated to the Aurignacian, approximately 30,000 years ago, the bone is marked with 55 marks which may be tally marks. The head of an ivory Venus figurine was excavated close to the bone.
The Ishango bone, found in the Ishango region of the present-day Democratic Republic of Congo, is dated to over 20,000 years old. Upon discovery, it was thought to portray a series of prime numbers. In the book How Mathematics Happened: The First 50,000 Years, Peter Rudman argues that the development of the concept of prime numbers could only have come about after the concept of division, which he dates to after 10,000 BC, with prime numbers probably not being understood until about 500 BC. He also writes that "no attempt has been made to explain why a tally of something should exhibit multiples of two, prime numbers between 10 and 20, and some numbers that are almost multiples of 10." Alexander Marshack examined the Ishango bone microscopically, and concluded that it may represent a six-month lunar calendar.
Clustering
Tally marks are typically clustered in groups of five for legibility. The cluster size 5 has the advantages of (a) easy conversion into decimal for higher arithmetic operations and (b) avoiding error, as humans can far more easily correctly identify a cluster of 5 than one of 10.
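As a rough illustration (not part of the article), the grouping-by-five scheme can be sketched in a few lines of Python; the string "||||/" is only an ASCII stand-in for the usual four strokes crossed by a fifth:

def tally(n):
    """Render n as tally marks clustered in groups of five.
    '||||/' stands in (ASCII only) for four strokes crossed by a fifth."""
    groups, remainder = divmod(n, 5)
    return " ".join(["||||/"] * groups + (["|" * remainder] if remainder else []))

for n in (3, 5, 12):
    print(n, "->", tally(n))
# 3 -> |||
# 5 -> ||||/
# 12 -> ||||/ ||||/ ||

Grouping by five keeps conversion to decimal a matter of counting clusters and adding the remainder.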
Writing systems
Roman numerals, the Brahmi and Chinese numerals for one through three (一 二 三), and rod numerals were derived from tally marks, as possibly was the ogham script.
The base 1 arithmetic notation system is a unary system similar to tally marks. It is rarely used as a practical base for counting due to its poor readability.
The numbers 1, 2, 3, 4, 5, 6 ... would be represented in this system as
1, 11, 111, 1111, 11111, 111111 ...
Base 1 notation is widely used in the type numbers of flour; a higher number represents a more highly ground flour.
Unicode
In 2015, Ken Lunde and Daisuke Miura submitted a proposal to encode various systems of tally marks in the Unicode Standard. However, the box tally and dot-and-dash tally characters were not accepted for encoding, and only the five ideographic tally marks (正 scheme) and two Western tally digits were added to the Unicode Standard in the Counting Rod Numerals block in Unicode version 11.0 (June 2018). Only the tally marks for the numbers 1 and 5 are encoded, and tally marks for the numbers 2, 3 and 4 are intended to be composed from sequences of tally mark 1 at the font level.
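Assuming the two Western tally digits mentioned above landed at U+1D377 (TALLY MARK ONE) and U+1D378 (TALLY MARK FIVE) in the Counting Rod Numerals block — an assumption worth checking against the current Unicode charts — composing a count follows the same divmod logic, with 2–4 built from repeated TALLY MARK ONE as the text describes:

TALLY_ONE = "\U0001D377"   # TALLY MARK ONE (assumed code point)
TALLY_FIVE = "\U0001D378"  # TALLY MARK FIVE (assumed code point)

def unicode_tally(n):
    """Compose n from the two encoded tally digits; the sequences of
    TALLY MARK ONE for 2-4 are intended to be ligated at the font level."""
    fives, ones = divmod(n, 5)
    return TALLY_FIVE * fives + TALLY_ONE * ones

print(unicode_tally(8))  # one five-mark followed by three one-marks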
See also
Notes
References
Elementary mathematics
Mathematical notation
Numeral systems
Numerals
Korean language | Tally marks | Mathematics | 764 |
20,698,519 | https://en.wikipedia.org/wiki/Parastichy | Parastichy, in phyllotaxy, is the spiral pattern of particular plant organs on some plants, such as areoles on cacti stems, florets in sunflower heads and scales in pine cones. These spirals involve the insertion of a single primordium.
See also
Embryology
Gerrit van Iterson
Phyllotaxis
References
External links
Smith College, Spiral Lattices & Parastichy
Interactive Parastichies Explorer
Plant morphology | Parastichy | Biology | 98 |
19,051 | https://en.wikipedia.org/wiki/Manganese | Manganese is a chemical element; it has symbol Mn and atomic number 25. It is a hard, brittle, silvery metal, often found in minerals in combination with iron. Manganese was first isolated in the 1770s. It is a transition metal with a multifaceted array of industrial alloy uses, particularly in stainless steels. It improves strength, workability, and resistance to wear. Manganese oxide is used as an oxidising agent; as a rubber additive; and in glass making, fertilisers, and ceramics. Manganese sulfate can be used as a fungicide.
Manganese is also an essential human dietary element, important in macronutrient metabolism, bone formation, and free radical defense systems. It is a critical component in dozens of proteins and enzymes. It is found mostly in the bones, but also the liver, kidneys, and brain. In the human brain, the manganese is bound to manganese metalloproteins, most notably glutamine synthetase in astrocytes.
It is familiar in the laboratory in the form of the deep violet salt potassium permanganate. It occurs at the active sites in some enzymes. Of particular interest is the use of a Mn-O cluster, the oxygen-evolving complex, in the production of oxygen by plants.
Characteristics
Physical properties
Manganese is a silvery-gray metal that resembles iron. It is hard and very brittle, difficult to fuse, but easy to oxidize. Manganese and its common ions are paramagnetic. Manganese tarnishes slowly in air and oxidizes ("rusts") like iron in water containing dissolved oxygen.
Isotopes
Naturally occurring manganese is composed of one stable isotope, 55Mn. Several radioisotopes have been isolated and described, ranging in atomic weight from 46 u (46Mn) to 72 u (72Mn). The most stable are 53Mn with a half-life of 3.7 million years, 54Mn with a half-life of 312.2 days, and 52Mn with a half-life of 5.591 days. All of the remaining radioactive isotopes have half-lives of less than three hours, and the majority of those less than one minute. The primary decay mode in isotopes lighter than the most abundant stable isotope, 55Mn, is electron capture, and the primary mode in heavier isotopes is beta decay. Manganese also has three meta states.
Manganese is part of the iron group of elements, which are thought to be synthesized in large stars shortly before the supernova explosion. 53Mn decays to 53Cr with a half-life of 3.7 million years. Because of its relatively short half-life, 53Mn is relatively rare, being produced by cosmic-ray impacts on iron. Manganese isotopic contents are typically combined with chromium isotopic contents and have found application in isotope geology and radiometric dating. Mn–Cr isotopic ratios reinforce the evidence from 26Al and 107Pd for the early history of the Solar System. Variations in 53Cr/52Cr and Mn/Cr ratios from several meteorites suggest an initial 53Mn/55Mn ratio, which indicates that the Mn–Cr isotopic composition must result from in situ decay of 53Mn in differentiated planetary bodies. Hence, 53Mn provides additional evidence for nucleosynthetic processes immediately before coalescence of the Solar System.
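To make the "relatively short half-life" concrete, a minimal sketch (using only the 3.7-million-year figure quoted above) shows how quickly a primordial 53Mn inventory disappears, which is why any surviving 53Mn signal must record the early Solar System:

HALF_LIFE_MYR = 3.7  # 53Mn half-life in million years, from the text

def fraction_remaining(t_myr):
    """Fraction of an initial 53Mn inventory left after t million years."""
    return 0.5 ** (t_myr / HALF_LIFE_MYR)

for t in (3.7, 37.0, 74.0):  # 1, 10 and 20 half-lives
    print(f"{t:5.1f} Myr -> {fraction_remaining(t):.2e}")
# after ~20 half-lives, less than one part in a million remains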
Allotropes
Four allotropes (structural forms) of solid manganese are known, labeled α, β, γ and δ, and occurring at successively higher temperatures. All are metallic, stable at standard pressure, and have a cubic crystal lattice, but they vary widely in their atomic structures.
Alpha manganese (α-Mn) is the equilibrium phase at room temperature. It has a body-centered cubic lattice and is unusual among elemental metals in having a very complex unit cell, with 58 atoms per cell (29 atoms per primitive unit cell) in four different types of site. It is paramagnetic at room temperature and antiferromagnetic at temperatures below .
Beta manganese (β-Mn) forms when heated above the transition temperature of . It has a primitive cubic structure with 20 atoms per unit cell at two types of sites, which is as complex as that of any other elemental metal. It is easily obtained as a metastable phase at room temperature by rapid quenching. It does not show magnetic ordering, remaining paramagnetic down to the lowest temperature measured (1.1 K).
Gamma manganese (γ-Mn) forms when heated above . It has a simple face-centered cubic structure (four atoms per unit cell). When quenched to room temperature it converts to β-Mn, but it can be stabilized at room temperature by alloying it with at least 5 percent of other elements (such as C, Fe, Ni, Cu, Pd or Au), and these solute-stabilized alloys distort into a face-centered tetragonal structure.
Delta manganese (δ-Mn) forms when heated above and is stable up to the manganese melting point of . It has a body-centered cubic structure (two atoms per cubic unit cell).
Chemical compounds
Common oxidation states of manganese are +2, +3, +4, +6, and +7, although all oxidation states from −3 to +7 have been observed. Manganese in oxidation state +7 is represented by salts of the intensely purple permanganate anion [MnO4]−. Potassium permanganate is a commonly used laboratory reagent because of its oxidizing properties; it is used as a topical medicine (for example, in the treatment of fish diseases). Solutions of potassium permanganate were among the first stains and fixatives to be used in the preparation of biological cells and tissues for electron microscopy.
Aside from various permanganate salts, Mn(VII) is represented by the unstable, volatile derivative Mn2O7. Oxyhalides (MnO3F and MnO3Cl) are powerful oxidizing agents. The most prominent example of Mn in the +6 oxidation state is the green manganate anion, [MnO4]2−. Manganate salts are intermediates in the extraction of manganese from its ores. Compounds in the +5 oxidation state are somewhat elusive, and often found associated with an oxide (O2−) or nitride (N3−) ligand. One example is the blue hypomanganate anion [MnO4]3−.
Mn(IV) is somewhat enigmatic because it is common in nature but far rarer in synthetic chemistry. The most common Mn ore, pyrolusite, is MnO2. It is the dark brown pigment of many cave drawings but is also a common ingredient in dry cell batteries. Complexes of Mn(IV) are well known, but they require elaborate ligands. Mn(IV)-OH complexes are an intermediate in some enzymes, including the oxygen evolving center (OEC) in plants.
Simple derivatives Mn3+ are rarely encountered but can be stabilized by suitably basic ligands. Manganese(III) acetate is an oxidant useful in organic synthesis. Solid compounds of manganese(III) are characterized by its strong purple-red color and a preference for distorted octahedral coordination resulting from the Jahn-Teller effect.
A particularly common oxidation state for manganese in aqueous solution is +2, which has a pale pink color. Many manganese(II) compounds are known, such as the aquo complexes derived from manganese(II) sulfate (MnSO4) and manganese(II) chloride (MnCl2). This oxidation state is also seen in the mineral rhodochrosite (manganese(II) carbonate). Manganese(II) commonly exists with a high-spin, S = 5/2 ground state because of the high pairing energy for manganese(II). There are no spin-allowed d–d transitions in manganese(II), which explains its faint color.
Organomanganese compounds
Manganese forms a large variety of organometallic derivatives, i.e., compounds with Mn-C bonds. The organometallic derivatives include numerous examples of Mn in its lower oxidation states, i.e. Mn(−III) up through Mn(I). This area of organometallic chemistry is attractive because Mn is inexpensive and of relatively low toxicity.
Of greatest commercial interest is "MMT", methylcyclopentadienyl manganese tricarbonyl, which is used as an anti-knock compound added to gasoline (petrol) in some countries. It features Mn(I). Consistent with other aspects of Mn(II) chemistry, manganocene (Mn(C5H5)2) is high-spin. In contrast, its neighboring metal iron forms an air-stable, low-spin derivative in the form of ferrocene (Fe(C5H5)2). When conducted under an atmosphere of carbon monoxide, reduction of Mn(II) salts gives dimanganese decacarbonyl (Mn2(CO)10), an orange and volatile solid. The air-stability of this Mn(0) compound (and its many derivatives) reflects the powerful electron-acceptor properties of carbon monoxide. Many alkene complexes and alkyne complexes are derived from Mn2(CO)10.
In Mn(CH3)2(dmpe)2, Mn(II) is low spin, which contrasts with the high spin character of its precursor, MnBr2(dmpe)2 (dmpe = (CH3)2PCH2CH2P(CH3)2). Polyalkyl and polyaryl derivatives of manganese often exist in higher oxidation states, reflecting the electron-releasing properties of alkyl and aryl ligands. One example is [Mn(CH3)6]2−.
History
The origin of the name manganese is complex. In ancient times, two black minerals were identified from the regions of the Magnetes (either Magnesia, located within modern Greece, or Magnesia ad Sipylum, located within modern Turkey). They were both called magnes from their place of origin, but were considered to differ in sex. The male magnes attracted iron, and was the iron ore now known as lodestone or magnetite, and which probably gave us the term magnet. The female magnes ore did not attract iron, but was used to decolorize glass. This female magnes was later called magnesia, known now in modern times as pyrolusite or manganese dioxide. Neither this mineral nor elemental manganese is magnetic. In the 16th century, manganese dioxide was called manganesum (note the two Ns instead of one) by glassmakers, possibly as a corruption and concatenation of two words, since alchemists and glassmakers eventually had to differentiate a magnesia nigra (the black ore) from magnesia alba (a white ore, also from Magnesia, also useful in glassmaking). Michele Mercati called magnesia nigra manganesa, and finally the metal isolated from it became known as manganese (). The name magnesia eventually was then used to refer only to the white magnesia alba (magnesium oxide), which provided the name magnesium for the free element when it was isolated much later.
Manganese dioxide, which is abundant in nature, has long been used as a pigment. The cave paintings in Gargas that are 30,000 to 24,000 years old are made from the mineral form of MnO2 pigments.
Manganese compounds were used by Egyptian and Roman glassmakers, either to add to, or remove, color from glass. Use as "glassmakers soap" continued through the Middle Ages until modern times and is evident in 14th-century glass from Venice.
Because it was used in glassmaking, manganese dioxide was available for experiments by alchemists, the first chemists. Ignatius Gottfried Kaim (1770) and Johann Glauber (17th century) discovered that manganese dioxide could be converted to permanganate, a useful laboratory reagent. Kaim also may have reduced manganese dioxide to isolate the metal, but that is uncertain. By the mid-18th century, the Swedish chemist Carl Wilhelm Scheele used manganese dioxide to produce chlorine. First, hydrochloric acid, or a mixture of dilute sulfuric acid and sodium chloride, was made to react with manganese dioxide; later, hydrochloric acid from the Leblanc process was used, and the manganese dioxide was recycled by the Weldon process. The production of chlorine and hypochlorite bleaching agents was a large consumer of manganese ores.
Scheele and others were aware that pyrolusite (mineral form of manganese dioxide) contained a new element. Johan Gottlieb Gahn isolated an impure sample of manganese metal in 1774, which he did by reducing the dioxide with carbon.
The manganese content of some iron ores used in Greece led to speculations that steel produced from that ore contains additional manganese, making Spartan steel exceptionally hard. Around the beginning of the 19th century, manganese was used in steelmaking and several patents were granted. In 1816, it was documented that iron alloyed with manganese was harder but not more brittle. In 1837, British academic John Couper noted an association between miners' heavy exposure to manganese and a form of Parkinson's disease. In 1912, United States patents were granted for protecting firearms against rust and corrosion with manganese phosphate electrochemical conversion coatings, and the process has seen widespread use ever since.
The invention of the Leclanché cell in 1866 and the subsequent improvement of batteries containing manganese dioxide as cathodic depolarizer increased the demand for manganese dioxide. Until the development of batteries with nickel–cadmium and lithium, most batteries contained manganese. The zinc–carbon battery and the alkaline battery normally use industrially produced manganese dioxide because naturally occurring manganese dioxide contains impurities. In the 20th century, manganese dioxide was widely used as the cathodic material for commercial disposable dry batteries of both the standard (zinc–carbon) and alkaline types.
Manganese is essential to iron and steel production by virtue of its sulfur-fixing, deoxidizing, and alloying properties. This application was first recognized by the British metallurgist Robert Forester Mushet (1811–1891), who introduced the element in 1856 in the form of spiegeleisen.
Occurrence
Manganese comprises about 1000 ppm (0.1%) of the Earth's crust and is the 12th most abundant element. Soil contains 7–9000 ppm of manganese with an average of 440 ppm. The atmosphere contains 0.01 μg/m3. Manganese occurs principally as pyrolusite (MnO2), braunite (Mn2+Mn3+6SiO12), and psilomelane, and to a lesser extent as rhodochrosite (MnCO3).
The most important manganese ore is pyrolusite (MnO2). Other economically important manganese ores usually show a close spatial relation to iron ores, such as sphalerite. Land-based resources are large but irregularly distributed. About 80% of the known world manganese resources are in South Africa; other important manganese deposits are in Ukraine, Australia, India, China, Gabon and Brazil. According to a 1978 estimate, the ocean floor has 500 billion tons of manganese nodules. Attempts to find economically viable methods of harvesting manganese nodules were abandoned in the 1970s.
In South Africa, most identified deposits are located near Hotazel in the Northern Cape Province, (Kalahari manganese fields), with a 2011 estimate of 15 billion tons. In 2011 South Africa produced 3.4 million tons, topping all other nations.
Manganese is mainly mined in South Africa, Australia, China, Gabon, Brazil, India, Kazakhstan, Ghana, Ukraine and Malaysia.
Production
For the production of ferromanganese, the manganese ore is mixed with iron ore and carbon, and then reduced either in a blast furnace or in an electric arc furnace. The resulting ferromanganese has a manganese content of 30–80%. Pure manganese used for the production of iron-free alloys is produced by leaching manganese ore with sulfuric acid and a subsequent electrowinning process.
A more progressive extraction process involves directly reducing (a low grade) manganese ore by heap leaching. This is done by percolating natural gas through the bottom of the heap; the natural gas provides the heat (which needs to be at least 850 °C) and the reducing agent (carbon monoxide). This reduces all of the manganese ore to manganese oxide (MnO), which is a leachable form. The ore then travels through a grinding circuit to reduce the particle size of the ore to between 150 and 250 μm, increasing the surface area to aid leaching. The ore is then added to a leach tank of sulfuric acid and ferrous iron (Fe2+) in a 1.6:1 ratio. The iron reacts with the manganese dioxide (MnO2) to form iron oxyhydroxide (FeO(OH)) and dissolved manganese(II) (Mn2+).
This process yields approximately 92% recovery of the manganese. For further purification, the manganese can then be sent to an electrowinning facility.
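As a back-of-the-envelope illustration (the ore tonnage and grade below are hypothetical; only the ~92% recovery figure comes from the text), the leach's yield scales linearly:

def recovered_mn_tonnes(ore_tonnes, mn_mass_fraction, recovery=0.92):
    """Manganese recovered from a heap-leach circuit at ~92% recovery."""
    return ore_tonnes * mn_mass_fraction * recovery

# hypothetical: 1,000 t of 40% Mn ore
print(recovered_mn_tonnes(1000, 0.40))  # 368.0 t of manganese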
Oceanic environment
In 1972, the CIA's Project Azorian, through billionaire Howard Hughes, commissioned the ship Hughes Glomar Explorer with the cover story of harvesting manganese nodules from the sea floor. That triggered a rush of activity to collect manganese nodules, which was not actually practical until the 2020s. The real mission of Hughes Glomar Explorer was to raise a sunken Soviet submarine, the K-129, with the goal of retrieving Soviet code books.
An abundant resource of manganese exists in the form of manganese nodules found on the ocean floor. These nodules, which are composed of 29% manganese, are located along the ocean floor. The environmental impacts of nodule collection are of interest.
Dissolved manganese (dMn) is found throughout the world's oceans, 90% of which originates from hydrothermal vents. Particulate Mn develops in buoyant plumes over an active vent source, while the dMn behaves conservatively. Mn concentrations vary within the ocean's water column. At the surface, dMn is elevated due to input from external sources such as rivers, dust, and shelf sediments. Coastal sediments normally have lower Mn concentrations, but these can increase due to anthropogenic discharges from industries such as mining and steel manufacturing, which enter the ocean from river inputs. Surface dMn concentrations can also be elevated biologically through photosynthesis and physically from coastal upwelling and wind-driven surface currents. Internal cycling, such as photo-reduction from UV radiation, can also elevate levels by speeding up the dissolution of Mn-oxides and oxidative scavenging, preventing Mn from sinking to deeper waters. Elevated levels at mid-depths can occur near mid-ocean ridges and hydrothermal vents. The hydrothermal vents release dMn-enriched fluid into the water. The dMn can then travel up to 4,000 km due to the microbial capsules present, preventing exchange with particles and lowering the sinking rates. Dissolved Mn concentrations are even higher when oxygen levels are low. Overall, dMn concentrations are normally higher in coastal regions and decrease when moving offshore.
Soils
Manganese occurs in soils in three oxidation states: the divalent cation, Mn2+ and as brownish-black oxides and hydroxides containing Mn (III,IV), such as MnOOH and MnO2. Soil pH and oxidation-reduction conditions affect which of these three forms of Mn is dominant in a given soil. At pH values less than 6 or under anaerobic conditions, Mn(II) dominates, while under more alkaline and aerobic conditions, Mn(III,IV) oxides and hydroxides predominate. These effects of soil acidity and aeration state on the form of Mn can be modified or controlled by microbial activity. Microbial respiration can cause both the oxidation of Mn2+ to the oxides, and it can cause reduction of the oxides to the divalent cation.
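The pH/aeration rule in the paragraph above can be written as a small decision function; this is only a sketch of the stated rule of thumb, since microbial activity can override it:

def dominant_mn_form(ph, aerobic):
    """Rule of thumb from the text: acid (pH < 6) or anaerobic soils favor
    Mn2+; more alkaline, aerated soils favor Mn(III,IV) oxides/hydroxides."""
    if ph < 6 or not aerobic:
        return "Mn2+ (divalent cation)"
    return "Mn(III,IV) oxides and hydroxides"

print(dominant_mn_form(5.5, aerobic=True))   # Mn2+ (divalent cation)
print(dominant_mn_form(7.5, aerobic=True))   # Mn(III,IV) oxides and hydroxides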
The Mn(III,IV) oxides exist as brownish-black stains and small nodules on sand, silt, and clay particles. These surface coatings on other soil particles have high surface area and carry negative charge. The charged sites can adsorb and retain various cations, especially heavy metals (e.g., Cr3+, Cu2+, Zn2+, and Pb2+). In addition, the oxides can adsorb organic acids and other compounds. The adsorption of the metals and organic compounds can then cause them to be oxidized while the Mn(III,IV) oxides are reduced to Mn2+ (e.g., Cr3+ to Cr(VI) and colorless hydroquinone to tea-colored quinone polymers).
Applications
Steel
Manganese is essential to iron and steel production by virtue of its sulfur-fixing, deoxidizing, and alloying properties. Manganese has no satisfactory substitute in these applications in metallurgy. Steelmaking, including its ironmaking component, has accounted for most manganese demand, presently in the range of 85% to 90% of the total demand. Manganese is a key component of low-cost stainless steel. Often ferromanganese (usually about 80% manganese) is the intermediate in modern processes.
Small amounts of manganese improve the workability of steel at high temperatures by forming a high-melting sulfide and preventing the formation of a liquid iron sulfide at the grain boundaries. If the manganese content reaches 4%, the embrittlement of the steel becomes a dominant feature. The embrittlement decreases at higher manganese concentrations and reaches an acceptable level at 8%. Steel containing 8 to 15% of manganese has a high tensile strength of up to 863 MPa. Steel with 12% manganese was discovered in 1882 by Robert Hadfield and is still known as Hadfield steel (mangalloy). It was used for British military steel helmets and later by the U.S. military.
Aluminium alloys
Manganese is used in production of alloys with aluminium. Aluminium with roughly 1.5% manganese has increased resistance to corrosion through grains that absorb impurities which would lead to galvanic corrosion. The corrosion-resistant aluminium alloys 3004 and 3104 (0.8 to 1.5% manganese) are used for most beverage cans. Before 2000, more than 1.6 million tonnes of those alloys were used; at 1% manganese, this consumed 16,000 tonnes of manganese.
Batteries
Manganese(IV) oxide was used in the original type of dry cell battery as an electron acceptor from zinc, and is the blackish material in carbon–zinc type flashlight cells. The manganese dioxide is reduced to the manganese oxide-hydroxide MnO(OH) during discharging, preventing the formation of hydrogen at the anode of the battery.
MnO2 + H2O + e− → MnO(OH) + OH−
The same material also functions in newer alkaline batteries (usually battery cells), which use the same basic reaction, but a different electrolyte mixture. In 2002, more than 230,000 tons of manganese dioxide was used for this purpose.
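Since the half-reaction above transfers one electron per MnO2 formula unit, Faraday's law gives the theoretical capacity of the cathode material. The resulting ≈308 mAh/g is a standard textbook value, worked here as a sketch rather than a figure from the article:

F = 96485.0                  # Faraday constant, C/mol
M_MNO2 = 54.94 + 2 * 16.00   # molar mass of MnO2, g/mol

# one electron per formula unit: capacity = F / M, converted from C/g to mAh/g
capacity_mah_per_g = F / M_MNO2 / 3.6
print(f"theoretical MnO2 capacity: {capacity_mah_per_g:.0f} mAh/g")  # ~308 mAh/g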
Resistors
Copper alloys of manganese, such as Manganin, are commonly found in metal element shunt resistors used for measuring relatively large amounts of current. These alloys have very low temperature coefficient of resistance and are resistant to sulfur. This makes the alloys particularly useful in harsh automotive and industrial environments.
Fertilizers and feed additive
Manganese oxide and sulfate are components of fertilizers. In the year 2000, an estimated 20,000 tons of these compounds were used in fertilizers in the US alone. A comparable amount of Mn compounds was also used in animal feeds.
Niche
Methylcyclopentadienyl manganese tricarbonyl is an additive in some unleaded gasoline to boost octane rating and reduce engine knocking.
Manganese(IV) oxide (manganese dioxide, MnO2) is used as a reagent in organic chemistry for the oxidation of benzylic alcohols (where the hydroxyl group is adjacent to an aromatic ring). Manganese dioxide has been used since antiquity to oxidize and neutralize the greenish tinge in glass from trace amounts of iron contamination. MnO2 is also used in the manufacture of oxygen and chlorine and in drying black paints. In some preparations, it is a brown pigment for paint and is a constituent of natural umber.
Tetravalent manganese is used as an activator in red-emitting phosphors. While many compounds are known which show luminescence, the majority are not used in commercial application due to low efficiency or deep red emission. However, several Mn4+ activated fluorides were reported as potential red-emitting phosphors for warm-white LEDs. But to this day, only K2SiF6:Mn4+ is commercially available for use in warm-white LEDs.
The metal is occasionally used in coins; until 2000, the only United States coin to use manganese was the "wartime" nickel from 1942 to 1945. An alloy of 75% copper and 25% nickel was traditionally used for the production of nickel coins. However, because of a shortage of nickel metal during the war, it was substituted by more available silver and manganese, resulting in an alloy of 56% copper, 35% silver and 9% manganese. Since 2000, dollar coins, for example the Sacagawea dollar and the Presidential $1 coins, are made from a brass containing 7% of manganese with a pure copper core. In the case of both the nickel and the dollar, manganese was used to duplicate the electromagnetic properties of a previous identically sized and valued coin in the mechanisms of vending machines. In the case of the later U.S. dollar coins, the manganese alloy was intended to duplicate the properties of the copper/nickel alloy used in the previous Susan B. Anthony dollar.
Manganese compounds have been used as pigments and for the coloring of ceramics and glass. The brown color of ceramic is sometimes the result of manganese compounds. In the glass industry, manganese compounds are used for two effects. Manganese(III) reacts with iron(II) to reduce strong green color in glass by forming less-colored iron(III) and slightly pink manganese(II), compensating for the residual color of the iron(III). Larger quantities of manganese are used to produce pink colored glass. In 2009, Mas Subramanian and associates at Oregon State University discovered that manganese can be combined with yttrium and indium to form an intensely blue, non-toxic, inert, fade-resistant pigment, YInMn Blue, the first new blue pigment discovered in 200 years.
Biochemistry
Many classes of enzymes contain manganese cofactors, including oxidoreductases, transferases, hydrolases, lyases, isomerases and ligases. Other enzymes containing manganese are arginase and the Mn-containing superoxide dismutase (Mn-SOD). Some reverse transcriptases of many retroviruses (although not lentiviruses such as HIV) contain manganese. Manganese-containing polypeptides include the diphtheria toxin, lectins, and integrins.
The oxygen-evolving complex (OEC), containing four atoms of manganese, is a part of photosystem II contained in the thylakoid membranes of chloroplasts. The OEC is responsible for the terminal photooxidation of water during the light reactions of photosynthesis, i.e., it is the catalyst that makes the O2 produced by plants.
Human health and nutrition
Manganese is an essential human dietary element and is present as a coenzyme in several biological processes, which include macronutrient metabolism, bone formation, and free radical defense systems. Manganese is a critical component in dozens of proteins and enzymes. The human body contains about 12 mg of manganese, mostly in the bones. The soft tissue remainder is concentrated in the liver and kidneys. In the human brain, the manganese is bound to manganese metalloproteins, most notably glutamine synthetase in astrocytes.
Regulation
The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for minerals in 2001. For manganese, there was not sufficient information to set EARs and RDAs, so needs are described as estimates for Adequate Intakes (AIs). As for safety, the IOM sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of manganese, the adult UL is set at 11 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). Manganese deficiency is rare.
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For people ages 15 and older, the AI is set at 3.0 mg/day. The AI for pregnancy and lactation is also 3.0 mg/day. For children ages 1–14 years, the AIs increase with age from 0.5 to 2.0 mg/day. The adult AIs are higher than the U.S. RDAs. The EFSA reviewed the same safety question and decided that there was insufficient information to set a UL.
For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For manganese labeling purposes, 100% of the Daily Value was 2.0 mg, but as of 27 May 2016 it was revised to 2.3 mg to bring it into agreement with the RDA. A table of the old and new adult daily values is provided at Reference Daily Intake.
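Label arithmetic is then a one-liner; the 0.5 mg serving below is hypothetical, while the 2.3 mg Daily Value comes from the text:

DAILY_VALUE_MG = 2.3  # post-2016 U.S. Daily Value for manganese

def percent_dv(mg_per_serving):
    """Manganese per serving expressed as a percent of the Daily Value."""
    return 100.0 * mg_per_serving / DAILY_VALUE_MG

print(f"{percent_dv(0.5):.0f}% DV")  # a 0.5 mg serving -> ~22% DV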
Excessive exposure or intake may lead to a condition known as manganism, a neurodegenerative disorder that causes dopaminergic neuronal death and symptoms similar to Parkinson's disease.
Deficiency
Manganese deficiency in humans, which is rare, results in a number of medical problems. A deficiency of manganese causes skeletal deformation in animals and inhibits the production of collagen in wound healing.
Exposure
In water
Waterborne manganese has a greater bioavailability than dietary manganese. According to results from a 2010 study, higher levels of exposure to manganese in drinking water are associated with increased intellectual impairment and reduced intelligence quotients in school-age children. It is hypothesized that long-term exposure due to inhaling the naturally occurring manganese in shower water puts up to 8.7 million Americans at risk. However, data indicates that the human body can recover from certain adverse effects of overexposure to manganese if the exposure is stopped and the body can clear the excess.
Mn levels can increase in seawater when hypoxic periods occur. Since 1990 there have been reports of Mn accumulation in marine organisms including fish, crustaceans, mollusks, and echinoderms. Specific tissues are targets in different species, including the gills, brain, blood, kidney, and liver/hepatopancreas. Physiological effects have been reported in these species. Mn can affect the renewal of immunocytes and their functionality, such as phagocytosis and activation of pro-phenoloxidase, suppressing the organisms' immune systems. This causes the organisms to be more susceptible to infections. As climate change occurs, pathogen distributions increase, and in order for organisms to survive and defend themselves against these pathogens, they need a healthy, strong immune system. If their systems are compromised by high Mn levels, they will not be able to fight off these pathogens and may die.
Gasoline
Methylcyclopentadienyl manganese tricarbonyl (MMT) is an additive developed to replace lead compounds for gasolines to improve the octane rating. MMT is used only in a few countries. Fuels containing manganese tend to form manganese carbides, which damage exhaust valves.
Air
Compared to 1953, levels of manganese in air have dropped. Generally, exposure to ambient Mn air concentrations in excess of 5 μg Mn/m3 can lead to Mn-induced symptoms. Increased ferroportin protein expression in human embryonic kidney (HEK293) cells is associated with decreased intracellular Mn concentration and attenuated cytotoxicity, characterized by the reversal of Mn-reduced glutamate uptake and diminished lactate dehydrogenase leakage.
Regulation
Manganese exposure in United States is regulated by the Occupational Safety and Health Administration (OSHA). People can be exposed to manganese in the workplace by breathing it in or swallowing it. OSHA has set the legal limit (permissible exposure limit) for manganese exposure in the workplace as 5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 1 mg/m3 over an 8-hour workday and a short term limit of 3 mg/m3. At levels of 500 mg/m3, manganese is immediately dangerous to life and health.
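Workplace compliance is judged on the 8-hour time-weighted average (TWA); a minimal sketch, with a hypothetical shift profile, compares it against the OSHA and NIOSH limits quoted above:

OSHA_PEL = 5.0   # mg/m3, 8-hour permissible exposure limit (from the text)
NIOSH_REL = 1.0  # mg/m3, 8-hour recommended exposure limit (from the text)

def twa_8h(samples):
    """8-hour TWA from (concentration mg/m3, duration h) pairs; any
    unsampled time in the 8-hour shift counts as zero exposure."""
    return sum(c * t for c, t in samples) / 8.0

# hypothetical shift: 2 h at 3.0 mg/m3 grinding, 6 h at 0.2 mg/m3 elsewhere
exposure = twa_8h([(3.0, 2), (0.2, 6)])  # 0.9 mg/m3
print(exposure, "within PEL:", exposure <= OSHA_PEL, "within REL:", exposure <= NIOSH_REL)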
Health and safety
Manganese is essential for human health, albeit in milligram amounts.
The current maximum safe concentration under U.S. EPA rules is 50 μg Mn/L.
Manganism
Manganese overexposure is most frequently associated with manganism, a rare neurological disorder associated with excessive manganese ingestion or inhalation. Historically, persons employed in the production or processing of manganese alloys have been at risk for developing manganism; however, health and safety regulations protect workers in developed nations. The disorder was first described in 1837 by British academic John Couper, who studied two patients who were manganese grinders.
Manganism is a biphasic disorder. In its early stages, an intoxicated person may experience depression, mood swings, compulsive behaviors, and psychosis. Early neurological symptoms give way to late-stage manganism, which resembles Parkinson's disease. Symptoms include weakness, monotone and slowed speech, an expressionless face, tremor, forward-leaning gait, inability to walk backwards without falling, rigidity, and general problems with dexterity, gait and balance. Unlike Parkinson's disease, manganism is not associated with loss of the sense of smell and patients are typically unresponsive to treatment with L-DOPA. Symptoms of late-stage manganism become more severe over time even if the source of exposure is removed and brain manganese levels return to normal.
Chronic manganese exposure has been shown to produce a parkinsonism-like illness characterized by movement abnormalities. This condition is not responsive to typical therapies used in the treatment of PD, suggesting an alternative pathway to the typical dopaminergic loss within the substantia nigra. Manganese may accumulate in the basal ganglia, leading to the abnormal movements. A mutation of the SLC30A10 gene, a manganese efflux transporter necessary for decreasing intracellular Mn, has been linked with the development of this Parkinsonism-like disease. The Lewy bodies typical to PD are not seen in Mn-induced parkinsonism.
Animal experiments have given the opportunity to examine the consequences of manganese overexposure under controlled conditions. In (non-aggressive) rats, manganese induces mouse-killing behavior.
Toxicity
Manganese compounds are less toxic than those of other widespread metals, such as nickel and copper. However, exposure to manganese dusts and fumes should not exceed the ceiling value of 5 mg/m3 even for short periods because of its toxicity level. Manganese poisoning has been linked to impaired motor skills and cognitive disorders.
Neurodegenerative diseases
A protein called DMT1 is the major transporter in manganese absorption from the intestine and may be the major transporter of manganese across the blood–brain barrier. DMT1 also transports inhaled manganese across the nasal epithelium. The proposed mechanism for manganese toxicity is that dysregulation leads to oxidative stress, mitochondrial dysfunction, glutamate-mediated excitotoxicity, and aggregation of proteins.
See also
Manganese exporter, membrane transport protein
List of countries by manganese production
Parkerizing
References
Sources
External links
National Pollutant Inventory – Manganese and compounds Fact Sheet
International Manganese Institute
NIOSH Manganese Topic Page
Manganese at The Periodic Table of Videos (University of Nottingham)
All about Manganese Dendrites
Electric Arc Furnace (EAF) Slag
Chemical elements
Transition metals
Deoxidizers
Chemical hazards
Dietary minerals
Reducing agents
Chemical elements with body-centered cubic structure
Native element minerals | Manganese | Physics,Chemistry,Materials_science | 7,802 |
71,952,883 | https://en.wikipedia.org/wiki/List%20of%20Art%20Deco%20architecture%20in%20Oklahoma | This is a list of buildings that are examples of the Art Deco architectural style in Oklahoma, United States.
Ardmore
Ardmore Municipal Auditorium, Ardmore, 1943
Hardy Murphy Coliseum, Ardmore, 1943
Tivoli Theatre, Ardmore, 1915 and 1935
YWCA, Ardmore, 1938
Clinton
Clinton Armory, Clinton, 1937
Fire Station, Clinton, 1930s
McLain Rogers Park, Clinton, 1934
Enid
102 Independence, Enid, 1938
113 North Grand, Enid, 1940
115 North Grand, Enid, 1940
323 Broadway, Enid, 1938
Arcadia Theatre, 226 West Randolph Avenue, Enid, 1931
Broadway Tower, Enid, 1931
Cherokee Theatre (now retail), Enid, 1928
Enid Armory, Enid, 1936
Eugene S. Briggs Auditorium, Enid, 1957
Garfield County Courthouse, Enid, 1896 and 1930
Taft Elementary School, Enid, 1937
Triangle Business Center (former Bass Building), Enid, 1930
Woolworth's, Enid, 1910 and 1921
McAlester
110–114 East Choctaw (former Woolworth's), McAlester
International Temple, Supreme Assembly, Order of the Rainbow for Girls, McAlester, 1951
McAlester Armory, McAlester, 1936
McAlester Scottish Rite Temple, McAlester, 1907 and 1930
OKLA Theater, McAlester, 1931 and 1948
Muskogee
304 East Callahan, Muskogee, 1925
540 West Court (former Chrysler–DeSoto Dealership), Muskogee, 1948
Fire Station No. 3, Muskogee, 1940s
Fire Station No. 4, Muskogee, 1940s
Roxy Theatre, Muskogee, 1948
Norman
301–302 South Porter, Norman, 1930
747 Asp (former cleaner's), Norman, 1930
Boomer Theater, Norman, 1947
Cleveland County Courthouse, Norman, 1940
Corner Thomas Garage (now A-1 Automotive), Norman, 1940s
Hiland Dairy, Norman, 1940s
Logan Apartments, Norman
University Theatre, Norman, 1930
Varsity Theatre (now retail), Norman
Oklahoma City
100 Park Avenue Building, Oklahoma City, 1923
Agnew Theater, Oklahoma City, 1947
Borden's Dairy Building, Oklahoma City, 1947
Cain's Coffee Building, Oklahoma City, 1919
Century Building, Oklahoma City
Cheever's Flowers (now Cheever's Cafe), Oklahoma City, 1935
City Place Tower, Oklahoma City, 1931
Civic Center Music Hall, Oklahoma City, 1937
Doctors Building, Oklahoma City, 1948
Edmond Armory, Edmond, 1937
First National Center, Oklahoma City, 1931
Jewel Theater, Oklahoma City, 1931
Lawyers Title Building, Oklahoma City, 1930
Lyric at the Plaza Theater, Oklahoma City, 1935
May Theatre, Oklahoma City, 1946
Norton–Johnson Buick Company, Oklahoma City, 1930
Nuway Laundry & Cleaners, Oklahoma City, 1940s
Oklahoma County Courthouse, Oklahoma City, 1937
Oklahoma Opry, Oklahoma City, 1946
Raylyn Taylor Salon, Oklahoma City
Santa Fe Depot, Oklahoma City, 1934
Sewage Treatment Plant, Oklahoma City
Taft Middle School, Oklahoma City, 1931
United States Post Office, Courthouse, and Federal Office Building, Oklahoma City, 1912
Will Rogers Theater Events Center, Oklahoma City, 1946
Shawnee
Auditorium, Shawnee
Hornbeck Theatre, Shawnee, 1947
Pottawatomie County Courthouse, Shawnee, 1934
Tulsa
11th Street Bridge, Tulsa, 1916
Adah Robinson Residence, Tulsa, 1929
Art Deco Lofts and Apartments, Tulsa, 1929
Boston Avenue Methodist Church, Tulsa, 1929
Boulder on the Park, Tulsa, 1923
Brady Theater, Tulsa, 1910
Central High School, Tulsa, 1925
Christ the King Church, Tulsa, 1928
Cities Service Station #8, Tulsa, 1940
City Veterinary Hospital, Tulsa, 1942
Continental Supply Company Building, Tulsa, 1921
Day Building (now Nelson's Buffeteria), Tulsa, 1926
Eleventh Street Arkansas River Bridge, Tulsa, 1929
Expo Square Pavilion, Tulsa, 1932
Fawcett Building, Tulsa, 1926
Fire Station No. 13, Tulsa, 1931
Fleeger Residence, Tulsa, 1937
Guaranty Laundry, Tulsa, 1928
Hawks Ice Cream, Tulsa, 1948
Jesse D. Davis Residence, Tulsa, 1936
John Duncan Forsyth Residence, Tulsa, 1937
KVOO-TV Broadcast Facility, Tulsa, 1954
Marquette School, Tulsa, 1932
Mayo Motor Inn, Tulsa, 1950
McGay Residence, Tulsa, 1936
Merchant's Exhibit Building, Tulsa State Fairgrounds, Tulsa, 1930
Metropolitan Tulsa Transit Authority Transfer Center, Tulsa, 1999
Midwest Equitable Meter, Tulsa, 1929
Midwest Marble and Tile Building, Tulsa, 1945
Milady's Cleaners, Tulsa, 1930
National Guard Armory, Tulsa, 1942
National Supply Company (now U-Haul), Tulsa, 1930
Oak Lawn Cemetery Entrance Gates, Tulsa, 1930
Oklahoma Department of Transportation, Tulsa, 1940
Oklahoma Natural Gas Company Building, Tulsa, 1925
Page Warehouse, Tulsa, 1927
Petroleum Building, Tulsa, 1921
Philcade Building, Tulsa, 1931
Philtower Building, Tulsa, 1928
Phoenix Cleaners, Tulsa, 1937
Pythian Building, Tulsa, 1931
Riverside Studios, Tulsa, 1929
Service Pipeline Building (former ARCO Building), Tulsa, 1949
Sherman Residence, Tulsa, 1930s
Southwestern Bell Main Dial Building, Tulsa, 1924
Tulsa Club Building, Tulsa, 1927
Tulsa Fire Alarm Building, Tulsa, 1934
Tulsa Monument Company, Tulsa, 1936
Tulsa SPCA, Tulsa, 1931
Tulsa State Fairgrounds Pavilion, Tulsa, 1932
Tulsa Union Depot, Tulsa, 1931
Ungerman Residence, Tulsa, 1941
Warehouse Market, Tulsa, 1930
Webster High School, Tulsa, 1938
Westhope, Tulsa, 1929
Whentoff Residence, Tulsa, 1935
Will Rogers High School, Tulsa, 1939
Other cities
Adair County Courthouse, Stilwell, 1930
Allred Theatre, Pryor Creek, 1914 and 1942
Anadarko Armory, Anadarko, 1937
Armory, Cherokee
Atoka Armory, Atoka, 1936
Attucks School, Vinita, 1917
Avant's Cities Service Station, El Reno, 1933
Beard Motor Company, Bristow, 1947 and 1953
Bristow Firestone Service Station, Bristow, 1929
Campus Theatre, Stillwater, 1939
Canute Service Station, Canute, 1939
Central Fire Station, Ada
City Hall, Vinita
Claremore Auto Dealership, Claremore, 1930
Clayton High School Auditorium, Clayton, 1936
Bartlesville High School, Bartlesville, 1939
Grady County Courthouse, Chickasha, 1935
Gymnasium, Hennessey, 1941
Gymnasium, Pernell, 1941
Haskell County Courthouse, Stigler, 1931
Healdton Armory, Healdton, 1936
Holdenville Armory, Holdenville, 1936
Hominy Armory, Hominy, 1937
Hugo Armory, Hugo, 1936
Jefferson County Courthouse, Waurika, 1931
Kerr-Mac Service Station, Pauls Valley
Leachman Theatre (now a furniture showroom), Stillwater, 1948
Long Theatre, Keyes, 1947
Masonic Temple, Anadarko
Memorial Park Swimming Pool, Blackwell, 1940s
Minco Armory, Minco, 1936
Municipal Building, Fairview
Okmulgee Armory, Okmulgee, 1937
Page Memorial Library, Sand Springs, 1930
Pawnee County Courthouse, Pawnee, 1932
Pensacola Dam, between Disney and Langley, 1940
Poncan Theatre, Ponca City, 1927
Rialto Theatre, Alva, 1949
Roff Armory, Roff, 1937
Sallisaw High School, Sallisaw, 1939
Sayre Champlin Service Station, Sayre, 1934
Softener & Filter Unit, El Reno, 1930s or 1940s
Southwestern Bell Telephone Building, Stroud, 1929
Sulphur Armory, Sulphur, 1937
Tahlequah Armory, Tahlequah, 1937
Telephone Building, Waynoka
United States Post Office Coalgate, Coalgate, 1940
United States Post Office Hollis, Hollis, 1939
United States Post Office Nowata, Nowata, 1938
Wagoner Armory, Wagoner, 1938
Warren Theatre, Broken Arrow
Washita County Jail, Cordell
Washita Theatre, Chickasha, 1941
Weatherford Armory, Weatherford, 1937
Westland Theatre, Elk City, 1950
See also
List of Art Deco architecture
List of Art Deco architecture in the United States
References
"Architectural Surveys." Oklahoma Historical Society. Retrieved 2022-09-06.
"Art Deco & Streamline Moderne Buildings." Roadside Architecture.com. Retrieved 2019-01-03.
"Art Deco Buildings in Tulsa". Tulsa Preservation Commission. 2015-05-06. Retrieved 2019-01-03.
Cinema Treasures. Retrieved 2022-09-06
"Court House Lover". Flickr. Retrieved 2022-09-06
"New Deal Map". The Living New Deal. Retrieved 2020-12-25.
"SAH Archipedia". Society of Architectural Historians. Retrieved 2021-11-21.
Art Deco
Lists of buildings and structures in Oklahoma | List of Art Deco architecture in Oklahoma | Engineering | 1,781 |
12,239,033 | https://en.wikipedia.org/wiki/Bolus%20%28digestion%29 | In digestion, a bolus (from Latin bolus, "ball") is a ball-like mixture of food and saliva that forms in the mouth during the process of chewing (which is largely an adaptation for plant-eating mammals). It has the same color as the food being eaten, and the saliva gives it an alkaline pH.
Under normal circumstances, the bolus is swallowed, and travels down the esophagus to the stomach for digestion.
See also
Chyme
Chyle
References
Digestive system | Bolus (digestion) | Biology | 109 |
23,135,936 | https://en.wikipedia.org/wiki/Xerocomus%20silwoodensis | Xerocomus silwoodensis is a species of bolete fungus first described in 2007. It was discovered by scientists on Silwood Campus, Imperial College, London and was named after this accordingly. Its discovery on a campus of a leading academic institution has been used to show how little is known about many species. Its discovery was rated as the seventh-best discovery of a new species in 2008 by the International Institute for Species Exploration. It has since been found at two other sites in the United Kingdom and also in France and Italy. It has therefore been asserted that it is a widespread but rare species.
Molecular analysis has shown it is closely related to X. subtomentosus, and it was probably previously overlooked due to its similar appearance. It has a strong preference in associating with Populus species whereas X. chrysonemus associates with Quercus, X. subtomentosus with broadleaved hosts and X. ferrugineus with conifers. Microscopically, the spores of X. silwoodensis resemble those of X. chrysonemus, but are different from both X. subtomentosus and X. ferrugineus.
References
Boletaceae
Fungi described in 2007
Fungi of Europe
Fungus species | Xerocomus silwoodensis | Biology | 260 |
12,693,385 | https://en.wikipedia.org/wiki/EBird | eBird is an online database of bird observations providing scientists, researchers and amateur naturalists with real-time data about bird distribution and abundance. Originally restricted to sightings from the Western Hemisphere, the project expanded to include New Zealand in 2008, and again expanded to cover the whole world in June 2010. eBird has been described as an ambitious example of enlisting amateurs to gather data on biodiversity for use in science.
eBird is an example of crowdsourcing, and has been hailed as an example of democratizing science, treating citizens as scientists, allowing the public to access and use their own data and the collective data generated by others.
History and purpose
Launched in 2002 by the Cornell Lab of Ornithology at Cornell University and the National Audubon Society, eBird gathers basic data on bird abundance and distribution at a variety of spatial and temporal scales. It was mainly inspired by ÉPOQ (Étude des populations d'oiseaux du Québec), a checklist database created by Jacques Larivée in 1975. As of May 12, 2021, there were over one billion bird observations recorded through this global database. In recent years, there have been over 100 million bird observations recorded each year.
eBird's goal is to maximize the utility and accessibility of the vast numbers of bird observations made each year by recreational and professional birders. The observations of each participant join those of others in an international network. Because volunteer observations vary in quality, automated filters built from historical records screen incoming reports to improve accuracy. The data are then available via internet queries in a variety of formats.
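Because the database is exposed through eBird's public web API, recent observations can also be retrieved programmatically. The sketch below is a minimal, hypothetical Python example: the endpoint path, the X-eBirdApiToken header, and the "back" parameter follow the publicly documented eBird API 2.0 but should be verified against current documentation, and "YOUR_API_KEY" and the region code are placeholders.

```python
# Hypothetical sketch of querying the public eBird API (API 2.0).
# Endpoint, header name and parameters are assumptions based on the
# published API docs; verify against current documentation before use.
import requests

def recent_observations(region_code: str, api_key: str, days_back: int = 7):
    """Fetch recent observations for a region code such as 'US-NY'."""
    url = f"https://api.ebird.org/v2/data/obs/{region_code}/recent"
    response = requests.get(
        url,
        headers={"X-eBirdApiToken": api_key},  # free key issued by eBird
        params={"back": days_back},            # look-back window in days
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # list of dicts: species, count, location, date

if __name__ == "__main__":
    for obs in recent_observations("US-NY", "YOUR_API_KEY")[:5]:
        print(obs.get("comName"), obs.get("howMany"), obs.get("obsDt"))
```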
Use of database information
The eBird database has been used by scientists to determine the connection between bird migrations and monsoon rains in India, validating traditional knowledge. It has also been used to detect shifts in bird distribution due to climate change and to help define migration routes. One study found that eBird lists were accurate at determining population trends and distribution when there were at least 10,000 checklists for a given area.
Criticism of data
eBird participation in urban areas remains spatially biased, with higher-income neighborhoods represented far more heavily. This suggests that eBird data should not be considered reliable for planning purposes, or for understanding the urban ecology of birds. Such biases can be exacerbated by events such as the COVID-19 outbreak, when governmental policy restricted people's movements in many countries, leaving the data heavily skewed toward urban locations relative to other habitats.
In another study, eBird data provided a different estimate of suitable habitat for the Nilgiri pipit relative to data collected by scientists (combining field observations and literature review). Authors therefore suggest that spatial distribution models based solely on eBird data should be regarded with caution.
eBird data sets have been shown to be biased not only spatially but temporally. While better roads and areas with denser human populations provided most of the data, eBird records also varied temporally, with monthly upload volumes fluctuating widely and most of the data being provided on weekends. Inferences based on analyses where eBird data are not corrected for such large-scale and long-term biases will yield a biased understanding that reflects eBirder behavior more than bird behavior.
A study pointing out that citizen-scientists possess different levels of skill and suggesting that analyses should incorporate corrections for observer bias used eBird as an example.
Features
eBird documents the presence or absence of species, as well as bird abundance through checklist data. A web interface allows participants to submit their observations or view results via interactive queries of the database. Internet tools maintain personal bird records and enable users to visualize data with interactive maps, graphs, and bar charts. As of 2022, the eBird website is fully available in 14 languages (with different dialect options for three of them) and eBird supports common names for birds in 55 languages with 39 regional versions, for a total of 95 regional sets of common names.
eBird is a free service. Data are stored in a secure facility and archived daily, and are accessible to anyone via the eBird web site and other applications developed by the global biodiversity information community. For example, eBird data are part of the Avian Knowledge Network (AKN), which integrates observational data on bird populations across the western hemisphere and is a data source for the digital ornithological reference Birds of North America. In turn, the AKN feeds eBird data to international biodiversity data systems, such as the Global Biodiversity Information Facility.
Electronic kiosks
In addition to accepting records submitted from users' personal computers and mobile devices, eBird has placed electronic kiosks in prime birding locations, including one in the education center at the J. N. "Ding" Darling National Wildlife Refuge on Sanibel Island in Florida.
Integration in cars
eBird is part of the Starlink infotainment system on the 2019 Subaru Ascent, which integrates eBird into the car's touch screen.
Extent of information
Bird checklists
eBird collects information worldwide, but the vast majority of checklists are submitted from North America. Checklist counts typically include only complete checklists, where observers report all of the species that they can identify throughout the duration of the checklist.
Regional portals
eBird involves a number of regional portals for different parts of the world, managed by local partners. These portals include the following, separated by region.
United States
Alaska eBird
Arkansas eBird
eBird Northwest
Mass Audubon eBird
Maine eBird
eBird Missouri
NJ Audubon eBird
New Hampshire eBird
Minnesota eBird
Montana eBird
Pennsylvania eBird
Texas eBird
Virginia eBird
Vermont eBird
Wisconsin eBird
Canada
eBird Canada
eBird Québec
Caribbean
eBird Caribbean
eBird Puerto Rico
Mexico
eBird Mexico (aVerAves)
Central America
eBird Central America
South America
eBird Argentina
eBird Brasil
eBird Chile
eBird Colombia
eBird Paraguay
eBird Peru
Europe
eBird España
PortugalAves
eKuşbank (eBird Turkey)
Africa
eBird Rwanda
eBird Zambia
Asia
eBird India
eBird Israel
eBird Japan
eBird Malaysia
eBird Singapore
eBird Taiwan
Australia and New Zealand
eBird Australia
New Zealand eBird
Notes
References
External links
eBird website
List of publications using eBird data
2002 introductions
Biodiversity databases
Birdwatching
Citizen science
Cornell University
Ornithological citizen science | EBird | Biology,Environmental_science | 1,274 |
14,350,687 | https://en.wikipedia.org/wiki/Halogen%20bond | In chemistry, a halogen bond (XB or HaB) occurs when there is evidence of a net attractive interaction between an electrophilic region associated with a halogen atom in a molecular entity and a nucleophilic region in another, or the same, molecular entity. Like a hydrogen bond, the result is not a formal chemical bond, but rather a strong electrostatic attraction. Mathematically, the interaction can be decomposed into two terms: one describing electrostatic, orbital-mixing charge transfer and another describing electron-cloud dispersion. Halogen bonds find application in supramolecular chemistry; drug design and biochemistry; crystal engineering and liquid crystals; and organic catalysis.
Definition
Halogen bonds occur when a halogen atom is electrostatically attracted to a partial negative charge. Necessarily, the halogen atom must be covalently bonded; electron density drawn into that σ-bond leaves a positively charged "hole" on the opposite (antipodal) side. Although all halogens can theoretically participate in halogen bonds, the σ-hole shrinks if the electron cloud in question polarizes poorly or the halogen is so electronegative as to polarize the associated σ-bond. Consequently, halogen-bond propensity follows the trend F < Cl < Br < I.
There is no clear distinction between halogen bonds and expanded octet partial bonds; what is superficially a halogen bond may well turn out to be a full bond in an unexpectedly relevant resonance structure.
Donor characteristics
A halogen bond is almost collinear with the halogen atom's other, conventional bond, but the geometry of the electron-charge donor may be much more complex; a small numerical check of this near-linearity is sketched after the list below.
Multi-electron donors such as ethers and amines prefer halogen bonds collinear with the lone pair and donor nucleus.
Pyridine derivatives tend to donate halogen bonds approximately coplanar with the ring, and the two C–N–X angles are about 120°.
Carbonyl, thiocarbonyl-, and selenocarbonyl groups, with a trigonal planar geometry around the Lewis donor atom, can accept one or two halogen bonds.
Anions are usually better halogen-bond acceptors than neutral species: the more dissociated an ion pair is, the stronger the halogen bond formed with the anion.
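As a minimal numerical illustration of the near-linearity noted before this list, the Python sketch below computes the C–X···Y angle from Cartesian atomic coordinates; the coordinates are invented placeholder values, not real crystallographic data.

```python
# Compute the C-X...Y halogen-bond angle from atomic coordinates (in
# angstroms). Angles near 180 degrees are consistent with a sigma-hole
# interaction. Coordinates below are illustrative placeholders only.
import numpy as np

def angle_deg(a, b, c):
    """Return the angle at vertex b (in degrees) for points a-b-c."""
    u = np.asarray(a) - np.asarray(b)
    v = np.asarray(c) - np.asarray(b)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

carbon   = [0.00, 0.00, 0.00]   # C of the C-I bond
iodine   = [2.10, 0.00, 0.00]   # halogen-bond donor atom X
acceptor = [5.05, 0.15, 0.00]   # nucleophilic atom Y (e.g. pyridine N)

print(f"C-I...Y angle: {angle_deg(carbon, iodine, acceptor):.1f} degrees")
# Prints ~177 degrees, i.e. nearly collinear with the C-I bond.
```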
Comparison to other bond-like forces
A parallel relationship can easily be drawn between halogen bonding and hydrogen bonding. Both interactions revolve around an electron donor/electron acceptor relationship between a halogen-like atom and an electron-dense one. But halogen bonding can be both much stronger and more sensitive to direction than hydrogen bonding. A typical hydrogen bond has an energy of formation of roughly 20 kJ/mol; known halogen bond energies range from 10 to 200 kJ/mol.
The σ-hole concept readily extends to pnictogen, chalcogen and aerogen bonds, corresponding to atoms of Groups 15, 16 and 18 (respectively).
History
In 1814, Jean-Jacques Colin discovered (to his surprise) that a mixture of dry gaseous ammonia and iodine formed a shiny, metallic-appearing liquid. Frederick Guthrie established the precise composition of the resulting I2···NH3 complex fifty years later, but the physical processes underlying the molecular interaction remained mysterious until the development of Robert S. Mulliken's theory of inner-sphere and outer-sphere interactions. In Mulliken's categorization, the intermolecular interactions associated with small partial charges affect only the "inner sphere" of an atom's electron distribution; the electron redistribution associated with Lewis adducts affects the "outer sphere" instead.
Then, in 1954, Odd Hassel fruitfully applied the distinction to rationalize the X-ray diffraction patterns associated with a mixture of 1,4-dioxane and bromine. The patterns suggested that only 2.71 Å separated the dioxane oxygen atoms and bromine atoms, much closer than the sum (3.35 Å) of the atoms' van der Waals radii, and that the angle between the O−Br and Br−Br bonds was about 180°. From these facts, Hassel concluded that halogen atoms are directly linked to electron-pair donors, with the bond direction coinciding with the axes of the lone-pair orbitals of the donor molecule. For this work, Hassel was awarded the 1969 Nobel Prize in Chemistry.
Dumas and coworkers first coined the term "halogen bond" in 1978, during their investigations into complexes of CCl4, CBr4, SiCl4, and SiBr4 with tetrahydrofuran, tetrahydropyran, pyridine, anisole, and di-n-butyl ether in organic solvents.
However, it was not until the mid-1990s, that the nature and applications of the halogen bond began to be intensively studied. Through systematic and extensive microwave spectroscopy of gas-phase halogen bond adducts, Legon and coworkers drew attention to the similarities between halogen-bonding and better-known hydrogen-bonding interactions.
In 2007, computational calculations by Politzer and Murray showed that an anisotropic electron density distribution around the halogen nucleus — the "σ-hole" — underlay the high directionality of the halogen bond. This hole was then experimentally observed using Kelvin probe force microscopy.
In 2020, Kellett et al. showed that halogen bonds also have a π-covalent character similar to metal coordination bonds. In August 2023, the "π-hole" was likewise observed experimentally.
Applications
Crystal engineering
The strength and directionality of halogen bonds are a key tool in the discipline of crystal engineering, which attempts to shape crystal structures through close control of intermolecular interactions. Halogen bonds can stabilize copolymers or induce mesomorphism in otherwise isotropic liquids. Indeed, halogen bond-induced liquid crystalline phases are known in both alkoxystilbazoles and silsesquioxanes (pictured). Alternatively, the steric sensitivity of halogen bonds can cause bulky molecules to crystallize into porous structures; in one notable case, halogen bonds between iodine and aromatic π-orbitals caused molecules to crystallize into a pattern that was nearly 40% void.
Controlled polymerization
Conjugated polymers offer the tantalizing possibility of organic molecules with a manipulable electronic band structure, but current methods for production have an uncontrolled topology. Sun, Lauher, and Goroff discovered that certain amides ensure a linear polymerization of poly(diiododiacetylene). The underlying mechanism is a self-organization of the amides via hydrogen bonds that then transfers to the diiododiacetylene monomers via halogen bonds. Although pure diiododiacetylene crystals do not polymerize spontaneously, the halogen-bond induced organization is sufficiently strong that the cocrystals do spontaneously polymerize.
Biological macromolecules
Most biological macromolecules contain few or no halogen atoms. But when molecules do contain halogens, halogen bonds are often essential to understanding molecular conformation. Computational studies suggest that known halogenated nucleobases form halogen bonds with oxygen, nitrogen, or sulfur in vitro. Interestingly, oxygen atoms typically do not attract halogens with their lone pairs, but rather the π electrons in the carbonyl or amide group.
Halogen bonding can be significant in drug design as well. For example, inhibitor IDD 594 binds to human aldose reductase through a bromine halogen bond, as shown in the figure. The molecules fail to bind to each other if similar aldehyde reductase replaces the enzyme, or chlorine replaces the drug halogen, because the variant geometries inhibit the halogen bond.
Notes
References
Further reading
An early review:
Chemical bonding
Intermolecular forces | Halogen bond | Physics,Chemistry,Materials_science,Engineering | 1,655 |
23,253 | https://en.wikipedia.org/wiki/Parallax | Parallax is a displacement or difference in the apparent position of an object viewed along two different lines of sight and is measured by the angle or half-angle of inclination between those two lines. Due to foreshortening, nearby objects show a larger parallax than farther objects, so parallax can be used to determine distances.
To measure large distances, such as the distance of a planet or a star from Earth, astronomers use the principle of parallax. Here, the term parallax is the semi-angle of inclination between two sight-lines to the star, as observed when Earth is on opposite sides of the Sun in its orbit. These distances form the lowest rung of what is called "the cosmic distance ladder", the first in a succession of methods by which astronomers determine the distances to celestial objects, serving as a basis for other distance measurements in astronomy forming the higher rungs of the ladder.
Parallax also affects optical instruments such as rifle scopes, binoculars, microscopes, and twin-lens reflex cameras that view objects from slightly different angles. Many animals, along with humans, have two eyes with overlapping visual fields that use parallax to gain depth perception; this process is known as stereopsis. In computer vision the effect is used for computer stereo vision, and there is a device called a parallax rangefinder that uses it to find the range, and in some variations also the altitude, of a target.
A simple everyday example of parallax can be seen in the dashboards of motor vehicles that use a needle-style mechanical speedometer. When viewed from directly in front, the speed may show exactly 60, but when viewed from the passenger seat, the needle may appear to show a slightly different speed due to the angle of viewing combined with the displacement of the needle from the plane of the numerical dial.
Visual perception
Because the eyes of humans and other animals are in different positions on the head, they present different views simultaneously. This is the basis of stereopsis, the process by which the brain exploits the parallax due to the different views from the eye to gain depth perception and estimate distances to objects.
Animals also use motion parallax, in which the animals (or just the head) move to gain different viewpoints. For example, pigeons (whose eyes do not have overlapping fields of view and thus cannot use stereopsis) bob their heads up and down to see depth.
Motion parallax is also exploited in wiggle stereoscopy, computer graphics that provide depth cues through viewpoint-shifting animation rather than through binocular vision.
Distance measurement
Parallax arises due to a change in viewpoint occurring due to the motion of the observer, of the observed, or both. What is essential is relative motion. By observing parallax, measuring angles, and using geometry, one can determine distance.
Distance measurement by parallax is a special case of the principle of triangulation, which states that one can solve for all the sides and angles in a network of triangles if, in addition to all the angles in the network, the length of at least one side has been measured. Thus, the careful measurement of the length of one baseline can fix the scale of an entire triangulation network. In parallax, the triangle is extremely long and narrow, and by measuring both its shortest side (the motion of the observer) and the small top angle (always less than 1 arcsecond, leaving the other two close to 90 degrees), the length of the long sides (in practice considered to be equal) can be determined.
In astronomy, assuming the angle is small, the distance to a star (measured in parsecs) is the reciprocal of the parallax (measured in arcseconds): d = 1/p. For example, the parallax of Proxima Centauri is 0.7687 arcseconds, so its distance is 1/0.7687 ≈ 1.3009 parsecs (about 4.24 light-years).
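The reciprocal relation is trivial to compute; the short Python sketch below reproduces the Proxima Centauri figure and converts it to light-years using the standard factor of 3.2616 light-years per parsec.

```python
# Worked example of d [parsec] = 1 / p [arcsec] for annual parallax.
LY_PER_PARSEC = 3.2616  # light-years per parsec

def distance_parsecs(parallax_arcsec: float) -> float:
    """Small-angle distance estimate from annual parallax."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

p = 0.7687                   # Proxima Centauri's parallax in arcseconds
d = distance_parsecs(p)
print(f"{d:.4f} parsecs = {d * LY_PER_PARSEC:.2f} light-years")
# -> 1.3009 parsecs = 4.24 light-years
```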
On Earth, a coincidence rangefinder or parallax rangefinder can be used to find distance to a target. In surveying, the problem of resection explores angular measurements from a known baseline for determining an unknown point's coordinates.
Metrology
Measurements made by viewing the position of some marker relative to something to be measured are subject to parallax error if the marker is some distance away from the object of measurement and not viewed from the correct position. For example, if measuring the distance between two ticks on a line with a ruler marked on its top surface, the thickness of the ruler will separate its markings from the ticks. If viewed from a position not exactly perpendicular to the ruler, the apparent position will shift and the reading will be less accurate than the ruler is capable of.
A similar error occurs when reading the position of a pointer against a scale in an instrument such as an analog multimeter. To help the user avoid this problem, the scale is sometimes printed above a narrow strip of mirror, and the user's eye is positioned so that the pointer obscures its reflection, guaranteeing that the user's line of sight is perpendicular to the mirror and therefore to the scale. The same effect alters the speed read on a car's speedometer by a driver in front of it and a passenger off to the side, values read from a graticule, not in actual contact with the display on an oscilloscope, etc.
Photogrammetry
When viewed through a stereo viewer, an aerial picture pair offers a pronounced stereo effect of the landscape and buildings. High buildings appear to "keel over" in the direction away from the center of the photograph. Measurements of this parallax are used to deduce the height of the buildings, provided that flying height and baseline distances are known. This is a key component of the process of photogrammetry.
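One common textbook form of this calculation for vertical stereo pairs is h = H·dP / (P + dP), where H is the flying height above ground, P the absolute stereoscopic parallax at the object's base, and dP the measured parallax difference between the object's top and base. The Python sketch below applies that formula to purely illustrative numbers; it is a simplified model, not a complete photogrammetric workflow.

```python
# Parallax-difference height estimate for a vertical aerial stereo pair:
# h = H * dP / (P_base + dP). All input values below are illustrative.
def object_height(flying_height_m: float, base_parallax_mm: float,
                  parallax_difference_mm: float) -> float:
    """Height of an object from stereo parallax measurements."""
    return flying_height_m * parallax_difference_mm / (
        base_parallax_mm + parallax_difference_mm)

h = object_height(flying_height_m=1500.0,      # altitude above ground
                  base_parallax_mm=88.0,       # parallax at object base
                  parallax_difference_mm=2.4)  # top-minus-base parallax
print(f"building height ~ {h:.1f} m")          # ~39.8 m
```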
Photography
Parallax error can be seen when taking photos with many types of cameras, such as twin-lens reflex cameras and those including viewfinders (such as rangefinder cameras). In such cameras, the eye sees the subject through different optics (the viewfinder, or a second lens) than the one through which the photo is taken. As the viewfinder is often found above the lens of the camera, photos with parallax error are often slightly lower than intended, the classic example being the image of a person with their head cropped off. This problem is addressed in single-lens reflex cameras, in which the viewfinder sees through the same lens through which the photo is taken (with the aid of a movable mirror), thus avoiding parallax error.
Parallax is also an issue in image stitching, such as for panoramas.
Weapon sights
Parallax affects sighting devices of ranged weapons in many ways. On sights fitted on small arms and bows, the perpendicular distance between the sight and the weapon's launch axis (e.g. the bore axis of a gun), generally referred to as "sight height", can induce significant aiming errors when shooting at close range, particularly when shooting at small targets. This parallax error is compensated for (when needed) via calculations that also take in other variables such as bullet drop, windage, and the distance at which the target is expected to be. Sight height can be used to advantage when "sighting in" rifles for field use: a typical hunting rifle (.222 with telescopic sights) sighted in at 75 m will still be useful over a wide band of shorter and longer ranges without needing further adjustment.
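A deliberately simplified way to gauge the size of this effect is a straight-line, no-drop model: if the line of sight sits a height h above the bore and the rifle is zeroed at range Z, the vertical miss at range d is roughly h·(d/Z − 1). The Python sketch below uses assumed example values (a 4 cm sight height and the 75 m zero mentioned above); real trajectories also involve bullet drop, which this toy model ignores.

```python
# Toy model of sight-height-induced aiming error, ignoring bullet drop.
# offset(d) = h * (d / Z - 1); negative = impact below the point of aim.
def impact_offset_cm(sight_height_cm: float, zero_range_m: float,
                     target_range_m: float) -> float:
    return sight_height_cm * (target_range_m / zero_range_m - 1.0)

for d in (5, 25, 75, 150):
    off = impact_offset_cm(sight_height_cm=4.0, zero_range_m=75.0,
                           target_range_m=d)
    print(f"{d:>4} m: {off:+.1f} cm")
# At 5 m the shot lands about 3.7 cm low; at the 75 m zero the error
# vanishes; beyond the zero this simplified model predicts high hits.
```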
Optical sights
In some reticled optical instruments, such as telescopes, microscopes or telescopic sights ("scopes") used on small arms and theodolites, parallax can create problems when the reticle is not coincident with the focal plane of the target image. When the reticle and the target are not at the same focus, the optically corresponding distances being projected through the eyepiece differ, and the user's eye registers the difference in parallax between the reticle and the target (whenever eye position changes) as a relative displacement of one over the other. The term parallax shift refers to the resulting apparent "floating" movement of the reticle over the target image when the user moves the head or eye laterally (up/down or left/right) behind the sight, i.e. an error where the reticle does not stay aligned with the user's optical axis.
Some firearm scopes are equipped with a parallax compensation mechanism, which consists of a movable optical element that enables the optical system to shift the focus of the target image at varying distances into the same optical plane as the reticle (or vice versa). Many low-tier telescopic sights have no parallax compensation because in practice they can still perform very acceptably without eliminating parallax shift. In this case, the scope is often fixed at a designated parallax-free distance that best suits its intended usage. Typical standard factory parallax-free distances for hunting scopes are 100 yd (or 90 m), to make them suited for hunting shots that rarely exceed 300 yd/m. Some competition and military-style scopes without parallax compensation may be adjusted to be parallax free at ranges up to 300 yd/m, making them better suited for aiming at longer ranges. Scopes for guns with shorter practical ranges, such as airguns, rimfire rifles, shotguns, and muzzleloaders, will have parallax settings for correspondingly shorter distances. Airgun scopes are very often found with adjustable parallax, usually in the form of an adjustable objective (or "AO" for short) design, and may adjust down to very short ranges.
Non-magnifying reflector or "reflex" sights can be theoretically "parallax free". But since these sights use parallel collimated light, this is only true when the target is at infinity. At finite distances, eye movement perpendicular to the device will cause parallax movement in the reticle image in exact relationship to the eye position in the cylindrical column of light created by the collimating optics. Firearm sights, such as some red dot sights, try to correct for this by focusing the reticle not at infinity but at some finite distance, a designed target range where the reticle will show very little movement due to parallax. Some manufacturers market reflector sight models they call "parallax free", but this refers to an optical system that compensates for off-axis spherical aberration, an optical error induced by the spherical mirror used in the sight that can cause the reticle position to diverge off the sight's optical axis with change in eye position.
Artillery-fire
Because of the positioning of field or naval artillery, each gun has a slightly different perspective of the target relative to the location of the fire-control system. When aiming guns at the target, the fire control system must compensate for parallax to assure that fire from each gun converges on the target.
Art
Several of Mark Renn's sculptural works play with parallax, appearing abstract until viewed from a specific angle. One such sculpture is The Darwin Gate (pictured) in Shrewsbury, England, which from a certain angle appears to form a dome, according to Historic England, in "the form of a Saxon helmet with a Norman window... inspired by features of St Mary's Church which was attended by Charles Darwin as a boy".
As a metaphor
In a philosophic/geometric sense: an apparent change in the direction of an object, caused by a change in observational position that provides a new line of sight. The apparent displacement, or difference of position, of an object, as seen from two different stations, or points of view. In contemporary writing, parallax can also be the same story, or a similar story from approximately the same timeline, from one book, told from a different perspective in another book. The word and concept feature prominently in James Joyce's 1922 novel, Ulysses. Orson Scott Card also used the term when referring to Ender's Shadow as compared to Ender's Game.
The metaphor is invoked by the Slovenian philosopher Slavoj Žižek in his 2006 book The Parallax View, borrowing the concept of the "parallax view" from the Japanese philosopher and literary critic Kojin Karatani.
See also
Binocular disparity
Lutz–Kelker bias
Parallax mapping, in computer graphics
Parallax scrolling, in computer graphics
Spectroscopic parallax
Triangulation, wherein a point is calculated given its angles from other known points
Trigonometry
True range multilateration, wherein a point is calculated given its distances from other known points
Xallarap
Notes
References
Bibliography
External links
Instructions for having background images on a web page use parallax effects
Actual parallax project measuring the distance to the moon within 2.3%
BBC's Sky at Night program: Patrick Moore demonstrates Parallax using Cricket. (Requires RealPlayer)
Berkeley Center for Cosmological Physics Parallax
Parallax on an educational website, including a quick estimate of distance based on parallax using eyes and a thumb only
Angle
Astrometry
Geometry in computer vision
Optics
Trigonometry
Vision | Parallax | Physics,Chemistry,Astronomy,Mathematics | 2,738 |
210,635 | https://en.wikipedia.org/wiki/Canadian%20Trusted%20Computer%20Product%20Evaluation%20Criteria | The Canadian Trusted Computer Product Evaluation Criteria (CTCPEC) is a computer security standard published in 1993 by the Communications Security Establishment to provide evaluation criteria for IT products. It is a combination of the American TCSEC (also called the Orange Book) and the European ITSEC approaches.
CTCPEC led to the creation of the Common Criteria standard.
The Canadian System Security Centre, part of the Communications Security Establishment was founded in 1988 to establish a Canadian computer security standard.
The Centre published a draft of the standard in April 1992. The final version was published in January 1993.
References
External links
Computer security standards | Canadian Trusted Computer Product Evaluation Criteria | Technology,Engineering | 122 |
2,192,510 | https://en.wikipedia.org/wiki/Lherzolite | Lherzolite is a type of ultramafic igneous rock. It is a coarse-grained rock consisting of 40 to 90% olivine along with significant orthopyroxene and lesser amounts of calcic chromium-rich clinopyroxene. Minor minerals include chromium and aluminium spinels and garnets. Plagioclase can occur in lherzolites and other peridotites that crystallize at relatively shallow depths (20–30 km). At greater depths, plagioclase is unstable and is replaced by spinel. At approximately 90 km depth, pyrope garnet becomes the stable aluminous phase. Garnet lherzolite is a major constituent of the Earth's upper mantle (extending to ~300 km depth). Lherzolite is known from the lower ultramafic part of ophiolite complexes (although harzburgite is more common in this setting), from alpine-type peridotite massifs, from fracture zones adjacent to mid-oceanic ridges, and as xenoliths in kimberlite pipes and alkali basalts. Partial melting of spinel lherzolite is one of the primary sources of basaltic magma.
The name is derived from its type locality, the Lherz Massif (an alpine peridotite complex, also known as orogenic lherzolite complex), at Étang de Lers, near Massat in the French Pyrenees; Étang de Lherz is the archaic spelling of this location.
The Lherz massif also contains harzburgite and dunite, as well as layers of spinel pyroxenite, garnet pyroxenite, and hornblendite. The layers represent partial melts extracted from the host peridotite during decompression in the mantle long before emplacement into the crust.
The Lherz massif is unique because it has been emplaced into Paleozoic carbonates (limestones and dolomites), which form mixed breccias of limestone-lherzolite around the margins of the massif.
The Moon's lower mantle may be composed of lherzolite.
References
Blatt, Harvey and Robert J. Tracy, 1996, Petrology: Igneous, Sedimentary and Metamorphic, 2nd ed., Freeman,
Ultramafic rocks | Lherzolite | Chemistry | 496 |
70,859,669 | https://en.wikipedia.org/wiki/Marine%20resources | Marine resources are resources (physical and biological entities) that are found in oceans and are useful for humans. The term was popularized through Sustainable Development Goal 14 which is about "Life below water" and is one of the 17 Sustainable Development Goals established by the United Nations in 2015. The official wording of the goal is to "Conserve and sustainably use the oceans, seas and marine resources for sustainable development".
Marine resources include:
biological diversity (marine biodiversity)
ecosystem services from marine ecosystems, such as marine coastal ecosystems and coral reefs
fish and seafood
minerals (for example deep sea mining)
oil and gas
renewable energy resources, such as marine energy
sand and gravel
tourism potential
Global goals
The text of Target 14.7 of Sustainable Development Goal 14 states: "By 2030, increase the economic benefits to small island developing states and least developed countries from the sustainable use of marine resources, including through sustainable management of fisheries, aquaculture and tourism".
Fisheries and aquaculture can contribute to alleviating poverty, hunger and malnutrition, and to economic growth. The contribution of sustainable fisheries to the global GDP was around 0.1% per year.
See also
Effects of climate change on oceans
Human impact on marine life
Marine conservation
References
Natural resources
Oceanography | Marine resources | Physics,Environmental_science | 250 |
22,699,244 | https://en.wikipedia.org/wiki/Noor%20Muhammad%20Butt | Noor Muhammad Butt (Urdu: نور محمد بٹ; born 3 June 1936), also known as N. M. Butt, is a Pakistani nuclear physicist and professor of physics at Preston University, known for his research publications on gamma-ray bursts, the Mössbauer effect, diffraction, and, later, nanotechnology.
Besides teaching courses in modern physics, he has spent much of his career working in various branches of physics at the national laboratory in Nilore. He has authored several college textbooks in physics based on his research and presently serves as chairman of the Institute of Nano Science and Technology at Preston University.
Biography
Butt was born in Sialkot, Punjab, British India, on 3 June 1936. He is of Kashmiri descent. He completed his matriculation at the Muslim High School, Sialkot. In 1951, Butt enrolled at Murray College to study physics and graduated with a BSc in physics in 1955, standing at the top of the college's class of 1955.
He went on to attend the Government College University in Lahore to study physics under Dr. Rafi Muhammad, where his research focused on nuclear reactions, specifically the bombardment of lithium-6 with high-energy protons and the energy spectrum of the emitted alpha particles. In 1957, he graduated with an MSc in nuclear physics after successfully defending a thesis written under Dr. Rafi Muhammad. His graduation was noted in the local university press when he was conferred the Roll of Honor by the university.
In 1963, he went to the University of Birmingham in England on a Commonwealth Scholarship for his doctoral studies, carried out under Dr. Philip Burton Moon. He studied topics in solid-state physics and the kinetic theory of solids under Rudolf Peierls while conducting doctoral research on diffraction and the Mössbauer effect using spectroscopy techniques under Dr. William Burcham. In 1965, Butt successfully defended his doctoral thesis and was awarded a PhD in nuclear physics under the supervision of Philip Burton Moon at the University of Birmingham.
He remained associated with the University of Birmingham, teaching various courses on physics as a visiting professor while collaborating with the British physicist Dr. D. A. O'Connor in the Department of Nuclear Physics. In 1993, Butt was conferred a DSc in physics for work titled "Structure Properties of Cubic Crystals", which covered a wide range of topics in materials science and solid-state physics, though this work was based on independent, classified research sponsored by the British government.
Career at Government of Pakistan
Pakistan Atomic Energy Commission
In 1961, Butt secured employment in the Pakistan Atomic Energy Commission (PAEC) as a scientific officer, and worked on the problems involved in reactor physics before departing to England for his doctoral studies. Upon returning in 1966, Butt joined the Fast Neutron Physics Group at the Institute of Nuclear Science and Technology in Nilore, where he conducted work on the neutron diffraction to understand crystalline arrangement in the atomic structure.
Most of his career was spent at the Institute of Nuclear Science and Technology, the country's national laboratory site in Nilore, where he was instrumental in conducting scientific investigations of solid-state materials using lattice-dynamical methods and powder diffraction techniques. Much of his scientific work at the national laboratory remains classified because of its relation to the development of his nation's nuclear weapons. From 1966 to 1978, Butt worked to further the scientific understanding of the neutron, an important subatomic particle, and studied the use of the Pakistan Atomic Research Reactor for neutron activation analysis. In 1978, Butt handed over his work on neutron applications when he was appointed director of the Nuclear Physics Division, where he became interested in nuclear binding energy in the nucleus, the isotopic island of stability, phonons, and the Mössbauer effect.
While leading the Nuclear Physics Division, Butt oversaw the establishment of the "New Labs", where many of his contributions were vital to the scientific understanding of synthetic elements such as plutonium (94Pu). In 1984, Butt was appointed associate director of the Institute of Nuclear Science and Technology, and he was promoted to its director in 1991. In 1995, he took over the Institute of Nuclear Science and Technology as its director-general, but left the directorship for a chief scientist position in 1996.
In 1998, Butt was part of a small team that witnessed the nuclear chain reaction during the atomic tests in the mountains of Balochistan in Pakistan. In 1999, he favored the development of the less destructive neutron bomb over hydrogen-based nuclear weapon designs, whose blast radius is much larger.
In 1999, Butt left the Institute of Nuclear Science and Technology, retiring as its first "Scientist Emeritus", and subsequently returned to academia to teach courses on physics.
Academia and publications
In 1957, Butt joined the faculty of physics at the Government College University, becoming a lecturer in physics in 1958, and remained at his alma mater until 1961. He later taught physics at the University of Birmingham for several years and worked closely with the British physicist Dr. D. A. O'Connor on the applications of diffraction, wave mechanics, and neutron scattering. Their work was sponsored and supported by the U.S. Department of Energy through the OSTI. The collaboration between Butt and O'Connor established the scientific confirmation of Ivar Waller's theory of phonons at the Bragg diffraction peaks using Mössbauer spectroscopy of LiF single crystals. Their work has been extensively cited for several decades and printed in several books, including those of Cambridge University Press and North Holland Publishers in the United Kingdom. He remained associated with CERN in Switzerland and the International Centre for Theoretical Physics (ICTP) in Italy, where his work on solid-state physics was widely recognized.
In 1973, Butt joined the faculty of natural sciences at Quaid-i-Azam University, where he briefly taught a course on solid-state physics, taught courses on materials science, and supervised a student's PhD thesis. He also served on the examiner boards for doctoral and master's thesis programs at Punjab University and Bahauddin Zakariya University. He taught physics at Quaid-i-Azam University until 2000, when he joined the University of Oxford to teach physics.
In 1973, Butt authored a college textbook, "Waves and Oscillation", which covers a wide range of topics in wave mechanics, sound vibration, and the theory of optics and is extensively used by physics and mechanical engineering students in Pakistan. In 1996, he also authored a policy book, "CTBT & Its Implications".
Public advocacy
After his retirement from the PAEC in 2000, Butt began public advocacy for the benefits of nanotechnology and engaged in education efforts when he was appointed chairman of the National Commission on Nano-Science and Technology (NCNST) in 2003, leading it until 2005. In addition, he served as chairman of the Pakistan Science Foundation until 2010, during which time the PSF initiated several awareness programs on nanotechnology in Pakistan.
After 1998, Butt effectively countered calls by the anti-nuclear movement in the country to roll back its nuclear capability, noting publicly that the country would then also have to roll back the programs at its national laboratories and its cutting-edge research in nuclear technology, which was proving useful in energy generation, medicine, agriculture, the medical use of lasers, electronics, supercomputing, nanotechnology, and communication technology.
In a 2010 interview with the news media, Butt vehemently dismissed American concerns that his nation's atomic weapons could fall into the hands of terrorists as a "farce claim", noting that terrorists would be unable to select and sequence targets to launch the missiles, since they lack the scientific education required to understand the locking and triggering mechanisms that activate nuclear devices.
Publications and honors
Awards and honours
Khwarizmi International Award (1995)
Sitara-e-Imtiaz (1992)
Gold Medal, Pakistan Academy of Sciences (1990)
ICTP Award in Solid State Physics (1979)
ICTP Award in Nuclear Physics (1970)
See also
Solid state physics
References
Living people
1936 births
People from Sialkot
Pakistani people of Kashmiri descent
Murray College alumni
Pakistani nuclear physicists
Government College University, Lahore alumni
Alumni of the University of Birmingham
Pakistani expatriates in England
Spectroscopists
Project-706 people
Academic staff of the Government College University, Lahore
Academic staff of Quaid-i-Azam University
People associated with the University of Birmingham
People associated with CERN
Fellows of Pakistan Academy of Sciences
Pakistani textbook writers
Recipients of Sitara-i-Imtiaz
Pakistani science writers
Pakistani inventors
Nuclear weapons scientists and engineers | Noor Muhammad Butt | Physics,Chemistry | 1,780 |
18,909 | https://en.wikipedia.org/wiki/Magnesium | Magnesium is a chemical element; it has symbol Mg and atomic number 12. It is a shiny gray metal having a low density, low melting point and high chemical reactivity. Like the other alkaline earth metals (group 2 of the periodic table) it occurs naturally only in combination with other elements and almost always has an oxidation state of +2. It reacts readily with air to form a thin passivation coating of magnesium oxide that inhibits further corrosion of the metal. The free metal burns with a brilliant-white light. The metal is obtained mainly by electrolysis of magnesium salts obtained from brine. It is less dense than aluminium and is used primarily as a component in strong and lightweight alloys that contain aluminium.
In the cosmos, magnesium is produced in large, aging stars by the sequential addition of three helium nuclei to a carbon nucleus. When such stars explode as supernovas, much of the magnesium is expelled into the interstellar medium where it may recycle into new star systems. Magnesium is the eighth most abundant element in the Earth's crust and the fourth most common element in the Earth (after iron, oxygen and silicon), making up 13% of the planet's mass and a large fraction of the planet's mantle. It is the third most abundant element dissolved in seawater, after sodium and chlorine.
This element is the eleventh most abundant element by mass in the human body and is essential to all cells and some 300 enzymes. Magnesium ions interact with polyphosphate compounds such as ATP, DNA, and RNA. Hundreds of enzymes require magnesium ions to function. Magnesium compounds are used medicinally as common laxatives and antacids (such as milk of magnesia), and to stabilize abnormal nerve excitation or blood vessel spasm in such conditions as eclampsia.
Characteristics
Physical properties
Elemental magnesium is a gray-white lightweight metal, two-thirds the density of aluminium. Magnesium has the lowest melting point (650 °C) and the lowest boiling point (1,091 °C) of all the alkaline earth metals.
Pure polycrystalline magnesium is brittle and easily fractures along shear bands. It becomes much more malleable when alloyed with small amounts of other metals, such as 1% aluminium. The malleability of polycrystalline magnesium can also be significantly improved by reducing its grain size to about 1 μm or less.
When finely powdered, magnesium reacts with water to produce hydrogen gas:
Mg(s) + 2 H2O(g) → Mg(OH)2(aq) + H2(g) + 1203.6 kJ/mol
However, this reaction is much less dramatic than the reactions of the alkali metals with water, because the magnesium hydroxide builds up on the surface of the magnesium metal and inhibits further reaction.
Chemical properties
Oxidation
The principal property of magnesium metal is its reducing power. One hint is that it tarnishes slightly when exposed to air, although, unlike the heavier alkaline earth metals, an oxygen-free environment is unnecessary for storage because magnesium is protected by a thin layer of oxide that is fairly impermeable and difficult to remove.
Direct reaction of magnesium with air or oxygen at ambient pressure forms only the "normal" oxide MgO. However, this oxide may be combined with hydrogen peroxide to form magnesium peroxide, MgO2, and at low temperature the peroxide may be further reacted with ozone to form magnesium superoxide Mg(O2)2.
Magnesium reacts with nitrogen in the solid state if it is powdered and heated to just below the melting point, forming magnesium nitride, Mg3N2.
Magnesium reacts with water at room temperature, though much more slowly than calcium, a similar group 2 metal. When submerged in water, hydrogen bubbles form slowly on the surface of the metal; this reaction happens much more rapidly with powdered magnesium. The reaction also occurs faster at higher temperatures. Magnesium's reversible reaction with water can be harnessed to store energy and run a magnesium-based engine. Magnesium also reacts exothermically with most acids such as hydrochloric acid (HCl), producing magnesium chloride and hydrogen gas, similar to the HCl reaction with aluminium, zinc, and many other metals. Although it is difficult to ignite in mass or bulk, magnesium metal will ignite, and it burns readily when finely divided.
Magnesium may also be used as an igniter for thermite, a mixture of aluminium and iron oxide powder that ignites only at a very high temperature.
Organic chemistry
Organomagnesium compounds are widespread in organic chemistry. They are commonly found as Grignard reagents, formed by reaction of magnesium with haloalkanes. Examples of Grignard reagents are phenylmagnesium bromide and ethylmagnesium bromide. The Grignard reagents function as a common nucleophile, attacking the electrophilic group such as the carbon atom that is present within the polar bond of a carbonyl group.
A prominent organomagnesium reagent beyond Grignard reagents is magnesium anthracene, which is used as a source of highly active magnesium. The related butadiene-magnesium adduct serves as a source for the butadiene dianion.
Complexes of dimagnesium(I) have been observed.
Detection in solution
The presence of magnesium ions can be detected by the addition of ammonium chloride, ammonium hydroxide and monosodium phosphate to an aqueous or dilute HCl solution of the salt. The formation of a white precipitate indicates the presence of magnesium ions.
Azo violet dye can also be used, turning deep blue in the presence of an alkaline solution of magnesium salt. The color is due to the adsorption of azo violet by Mg(OH)2.
Forms
Alloys
As of 2013, magnesium alloy consumption was less than one million tonnes per year, compared with 50 million tonnes of aluminium alloys. Their use has historically been limited by the tendency of Mg alloys to corrode, creep at high temperatures, and combust.
Corrosion
In magnesium alloys, the presence of iron, nickel, copper, or cobalt strongly activates corrosion. In more than trace amounts, these metals precipitate as intermetallic compounds, and the precipitate locales function as active cathodic sites that reduce water, causing the loss of magnesium. Controlling the quantity of these metals improves corrosion resistance. Sufficient manganese overcomes the corrosive effects of iron. This requires precise control over composition, increasing costs. Adding a cathodic poison captures atomic hydrogen within the structure of a metal. This prevents the formation of free hydrogen gas, an essential factor of corrosive chemical processes. The addition of about one in three hundred parts arsenic reduces the corrosion rate of magnesium in a salt solution by a factor of nearly ten.
High-temperature creep and flammability
Magnesium's tendency to creep (gradually deform) at high temperatures is greatly reduced by alloying with zinc and rare-earth elements. Flammability is significantly reduced by a small amount of calcium in the alloy. By using rare-earth elements, it may be possible to manufacture magnesium alloys that do not catch fire at temperatures above magnesium's liquidus, in some cases potentially approaching magnesium's boiling point.
Compounds
Magnesium forms a variety of compounds important to industry and biology, including magnesium carbonate, magnesium chloride, magnesium citrate, magnesium hydroxide (milk of magnesia), magnesium oxide, magnesium sulfate, and magnesium sulfate heptahydrate (Epsom salts).
As recently as 2020, magnesium hydride was under investigation as a way to store hydrogen.
Isotopes
Magnesium has three stable isotopes: 24Mg, 25Mg and 26Mg. All are present in significant amounts in nature (see table of isotopes above). About 79% of Mg is 24Mg. The isotope 28Mg is radioactive and in the 1950s to 1970s was produced by several nuclear power plants for use in scientific experiments. This isotope has a relatively short half-life (21 hours) and its use was limited by shipping times.
The nuclide 26Mg has found application in isotopic geology, similar to that of aluminium. 26Mg is a radiogenic daughter product of 26Al, which has a half-life of 717,000 years. Excessive quantities of stable 26Mg have been observed in the Ca-Al-rich inclusions of some carbonaceous chondrite meteorites. This anomalous abundance is attributed to the decay of its parent 26Al in the inclusions, and researchers conclude that such meteorites were formed in the solar nebula before the 26Al had decayed. These are among the oldest objects in the Solar System and contain preserved information about its early history.
It is conventional to plot 26Mg/24Mg against an Al/Mg ratio. In an isochron dating plot, the Al/Mg ratio plotted is 27Al/24Mg. The slope of the isochron has no age significance, but indicates the initial 26Al/27Al ratio in the sample at the time when the systems were separated from a common reservoir.
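As a minimal numerical sketch of reading such an isochron, the Python snippet below fits a straight line to synthetic (invented) data points; the recovered slope plays the role of the initial 26Al/27Al ratio, for which the canonical early-Solar-System value is about 5 × 10⁻⁵.

```python
# Illustrative Al-Mg isochron fit: the slope of 26Mg/24Mg versus
# 27Al/24Mg gives the initial 26Al/27Al ratio. Data are synthetic.
import numpy as np

al_mg = np.array([0.0, 50.0, 100.0, 150.0, 200.0])    # 27Al/24Mg
mg_ratio = 0.13932 + 5.0e-5 * al_mg                    # ideal 26Mg/24Mg
mg_ratio += np.random.default_rng(0).normal(0, 2e-5, al_mg.size)  # noise

slope, intercept = np.polyfit(al_mg, mg_ratio, 1)
print(f"initial 26Al/27Al ~ {slope:.2e}")     # close to the input 5e-5
print(f"initial 26Mg/24Mg ~ {intercept:.5f}")
```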
Production
Occurrence
Magnesium is the eighth-most-abundant element in the Earth's crust by mass and tied in seventh place with iron in molarity. It is found in large deposits of magnesite, dolomite, and other minerals, and in mineral waters, where magnesium ion is soluble.
Although magnesium is found in more than 60 minerals, only dolomite, magnesite, brucite, carnallite, talc, and olivine are of commercial importance.
The Mg2+ cation is the second-most-abundant cation in seawater (about one-eighth the mass of sodium ions in a given sample), which makes seawater and sea salt attractive commercial sources for Mg. To extract the magnesium, calcium hydroxide is added to the seawater to precipitate magnesium hydroxide.
Mg2+ + Ca(OH)2 → Mg(OH)2 + Ca2+
Magnesium hydroxide (brucite) is poorly soluble in water and can be collected by filtration. It reacts with hydrochloric acid to give magnesium chloride.
Mg(OH)2 + 2 HCl → MgCl2 + 2 H2O
From magnesium chloride, electrolysis produces magnesium.
Production quantities
World production was approximately 1,100 kt in 2017, with the bulk being produced in China (930 kt) and Russia (60 kt). During the 20th century, the United States was the major world supplier of this metal, providing 45% of world production as recently as 1995. Since the Chinese mastery of the Pidgeon process, the US market share has fallen to 7%, with a single US producer left as of 2013: US Magnesium, a Renco Group company located on the shores of the Great Salt Lake.
In September 2021, China took steps to reduce production of magnesium as a result of a government initiative to reduce energy availability for manufacturing industries, leading to a significant price increase.
Pidgeon and Bolzano processes
The Pidgeon process and the Bolzano process are similar. In both, magnesium oxide is the precursor to magnesium metal. The magnesium oxide is produced as a solid solution with calcium oxide by calcining the mineral dolomite, which is a solid solution of calcium and magnesium carbonates:
CaMg(CO3)2 → CaO·MgO + 2 CO2
Reduction occurs at high temperatures with silicon. A ferrosilicon alloy is used rather than pure silicon as it is more economical. The iron component has no bearing on the reaction, which has the simplified equation:
2 MgO + Si → 2 Mg + SiO2
The calcium oxide combines with silicon as the oxygen scavenger, yielding the very stable calcium silicate. The Mg/Ca ratio of the precursors can be adjusted by the addition of MgO or CaO.
The Pidgeon and Bolzano processes differ in the details of the heating and the configuration of the reactor. Both generate gaseous Mg that is condensed and collected. The Pidgeon process dominates worldwide production. The Pidgeon method is less technologically complex, and because of the distillation/vapour-deposition conditions, a high-purity product is easily achievable. China is almost completely reliant on the silicothermic Pidgeon process.
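A back-of-envelope consequence of the simplified silicothermic equation above is the ideal silicon demand per tonne of magnesium, fixed by the molar masses alone; the Python sketch below computes it. The yield figure is an arbitrary illustrative assumption, and real ferrosilicon charges must be larger still because the alloy is only partly silicon.

```python
# Stoichiometry of 2 MgO + Si -> 2 Mg + SiO2: one mole of Si liberates
# two moles of Mg, so ideal silicon demand follows from molar masses.
M_MG = 24.305   # g/mol, magnesium
M_SI = 28.086   # g/mol, silicon

def silicon_per_tonne_mg(yield_fraction: float = 1.0) -> float:
    """Tonnes of silicon consumed per tonne of magnesium produced."""
    ideal = (M_SI / 2) / M_MG      # ~0.578 t Si per t Mg at 100% yield
    return ideal / yield_fraction

print(f"ideal:     {silicon_per_tonne_mg():.3f} t Si / t Mg")
print(f"80% yield: {silicon_per_tonne_mg(0.8):.3f} t Si / t Mg")
```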
Dow process
After the Pidgeon process, the second-most-used process for magnesium production is electrolysis. This is a two-step process: the first step prepares a feedstock containing magnesium chloride, and the second dissociates the compound in electrolytic cells into magnesium metal and chlorine gas. The basic reaction is as follows:

MgCl₂ → Mg + Cl₂
This reaction is operated at temperatures between 680 and 750 °C.
The magnesium chloride can be obtained using the Dow process, which mixes sea water and dolomite in a flocculator, or by dehydration of magnesium chloride brines. The electrolytic cells are partially submerged in a molten salt electrolyte to which the produced magnesium chloride is added in concentrations of 6–18%. The process has its share of disadvantages, including the production of harmful chlorine gas and very high overall energy consumption, which create environmental risks. Compared with the electrolytic method, the Pidgeon process is more advantageous in its simplicity, shorter construction period, lower power consumption, and overall good magnesium quality.
In the United States, magnesium was once obtained principally with the Dow process in Corpus Christi, Texas, by electrolysis of fused magnesium chloride from brine and sea water. A saline solution containing Mg²⁺ ions is first treated with lime (calcium oxide) and the precipitated magnesium hydroxide is collected:
Mg²⁺(aq) + CaO(s) + H₂O(l) → Ca²⁺(aq) + Mg(OH)₂(s)
The hydroxide is then converted to magnesium chloride by treatment with hydrochloric acid, with heating of the product to eliminate the water:

Mg(OH)₂(s) + 2 HCl(aq) → MgCl₂(aq) + 2 H₂O(l)
The salt is then electrolyzed in the molten state. At the cathode, the Mg²⁺ ion is reduced by two electrons to magnesium metal:
Mg²⁺ + 2 e⁻ → Mg
At the anode, each pair of Cl⁻ ions is oxidized to chlorine gas, releasing two electrons to complete the circuit:
2 Cl⁻ → Cl₂(g) + 2 e⁻
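These half-reactions make the theoretical output of a cell easy to estimate from Faraday's law; in the sketch below the cell current and current efficiency are illustrative assumptions, not figures from any actual plant:

# Faraday's-law estimate of Mg output from a molten-salt electrolysis cell.
F = 96485.0    # C/mol, Faraday constant
M_MG = 24.305  # g/mol, magnesium
Z = 2          # electrons transferred per Mg2+ ion reduced

def kg_mg_per_day(current_a: float, current_efficiency: float = 0.9) -> float:
    """kg of magnesium deposited per day at a given cell current."""
    coulombs = current_a * 86400.0                   # one day of charge
    moles = current_efficiency * coulombs / (Z * F)  # mol of Mg reduced
    return moles * M_MG / 1000.0

# e.g. a hypothetical 100 kA cell at 90% current efficiency:
print(f"{kg_mg_per_day(100_000):.0f} kg Mg/day")  # roughly 980 kg/day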
Carbothermic process
The carbothermic route to magnesium has been recognized as a low-energy yet high-productivity path to magnesium extraction. The chemistry is as follows:

MgO(s) + C(s) → Mg(g) + CO(g)
A disadvantage of this method is that slow cooling of the vapour can cause the reaction to revert quickly. To prevent this from happening, the magnesium can be dissolved directly in a suitable metal solvent before reversion starts. Rapid quenching of the vapour can also be used to prevent reversion.
YSZ process
A newer process, solid oxide membrane technology, involves the electrolytic reduction of MgO. At the cathode, the Mg²⁺ ion is reduced by two electrons to magnesium metal. The electrolyte is yttria-stabilized zirconia (YSZ). The anode is a liquid metal. At the YSZ/liquid metal anode, O²⁻ is oxidized. A layer of graphite borders the liquid metal anode, and at this interface carbon and oxygen react to form carbon monoxide. When silver is used as the liquid metal anode, no reductant carbon or hydrogen is needed, and only oxygen gas is evolved at the anode. It was reported in 2011 that this method provides a 40% reduction in cost per pound over the electrolytic reduction method.
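Written out as half-reactions, the electrode processes implied by this description would be (a sketch inferred from the prose above, not equations quoted from the cited report):

At the cathode: Mg²⁺ + 2 e⁻ → Mg
At a silver anode: 2 O²⁻ → O₂(g) + 4 e⁻

With a graphite-bordered anode, the anodically released oxygen instead reacts with carbon at the interface to give carbon monoxide (overall 2 C + O₂ → 2 CO) rather than being evolved as O₂.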
Rieke process
Rieke et al. developed a "general approach for preparing highly reactive metal powders by reducing metal salts in ethereal or hydrocarbon solvents using alkali metals as reducing agents" now known as the Rieke process. Rieke finalized the identification of Rieke metals in 1989, one of which was Rieke-magnesium, first produced in 1974.
History
The name magnesium originates from the Greek word for locations related to the tribe of the Magnetes, either a district in Thessaly called Magnesia or Magnesia ad Sipylum, now in Turkey. It is related to magnetite and manganese, which also originated from this area, and required differentiation as separate substances. See manganese for this history.
In 1618, a farmer at Epsom in England attempted to give his cows water from a local well. The cows refused to drink because of the water's bitter taste, but the farmer noticed that the water seemed to heal scratches and rashes. The substance obtained by evaporating the water became known as Epsom salts and its fame spread. It was eventually recognized as hydrated magnesium sulfate, MgSO₄·7H₂O.
The metal itself was first isolated by Sir Humphry Davy in England in 1808. He used electrolysis on a mixture of magnesia and mercuric oxide. Antoine Bussy prepared it in coherent form in 1831. Davy's first suggestion for a name was 'magnium', but the name magnesium is now used in most European languages.
Uses
Magnesium metal
Magnesium is the third-most-commonly-used structural metal, following iron and aluminium. The main applications of magnesium are, in order: aluminium alloys, die-casting (alloyed with zinc), removing sulfur in the production of iron and steel, and the production of titanium in the Kroll process.
Magnesium is used in lightweight materials and alloys. For example, when infused with silicon carbide nanoparticles, it has extremely high specific strength.
Historically, magnesium was one of the main aerospace construction metals and was used for German military aircraft as early as World War I and extensively for German aircraft in World War II. The Germans coined the name "Elektron" for magnesium alloy, a term which is still used today. In the commercial aerospace industry, magnesium was generally restricted to engine-related components, due to fire and corrosion hazards. Magnesium alloy use in aerospace is increasing in the 21st century, driven by the importance of fuel economy. Magnesium alloys can act as replacements for aluminium and steel alloys in structural applications.
Aircraft
Wright Aeronautical used a magnesium crankcase in the WWII-era Wright R-3350 Duplex Cyclone aviation engine. This presented a serious problem for the earliest models of the Boeing B-29 Superfortress heavy bomber when an in-flight engine fire ignited the engine crankcase. The resulting combustion was as hot as 5,600 °F (3,100 °C) and could sever the wing spar from the fuselage.
Automotive
Mercedes-Benz used the alloy Elektron in the bodywork of an early model Mercedes-Benz 300 SLR; these cars competed in the 1955 World Sportscar Championship including a win at the Mille Miglia, and at Le Mans where one was involved in the 1955 Le Mans disaster when spectators were showered with burning fragments of elektron.
Porsche used magnesium alloy frames in the 917/053 that won Le Mans in 1971, and continues to use magnesium alloys for its engine blocks due to the weight advantage.
Volkswagen Group has used magnesium in its engine components for many years.
Mitsubishi Motors uses magnesium for its paddle shifters.
BMW used magnesium alloy blocks in their N52 engine, including an aluminium alloy insert for the cylinder walls and cooling jackets surrounded by a high-temperature magnesium alloy AJ62A. The engine was used worldwide between 2005 and 2011 in various 1, 3, 5, 6, and 7 series models; as well as the Z4, X1, X3, and X5.
Chevrolet used the magnesium alloy AE44 in the 2006 Corvette Z06.
Both AJ62A and AE44 are recent developments in high-temperature low-creep magnesium alloys. The general strategy for such alloys is to form intermetallic precipitates at the grain boundaries, for example by adding mischmetal or calcium.
Electronics
Because of its low density and good mechanical and electrical properties, magnesium is used in the manufacture of mobile phones, laptop and tablet computers, cameras, and other electronic components. Because of its light weight, it was used as a premium feature in some 2020 laptops.
Source of light
When burning in air, magnesium produces a brilliant white light that includes strong ultraviolet wavelengths. Magnesium powder (flash powder) was used for subject illumination in the early days of photography. Later, magnesium filament was used in electrically ignited single-use photography flashbulbs. Magnesium powder is used in fireworks and marine flares where a brilliant white light is required. It was also used for various theatrical effects, such as lightning, pistol flashes, and supernatural appearances.
Magnesium is flammable, burning at a temperature of approximately 3,100 °C (5,610 °F), and the autoignition temperature of magnesium ribbon is approximately 473 °C (883 °F). Magnesium's high combustion temperature makes it a useful tool for starting emergency fires. Other uses include flash photography, flares, pyrotechnics, fireworks sparklers, and trick birthday candles. Magnesium is also often used to ignite thermite or other materials that require a high ignition temperature. Magnesium continues to be used as an incendiary element in warfare.
Flame temperatures of magnesium and magnesium alloys can reach 3,100 °C (5,610 °F), although flame height above the burning metal is usually less than 300 mm (12 in). Once ignited, such fires are difficult to extinguish because they resist several substances commonly used to put out fires; combustion continues in nitrogen (forming magnesium nitride), in carbon dioxide (forming magnesium oxide and carbon), and in water (forming magnesium oxide and hydrogen, which also combusts due to heat in the presence of additional oxygen). This property was used in incendiary weapons during the firebombing of cities in World War II, where the only practical civil defense was to smother a burning flare under dry sand to exclude atmosphere from the combustion.
Chemical reagent
In the form of turnings or ribbons, magnesium is used to prepare Grignard reagents, which are useful in organic synthesis.
Other
As an additive agent in conventional propellants and the production of nodular graphite in cast iron.
As a reducing agent to separate uranium and other metals from their salts.
As a sacrificial (galvanic) anode to protect boats, underground tanks, pipelines, buried structures, and water heaters.
Alloyed with zinc to produce the zinc sheet used in photoengraving plates in the printing industry, dry-cell battery walls, and roofing.
Alloyed with aluminium to make aluminium–magnesium alloys, used mainly for beverage cans and for sports equipment such as golf clubs, fishing reels, and archery bows and arrows.
Many car and aircraft manufacturers have made engine and body parts from magnesium.
Magnesium batteries have been commercialized as primary batteries, and are an active topic of research for rechargeable batteries.
Compounds
Magnesium compounds, primarily magnesium oxide (MgO), are used as a refractory material in furnace linings for producing iron, steel, nonferrous metals, glass, and cement. Magnesium oxide and other magnesium compounds are also used in the agricultural, chemical, and construction industries. Magnesium oxide from calcination is used as an electrical insulator in fire-resistant cables.
Magnesium reacts with haloalkanes to give Grignard reagents, which are used for a wide variety of organic reactions forming carbon–carbon bonds.
Magnesium salts are included in various foods, fertilizers (magnesium is a component of chlorophyll), and microbe culture media.
Magnesium sulfite is used in the manufacture of paper (sulfite process).
Magnesium phosphate is used to fireproof wood used in construction.
Magnesium hexafluorosilicate is used for moth-proofing textiles.
Biological roles
Mechanism of action
The important interaction between phosphate and magnesium ions makes magnesium essential to the basic nucleic acid chemistry of all cells of all known living organisms. More than 300 enzymes require magnesium ions for their catalytic action, including all enzymes using or synthesizing ATP and those that use other nucleotides to synthesize DNA and RNA. The ATP molecule is normally found in a chelate with a magnesium ion.
Nutrition
Diet
Spices, nuts, cereals, cocoa and vegetables are good sources of magnesium. Green leafy vegetables such as spinach are also rich in magnesium.
Dietary recommendations
In the UK, the recommended daily values for magnesium are 300 mg for men and 270 mg for women. In the U.S. the Recommended Dietary Allowances (RDAs) are 400 mg for men ages 19–30 and 420 mg for older; for women 310 mg for ages 19–30 and 320 mg for older.
Supplementation
Numerous pharmaceutical preparations of magnesium and dietary supplements are available. In two human trials, magnesium oxide, one of the most common forms in magnesium dietary supplements because of its high magnesium content per weight, was less bioavailable than magnesium citrate, chloride, lactate, or aspartate.
Metabolism
An adult body has 22–26 grams of magnesium, with 60% in the skeleton, 39% intracellular (20% in skeletal muscle), and 1% extracellular. Serum levels are typically 0.7–1.0 mmol/L (1.8–2.4 mg/dL). Serum magnesium levels may be normal even when intracellular magnesium is deficient. The mechanisms for maintaining the magnesium level in the serum are varying gastrointestinal absorption and renal excretion. Intracellular magnesium is correlated with intracellular potassium. Increased magnesium lowers calcium and can either prevent hypercalcemia or cause hypocalcemia depending on the initial level. Both low and high protein intake inhibit magnesium absorption, as does the amount of phosphate, phytate, and fat in the gut. Unabsorbed dietary magnesium is excreted in feces; absorbed magnesium is excreted in urine and sweat.
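The two serum units are related through magnesium's molar mass (24.305 g/mol); a small conversion helper makes the equivalence of the quoted ranges explicit (a minimal sketch; the reference ranges are the ones given above):

# Serum-magnesium unit conversion between mmol/L and mg/dL.
M_MG = 24.305  # g/mol, magnesium

def mmol_l_to_mg_dl(mmol_per_l: float) -> float:
    """mg/L = mmol/L * molar mass; dividing by 10 converts mg/L to mg/dL."""
    return mmol_per_l * M_MG / 10.0

for c in (0.7, 1.0):
    print(f"{c} mmol/L = {mmol_l_to_mg_dl(c):.1f} mg/dL")  # ≈ 1.7 and 2.4 mg/dL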
Detection in serum and plasma
Magnesium status may be assessed by measuring serum and erythrocyte magnesium concentrations coupled with urinary and fecal magnesium content, but intravenous magnesium loading tests are more accurate and practical. A retention of 20% or more of the injected amount indicates deficiency. As of 2004, no biomarker has been established for magnesium.
Magnesium concentrations in plasma or serum may be monitored for efficacy and safety in those receiving the drug therapeutically, to confirm the diagnosis in potential poisoning victims, or to assist in the forensic investigation in a case of fatal overdose. The newborn children of mothers who received parenteral magnesium sulfate during labor may exhibit toxicity with normal serum magnesium levels.
Deficiency
Low plasma magnesium (hypomagnesemia) is common: it is found in 2.5–15% of the general population. From 2005 to 2006, 48 percent of the United States population consumed less magnesium than recommended in the Dietary Reference Intake. Other causes are increased renal or gastrointestinal loss, an increased intracellular shift, and proton-pump inhibitor antacid therapy. Most are asymptomatic, but symptoms referable to neuromuscular, cardiovascular, and metabolic dysfunction may occur. Alcoholism is often associated with magnesium deficiency. Chronically low serum magnesium levels are associated with metabolic syndrome, diabetes mellitus type 2, fasciculation, and hypertension.
Therapy
Intravenous magnesium is recommended by the ACC/AHA/ESC 2006 Guidelines for Management of Patients With Ventricular Arrhythmias and the Prevention of Sudden Cardiac Death for patients with ventricular arrhythmia associated with torsades de pointes who present with long QT syndrome; and for the treatment of patients with digoxin induced arrhythmias.
Intravenous magnesium sulfate is used for the management of pre-eclampsia and eclampsia.
Hypomagnesemia, including that caused by alcoholism, is reversible by oral or parenteral magnesium administration depending on the degree of deficiency.
There is limited evidence that magnesium supplementation may play a role in the prevention and treatment of migraine.
Sorted by type of magnesium salt, other therapeutic applications include:
Magnesium sulfate, as the heptahydrate called Epsom salts, is used as bath salts, a laxative, and a highly soluble fertilizer.
Magnesium hydroxide, suspended in water, is used in milk of magnesia antacids and laxatives.
Magnesium chloride, oxide, gluconate, malate, orotate, glycinate, ascorbate and citrate are all used as oral magnesium supplements.
Magnesium borate, magnesium salicylate, and magnesium sulfate are used as antiseptics.
Magnesium bromide is used as a mild sedative (this action is due to the bromide, not the magnesium).
Magnesium stearate is a slightly flammable white powder with lubricating properties. In pharmaceutical technology, it is used in pharmacological manufacture to prevent tablets from sticking to the equipment while compressing the ingredients into tablet form.
Magnesium carbonate powder is used by athletes such as gymnasts, weightlifters, and climbers to eliminate palm sweat, prevent sticking, and improve the grip on gymnastic apparatus, lifting bars, and climbing rocks.
Overdose
Overdose from dietary sources alone is unlikely because excess magnesium in the blood is promptly filtered by the kidneys, and overdose is more likely in the presence of impaired renal function. Overdose is possible, however, with excessive intake of supplements: megadose therapy has caused death in a young child, and severe hypermagnesemia in a woman and a young girl who had healthy kidneys. The most common symptoms of overdose are nausea, vomiting, and diarrhea; other symptoms include hypotension, confusion, slowed heart and respiratory rates, deficiencies of other minerals, coma, cardiac arrhythmia, and death from cardiac arrest.
Function in plants
Plants require magnesium to synthesize chlorophyll, essential for photosynthesis. Magnesium in the center of the porphyrin ring in chlorophyll functions in a manner similar to the iron in the center of the porphyrin ring in heme. Magnesium deficiency in plants causes late-season yellowing between leaf veins, especially in older leaves, and can be corrected by applying either Epsom salts (which are rapidly leached) or crushed dolomitic limestone to the soil.
Safety precautions
Magnesium metal and its alloys can be explosive hazards; they are highly flammable in their pure form when molten or in powder or ribbon form. Burning or molten magnesium reacts violently with water. When working with powdered magnesium, safety glasses with eye protection and UV filters (such as welders use) are employed because burning magnesium produces ultraviolet light that can permanently damage the retina of a human eye.
Magnesium is capable of reducing water and releasing highly flammable hydrogen gas:
Mg(s) + 2 H₂O(l) → Mg(OH)₂(s) + H₂(g)
Therefore, water cannot extinguish magnesium fires. The hydrogen gas produced intensifies the fire. Dry sand is an effective smothering agent, but only on relatively level and flat surfaces.
Magnesium reacts with carbon dioxide exothermically to form magnesium oxide and carbon:
2 Mg(s) + CO₂(g) → 2 MgO(s) + C(s)
Hence, carbon dioxide fuels rather than extinguishes magnesium fires.
Burning magnesium can be quenched by using a Class D dry chemical fire extinguisher, or by covering the fire with sand or magnesium foundry flux to remove its air source.
See also
List of countries by magnesium production
Magnesium oil
Notes
References
Cited sources
External links
Magnesium at The Periodic Table of Videos (University of Nottingham)
Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Magnesium
Chemical elements
Alkaline earth metals
Dietary minerals
Food additives
Pyrotechnic fuels
Reducing agents
Chemical elements with hexagonal close-packed structure | Magnesium | Physics,Chemistry | 6,447 |
8,209,052 | https://en.wikipedia.org/wiki/Weissenberg%20effect | In fluid dynamics, the Weissenberg effect is a phenomenon that occurs when a spinning rod is inserted into a solution of elastic liquid. Instead of being thrown outward, the solution is drawn towards the rod and rises up around it. This is a direct consequence of the normal stress that acts like a hoop stress around the rod. The effect is a common example of non-Newtonian fluid dynamics and has been shown to occur for polystyrene solutions.
The effect is named after Karl Weissenberg who published about it in 1947.
References
External links
The Isolation of, and the Initial Measurements of the Weissenberg Effect
Viscosity
Rheology | Weissenberg effect | Physics,Chemistry | 129 |
172,552 | https://en.wikipedia.org/wiki/Stunnel | Stunnel is an open-source multi-platform application used to provide a universal TLS/SSL tunneling service.
Stunnel is used to provide secure encrypted connections for clients or servers that do not speak TLS or SSL natively. It runs on a variety of operating systems, including most Unix-like operating systems and Windows. Stunnel relies on the OpenSSL library to implement the underlying TLS or SSL protocol.
Stunnel uses public-key cryptography with X.509 digital certificates to secure the SSL connection, and clients can optionally be authenticated via a certificate.
If linked against libwrap, it can be configured to act as a proxy–firewall service as well.
Stunnel is maintained by Polish programmer Michał Trojnara and released under the terms of the GNU General Public License (GPL) with OpenSSL exception.
Example
Stunnel can be used, for example, to provide a secure SSL connection to an existing non-SSL-aware SMTP mail server. Assuming the SMTP server expects TCP connections on port 25, stunnel would be configured to map the SSL port 465 to the non-SSL port 25. A mail client connects via SSL to port 465. Network traffic from the client initially passes over SSL to the stunnel application, which transparently encrypts and decrypts traffic and forwards unsecured traffic to port 25 locally. The mail server sees a non-SSL mail client.
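From the client's point of view, the wrapped port behaves like any ordinary TLS service. A minimal Python sketch of the client side (the hostname is a placeholder, and the stunnel instance is assumed to be configured with the 465→25 mapping described above):

# Client-side sketch: talk to the stunnel-wrapped SMTP service on port 465.
import smtplib
import ssl

context = ssl.create_default_context()  # verifies the certificate stunnel presents

# "mail.example.com" is a placeholder for the host running stunnel.
with smtplib.SMTP_SSL("mail.example.com", 465, context=context) as server:
    server.noop()  # the plain SMTP server behind stunnel responds as usual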
The stunnel process could be running on the same or a different server from the unsecured mail application; however, both machines would typically be behind a firewall on a secure internal network (so that an intruder could not make its own unsecured connection directly to port 25).
See also
Tunneling protocol
References
External links
Cryptographic software
Free security software
Unix network-related software
Transport Layer Security implementation
Tunneling protocols
Network protocols | Stunnel | Mathematics,Engineering | 406 |