id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
9,578,329 | https://en.wikipedia.org/wiki/V1280%20Scorpii | V1280 Scorpii (or Nova Scorpii 2007 Number 1) is the first of two novae that occurred in the constellation Scorpius during February 2007 (the second nova was the fainter V1281 Scorpii, which was discovered on 19 February 2007). Announced by the IAU in Electronic Telegram No. 835 and Circular No. 8803, the nova's magnitude was 9.6 when it was discovered on CCD images taken at 20:42 UT on 4 February 2007 by Yuji Nakamura of Kameyama, Mie, Japan. It was independently discovered on the same night at 20:30 UT by Yukio Sakurai of Mito, Ibaraki, Japan. It peaked at magnitude 3.79 on February 17, making it easily visible to the naked eye. V1280 Scorpii is two degrees south of M62.
The early period after V1280 Scorpii's eruption was observed in great detail by the Solar Mass Ejection Imager (SMEI) instrument on the Coriolis satellite. This satellite obtained a brightness value for the nova every 102 minutes. The rise to peak brightness was exceptionally slow. The SMEI light curve shows three well-defined maxima for the nova, occurring around 03:00 UT on 16 February, 08:30 UT on 17 February and 05:00 UT on 19 February 2007. The nova declined slowly from peak brightness until the end of February 2007, at which time it began fading rapidly as dust formed in the ejected material. At the same time that dust formation was causing the visual light curve to plummet, the infrared brightness increased. The formation of V1280 Scorpii's dusty structure was observed in the near (K-band) and mid (N-band) infrared by the VLT interferometer, and the interferometer was able to measure the expansion of the dust shell. These were the first such observations for any nova.
On day 100, another brightening was observed, which corresponded to a second mass loss event. The expanding dust shell around the nova has an estimated velocity of 350 km/s. The mass of the nova's white dwarf has been estimated to be 0.6 solar masses, based on the SMEI light curve.
The morphology of V1280 Scorpii's nova remnant is complex, but overall bipolar in shape; on either side of the nova are outflows emitting forbidden lines of oxygen (O III) and nitrogen (N II). Surrounding the center is an equatorial torus of dust, which variably blocks light.
References
External links
AAVSO announcement
AAVSO quick-look data
Map at SkyTonight
Photo of the nova
Novae
Scorpius
20070206
Scorpii, V1280 | V1280 Scorpii | [
"Astronomy"
] | 579 | [
"Novae",
"Astronomical events",
"Scorpius",
"Constellations"
] |
9,578,494 | https://en.wikipedia.org/wiki/High-speed%20flight | In high-speed flight, the assumptions of incompressibility of the air used in low-speed aerodynamics no longer apply. In subsonic aerodynamics, the theory of lift is based upon the forces generated on a body and a moving gas (air) in which it is immersed. At airspeeds below about , air can be considered incompressible in regards to an aircraft, in that, at a fixed altitude, its density remains nearly constant while its pressure varies. Under this assumption, air acts the same as water and is classified as a fluid.
Subsonic aerodynamic theory also assumes the effects of viscosity (the property of a fluid that tends to prevent motion of one part of the fluid with respect to another) are negligible, and classifies air as an ideal fluid, conforming to the principles of ideal-fluid aerodynamics such as continuity, Bernoulli's principle, and circulation. In reality, air is compressible and viscous. While the effects of these properties are negligible at low speeds, compressibility effects in particular become increasingly important as airspeed increases. Compressibility (and to a lesser extent viscosity) is of paramount importance at speeds approaching the speed of sound. In these transonic speed ranges, compressibility causes a change in the density of the air around an airplane.
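A standard order-of-magnitude estimate (a textbook result, included here for illustration) makes the incompressibility threshold concrete: for low-speed isentropic flow, the fractional density change produced by bringing the flow to rest scales with the square of the Mach number,

$$\frac{\Delta\rho}{\rho} \approx \frac{M^2}{2}.$$

At M = 0.3 this gives a density change of roughly 4.5 percent, small enough to neglect, which is why air below about this speed is treated as incompressible; at M = 0.8 the same estimate gives about 32 percent, and compressibility can no longer be ignored.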
During flight, a wing produces lift by accelerating the airflow over the upper surface. This accelerated air can, and does, reach supersonic speeds, even though the airplane itself may be flying at a subsonic airspeed (Mach number < 1.0). At some extreme angles of attack, in some airplanes, the speed of the air over the top surface of the wing may be double the airplane's airspeed. It is, therefore, entirely possible to have both supersonic and subsonic airflows on an airplane at the same time. When flow velocities reach sonic speeds at some locations on an airplane (such as the area of maximum camber on the wing), further acceleration will result in the onset of compressibility effects such as shock wave formation, drag increase, buffeting, and stability and control difficulties. Subsonic flow principles are invalid at all speeds above this point.
See also
Coffin corner (aerodynamics)
Critical Mach number
Drag divergence Mach number
References
Sources
Airspeed | High-speed flight | [
"Physics"
] | 482 | [
"Wikipedia categories named after physical quantities",
"Airspeed",
"Physical quantities"
] |
9,578,853 | https://en.wikipedia.org/wiki/Aralkylamine%20N-acetyltransferase | Aralkylamine N-acetyltransferase (AANAT) (), also known as arylalkylamine N-acetyltransferase or serotonin N-acetyltransferase (SNAT), is an enzyme that is involved in the day/night rhythmic production of melatonin, by modification of serotonin. It is in humans encoded by the ~2.5 kb AANAT gene containing four exons, located on chromosome 17q25. The gene is translated into a 23 kDa large enzyme. It is well conserved through evolution and the human form of the protein is 80 percent identical to sheep and rat AANAT. It is an acetyl-CoA-dependent enzyme of the GCN5-related family of N-acetyltransferases (GNATs). It may contribute to multifactorial genetic diseases such as altered behavior in sleep/wake cycle and research is on-going with the aim of developing drugs that regulate AANAT function.
Nomenclature
The systematic name of this enzyme class is acetyl-CoA:2-arylethylamine N-acetyltransferase. Other names in common use include:
AANAT
Arylalkylamine N-acetyltransferase
Melatonin rhythm enzyme
Serotonin acetylase
Serotonin acetyltransferase
Serotonin N-acetyltransferase
The officially accepted name is aralkylamine N-acetyltransferase.
Function and mechanism
Tissue distribution
The AANAT mRNA transcript is mainly expressed in the central nervous system (CNS). It is detectable at low levels in several brain regions, including the pituitary gland, as well as in the retina. It is most abundant in the pineal gland, which is the site of melatonin synthesis. Brain and pituitary AANAT may be involved in the modulation of serotonin-dependent aspects of human behavior and pituitary function.
Physiological function
In the pinealocyte cells of the pineal gland, aralkylamine N-acetyltransferase is involved in the conversion of serotonin to melatonin. It is the penultimate enzyme in melatonin synthesis, controlling the night/day rhythm of melatonin production in the vertebrate pineal gland. Melatonin is essential for seasonal reproduction, modulates the function of the circadian clock in the suprachiasmatic nucleus, and influences activity and sleep. Due to its important role in circadian rhythm, AANAT is subject to extensive regulation that is responsive to light exposure (see Regulation). It may contribute to multifactorial genetic diseases such as altered behavior in the sleep/wake cycle and mood disorders.
The chemical reactions catalyzed by AANAT
The primary chemical reaction that is catalyzed by aralkylamine N-acetyltransferase uses two substrates, acetyl-CoA and serotonin. AANAT catalyzes the transfer of the acetyl group of acetyl-CoA to the primary amine of serotonin, thereby producing CoA and N-acetylserotonin. In humans, other endogenous substrates of the enzyme include specific trace amine neuromodulators, namely phenethylamine, tyramine, and tryptamine, in turn forming N-acetylphenethylamine, N-acetyltyramine, and N-acetyltryptamine.
In the biosynthesis of melatonin, N-acetylserotonin is further methylated by another enzyme, N-acetylserotonin O-methyltransferase (ASMT), to generate melatonin. The N-acetyltransferase reaction has been suggested to be the rate-determining step, and thus serotonin N-acetyltransferase has emerged as a target for inhibitor design (see below).
AANAT obeys an ordered ternary-complex mechanism. The substrates bind sequentially (in an ordered fashion), with acetyl-CoA binding to the free enzyme followed by the binding of serotonin to form the ternary complex. After the transfer of the acetyl group has occurred, the products are released in order, with N-acetylserotonin first and CoA last.
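The ordered mechanism just described can be summarized schematically; this sketch simply restates the binding and release order given above (E = enzyme, AcCoA = acetyl-CoA, 5-HT = serotonin, NAS = N-acetylserotonin):

$$E + \text{AcCoA} \rightleftharpoons E{\cdot}\text{AcCoA} \xrightarrow{+\,\text{5-HT}} E{\cdot}\text{AcCoA}{\cdot}\text{5-HT} \longrightarrow E{\cdot}\text{CoA}{\cdot}\text{NAS} \xrightarrow{-\,\text{NAS}} E{\cdot}\text{CoA} \xrightarrow{-\,\text{CoA}} E$$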
Structure
Aralkylamine N-acetyltransferase is a monomeric polypeptide with a length of 207 amino acid residues and a molecular weight of 23,344 daltons. The secondary structure consists of alpha helices and beta sheets. It is 28 percent helical (10 helices; 60 residues) and 23 percent beta sheet (9 strands; 48 residues). This family shares four conserved sequence motifs, designated A–D. Motif B serves as the location of the serotonin binding slot. The structure was determined by X-ray diffraction.
Several structures have been solved for this class of enzymes and deposited in the Protein Data Bank.
Aralkylamine N-acetyltransferase has also been crystallized in complex with 14-3-3ζ from the 14-3-3 protein family.
The GNAT superfamily
Aralkylamine N-acetyltransferase belongs to the GCN5-related N-acetyltransferase (GNAT) superfamily, which consists of 10,000 acetyltransferases, so named because of their sequence homology to a class of eukaryotic transcription factors, including the yeast GCN5. Other well-studied members of the superfamily are glucosamine-6-phosphate N-acetyltransferase and histone acetyltransferases.
All members of this superfamily have a structurally conserved fold consisting of an N-terminal strand followed by two helices, three antiparallel β-strands, a "signature" central helix, a fifth β-strand, a fourth α-helix, and a final β-strand. These elements are nearly universally conserved in spite of poor pairwise identity in sequence alignments.
Regulation
Regulation of AANAT varies between species. In some, AANAT levels oscillate dramatically between light and dark periods, and thus control melatonin synthesis. In others, rhythm is regulated primarily on the protein level. One example is in rodents, where AANAT mRNA levels increase more than 100-fold in dark periods. In other species, cyclic AMP plays an important part in inhibition of proteolytic degradation of AANAT, elevating protein levels at night. Experiments using human AANAT expressed in a 1E7 cell line show an ~8-fold increase in enzyme activity upon exposure to forskolin.
Dynamic degradation of AANAT mRNA has proven essential to the circadian action of the enzyme. The 3’UTR sequences are important for the rhythmic degradation of AANAT mRNA in some species. In rodents, various hnRNPs maintain dynamic degradation of AANAT mRNA. In other species, such as ungulates and primates, the stable AANAT mRNAs with a shorter 3’UTR are suspected not to be under the control of the hnRNPs that bind and direct degradation of AANAT mRNA in rodents.
Exposure to light induces signals to travel from retinal cells, ultimately causing a drop in norepinephrine stimulation of the pineal gland. This, in turn, leads to a signaling cascade, resulting in Protein Kinase A phosphorylation of two key Ser and Thr residues of serotonin N-acetyltransferase. Phosphorylation of these residues causes changes in catalytic activity through recruitment and interaction with 14-3-3 proteins, specifically 14-3-3ζ.
Another protein which interacts and regulates AANAT activity is protein kinase C. Protein kinase C acts, like protein kinase A, on threonine and serine residues, enhancing the stability and enzymatic activity of AANAT.
Inhibition of acetyl-CoA binding to the catalytic site through the formation and cleavage of intramolecular disulfide bonds has been suggested to be a mechanism of regulation. Formation of a disulfide bond between two cysteine residues within the protein closes the hydrophobic funnel of the catalytic site, and thus acts as an on/off switch for catalytic activity. It is not yet certain whether this mechanism operates in cells in vivo through the regulation of intracellular redox conditions, but it has been suggested that glutathione (GSH) could be an in vivo regulator of the formation and cleavage of these disulfide bonds.
AANAT inhibitors and clinical relevance
Inhibitors of AANAT may eventually lead to the development of a drug that would be useful in circadian biology research and in the treatment of sleep and mood disorders. Synthetic inhibitors of the enzyme have been discovered; however, no AANAT inhibitor with potent in vivo activity has been reported. To date, five classes of AANAT inhibitors have been described in the literature:
Melatonin derivatives
Since it was reported that melatonin is a competitive inhibitor of AANAT, this neurotransmitter seems to exert an autoregulatory control on its own biosynthesis. Thus, loose structural analogues of the indolamine hormone were evaluated on AANAT, and moderate inhibitors were discovered.
Peptidic inhibitors
Peptide combinatorial libraries of tri-, tetra-, and pentapeptides with various amino acid compositions were screened as potential sources of inhibitors, to see whether they serve as pure or mixed competitive inhibitors of the hAANAT enzyme. Molecular modeling and structure-activity relationship studies made it possible to pinpoint the amino acid residue of the pentapeptide inhibitor S 34461 that interacts with the cosubstrate-binding site.
Bisubstrate analogs
It is suggested that AANAT catalyzes the transfer of an acetyl group from acetyl-CoA to serotonin, with the involvement of an intermediate ternary complex, to produce N-acetylserotonin. Based on this mechanism, it might be expected that a bisubstrate analog inhibitor, derived from the tethering of indole and CoASH parts, could potentially mimic the ternary complex and exert strong inhibition of AANAT. The first bisubstrate analog (1), which links tryptamine and CoA via an acetyl bridge, was synthesized by Khalil and Cole, and shown to be a very potent and specific AANAT inhibitor.
N-Haloacetylated derivatives
AANAT has been shown to have a secondary alkyltransferase activity in addition to its acetyltransferase activity. N-Haloacetyltryptamines were developed that serve as substrates of the AANAT alkyltransferase activity and are also potent (low micromolar) in vitro inhibitors of AANAT acetyltransferase activity. AANAT catalyzes a reaction between N-bromoacetyltryptamine (BAT) and reduced CoA, resulting in a tight-binding bisubstrate analog inhibitor. N-Bromoacetyltryptamine, the first synthesized cell-permeable inhibitor of AANAT, was studied further on melatonin secretion from rat and pig pineal glands. New N-halogenoacetyl derivatives lead to strong in situ inhibition of AANAT. The concept behind the mechanism of action of these precursors was studied by following the biosynthesis of the inhibitor from tritiated BAT in a living cell.
Rhodanine-based compounds
The first druglike and selective inhibitors of AANAT have been identified. Lawrence M. Szewczuk et al. virtually screened more than a million compounds by 3D high-throughput docking into the active site of the X-ray structure of AANAT, and then tested 241 compounds as inhibitors. One compound class containing a rhodanine scaffold showed low micromolar competitive inhibition against acetyl-CoA and proved to be effective in blocking melatonin production in pineal cells.
A more recent study of AANAT inhibitors described the discovery of a new class of nonpeptidic AANAT inhibitors based on a 2,2′-bithienyl scaffold.
See also
Acetyltransferase
References
Further reading
External links
EC 2.3.1
Enzymes of known structure
Circadian rhythm | Aralkylamine N-acetyltransferase | [
"Biology"
] | 2,621 | [
"Behavior",
"Sleep",
"Circadian rhythm"
] |
9,579,143 | https://en.wikipedia.org/wiki/Aromatic%20amino%20acid | An aromatic amino acid is an amino acid that includes an aromatic ring.
Among the 20 standard amino acids, histidine, phenylalanine, tryptophan, and tyrosine are classified as aromatic.
Properties and function
Optical properties
Aromatic amino acids, excepting histidine, absorb ultraviolet light at wavelengths above 250 nm and fluoresce under these conditions. This characteristic is used in quantitative analysis, notably in determining the concentrations of these amino acids in solution. Most proteins absorb at 280 nm due to the presence of tyrosine and tryptophan. Of the aromatic amino acids, tryptophan has the highest extinction coefficient; its absorption maximum occurs at 280 nm. The absorption maximum of tyrosine occurs at 274 nm.
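Because A280 is dominated by tryptophan and tyrosine, the Beer–Lambert law (A = εcl) lets one estimate a protein's concentration from a single absorbance reading. The sketch below is illustrative only: it assumes the commonly cited approximate molar extinction coefficients for Trp (~5500 M⁻¹cm⁻¹) and Tyr (~1490 M⁻¹cm⁻¹) at 280 nm, and the example protein composition is hypothetical.

```python
# Estimate protein concentration from absorbance at 280 nm (Beer-Lambert law).
# Extinction coefficients are approximate literature values; treat as assumptions.

EPSILON_TRP = 5500.0  # M^-1 cm^-1, tryptophan at 280 nm (approximate)
EPSILON_TYR = 1490.0  # M^-1 cm^-1, tyrosine at 280 nm (approximate)

def molar_extinction_280(n_trp: int, n_tyr: int) -> float:
    """Additive estimate of a protein's molar extinction coefficient at 280 nm."""
    return n_trp * EPSILON_TRP + n_tyr * EPSILON_TYR

def concentration_molar(absorbance: float, n_trp: int, n_tyr: int,
                        path_cm: float = 1.0) -> float:
    """Concentration in mol/L from A280, assuming Beer-Lambert behavior."""
    return absorbance / (molar_extinction_280(n_trp, n_tyr) * path_cm)

# Example: a hypothetical protein with 2 Trp and 6 Tyr residues, A280 = 0.5
print(concentration_molar(0.5, n_trp=2, n_tyr=6))  # ~2.5e-5 M
```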
Role in protein structure and function
Aromatic amino acids stabilize the folded structures of many proteins. Aromatic residues are found predominantly sequestered within the cores of globular proteins, although they often form key portions of protein-protein or protein-ligand interaction interfaces on the protein surface.
Aromatic amino acids as precursors
Aromatic amino acids often serve as the precursors to important biochemicals.
Histidine is the precursor to histamine.
Tryptophan is the precursor to 5-hydroxytryptophan and then serotonin, tryptamine, auxin, kynurenines, and melatonin.
Tyrosine is the precursor to L-DOPA, dopamine, norepinephrine (noradrenaline), epinephrine (adrenaline), and the thyroid hormone thyroxine. It is also a precursor to octopamine and melanin in numerous organisms.
Phenylalanine is the precursor to tyrosine.
Biosynthesis
Shikimate pathway
In plants, the shikimate pathway first leads to the formation of chorismate, which is the precursor of phenylalanine, tyrosine, and tryptophan. These aromatic amino acids are the precursors of many secondary metabolites, all essential to a plant's biological functions, such as the hormones salicylate and auxin. This pathway contains enzymes that can be regulated by inhibitors, which can halt the production of chorismate, and ultimately the organism's biological functions. Herbicides and antibiotics work by inhibiting these enzymes involved in the biosynthesis of aromatic amino acids, which makes them toxic to plants. Glyphosate, a type of herbicide, is used to control the accumulation of excess greens. In addition to destroying greens, glyphosate can easily affect the maintenance of the gut microbiota in host organisms by specifically inhibiting 5-enolpyruvylshikimate-3-phosphate synthase, which prevents the biosynthesis of essential aromatic amino acids. Inhibition of this enzyme results in disorders such as gastrointestinal diseases and metabolic diseases.
Nutritional requirements
Animals obtain aromatic amino acids from their diet, but nearly all plants and some micro-organisms must synthesize their aromatic amino acids through the metabolically costly shikimate pathway in order to make them. Histidine, phenylalanine, and tryptophan are essential amino acids for animals. Since they are not synthesized in the human body, they must be derived from the diet. Tyrosine is semi-essential; it can be synthesized by the animal, but only from phenylalanine. Phenylketonuria, a genetic disorder that occurs as a result of the inability to break down phenylalanine, is due to a lack of the enzyme phenylalanine hydroxylase. A dietary lack of tryptophan can cause stunted skeletal development. Excessive intake of aromatic amino acids far beyond levels obtained through normal protein consumption might lead to hypertension, something which could go unnoticed for a long time in healthy individuals. It could also be caused by other factors, such as the use of various herbs and foods like chocolate which inhibit monoamine oxidase enzymes to varying degrees, as well as some medications. Aromatic trace amines like tyramine can displace norepinephrine from peripheral monoamine vesicles, and in people taking monoamine oxidase inhibitors (MAOIs) this occurs to the extent of being life-threatening. Blue diaper syndrome is an autosomal recessive disease that is caused by poor tryptophan absorption in the body.
See also
Aromatic L-amino acid decarboxylase
Expanded genetic code
Phenylketonuria
Tyrosine hydroxylase
Neurotransmitter
Notes
References
Further reading
External links
Amino acids
| Aromatic amino acid | [
"Chemistry"
] | 951 | [
"Amino acids",
"Biomolecules by chemical classification"
] |
9,579,270 | https://en.wikipedia.org/wiki/Biological%20systems%20engineering | Biological systems engineering or biosystems engineering is a broad-based engineering discipline with particular emphasis on non-medical biology. It can be thought of as a subset of the broader notion of biological engineering or bio-technology though not in the respects that pertain to biomedical engineering as biosystems engineering tends to focus less on medical applications than on agriculture, ecosystems, and food science. The discipline focuses broadly on environmentally sound and sustainable engineering solutions to meet societies' ecologically related needs. Biosystems engineering integrates the expertise of fundamental engineering fields with expertise from non-engineering disciplines.
Background and organization
Many college and university biological engineering departments have a history of being grounded in agricultural engineering and have only in the past two decades or so changed their names to reflect the movement towards more diverse biological based engineering programs. This major is sometimes called agricultural and biological engineering, biological and environmental engineering, etc., in different universities, generally reflecting interests of local employment opportunities.
Since biological engineering covers a wide spectrum, many departments now offer specialization options. Depending on the department and the specialization options offered within each program, curricula may overlap with other related fields. There are a number of different titles for BSE-related departments at various universities. The professional societies commonly associated with many Biological Engineering programs include the American Society of Agricultural and Biological Engineers (ASABE) and the Institute of Biological Engineering (IBE), which generally encompasses BSE. Some programs also participate in the Biomedical Engineering Society (BMES) and the American Institute of Chemical Engineers (AIChE).
A biological systems engineer has a background in what both environmental engineers and biologists do, thus bridging the gap between engineering and the (non-medical) biological sciences – although this is variable across academic institutions. For this reason, biological systems engineers are becoming integral parts of many environmental engineering firms, federal agencies, and biotechnology industries. A biological systems engineer will often address the solution to a problem from the perspective of employing living systems to enact change. For example, biological treatment methodologies can be applied to provide access to clean drinking water or for sequestration of carbon dioxide.
Specializations
Land and water resources engineering
Food engineering and bioprocess engineering
Machinery systems engineering
Natural resources and environmental engineering
Biomedical engineering
Academic programs in agricultural and biological systems engineering
Below is a listing of known academic programs that offer bachelor's degrees (B.S. or B.S.E.) in what ABET and/or ASABE terms "agricultural engineering", "biological systems engineering", "biological engineering", or similarly named programs. ABET accredits college and university programs in the disciplines of applied science, computing, engineering, and engineering technology. ASABE defines accredited programs within the scope of Ag/Bio Engineering.
North America
Central and South America
Europe
Asia
Africa
See also
Related engineering fields
Agricultural engineering
Aquaculture engineering
Biological engineering
Biomedical engineering
Civil engineering
Chemical engineering
Ecological engineering
Environmental engineering
Food engineering
Hydraulic engineering
Mechanical engineering
Sanitary engineering
Closely related sciences
Agriculture
Animal Science
Biology, Biochemistry, Microbiology
Chemistry
Ecology
Environmental science
Forestry
Horticulture
Hydrology
Plant Science
Soil science
References
Further reading
2003, Dennis R. Heldman (ed), Encyclopedia of agricultural, food, and biological engineering.
2002, Teruyuki Nagamune, Tai Hyun Park & Mark R. Marten (ed), Biological Systems Engineering, Washington, D.C. : American Chemical Society, 320 pages.
2012, Paige Brown Jarreau, What is Biological Engineering, http://www.scilogs.com/from_the_lab_bench/what-is-biological-engineering-ibe-2012/
External links
UC San Diego, Department of Bioengineering, UCSD BE part of University of California, San Diego
Biological engineering
Biological systems
Systems biology
Systems engineering | Biological systems engineering | [
"Engineering",
"Biology"
] | 762 | [
"Systems engineering",
"Biological engineering",
"nan",
"Systems biology"
] |
9,579,379 | https://en.wikipedia.org/wiki/Base%20%28geometry%29 | In geometry, a base is a side of a polygon or a face of a polyhedron, particularly one oriented perpendicular to the direction in which height is measured, or on what is considered to be the "bottom" of the figure. This term is commonly applied in plane geometry to triangles, parallelograms, trapezoids, and in solid geometry to cylinders, cones, pyramids, parallelepipeds, prisms, and frustums.
The side or point opposite the base is often called the apex or summit of the shape.
Of a triangle
In a triangle, any arbitrary side can be considered the base. The two endpoints of the base are called base vertices and the corresponding angles are called base angles. The third vertex opposite the base is called the apex.
The extended base of a triangle (a particular case of an extended side) is the line that contains the base. When the triangle is obtuse and the base is chosen to be one of the sides adjacent to the obtuse angle, then the altitude dropped perpendicularly from the apex to the base intersects the extended base outside of the triangle.
The area of a triangle is half of the product of the base and the height (the length of the altitude): $A = \tfrac{1}{2}bh$. For a triangle with sides $a$, $b$, $c$, if the three altitudes dropped to the respectively opposite sides are called $h_a$, $h_b$, $h_c$, the area is:

$$A = \tfrac{1}{2}a h_a = \tfrac{1}{2}b h_b = \tfrac{1}{2}c h_c$$
Given a fixed base side and a fixed area for a triangle, the locus of apex points is a straight line parallel to the base.
Of a trapezoid or parallelogram
Any of the sides of a parallelogram, or either (but typically the longer) of the parallel sides of a trapezoid can be considered its base. Sometimes the parallel opposite side is also called a base, or sometimes it is called a top, apex, or summit. The other two edges can be called the sides.
Role in area and volume calculation
Bases are commonly used (together with heights) to calculate the areas and volumes of figures. In speaking about these processes, the measure (length or area) of a figure's base is often referred to as its "base."
By this usage, the area of a parallelogram or the volume of a prism or cylinder can be calculated by multiplying its "base" by its height; likewise, the areas of triangles and the volumes of cones and pyramids are fractions of the products of their bases and heights. Some figures have two parallel bases (such as trapezoids and frustums), both of which are used to calculate the extent of the figures.
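In symbols (with $b$ a base length, $B$ a base area, and $h$ the corresponding height), the relationships described above are the standard formulas:

$$A_{\text{parallelogram}} = bh, \qquad A_{\text{triangle}} = \tfrac{1}{2}bh, \qquad A_{\text{trapezoid}} = \tfrac{1}{2}(b_1 + b_2)h$$

$$V_{\text{prism, cylinder}} = Bh, \qquad V_{\text{pyramid, cone}} = \tfrac{1}{3}Bh, \qquad V_{\text{frustum}} = \tfrac{h}{3}\left(B_1 + B_2 + \sqrt{B_1 B_2}\right)$$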
References
Parts of a triangle
Area
Volume | Base (geometry) | [
"Physics",
"Mathematics"
] | 527 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Size",
"Extensive quantities",
"Volume",
"Wikipedia categories named after physical quantities",
"Area"
] |
9,579,546 | https://en.wikipedia.org/wiki/Parity%20of%20esteem | In health care, establishing parity of esteem means assigning equal value to mental health care and to physical health care.
In many healthcare systems, parity of esteem is unrealized because of the pervasive stigma of mental illness. People with diagnosed mental illness die on average around 20 years earlier than those without such a diagnosis, some because of suicide, but mostly because of poorly treated physical illness. Mental illness has been assessed as constituting around a quarter of the disease burden in developed countries. There is a much bigger treatment gap for mental illness than for physical illness.
In the USA, legislation was enacted in 2006 which attempted to achieve equality in health insurance coverage between surgical treatment and mental health treatments.
In the UK, Norman Lamb campaigned for mental health to be given parity of esteem with physical health. The Royal College of Psychiatrists proposed that parity of esteem should be defined as "Valuing mental health equally with physical health". In practice most arguments have centred on levels of funding. Expenditure on mental health services provided by NHS trusts fell by around 8.25% between 2010 and 2015. According to Dr Phil Moore, chair of the Mental Health Commissioners Network at NHS Clinical Commissioners, discussions in 2016 had degenerated into a funding dispute. He wanted to see discussions about the degree to which mental health is embedded into other services, including the integration of psychological services with general practice.
It has also been raised as an issue when comparing pay and conditions in healthcare with social care, where pay is generally much lower.
References
Tom Hennessey and Robin Wilson, 1997, With All Due Respect: Pluralism and Parity of Esteem, Democratic Dialogue,
Mental health
Discrimination | Parity of esteem | [
"Biology"
] | 335 | [
"Behavior",
"Aggression",
"Discrimination"
] |
9,579,762 | https://en.wikipedia.org/wiki/Iris%20albicans | Iris albicans, also known as the cemetery iris, white cemetery iris, or the white flag iris, is a species of iris which was planted on graves in Muslim regions and grows in many countries throughout the Middle East and northern Africa. It was later introduced to Spain, and then other European countries. It is a natural hybrid.
It grows to 30–60 cm tall. The leaves are grey-green, and broadly sword-shaped. The inflorescence is fan-shaped and contains two or three fragrant flowers. The flowers are grey or silvery in bud, and are white or off-white and 8 cm wide in bloom. It is a sterile hybrid, and spreads by rhizomal growth and division, as it cannot produce seeds.
Iris albicans has been cultivated since ancient times and may be the oldest iris in cultivation. Collected by Lange in 1860, it has been in cultivation since at least 1400 BC. Originating from Yemen and Saudi Arabia, it appears in a wall painting of the Botanical Garden of Tuthmosis III in the Temple of Amun at Karnak in ancient Thebes dated around 1426 BC.
Iris albicans is included in the Tasmanian Fire Service's list of low flammability plants, indicating that it is suitable for growing within a building protection zone.
References
albicans
Flora of Saudi Arabia
Flora of Yemen
Garden plants of Asia
Hybrid plants
Plants described in 1861 | Iris albicans | [
"Biology"
] | 289 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
9,580,824 | https://en.wikipedia.org/wiki/Segment%20descriptor | In memory addressing for Intel x86 computer architectures, segment descriptors are a part of the segmentation unit, used for translating a logical address to a linear address. Segment descriptors describe the memory segment referred to in the logical address.
The segment descriptor (8 bytes long in 80286 and later) contains the following fields:
A segment base address
The segment limit which specifies the segment size
Access rights byte containing the protection mechanism information
Control bits
Structure
The x86 and x86-64 segment descriptor has the following form:
Where the fields stand for:
Base Address Starting memory address of the segment. Its length is 32 bits and it is assembled from three pieces: bits 16 to 31 of the lower doubleword (base bits 0–15), bits 0 to 7 of the upper doubleword (base bits 16–23), and bits 24 to 31 of the upper doubleword (base bits 24–31).
Segment Limit Its length is 20 bits and is created from the lower part bits 0 to 15 and the upper part bits 16 to 19. It defines the address of the last accessible data. The length is one more than the value stored here. How exactly this should be interpreted depends on the Granularity bit of the segment descriptor.
G=Granularity If clear, the limit is in units of bytes, with a maximum of 2^20 bytes. If set, the limit is in units of 4096-byte pages, for a maximum of 2^32 bytes.
D/B
D = Default operand size : If clear, this is a 16-bit code segment; if set, this is a 32-bit segment.
B = Big: If set, the maximum offset size for a data segment is increased to 32-bit 0xffffffff. Otherwise it's the 16-bit max 0x0000ffff. Essentially the same meaning as "D".
L=Long If set, this is a 64-bit segment (and D must be zero), and code in this segment uses the 64-bit instruction encoding. "L" cannot be set at the same time as "D" aka "B". (Bit 21 in the image)
AVL=Available For software use, not used by hardware (Bit 20 in the image with the label A)
P=Present If clear, a "segment not present" exception is generated on any reference to this segment
DPL=Descriptor privilege level Privilege level (ring) required to access this descriptor
S=System Segment If clear, this is a system segment, used to handle interrupts or store LDT segment descriptors. If set, this is a code/data segment.
Type If set, this is a code segment descriptor. If clear, this is a data/stack segment descriptor, which has "D" replaced by "B", "C" replaced by "E", and "R" replaced by "W". This is in fact a special case of the 2-bit type field, where the preceding bit 12 cleared to "0" refers to more internal system descriptors, for LDT, LSS, and gates.
C=Conforming Code in this segment may be called from less-privileged levels.
E=Expand-Down If clear, the segment expands from base address up to base+limit. If set, it expands from maximum offset down to limit, a behavior usually used for stacks.
R=Readable If clear, the segment may be executed but not read from.
W=Writable If clear, the data segment may be read but not written to.
A=Accessed This bit is set to 1 by hardware when the segment is accessed, and cleared by software.
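As a concrete illustration of the layout described above, the following sketch decodes the base, limit, and a few flag bits from a raw 8-byte descriptor. The bit positions follow the field descriptions in this section; the function name and example are illustrative, not taken from any particular operating system.

```python
# Decode an x86 segment descriptor's base, limit, and selected flag bits.
# Bit positions follow the field layout described in this section.

def decode_descriptor(raw: bytes):
    assert len(raw) == 8
    lo = int.from_bytes(raw[:4], "little")   # lower doubleword
    hi = int.from_bytes(raw[4:], "little")   # upper doubleword

    # Base: bits 16-31 of lo, bits 0-7 of hi, bits 24-31 of hi.
    base = ((lo >> 16) & 0xFFFF) | ((hi & 0xFF) << 16) | (hi & 0xFF000000)

    # Limit: bits 0-15 of lo, bits 16-19 of hi.
    limit = (lo & 0xFFFF) | (((hi >> 16) & 0xF) << 16)

    if (hi >> 23) & 1:                       # G bit: limit counted in 4 KiB pages
        limit = (limit << 12) | 0xFFF

    present = (hi >> 15) & 1                 # P bit
    dpl = (hi >> 13) & 3                     # descriptor privilege level
    return base, limit, present, dpl

# Example: a flat 4 GiB ring-0 code segment, as used by many 32-bit kernels.
base, limit, p, dpl = decode_descriptor(bytes.fromhex("ffff0000009acf00"))
print(hex(base), hex(limit), p, dpl)         # 0x0 0xffffffff 1 0
```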
See also
Burroughs large systems descriptors
Memory segment
Memory address
References
Further reading
External links
Intel 80386 Reference Programmer's Manual - Segment Translation
X86 architecture
Operating system kernels
Memory management | Segment descriptor | [
"Technology"
] | 772 | [
"Computing stubs",
"Computer hardware stubs"
] |
9,581,181 | https://en.wikipedia.org/wiki/NONOate | In chemistry, a NONOate is a compound having the chemical formula R1R2N−(NO−)−N=O, where R1 and R2 are alkyl groups. One example for this is 1,1-diethyl-2-hydroxy-2-nitrosohydrazine, or diethylamine dinitric oxide. These compounds are unusual in having three sequential nitrogen atoms: an amine functional group, a bridging NO− group, and a terminal nitrosyl group. In contact with water, these compounds release NO (nitric oxide).
pH-dependent decomposition of NONOates
Most NONOates are stable in alkaline solution above pH 8.0 (e.g., 10 mM NaOH) and can be stored at −20 °C in this way for the short term. To generate NO from NONOates, the pH is lowered accordingly. Typically, a dilution of the stock NONOate solution is made in a phosphate buffer (pH 7.4; tris buffers can also be used) and incubated at room temperature for the desired time to allow NO to accumulate in solution. This is often visible as bubbles at high NONOate concentrations. Incubation time is important, since the different NONOates have different half-lives (t½) in phosphate buffer at pH 7.4. For example, the half-life of MAHMA NONOate under these conditions is ~3.5 minutes, whilst the t½ of DPTA NONOate is 300 minutes. This is often useful in biological systems, where a combination of different NONOates can be used to give a sustained release of nitric oxide. At pH 5.0, most NONOates are considered to decompose almost instantaneously.
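Treating decomposition at a fixed pH as a first-order process (an assumption implicit in describing it by a half-life), the fraction of NONOate remaining after time $t$ is

$$\frac{[\text{NONOate}](t)}{[\text{NONOate}]_0} = 2^{-t/t_{1/2}}.$$

For example, after 10 minutes at pH 7.4, MAHMA NONOate (t½ ≈ 3.5 min) would be about 86% decomposed, while DPTA NONOate (t½ = 300 min) would be only about 2% decomposed, which illustrates why mixtures of NONOates can sustain NO release over long periods.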
References
Nitrosyl compounds
Functional groups | NONOate | [
"Chemistry"
] | 382 | [
"Functional groups"
] |
9,581,197 | https://en.wikipedia.org/wiki/Quark%E2%80%93lepton%20complementarity | The quark–lepton complementarity (QLC) is a possible fundamental symmetry between quarks and leptons. First proposed in 1990 by Foot and Lew, it assumes that leptons as well as quarks come in three "colors". Such theory may reproduce the Standard Model at low energies, and hence quark–lepton symmetry may be realized in nature.
Possible evidence for QLC
Recent neutrino experiments confirm that the Pontecorvo–Maki–Nakagawa–Sakata matrix contains large mixing angles. For example, atmospheric measurements of particle decay yield $\theta_{23} \approx 45°$, while solar experiments yield $\theta_{12} \approx 34°$. Compare these results with $\theta_{13} \approx 9°$, which is clearly smaller, and with the quark mixing angles in the Cabibbo–Kobayashi–Maskawa (CKM) matrix. The disparity that nature indicates between quark and lepton mixing angles has been viewed in terms of a "quark–lepton complementarity", which can be expressed in the relations

$$\theta_{12}^{\text{PMNS}} + \theta_{12}^{\text{CKM}} \approx 45°, \qquad \theta_{23}^{\text{PMNS}} + \theta_{23}^{\text{CKM}} \approx 45°$$
Possible consequences of QLC have been investigated in the literature and in particular a simple correspondence between the PMNS and CKM matrices has been proposed and analyzed in terms of a correlation matrix. The correlation matrix $V_{\text{M}}$ is roughly defined as the product of the CKM and PMNS matrices:

$$V_{\text{M}} = U_{\text{CKM}} \cdot U_{\text{PMNS}}$$

Since it is a product of unitary matrices, unitarity implies:

$$V_{\text{M}} V_{\text{M}}^{\dagger} = V_{\text{M}}^{\dagger} V_{\text{M}} = 1$$
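A minimal numerical sketch of this construction is shown below. It uses the standard three-angle parametrization for both matrices and neglects CP-violating phases, so all matrices are real and orthogonal; the angle values are rough illustrative inputs, not fitted results.

```python
# Sketch: build the QLC correlation matrix V_M = U_CKM . U_PMNS from
# mixing angles (CP phases neglected; angle values are rough illustrations).
import numpy as np

def mixing_matrix(theta12, theta23, theta13):
    """Standard parametrization U = R23(theta23) @ R13(theta13) @ R12(theta12)."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
    R13 = np.array([[c13, 0, s13], [0, 1, 0], [-s13, 0, c13]])
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
    return R23 @ R13 @ R12

deg = np.pi / 180
U_ckm = mixing_matrix(13 * deg, 2.4 * deg, 0.2 * deg)  # quark angles (approx.)
U_pmns = mixing_matrix(34 * deg, 45 * deg, 9 * deg)    # lepton angles (approx.)

V_M = U_ckm @ U_pmns
print(np.round(V_M, 3))
print(np.allclose(V_M @ V_M.T, np.eye(3)))  # True: product of orthogonal matrices
```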
Open questions
One may ask where the large lepton mixings come from, and whether this information is implicit in the form of the $V_{\text{M}}$ matrix. This question has been widely investigated in the literature, but its answer is still open. Furthermore, in some Grand Unification Theories (GUTs) the direct QLC correlation between the CKM and the PMNS mixing matrix can be obtained. In this class of models, the $V_{\text{M}}$ matrix is determined by the heavy Majorana neutrino mass matrix.
Despite the naïve relations between the PMNS and CKM angles, a detailed analysis shows that the correlation matrix is phenomenologically compatible with a tribimaximal pattern, and only marginally with a bimaximal pattern. It is possible to include bimaximal forms of the correlation matrix in models with renormalization effects that are relevant, however, only in particular cases with and with quasi-degenerate neutrino masses.
See also
Leptoquark
Footnotes
References
Leptons
Quarks
Standard Model | Quark–lepton complementarity | [
"Physics"
] | 478 | [
"Standard Model",
"Particle physics"
] |
9,583,109 | https://en.wikipedia.org/wiki/Dade-Collier%20Training%20and%20Transition%20Airport | Dade-Collier Training and Transition Airport, formerly the Everglades Jetport, is a public airport located within the Florida Everglades, 36 miles (58 km) west of the central business district of Miami, in Collier County, Florida, United States. It is owned by Miami-Dade County and operated by the Miami-Dade Aviation Department. The airport is on Tamiami Trail near the border between Dade and Collier counties in central South Florida.
History
Begun in 1968 as the Everglades Jetport (also known as Big Cypress Jetport or Big Cypress Swamp Jetport), the airport was planned to be the largest airport in the world, covering 39 square miles with six runways, and connected to both central Miami and the Gulf of Mexico by an expressway and monorail line. The airport would have been five times the size of JFK Airport in New York. At the time, the Boeing 2707 was under development and it was anticipated that supersonic aircraft would dominate long-haul air transportation. South Florida was viewed as an ideal location for an intercontinental SST hub due to the limitation that such aircraft would have to fly over water. Because of environmental concerns and the cancellation of the 2707 program, construction was halted in 1970 after the completion of just one 10,500' runway. The remaining land became the Big Cypress National Preserve.
Although the airport was left abandoned and unfinished, it was still retained by the local government as a general aviation airport and (to a greater extent) training airport. It was originally heavily used by Pan Am and Eastern Airlines as a training airport, as the long runway at Dade-Collier could accommodate aircraft as large as Boeing 747s, and was equipped with a relatively new instrument landing system, which allowed pilots to train for landing with low cloud ceilings and/or poor visibility. The isolation of the airport meant that it could be used for training flights 24/7 all days of the year without interfering with the traffic at Miami International. In more recent years, the advent of flight simulators has made such training flights less economical, and the airport is now used much less frequently, although it remains open to general aviation.
Facilities and aircraft
Dade-Collier Training and Transition Airport contains one asphalt paved runway (9/27) measuring 10,499 × 150 ft (3,200 × 46 m). For the year ending October 10, 2018, the airport had 14,468 general aviation aircraft operations, an average of 39 per day. As of 2015 the airport had an average of 12 landings and take-offs per day.
Other uses
High-speed automobile events have been held here because the runway is two miles long, allowing exotic cars to reach their top speeds.
Oil exploration on the site was considered in 2009, but not pursued due to resistance from conservation groups.
The Carlos Gimenez administration proposed holding a regular Miami air show at Dade-Collier, similar in concept to the Paris Air Show. Homestead Air Reserve Base had previously been considered, but the idea had been rejected by the US military.
See also
List of airports in Florida
References
External links
Exotic Car Runway Event
Photo gallery of runway racing by Exotics Rally
240mph breaks speed record video
Airports in Miami-Dade County, Florida
Big Cypress National Preserve
1968 establishments in Florida
Megaprojects | Dade-Collier Training and Transition Airport | [
"Engineering"
] | 668 | [
"Megaprojects"
] |
9,583,233 | https://en.wikipedia.org/wiki/Aya%20%28goddess%29 | Aya was a Mesopotamian goddess associated with dawn. Multiple variant names were attributed to her in god lists. She was regarded as the wife of Shamash, the sun god. She was worshiped alongside her husband in Sippar. Multiple royal inscriptions pertaining to this city mention her. She was also associated with the Nadītu community inhabiting it. She is less well attested in the other cult center of Shamash, Larsa, though she was venerated there as well. Additional attestations are available from Uruk, Mari and Assur. Aya was also incorporated into Hurrian religion, and in this context she appears as the wife of Shamash's counterpart Šimige.
Names
Aya's name was written in cuneiform as da-a. It is sometimes romanized as Aia instead. It is of Akkadian origin and means "dawn". Sporadically it could be prefixed with the sign NIN, with the variant form Nin-Aya attested in a dedicatory inscription of Manishtushu and in an offering list from Mari. NIN was a grammatically neutral title well attested as a part of theonyms, and in this context can be translated as "queen" or "mistress". It has been suggested that in Aya's case, it was used as a sumerogram representing the term "Lady". In Hurrian sources Aya was referred to as "Ayu-Ikalti". This form of the name was derived from the phrase Aya kallatu, "Aya the bride".
Multiple additional names of Aya are attested in god lists.
Sherida
Sherida (; dŠÈ.NIR-da, also dŠÈ.NIR, Šerida or Šerda) could function as a Sumerian equivalent of Aya's primary name. It has been suggested that it was a loanword derived from Akkadian šērtum, "morning". However, this proposal is not universally accepted.
The name Sherida is already attested in the Early Dynastic god lists from Fara and Abu Salabikh. Additionally, the theophoric name Ur-Sherida is known from Lagash and Ur. It has been noted that if the assumption that the name was an Akkadian loanword is accepted, she would be one of the earliest deities bearing names of Akkadian origin to be integrated into the pantheons of Sumerian-speaking areas. The name Sherida appears for the last time in cultic context in sources from Sippar and Larsa from the Old Babylonian period.
Sudaĝ and related names
Sudaĝ (dsud-áĝ or dsù-da-áĝ), "golden yellow shine" or "golden yellow shining rock/metal", is attested as a name of Aya in multiple god lists, including An = Anum (tablet III, line 131) and its Old Babylonian forerunner. A further name present in the same source, Sudgan (tablet III, line 130), might have a similar meaning ("light", "glow"). Ninsudaĝ (dnin-BU-áĝ, interpreted as dnin-sud4-áĝ), attested in the Early Dynastic god list from Fara and possibly in the Old Babylonian god list from Mari, might be a further variant of the name, though the reading is ultimately uncertain in this case.
Due to the similarity of the names Sudaĝ and Sud, the tutelary goddess of Shuruppak equated with Ninlil, the latter appears in the role of Ishum's mother in a single myth. However, it has been argued that Sud and Sudaĝ were only confused with each other rather than conflated or syncretised.
Ninkar
Ninkar or Ninkara (from kár, "to light up") was one of the names of Aya according to An = Anum (tablet III, line 126). However, this theonym initially referred to a separate deity, presumably considered to be the goddess of daylight. In the oldest available sources her name was written as dnin-kar, while dnin-kár(-ra), first attested in the Ur III period, is presumed to be a later variant. Joan Goodnick Westenholz argued that she is mentioned in one of the Early Dynastic Zame Hymns from Abu Salabikh. Krebernik initially also tentatively accepted that this text might contain a reference to Ninkar. However, later on, in a translation of the text he prepared in collaboration with Jan Lisman, the corresponding passage has been interpreted as a reference to a "quay (kar) of Ningal" instead. It is known that a temple dedicated to Ninkar existed in Lagash. She is additionally attested in the theophoric name Ur-Ninkar, one of whose bearers might have been a deified king of Umma.
Krebernik assumes that in texts from Ebla, the name Ninkar also refers to the spouse of a sun deity, who he assumed was seen as male in this city. Alfonso Archi instead concludes that the Eblaite sun deity was primarily female based on available lexical evidence. Westenholz proposed that Ninkar in Eblaite texts should be interpreted as Ninkarrak rather than the phonetically similar but less well attested Mesopotamian Ninkar. She pointed out occasional shortening of Ninkarrak's name to "Ninkar" is known from Mesopotamian sources. The identification of Eblaite Ninkar with Ninkarrak is also accepted by Archi.
Other names
Further names of Aya attested in An = Anum include Nin-mul-guna ("lady colorful star"; tablet III, line 132) and Nin-ul-šutag ("lady delighted with charm"; tablet III, line 134, the end of the Aya section). Paul-Alain Beaulieu additionally proposes that Belet Larsa ("Lady of Larsa"), known from a number of Neo-Babylonian letters, might be identical with Aya.
Character and iconography
Aya was considered the personification of dawn. She was associated with morning light and the rising sun, and was called the "morning-maker". Her other primary function was that of a divine bride, as exemplified by her epithet kallatum ("bride", "daughter-in-law"), and in this capacity she was regarded as the epitome of beauty and charm. She was also commonly invoked to intercede with her husband Shamash on behalf of worshipers. This function is also well attested for other spouses of popular deities, such as Ninmug and Shala, the wives of Ishum and Adad, as well as for Inanna's sukkal Ninshubur.
The astronomical compendium MUL.APIN states that Aya was associated with the constellation Ewe, typically represented by the sumerogram mulU8, though a source referring to it with the phonetic Akkadian translation, mulImmertu, is known too. It might have corresponded to the northeastern section of the constellation Boötes. However, ultimately its identification remains uncertain.
In Mesopotamian art Aya was commonly depicted frontally. Many depictions highlighted her beauty and sexual charm. On seals from Sippar she was often depicted wearing a type of garment which exposed her right breast, meant to emphasize her qualities as a charming and attractive bride. Ishtar and Annunitum (who in Sippar functioned as a separate goddess, rather than an epithet) were depicted similarly. The existence of an emblem representing Aya is mentioned in texts from Sippar, but no descriptions of it are known.
Associations with other deities
As the wife of Shamash, Aya was regarded as the daughter-in-law of his parents Suen and Ningal and sister-in-law of his sister Ishtar. Their daughters were Mamu (or Mamud), the goddess of dreams and Kittum, the personification of truth. According to Joan Goodnick Westenholz another deity considered to be their child was Ishum.
In Hurrian sources Aya was also viewed as the spouse of a sun god, Šimige. A trilingual Sumero-Hurro-Ugaritic edition of the Weidner god list from Ugarit attests the equivalence between Shamash (Utu), Šimige and the local sun goddess Shapash (Šapšu). Apparently to avoid the implications that Shapash had a wife, the scribes interpreted the name of Aya, present in the Mesopotamian original, as an unconventional writing of Ea, with his Hurrian name Eyan corresponding to it in the Hurrian column and local craftsman god Kothar-wa-Khasis in the Ugaritic one.
A single god list dated to the Middle Babylonian period or later equates Lahar with Aya and explains that the former should be understood as "Aya as the goddess of caring for things" (da-a šá ku-né-e), though Wilfred G. Lambert noted this equation is unusual, as Lahar was consistently regarded as male otherwise, and the evidence for connections between both goddesses and mortal women with herding sheep, a sphere of life he was associated with, is limited.
Worship
Aya was already worshiped in the Early Dynastic period. While she is overall less well attested in textual record than major goddesses such as Ishtar, Nanaya, Ninlil or Ninisina, it is nonetheless assumed that she was a popular target of personal devotion, as she appears commonly in personal names and on seals, especially in the Old Babylonian period. In personal letters she is attested with frequency lesser only than Ishtar.
Sippar
Aya was worshiped in Sippar in the temple of Shamash, known under the ceremonial name . They are the divine couple most often invoked together in seal inscriptions from this city, followed by Adad and Shala and Enki and Damkina. In legal texts, Aya often appears as a divine witness alongside her husband, their daughter Mamu and Shamash's sukkal Bunene.
In the Sargonic period, Manishtushu dedicated a mace head to Aya in this city. Hammurabi of Babylon referred to himself as the "beloved of Aya" in an inscription commemorating the construction of new walls of Sippar in the twenty-fifth year of his reign. He also mentioned Aya in an inscription commemorating the construction of a canal named after her, Aya-ḫegal, "Aya is abundance". Samsu-iluna called himself the "beloved of Shamash and Aya" and both renovated the Ebabbar and built walls around Sippar. It has also been noted that the Naditu community from this city was particularly closely associated with Aya, as evidenced by the fact that its members addressed her as their mistress, commonly took theophoric names invoking her, and exclusively swore oaths by her. They were a class of women closely associated with Shamash. Their existence is particularly well attested in the Old Babylonian period, and it has been argued that the institution first developed around 1880 BCE, during the reign of Sumu-la-El of Babylon. Naditu lived in a building referred to as gagûm, conventionally translated as "cloister," and Tonia Sharlach notes they can be compared to medieval Christian nuns. They are sometimes described as "priestesses" in modern literature, but while it is well attested that they were considered to be dedicated to a specific deity, there is little evidence for their involvement in religious activities other than personal prayer, and it is not impossible they were understood as a fully separate social class.
Other Babylonian cities
It has been argued that, in contrast with her position in Sippar, Aya was less prominent in the other city associated with Shamash, Larsa, where she does not appear in official lists of offerings. It is assumed that his temple in this city, which also bore the name Ebabbar, was nonetheless also dedicated to her. Some references to her are also present in texts from the Neo-Babylonian period, with one text mentioning that the priests from Larsa sent jewelry of Aya and of the "divine daughter of Ebabbar" to Uruk for repairs. References to a "treasury of Shamash and Aya" are known too.
While Aya was not worshiped in Neo-Babylonian Uruk, she appears in ritual texts from this city from the Seleucid period. Julia Krul suggests that her introduction into the local pantheon reflected a broader phenomenon of incorporating spouses, children and servants of deities already worshiped locally (in this case Shamash) into it. She was celebrated during the New Year festival. In this context she appears alongside Shamash and Bunene.
A house of worship dedicated to Aya, the Edimgalanna ("house, great bond of heaven"), is mentioned in the Canonical Temple List, but its location is unknown.
Outside Babylonia
Aya was worshiped in Mari in the Old Babylonian period. She appears in theophoric names of women from this city with comparable frequency to Shamash and Dagan, the head god of the region, though less commonly than Annu, Ishtar, Išḫara, Kakka (regarded as a goddess in this city), Mamma and Admu. Examples include Aya-lamassi, Aya-ummi and Yatara-Aya.
A sanctuary dedicated to Aya, Eidubba ("house of storage bins") existed in Assur in Assyria.
Hurrian reception
Aya was among the Mesopotamian deities incorporated into Hurrian religion. She is attested in the offering lists focused on Ḫepat and her circle. She is one of the Hurrian deities depicted in the Yazılıkaya sanctuary, where a relief of her can be seen in a procession of goddesses, between Nikkal and a figure who might represent Šauška. She is also attested in the itkalzi rituals.
Mythology
An UD.GAL.NUN text, known from five copies from Abu Salabikh and one from Fara, which focuses on Utu traveling to various mountainous areas to bring deities or animals from them, lists Šerda as the last of the deities he transports and describes her as a resident of the "mountain-lands of Amurru" (kur mar-tu). According to Kamran Vincent Zand, this term should be understood as a designation of the Middle Euphrates in this context, and is the westernmost area mentioned. He also points out that the next line of the text mentions Mari.
Buduhudug, a mythical mountain where the sun was believed to set, was regarded as "the entrance of Shamash to Aya" (nēreb dŠamaš <ana> dAya) - the place where they were able to reunite each day after Shamash finished his journey through the sky.
In the "Standard Babylonian" version of the Epic of Gilgamesh, Ninsun during her prayer to Shamash asks Aya three times to intercede on behalf of her son Gilgamesh to guarantee his safety both during the day and the night. Ninsun states that the optimal time for Aya to appeal to her husband is right after sunset, when he returns home from his daily journey.
Notes
References
Bibliography
Mesopotamian goddesses
Solar goddesses
Hurrian deities
Dawn goddesses
Dawn
Larsa
Sippar | Aya (goddess) | [
"Physics"
] | 3,213 | [
"Physical phenomena",
"Earth phenomena",
"Dawn"
] |
9,584,635 | https://en.wikipedia.org/wiki/Bornology | In mathematics, especially functional analysis, a bornology on a set X is a collection of subsets of X satisfying axioms that generalize the notion of boundedness. One of the key motivations behind bornologies and bornological analysis is the fact that bornological spaces provide a convenient setting for homological algebra in functional analysis. This is becausepg 9 the category of bornological spaces is additive, complete, cocomplete, and has a tensor product adjoint to an internal hom, all necessary components for homological algebra.
History
Bornology originates from functional analysis. There are two natural ways of studying the problems of functional analysis: one way is to study notions related to topologies (vector topologies, continuous operators, open/compact subsets, etc.) and the other is to study notions related to boundedness (vector bornologies, bounded operators, bounded subsets, etc.).
For normed spaces, from which functional analysis arose, topological and bornological notions are distinct but complementary and closely related.
For example, the unit ball centered at the origin is both a neighborhood of the origin and a bounded subset.
Furthermore, a subset of a normed space is a neighborhood of the origin (respectively, is a bounded set) exactly when it contains (respectively, it is contained in) a non-zero scalar multiple of this ball; so this is one instance where the topological and bornological notions are distinct but complementary (in the sense that their definitions differ only by which of and is used).
Other times, the distinction between topological and bornological notions may even be unnecessary.
For example, for linear maps between normed spaces, being continuous (a topological notion) is equivalent to being bounded (a bornological notion).
Although the distinction between topology and bornology is often blurred or unnecessary for normed space, it becomes more important when studying generalizations of normed spaces.
Nevertheless, bornology and topology can still be thought of as two necessary, distinct, and complementary aspects of one and the same reality.
The general theory of topological vector spaces arose first from the theory of normed spaces and then bornology emerged from this general theory of topological vector spaces, although bornology has since become recognized as a fundamental notion in functional analysis.
Born from the work of George Mackey (after whom Mackey spaces are named), the importance of bounded subsets first became apparent in duality theory, especially because of the Mackey–Arens theorem and the Mackey topology.
Starting around the 1950s, it became apparent that topological vector spaces were inadequate for the study of certain major problems.
For example, the multiplication operation of some important topological algebras was not continuous, although it was often bounded.
Other major problems for which TVSs were found to be inadequate were developing a more general theory of differential calculus, generalizing distributions from (the usual) scalar-valued distributions to vector- or operator-valued distributions, and extending the holomorphic functional calculus of Gelfand (which is primarily concerned with Banach algebras or locally convex algebras) to a broader class of operators, including those whose spectra are not compact.
Bornology has been found to be a useful tool for investigating these problems and others, including problems in algebraic geometry and general topology.
Definitions
A bornology on a set $X$ is a cover of $X$ that is closed under finite unions and taking subsets. Elements of a bornology are called bounded sets.
Explicitly, a bornology or boundedness on a set $X$ is a family $\mathcal{B}$ of subsets of $X$ such that
$\mathcal{B}$ is stable under inclusion, or downward closed: if $B \in \mathcal{B}$ then every subset of $B$ is an element of $\mathcal{B}$.
Stated in plain English, this says that subsets of bounded sets are bounded.
$\mathcal{B}$ covers $X$: every point of $X$ is an element of some $B \in \mathcal{B}$, or equivalently, $X = \bigcup \mathcal{B}$.
Assuming (1), this condition may be replaced with: $\{x\} \in \mathcal{B}$ for every $x \in X$. In plain English, this says that every point is bounded.
$\mathcal{B}$ is stable under finite unions: the union of finitely many elements of $\mathcal{B}$ is an element of $\mathcal{B}$, or equivalently, the union of any two sets belonging to $\mathcal{B}$ also belongs to $\mathcal{B}$.
In plain English, this says that the union of two bounded sets is a bounded set.
in which case the pair $(X, \mathcal{B})$ is called a bounded structure or a bornological set.
Thus a bornology can equivalently be defined as a downward closed cover that is closed under binary unions.
A non-empty family of sets that is closed under finite unions and taking subsets (properties (1) and (3)) is called an ideal (because it is an ideal in the Boolean algebra/field of sets consisting of all subsets of $X$). A bornology on a set $X$ can thus be equivalently defined as an ideal that covers $X$.
Elements of $\mathcal{B}$ are called $\mathcal{B}$-bounded sets or simply bounded sets, if $\mathcal{B}$ is understood.
Properties (1) and (2) imply that every singleton subset of $X$ is an element of every bornology on $X$; property (3), in turn, guarantees that the same is true of every finite subset of $X$. In other words, points and finite subsets are always bounded in every bornology. In particular, the empty set is always bounded.
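To make the axioms concrete, they can be checked mechanically when the underlying set is finite. The following Python sketch is illustrative only; the function name and the frozenset representation are choices of this example, not part of the theory. (Note that on a finite set the axioms force the discrete bornology, since finite subsets are always bounded, so the checker accepts exactly the full power set.)

```python
from itertools import chain, combinations

def is_bornology(X, B):
    """Test whether the family B (of frozensets) is a bornology on the
    finite set X: it must cover X, be downward closed, and be stable
    under binary (hence all finite) unions."""
    B = set(B)
    covers = all(any(x in S for S in B) for x in X)              # axiom (2)
    downward_closed = all(
        frozenset(T) in B
        for S in B
        for T in chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))
    )                                                            # axiom (1)
    union_stable = all((S | T) in B for S in B for T in B)       # axiom (3)
    return covers and downward_closed and union_stable

X = {0, 1, 2}
power_set = {frozenset(T) for r in range(len(X) + 1)
             for T in combinations(sorted(X), r)}
assert is_bornology(X, power_set)                      # the discrete bornology
assert not is_bornology(X, {frozenset({0}), frozenset({1, 2})})  # not downward closed
```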
If $(X, \mathcal{B})$ is a bounded structure and $X \notin \mathcal{B}$, then the set of complements $\{X \setminus B : B \in \mathcal{B}\}$ is a (proper) filter called the filter at infinity; it is always a free filter, which by definition means that it has empty intersection/kernel, because $\{x\} \in \mathcal{B}$ for every $x \in X$.
Bases and subbases
If $\mathcal{A}$ and $\mathcal{B}$ are bornologies on $X$ then $\mathcal{B}$ is said to be finer or stronger than $\mathcal{A}$, and $\mathcal{A}$ is said to be coarser or weaker than $\mathcal{B}$, if $\mathcal{A} \subseteq \mathcal{B}$.
A family of sets $\mathcal{B}_0$ is called a base or fundamental system of a bornology $\mathcal{B}$ if $\mathcal{B}_0 \subseteq \mathcal{B}$ and for every $B \in \mathcal{B}$ there exists a $B_0 \in \mathcal{B}_0$ such that $B \subseteq B_0$.
A family of sets $\mathcal{S}$ is called a subbase of a bornology $\mathcal{B}$ if $\mathcal{S} \subseteq \mathcal{B}$ and the collection of all finite unions of sets in $\mathcal{S}$ forms a base for $\mathcal{B}$.
Every base for a bornology is also a subbase for it.
Generated bornology
The intersection of any collection of (one or more) bornologies on $X$ is once again a bornology on $X$.
Such an intersection of bornologies will cover $X$ because every bornology on $X$ contains every finite subset of $X$ (that is, if $\mathcal{B}$ is a bornology on $X$ and $F \subseteq X$ is finite then $F \in \mathcal{B}$). It is readily verified that such an intersection will also be closed under (subset) inclusion and finite unions and thus will be a bornology on $X$.
Given a collection $\mathcal{S}$ of subsets of $X$, the smallest bornology on $X$ containing $\mathcal{S}$ is called the bornology generated by $\mathcal{S}$.
It is equal to the intersection of all bornologies on $X$ that contain $\mathcal{S}$ as a subset.
This intersection is well-defined because the power set $\wp(X)$ of $X$ is always a bornology on $X$, so every family $\mathcal{S}$ of subsets of $X$ is always contained in at least one bornology on $X$.
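For a finite ground set the generated bornology can also be computed directly, rather than by intersecting all bornologies containing the collection. A minimal Python sketch (the function name is illustrative; it assumes the elements of $X$ are hashable and sortable):

```python
from itertools import combinations

def generated_bornology(X, S):
    """Smallest bornology on the finite set X containing the collection S:
    add all singletons so the result covers X, close under binary unions
    to a fixed point, then close downward under subsets.  (Doing downward
    closure last suffices: a union of subsets of two members is a subset
    of the union of those members, which is already present.)"""
    B = {frozenset(s) for s in S} | {frozenset({x}) for x in X}
    while True:                                   # union closure
        new = {a | b for a in B for b in B} - B
        if not new:
            break
        B |= new
    closed = set(B)
    for member in B:                              # downward closure
        closed |= {frozenset(T) for r in range(len(member) + 1)
                   for T in combinations(sorted(member), r)}
    return closed

# On a finite set this is necessarily the discrete bornology (the full
# power set); the function just makes the closure operations explicit.
print(sorted(map(sorted, generated_bornology({1, 2, 3}, [{1, 2}]))))
```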
Bounded maps
Suppose that $(X, \mathcal{A})$ and $(Y, \mathcal{B})$ are bounded structures.
A map $f : X \to Y$ is called a locally bounded map, or just a bounded map, if the image under $f$ of every $\mathcal{A}$-bounded set is a $\mathcal{B}$-bounded set;
that is, if $f(A) \in \mathcal{B}$ for every $A \in \mathcal{A}$.
Since the composition of two locally bounded maps is again locally bounded, it is clear that the class of all bounded structures forms a category whose morphisms are bounded maps.
An isomorphism in this category is called a bornomorphism; it is a bijective locally bounded map whose inverse is also locally bounded.
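In the same finite-set setting as above, local boundedness of a map is a one-line check; this sketch (names illustrative) treats the map as a Python dict:

```python
def is_locally_bounded(f, A, B):
    """f: dict sending points of X to points of Y; A and B: bornologies
    on X and Y given as collections of frozensets.  The map is locally
    bounded exactly when the image of every A-bounded set is B-bounded."""
    return all(frozenset(f[x] for x in S) in B for S in A)
```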
Examples of bounded maps
If $L : X \to Y$ is a continuous linear operator between two topological vector spaces (not necessarily Hausdorff), then it is a bounded linear operator when $X$ and $Y$ have their von Neumann bornologies, where a set is bounded precisely when it is absorbed by all neighbourhoods of the origin (these are the subsets of a TVS that are normally called bounded when no other bornology is explicitly mentioned).
The converse is in general false.
A sequentially continuous map between two TVSs is necessarily locally bounded.
General constructions
Discrete bornology
For any set $X$, the power set $\wp(X)$ of $X$ is a bornology on $X$ called the discrete bornology. Since every bornology on $X$ is a subset of $\wp(X)$, the discrete bornology is the finest bornology on $X$.
If $(X, \mathcal{B})$ is a bounded structure then (because bornologies are downward closed) $\mathcal{B}$ is the discrete bornology if and only if $X \in \mathcal{B}$.
Indiscrete bornology
For any set $X$, the set of all finite subsets of $X$ is a bornology on $X$ called the indiscrete bornology. It is the coarsest bornology on $X$, meaning that it is a subset of every bornology on $X$.
Sets of bounded cardinality
The set of all countable subsets of $X$ is a bornology on $X$.
More generally, for any infinite cardinal $\kappa$, the set of all subsets of $X$ having cardinality at most $\kappa$ is a bornology on $X$.
Inverse image bornology
If $f : S \to X$ is a map and $\mathcal{B}$ is a bornology on $X$, then $f^{-1}(\mathcal{B})$ denotes the bornology generated by $\{f^{-1}(B) : B \in \mathcal{B}\}$, which is called the inverse image bornology or the initial bornology induced by $f$ on $S$.
Let $S$ be a set, $\left(X_i, \mathcal{B}_i\right)_{i \in I}$ be an $I$-indexed family of bounded structures, and let $\left(f_i\right)_{i \in I}$ be an $I$-indexed family of maps where $f_i : S \to X_i$ for every $i \in I$.
The inverse image bornology on $S$ determined by these maps is the strongest bornology on $S$ making each $f_i : S \to X_i$ locally bounded.
This bornology is equal to $\bigcap_{i \in I} f_i^{-1}\left(\mathcal{B}_i\right)$.
Direct image bornology
Let $S$ be a set, $\left(X_i, \mathcal{A}_i\right)_{i \in I}$ be an $I$-indexed family of bounded structures, and let $\left(f_i\right)_{i \in I}$ be an $I$-indexed family of maps where $f_i : X_i \to S$ for every $i \in I$.
The direct image bornology on $S$ determined by these maps is the weakest bornology on $S$ making each $f_i : X_i \to S$ locally bounded.
If for each $i \in I$, $\mathcal{A}_i'$ denotes the bornology generated by $f_i\left(\mathcal{A}_i\right)$, then this bornology is equal to the collection of all subsets of $S$ of the form $\bigcup_{i \in I} A_i$ where each $A_i \in \mathcal{A}_i'$ and all but finitely many $A_i$ are empty.
Subspace bornology
Suppose that $(X, \mathcal{B})$ is a bounded structure and $S$ is a subset of $X$.
The subspace bornology on $S$ is the finest bornology on $S$ making the inclusion map $\iota : S \to X$ of $S$ into $(X, \mathcal{B})$ (defined by $\iota(s) = s$) locally bounded.
Product bornology
Let $\left(X_i, \mathcal{B}_i\right)_{i \in I}$ be an $I$-indexed family of bounded structures, let $X = \prod_{i \in I} X_i$, and for each $i \in I$, let $\pi_i : X \to X_i$ denote the canonical projection.
The product bornology on $X$ is the inverse image bornology determined by the canonical projections $\pi_i : X \to X_i$.
That is, it is the strongest bornology on $X$ making each of the canonical projections locally bounded.
A base for the product bornology is given by $\left\{ \prod_{i \in I} S_i : S_i \in \mathcal{B}_i \text{ for every } i \in I \right\}$.
Topological constructions
Compact bornology
A subset $S$ of a topological space $X$ is called relatively compact if its closure is a compact subspace of $X$.
For any topological space $X$ in which singleton subsets are relatively compact (such as a T1 space), the set of all relatively compact subsets of $X$ forms a bornology on $X$ called the compact bornology on $X$.
Every continuous map between T1 spaces is bounded with respect to their compact bornologies.
The set of relatively compact subsets of $\mathbb{R}$ forms a bornology on $\mathbb{R}$. A base for this bornology is given by all closed intervals of the form $[-n, n]$ for $n = 1, 2, 3, \ldots$
Metric bornology
Given a metric space $(X, d)$, the metric bornology consists of all subsets $S \subseteq X$ such that the supremum $\sup_{s, t \in S} d(s, t)$ is finite.
Similarly, given a measure space $(X, \Sigma, \mu)$, the family of all measurable subsets $S \in \Sigma$ of finite measure (meaning $\mu(S) < \infty$) forms a bornology on $X$.
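The defining supremum is easy to compute for finite samples; a small illustrative Python helper using the Euclidean metric (names are my own; math.dist requires Python 3.8+):

```python
import math

def diameter(points):
    """max (the sup, for a finite set) of pairwise Euclidean distances."""
    return max((math.dist(p, q) for p in points for q in points), default=0.0)

# A set belongs to the metric bornology exactly when this supremum is
# finite; any finite set qualifies, while e.g. a ray in the plane has
# infinite diameter and is therefore unbounded in this bornology.
print(diameter([(0, 0), (3, 4), (1, 1)]))  # 5.0
```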
Closure and interior bornologies
Suppose that $X$ is a topological space and $\mathcal{B}$ is a bornology on $X$.
The bornology generated by the set of all topological interiors of sets in $\mathcal{B}$ (that is, generated by $\{\operatorname{int} B : B \in \mathcal{B}\}$) is called the interior of $\mathcal{B}$ and is denoted by $\operatorname{int} \mathcal{B}$.
The bornology $\mathcal{B}$ is called open if $\mathcal{B} = \operatorname{int} \mathcal{B}$.
The bornology generated by the set of all topological closures of sets in $\mathcal{B}$ (that is, generated by $\{\operatorname{cl} B : B \in \mathcal{B}\}$) is called the closure of $\mathcal{B}$ and is denoted by $\operatorname{cl} \mathcal{B}$.
We necessarily have $\operatorname{int} \mathcal{B} \subseteq \mathcal{B} \subseteq \operatorname{cl} \mathcal{B}$.
The bornology $\mathcal{B}$ is called closed if it satisfies any of the following equivalent conditions:
the closed subsets of $X$ generate $\mathcal{B}$;
the closure of every $B \in \mathcal{B}$ belongs to $\mathcal{B}$.
The bornology $\mathcal{B}$ is called proper if $\mathcal{B}$ is both open and closed.
The topological space $X$ is called locally $\mathcal{B}$-bounded, or just locally bounded, if every $x \in X$ has a neighborhood that belongs to $\mathcal{B}$.
Every compact subset of a locally bounded topological space is bounded.
Bornology of a topological vector space
If $X$ is a topological vector space (TVS) then the set of all bounded subsets of $X$ forms a bornology (indeed, even a vector bornology) on $X$ called the von Neumann bornology of $X$, the usual bornology of $X$, or simply the bornology of $X$.
In any locally convex TVS $X$, the set of all closed bounded disks forms a base for the usual bornology of $X$.
A linear map between two bornological spaces is continuous if and only if it is bounded (with respect to the usual bornologies).
Topological rings
Suppose that $X$ is a commutative topological ring.
A subset $S$ of $X$ is called a bounded set if for each neighborhood $U$ of the origin in $X$, there exists a neighborhood $V$ of the origin in $X$ such that $S V \subseteq U$.
See also
References
Functional analysis
Topological vector spaces | Bornology | [
"Mathematics"
] | 2,366 | [
"Functions and mappings",
"Functional analysis",
"Vector spaces",
"Mathematical objects",
"Topological vector spaces",
"Space (mathematics)",
"Mathematical relations"
] |
9,584,709 | https://en.wikipedia.org/wiki/Thermal%20oxidizer | A thermal oxidizer (also known as thermal oxidiser, or thermal incinerator) is a process unit for air pollution control in many chemical plants that decomposes hazardous gases at a high temperature and releases them into the atmosphere.
Principle
Thermal oxidizers are typically used to destroy hazardous air pollutants (HAPs) and volatile organic compounds (VOCs) from industrial air streams. These pollutants are generally hydrocarbon-based and, when destroyed via thermal combustion, are chemically oxidized to form CO2 and H2O. The three main factors in designing an effective thermal oxidizer are temperature, residence time, and turbulence. The temperature must be high enough to ignite the waste gas; most organic compounds ignite at temperatures between and . To ensure near-complete destruction of hazardous gases, most basic oxidizers are operated at much higher temperature levels. When a catalyst is used, the operating temperature range may be lower. Sufficient residence time ensures that the combustion reaction has enough time to occur. The turbulence factor refers to the mixing of combustion air with the hazardous gases.
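The residence-time factor can be made concrete with a back-of-the-envelope calculation: residence time is chamber volume divided by the actual volumetric flow at operating temperature. The Python sketch below is illustrative only; the function name, the example numbers, and the 273.15 K reference are assumptions of this example, and real designs also account for composition and pressure.

```python
def residence_time_s(chamber_volume_m3, flow_nm3_per_s, t_operating_k,
                     t_ref_k=273.15):
    """Mean gas residence time = chamber volume / actual volumetric flow.

    Flows are usually quoted at normal (reference) conditions, so scale
    the volumetric flow to the operating temperature with the ideal gas
    law before dividing it into the chamber volume."""
    actual_flow_m3_per_s = flow_nm3_per_s * (t_operating_k / t_ref_k)
    return chamber_volume_m3 / actual_flow_m3_per_s

# A 6 m^3 chamber treating 2 Nm^3/s of gas at 1,050 K holds the gas for
# roughly 0.8 s, in the range typically quoted for thermal oxidizers.
print(round(residence_time_s(6.0, 2.0, 1050.0), 2))
```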
Technologies
Direct fired thermal oxidizer – afterburner
The simplest technology of thermal oxidation is direct-fired thermal oxidizer. A process stream with hazardous gases is introduced into a firing box through or near the burner and enough residence time is provided to get the desired destruction removal efficiency (DRE) of the VOCs. Most direct-fired thermal oxidizers operate at temperature levels between and with air flow rates of 0.24 to 24 standard cubic meters per second.
Also called afterburners in cases where the input gases come from a process where combustion is incomplete, these systems are the least capital-intensive and can be integrated with downstream boilers and heat exchangers to optimize fuel efficiency. Thermal oxidizers are best applied where there is a very high concentration of VOCs to act as the fuel source (instead of natural gas or oil) for complete combustion at the targeted operating temperature.
Regenerative thermal oxidizer (RTO)
One of today's most widely accepted air pollution control technologies across industry is a regenerative thermal oxidizer, commonly referred to as an RTO. RTOs use a ceramic bed, which is heated from a previous oxidation cycle, to preheat the input gases and partially oxidize them. The preheated gases enter a combustion chamber that is heated by an external fuel source to reach the target oxidation temperature, which is in the range between and . The final temperature may be as high as for applications that require maximum destruction. The air flow rates are 2.4 to 240 standard cubic meters per second.
RTOs are very versatile and extremely efficient – thermal efficiency can reach 95%. They are regularly used for abating solvent fumes, odours, and similar emissions from a wide range of industries. Regenerative thermal oxidizers are suitable for a range of low to high VOC concentrations, up to about 10 g/m3 of solvent. There are currently many types of regenerative thermal oxidizer on the market capable of 99.5+% volatile organic compound (VOC) oxidation or destruction efficiency. The ceramic heat exchangers in the towers can be designed for thermal efficiencies as high as 97+%.
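The fuel impact of the quoted thermal efficiencies can be estimated with a simple steady-state heat balance. The sketch below is illustrative only: it ignores VOC heat release, wall losses, and the temperature dependence of cp, and the parameter names and example numbers are assumptions of this example.

```python
def auxiliary_duty_kw(mass_flow_kg_s, delta_t_k, thermal_eff,
                      cp_kj_per_kg_k=1.0):
    """Burner duty needed to lift the process stream by delta_t_k when a
    fraction thermal_eff of that sensible heat is returned by the
    ceramic beds.  cp ~ 1 kJ/(kg.K) is a rough value for air."""
    return mass_flow_kg_s * cp_kj_per_kg_k * delta_t_k * (1.0 - thermal_eff)

# Raising 10 kg/s of air by 800 K costs ~8,000 kW with no heat recovery,
# but only ~400 kW at 95% thermal efficiency.
print(auxiliary_duty_kw(10.0, 800.0, 0.0),
      round(auxiliary_duty_kw(10.0, 800.0, 0.95), 1))
```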
Ventilation air methane thermal oxidizer (VAMTOX)
Ventilation air methane thermal oxidizers are used to destroy methane in the exhaust air of underground coal mine shafts. Methane is a greenhouse gas and, when oxidized via thermal combustion, is chemically altered to form CO2 and H2O. CO2 is 25 times less potent than methane when emitted into the atmosphere with regard to global warming. Concentrations of methane in mine ventilation exhaust air of coal and trona mines are very dilute; typically below 1% and often below 0.5%. VAMTOX units have a system of valves and dampers that direct the air flow across one or more ceramic-filled bed(s). On start-up, the system preheats by raising the temperature of the heat-exchanging ceramic material in the bed(s) to or above the auto-oxidation temperature of methane, at which time the preheating system is turned off and mine exhaust air is introduced. Then the methane-filled air reaches the preheated bed(s), releasing the heat from combustion. This heat is then transferred back to the bed(s), thereby maintaining the temperature at or above what is necessary to support auto-thermal operation.
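Using only the 25:1 potency ratio quoted above and the molar masses of CH4 (16 g/mol) and CO2 (44 g/mol), the climate benefit of oxidizing ventilation air methane can be worked out directly; a short illustrative calculation:

```python
GWP_CH4 = 25.0          # CO2-equivalents per kg of methane, as cited above
M_CH4, M_CO2 = 16.0, 44.0

def co2e_avoided_per_kg_ch4():
    """CH4 + 2 O2 -> CO2 + 2 H2O: each kg of methane oxidized emits
    44/16 = 2.75 kg of CO2 instead of 25 kg CO2-equivalent of methane."""
    return GWP_CH4 - M_CO2 / M_CH4

print(co2e_avoided_per_kg_ch4())  # 22.25 kg CO2e avoided per kg CH4
```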
Thermal recuperative oxidizer
A less commonly used thermal oxidizer technology is a thermal recuperative oxidizer. Thermal recuperative oxidizers have a primary and/or secondary heat exchanger within the system. A primary heat exchanger preheats the incoming dirty air by recuperating heat from the exiting clean air. This is done by a shell and tube heat exchanger or a plate heat exchanger. As the incoming air passes on one side of the metal tube or plate, hot clean air from the combustion chamber passes on the other side of the tube or plate and heat is transferred to the incoming air through the process of conduction using the metal as the medium of heat transfer. In a secondary heat exchanger the same concept applies for heat transfer, but the air being heated by the outgoing clean process stream is being returned to another part of the plant – perhaps back to the process.
Biomass fired thermal oxidizer
Biomass, such as wood chips, can be used as the fuel for a thermal oxidizer. The biomass is then gasified and the stream with hazardous gases is mixed with the biomass gas in a firing box. Sufficient turbulence, retention time, oxygen content and temperature will ensure destruction of the VOCs. Such a biomass-fired thermal oxidizer has been installed at Warwick Mills, New Hampshire. The inlet concentrations are between 3,000 and 10,000 ppm VOC. The outlet concentrations of VOC are below 3 ppm, giving a VOC destruction efficiency of 99.8–99.9%.
Flameless thermal oxidizer (FTO)
In a flameless thermal oxidizer system, waste gas, ambient air, and auxiliary fuel are premixed prior to passing the combined gaseous mixture through a preheated inert ceramic media bed. Through the transfer of heat from the ceramic media to the gaseous mixture, the organic compounds in the gas are oxidized to innocuous byproducts, i.e., carbon dioxide (CO2) and water vapor (H2O), while also releasing heat into the ceramic media bed.
The gas mixture temperature is kept below the lower flammability limit based on the percentages of each organic species present. Flameless thermal oxidizers are designed to operate safely and reliably below the composite LFL while maintaining a constant operating temperature. Waste gas streams experience multiple seconds of residence time at high temperatures leading to measured destruction removal efficiencies that exceed 99.9999%. Premixing all of the gases prior to treatment eliminates localized high temperatures which leads to thermal NOx typically below 2 ppmV. Flameless thermal oxidizer technology was originally developed at the U.S. Department of Energy to more efficiently convert energy in burners, process heaters, and other thermal systems.
Fluidized bed concentrator (FBC)
In a fluidized bed concentrator (FBC), a bed of activated carbon beads adsorbs volatile organic compounds (VOCs) from the exhaust gas. Evolving from the earlier fixed-bed and carbon rotor concentrators, the FBC system forces the VOC-laden air through several perforated steel trays, increasing the velocity of the air and allowing the sub-millimeter carbon beads to fluidize, or behave as if suspended in a liquid. This increases the surface area of the carbon-gas interaction, making it more effective at capturing VOCs.
Catalytic oxidizer
Catalytic oxidizer (also known as catalytic incinerator) is another category of oxidation systems that is similar to typical thermal oxidizers, but the catalytic oxidizers use a catalyst to promote the oxidation. Catalytic oxidation occurs through a chemical reaction between the VOC hydrocarbon molecules and a precious-metal catalyst bed that is internal to the oxidizer system. A catalyst is a substance that is used to accelerate the rate of a chemical reaction, allowing the reaction to occur in a normal temperature range between and .
Regenerative catalytic oxidizer (RCO)
The catalyst can be used in a Regenerative Thermal Oxidizer (RTO) to allow lower operating temperatures. This is also called Regenerative Catalytic Oxidizer or RCO. For example, the thermal ignition temperature of carbon monoxide is normally . By utilizing a suitable oxidation catalyst, the ignition temperature can be reduced to around .
This can result in lower operating costs than a RTO. Most systems operate within the to degree range. Some systems are designed to operate both as RCOs and RTOs. When these systems are used special design considerations are utilized to reduce the probability of overheating (dilution of inlet gas or recycling), as these high temperatures would deactivate the catalyst, e.g. by sintering of the active material.
Recuperative catalytic oxidizer
Catalytic oxidizers can also be in the form of recuperative heat recovery to reduce the fuel requirement. In this form of heat recovery, the hot exhaust gases from the oxidizer pass through a heat exchanger to heat the new incoming air to the oxidizer.
References
Chemical equipment
Air pollution control systems | Thermal oxidizer | [
"Chemistry",
"Engineering"
] | 1,971 | [
"Chemical equipment",
"nan"
] |
9,585,625 | https://en.wikipedia.org/wiki/Flying%20ice%20cube | In molecular dynamics (MD) simulations, the flying ice cube effect is an artifact in which the energy of high-frequency fundamental modes is drained into low-frequency modes, particularly into zero-frequency motions such as overall translation and rotation of the system. The artifact derives its name from a particularly noticeable manifestation that arises in simulations of particles in vacuum, where the system being simulated acquires high linear momentum and experiences extremely damped internal motions, freezing the system into a single conformation reminiscent of an ice cube or other rigid body flying through space. The artifact is entirely a consequence of molecular dynamics algorithms and is wholly unphysical, since it violates the principle of equipartition of energy.
Origin and avoidance
The flying ice cube artifact arises from repeated rescalings of the velocities of the particles in the simulation system. Velocity rescaling is a means of imposing a thermostat on the system by multiplying the velocities of a system's particles by a factor after an integration timestep is completed, as is done by the Berendsen thermostat and the Bussi–Donadio–Parrinello thermostat. These schemes fail when the rescaling is done to a kinetic energy distribution of an ensemble that is not invariant under microcanonical molecular dynamics; thus, the Berendsen thermostat (which rescales to the isokinetic ensemble) exhibits the artifact, while the Bussi–Donadio–Parrinello thermostat (which rescales to the canonical ensemble) does not. Rescaling to an ensemble that is not invariant under microcanonical molecular dynamics violates the balance condition required of Monte Carlo simulations (molecular dynamics simulations with velocity rescaling thermostats can be thought of as Monte Carlo simulations with molecular dynamics moves and velocity rescaling moves), and this violation is the underlying cause of the artifact.
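For concreteness, the Berendsen scheme multiplies all velocities each step by the factor λ = sqrt(1 + (Δt/τ)(T0/T − 1)), where T is the instantaneous temperature and τ the coupling time. The schematic NumPy sketch below uses reduced units (kB = 1) and names of my own choosing; note that the instantaneous temperature counts every degree of freedom, including overall translation, which is where the drained energy accumulates.

```python
import numpy as np

def berendsen_rescale(v, m, T0, dt, tau, kB=1.0):
    """One Berendsen weak-coupling step: rescale velocities toward the
    target temperature T0.  v: (N, 3) velocities, m: (N,) masses."""
    kinetic = 0.5 * np.sum(m[:, None] * v**2)
    T_inst = 2.0 * kinetic / (kB * v.size)       # includes COM motion
    lam = np.sqrt(1.0 + (dt / tau) * (T0 / T_inst - 1.0))
    return lam * v

def remove_com_momentum(v, m):
    """Subtract the centre-of-mass velocity; periodically doing this was
    one of the early workarounds discussed below."""
    return v - np.sum(m[:, None] * v, axis=0) / np.sum(m)
```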
When the flying ice cube problem was first found, the Bussi–Donadio–Parrinello thermostat had not yet been developed, and it was desired to continue using the Berendsen thermostat due to the efficiency with which velocity rescaling thermostats relax systems to desired temperatures. Thus, suggestions were given to avoid the flying ice cube effect under the Berendsen thermostat, such as periodically removing the center-of-mass motions and using a longer temperature coupling time. However, more recently it has been recommended that the better practice is to discontinue use of the Berendsen thermostat entirely in favor of the Bussi–Donadio–Parrinello thermostat, as it has been shown that the latter thermostat does not exhibit the flying ice cube effect.
References
Molecular dynamics
Numerical artifacts | Flying ice cube | [
"Physics",
"Chemistry"
] | 585 | [
"Molecular dynamics",
"Computational chemistry",
"Molecular physics",
"Computational physics"
] |
9,585,793 | https://en.wikipedia.org/wiki/Reuse%20metrics | In software engineering, reuse metrics and models are used to measure code reuse and reusability. A metric is a quantitative indicator of an attribute of a thing. A model specifies relationships among metrics. Reuse models and metrics can be categorized into six types:
reuse cost-benefits models
maturity assessment
amount of reuse
failure modes
reusability
reuse library metrics
Reuse cost-benefits models include economic cost-benefit analysis as well as quality and productivity payoff.
Maturity assessment models categorize reuse programs by how advanced they are in implementing systematic reuse.
Amount of reuse metrics are used to assess and monitor a reuse improvement effort by tracking percentages of reuse for life cycle objects.
Failure modes analysis is used to identify and order the impediments to reuse in a given organization.
Reusability metrics indicate the likelihood that an artifact is reusable.
Reuse library metrics are used to manage and track usage of a reuse repository.
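As an illustration of an amount-of-reuse metric, the commonly tracked reuse percentage is simply the share of a work product drawn from a reuse library; a minimal Python sketch (parameter names are my own, and lines of code is only one possible size measure):

```python
def reuse_percent(reused_loc: int, total_loc: int) -> float:
    """Amount-of-reuse metric: percentage of the delivered lines of code
    taken from a reuse library rather than written from scratch."""
    return 100.0 * reused_loc / total_loc if total_loc else 0.0

print(reuse_percent(1200, 4000))  # 30.0
```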
References
Frakes, William and Terry, Carol. "Software Reuse: Metrics and Models." ACM Computing Surveys 28(2), pp. 415–435, 1996.
Software metrics
Reuse | Reuse metrics | [
"Mathematics",
"Engineering"
] | 252 | [
"Metrics",
"Quantity",
"Software engineering stubs",
"Software metrics",
"Software engineering"
] |
9,585,894 | https://en.wikipedia.org/wiki/Kohn%20anomaly | A Kohn anomaly or the Kohn effect is an anomaly in the dispersion relation of a phonon branch in a metal. The anomaly is named for Walter Kohn, who first proposed it in 1959.
Description
For a specific wavevector, the frequency (and thus the energy) of the associated phonon is considerably lowered, and there is a discontinuity in its derivative. In extreme cases (which can happen in low-dimensional materials), the energy of this phonon is zero, meaning that a static distortion of the lattice appears. This is one explanation for charge density waves in solids. The wavevectors at which a Kohn anomaly is possible are the nesting vectors of the Fermi surface, that is, vectors that connect many points of the Fermi surface (for a one-dimensional chain of atoms or a spherical Fermi surface this vector would be $q = 2k_F$). The electron-phonon interaction causes a rigid shift of the Fermi sphere and a failure of the Born-Oppenheimer approximation, since the electrons no longer follow the ionic motion adiabatically.
In the phonon spectrum of a metal, a Kohn anomaly is a discontinuity in the derivative of the dispersion relation that is produced by the abrupt change in the screening of lattice vibrations by conduction electrons. It can occur at any point in the Brillouin zone, because $2k_F$ is unrelated to the crystal symmetry. In one dimension, it is equivalent to a Peierls instability, and it is similar to the Jahn-Teller effect seen in molecular systems.
Kohn anomalies arise together with Friedel oscillations when one considers the Lindhard theory instead of the Thomas–Fermi approximation in order to find an expression for the dielectric function of a homogeneous electron gas. The expression for the real part of the reciprocal-space dielectric function obtained following the Lindhard theory includes a logarithmic term that is singular at $q = 2k_F$, where $k_F$ is the Fermi wavevector. Although this singularity is quite small in reciprocal space, if one takes the Fourier transform and passes into real space, the Gibbs phenomenon causes a strong oscillation of $\epsilon(\mathbf{r})$ in the proximity of the singularity mentioned above. In the context of phonon dispersion relations, these oscillations appear as a vertical tangent in the plot of $\omega(q)$, called the Kohn anomalies.
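The logarithmic term in question can be made explicit. Up to sign and normalization conventions, the standard textbook expression for the static Lindhard response of the three-dimensional electron gas (added here for illustration, not reproduced from this article) reads:

```latex
% Static Lindhard function of the 3D electron gas; the derivative of F
% diverges logarithmically at x = 1, i.e. at q = 2 k_F (Kohn anomaly):
\chi_0(q) = -\,N(E_F)\, F\!\left(\frac{q}{2k_F}\right),
\qquad
F(x) = \frac{1}{2} + \frac{1 - x^2}{4x}\,\ln\left|\frac{1 + x}{1 - x}\right|
```

Here $N(E_F)$ is the density of states at the Fermi level and $x = q/2k_F$; $F$ is continuous at $x = 1$ but its slope diverges logarithmically there, and after Fourier transformation this weak singularity produces both the real-space Friedel oscillations and the vertical tangent in $\omega(q)$.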
Many different systems exhibit Kohn anomalies, including graphene, bulk metals, and many low-dimensional systems (the reason involves the nesting condition $q = 2k_F$, which depends on the topology of the Fermi surface). However, it is important to emphasize that only materials showing metallic behaviour can exhibit a Kohn anomaly, since the model emerges from a homogeneous electron gas approximation.
History
Kohn anomalies are named for Walter Kohn, who first proposed them in 1959.
See also
Zero sound
Pomeranchuk instability
References
Condensed matter physics | Kohn anomaly | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 636 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
1,043,247 | https://en.wikipedia.org/wiki/Rhyolite%2C%20Nevada | Rhyolite is a ghost town in Nye County, in the U.S. state of Nevada. It is in the Bullfrog Hills, about northwest of Las Vegas, near the eastern boundary of Death Valley National Park.
The town began in early 1905 as one of several mining camps that sprang up after a prospecting discovery in the surrounding hills. During an ensuing gold rush, thousands of gold-seekers, developers, miners and service providers flocked to the Bullfrog Mining District. Many settled in Rhyolite, which lay in a sheltered desert basin near the region's biggest producer, the Montgomery Shoshone Mine.
Industrialist Charles M. Schwab bought the Montgomery Shoshone Mine in 1906 and invested heavily in infrastructure, including piped water, electric lines and railroad transportation, that served the town as well as the mine. By 1907, Rhyolite had electric lights, water mains, telephones, newspapers, a hospital, a school, an opera house, and a stock exchange. Published estimates of the town's peak population vary widely, but scholarly sources generally place it in a range between 3,500 and 5,000 in 1907–08.
Rhyolite declined almost as rapidly as it rose. After the richest ore was exhausted, production fell. The 1906 San Francisco earthquake and the financial panic of 1907 made it more difficult to raise development capital. In 1908, investors in the Montgomery Shoshone Mine, concerned that it was overvalued, ordered an independent study. When the study's findings proved unfavorable, the company's stock value crashed, further restricting funding. By the end of 1910, the mine was operating at a loss, and it closed in 1911. By this time, many out-of-work miners had moved elsewhere, and Rhyolite's population dropped well below 1,000. By 1920, it was close to zero.
After 1920, Rhyolite and its ruins became a tourist attraction and a setting for motion pictures. Most of its buildings crumbled, were salvaged for building materials, or were moved to nearby Beatty or other towns, although the railway depot and a house made chiefly of empty bottles were repaired and preserved. From 1988 to 1998, three companies operated a profitable open-pit mine at the base of Ladd Mountain, about south of Rhyolite. The Goldwell Open Air Museum lies on private property just south of the ghost town, which is on property overseen by the Bureau of Land Management.
Names
The town is named for rhyolite, an igneous rock composed of light-colored silicates, usually buff to pink and occasionally light gray. It belongs to the same rock class, felsic, as granite but is much less common. The Amargosa River, which flows through nearby Beatty, gets its name from the Spanish word for "bitter", amargo. In its course, the river takes up large amounts of salts, which give it a bitter taste.
"Bullfrog" was the name Frank "Shorty" Harris and Ernest "Ed" Cross, the prospectors who started the Bullfrog gold rush, gave to their mine. As quoted by Robert D. McCracken in A History of Beatty, Nevada, Harris said during a 1930 interview for Westways magazine, "The rock was green, almost like turquoise, spotted with big chunks of yellow metal, and looked a lot like the back of a frog." The Bullfrog Mining District, the Bullfrog Hills, the town of Bullfrog, and other geographical entities in the region took their name from the Bullfrog Mine.
"Bullfrog" became so popular that Giant Bullfrog, Bullfrog Merger, Bullfrog Apex, Bullfrog Annex, Bullfrog Gold Dollar, Bullfrog Mogul, and most of the district's other 200 or so mining companies included "Bullfrog" in their names. The name persisted and, decades later, was given to the short-lived Bullfrog County.
Beatty is named after "Old Man" Montillus (Montillion) Murray Beatty, a Civil War veteran and miner who bought a ranch along the Amargosa River just north of what became the town of Beatty. In 1906, he sold the ranch to the Bullfrog Water, Power, and Light Company. "Shoshone" in "Montgomery Shoshone Mine" refers to the Western Shoshone people indigenous to the region. In about 1875, the Shoshone had six camps along the Amargosa River near Beatty. The total population of these camps was 29, and because game was scarce, they subsisted largely on seeds, bulbs and plants gathered throughout the region, including the Bullfrog Hills.
Geology
The Bullfrog Hills are at the western edge of the southwestern Nevada volcanic field. Extensionally faulted volcanic rocks, ranging in age from about 13.3 million years to about 7.6 million years, overlie the region's Paleozoic sedimentary rocks. The prevailing rocks, which contain the ore deposits, are a series of rhyolitic lava flows that built to a combined thickness of about above the more ancient rock.
After the flows ceased, tectonic stresses fractured the area into many separate fault blocks. Most of these blocks tilt to the east, and the horizontal banding of individual flows shows clearly on their western scarps. Within the blocks, the ore deposits tend to occur in nearly vertical mineralized faults or fault zones in the rhyolite. Most of the lodes in the Bullfrog Hills are not simple veins but rather fissure zones with many stringers of vein material.
Geography and climate
Rhyolite is at the northern end of the Amargosa Desert in Nye County in the U.S. state of Nevada. Nestled in the Bullfrog Hills, about northwest of Las Vegas, it is about south of Goldfield, and south of Tonopah. Roughly to the east lie Beatty and the Amargosa River. To the west, roughly from Rhyolite, the Funeral and Grapevine Mountains of the Amargosa Range rise between the Amargosa Desert in Nevada and Death Valley in California. State Route 374, passing about south of Rhyolite, links Beatty to Death Valley via Daylight Pass. Rhyolite is about west of Yucca Mountain and the proposed Yucca Mountain nuclear waste repository, which is adjacent to the Nevada Test Site.
Bordered on three sides by ridges but open to the south, the ghost town is at above sea level. The high points of the ridges are Ladd Mountain to the east, Sutherland Mountain to the west, and Busch Peak to the north. Sawtooth Mountain, the highest point in the Bullfrog Hills, rises to above sea level about northwest of Rhyolite. The hills form a barrier between the Amargosa Desert and Sarcobatus Flat to the north. Most of the primary mining communities in the Beatty–Rhyolite area during the gold-rush boom of 1904–08 were either in or on the edge of the Bullfrog Hills. Of these and many smaller towns and camps in the Bullfrog district, only Beatty survived as a populated place. Prior to its demise, the rival town of Bullfrog lay about southwest of Rhyolite, and the Montgomery Shoshone Mine was on the north side of Montgomery Mountain, about northeast of Rhyolite.
Nevada's main climatic features are bright sunshine, low annual precipitation, heavy snowfall in the higher mountains, clean, dry air, and large daily temperature ranges. Strong surface heating occurs by day and rapid cooling by night, and usually even the hottest days have cool nights. The average percentage of possible sunshine in southern Nevada is more than 80 percent. Sunshine and low humidity in this region account for an average evaporation, as measured in evaporation pans, of more than of water a year.
Beatty, about lower in elevation than Rhyolite, receives only about of precipitation a year. July is the hottest month in Beatty, when the average high temperature is and the average low is . December and January are the coolest months with an average high of and an average low of in December and in January. Rhyolite is high enough in the hills to have relatively cool summers, and it has relatively mild winters. However, it is far from sources of water.
History
Boom
On August 9, 1904, Cross and Harris found gold on the south side of a southwestern Nevada hill later called Bullfrog Mountain. Assays of ore samples from the site suggested values up to $3,000 a ton, or about $ a ton in dollars when adjusted for inflation. Word of the discovery spread to Tonopah and beyond, and soon thousands of hopeful prospectors and speculators rushed to what became known as the Bullfrog Mining District.
Within the district, gold rush settlements quickly arose near the mines, and Rhyolite became the largest. It sprang up near the most promising discovery, the Montgomery Shoshone Mine, which in February 1905 produced ores assayed as high as $16,000 a ton, equivalent to $ a ton in . Starting as a two-man camp in January 1905, Rhyolite became a town of 1,200 people in two weeks and reached a population of 2,500 by June 1905. By then it had 50 saloons, 35 gambling tables, cribs for prostitution, 19 lodging houses, 16 restaurants, half a dozen barbers, a public bath house, and a weekly newspaper, the Rhyolite Herald. Four daily stage coaches connected Goldfield, to the north, and Rhyolite. Rival auto lines ferried people between Rhyolite and Goldfield and the rail station in Las Vegas in Pope-Toledos, White Steamers, and other touring cars.
Ernest Alexander "Bob" Montgomery, the original owner, and his partners sold the mine to industrialist Charles M. Schwab in February 1906. Schwab expanded the operation on a grand scale, hiring workers, opening new tunnels and drifts, and building a huge mill to process the ore. He had water piped in, paid to have an electric line run from a hydroelectric plant at the foot of the Sierra Nevada mountain range to Rhyolite, and contracted with the Las Vegas and Tonopah Railroad to run a spur line to the mine. Three railroads eventually served Rhyolite. The first was the Las Vegas and Tonopah Railroad (LVTR), which began running regular trains to the city on December 14, 1906. Its depot, built in California-mission style, cost about $130,000, equivalent to about $ in . About a half-year later, the Bullfrog Goldfield Railroad (BGR) began regular service from the north. By December 1907, the Tonopah and Tidewater Railroad (TTR) began service to Rhyolite on tracks leased from the BGR. The TTR was built to reach the borax-bearing colemanite beds in Death Valley as well as the gold fields.
By 1907, about 4,000 people lived in Rhyolite, according to Richard E. Lingenfelter in Death Valley & the Amargosa: A Land of Illusion. Russell R. Elliott cites an estimated population of 5,000 in 1907–08 in Nevada's Twentieth-Century Mining Boom, noting that "accurate population figures during the boom are impossible to obtain". Alan H. Patera in Rhyolite: The Boom Years states that published estimates of the peak population have been "as high as 6,000 or 8,000, but the town itself never claimed more than 3,500 through its newspapers". The newspapers estimated that 6,000 people lived in the Bullfrog mining district, which included the towns of Rhyolite, Bullfrog, Gold Center, and Beatty as well as camps at the major mines.
Rhyolite in 1907 had concrete sidewalks, electric lights, water mains, telephone and telegraph lines, daily and weekly newspapers, a monthly magazine, police and fire departments, a hospital, school, train station and railway depot, at least three banks, a stock exchange, an opera house, a public swimming pool and two formal church buildings. Most prominent was the three-story John S. Cook and Co. Bank on Golden Street. Finished in 1908, it cost more than $90,000, equivalent to $ in . Much of the cost went for Italian marble stairs, imported stained-glass windows, and other luxuries. The building housed brokerage offices, and a post office, as well as the bank. Other large buildings included the train depot, the three-story Overbury Bank building, and the two-story eight-room school. A miner named Tom T. Kelly built the Bottle House in February 1906 from 50,000 discarded beer and liquor bottles. Another building housed the Rhyolite Mining Stock Exchange, which opened on March 25, 1907, with 125 members, including brokers from New York, Philadelphia, Los Angeles, and other large cities. The small, modestly equipped storefront listed shares of 74 Bullfrog companies and a similar number of companies in nearby mining districts. Sixty thousand shares changed hands on the first day, and by the end of the second week the number had topped 750,000.
Bust
Although the mine produced more than $1 million (equivalent to about $24 million in 2009) in bullion in its first three years, its shares declined from $23 a share (in historical dollars) to less than $3. In February 1908, a committee of minority stockholders, suspecting that the mine was overvalued, hired a British mining engineer to conduct an inspection. The engineer's report was unfavorable, and news of this caused a sudden further decline in share value from $3 to 75 cents. Schwab expressed disappointment when he learned that "the wonderful high-grade [ore] that had brought [the mine] fame was confined to only a few stringers and that what he had actually bought was a large low-grade mine."
Although the mine was still profitable, by 1909 no new ore was being discovered, and the value of the remaining ore steadily decreased. In 1910, the mine operated at a loss for most of the year, and on March 14, 1911, it was closed. By then, the stock, which had fallen to 10 cents a share, slid to 4 cents and was dropped from the exchanges.
Rhyolite began to decline before the final closing of the mine. At roughly the same time that the Bullfrog mines were running out of high-grade ore, the 1906 San Francisco earthquake diverted capital to California while interrupting rail service, and the financial panic of 1907 restricted funding for mine development. As mines in the district reduced production or closed, unemployed miners left Rhyolite to seek work elsewhere, businesses failed, and by 1910, the census reported only 675 residents.
All three banks in the town closed by March 1910. The newspapers, including the Rhyolite Herald, the last to go, all shut down by June 1912. The post office closed in November 1913; the last train left Rhyolite Station in July 1914, and the Nevada-California Power Company turned off the electricity and removed its lines in 1916. Within a year the town was "all but abandoned", and the 1920 census reported a population of only 14. A 1922 motor tour by the Los Angeles Times found only one remaining resident, a 92-year-old man who died in 1924.
Much of Rhyolite's remaining infrastructure became a source of building materials for other towns and mining camps. Whole buildings were moved to Beatty. The Miners' Union Hall in Rhyolite became the Old Town Hall in Beatty, and two-room cabins were moved and reassembled as multi-room homes. Parts of many buildings were used to build a Beatty school.
Ghost town
The Rhyolite historic townsite, maintained by the Bureau of Land Management, is "one of the most photographed ghost towns in the West". Ruins include the railroad depot and other buildings, and the Bottle House, which the Famous Players Lasky Corporation, the parent of Paramount Pictures, restored in 1925 for the filming of a silent movie, The Air Mail. The ruins of the Cook Bank building were used in the 1964 film The Reward and again in 2004 for the filming of The Island. Orion Pictures used Rhyolite for its 1988 science-fiction movie Cherry 2000 depicting the collapse of American society. Six-String Samurai (1998) was another movie using Rhyolite as a setting. The Rhyolite-Bullfrog cemetery, with many wooden headboards, is slightly south of Rhyolite.
Tourism flourished in and near Death Valley in the 1920s, and souvenir sellers set up tables in Rhyolite to sell rocks and bottles on weekends. In the 1930s, Revert Mercantile of Beatty acquired a Union Oil distributorship, built a gas station in Beatty, and supplied pumps in other locations, including Rhyolite. The Rhyolite service station consisted of an old caboose, a storage tank, and a pump, managed by a local owner. In 1937, the train depot became a casino and bar called the Rhyolite Ghost Casino, which was later turned into a small museum and curio shop that remained open into the 1970s. In 1984, Belgian artist Albert Szukalski created his sculpture The Last Supper on Golden Street near the Rhyolite railway depot. The art became part of the Goldwell Open Air Museum, an outdoor sculpture park near the southern entrance to the ghost town.
Barrick Bullfrog Mine
Mining in and around Rhyolite after 1920 consisted mainly of working old tailings until a new mine opened in 1988 on the south side of Ladd Mountain. A company known as Bond Gold built an open-pit mine and mill at the site, about south of Rhyolite along State Route 374. LAC Minerals acquired the mine from Bond in 1989 and established an underground mine there in 1991 after a new body of ore called the North Extension was discovered. Barrick Gold acquired LAC Minerals in 1994 and continued to extract and process ore at what became known as the Barrick Bullfrog Mine until the end of 1998. The mine used a chemical extraction process known as vat leaching involving the use of a weak cyanide solution. The process, like heap leaching, makes it possible to process ore profitably that otherwise would not qualify as mill-grade. Over its entire life, the mine processed about of ore and produced about of gold.
See also
List of ghost towns in Nevada
References
Further reading
Elliott, Russell R. (1988). Nevada's Twentieth-Century Mining Boom: Tonopah, Goldfield, Ely. Reno: University of Nevada Press. .
Hall, Shawn. (1999). Preserving the Glory Days: Ghost Towns and Mining Camps of Nye County, Nevada. Reno: University of Nevada Press. .
Hustrulid, William A., and Bullock, Richard L., eds. (2001) Underground Mining Methods: Engineering Fundamentals and International Case Studies. Littleton, Colorado: Society for Mining, Metallurgy, and Exploration (SME). .
Lingenfelter, Richard E. (1986). Death Valley & the Amargosa: A Land of Illusion. Berkeley and Los Angeles, California: University of California Press. .
McCoy, Suzy. (2004). Rebecca's Walk Through Time: A Rhyolite Story. Lake Grove, Oregon: Western Places. .
McCracken, Robert D. (1992). A History of Beatty, Nevada. Tonopah, Nevada: Nye County Press. .
McCracken, Robert D. (1992). Beatty: Frontier Oasis. Tonopah, Nevada: Nye County Press. .
Patera, Alan H. (2001). Rhyolite: the Boom Years (Western Places #10, fourth printing). Lake Grove, Oregon: Western Places. .
Ransome, R.L. (1907). "Preliminary Account of Goldfield, Bullfrog and Other Mining Districts in Southern Nevada". Originally published as "United States Geological Survey Bulletin 303". Reprinted in Mines of Goldfield, Bullfrog and Other Southern Nevada Districts (1983). Las Vegas: Nevada Publications. .
External links
Beatty Museum and Historical Society
From the Ghost Town – Suzy McCoy
Rhyolite – Ghost Town Gallery
Rhyolite Ghost Town – National Park Service
Rhyolite video – Vimeo
1920s images of Rhyolite from the Death Valley Region Photographs Digital Collection – Utah State University
Ghost towns in Nye County, Nevada
Mining communities in Nevada
Amargosa Desert
Death Valley National Park
Tonopah and Tidewater Railroad
Populated places established in 1905
1905 establishments in Nevada
Ghost towns in Nevada
Bottle houses | Rhyolite, Nevada | [
"Engineering"
] | 4,324 | [
"Bottle houses",
"Architecture"
] |
1,043,263 | https://en.wikipedia.org/wiki/Excitotoxicity | In excitotoxicity, nerve cells suffer damage or death when the levels of otherwise necessary and safe neurotransmitters such as glutamate become pathologically high, resulting in excessive stimulation of receptors. For example, when glutamate receptors such as the NMDA receptor or AMPA receptor encounter excessive levels of the excitatory neurotransmitter, glutamate, significant neuronal damage might ensue. Excess glutamate allows high levels of calcium ions (Ca2+) to enter the cell. Ca2+ influx into cells activates a number of enzymes, including phospholipases, endonucleases, and proteases such as calpain. These enzymes go on to damage cell structures such as components of the cytoskeleton, membrane, and DNA. In evolved, complex adaptive systems such as biological life, mechanisms are rarely, if ever, simplistically direct. For example, NMDA, in subtoxic amounts, can block glutamate toxicity and thereby induce neuronal survival.
Excitotoxicity may be involved in cancers, spinal cord injury, stroke, traumatic brain injury, hearing loss (through noise overexposure or ototoxicity), and in neurodegenerative diseases of the central nervous system such as multiple sclerosis, Alzheimer's disease, amyotrophic lateral sclerosis (ALS), Parkinson's disease, alcoholism, alcohol withdrawal or hyperammonemia and especially over-rapid benzodiazepine withdrawal, and also Huntington's disease. Another common condition that causes excessive glutamate concentrations around neurons is hypoglycemia, since blood sugar is the primary means by which glutamate is removed from inter-synaptic spaces at NMDA and AMPA receptor sites. Persons in excitotoxic shock must never fall into hypoglycemia. Patients should be given a 5% glucose (dextrose) IV drip during excitotoxic shock to avoid a dangerous buildup of glutamate around NMDA and AMPA neurons. When a 5% glucose (dextrose) IV drip is not available, high levels of fructose are given orally. Treatment is administered during the acute stages of excitotoxic shock along with glutamate antagonists. Dehydration should be avoided, as this also contributes to the concentration of glutamate in the inter-synaptic cleft, and "status epilepticus can also be triggered by a build up of glutamate around inter-synaptic neurons."
History
The harmful effects of glutamate on the central nervous system were first observed in 1954 by T. Hayashi, a Japanese scientist who stated that direct application of glutamate caused seizure activity, though this report went unnoticed for several years. D. R. Lucas and J. P. Newhouse, after noting that "single doses of [20–30 grams of sodium glutamate in humans] have ... been administered intravenously without permanent ill-effects", observed in 1957 that a subcutaneous dose described as "a little less than lethal" destroyed the neurons in the inner layers of the retina in newborn mice. In 1969, John Olney discovered that the phenomenon was not restricted to the retina, but occurred throughout the brain, and coined the term excitotoxicity. He also assessed that cell death was restricted to postsynaptic neurons, that glutamate agonists were neurotoxic in proportion to their efficiency in activating glutamate receptors, and that glutamate antagonists could stop the neurotoxicity.
In 2002, Hilmar Bading and co-workers found that excitotoxicity is caused by the activation of NMDA receptors located outside synaptic contacts. The molecular basis for toxic extrasynaptic NMDA receptor signaling was uncovered in 2020, when Hilmar Bading and co-workers described a death signaling complex consisting of the extrasynaptic NMDA receptor and TRPM4. Disruption of this complex using NMDAR/TRPM4 interaction interface inhibitors (also known as "interface inhibitors") renders the extrasynaptic NMDA receptor non-toxic.
Pathophysiology
Excitotoxicity can occur from substances produced within the body (endogenous excitotoxins). Glutamate is a prime example of an excitotoxin in the brain, and it is also the major excitatory neurotransmitter in the central nervous system of mammals. During normal conditions, glutamate concentration can increase up to 1 mM in the synaptic cleft, and this is rapidly decreased within milliseconds. When the glutamate concentration around the synaptic cleft cannot be decreased or reaches higher levels, the neuron kills itself by a process called apoptosis.
This pathologic phenomenon can also occur after brain injury and spinal cord injury. Within minutes after spinal cord injury, damaged neural cells within the lesion site spill glutamate into the extracellular space where glutamate can stimulate presynaptic glutamate receptors to enhance the release of additional glutamate. Brain trauma or stroke can cause ischemia, in which blood flow is reduced to inadequate levels. Ischemia is followed by accumulation of glutamate and aspartate in the extracellular fluid, causing cell death, which is aggravated by lack of oxygen and glucose. The biochemical cascade resulting from ischemia and involving excitotoxicity is called the ischemic cascade. Because of the events resulting from ischemia and glutamate receptor activation, a deep chemical coma may be induced in patients with brain injury to reduce the metabolic rate of the brain (its need for oxygen and glucose) and save energy to be used to remove glutamate actively. (The main aim in induced comas is to reduce the intracranial pressure, not brain metabolism).
Increased extracellular glutamate levels leads to the activation of Ca2+ permeable NMDA receptors on myelin sheaths and oligodendrocytes, leaving oligodendrocytes susceptible to Ca2+ influxes and subsequent excitotoxicity. One of the damaging results of excess calcium in the cytosol is initiating apoptosis through cleaved caspase processing. Another damaging result of excess calcium in the cytosol is the opening of the mitochondrial permeability transition pore, a pore in the membranes of mitochondria that opens when the organelles absorb too much calcium. Opening of the pore may cause mitochondria to swell and release reactive oxygen species and other proteins that can lead to apoptosis. The pore can also cause mitochondria to release more calcium. In addition, production of adenosine triphosphate (ATP) may be stopped, and ATP synthase may in fact begin hydrolysing ATP instead of producing it, which is suggested to be involved in depression.
Inadequate ATP production resulting from brain trauma can eliminate electrochemical gradients of certain ions. Glutamate transporters require the maintenance of these ion gradients to remove glutamate from the extracellular space. The loss of ion gradients results in not only the halting of glutamate uptake, but also in the reversal of the transporters. The Na+-glutamate transporters on neurons and astrocytes can reverse their glutamate transport and start secreting glutamate at a concentration capable of inducing excitotoxicity. This results in a buildup of glutamate and further damaging activation of glutamate receptors.
On the molecular level, calcium influx is not the only factor responsible for apoptosis induced by excitotoxicity. Recently, it has been noted that extrasynaptic NMDA receptor activation, triggered by either glutamate exposure or hypoxic/ischemic conditions, activates a CREB (cAMP response element binding) protein shut-off, which in turn causes loss of mitochondrial membrane potential and apoptosis. On the other hand, activation of synaptic NMDA receptors activates only the CREB pathway, which activates BDNF (brain-derived neurotrophic factor) rather than apoptosis.
Exogenous excitotoxins
Exogenous excitotoxins refer to neurotoxins that also act at postsynaptic cells but are not normally found in the body. These toxins may enter the body of an organism from the environment through wounds, food intake, aerial dispersion etc. Common excitotoxins include glutamate analogs that mimic the action of glutamate at glutamate receptors, including AMPA and NMDA receptors.
BMAA
The L-alanine derivative β-methylamino-L-alanine (BMAA) has long been identified as a neurotoxin which was first associated with the amyotrophic lateral sclerosis/parkinsonism–dementia complex (Lytico-bodig disease) in the Chamorro people of Guam. The widespread occurrence of BMAA can be attributed to cyanobacteria which produce BMAA as a result of complex reactions under nitrogen stress. Following research, excitotoxicity appears to be the likely mode of action for BMAA which acts as a glutamate agonist, activating AMPA and NMDA receptors and causing damage to cells even at relatively low concentrations of 10 μM. The subsequent uncontrolled influx of Ca2+ then leads to the pathophysiology described above. Further evidence of the role of BMAA as an excitotoxin is rooted in the ability of NMDA antagonists like MK801 to block the action of BMAA. More recently, evidence has been found that BMAA is misincorporated in place of L-serine in human proteins. A considerable portion of the research relating to the toxicity of BMAA has been conducted on rodents. A study published in 2016 with vervets (Chlorocebus sabaeus) in St. Kitts, which are homozygous for the apoE4 (APOE-ε4) allele (a condition which in humans is a risk factor for Alzheimer's disease), found that vervets orally administered BMAA developed hallmark histopathology features of Alzheimer's Disease including amyloid beta plaques and neurofibrillary tangle accumulation. Vervets in the trial fed smaller doses of BMAA were found to have correlative decreases in these pathology features. This study demonstrates that BMAA, an environmental toxin, can trigger neurodegenerative disease as a result of a gene/environment interaction. While BMAA has been detected in brain tissue of deceased ALS/PDC patients, further insight is required to trace neurodegenerative pathology in humans to BMAA.
See also
Glutamatergic system
Glutamic acid (flavor)
NMDA receptor antagonist
Dihydropyridine
References
Further reading
Invited Review
Food safety
Neurochemistry
Neurotrauma
Toxins | Excitotoxicity | [
"Chemistry",
"Biology",
"Environmental_science"
] | 2,357 | [
"Biochemistry",
"Toxins",
"Neurochemistry",
"Toxicology"
] |
1,043,443 | https://en.wikipedia.org/wiki/Flood%20control%20%28communications%29 | In communications, flood control is a feature of many communication protocols designed to prevent overwhelming of a destination receiver. Such controls can be implemented either in software or in hardware, and will often request that the message be resent after the receiver has finished processing.
Internet forums often use a flood control mechanism to prevent too many messages from being posted at once, either to prevent spamming or denial-of-service attacks. Internet Relay Chat servers will often quit users performing IRC floods with an "Excess Flood" message.
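A minimal sketch of one common flood-control policy is a token bucket, which permits short bursts while capping the sustained message rate. The class and parameter names below are illustrative assumptions, not drawn from any particular protocol or server implementation.

```python
import time

class TokenBucket:
    """Allow bursts of up to `capacity` messages, refilled at `rate` tokens/second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # flood detected: drop, delay, or disconnect the sender

# One bucket per client: bursts of at most 5 messages, sustained 1 message/second.
bucket = TokenBucket(rate=1.0, capacity=5)
for i in range(8):
    print(i, "accepted" if bucket.allow() else "rejected (flood control)")
```

A server that answers a violation with an "Excess Flood" message and disconnects the client is simply taking the harshest of the three options noted in the final comment.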
References
Data transmission
Internet terminology
Internet forum terminology
IRC | Flood control (communications) | [
"Technology"
] | 114 | [
"Computing terminology",
"Internet terminology"
] |
1,043,487 | https://en.wikipedia.org/wiki/EF%20Eridani | EF Eridani (abbreviated EF Eri, sometimes incorrectly referred to as EF Eridanus) is a variable star of the type known as polars, AM Herculis stars, or magnetic cataclysmic variable stars. Historically it has varied between apparent magnitudes 14.5 and 17.3, although since 1995 it has generally remained at the lower limit. The star system consists of a white dwarf with a substellar-mass former star in orbit.
EF Eridani B
The substellar mass in orbit around the white dwarf is a star that lost all of its gas to the white dwarf. What remains is an object with a mass of 0.05 solar masses, or about 53 Jupiter masses, which is too small to continue fusion, and does not have the composition of a super-planet, brown dwarf, or white dwarf. There is no category for such a stellar remnant.
It is theorized that 500 million years ago, the white dwarf started to cannibalize its partner, when they were separated by 7 million km. As it lost mass, the regular star spiraled inward; today the two are separated by a mere 700,000 km. The orbit is expected to continue to shrink due to gravitational radiation.
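For context, the gravitational-radiation-driven shrinkage mentioned above is usually quantified with the standard result for a circular binary (due to Peters, 1964); the equation is a textbook addition for illustration, not taken from the article itself:

$$\frac{da}{dt} = -\frac{64}{5}\,\frac{G^{3}\,m_{1}m_{2}\,(m_{1}+m_{2})}{c^{5}\,a^{3}},$$

where $a$ is the orbital separation, $m_1$ and $m_2$ are the component masses, $G$ is the gravitational constant, and $c$ is the speed of light. The strong $a^{-3}$ dependence is why a pair separated by only 700,000 km continues to spiral together.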
See also
AM Herculis
Cataclysmic variable star
Polar (cataclysmic variable)
Variable stars
Stellar remnants
PSR J1719-1438 b, a planetary-mass former star that was eroded by its binary star partner, PSR J1719-1438
PSR J1544+4937 b
PSR B1957+20 b
References
External links
(CNN) Faded star defies description
AAVSO charts for EF Eridani
Polars (cataclysmic variable stars)
Eridanus (constellation)
Eridani, EF
Eclipsing binaries | EF Eridani | [
"Astronomy"
] | 391 | [
"Eridanus (constellation)",
"Constellations"
] |
1,043,627 | https://en.wikipedia.org/wiki/Biomedicine | Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered via formally trained doctors, nurses, and other such licensed practitioners.
Biomedicine also relates to many other categories in health- and biology-related fields. It has been the dominant system of medicine in the Western world for more than a century.
It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern life sciences as applied to medicine.
Overview
Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of HIV, from the understanding of molecular interactions to the study of carcinogenesis, and from single-nucleotide polymorphisms (SNPs) to gene therapy.
Biomedicine is based on molecular biology and combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy.
Biomedicine involves the study of (patho-) physiological processes with methods from biology and physiology. Approaches range from understanding molecular interactions to the study of the consequences at the in vivo level. These processes are studied with the particular point of view of devising new strategies for diagnosis and therapy.
Depending on the severity of the disease, biomedicine pinpoints a problem within a patient and corrects it through medical intervention; the emphasis is on curing disease rather than on improving overall health.
In social sciences biomedicine is described somewhat differently. Through an anthropological lens biomedicine extends beyond the realm of biology and scientific facts; it is a socio-cultural system which collectively represents reality. While biomedicine is traditionally thought to have no bias due to the evidence-based practices, Gaines & Davis-Floyd (2004) highlight that biomedicine itself has a cultural basis and this is because biomedicine reflects the norms and values of its creators.
Molecular biology
Molecular biology is the study of the synthesis and regulation of a cell's DNA, RNA, and proteins. It encompasses a number of techniques for manipulating and analysing DNA, including the polymerase chain reaction, gel electrophoresis, and macromolecule blotting.
The polymerase chain reaction is performed by placing a mixture of the desired DNA, DNA polymerase, primers, and nucleotide bases into a machine. The machine cycles through higher and lower temperatures: heating breaks the hydrogen bonds holding the two DNA strands together, and cooling then allows primers and nucleotide bases to be added onto each of the separated single-stranded templates.
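Because each thermal cycle can at most double the number of templates, the yield of an idealized PCR run is simple exponential arithmetic. The following sketch (the function name and efficiency figures are illustrative assumptions) shows why roughly 30 cycles suffice to amplify a handful of molecules into billions of copies.

```python
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Idealized PCR yield: each cycle multiplies the templates by (1 + efficiency)."""
    return initial_copies * (1.0 + efficiency) ** cycles

print(pcr_copies(10, 30))        # perfect doubling: 10 * 2**30 ≈ 1.1e10 copies
print(pcr_copies(10, 30, 0.9))   # a more realistic 90% per-cycle efficiency ≈ 2.3e9
```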
Gel electrophoresis is a technique used to compare the DNA in two unknown samples. The process begins by preparing an agarose gel; this jelly-like sheet has wells into which DNA is loaded. An electric current is applied so that the DNA, which is negatively charged due to its phosphate groups, migrates toward the positive electrode. Different DNA fragments move at different speeds because smaller pieces travel through the gel more easily than larger ones. Thus, if two DNA samples show a similar band pattern on the gel, one can tell that these DNA samples match.
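In practice, fragment size is read off a gel by comparison with a ladder of known sizes, since migration distance is roughly linear in the logarithm of fragment length over a useful range. The ladder values and distances in this sketch are made-up numbers purely for illustration.

```python
import math

# Hypothetical ladder: (fragment size in base pairs, migration distance in mm).
ladder = [(2000, 10.0), (1000, 18.0), (500, 26.0), (250, 34.0)]

# Least-squares fit of: distance = a + b * log10(size).
xs = [math.log10(size) for size, _ in ladder]
ys = [dist for _, dist in ladder]
n = len(ladder)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

def estimate_size(distance_mm: float) -> float:
    """Invert the calibration to estimate fragment size from migration distance."""
    return 10 ** ((distance_mm - a) / b)

print(round(estimate_size(22.0)))  # an unknown band at 22 mm comes out near 700 bp
```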
Macromolecule blotting is a process performed after gel electrophoresis. An alkaline solution is prepared in a container. A sponge is placed into the solution and an agarose gel is placed on top of the sponge. Next, nitrocellulose paper is placed on top of the agarose gel, and paper towels are added on top of the nitrocellulose paper to apply pressure. The alkaline solution is drawn upwards towards the paper towels. During this process, the DNA denatures in the alkaline solution and is carried upwards onto the nitrocellulose paper. The paper is then placed into a plastic bag filled with a solution of labelled DNA fragments, called the probe, corresponding to the sequence of interest. The probes anneal to complementary DNA among the bands already transferred onto the nitrocellulose. Afterwards, unbound probes are washed off, so the only ones remaining are those annealed to complementary DNA on the paper. Next, the paper is placed against X-ray film. The radioactivity of the probes creates black bands on the film, called an autoradiograph; as a result, only DNA sequences complementary to the probe appear on the film. This allows comparison of DNA sequences across multiple samples, and the overall process results in a precise reading of similarities and differences between them.
Biochemistry
Biochemistry is the science of the chemical processes which take place within living organisms. Living organisms need essential elements to survive, among which are carbon, hydrogen, nitrogen, oxygen, calcium, and phosphorus. These elements make up the four macromolecules that living organisms need to survive: carbohydrates, lipids, proteins, and nucleic acids.
Carbohydrates, made up of carbon, hydrogen, and oxygen, are energy-storing molecules. One of the simplest carbohydrates is glucose, C6H12O6, which is used in cellular respiration to produce ATP (adenosine triphosphate), the molecule that supplies cells with energy.
Proteins are chains of amino acids that serve, among other things, to contract skeletal muscle and to act as catalysts, transport molecules, and storage molecules. Protein catalysts can facilitate biochemical processes by lowering the activation energy of a reaction. Hemoglobin, also a protein, carries oxygen to an organism's cells.
Lipids, also known as fats, are small molecules derived from biochemical subunits of either the ketoacyl or isoprene groups, creating eight distinct categories: fatty acids, glycerolipids, glycerophospholipids, sphingolipids, saccharolipids, and polyketides (derived from condensation of ketoacyl subunits); and sterol lipids and prenol lipids (derived from condensation of isoprene subunits). Their primary purpose is to store energy over the long term; due to their structure, lipids provide more than twice the amount of energy per gram that carbohydrates do. Lipids can also be used as insulation, in hormone production to maintain a healthy hormonal balance, and to provide structure to cell membranes.
Nucleic acids include DNA, the main genetic information-storing molecule, most often found in the cell nucleus, which controls the metabolic processes of the cell. DNA consists of two complementary antiparallel strands made up of varying patterns of nucleotides. RNA is a single-stranded nucleic acid transcribed from DNA; it is used in translation, the process of making proteins from RNA sequences.
See also
References
External links
Branches of biology
Veterinary medicine
Western culture | Biomedicine | [
"Biology"
] | 1,588 | [
"nan",
"Biomedicine"
] |
1,043,647 | https://en.wikipedia.org/wiki/Burst%20charge | In fireworks, a burst charge (usually black powder) is a pyrotechnic mixture placed in a shell which is ignited when the shell reaches the desired height in order to create an explosion and spread the stars. Burst charge compositions are usually coated onto rice hulls or other low-density fillers, which increases the rate of combustion.
In artillery and Naval artillery the burst charge or bursting charge is ignited by a primer at the base of the shell.
Common burst charges
Black powder
Flash powder
H3
Whistle mix
References
Pyrotechnic compositions | Burst charge | [
"Chemistry"
] | 113 | [
"Pyrotechnic compositions"
] |
1,043,712 | https://en.wikipedia.org/wiki/Capgemini%20Engineering | Capgemini Engineering (previously known as Altran Technologies, SA) is a global innovation and engineering consulting firm founded in 1982 in France by Alexis Kniazeff and Hubert Martigny.
Altran Technologies operated primarily in high technology and innovation industries, which accounted for nearly 75% of its turnover. Administrative and information consultancy accounted for 20% of its turnover with strategy and management consulting making up the rest. The firm is active in most engineering domains, particularly electronics and IT technology.
In 2018, Altran generated €2.916 billion in revenues and employed over 46,693 people around the world. Altran was acquired by Capgemini in 2019 and was renamed "Capgemini Engineering" on 8 April 2021, following its merger with Capgemini's Engineering and R&D services.
History
1980s
In 1982, Alexis Kniazeff and Hubert Martigny, ex-consultants of Peat Marwick (today known as KPMG), founded CGS Informatique, which would later become Altran. By 1985, the firm counted a staff of 50 engineers.
The company expanded through small business units that would later generally range from 10 to 200 employees. Business units operated semi-independently and were given the autonomy to choose their own growth strategy and investment programs while still getting assistance from central management. This allowed business units to give each other support and share ideas. Managers’ compensation was decided based on the units’ performance.
One of Altran's first major projects was developing the on-board communications network in 1987 for France's high-speed TGV trains that allowed French lines to be connected to other European rail lines.
In 1987, the company was listed on the Secondary Market of the Paris Stock Exchange. By 1989, Altran's sales had neared the equivalent of 48 million euros. That same year, Altran bought Ségur Informatique, an aeronautics simulation and modeling company. The number of the company's employees grew to approximately 1,000 by 1990, as well as its range of expertise, moving into the transportation, telecommunications, and energy sectors, with a strong information technology component.
1990s
In the early 1990s the company adopted a new business model. While much of the company's work during the previous decade had been performed in-house, at the beginning of the 1990s the company developed a new operational concept, that of a temp agency for the high-technology sector. The firm's staff started to work directly with its clients' projects, adding their specialized expertise to projects. By the end of the decade, the company had more than 50 subsidiaries in France, and had taken the lead of that market's technology consulting sector. The company was helped by the long-lasting recession affecting France and much of Europe at the beginning of the decade, as companies began outsourcing parts of their research and development operations. Altran was also expanding by acquisition, buying up a number of similar consultancies in France, such as the 1992 acquisition of GERPI, based in Rennes. By the end of that year, Altran's revenues had reached 76.5 million euros.
With the elimination of border controls within the European Community in 1992, the company's clients began operations in other European countries. At first Altran turned to foreign partnerships in order to accommodate its clients. Yet this approach quickly proved unsatisfactory, and Altran put into place an aggressive acquisition plan in order to establish its own foreign operations.
Altran targeted the Benelux countries, the first to lower their trade barriers, acquiring a Belgian company in 1992. By the end of the decade, the firm's network in these countries' markets was composed of 12 companies and 1,000 consultants. When an acquisition took place, Altran kept on existing management and in general the acquired firms retained their names. The acquisition policy was based on paying an initial fee for an acquisition, then on subsequent annual payments based on the acquired unit's performance.
In 1992, Altran created Altran Conseil to work in the automobile equipment, nuclear and consumer electronic industries.
Altran's operations in Spain began with the acquisition in 1993 of SDB España, a leading telecommunications consultant in that country, and later grew with the acquisitions of STE Consulting, Norma Consulting, Insert Sistemas, Strategy Consultors, Inad, Siev and Consultrans. Spain remained one of the company's top three markets into the new century, becoming a group of nine companies and more than 2,000 consultants operating under the Altran brand.
By 1995, Altran's sales had topped 155 million euros, and its total number of employees had grown to nearly 2,400, mostly engineers. The company recognized that the majority of engineers lacked a background in management, so it launched a training program called IMA (Institut pour le management Altran), capable of training 200 candidates per year.
In 1995 the company invested in the United Kingdom and acquired High Integrity Systems, a consulting firm focused on assisting companies that were transitioning into new-generation computer and network systems, and DCE Consultants, which operated from offices in Oxford and Manchester.
In 1997, Altran also acquired Praxis Critical Systems, founded in Bath in 1983 to provide software and safety-engineering services. In order to supplement the activities of its acquisitions, the company also opened new subsidiary offices, such as Altran Technologies UK, a multi-disciplinary and cross-industry engineering consultancy.
In the second half of the 1990s the company was acquiring an average of 15 companies per year. Italy became a target for growth in 1996, when Altran established subsidiary Altran Italy, before making its first acquisition in that country in 1997.
In 1998, Altran added four new Italian acquisitions, EKAR, RSI Sistemi, CCS and Pool. In 1999, the company added an office in Turin as well as two new companies, ASP and O&I.
Germany was also a primary target for Altran during this period, starting with the 1997 establishment of Altran Technologies GmbH and the acquisition of Europspace Technische Entwicklungen, a company that had been formed in 1993 and specialized in aeronautics. In 1998, the company added consulting group Berata and, the following year, Askon Consulting joined the group, which then expanded with a second component, Askon Beratung.
Other European countries joined the Altran network in the late 1990s as well, including Portugal and Luxembourg in 1998 and Austria in 1999. In 1998, Altran deployed a telecommunications network in Portugal. By the end of 1999, the company's sales had climbed to EUR 614 million; significantly, international sales already accounted for more than one-third of the company's total revenues.
Similar progress was made in Switzerland, a market Altran entered in 1997 with the purchase of D1B2. The Berate Germany purchase brought Altran that company's Swiss office as well in 1998; that same year, Altran launched its own Swiss startup, Altran Technologies Switzerland. In 1999, the company added three new Swiss companies, , Innovatica, and Cerri.
Significant projects during the decade included the design of the Météor autopilot system for the first automated subway line for the Paris Metro (Line 14) and the attitude control system for the European Space Agency's Ariane 5 rocket.
Early 21st century
In 2000, the company's Italian branch expanded to 10 subsidiaries with the opening of offices in Lombardy and Lazio and the acquisition of CEDATI. Also in 2000, Altran's presence in Switzerland grew with two new subsidiaries (Infolearn and De Simone & Osswald). In Germany, Altran acquired I&K Beratung. The United States became a primary target for the company's expansion with the acquisition of a company that was renamed Altran Corporation.
Altran began building its operations in South America as well, especially in Brazil. By the end of 2001, Altran's revenues had jumped to more than 1.2 billion euros, while its ranks of consultants now topped 15,000.
Altran become involved in a couple of new PR initiatives at the beginning of the decade, including a partnership with the Renault F1 racing team and a commitment to the Solar Impulse project with the goal of circumnavigating the Earth powered by only solar power.
In 2002, Askon Beratung was spun off from Askon Consulting as a separate, independently operating company within Altran, and the company's Swiss network added a new component with the purchase of Sigma. That year, Altran also made a full-scale entry into the United States. After providing $56 million to back a management buyout of the European, Asian, and Latin American operations of bankrupt Arthur D. Little (the US-based consulting firm founded in 1886), Altran itself acquired the Arthur D. Little brand and trademark. This acquisition was seen as an important step in achieving the company's next growth target. Sales grew to 2 billion euros by 2003, and the company had more than 40,000 engineers by 2005.
In 2004, Altran established operations in Asia and created Altran Pr[i]me, a consulting outfit specialized in large-scale innovation projects.
On 29 December 2006, all subsidiaries based in Ile de France were merged under the name of Altran Technologies SA, a technology consultant, which was organized into four business lines (as well as brand names):
Altran TEM: Telecommunications, Electronics and Multimedia.
Altran AIT: Automobiles, Infrastructure and Transportation.
Altran Eilis: Energy, Industry and Life Science.
Altran ASD: Aeronautics, Space and Defence.
In 2009, Altran launched its Altran Research program. The program is centered around three main themes: designing tools, research and proof-of-concepts, and research on how to organize and improve practices.
In 2012, as part of its Performance Plan 2012, PSA Peugeot Citroën chose Altran as its strategic partner.
In early 2013, Altran group finalised the acquisition of 100% of IndustrieHansa, an engineering and consulting group based in Germany, placing it among the top five in the market of Technical Consultancy, Innovation, Research and Development.
Altran continued to acquire innovation consultancies in other countries as part of its expansion strategy. In February 2015, it acquired Nspyre, a Dutch R&D and high-technology firm. In July 2015, it bought SiConTech, an Indian engineering company specializing in semiconductors.
Altran's revenues reached €1.945 billion in 2015. At that time, it had over 25,000 employees operating in over 20 countries.
In November 2015, Dominique Cerutti announced his five-year strategic plan, "Altran 2020. Ignition." The plan aimed for the firm to reach 3 billion euros in revenue in five years and a big increase in profitability.
In December 2015, Altran announced the acquisition of Tessella, in analytical and data science consulting.
In 2016, the company acquired two other American companies: Synapse, specializing in the development of innovative products, and Lohika, a software engineering firm. This transatlantic expansion is one of the principal approaches to development supported by Altran in the Ignition 2020 strategic plan.
Additionally, Altran announced in October 2016 the acquisition of two automobile industry companies: Swell, an engineering services and research and development firm based in the Czech Republic, and Benteler Engineering, a German firm specializing in design and engineering services. This transatlantic expansion is one of the principal approaches to development supported by Altran in the Ignition 2020 strategic plan. Dominique Cerutti is noted for establishing several strategic partnerships, notably with Divergent, an American holding that integrates 3D printing into the automobile production process, and the Chinese digital mapping holding EMG.
On 22 December 2016, Altran acquired Pricol Technologies, an India-based engineering firm.
In July and September 2017, Altran finalized two further acquisitions: Information Risk Management (IRM) and GlobalEdge. The acquisition of IRM enabled Altran to enhance its presence and offerings in the domain of cyber security. The acquisition of GlobalEdge, an Indian software product engineering firm, was aimed at helping Altran develop its presence in India as well as in the US, where GlobalEdge has an office in California.
In November 2017, the company also acquired Aricent, a global digital design and engineering company headquartered in Santa Clara, California. The $2.0 billion transaction enabled the company to become the global leader in engineering and R&D services, completing its "Altran 2020. Ignition" strategic plan as early as 2018. The acquisition was completed on 22 March 2018, bringing the overall turnover of the new structure close to €3 billion.
On 28 June 2018, Altran announced the plan "The High Road, Altran 2022". This plan aimed for a 14.5% margin and a 4 billion euros turnover in 2022 by betting on technological breakthroughs.
Takeover by Capgemini
On 1 April 2020, Capgemini's friendly takeover bid for Altran was finalized. Capgemini reached the squeeze-out threshold of 90% of Altran's capital, which was delisted from stock markets on 15 April 2020.
Organization and activities
The company covers the entire project life-cycle, from the planning stages (technological monitoring, technical feasibility studies, strategy planning, etc.) to final realization (design, implementation, and testing).
Worldwide presence
Altran is headquartered on the avenue Charles de Gaulle in Neuilly-sur-Seine, France. The group is present in Belgium, Brazil, Canada, China, Colombia, Germany, Spain, Ukraine, France, Italy, India, Luxembourg, Malaysia, Mexico, Tunisia, Morocco, the Netherlands, Norway, Austria, Portugal, Romania, Sweden, Switzerland, the Middle East, the United Kingdom and the United States.
Geographical breakdown of revenues: France (43.3%), Europe (51.6%) and other (5.1%).
Research and Innovation
Altran Research
Altran Research, headed by Fabrice Mariaud, is Altran's internal R&D department in France. Scientific experts, each within their own domain of expertise, plan and implement research and innovation projects in collaboration with Altran Lab, academic partners, and industrial actors. Current research areas include e-health, space & aeronautics, energy, complex systems, transportation and mobility, industry, and the services of the future.
Altran Lab
Altran Lab is made up of an incubator, an innovation hub and Altran Pr[i]me, created in 2004 and focused on innovation management.
Altran Foundation for Innovation
The Altran Foundation for Innovation is an international scientific competition run by the company.
The competition's theme is selected each year addressing a major issue in society. The entries are judged by a panel containing scientific, political or academic experts. A prize of a year's technological support for the project is awarded to the winner and Altran's consultant teams will also follow up the awarded project.
Pro bono work
Altran France does pro bono work in areas relating to culture, civic engagement and innovation. In particular, Altran aids the Musée des Arts et Métiers of Paris, the Quai Branly Museum and the Arab World Institute with their digital strategy and management of their digital cultural assets.
Financial data
Altran first appeared on the Paris stock market on 20 October 1987.
Stock valued on the Paris stock market (Euronext)
Member of the CAC All Shares index
ISIN Code: FR0000034639
Number of outstanding shares as of 30 October 2015: 175,536,188
Market capitalization as of 10 April 2019: 2.5 billion euros
Primary stockholders as of 10 April 2019:
Altrafin Participations: 8.4%
Alexis Kniazeff: 1.4%
Hubert Martigny: 1.4%
Financial data table
See also
List of IT consulting firms
Frog Design Inc.
Tessella
Cambridge Consultants
References
Engineering
Consulting firms established in 1982
Engineering consulting firms of France
International engineering consulting firms
International information technology consulting firms
International management consulting firms
Companies based in Paris
French companies established in 1982
Technology companies established in 1982
Companies formerly listed on Euronext Paris
2020 mergers and acquisitions | Capgemini Engineering | [
"Engineering"
] | 3,294 | [
"Engineering consulting firms",
"International engineering consulting firms"
] |
1,043,769 | https://en.wikipedia.org/wiki/MAC%20times | MAC times are pieces of file system metadata which record when certain events pertaining to a computer file occurred most recently. The events are usually described as "modification" (the data in the file was modified), "access" (some part of the file was read), and "metadata change" (the file's permissions or ownership were modified), although the acronym is derived from the "mtime", "atime", and "ctime" structures maintained by Unix file systems. Windows file systems do not update ctime when a file's metadata is changed, instead using the field to record the time when a file was first created, known as "creation time" or "birth time". Some other systems also record birth times for files, but there is no standard name for this metadata; ZFS, for example, stores birth time in a field called "crtime". MAC times are commonly used in computer forensics. The name Mactime was originally coined by Dan Farmer, who wrote a tool with the same name.
Modification time (mtime)
A file's modification time describes when the content of the file most recently changed. Because most file systems do not compare data written to a file with what is already there, if a program overwrites part of a file with the same data as previously existed in that location, the modification time will be updated even though the contents did not technically change.
Access time (atime)
A file's access time identifies when the file was most recently opened for reading. Access times are usually updated even if only a small portion of a large file is examined. A running program can maintain a file as "open" for some time, so the time at which a file was opened may differ from the time data was most recently read from the file.
Because some computer configurations are much faster at reading data than at writing it, updating access times after every read operation can be very expensive. Some systems mitigate this cost by storing access times at a coarser granularity than other times; by rounding access times only to the nearest hour or day, a file which is read repeatedly in a short time frame will only need its access time updated once. In Windows, this is addressed by waiting for up to an hour to flush updated access dates to the disk.
Some systems also provide options to disable access time updating altogether. In Windows, starting with Vista, file access time updating is disabled by default.
Change time and creation time (ctime)
Unix and Windows file systems interpret 'ctime' differently:
Unix systems maintain the historical interpretation of ctime as being the time when certain file metadata, not its contents, were last changed, such as the file's permissions or owner (e.g. 'This file's metadata was changed on 05/05/02 12:15pm').
Windows systems use ctime to mean 'creation time' (also called 'birth time') (e.g. 'This file was created on 05/05/02 12:15pm').
This difference in usage can lead to incorrect presentation of time metadata when a file created on a Windows system is accessed on a Unix system and vice versa. Although not specified by POSIX, most modern Unix file systems (such as ext4, HFS+, ZFS, and UFS2) allow to store the creation time.
NTFS stores both the creation time and the change time.
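In practice these fields can be inspected directly. A minimal Python sketch (the file path is hypothetical) shows the three timestamps and the platform caveats discussed above:

```python
import datetime
import os

st = os.stat("example.txt")  # hypothetical path

def fmt(epoch_seconds: float) -> str:
    return datetime.datetime.fromtimestamp(epoch_seconds).isoformat()

print("mtime:", fmt(st.st_mtime))  # content last modified
print("atime:", fmt(st.st_atime))  # last access; may be coarse or disabled entirely
# st_ctime means metadata-change time on Unix but creation time on Windows:
print("ctime:", fmt(st.st_ctime))
# Birth time, on platforms whose file systems expose it (e.g. macOS, some BSDs):
if hasattr(st, "st_birthtime"):
    print("birth:", fmt(st.st_birthtime))
```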
The semantics of creation times is the source of some controversy. One view is that creation times should refer to the actual content of a file: e.g. for a digital photo the creation time would note when the photo was taken or first stored on a computer. A different approach is for creation times to stand for when the file system object itself was created, e.g. when the photo file was last restored from a backup or moved from one disk to another.
Metadata issues
As with all file system metadata, user expectations about MAC times can be violated by programs which are not metadata-aware. Some file-copying utilities will explicitly set MAC times of the new copy to match those of the original file, while programs that simply create a new file, read the contents of the original, and write that data into the new copy, will produce new files whose times do not match those of the original.
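Python's standard library illustrates the two copying behaviours just described: `shutil.copy()` writes a fresh file whose times reflect the copy operation, while `shutil.copy2()` also copies metadata, preserving the original's modification and access times. The file names here are hypothetical.

```python
import shutil

shutil.copy("original.txt", "copy_new_times.txt")    # MAC times set at copy time
shutil.copy2("original.txt", "copy_kept_times.txt")  # mtime/atime copied from original
# Note: neither call preserves ctime/creation time, which the file system sets itself.
```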
Some programs, in an attempt to avoid losing data if a write operation is interrupted, avoid modifying existing files. Instead, the updated data is written to a new file, and the new file is moved to overwrite the original. This practice loses the original file metadata unless the program explicitly copies the metadata from the original file. Windows is not affected by this due to a workaround feature called File System Tunneling.
See also
Computer forensics
References
External links
Discussion about Windows and Unix timestamps (Cygwin project mailing list)
Computer file systems
Computer forensics | MAC times | [
"Engineering"
] | 1,006 | [
"Cybersecurity engineering",
"Computer forensics"
] |
1,043,867 | https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability%20theory | Energy–maneuverability theory is a model of aircraft performance. It was developed by Col. John Boyd, a fighter pilot, and Thomas P. Christie, a mathematician with the United States Air Force, and is useful in describing an aircraft's performance as the total of kinetic and potential energies or aircraft specific energy. It relates the thrust, weight, aerodynamic drag, wing area, and other flight characteristics of an aircraft into a quantitative model. This enables the combat capabilities of various aircraft or prospective design trade-offs to be predicted and compared.
Formula
All of these aspects of airplane performance are compressed into a single value, the specific excess power $P_s$, by the following formula:

$$P_s = V\,\frac{T - D}{W}$$

where $V$ is true airspeed, $T$ is thrust, $D$ is aerodynamic drag, and $W$ is aircraft weight.
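As a quick worked example, with invented round numbers rather than data for any real aircraft:

```python
def specific_excess_power(thrust_n: float, drag_n: float,
                          weight_n: float, airspeed_ms: float) -> float:
    """Specific excess power P_s = V * (T - D) / W, here in metres per second."""
    return airspeed_ms * (thrust_n - drag_n) / weight_n

# 80 kN thrust, 50 kN drag, 120 kN weight, flying at 250 m/s:
print(specific_excess_power(80e3, 50e3, 120e3, 250.0))  # 62.5 m/s
```

A positive $P_s$ means the aircraft can convert the surplus into climb rate or acceleration; at equal speeds, the design with the higher $P_s$ holds the manoeuvring advantage.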
History
John Boyd, a U.S. jet fighter pilot in the Korean War, began developing the theory in the early 1960s. He teamed with mathematician Thomas Christie at Eglin Air Force Base to use the base's high-speed computer to compare the performance envelopes of U.S. and Soviet aircraft from the Korean and Vietnam Wars. They completed a two-volume report on their studies in 1964. Energy Maneuverability came to be accepted within the U.S. Air Force and brought about improvements in the requirements for the F-15 Eagle and later the F-16 Fighting Falcon fighters.
See also
Lagrangian mechanics
Notes
References
Hammond, Grant T. The Mind of War: John Boyd and American Security. Washington, D.C.: Smithsonian Institution Press, 2001. and .
Coram, Robert. Boyd: The Fighter Pilot Who Changed the Art of War. New York: Back Bay Books, 2002. and .
Wendl, M.J., G.G. Grose, J.L. Porter, and V.R. Pruitt. Flight/Propulsion Control Integration Aspects of Energy Management. Society of Automotive Engineers, 1974, p. 740480.
Aerospace engineering | Energy–maneuverability theory | [
"Engineering"
] | 373 | [
"Aerospace engineering"
] |
1,043,896 | https://en.wikipedia.org/wiki/Cif | Cif is a French brand of household cleaning products owned by the English-Dutch company Unilever, known as Jif in Australia, New Zealand, Japan, Middle East and the Nordic countries.
Cif was launched in France in 1965 and was marketed in competition against scouring powders such as Vim.
Name
Cif is sold under the names Jif, Vim, Viss and Handy Andy, depending on which of the 51 countries it is sold in.
In Sweden, and South Africa, the products were originally sold under the name Vim before this was changed to Jif, the launch name in the United Kingdom, Ireland, the Netherlands and Hong Kong.
In January 2001, the name in most of these countries was changed to Cif in order to align marketing and product inventories across the continent.
In Belgium, Finland and Portugal, the product was known as Vim for quite some time, after which it became Cif. In Canada it is still called Vim. In Germany, the cleaner's name is Viss. In Iraq it is still traded as Jif, with local Arabic and English writing.
In Norway, the product is traded as Jif and owned by Lilleborg, a part of Solenis. Unilever had a decades-long partnership with Lilleborg, which held the exclusive rights to the brand Jif in Norway; Lilleborg registered the trademark in 1968. Without consent from Lilleborg, Unilever registered its trademark Cif in 1998 and started selling Cif products in 2018. Lilleborg protested, and two Jif vs. Cif court cases followed. The court decided that Unilever was allowed to keep the brand Cif, but not to sell Cif products in Norway.
In the United States, Cif Cream Cleaner is sold as a co-branded product with Scrub Daddy.
Products
Cif Bathroom Mousse
Cif Stainless Steel Cleaner
Cif Bathroom Cleaner
Cif Kitchen Cleaner
Cif Power Cream
Cif Cream Cleaner, a non-abrasive version
References
External links
UK product website
Australian Jif product website
German Viss product website (in German)
Japanese Jif product website (in Japanese)
Unilever brands
Cleaning products
Cleaning product brands
Orkla ASA
Products introduced in 1969 | Cif | [
"Chemistry"
] | 456 | [
"Cleaning products",
"Products of chemical industry"
] |
1,043,919 | https://en.wikipedia.org/wiki/D%C3%B6hle%20bodies | Döhle bodies are light blue-gray, oval, basophilic, leukocyte inclusions located in the peripheral cytoplasm of neutrophils. They measure 1–3 μm in diameter. Not much is known about their formation, but they are thought to be remnants of the rough endoplasmic reticulum.
They are named after German pathologist, Karl Gottfried Paul Döhle (1855–1928). They are often present in conjunction with toxic granulation. However, it has been found that certain healthy individuals may have persistent Döhle bodies found in neutrophils.
Associated conditions
They are seen in:
Burns
Infection
Physical trauma
Neoplastic diseases
Fanconi syndrome
May–Hegglin anomaly
Chédiak–Steinbrinck–Rayer-Buchanan-Higashi's syndrome
Leukemoid reaction
Pathophysiology
The presence of Döhle bodies in mature and immature neutrophils on a blood smear can be normal if they are present only in small numbers. They are also normally more abundant in cats and horses. Döhle bodies are intra-cytoplasmic structures thought to be composed of endoplasmic reticulum material; they will increase in number with inflammation and increased granulocytopoiesis. If there are many neutrophils in the bloodstream containing Döhle bodies, these can be referred to as toxic neutrophils. Toxic neutrophils can also correspond to neutrophils that possess a more basophilic cytoplasm, basophilic granulation (infrequently observed), or cytoplasmic vacuoles in addition to one of the preceding cytoplasmic changes. Döhle bodies, cytoplasmic basophilia and cytoplasmic granulation all reflect "defects" in cell production and maturation during active granulocytopoiesis. Just like a left shift, the presence of toxic neutrophils suggests increased granulocytopoiesis. However, in a freshly prepared blood smear, the presence of vacuolation in addition to toxic neutrophils reflects endotoxemia resulting in autolysis of neutrophils. This autodigestion is responsible for the cytoplasmic vacuolation. It is the single toxic change that does not result from the "manufacturing" process.
References
Histopathology
Abnormal clinical and laboratory findings for blood | Döhle bodies | [
"Chemistry"
] | 522 | [
"Histopathology",
"Microscopy"
] |
1,043,961 | https://en.wikipedia.org/wiki/Off-premises%20extension | An off-premises extension (OPX), sometimes also known as off-premises station (OPS), is an extension telephone at a location distant from its servicing exchange.
One type of off-premises extension, connected to a private branch exchange (PBX), is generally used to provide employees with access to a company telephone system while they are out of the office. Off-premises extensions are used in distributed environments, serving locations that are too far from the PBX to be served by on-premises wiring.
Another type of off-premises extension, connected to a public telephone exchange, is generally used to allow a private phone line to ring at a second location. For example, the owner of a business may have an OPX for their home phone at the business location, allowing them to avoid missing calls to the home phone.
Telephone service providers charge a significant monthly rate for an OPX, partly calculated by the distance; in extreme cases, the distance may result in a rate higher than simply having an additional central office line with its own number. Recent innovations such as call forwarding-no answer or simultaneous ringing of multiple lines can replace several of the conveniences of an OPX at much lower cost.
An OPX uses a conditioned wire pair that is usually used only for voice applications, while for data, a pair usually needs to be unconditioned. An alarm circuit is an unconditioned pair.
In Internet telephony, a VoIP VPN OPX may be implemented by connecting an extension over a virtual private networking connection, instead of connecting it directly to the local area network. As a host connected by a VPN appears as a part of the local area network, the off-premises extension appears to the IP-PBX as if it were on-site.
See also
Foreign exchange service (telecommunications)
Hosted PBX
Communication circuits | Off-premises extension | [
"Engineering"
] | 382 | [
"Telecommunications engineering",
"Communication circuits"
] |
1,044,027 | https://en.wikipedia.org/wiki/Optical%20System%20for%20Imaging%20and%20low%20Resolution%20Integrated%20Spectroscopy | The Optical System for Imaging and low Resolution Integrated Spectroscopy (OSIRIS) is an optical spectrometer at the Gran Telescopio Canarias (GTC) in Spain. It was the first instrument in operation at the GTC. OSIRIS's key scientific project is OTELO.
Sensitive in the wavelength range from 365 to 1000 nm, OSIRIS is a multiple-purpose instrument for imaging and low-resolution long-slit and multiple-object spectroscopy (MOS). Imaging can be done using broad-band filters or narrow-band tunable filters, with FWHM ranging from 0.2 to 0.9 nm at 365 nm and from 0.9 to 1.2 nm at 1000 nm. OSIRIS observing modes also include fast photometry and spectroscopy. OSIRIS's field of view is 8.5×8.5 arcminutes, and the maximum nominal spectral resolution is 5000 for a slit width of 0.6 arcsec. MOS incorporates detector charge shuffling coordinated with telescope nodding for excellent sky subtraction. The use of tunable filters is a completely new feature in 8 to 10 m class telescopes that allows the observation of very faint and distant emission-line objects.
External links
Gran Telescopio CANARIAS OSIRIS instrument page
OSIRIS Home Page (IAC)
Telescopes | Optical System for Imaging and low Resolution Integrated Spectroscopy | [
"Astronomy"
] | 278 | [
"Telescopes",
"Astronomical instruments"
] |
1,044,040 | https://en.wikipedia.org/wiki/OTELO | OTELO (OSIRIS Tunable Emission Line Object survey) is an emission line object survey using OSIRIS tunable filters in selected atmospheric windows relatively free of sky emission lines.
Overview
The total survey sky area is 1 square degree (0.30 msr), distributed over different high-latitude, low-extinction fields with sufficient angular separation. A 5-sigma depth of 8×10−18 erg cm−2 s−1 (8 zW/m2) allows the detection of objects with equivalent widths down to 6. OTELO will observe objects with an age equivalent to 10% of the age of the Universe. Given the observing procedure, OTELO will allow studying a clearly defined volume of the Universe at a known flux limit. OTELO will complement spectroscopic surveys, since the selection criteria are completely different: not broad-band based but emission-line based, which will allow detecting objects with faint continua.
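The two depth figures quoted above are the same number in different unit systems; making the conversion explicit:

$$8\times10^{-18}\,\frac{\mathrm{erg}}{\mathrm{cm^{2}\,s}} \;=\; 8\times10^{-18}\times\frac{10^{-7}\,\mathrm{J}}{10^{-4}\,\mathrm{m^{2}}\cdot\mathrm{s}} \;=\; 8\times10^{-21}\,\mathrm{W/m^{2}} \;=\; 8\,\mathrm{zW/m^{2}},$$

since $1\,\mathrm{erg}=10^{-7}\,\mathrm{J}$, $1\,\mathrm{cm^{2}}=10^{-4}\,\mathrm{m^{2}}$, and the zepto- prefix denotes $10^{-21}$.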
The scientific field that OTELO will allow tackling is very wide, and encompasses evolutionary studies of Lyman-alpha emitters, QSOs, AGN, star-forming populations (specifically the faint end of the luminosity function), emission-line ellipticals (detectable depending upon their evolution), the chemical evolution of the Universe between z = 0.24 and z = 1.5, the mass/luminosity relation vs. morphological type and redshift (up to z = 1.5), the Tully–Fisher relation up to z = 1.5, and the derivation of cosmological parameters. Other studies include galactic structure and Galactic objects (planetary nebulae, peculiar stars, cataclysmic variables).
References
External links
Project Website at iac.es
Astronomical surveys
Observational astronomy | OTELO | [
"Astronomy"
] | 352 | [
"Astronomical surveys",
"Observational astronomy",
"Works about astronomy",
"Astronomy stubs",
"Astronomical catalogue stubs",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
1,044,109 | https://en.wikipedia.org/wiki/BLAST%20model%20checker | The Berkeley Lazy Abstraction Software verification Tool (BLAST) is a software model checking tool for C programs. The task addressed by BLAST is the need to check whether software satisfies the behavioral requirements of its associated interfaces. BLAST employs counterexample-driven automatic abstraction refinement to construct an abstract model that is then model-checked for safety properties. The abstraction is constructed on the fly, and only to the requested precision.
Achievements
BLAST came first in the category DeviceDrivers64 in the 1st Competition on Software Verification (2012) that was held at TACAS 2012 in Tallinn.
BLAST came third (category DeviceDrivers64) in the 2nd Competition on Software Verification (2013) that was held at TACAS 2013 in Rome.
BLAST came first in the category DeviceDrivers64 in the 3rd Competition on Software Verification (2014), that was held at TACAS 2014 in Grenoble.
References
Notes
External links
BLAST 2.5 website
BLAST 2.7 website
Free software testing tools
Model checkers
Static program analysis tools
Software using the Apache license | BLAST model checker | [
"Mathematics"
] | 212 | [
"Model checkers",
"Mathematical software"
] |
1,044,151 | https://en.wikipedia.org/wiki/Joint%20Institute%20for%20Nuclear%20Research | The Joint Institute for Nuclear Research (JINR, ), in Dubna, Moscow Oblast (110 km north of Moscow), Russia, is an international research center for nuclear sciences, with 5,500 staff members including 1,200 researchers holding over 1,000 Ph.Ds from eighteen countries. Most scientists are scientists of the Russian Federation.
The institute has seven laboratories, each with its own specialisation: theoretical physics, high energy physics (particle physics), heavy ion physics, condensed matter physics, nuclear reactions, neutron physics, and information technology. The institute also has a division for radiation and radiobiological research, as well as other ad hoc experimental physics projects.
Principal research instruments include a nuclotron superconductive particle accelerator (particle energy: 7 GeV), three isochronous cyclotrons (120, 145, 650 MeV), a phasotron (680 MeV) and a synchrophasotron (4 GeV). The site has a fast-pulse neutron reactor (1,500 MW pulse) with nineteen associated instruments receiving neutron beams.
Founding
The Joint Institute for Nuclear Research was established on the basis of an agreement signed on 26 March 1956, in Moscow by representatives of the governments of the eleven founding countries, with a view to combining their scientific and material potential. The USSR contributed 50 percent, the People's Republic of China 20 percent. In February 1957, the JINR was registered by the United Nations. The institute is located in Dubna, 120 km north of Moscow.
At the time of the creation of JINR, the Institute of Nuclear Problems (INP) of the Academy of Sciences of the USSR had already existed at the site of the future Dubna since the late 1940s, and had launched a program of fundamental and applied research at the synchrocyclotron. The Electrophysics Laboratory of the Academy of Sciences of the USSR (EFLAN) was established, and under the guidance of Academician Vladimir Veksler, work began on a new accelerator, a proton synchrophasotron, with what was then a record energy of 10 GeV.
By the mid-1950s, there was a worldwide consensus that nuclear science should be accessible and that only broad cooperation could ensure the progressive development of this research, as well as the peaceful use of atomic energy. Thus, in 1954, near Geneva, CERN (European Organization for Nuclear Research) was established. At about the same time, the countries that belonged to the socialist community decided to establish a Joint Institute for Nuclear Research on the basis of the INP and EFLAN.
The first director of the Joint Institute was Professor D. I. Blokhintsev, who had just completed the creation of the Obninsk Nuclear Power Plant, the world's first nuclear power plant. The first vice-directors of JINR were Professors Marian Danysz (Poland) and V. Votruba (Czechoslovakia).
The history of the formation of the JINR is associated with the names of prominent scientists and Professors. The following list provides some of the names of prominent scientists.
Nikolay Bogolyubov
Lajos Jánossy
Leopold Infeld
Igor Kurchatov
Heinz Pose
Heinz Barwich
Igor Tamm
Alexander Baldin
Wang Ganchang
Vladimir Veksler
Nikolay Govorun
Venedikt Dzhelepov
Jaroslav Kožešník
Moisey Markov
Șerban Țițeica
Georgi Nadjakov
Le Van Thiem
Yuri Oganessian
Bruno Pontecorvo
Boris Arbuzov
Albert Tavkhelidze
Georgy Flyorov
Ilya Frank
F. Shapiro
Dmitry Shirkov
E. Yanik (Polish: Jerzy Janik)
Cooperation
The JINR cooperates with many organizations, one of the main ones being UNESCO. Their collaboration started in 1997 with the aim of developing the basic sciences and working toward sustainable development. Joint activities include training programmes and grant mechanisms for researchers in the basic sciences. Such international scientific cooperation and knowledge sharing in key scientific fields serves one of UNESCO's main 2030 goals, the achievement of sustainable development.
The United Nations General Assembly and the UNESCO General Conference named 2019 the International Year of the Periodic Table of Chemical Elements (IYPT 2019), which reinforced the cooperation between the two organizations. JINR was an observer of the European Organization for Nuclear Research (CERN) from 2014 until 25 March 2022.
As of 1 January 2023, 13 JINR state members are active and three suspended:
(suspended)
(suspended)
(suspended)
Associate members are:
Scientific collaboration with organizations including:
CERN – since 2014, subject to restrictions detailed in the CERN Council resolutions 3671 and 3638 following the invasion of Ukraine by the Russian Federation. Collaboration to be reviewed well in advance of January 2025, the expiration date of the International Cooperation Agreement.
UNESCO – since 1997
BMBF, since 1991.
INFN, since 1996.
University of Turin, since 1999.
EPS, since 1990.
Former members: In December 2022 the Czech Republic, Poland and Ukraine terminated their membership and Bulgaria and Slovakia suspended their participation in JINR. The Democratic People's Republic of Korea was one of the founding states in 1956. It has been suspended from participating in JINR since 2015.
Structure of research
The main fields of the institute's research are:
Theoretical physics
Elementary particle physics
Relativistic nuclear physics
Heavy ion physics
Low and intermediate energy physics
Nuclear physics with neutrons
Condensed matter physics
Radiobiology
Computer networking, computing and computational physics
Educational programme
The JINR possess eight laboratories and University Centre.
Superheavy Element Factory
The Superheavy Element Factory (SHE factory) at the JINR, opened in 2019, is a new experimental complex dedicated to superheavy element research. Its facilities enable a tenfold increase in beam intensity; such an increase in sensitivity enables the study of reactions with lower cross sections that would otherwise be inaccessible. Sergey Dmitriev, director of the Flerov Laboratory of Nuclear Reactions, believes that the SHE factory will enable closer examination of nuclei near the limits of stability, as well as experiments aimed at the synthesis of elements 119 and 120.
Scientific achievements
More than 40 major achievements in particle physics have been made through experiments at JINR, including:
1957 – prediction of neutrino oscillation, published in JETP by Bruno Pontecorvo
1976 – element 107 (bohrium)
1999 – element 114 (flerovium)
2000 – element 116 (livermorium)
2002 – element 118 (oganesson)
2003 – element 115 (moscovium) and element 113 (nihonium)
2010 – synthesis of element 117 (tennessine)
Prizes and awards
JINR has instituted awards to honour and encourage high-level research in the fields of physics and mathematics since 1961.
The Bogolyubov Prize for young scientists – an award for young researchers in theoretical physics.
The Bogolyubov Prize – an international award to scientists with outstanding contribution to theoretical physics and applied mathematics.
The Bruno Pontecorvo Prize – is an award to scientists with contribution to elementary particle physics.
The first award was given jointly to Wang Ganchang, deputy director from 1958 to 1960, and the Soviet Professor Vladimir Veksler, for the discovery of the antisigma-minus hyperon.
Directors
Dmitry Blokhintsev (1956–1965)
Nikolay Bogolyubov (1966–1988)
(1989–1991)
Vladimir Kadyshevsky (1992–2005)
(2005–2010)
(May 2010–September 2011) ad interim
Victor A. Matveev (2012–2020)
Grigory V. Trubnikov (since 2021)
Gallery
See also
Nuclotron
Institute for Nuclear Research
Budker Institute of Nuclear Physics, Russian particle physics laboratory in Novosibirsk
Institute for High Energy Physics, Russian particle physics laboratory in the vicinity of Moscow; located south of Moscow
Institute for Theoretical and Experimental Physics, Russian particle physics laboratory in the vicinity of Moscow; located in Moscow proper
Bogolyubov Prize for young scientists, an award for young scientists provided by JINR
Notes
References
External links
JINR Website
JINR Telegram Channel
Frank Laboratory of Neutron Physics Website
Research institutes established in 1956
Research institutes in Russia
Nuclear research institutes
Research institutes in the Soviet Union
International research institutes
Particle physics facilities
Nuclear research institutes in Russia
Nuclear technology in the Soviet Union
Institutes associated with CERN | Joint Institute for Nuclear Research | [
"Engineering"
] | 1,715 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
1,044,194 | https://en.wikipedia.org/wiki/Integral%20Equations%20and%20Operator%20Theory | Integral Equations and Operator Theory is a journal dedicated to operator theory and its applications to engineering and other mathematical sciences. As some approaches to the study of integral equations (theoretically and numerically) constitute a subfield of operator theory, the journal also deals with the theory of integral equations and hence of differential equations. The journal consists of two sections: a main section consisting of refereed papers and a second consisting of short announcements of important results, open problems, information, etc. It has been published monthly by Springer-Verlag since 1978. The journal is also available online by subscription.
The founding editor-in-chief of the journal, in 1978, was Israel Gohberg. Its current editor-in-chief is Christiane Tretter.
References
External links
Journal homepage
Mathematical analysis journals
Academic journals established in 1978 | Integral Equations and Operator Theory | [
"Mathematics"
] | 164 | [
"Mathematical analysis",
"Mathematical analysis journals"
] |
1,044,497 | https://en.wikipedia.org/wiki/Workplace%20OS | Workplace OS is IBM's ultimate operating system prototype of the 1990s. It is the product of an exploratory research program in 1991 which yielded a design called the Grand Unifying Theory of Systems (GUTS), proposing to unify the world's systems as generalized "personalities" cohabitating concurrently upon a universally sophisticated platform of object-oriented frameworks upon one microkernel. Using personalities, a single machine would be able to run applications from multiple conventional operating systems like Unix and OS/2.
Within the AIM alliance, Apple demonstrated its mature Pink operating system prototype to IBM's GUTS design team, which was immediately and heavily impressed and influenced. The result was Workplace OS, intended to improve software portability and reduce maintenance costs by aggressively recruiting all operating system vendors to convert their products into Workplace OS personalities. That included Pink when it became Taligent, which was a pillar of AIM and was co-developed with Workplace OS. In 1995, IBM reported that "Nearly 20 corporations, universities, and research institutes worldwide have licensed the microkernel, laying the foundation for a completely open microkernel standard." Workplace OS was at the core of IBM's new unified strategic direction for the entire company, and was also intended as a bellwether toward PowerPC hardware platforms, to compete with the Wintel duopoly.
With protracted development spanning four years and $2 billion (or 0.6% of IBM's revenue for that period), the project suffered development hell characterized by workplace politics, feature creep, and the second-system effect. Many idealistic key assumptions made by IBM architects about software complexity and system performance were never tested until far too late in development, then proven infeasible. In January 1996, the first and only commercial preview was billed under the OS/2 family with the name "OS/2 Warp Connect (PowerPC Edition)" for limited special order by select IBM customers, as a crippled prototype. The entire Workplace OS platform was discontinued in March due to very low market demand, including that for enterprise PowerPC hardware.
A University of California case study described the Workplace OS project as "one of the most significant operating systems software investments of all time" and "one of the largest operating system failures in modern times".
Overview
Objective
By 1990, IBM acknowledged the software industry to be in a state of perpetual crisis. This was due to the chaos from the inordinate complexity of software engineering inherited by its legacy of procedural programming practices since the 1960s. Large software projects were too difficult, fragile, expensive, and time-consuming to create and maintain; they required too many programmers, who were too busy with fixing bugs and adding incremental features to create new applications. Different operating systems were alien to each other, each of them running their own proprietary applications. IBM envisioned "life after maximum entropy" through "operating systems unification at last" and wanted to lay a new worldview for the future of computing.
IBM sought a new world view of a unified foundation for computing, based upon the efficient reuse of common work. It wanted to break the traditional monolithic software development cycle of producing alphas, then betas, then testing, and repeating over the entire operating system — instead, compartmentalizing the development and quality assurance of individual unit objects. This new theory of unifying existing legacy software and the new way of building all new software, was nicknamed the Grand Unified Theory of Systems or GUTS.
Coincidentally, Apple already had a two-year-old secret prototype of its microkernel-based object-oriented operating system with application frameworks, named Pink. The theory of GUTS was expanded by Pink, yielding Workplace OS.
Architecture
IBM described its new microkernel architecture as scalable, modular, portable, client/server distributed, and open and fully licensable both in binary and source code forms. This microkernel-based unified architecture was intended to allow all software to become scalable both upward into supercomputing space and downward into mobile and embedded space.
Leveraged upon a single microkernel, IBM wanted to achieve its grand goal of unification by simplifying complex development models into reusable objects and frameworks, and all while retaining complete backward compatibility with legacy and heritage systems. Multiple-library support would allow developers to progressively migrate select source code objects to 64-bit mode, with side-by-side selectable 32-bit and 64-bit modes. IBM's book on Workplace OS says, "Maybe we can get to a 64-bit operating system in our lifetime." IBM intended shareable objects to eventually reduce the footprint of each personality, scaling them down to a handheld computing profile.
At the base of Workplace OS is a fork of the Mach 3.0 microkernel (release mk68) originally developed by Carnegie Mellon University and heavily modified by the Open Software Foundation's Research Institute. Officially named "IBM Microkernel", it provides five core features: IPC, virtual memory support, processes and threads, host and processor sets, and I/O and interrupt support.
On top of the IBM Microkernel is a layer of shared services (originally called Personality Neutral Services or PNS) to cater to some or all of the personalities above them. Shared services are endian-neutral, have no user interface, and can serve other shared services. Byte summarizes that shared services "can include not only low-level file system and device-driver services but also higher-level networking and even database services. [Workplace OS's lead architect Paul Giangarra] believes that locating such application-oriented services close to the microkernel will improve their efficiency by reducing the number of function calls and enabling the service to integrate its own device drivers." This layer contains the file systems, the scheduler, network services, and security services. IBM first attempted a device driver model completely based in userspace to maximize its dynamic configuration, but later found the need to blend it between userspace and kernelspace, while keeping as much as possible in userspace. The Adaptive Driver Architecture (ADD) was designed for the creation of layered device drivers, which are easily portable to other hardware and operating system platforms beyond Workplace OS, and which consist of about 5000-8000 lines of device-specific code each. Some shared services are common only to select personalities, such as MMPM serving multimedia only to Windows 3.1 and OS/2 personalities, and which is alien or redundant to other markets.
Atop the shared services, another layer of userspace servers called personalities provide DOS, Windows, OS/2 (Workplace OS/2), and UNIX (WPIX) environments. The further hope was to support OS/400, AIX, Taligent OS, and MacOS personalities. Personalities provide environment subsystems to applications. Any one personality can be made dominant for a given version of the OS, providing the desktop user with a single GUI environment to accommodate the secondary personalities. In 1993, IBM intended one release version to be based upon the OS/2 Workplace Shell and another to be based upon the UNIX Common Desktop Environment (CDE).
IBM explained the branding: "Workplace OS is the codename for a collection of operating system components including, among others, the IBM Microkernel and the OS/2 personality. Workplace OS/2 is the specific codename for the OS/2 personality. Workplace OS/2 will operate with the IBM Microkernel and can be considered OS/2 for the PowerPC." For the 1995 final preview release, IBM continued, "When we stopped using the name 'Workplace' and started calling the product 'OS/2 for the PowerPC', you might have thought that the 'Workplace' was dead. But the 'Workplace' is far from dead. It has simply been renamed for prime time." Workplace OS/2 was intended to define the future of OS/2, as a 32-bit clean platform and free of internal legacy, with perfect compatibility for source code of OS/2 applications and drivers. IBM originally wanted to prove new development models on Workplace OS/2 and backport them into OS/2 2.1 for x86 until the two platforms were unified, especially the IBM Microkernel, a new graphics subsystem, fully 32-bit system code with a flat memory model, Taligent, and OpenDoc.
IBM intended for Workplace OS to run on several processor architectures, including PowerPC, ARM, and x86 which would range in size from handheld PDAs to workstations to large 64-bit servers and supercomputers. IBM saw the easy portability of the Workplace OS as creating a simple migration path to move its existing x86 (DOS and OS/2) customer base onto a new wave of standard reference PowerPC-based systems, such as the PC Power Series and the Power Macintosh. Creating a unique but open and industry-standard reference platform of open-source microkernel, IBM hedged its company-wide operating system strategy by aggressively attempting to recruit other computer companies to adopt its microkernel as a basis for their own operating systems.
History
Development
GUTS
In January 1991, there was an internal presentation to the IBM Management Committee of a new strategy for operating system products. This included a chart called the Grand Unification Theory of Operating Systems (GUTS) which outlined how a single microkernel underlying common subsystems could provide a single unifying architecture for the world's many existing and future operating systems. It was initially based in a procedural programming model, not object-oriented. The design elements of this plan had already been implemented on IBM's RS/6000 platform via the System Object Model (SOM), a model which had already been delivered as integral to the OS/2 operating system.
Sometime later in 1991, as a result of the Apple/IBM business partnership, a small exploratory IBM team first visited the Taligent team, who demonstrated a relatively mature prototype operating system and programming model based entirely on Apple's Pink project from 1987. There, GUTS's goals were greatly impacted and expanded by exposure to these similar goals—especially advanced in the areas of aggressive object-orientation, and of software frameworks upon a microkernel. IBM's optimistic team saw the Pink platform as being the current state of the art of operating system architecture. IBM wanted to adopt Pink's more object-oriented programming model and framework-based system design, and add compatibility with legacy procedural programming along with the major concept of multiple personalities of operating systems, to create the ultimate possible GUTS model.
Through the historic Apple/IBM partnership, Apple's CEO John Sculley said that the already volume-shipping OS/2 and MacOS would become unified upon the common PowerPC hardware platform to "bring a renaissance to the industry".
In late 1991, a small team from Boca Raton and Austin began implementing the GUTS project, with the goal of proving the GUTS concept, by first converting the monolithic OS/2 2.1 system to the Mach microkernel, and yielding a demo. To gain shared access to key personnel currently working on the existing OS/2, they disguised the project as the Joint Design Task Force and brought "a significant number" of personnel from Boca, Austin (with LANs and performance), Raleigh (with SNA and other transport services), IBM Research (with operating systems and performance), and Rochester (with the 64-bit, object-oriented worldview from AS/400). Pleased with the robust, long-term mentality of the microkernel technology and with the progress of the project, the team produced a prototype in mid 1992. The initial internal-development prototypes ran on x86-based hardware and provided a BSD Unix derived personality and a DOS personality.
Demos and business reorganization
At Comdex in late 1992, the team flew in and assembled a private demonstration based on last-minute downloads to replace corrupted files and one hour of sleep. The presentation was so well received that the prototype was put on the trade show floor on Thursday, as the first public demonstration of the IBM Microkernel-based system running OS/2, DOS, 16-bit Windows, and UNIX applications. In 1992, IBM persuaded Taligent to migrate the Taligent OS from its internally developed microkernel named Opus, onto the IBM Microkernel. Ostensibly, this would have allowed Taligent's operating system (implemented as a Workplace OS personality) to execute side-by-side with DOS and OS/2 operating system personalities.
In 1993, InfoWorld reported that Jim Cannavino "has gone around the company and developer support for a plan to merge all of the company's computing platforms—ES/9000, AS/400, RS/6000, and PS/2—around a single set of technologies, namely the PowerPC microprocessor, the Workplace OS operating system, and the Taligent object model, along with a series of open standards for cross-platform development, network interoperability, etc." On June 30, 1993, a presentation was given at the Boca Programming Center by Larry Loucks, IBM Fellow and VP of Software Architecture of the Personal Software Products (PSP) Division.
By 1993, IBM reportedly planned two packages of Workplace OS, based on personality dominance: one based on the OS/2 Workplace Shell and another based upon the UNIX Common Desktop Environment (CDE). IBM and Apple were speaking about the possibility of a Mac OS personality.
By January 1994, the IBM Power Personal Systems Division had still not yet begun testing its PowerPC hardware with any of its three intended launch operating systems: definitely AIX and Windows NT, and hopefully also Workplace OS. Software demonstrations showed limited personality support, with the dominant one being the OS/2 Workplace Shell desktop, and the DOS and UNIX personalities achieving only fullscreen text mode support with crude hotkey switching between the environments. Byte reported that the multiple personality support promised in Workplace OS's conceptual ambitions was more straightforward, foundational, and robust than that of the already-shipping Windows NT. The magazine said "IBM is pursuing multiple personalities, while Microsoft appears to be discarding them" while conceding that "it's easier to create a robust plan than a working operating system with robust implementations of multiple personalities".
In 1994, the industry was reportedly shifting away from monolithic development and even application suites, toward object-oriented, component-based, crossplatform, application frameworks.
By 1995, Workplace OS was becoming notable for its many and repeated launch delays, with IBM described as being inconsistent and "wishy washy" with dates. This left IBM's own PowerPC hardware products without a mainstream operating system, forcing the company to at least consider the rival Windows NT. In April 1994, Byte reported that under lead architect Paul Giangarra, IBM had staffed more than "400 people working to bring [Workplace OS] up on Power Personal hardware".
In May 1994, the RISC Systems software division publicly announced IBM's first attempt to even study the feasibility of converting AIX into a Workplace OS personality, which the company had been publicly promising since the beginning. One IBM Research Fellow led a team of fewer than ten, to identify and address the problem. The team defined the AIX personality problem as being the fundamentally incompatible byte ordering between the big-endian AIX and the little-endian Workplace OS. This problem is endemic, because though the PowerPC CPU and Workplace OS can perform in either mode, endianness is a systemwide configuration set once at boot time only; and Workplace OS favors OS/2 which comes from the little-endian Intel x86 architecture. After seven months of silence on the issue, IBM announced in January 1995 that the intractable endianness problem had resulted in the total abandonment of the flagship plan for an AIX personality.
In late 1994, as Workplace OS approached its first beta version, IBM referred to the beta product as "OS/2 for the PowerPC". As the project's first deliverable product, this first beta was released to select developers on the Power Series 440 in December 1994. A second beta was released in 1995. By 1995, IBM had shipped two different releases of an application sampler CD for the beta OS.
Preview launch
In mid 1995, IBM officially named its planned initial Workplace OS release "OS/2 Warp Connect (PowerPC Edition)" with the codename "Falcon". In October 1995, IBM announced the upcoming first release, though still a developer preview. The announcement predicted it to have version 1.0 of the IBM Microkernel with the OS/2 personality and a new UNIX personality, on PowerPC. Having been part of the earliest demonstrations, the UNIX personality was now intended to be offered to customers as a holdover due to the nonexistence of a long-awaited AIX personality, but the UNIX personality was also abandoned prior to release.
This developer release is the first ever publication of Workplace OS, and of the IBM Microkernel (at version 1.0), which IBM's internal developers had been running privately on Intel and PowerPC hardware. The gold master was produced on December 15, 1995 with availability on January 5, 1996, only to existing Power Series hardware customers who paid $215 for a special product request through their IBM representative, who then relayed the request to the Austin research laboratory. The software essentially appears to the user as the visually identical and source-compatible PowerPC equivalent of the mainstream OS/2 3.0 for Intel. Packaged as two CDs with no box, its accompanying overview paper booklet calls it the "final edition" but it is still a very incomplete product intended only for developers. Its installer only supports two computer models, the IBM PC Power Series 830 and 850, which have PowerPC 604 CPUs and IDE drives. Contrary to the product's "Connect" name, the installed operating system has no networking support. However, full networking functionality is described within the installed documentation files, and in the related book IBM's Official OS/2 Warp Connect PowerPC Edition: Operating in the New Frontier (1995) — all of which the product's paper booklet warns the user to disregard. The kernel dumps debugging data to the serial console. The system hosts no compiler, so developers are required to cross-compile applications on the source-compatible OS/2 for Intel system, using MetaWare's High C compiler or VisualAge C++, and manually copy the files via removable medium to run them.
With an officially concessionary attitude, IBM had no official plans for a general release packaged for OEMs or retail, beyond this developer preview available only via special order from the development lab. Upon its launch, Joe Stunkard, spokesman for IBM's Personal Systems Products division, said "When and if the Power market increases, we'll increase the operating system's presence as required." On January 26, 1996, an Internet forum statement was made by John Soyring, IBM's Vice President of Personal Software Products: "We are not planning additional releases of the OS/2 Warp family on the PowerPC platform during 1996 — as we just released in late December 1995 the OS/2 Warp (PowerPC Edition) product. ... We have just not announced future releases on the PowerPC platform. In no way should our announcement imply that we are backing away from the PowerPC."
Roadmap
On November 22, 1995, IBM's developer newsletter said, "Another focus of the 1996 product strategy will be the IBM Microkernel and microkernel-based versions of OS/2 Warp. Nearly 20 corporations, universities, and research institutes worldwide have licensed the microkernel, laying the foundation for a completely open microkernel standard." IBM planned a second feature-parity release for x86 and PowerPC in 1996, and version 2.0 of the microkernel was "distributed to microkernel adopters" early that year. This version was described as final, with support for x86 and ARM processors. IBM reportedly tested OS/2 on the never-released x86-compatible PowerPC 615.
At this point, the several-year future roadmap of Workplace OS included IBM Microkernel 2.0 and was intended to subsume the fully converged future of the OS/2 platform starting after the future release of OS/2 version 4, including ports to Pentium, Pentium Pro, MIPS, ARM, and Alpha CPUs.
Discontinuation
The Workplace OS project was finally canceled in March 1996 due to myriad factors: inadequate performance; low acceptance of the PowerPC Reference Platform; poor quality of the PowerPC 620 launch; extensive cost overruns; lack of AIX, Windows, or OS/400 personalities; and the overall low customer demand. The only mainstream desktop operating system running on PowerPC was Windows NT, which also lacked supply and demand. Industry analysts said that "the industry may have passed by the PowerPC". In 1996, IBM also closed the Power Personal Division responsible for personal PowerPC systems. IBM stopped developing new operating systems, and instead committed heavily to Linux, Java, and some Windows. In 2012, IBM described Linux as the "universal platform" in a way that happens to coincide with many of the essential design objectives of GUTS.
Reception
Industrial reception
Reception was a mix of enthusiasm and skepticism, as the young IT industry was already constantly grappling with the second-system effect, and was now presented with Workplace OS and PowerPC hardware as the ultimate second system duo to unify all preceding and future systems. On November 15, 1993, InfoWorld's concerns resembled the Osborne effect: "Now IBM needs to talk about this transition without also telling its customers to stop buying all the products it is already selling. Tough problem. Very little of the new platform that IBM is developing will be ready for mission-critical deployment until 1995 or 1996. So the company has to dance hard for two and maybe three years to keep already disaffected customers on board."
In 1994, an extensive analysis by Byte reported that the multiple personality concept in Workplace OS's beta design was more straightforward, foundational, and robust than that of the already-shipping Windows NT. It said "IBM is pursuing multiple personalities, while Microsoft appears to be discarding them" and conceded that "it's easier to create a robust plan than a working operating system with robust implementations of multiple personalities".
Upon the January 1996 developer final release, InfoWorld relayed the industry's dismay that the preceding two years of delays had made the platform "too little, too late", "stillborn", and effectively immediately discontinued. An analyst was quoted, "The customer base would not accept OS/2 and the PowerPC at the same time" because by the time IBM would eventually ship a final retail package of OS/2 on PowerPC machines, "the power/price ratio of the PowerPC processor just wasn't good enough to make customers accept all of the other drawbacks" of migrating to a new operating system alone.
In 2013, Ars Technica retrospectively characterized the years of hype surrounding Workplace OS as supposedly being "the ultimate operating system, the OS to end all OSes ... It would run on every processor architecture under the sun, but it would mostly showcase the power of POWER. It would be all-singing and all-dancing."
Internal analysis
In January 1995, four years after the conception and one year before the cancellation of Workplace OS, IBM announced the results of a very late stage analysis of the project's initial assumptions. This concluded that it is impossible to unify the inherent disparity in endianness between different proposed personalities of legacy systems, resulting in the total abandonment of the flagship plan for an AIX personality.
In May 1997, one year after its cancellation, one of its architects reflected back on the intractable problems of the project's software design and the limits of available hardware.
Academic analysis
In September 1997, a case study of the history of the development of Workplace OS was published by the University of California with key details having been verified by IBM personnel. These researchers concluded that IBM had relied throughout the project's history upon multiple false assumptions and overly grandiose ambitions, and had failed to apprehend the inherent difficulty of implementing a kernel with multiple personalities. IBM considered the system mainly as its constituent components and not as a whole, in terms of system performance, system design, and corporate personnel organization. IBM had not properly researched and proven the concept of generalizing all these operating system personalities before starting the project, or at any responsible timeframe during it — especially its own flagship AIX. IBM assumed that all the resultant performance issues would be mitigated by eventual deployment upon PowerPC hardware. The Workplace OS product suffered the second-system effect, including feature creep, with thousands of global contributing engineers across many disparate business units nationwide. The Workplace OS project had spent four years and $2 billion (or 0.6% of IBM's revenue for that period), which the report described as "one of the most significant operating systems software investments of all time" and "one of the largest operating system failures in modern times".
See also
Taligent, sister project of Workplace OS
IBM Future Systems project, a previous grand unifying project
Copland, another second system prototype from Apple
64DD, Nintendo's ambitious 1990s platform known for extreme repeated lateness and commercial failure
Notes
References
Further reading
OS/2 PowerPC Toolkit, Developer Connection CD-ROMs. The first doc is a description of the OS/2 ABI on PowerPC32. The second is an API addendum, including a description of new 32-bit console APIs.
Links at EDM/2
ARM operating systems
IBM operating systems
Mach (kernel)
Microkernel-based operating systems
OS/2
PowerPC operating systems
X86 operating systems
Microkernels | Workplace OS | [
"Technology"
] | 5,307 | [
"Computing platforms",
"OS/2"
] |
1,044,574 | https://en.wikipedia.org/wiki/Active%20Fuel%20Management | Active Fuel Management (formerly known as displacement on demand (DoD)) is a trademarked name for the automobile variable displacement technology from General Motors. It allows a V6 or V8 engine to "turn off" half of the cylinders under light-load conditions to improve fuel economy. Estimated performance on EPA tests shows a 5.5–7.5% improvement in fuel economy.
GM's Active Fuel Management technology used a solenoid to deactivate the lifters on selected cylinders of a pushrod V-layout engine.
GM used the Active Fuel Management technology on a range of engines including the GM Small Block Gen IV engine family, first-generation GM EcoTec3 engine family, second-generation GM High-Feature V6 DOHC engine family, and first-generation High-Feature V8 DOHC engine family. Vehicle applications included the 2005 Chevy TrailBlazer EXT, the GMC Envoy XL, Envoy XUV, and Pontiac Grand Prix.
Displacement on demand
General Motors was the first to modify existing production engines to enable cylinder deactivation, with the introduction of the Cadillac L62 "V8-6-4" in 1981.
Second generation
In 2004, the electronics side was improved greatly with the introduction of Electronic Throttle Control, electronically controlled transmissions, and transient engine and transmission controls. In addition, computing power was vastly increased. A solenoid control valve assembly integrated into the engine valley cover contains solenoid valves that provide a pressurized oil signal to specially designed hydraulic roller lifters provided by Eaton Corp. and Delphi. These lifters disable and re-enable exhaust and intake valve operation to deactivate and reactivate engine cylinders. Unlike the first generation system, only half of the cylinders can be deactivated. It is notable that the second generation system uses engine oil to hydraulically modulate engine valve function. As a result, the system is dependent upon the quality of the oil in the engine. As anti-foaming agents in engine oil are depleted, air may become entrained or dissolve in the oil, delaying the timing of hydraulic control signals. Similarly, engine oil viscosity and cleanliness are factors. Use of the incorrect oil type, i.e. SAE 10W40 instead of SAE 5W30, or the failure to change the engine oil or oil filter at factory recommended intervals, can also significantly impair system performance.
In 2001, GM showcased the 2002 Cadillac Cien concept car, which featured a Northstar XV12 engine with Displacement on Demand. Later that year, GM debuted the Opel Signum² concept car at the Frankfurt Auto Show, which used the global XV8 engine with displacement on demand. In 2003, GM unveiled the Cadillac Sixteen concept car at the Detroit Opera House, which featured an XV16 concept engine that can switch between 4, 8, and 16 cylinders.
On April 8, 2003, General Motors announced that this technology (now called Active Fuel Management) would be commercially available on the 2005 GMC Envoy XL, Envoy XUV and Chevrolet TrailBlazer EXT using the optional Vortec 5300 V8 engine. GM also extended the technology to the new High Value LZ8 V6 engine in the Chevrolet Impala and Monte Carlo as well as the 5.3L V8 LS4 engine in the last generation Chevrolet Impala SS, Monte Carlo SS and Pontiac Grand Prix GXP. In both designs, half of the cylinders can be switched off under light loads.
On July 21, 2008, General Motors unveiled the production version of the 2010 Chevrolet Camaro. The Camaro SS with an automatic transmission features the GM L99 engine, a development of the LS3 with Active Fuel Management which allowed it to run on four cylinders during light load conditions.
Third generation
In January 2018, GM announced an improved version of AFM called Dynamic Fuel Management to be initially released in Chevy Silverado trucks. This system shuts off any number of cylinders in a variety of combinations, maximizing fuel economy and avoiding switching between banks of cylinders. This is achieved by using oil pressure solenoids to collapse each individual hydraulic valve lifter, allowing for fully independent individual cylinder control. The system is based on Dynamic Skip Fire, a technology developed by California company Tula Technology. The 6.2L V8 engine of the Chevrolet Silverado incorporating the technology was named one of Ward's 10 Best Engines for 2019.
See also
Variable displacement
Honda's Variable Cylinder Management (VCM)
Chrysler's Multi-Displacement System (MDS)
Daimler AG's Active Cylinder Control (ACC)
Start-stop system
Cadillac Variable Displacement V8-6-4 L62 Engine
References
External links
XV8 concept engine
XV12 concept engine
Opel Signum² press release
GM Expands Deployment of Displacement on Demand
GM’s DoD Now Called “Active Fuel Management”
Engine technology
Automotive technology tradenames | Active Fuel Management | [
"Technology"
] | 979 | [
"Engine technology",
"Engines"
] |
1,044,681 | https://en.wikipedia.org/wiki/Mesh | A mesh is a barrier made of interlaced strands of metal, fiber or other flexible or ductile materials. A mesh is similar to a web or a net in that it has many interwoven strands.
Types
A plastic mesh may be extruded, oriented, expanded, woven or tubular. It can be made from polypropylene, polyethylene, nylon, PVC or PTFE.
A metal mesh may be woven, knitted, welded, expanded, sintered, photo-chemically etched or electroformed (screen filter) from steel or other metals.
In clothing, mesh is loosely woven or knitted fabric that has many closely spaced holes. Knitted mesh is frequently used for modern sports jerseys and other clothing like hosiery and lingerie.
A meshed skin graft is a piece of harvested skin that has been systematically fenestrated to create a mesh-like patch. Meshing of skin grafts provides coverage of a greater surface area at the recipient site, and also allows for the egress of excess serous or sanguinous fluid, which can compromise the graft establishment via formation of haematoma or seroma. However, it results in a rather pebbled appearance upon healing that may ultimately look less aesthetically pleasing.
Fiberglass mesh is a neatly woven, crisscross pattern of fiberglass thread that can be used to create new products such as door screens, filtration components, and reinforced adhesive tapes. It is commonly sprayed with a PVC coating to make it stronger, last longer, and to prevent skin irritation.
Coiled wire fabric is a type of mesh that is constructed by interlocking metal wire coils via a simple corkscrew method. The resulting spirals are then woven together to create a flexible metal fabric panel. Coiled wire fabric mesh is a product that is used by architects to design commercial and residential structures. It is also used in industrial settings to protect personnel and contain debris. Additionally, coiled wire fabric mesh is used for zoo enclosures, typically aviary and small mammal exhibits.
Uses
Meshes are often used to screen out insects. Wire screens on windows and mosquito netting are meshes.
Wire screens can be used to shield against radio frequency radiation, e.g. in microwave ovens and Faraday cages.
Metal and nylon wire mesh filters are used in filtration.
Wire mesh is used in guarding for secure areas and as protection in the form of vandal screens.
Wire mesh can be fabricated to produce park benches, waste baskets and other baskets for material handling.
Woven meshes are basic to screen printing.
Surgical mesh is used to provide a reinforcing structure in surgical procedures like inguinal hernioplasty, and umbilical hernia repair.
Meshes are used as drum heads in practice and electronic drum sets.
Fence for livestock or poultry (chicken wire or hardware cloth)
Humane animal trapping uses woven or welded wire mesh cages (chicken wire or hardware cloth) to trap wild animals like raccoons and skunks in populated areas.
Meshes can be used for eyes in masks.
See also
Expanded metal
Faraday cage
Gauze
Wire gauze
Heating mantle
Latticework
Sieve
References
External links
Woven fabrics
Net fabrics
Filters
Building materials
Steel | Mesh | [
"Physics",
"Chemistry",
"Engineering"
] | 659 | [
"Chemical equipment",
"Building engineering",
"Filters",
"Architecture",
"Construction",
"Materials",
"Filtration",
"Matter",
"Building materials"
] |
1,044,685 | https://en.wikipedia.org/wiki/Comb%20filter | In signal processing, a comb filter is a filter implemented by adding a delayed version of a signal to itself, causing constructive and destructive interference. The frequency response of a comb filter consists of a series of regularly spaced notches in between regularly spaced peaks (sometimes called teeth) giving the appearance of a comb.
Comb filters exist in two forms, feedforward and feedback, which refer to the direction in which signals are delayed before they are added to the input.
Comb filters may be implemented in discrete-time or continuous-time forms, which are very similar.
Applications
Comb filters are employed in a variety of signal processing applications, including:
Cascaded integrator–comb (CIC) filters, commonly used for anti-aliasing during interpolation and decimation operations that change the sample rate of a discrete-time system.
2D and 3D comb filters implemented in hardware (and occasionally software) in PAL and NTSC analog television decoders, reduce artifacts such as dot crawl.
Audio signal processing, including delay, flanging, physical modelling synthesis and digital waveguide synthesis. If the delay is set to a few milliseconds, a comb filter can model the effect of acoustic standing waves in a cylindrical cavity or in a vibrating string.
In astronomy the astro-comb promises to increase the precision of existing spectrographs by nearly a hundredfold.
In acoustics, comb filtering can arise as an unwanted artifact. For instance, two loudspeakers playing the same signal at different distances from the listener create a comb filtering effect on the audio. In any enclosed space, listeners hear a mixture of direct sound and reflected sound. The reflected sound takes a longer, delayed path compared to the direct sound, and a comb filter is created where the two mix at the listener. Similarly, comb filtering may result from mono mixing of multiple mics, hence the 3:1 rule of thumb that neighboring mics should be separated by at least three times the distance from each mic to its source.
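As a worked illustration of the delayed-path mechanism just described (the delay value is chosen for illustration and is not from the original text): if a reflection arrives a time $\tau$ after the direct sound, cancellation occurs at the frequencies where the delayed copy is in antiphase with the direct sound,

$$f_{\text{null}} = \frac{2m + 1}{2\tau}, \qquad m = 0, 1, 2, \dots$$

so a delay of $\tau = 1\ \text{ms}$ places notches at 500 Hz, 1.5 kHz, 2.5 kHz, and so on. Complete nulls occur only if the two arrivals have equal amplitude; otherwise the notches are shallower.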
Discrete time implementation
Feedforward form
The general structure of a feedforward comb filter is described by the difference equation:

$y[n] = x[n] + \alpha x[n - K]$

where $K$ is the delay length (measured in samples), and $\alpha$ is a scaling factor applied to the delayed signal. The $z$ transform of both sides of the equation yields:

$Y(z) = \left(1 + \alpha z^{-K}\right) X(z)$

The transfer function is defined as:

$H(z) = \frac{Y(z)}{X(z)} = 1 + \alpha z^{-K}$
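To make the difference equation concrete, here is a minimal sketch in Python with NumPy; the function name feedforward_comb and the example parameters are illustrative choices, not part of the original text:

```python
import numpy as np

def feedforward_comb(x, K, alpha):
    """Feedforward comb: y[n] = x[n] + alpha * x[n - K], with K >= 1."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    y[K:] += alpha * x[:-K]  # add the K-sample-delayed, scaled copy of the input
    return y

# Feeding in a unit impulse exposes the impulse response:
# one impulse at n = 0 and a second, scaled impulse at n = K.
impulse = np.zeros(16)
impulse[0] = 1.0
print(feedforward_comb(impulse, K=4, alpha=0.5))
```

Because the output depends only on current and past inputs, this is a finite impulse response filter, as the Impulse response section below notes.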
Frequency response
The frequency response of a discrete-time system expressed in the $z$-domain is obtained by the substitution $z = e^{j\omega}$, where $j$ is the imaginary unit and $\omega$ is angular frequency. Therefore, for the feedforward comb filter:

$H\left(e^{j\omega}\right) = 1 + \alpha e^{-j\omega K}$

Using Euler's formula, the frequency response is also given by

$H\left(e^{j\omega}\right) = \left[1 + \alpha \cos(\omega K)\right] - j \alpha \sin(\omega K)$

Often of interest is the magnitude response, which ignores phase. This is defined as:

$\left| H\left(e^{j\omega}\right) \right|$

In the case of the feedforward comb filter, this is:

$\left| H\left(e^{j\omega}\right) \right| = \sqrt{(1 + \alpha^2) + 2\alpha \cos(\omega K)}$

The term $(1 + \alpha^2)$ is constant, whereas the term $2\alpha \cos(\omega K)$ varies periodically in $\omega$. Hence the magnitude response of the comb filter is periodic.
The graphs show the periodic magnitude response for various values of $\alpha$. Some important properties:
The response periodically drops to a local minimum (sometimes known as a notch), and periodically rises to a local maximum (sometimes known as a peak or a tooth).
For positive values of $\alpha$, the first minimum occurs at half the delay frequency and repeats at odd multiples of half the delay frequency thereafter: $f = \frac{1}{2K}, \frac{3}{2K}, \frac{5}{2K}, \dots$ (in units of the sampling rate).
The levels of the maxima and minima are always equidistant from 1.
When $|\alpha| = 1$, the minima have zero amplitude. In this case, the minima are sometimes known as nulls.
The maxima for positive values of $\alpha$ coincide with the minima for negative values of $\alpha$, and vice versa.
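As a numerical sanity check on the magnitude response derived above, the following sketch (an illustrative snippet, not from the article) evaluates $\left| H\left(e^{j\omega}\right) \right|$ directly and compares it against the closed form; the grid size is chosen so the sample points land exactly on the minima:

```python
import numpy as np

K, alpha = 8, 0.5
w = np.linspace(0, 2 * np.pi, 1601)  # grid step pi/800 hits w = pi/K exactly

H = 1 + alpha * np.exp(-1j * w * K)  # H(e^{jw}) evaluated directly
closed_form = np.sqrt((1 + alpha**2) + 2 * alpha * np.cos(w * K))

assert np.allclose(np.abs(H), closed_form)  # the two expressions agree
print(np.abs(H).max(), np.abs(H).min())     # 1.5 and 0.5: equidistant from 1
```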
Impulse response
The feedforward comb filter is one of the simplest finite impulse response filters. Its response is simply the initial impulse with a second impulse after the delay.
Pole–zero interpretation
Looking again at the $z$-domain transfer function of the feedforward comb filter:

$H(z) = 1 + \alpha z^{-K} = \frac{z^K + \alpha}{z^K}$

the numerator is equal to zero whenever $z^K = -\alpha$. This has $K$ solutions, equally spaced around a circle in the complex plane; these are the zeros of the transfer function. The denominator is zero at $z^K = 0$, giving $K$ poles at $z = 0$. This leads to a pole–zero plot like the ones shown.
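The zeros can also be located numerically. This illustrative snippet (not from the article) builds the numerator polynomial $z^K + \alpha$ and confirms that its $K$ roots sit equally spaced on a circle of radius $|\alpha|^{1/K}$:

```python
import numpy as np

K, alpha = 6, 0.5
coeffs = np.zeros(K + 1)            # coefficients of z^K + alpha,
coeffs[0], coeffs[-1] = 1.0, alpha  # highest power first, as np.roots expects

zeros = np.roots(coeffs)
print(np.abs(zeros))             # all equal to alpha ** (1 / K)
print(np.sort(np.angle(zeros)))  # K angles spaced by 2 * pi / K
```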
Feedback form
Similarly, the general structure of a feedback comb filter is described by the difference equation:

$y[n] = x[n] + \alpha y[n - K]$

This equation can be rearranged so that all terms in $y$ are on the left-hand side, and then taking the $z$ transform:

$\left(1 - \alpha z^{-K}\right) Y(z) = X(z)$

The transfer function is therefore:

$H(z) = \frac{Y(z)}{X(z)} = \frac{1}{1 - \alpha z^{-K}}$
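A minimal sketch of the feedback form, assuming SciPy's lfilter is available; the coefficient vectors are read straight off the transfer function (numerator $1$, denominator $1 - \alpha z^{-K}$), and the helper name feedback_comb is illustrative:

```python
import numpy as np
from scipy.signal import lfilter

def feedback_comb(x, K, alpha):
    """Feedback comb: y[n] = x[n] + alpha * y[n - K], stable for |alpha| < 1."""
    b = [1.0]                   # numerator of H(z)
    a = np.zeros(K + 1)         # denominator 1 - alpha * z^-K
    a[0], a[-1] = 1.0, -alpha
    return lfilter(b, a, x)

# Fed a unit impulse, the filter rings indefinitely: impulses of height
# alpha**m appear at n = m*K, decaying geometrically (see Impulse response below).
impulse = np.zeros(20)
impulse[0] = 1.0
print(feedback_comb(impulse, K=4, alpha=0.5))
```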
Frequency response
By substituting $z = e^{j\omega}$ into the feedback comb filter's $z$-domain expression:

$H\left(e^{j\omega}\right) = \frac{1}{1 - \alpha e^{-j\omega K}}$

the magnitude response becomes:

$\left| H\left(e^{j\omega}\right) \right| = \frac{1}{\sqrt{(1 + \alpha^2) - 2\alpha \cos(\omega K)}}$
Again, the response is periodic, as the graphs demonstrate. The feedback comb filter has some properties in common with the feedforward form:
The response periodically drops to a local minimum and rises to a local maximum.
The maxima for positive values of $\alpha$ coincide with the minima for negative values of $\alpha$, and vice versa.
For positive values of $\alpha$, the first maximum occurs at 0 and repeats at integer multiples of the delay frequency thereafter: $f = 0, \frac{1}{K}, \frac{2}{K}, \dots$ (in units of the sampling rate).
However, there are also some important differences because the magnitude response has a term in the denominator:
The levels of the maxima and minima are no longer equidistant from 1. For positive values of $\alpha$, the maxima have an amplitude of $\frac{1}{1 - \alpha}$.
The filter is only stable if $|\alpha|$ is strictly less than 1. As can be seen from the graphs, as $|\alpha|$ increases, the amplitude of the maxima rises increasingly rapidly.
Impulse response
The feedback comb filter is a simple type of infinite impulse response filter. If stable, the response simply consists of a repeating series of impulses decreasing in amplitude over time.
Pole–zero interpretation
Looking again at the $z$-domain transfer function of the feedback comb filter:

$H(z) = \frac{1}{1 - \alpha z^{-K}} = \frac{z^K}{z^K - \alpha}$

This time, the numerator is zero at $z^K = 0$, giving $K$ zeros at $z = 0$. The denominator is equal to zero whenever $z^K = \alpha$. This has $K$ solutions, equally spaced around a circle in the complex plane; these are the poles of the transfer function. This leads to a pole–zero plot like the ones shown below.
Continuous time implementation
Comb filters may also be implemented in continuous time, which can be expressed in the Laplace domain as a function of the complex frequency-domain parameter $s$, analogous to $z$ in the discrete-time case. Analog circuits use some form of analog delay line for the delay element. Continuous-time implementations share all the properties of the respective discrete-time implementations.
Feedforward form
The feedforward form may be described by the equation:

$y(t) = x(t) + \alpha x(t - \tau)$

where $\tau$ is the delay (measured in seconds). This has the following transfer function:

$H(s) = 1 + \alpha e^{-s\tau}$
The feedforward form consists of an infinite number of zeros spaced along the jω axis (which corresponds to the Fourier domain).
Feedback form
The feedback form has the equation:

$y(t) = x(t) + \alpha y(t - \tau)$

and the following transfer function:

$H(s) = \frac{1}{1 - \alpha e^{-s\tau}}$
The feedback form consists of an infinite number of poles spaced along the jω axis.
See also
Dirac comb
Fabry–Pérot interferometer
References
External links
Signal processing
Filter theory | Comb filter | [
"Technology",
"Engineering"
] | 1,354 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Filter theory"
] |
1,044,750 | https://en.wikipedia.org/wiki/Brettanomyces | Brettanomyces is a non-spore forming genus of yeast in the family Saccharomycetaceae, and is often colloquially referred to as "Brett". The genus name Dekkera is used interchangeably with Brettanomyces, as it describes the teleomorph or spore forming form of the yeast, but is considered deprecated under the one fungus, one name change. The cellular morphology of the yeast can vary from ovoid to long "sausage" shaped cells. The yeast is acidogenic, and when grown on glucose rich media under aerobic conditions, produces large amounts of acetic acid. Brettanomyces is important to both the brewing and wine industries due to the sensory compounds it produces.
In the wild, Brettanomyces lives on the skins of fruit.
History
In 1889, Seyffert of the Kalinkin Brewery in St. Petersburg was the first to isolate a "Torula" from English beer which produced the typical "English" taste in lager beer, and in 1899 JW Tullo at Guinness described two types of "secondary yeast" in Irish stout. However N. Hjelte Claussen at the Carlsberg brewery was the first to publish a description in 1904, following a 1903 patent (UK patent GB190328184) that was the first patented microorganism in history.
Etymology
The term Brettanomyces comes from the Greek for "British fungus". It is a compound of Ancient Greek Βρεττανός (Brettanós) : British and μύκης (múkēs) : fungus.
Wine
When Brettanomyces grows in wine it produces several compounds that can alter the palate and bouquet. At low levels some winemakers agree that the presence of these compounds has a positive effect on wine, contributing to complexity, and giving an aged character to some young red wines. Many wines even rely on Brettanomyces to give their distinctive character, such as Château Musar. However, when the levels of the sensory compounds greatly exceed the sensory threshold, their perception is almost always negative. The sensory threshold can differ between individuals, and some find the compounds more unattractive than others. While it can be desirable at lower levels, there is no guarantee that high levels will not be produced. As Brettanomyces can potentially spoil a wine it is generally seen as a wine spoilage yeast, and its presence in wine as a wine fault.
Wines that have been contaminated with Brettanomyces taints are often referred to as "Bretty", "metallic", or as having "Brett character". Brettanomyces taint in wine is also sometimes incorrectly identified as cork taint.
Sensory compounds
The compounds responsible for contributing certain sensory characters to wine are:
4-ethylphenol: Band-aids, barnyard, horse stable, antiseptic
4-ethylguaiacol: Bacon, spice, cloves, smoky
isovaleric acid: Sweaty saddle, cheese, rancidity
These compounds can impart completely different sensory properties to a wine when they are present in different ratios.
Origins in the winery
Brettanomyces is most associated with barrel aged red wines, but has also been found in Chardonnay and Sauvignon blanc. In some cases the yeast has caused contamination in sparkling wines produced by the méthode champenoise when en tirage. It is thought Brettanomyces can be introduced to a winery by insect vectors such as fruit flies, or by purchasing Brett-contaminated wine barrels. The ability to metabolise the disaccharide cellobiose, along with the irregular surface of a barrel interior, provide ideal conditions for Brettanomyces growth. Once the yeast is in a winery it is hard to eradicate and is spread readily by unsanitised equipment.
Control measures
The growth of Brettanomyces is best controlled by the addition of sulfur dioxide, to which the yeast is particularly sensitive. The addition of other sterilising compounds such as dimethyl dicarbonate often has a similar effect. Alternatively the wine can be bottled after sterile filtration, which physically removes the yeast. Wines that are vinified to low residual sugar levels, such as <1.0 g/L, are also less likely to be spoiled as the main growth substrate has been limited. However, growth has been reported at levels below this and it is assumed that the yeast can use other substrates.
Beer
In most beer styles Brettanomyces is typically viewed as a contaminant and the characteristics it imparts are considered unwelcome "off-flavours". However, in certain styles, particularly certain traditional Belgian ales, it is appreciated and encouraged. Gueuze and other lambic beers owe their unique flavour profiles to Brettanomyces, as do wild yeast saison or farmhouse styles; and it is also found in Oud Bruin and Flanders red ale.
In Orval, Brettanomyces is added before the final bottle fermentation.
Several American craft breweries intentionally use Brettanomyces in their beers. This use began with a renewed interest in Belgian style ales and later formed new styles altogether (Brewers Association, 2007 Great American Beer Festival Style Guidelines, section 13a, 16). Some breweries use 100% Brettanomyces for the fermentation of some of their beers, and omit Saccharomyces from the recipe. Some American brewers that use Brettanomyces may also include lactic acid producing bacteria such as Lactobacillus and Pediococcus in order to provide sourness to the beer.
While Brett is sometimes pitched into the fermenter, aging in wood barrels previously inoculated with Brettanomyces is another method used to impart the complexity contributed by these strains of yeast.
See also
4-ethylguaiacol
4-ethylphenol
Brettanomyces bruxellensis
Lambic
Wine fault
References
Footnotes
External links
Brettanomyces at Milk the Funk Wiki
Oenology
Yeasts used in brewing
Saccharomycetes
Yeasts | Brettanomyces | [
"Biology"
] | 1,265 | [
"Yeasts",
"Fungi"
] |
1,044,752 | https://en.wikipedia.org/wiki/De%20Phenomenis%20in%20Orbe%20Lunae | De Phenomenis in Orbe Lunae is a 1612 book by Collegio Romano philosophy professor Giulio Cesare la Galla that describes emission of light by a stone. La Galla's inspiration came from Galileo's debate with Vincenzo Casciarolo regarding a "lapis solaris," a stone that emitted light seemingly on its own. In De Phenomenis, de Galla asserts that the stone was only able to emit light after the stone itself had calcified. It released "a certain quantity of fire and light" that it had absorbed, just as water would be absorbed by a sponge.
Robert Burton discusses De Phenomenis in The Anatomy of Melancholy.
References
1612 books
Astronomy books | De Phenomenis in Orbe Lunae | [
"Astronomy"
] | 150 | [
"Astronomy books",
"Astronomy book stubs",
"Works about astronomy",
"Astronomy stubs"
] |
1,044,787 | https://en.wikipedia.org/wiki/Silver%20fulminate | Silver fulminate (AgCNO) is the highly explosive silver salt of fulminic acid.
Silver fulminate is a primary explosive, but has limited use as such due to its extreme sensitivity to impact, heat, pressure, and electricity. The compound becomes progressively sensitive as it is aggregated, even in small amounts; the touch of a falling feather, the impact of a single water droplet, or a small static discharge are all capable of explosively detonating an unconfined pile of silver fulminate no larger than a dime and no heavier than a few milligrams. Aggregating larger quantities is impossible, due to the compound's tendency to self-detonate under its own weight.
Silver fulminate was first prepared in 1800 by Edward Charles Howard in his research project to prepare a large variety of fulminates. Along with mercury fulminate, it is the only fulminate stable enough for commercial use. Detonators using silver fulminate were used to initiate picric acid in 1885, but since have been used only by the Italian Navy. The current commercial use has been in producing non-damaging novelty noisemakers as children's toys.
Structure
Silver fulminate occurs in two polymorphic forms, an orthorhombic one and a trigonal one with a rhombohedral lattice. The trigonal polymorph consists of cyclic hexamers, (AgCNO)6.
Properties
Fulminates are toxic, about the same as cyanides. When pure, silver fulminate is chemically stable, not decomposing after years of storage. Like many silver salts, it darkens with light exposure. It is slightly soluble in cold water and can be recrystallized using hot water. It can also be recrystallized from a 20% solution of ammonium acetate. It is not hygroscopic and can explode when moist or under water; it was reported to remain explosive after 37 years under water. It explodes upon contact with concentrated sulfuric acid, chlorine, or bromine, but not upon contact with iodine. It is insoluble in nitric acid, but dissolves in ammonia, alkali chlorides, alkali cyanides, aniline, pyridine, and potassium iodide by forming complexes. Concentrated hydrochloric acid decomposes it non-explosively with a hissing noise; thiosulfate also decomposes it non-explosively, and can be used for disposal.
Preparation
This compound can be prepared by pouring a solution of silver nitrate in nitric acid into ethanol, under careful control of the reaction conditions, to avoid an explosion. The reaction is usually done at 80–90 °C; at 30 °C, the precipitate may not form. Only tiny amounts of silver fulminate should be prepared at once, as even the weight of the crystals can cause them to self-detonate.
Another way to make silver fulminate is to react silver carbonate with ammonia in solution.
4 Ag2CO3 + 4 NH3 → 4 AgCNO + 6 H2O + 4 Ag + O2
Silver fulminate also forms when nitrogen oxide gas is passed through a solution of silver nitrate in ethanol.
Silver fulminate can be prepared unintentionally, when an acidic solution of silver nitrate comes in contact with alcohol. This is a hazard in some formulations of chemically silvering mirrors.
Novelty explosive
Silver fulminate, often in combination with potassium chlorate, is used in trick noise-makers known as "throw-downs", "crackers", "snappers", "whippersnappers", "pop-its", or "bang snaps", a popular type of novelty firework. They contain approximately 200 milligrams of fine gravel coated with a minute quantity (approximately 80 micrograms) of silver fulminate. When thrown against a hard surface, the impact is sufficient to detonate the tiny quantity of explosive, creating a small salute from the supersonic detonation. Snaps are designed to be incapable of producing damage (even when detonated against skin) due to the buffering effect provided by the much greater mass of the gravel medium. It is also the chemical found in Christmas crackers having first been used for that purpose by Tom Smith in 1860. The chemical is painted on one of two narrow strips of card, with abrasive on the second. When the cracker is pulled, the abrasive detonates the silver fulminate.
A fulminate mixture with 10–20% potassium chlorate is cheaper and more brisant than the fulminate alone.
Silver fulminate and "fulminating silver"
Silver fulminate is often confused with silver nitride, silver azide, or fulminating silver. "Fulminating silver", though always referring to an explosive silver-containing substance, is an ambiguous term. While it may be a synonym of silver fulminate, it may also refer to the nitride or azide, the decomposition product of Tollen's reagent, or an alchemical mixture, which does not contain the fulminate anion.
See also
Primary explosive
Justus von Liebig
Friedrich Woehler
Silver cyanate
Isomerism
Fulminate
Fulminic acid
Potassium fulminate
Mercury(II) fulminate
References
Further reading
Silver compounds
Fulminates | Silver fulminate | [
"Chemistry"
] | 1,129 | [
"Explosive chemicals",
"Fulminates"
] |
1,044,849 | https://en.wikipedia.org/wiki/Turning%20Torso | Turning Torso is a neo-futurist residential skyscraper built in Malmö, Sweden, in 2005. It was the tallest building in the Nordic region until September 2022, when it was surpassed by Karlatornet in Gothenburg. Located on the Swedish side of the Öresund strait, it was built and is owned by Swedish cooperative housing association HSB. It is regarded as the second twisted skyscraper in the world to receive the title after Telekom Tower in Malaysia.
It was designed by Spanish architect, structural engineer, sculptor and painter Santiago Calatrava and officially opened on 27 August 2005. It reaches a height of 190 metres (623 ft), with 54 stories and 147 apartments. Turning Torso won the 2005 Gold Emporis Skyscraper Award and, in 2015, the 10 Year Award from the Council on Tall Buildings and Urban Habitat.
Design
Turning Torso is based on Twisting Torso, a white marble sculpture by Calatrava that was based on the form of a twisting human being.
In 1999, HSB Malmö's former managing director, Johnny Örbäck, saw the sculpture in a brochure presenting Calatrava in connection with his contribution to the architectural competition for the Öresund Bridge. It was on this occasion that Örbäck was inspired to build HSB Turning Torso. Shortly afterwards he travelled to Zurich to meet Calatrava, and ask him to design a residential building based on the idea of a structure of twisting cubes.
It is a solid, immobile building constructed in nine segments of five-story pentagons that twist relative to each other as it rises; the topmost segment is twisted 90 degrees clockwise from the ground floor. Each floor consists of an irregular pentagonal shape rotating around the vertical core, which is supported by an exterior steel framework. The two bottom segments are intended as office space. Segments three to nine house 147 rental apartments.
Construction
Construction started in February 2001. One reason for building Turning Torso was to re-establish a recognisable skyline for Malmö since the removal in 2002 of the Kockums Crane, which had stood close to the site of Turning Torso. The local politicians deemed it important for the inhabitants to have a new symbol for Malmö in lieu of the crane that had been used for shipbuilding and somewhat symbolised the city's blue collar roots.
The construction of part of this building was featured on the Discovery Channel Extreme Engineering TV programme, which showed how a floor of the building was constructed.
Prior to the construction of Turning Torso, the Kronprinsen had been the city's tallest building.
The apartments were initially supposed to be sold, but insufficient interest resulted in the apartments being let. The owner has several times unsuccessfully tried to sell the building. Construction costs for the building were over twice the initial budgeted costs.
Events
On 18 August 2006, Austrian skydiver Felix Baumgartner parachuted onto the Turning Torso, and then jumped off it.
Floor 49 is home to the public observation deck, while floors 50–52 contain a private club, meeting and event spaces, the reception, and the venue restaurant.
Floors 53 and 54 in the Turning Torso are conference floors booked and managed by Sky High Meetings. Since 2009 the owner, HSB, has decided to let the public visit these floors, but only on special scheduled days, and pre-booking is required.
Gallery
See also
List of tallest buildings in the world
List of tallest buildings in Europe
List of tallest buildings in Sweden
List of twisted buildings
Shanghai Tower, Tallest twisted building
Azrieli Sarona Tower
Karla Tower
List of tallest buildings in Scandinavia
Notes
References
External links
Fullscreen panorama from Turning Torso
PERI GmbH - From a sculpture to a building
"The Sculptor", The New Yorker, 31 October 2005
Torso Tower Blog
Short films of Turning Torso from various locations
Buildings and structures in Malmö
Towers in Sweden
Santiago Calatrava structures
Residential skyscrapers
Skyscraper office buildings in Sweden
Residential buildings completed in 2005
2005 establishments in Sweden
Twisted buildings and structures
Modernist architecture in Sweden
Postmodern architecture
High-tech architecture
Landmarks in Sweden
Neo-futurist architecture
21st-century establishments in Skåne County | Turning Torso | [
"Engineering"
] | 820 | [
"Postmodern architecture",
"Architecture"
] |
1,044,906 | https://en.wikipedia.org/wiki/Flipper%20%28anatomy%29 | A flipper is a broad, flattened limb adapted for aquatic locomotion. It refers to the fully webbed, swimming appendages of aquatic vertebrates that are not fish.
In animals with two flippers, such as whales, the flipper refers solely to the forelimbs. In animals with four flippers, such as pinnipeds and sea turtles, one may distinguish fore- and hind-flippers, or pectoral flippers and pelvic flippers.
Animals with flippers include penguins (whose flippers are also called wings), cetaceans (e.g., dolphins and whales), pinnipeds (e.g., walruses, earless and eared seals), sirenians (e.g., manatees and dugongs), and marine reptiles such as the sea turtles and the now-extinct plesiosaurs, mosasaurs, ichthyosaurs, and metriorhynchids.
Usage of the terms "fin" and "flipper" is sometimes inconsistent, even in the scientific literature. However, the hydrodynamic control surfaces of fish are always referred to as "fins" and never "flippers". Tetrapod limbs which have evolved into fin-like structures are usually (but not always) called "flippers" rather than fins. The dorsal structure on cetaceans is called the "dorsal fin" and the large cetacean tails are referred to primarily as flukes but occasionally as "caudal fins"; neither of these structures are flippers.
Some flippers are very efficient hydrofoils, analogous to wings (airfoils), used to propel and maneuver through the water with great speed and maneuverability (see Foil). Swimming appendages with the digits still apparent, as in the webbed forefeet of amphibious turtles and platypus, are considered paddles rather than flippers.
Locomotion
For all species of aquatic vertebrates, swimming performance depends upon the animal's control surfaces, which include flippers, flukes and fins. Flippers are used for different types of propulsion, control, and rotation. In cetaceans, they are primarily used for control while the fluke is used for propulsion.
The evolution of flippers in penguins came at the expense of their flying capabilities, in spite of evolving from an auk-like ancestor that could 'fly' underwater as well as in the air. Form constrains function, and the wings of diving flying species, such as the murre or cormorant, have not developed into flippers. The flippers of penguins became thicker, denser and smaller while being modified for hydrodynamic properties.
Hydrodynamics
Cetacean flippers may be viewed as being analogous to modern engineered hydrofoils, which have hydrodynamic properties: lift coefficient, drag coefficient and efficiency. Flippers are one of the principal control surfaces of cetaceans (whales, dolphins and porpoises) due to their position in front of the center of mass, and their mobility which provides three degrees of freedom.
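For reference, these coefficients have the same definitions as in aerodynamics; the relations below are the standard ones (here $\rho$ is the water density, $v$ the swimming speed, and $S$ the flipper planform area), with hydrodynamic efficiency often taken as the lift-to-drag ratio:

```latex
C_L = \frac{L}{\tfrac{1}{2}\rho v^2 S}, \qquad
C_D = \frac{D}{\tfrac{1}{2}\rho v^2 S}, \qquad
\text{efficiency} \sim \frac{C_L}{C_D} = \frac{L}{D}
```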
Flippers on humpback whales (Megaptera novaeangliae) have non-smooth leading edges, yet demonstrate superior fluid dynamics to the characteristically smooth leading edges of artificial wings, turbines and other kinds of blades. The whale's surprising dexterity is due primarily to its non-conventional flippers, which have large, irregular looking bumps called tubercles across their leading edges. The tubercles break up the passage of water, maintaining even channels of the fast-moving water, limiting turbulence and providing greater maneuverability.
The foreflippers used by the pinnipeds act as oscillatory hydrofoils. Both fore and hind flippers are used for turning. A 2007 study of Steller's sea lion found that a majority of thrust was produced during the drive phase of the fore flipper stroke cycle. Although previous findings on eared seals suggested that thrust was generated by the initial outward movement of the fore flippers or the terminal drag-based paddling phase, the 2007 study found that little or no thrust was generated during those phases. Swimming performance in sea lions is modulated by changes in the duration and intensity of movements without changing their sequence. Using criteria based on velocity and the minimum radius of turns, pinnipeds' maneuverability is superior to cetaceans but inferior to many fish.
Evolution of flippers
Marine mammals have evolved several times, developing similar flippers. The forelimbs of cetaceans, pinnipeds, and sirenians present a classic example of convergent evolution. There is widespread convergence at the gene level. Distinct substitutions in common genes created various aquatic adaptations, most of which constitute parallel evolution because the substitutions in question are not unique to those animals.
When comparing cetaceans to pinnipeds to sirenians, 133 parallel amino acid substitutions occur. Comparing and contrasting cetaceans-pinnipeds, cetaceans-sirenians, and pinnipeds-sirenians, 2,351, 7,684, and 2,579 substitutions occur, respectively.
Digit processes
Whales and their relatives have a soft tissue flipper that encases most of the forelimb, and elongated digits with an increased number of phalanges. Hyperphalangy is an increase in the number of phalanges beyond the plesiomorphic mammal condition of three phalanges-per-digit. This trait is characteristic of secondarily aquatic vertebrates with flippers. Hyperphalangy was present among extinct ichthyosaurs, plesiosaurs, and mosasaurs.
Cetaceans are the sole mammals to have evolved hyperphalangy. Though the flippers of modern cetaceans are not correctly described as webbed feet, the intermediate webbed limbs of ancient semiaquatic cetaceans may be described as such. The presence of interdigital webbing within the fossils of semi-aquatic Eocene cetaceans was probably the result of BMP antagonists counteracting interdigital apoptosis during embryonic limb development. Modifications to signals in these tissues likely contributed to the origin of an early form of hyperphalangy in fully aquatic cetaceans about 35 million years ago. The process continued over time, and a very derived form of hyperphalangy, with six or more phalanges per digit, evolved convergently in rorqual whales and oceanic dolphins, and was likely associated with another wave of signaling within the interdigital tissues.
Although toothed cetaceans have five digits, most baleen whales have four digits and even lack a metacarpal. In the latter (mysticetes), the first digit ray may have been lost as late as 14 million years ago.
Flipper evolution in turtles
Sea turtles evolved in the Cretaceous. Their flippers developed gradually by a series of stepwise adaptations, with the most fundamental traits of flippers appearing in the deepest nodes (the earliest times) in their phylogeny. These initial traits evolved only once among chelonioids, and the bauplan was refined through a secondary process of specialization.
Evers et al. identified characters of the pectoral girdle and forelimb that are associated with the modification of sea turtle arms and hands into flippers.
Key biomechanical features of flippers
flattening of elements
lengthening of the humerus
reduction of mobility between individual flipper elements
Fundamental traits for flipper movement
lateral position of the humeral process
change in the angle of the internal scapula
Foraging behavior
Because of the specialization of flippers and their hydrodynamic constraints, it was thought that they were not used to significantly interact with the environment, unlike the legs of terrestrial tetrapods. However, the use of limbs for foraging is documented in marine tetrapods. Use of the flippers for foraging behavior is observed in marine mammals such as walruses, seals, and manatees, and even in reptiles such as sea turtles. Among turtles, observed behaviors include a green turtle holding a jellyfish, a loggerhead rolling a scallop on the sea floor, and a hawksbill turtle pushing against a reef for leverage to rip an anemone loose. Based on presumed limb use in ancestral turtles, these behaviors may have occurred as long as 70 million years ago.
See also
Fish fin
Homology (biology)
References
Animal anatomy
Evolution of animals | Flipper (anatomy) | [
"Biology"
] | 1,775 | [
"Animals",
"Evolution of animals"
] |
1,044,919 | https://en.wikipedia.org/wiki/Inco%20Superstack | The Inco Superstack in Sudbury, Ontario, with a height of , is the tallest chimney in Canada and the Western Hemisphere, and the second-tallest freestanding chimney in the world after the Ekibastuz GRES-2 Power Station in Kazakhstan. It is also the second-tallest freestanding structure of any type in Canada, behind the CN Tower but ahead of First Canadian Place. As of 2023, it is the 51st-tallest freestanding structure in the world. The Superstack is located on top of the largest nickel smelting operation in the world at Vale's Copper Cliff processing facility in the city of Greater Sudbury.
In 2018, Vale announced that the stack would be decommissioned and dismantled, beginning in 2020. Two new, smaller stacks were constructed under the company's Clean Atmospheric Emissions Reduction Project. In July 2020, Vale announced that the Superstack had been officially taken out of service, but would remain operational in standby mode for two more months as a backup in the event of a malfunction in the new system, following which the dismantling of the Superstack would begin. As of 2024, however, Vale has not yet announced the awarding of a demolition contract on the Superstack, and some have called for the stack to be left in place as a tourist attraction; in September 2024, Vale announced an updated plan which will see the stack dismantled by 2029.
In addition to further reducing sulphur dioxide emissions by 85 per cent, the decommissioning of the stack was expected to cut the complex's natural gas consumption in half.
History
The Superstack was built by Inco Limited (whose operations were later purchased by Vale) at an estimated cost of $25 million. Construction on the structure was underway during the Sudbury tornado of August 20, 1970; the structure swayed heavily in the wind, but remained standing and suffered only minor damage. Six workers were on top of the construction platform when the storm hit, and all survived. The same day was the final day of construction on the stack, with construction fully completed by the evening of August 21, 1970.
The stack entered into full operation in 1972. From the date of its completion until the Ekibastuz GRES-2 chimney was constructed in 1987, it was the world's tallest smokestack. Between 1972 and 1975 it was the tallest freestanding structure in Canada.
Prior to the construction of the Superstack, the waste gases contributed to severe local ecological damage. The Copper Cliff smelter was already home to some of the world's tallest stacks, including two chimneys constructed in 1928–29 and 1936. However, these proved to be insufficient; compounded by open coke beds in the early to mid-20th century and by logging for fuel, a near-total loss of native vegetation occurred. Of particular interest to geologists are the now-exposed rocky outcrops, which have been permanently stained charcoal black, first by the pollution wafting over the decades from the roasting yards, then by acid rain, in a layer which penetrates up to three inches into the once pink-grey granite.
The Superstack was built to disperse sulphur gases and other byproducts of the smelting process away from the city of Sudbury. It did this by placing the gases high in the air, where they normally blew right past the city on the prevailing winds. As a result, these gases could be detected in the atmosphere within a wide radius of the Inco plant. During the 1970s and 80s, the sulphur dioxide plume formed a permanent, opaque, cloud-like formation running across the entire horizon as seen from a distance. Periodic inversions would cause the plume to fall into the city.
Construction of the Superstack was followed by an environmental reclamation project which included rehabilitation of existing landscapes and selected water bodies such as Lake Ramsey. An ambitious regreening plan saw over three million new trees planted within the Greater Sudbury area. In 1992, Inco and the city were given an award by the United Nations in honour of their environmental rehabilitation programmes.
On November 3, 2014, Vale announced that it might stop using the stack, following a $1 billion project to reduce emissions by 85% that negated the need for it. If no other use could be found, Vale might decommission the Superstack, demolish it, and replace it with a much smaller chimney. In 2017, Vale announced plans to decommission the Superstack upon the construction of two smaller, more energy-efficient stacks. On July 28, 2020, Vale updated that news, stating that the stack at its Copper Cliff Complex had been taken out of service. It would remain on "hot standby" for about two months while the replacement flue connections were tested, with demolition to follow over the subsequent years.
Emissions
While the Superstack lowered ground-level pollution in the city, it dispersed sulphur dioxide and nitrogen dioxide gases over a much larger area. The Superstack was not the only source of lake acidification; the heavily industrialized Ohio Valley also appears to have contributed to the acidification of lakes as far north as northern Ontario. Research from data gathered up to the late 1980s showed acid rain to have affected the biology of some 7,000 lakes.
Prior to Vale's purchase of Inco, a major construction effort by Inco in the early 1990s dramatically improved the scrubbing of waste gases before they were pumped up the Superstack. These upgrades were completed in 1994, and emissions have since been much reduced. Compared to the plume prior to installation, the plume now disperses quite rapidly and is often transparent even at the stack site.
Emissions reductions and increases in thermal efficiency have reached the point where natural draught is no longer sufficient to draw flue gas up the stack, necessitating the use of induced draught fans and/or reheating of the flue gas using natural gas burners.
As well as SO2 emissions, Inco's Superstack has had very high arsenic, nickel and lead emissions to the atmosphere. In 1998, Inco emitted 146.7 tonnes of lead from Copper Cliff while producing 238,500 tonnes of nickel-copper matte. This is 150 times more lead emission than would be permitted by a US EPA-regulated lead smelter producing 238,500 tonnes of lead. As a result of the excessive lead emissions from the Inco Superstack, the surrounding community of Copper Cliff was found to have levels of lead in soil tests at a level sufficient to cause harm to young children.
See also
List of chimneys
List of tallest freestanding structures in the world
References
External links
Inco Air Quality Site
CVRD Inco plugs sulphur dioxide holes
Drone video 'A View Rarely Seen' of INCO Superstack
Towers completed in 1970
Buildings and structures in Greater Sudbury
Chimneys in Canada
Metallurgical facilities
Industrial buildings in Ontario
1970 establishments in Ontario
Vale S.A. | Inco Superstack | [
"Chemistry",
"Materials_science"
] | 1,428 | [
"Metallurgy",
"Metallurgical facilities"
] |
1,044,927 | https://en.wikipedia.org/wiki/Resource%20management | In organizational studies, resource management is the efficient and effective development of an organization's resources when they are needed. Such resources may include the financial resources, inventory, human skills, production resources, or information technology (IT) and natural resources.
In the realm of project management, processes, techniques and philosophies as to the best approach for allocating resources have been developed. These include discussions on functional vs. cross-functional resource allocation as well as processes espoused by organizations like the Project Management Institute (PMI) through their Project Management Body of Knowledge (PMBOK) methodology of project management. Resource management is a key element to activity resource estimating and project human resource management. Both are essential components of a comprehensive project management plan to execute and monitor a project successfully. As is the case with the larger discipline of project management, there are resource management software tools available that automate and assist the process of resource allocation to projects and portfolio resource transparency including supply and demand of resources.
Corporate resource management process
Large organizations usually have a defined corporate resource management process which mainly guarantees that resources are never over-allocated across multiple projects. Peter Drucker wrote of the need to focus resources, abandoning less promising initiatives for every new project taken on, as fragmentation inhibits results.
Techniques
One resource management technique is resource-leveling. It aims at smoothing the stock of resources on hand, reducing both excess inventories and shortages.
The required data are: the demand for each resource, forecast by period as far into the future as is reasonable; the configurations of resources required to meet those demands; and the supply of the resources, again forecast by period as far into the future as is reasonable.
The nominal goal is 100% utilization, but in practice allocation is optimized subject to constraints and weighted by important metrics, for example meeting a minimum service level while otherwise minimizing cost. A Project Resource Allocation Matrix (PRAM) is maintained to visualize the resource allocations against various projects.
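As a minimal illustrative sketch only (the task fields, capacity, and greedy rule below are generic assumptions, not a PMI-prescribed algorithm), resource-leveling can be pictured as shifting tasks within their available slack so that no period's total demand exceeds the resource supply:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    earliest: int  # earliest start period
    latest: int    # latest allowable start (earliest + slack)
    duration: int  # length in periods
    demand: int    # resource units consumed per period

def level(tasks: list[Task], capacity: int, horizon: int) -> dict[str, int]:
    """Greedily start each task as early as its slack allows without pushing
    any period's total demand above the available capacity."""
    load = [0] * horizon
    schedule = {}
    for task in sorted(tasks, key=lambda t: t.latest):  # least slack first
        for start in range(task.earliest, task.latest + 1):
            window = range(start, start + task.duration)
            if all(load[p] + task.demand <= capacity for p in window):
                for p in window:
                    load[p] += task.demand
                schedule[task.name] = start
                break
        else:  # no feasible start within slack: keep earliest, flag for review
            schedule[task.name] = task.earliest
    return schedule

tasks = [Task("design", 0, 2, 3, 2), Task("build", 1, 4, 2, 3), Task("test", 3, 6, 2, 2)]
print(level(tasks, capacity=4, horizon=10))  # {'design': 0, 'build': 3, 'test': 5}
```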
The principle is to invest in resources as stored capabilities, then unleash the capabilities as demanded.
Resource management also includes a dimension of resource development: rather than disposing of a current resource and replacing it with another that has a newly demanded capability, the existing investment can be retained by making a smaller additional investment to develop that capability.
In conservation, resource management is a set of practices pertaining to maintaining natural systems integrity. Examples of this form of management are air resource management, soil conservation, forestry, wildlife management and water resource management. The broad term for this type of resource management is natural resource management (NRM).
See also
Environmental management
Factor 10
Holistic management
Industrial symbiosis
List of resource management software
Resource allocation
References
Nature conservation
Land management
Schedule (project management)
Management by type | Resource management | [
"Physics"
] | 574 | [
"Spacetime",
"Physical quantities",
"Time",
"Schedule (project management)"
] |
1,044,976 | https://en.wikipedia.org/wiki/Peptoid | Peptoids (root from the Greek πεπτός, peptós "digested"; derived from πέσσειν, péssein "to digest" and the Greek-derived suffix -oid meaning "like, like that of, thing like a __," ), or poly-N-substituted glycines, are a class of biochemicals known as biomimetics that replicate the behavior of biological molecules. Peptidomimetics are recognizable by side chains that are appended to the nitrogen atom of the peptide backbone, rather than to the α-carbons (as they are in amino acids).
Chemical structure and synthesis
In peptoids, the side chain is connected to the nitrogen of the peptide backbone, instead of the α-carbon as in peptides. Notably, peptoids lack the amide hydrogen which is responsible for many of the secondary structure elements in peptides and proteins. Peptoids were invented by Reyna J. Simon, Ronald N. Zuckermann, Paul Bartlett and Daniel V. Santi to mimic protein/peptide products and aid in the discovery of protease-stable small-molecule drugs for the East Bay company Chiron.
Following the sub-monomer protocol originally created by Ron Zuckermann, each residue is installed in two steps: acylation and displacement. In the acylation step, a haloacetic acid, typically bromoacetic acid activated by diisopropylcarbodiimide, reacts with the amine of the previous residue. In the displacement step (a classical SN2 reaction), an amine displaces the halide to form the N-substituted glycine residue. The submonomer approach allows the use of any commercially available or synthetically accessible amine, giving great potential for combinatorial chemistry.
Unique characteristics
Like D-Peptides and β peptides, peptoids are completely resistant to proteolysis, and are therefore advantageous for therapeutic applications where proteolysis is a major issue. Since secondary structure in peptoids does not involve hydrogen bonding, it is not typically denatured by solvent, temperature, or chemical denaturants such as urea (see details below).
Notably, since the side chain derives from the amine used in the displacement step, thousands of commercially available amines can be used to generate unprecedented chemical diversity at each position, at costs far lower than would be required for similar peptides or peptidomimetics. To date, at least 230 different amines have been used as side chains in peptoids.
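The scale of that diversity is easy to illustrate. The short sketch below is hypothetical (the amine names are placeholders, not a curated screening set); it simply enumerates sequences to show how a library grows as the number of amines raised to the number of residues:

```python
from itertools import product

# Placeholder side-chain amines; any primary amine could stand in here.
amines = ["benzylamine", "isobutylamine", "methoxyethylamine"]

def peptoid_library(amines: list[str], length: int):
    """Yield every N-substituted glycine sequence of the given length."""
    return ("-".join(seq) for seq in product(amines, repeat=length))

trimers = list(peptoid_library(amines, 3))
print(len(trimers))     # 3**3 = 27 sequences
print(trimers[0])       # benzylamine-benzylamine-benzylamine
print(f"{230**5:.2e}")  # with 230 reported amines, pentamers alone: ~6.44e+11
```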
Structure
Peptoid oligomers are known to be conformationally unstable, due to the flexibility of the main-chain methylene groups and the absence of stabilizing hydrogen bond interactions along the backbone. Nevertheless, through the choice of appropriate side chains it is possible to form specific steric or electronic interactions that favour the formation of stable secondary structures such as helices; in particular, peptoids with C-α-branched side chains are known to adopt a structure analogous to the polyproline type I helix. Different strategies have been employed to predict and characterize peptoid secondary structure, with the ultimate goal of developing fully folded peptoid protein structures.
Cis/trans amide bond isomerization still leads to a conformational heterogeneity which prevents the formation of homogeneous peptoid foldamers. Nonetheless, researchers have found trans-inducing N-aryl side chains that promote the polyproline type II helix, and strong cis-inducers such as bulky naphthylethyl and tert-butyl side chains. It was also found that n→π* interactions can modulate the ratio of cis/trans amide bond conformers, up to complete control of the cis conformer in the peptoid backbone using a functionalizable triazolium side chain.
Applications
The first demonstration of the use of peptoids was in screening a combinatorial library of diverse peptoids, which yielded novel high-affinity ligands for 7-transmembrane G-protein-coupled receptors.
Peptoids have been developed as candidates for a range of different biomedical applications, including antimicrobial agents, synthetic lung surfactants, ligands for various proteins including Src Homology 3 (SH3 domain), Vascular Endothelial Growth Factor (VEGF) receptor 2, and antibody Immunoglobulin G biomarkers for the identification of Alzheimer's disease.
Due to their advantageous characteristics as described above, peptoids are also being actively developed for use in nanotechnology, an area in which they may play an important role.
Antimicrobial agents
Researchers supported by grants from the NIH and NIAID tested the efficacy of antimicrobial peptoids against antibiotic-resistant strains of Mycobacterium tuberculosis. Antimicrobial peptoids demonstrate a non-specific mechanism of action against the bacterial membrane, one that differs from small-molecule antibiotics that bind to specific receptors (and thus are susceptible to mutations or alterations in bacterial structure). Preliminary results suggested "appreciable activity" against drug-sensitive bacterial strains, leading to a call for more research into the viability of peptoids as a new class of tuberculocidal drugs.
Researchers at the Barron Lab at Stanford University (supported by an NIH Pioneer Award grant) are currently studying whether upregulation of the human host defense peptide LL-37, or application of antimicrobial treatments based on LL-37, may prevent or treat sporadic Alzheimer's dementia. Lead researcher Annelise Barron discovered that the innate human defense peptide LL-37 binds to the amyloid-β (Aβ) peptide, which is associated with Alzheimer's disease. Barron's insight is that an imbalance between LL-37 and Aβ may be a critical factor affecting AD-associated fibrils and plaques. The project also examines the potential relationship of chronic oral P. gingivalis and herpesvirus (HSV-1) infections to the progression of Alzheimer's dementia.
See also
Peptidomimetic
Beta-peptide
Peptoid Nanosheet
References
Peptides | Peptoid | [
"Chemistry"
] | 1,282 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
1,045,027 | https://en.wikipedia.org/wiki/Malolactic%20fermentation | Malolactic conversion (also known as malolactic fermentation or MLF) is a process in winemaking in which tart-tasting malic acid, naturally present in grape must, is converted to softer-tasting lactic acid. Malolactic fermentation is most often performed as a secondary fermentation shortly after the end of the primary fermentation, but can sometimes run concurrently with it. The process is standard for most red wine production and common for some white grape varieties such as Chardonnay, where it can impart a "buttery" flavor from diacetyl, a byproduct of the reaction.
The fermentation reaction is undertaken by the family of lactic acid bacteria (LAB): Oenococcus oeni and various species of Lactobacillus and Pediococcus. Chemically, malolactic fermentation is a decarboxylation, which means carbon dioxide is liberated in the process.
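The balanced overall reaction is a simple summary of this decarboxylation:

```latex
\underbrace{\mathrm{C_4H_6O_5}}_{\text{L-malic acid}}
\;\longrightarrow\;
\underbrace{\mathrm{C_3H_6O_3}}_{\text{L-lactic acid}}
\;+\; \mathrm{CO_2}
```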
The primary function of all these bacteria is to convert L-malic acid, one of the two major grape acids found in wine, to another acid, L(+)-lactic acid. This can occur naturally. However, in commercial winemaking, malolactic conversion typically is initiated by an inoculation of desirable bacteria, usually O. oeni. This prevents undesirable bacterial strains from producing "off" flavors. Conversely, commercial winemakers actively prevent malolactic conversion when it is not desired, such as with fruity and floral white grape varieties such as Riesling and Gewürztraminer, to maintain a more tart or acidic profile in the finished wine.
Malolactic fermentation tends to create a rounder, fuller mouthfeel. Malic acid is typically associated with the taste of green apples, while lactic acid is richer and more buttery tasting. Grapes produced in cool regions tend to be high in acidity, much of which comes from the contribution of malic acid. Malolactic fermentation generally enhances the body and flavor persistence of wine, producing wines of greater palate softness. Many winemakers also feel that better integration of fruit and oak character can be achieved if malolactic fermentation occurs during the time the wine is in barrel.
A wine undergoing malolactic conversion will be cloudy because of the presence of bacteria, and may have the smell of buttered popcorn, the result of the production of diacetyl. The onset of malolactic fermentation in the bottle is usually considered a wine fault, as the wine will appear to the consumer to still be fermenting (as a result of CO2 being produced). However, for early Vinho Verde production, this slight effervescence was considered a distinguishing trait, though Portuguese wine producers had to market the wine in opaque bottles because of the increase in turbidity and sediment that the "in-bottle MLF" produced. Today, most Vinho Verde producers no longer follow this practice and instead complete malolactic fermentation prior to bottling, with the slight sparkle being added by artificial carbonation.
History
Malolactic fermentation is possibly as old as the history of wine, but scientific understanding of its positive benefits and control of the process are relatively recent developments. For many centuries, winemakers noticed an "activity" that would happen in their wines stored in barrel during the warm spring months following harvest. Like primary alcoholic fermentation, this phenomenon releases carbon dioxide gas and produces a profound change in the wine that was not always welcomed. It was described as a "second fermentation" in 1837 by the German enologist Freiherr von Babo, who deemed it the cause of increased turbidity in the wine. Von Babo encouraged winemakers to respond quickly at the first sign of this activity by racking the wine into a new barrel, adding sulfur dioxide, and then following up with another round of racking and sulfuring to stabilize the wine.
In 1866, Louis Pasteur, one of the pioneers of modern microbiology, isolated the first bacteria from wine and determined that all bacteria in wine were a cause of wine spoilage. While Pasteur did notice an acid reduction in wine with the lactic bacteria, he did not link that process to a consumption of malic acid by the bacteria, but rather assumed it was just tartrate precipitation. In 1891, the Swiss enologist Hermann Müller theorized that bacteria may be the cause of this reduction. With the aid of peers, Müller explained his theory of "biological deacidification" in 1913 to be caused by the wine bacterium Bacterium gracile.
In the 1930s, the French enologist Jean Ribéreau-Gayon published papers stating the benefits of this bacterial transformation in wine. During the 1950s, advances in enzymatic analysis allowed enologists to better understand the chemical processes behind malolactic fermentation. Émile Peynaud furthered enological understanding of the process, and soon cultured stocks of beneficial lactic acid bacteria were available for winemakers to use.
Role in winemaking
The primary role of malolactic fermentation is to deacidify wine. It can also affect the sensory aspects of a wine, making the mouthfeel seem smoother and adding potential complexity in the flavor and aroma of the wine. For these other reasons, most red wines throughout the world (as well as many sparkling wines and nearly 20% of the world's white wines) today go through malolactic fermentation.
Malolactic fermentation deacidifies the wine by converting the "harsher" diprotic malic acid to the softer monoprotic lactic acid. The different structures of malic and lactic acids leads to a reduction of titratable acidity (TA) in the wine by 1 to 3 g/L and an increase in pH by 0.3 units. Malic acid is present in the grape throughout the growing season, reaching its peak at veraison and gradually decreasing throughout the ripening process. Grapes harvested from cooler climates usually have the highest malic content and have the most dramatic changes in TA and pH levels after malolactic fermentation.
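The quoted 1–3 g/L TA drop follows directly from the stoichiometry: each mole of diprotic malic acid (two titratable protons) becomes one mole of monoprotic lactic acid (one proton). A rough back-of-the-envelope sketch, assuming complete conversion and TA expressed in tartaric acid equivalents as is conventional in the US:

```python
MW_MALIC = 134.09            # g/mol, diprotic (2 titratable protons)
EQ_WT_TARTARIC = 150.09 / 2  # tartaric acid equivalent weight (diprotic)

def ta_drop_as_tartaric(malic_g_per_l: float) -> float:
    """Estimated TA reduction (g/L, tartaric equivalents) when all malic acid
    is converted to lactic acid: 2 equivalents in, 1 equivalent out, per mole."""
    net_equivalents = (2 - 1) * malic_g_per_l / MW_MALIC
    return net_equivalents * EQ_WT_TARTARIC

for malic in (2.0, 3.0, 5.0):  # typical pre-MLF malic acid levels, g/L
    print(f"{malic:.0f} g/L malic -> TA drop of {ta_drop_as_tartaric(malic):.1f} g/L")
# Prints 1.1, 1.7, 2.8 g/L, consistent with the 1 to 3 g/L range cited above.
```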
Malolactic fermentation can aid in making a wine "microbiologically stable" in that the lactic acid bacteria consume many of the leftover nutrients that other spoilage microbes could use to develop wine faults. However, it can also make the wine slightly "unstable" due to the rise in pH, especially if the wine already was at the high end of wine pH. It is not unusual for wines to be "deacidified" by malolactic fermentation only to have the winemaker later add acidity (usually in the form of tartaric acid) to lower the pH to more stable levels.
Conversion of malic into lactic
Lactic acid bacteria convert malic acid into lactic acid as an indirect means of creating energy through chemiosmosis, which uses the pH gradient between the inside of the cell and the surrounding wine to produce ATP. One model of how this is accomplished notes that the form of L-malate most present at the low pH of wine is its negatively charged monoanionic form. When the bacterium transports this anion from the wine across its plasma membrane into the higher-pH interior of the cell, it carries a net negative charge that creates electrical potential. The decarboxylation of malate into L-lactic acid releases not only carbon dioxide but also consumes a proton, which generates the pH gradient that can produce ATP.
Lactic acid bacteria convert only the L-malic acid found naturally in wine grapes; most commercial malic acid additives are a mixture of the D- and L-enantiomers.
Sensory influences
Many different studies have been conducted on the sensory changes that occur in wines that have gone through malolactic fermentation. The most common descriptor is that acidity in the wine feels "softer", due to the change of the "harsher" malic acid to the softer lactic acid. The perception of sourness comes from the titratable acidity in the wine, so the reduction in TA that follows MLF leads to a reduction in perceived sourness or "tartness" in the wine.
The change in mouthfeel is related to the increase in pH, but may also be due to the production of polyols, particularly the sugar alcohols erythritol and glycerol. Another factor that may enhance the mouthfeel of wines that have gone through malolactic fermentation is the presence of ethyl lactate which can be as high as 110 mg/L after MLF.
The potential influence on the aroma of the wine is more complex and difficult to predict with different strains of Oenococcus oeni (the bacterium most commonly used in MLF) having the potential to create different aroma compounds. In Chardonnay, wines that have gone through MLF are often described as having "hazelnut" and "dried fruit" notes, as well as the aroma of freshly baked bread. In red wines, some strains metabolize the amino acid methionine into a derivative of propionic acid that tends to produce roasted aroma and chocolate notes. Red wines that go through malolactic fermentation in the barrel can have enhanced spice or smoke aromas.
However, some studies have also shown that malolactic fermentation may diminish primary fruit aromas in varieties such as Pinot noir, which often loses raspberry and strawberry notes after MLF. Additionally, red wines may endure a loss of color after MLF due to pH changes that cause a shift in the equilibrium of the anthocyanins which contribute to the stability of color in wine.
Lactic acid bacteria
All lactic acid bacteria (LAB) involved in winemaking, whether as positive contributors or as sources of potential faults, have the ability to produce lactic acid through the metabolism of a sugar source, as well as through the metabolism of L-malic acid. Species differ in how they metabolise the available sugars in wine (both glucose and fructose, as well as the unfermentable pentoses that wine yeasts do not consume). Some bacteria species use the sugars through a homofermentative pathway, meaning only one main end product (usually lactate) is produced, while others use heterofermentative pathways that can create multiple end products such as carbon dioxide, ethanol, and acetate. While only the L-isomer of lactate is produced by LAB in the conversion of malic acid, both hetero- and homofermenters can produce D-, L-, and DL-isomers of lactic acid from glucose, which may contribute to slightly different sensory properties in the wine.
While O. oeni is often the LAB most desired by winemakers to complete malolactic fermentation, the process is most often carried out by a variety of LAB species that dominate the must at different points during fermentations. Several factors influence which species will be dominant, including fermentation temperature, nutritional resources, the presence of sulfur dioxide, interaction with yeast and other bacteria, pH, and alcohol levels (Lactobacillus species, for example, tend to prefer higher pH and can tolerate higher alcohol levels than O. oeni), as well as initial inoculation (such as "wild" ferments versus an inoculation of cultured O. oeni).
Oenococcus
The genus Oenococcus has one main member involved in winemaking, O. oeni, once known as Leuconostoc oeni. Despite the name Oenococcus, under the microscope the bacterium has a rod (bacillus) shape. The bacterium is a Gram-positive, facultative anaerobe that can utilize some oxygen for aerobic respiration but usually produces cellular energy through fermentation. O. oeni is a heterofermenter that creates multiple end products from the use of glucose, with D-lactic acid and carbon dioxide being produced in roughly equal amounts to either ethanol or acetate. In reductive conditions (such as near the end of alcoholic fermentation), the third end product is usually ethanol, while in slightly oxidative conditions (such as early in alcoholic fermentation or in an untopped barrel), the bacteria are more likely to produce acetate.
Some O. oeni strains can use fructose to create mannitol (which can lead to the wine fault known as mannitol taint), while many other strains can break down the amino acid arginine (which can be present in wine resting on the lees after fermentation, from the autolysis of dead yeast cells) into ammonia.
In addition to the hexose sugars glucose and fructose, most strains of O. oeni can use the residual pentose sugars left behind from yeast fermentation, including L-arabinose and ribose. Only around 45% of O. oeni strains can ferment sucrose (the form of sugar usually added for chaptalization, which gets converted by yeast into glucose and fructose).
Winemakers tend to prefer O. oeni for several reasons. First, the species is compatible with the main wine yeast, Saccharomyces cerevisiae, though in cases where both MLF and alcoholic fermentation are started together, the yeast most often outcompetes the bacterium for nutritional resources, which may cause a delay in the onset of malolactic fermentation. Second, most strains of O. oeni are tolerant of the low pH levels of wine and can usually deal with the standard alcohol levels that most wines reach by the end of fermentation. Additionally, while sulfur dioxide levels above 0.8 ppm molecular SO2 (pH dependent, but roughly 35–50 ppm free SO2) will inhibit the bacteria, O. oeni is relatively resistant compared to other LAB. Finally, O. oeni tends to produce the least biogenic amines (and the most lactic acid) among the lactic acid bacteria encountered in winemaking.
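The "pH dependent" qualifier can be made concrete with the standard winemaking relationship between free and molecular SO2, a sketch assuming the commonly cited pKa of about 1.81 for the first dissociation of SO2 in wine:

```python
PKA1 = 1.81  # commonly cited first dissociation constant of SO2 in wine

def free_so2_needed(target_molecular_ppm: float, ph: float) -> float:
    """Free SO2 (ppm) required to reach a target molecular SO2 at a given pH."""
    return target_molecular_ppm * (1 + 10 ** (ph - PKA1))

for ph in (3.2, 3.4, 3.6):
    print(f"pH {ph}: {free_so2_needed(0.8, ph):.0f} ppm free SO2 for 0.8 ppm molecular")
# pH 3.2 needs ~20 ppm, pH 3.4 ~32 ppm, pH 3.6 ~50 ppm, hence the 35–50 ppm range above.
```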
Lactobacillus
Within the genus Lactobacillus are both heterofermentative and homofermentative species. All lactobacilli involved in winemaking are Gram-positive and microaerophilic, with most species lacking the enzyme catalase needed to protect themselves from oxidative stress.
Species of Lactobacillus that have been isolated from wine and grape must samples across the globe include L. brevis, L. buchneri, L. casei, L. curvatus, L. delbrueckii subsp. lactis, L. diolivorans, L. fermentum, L. fructivorans, L. hilgardii, L. jensenii, L. kunkeei, L. leichmannii, L. nagelii, L. paracasei, L. plantarum, and L. yamanashiensis.
Most Lactobacillus species are undesirable in winemaking with the potential of producing high levels of volatile acidity, off odors, wine haze, gassiness, and sediment that can be deposited in the bottle, especially if the wine had not been filtered. These bacteria also have the potential to create excessive amounts of lactic acid which can further influence the flavor and sensory perception of the wine. Some species, such as the so-called "ferocious Lactobacillus", have been implicated in causing sluggish or stuck fermentations, while other species, such as L. fructivorans, have been known to create a cottony mycelium-like growth on the surface of wines, nicknamed "Fresno mold" after the wine region where it was discovered.
Pediococcus
So far, four species from the genus Pediococcus have been isolated in wines and grape must: P. inopinatus, P. pentosaceus, P. parvulus, and P. damnosus, with the last two being the species most commonly found in wine. All Pediococcus species are Gram-positive, with some species being micro-aerophilic while others utilize mostly aerobic respiration. Under the microscope, Pediococcus often appear in tetrads (pairs of pairs), which can make them identifiable. Pediococci are homofermenters, metabolizing glucose into a racemic mixture of both L- and D-lactate by glycolysis. However, in the absence of glucose, some species, such as P. pentosaceus, begin using glycerol, degrading it into pyruvate, which later can be converted to diacetyl, acetate, 2,3-butanediol, and other compounds that can impart unfavorable characteristics to the wine.
Most Pediococcus species are undesirable in winemaking due to the high levels of diacetyl they can produce, as well as an increased production of biogenic amines, which has been implicated as one potential cause of red wine headaches. Many species of Pediococcus also have the potential to introduce off odors or other faults to the wine, such as the bitter-tasting "acrolein taint" that comes from the degradation of glycerol into acrolein, which then reacts with phenolic compounds in the wine to produce a bitter-tasting compound.
One species, P. parvulus, has been found in wines that have not gone through MLF (meaning malic acid is still present in the wine) but have still had their bouquet altered in a way that enologists have described as not spoiled or flawed. Other studies have isolated P. parvulus from wines that have gone through malolactic fermentation without the development of off odors or wine faults.
Nutritional requirements
Lactic acid bacteria are fastidious organisms that cannot synthesize all of their complex nutritional requirements on their own. For LAB to grow and complete malolactic fermentation, the wine medium must provide for their nutritional needs. Like wine yeast, LAB require a carbon source for energy metabolism (usually sugar and malic acid), a nitrogen source (such as amino acids and purines) for protein synthesis, and various vitamins (such as niacin, riboflavin, and thiamine) and minerals to assist in the synthesis of enzymes and other cellular components.
The source for these nutrients is often found in the grape must itself, though MLF inoculations that run concurrent with alcoholic fermentation risk the yeast outcompeting the bacteria for these nutrients. Towards the end of fermentation, while most of the original grape must resources have been consumed, the lysis of dead yeast cells (the "lees") can be a source of some nutrients, particularly amino acids. Additionally, even "dry" wines that have been fermented to dryness still have unfermentable pentose sugars (such as arabinose, ribose, and xylose) left behind that can be used by both positive and spoilage bacteria. As with wine yeast, manufacturers of cultured LAB inoculum usually offer specially prepared nutritional additives that can be used as a supplement. However, unlike wine yeast, lactic acid bacteria cannot use the supplement diammonium phosphate as a nitrogen source.
Before the introduction of complex nutritional supplements and advances in freeze-dried cultures of LAB, winemakers would cultivate their inoculum of lactic acid bacteria from culture slants provided by laboratories. In the 1960s, these winemakers found it easier to create starter cultures in media that contained apple or tomato juice. This "tomato juice factor" was discovered to be a derivative of pantothenic acid, an important growth factor for the bacteria.
As with yeast, oxygen can be considered a nutrient for LAB, but only in very small amounts and only for microaerophilic species such as O. oeni. However, no evidence currently suggests that malolactic fermentation runs more smoothly in aerobic conditions than in complete anaerobic conditions; in fact, excessive amounts of oxygen can retard the growth of LAB by favoring competing microbes (such as Acetobacter).
Native LAB species in the vineyard and the winery
Oenococcus oeni, the LAB species most often desired by winemakers to carry out malolactic fermentation, can be found in the vineyard, but often at very low levels. While moldy, damaged fruit has the potential to carry a diverse flora of microbes, the LAB most often found on clean, healthy grapes after harvest are species from the Lactobacillus and Pediococcus genera. After crushing, microbiologists usually find populations under 10³ colony-forming units/mL containing a mix of P. damnosus, L. casei, L. hilgardii, and L. plantarum, as well as O. oeni. For musts that do not receive an early dose of sulfur dioxide to "knock back" these wild populations of LAB, these bacteria compete with each other (and the wine yeasts) for nutrients early in fermentation.
In the winery, multiple contact points can be home to native populations of LAB, including oak barrels, pumps, hoses, and bottling lines. For wines where malolactic fermentation is undesirable (such as fruity white wines), a lack of proper sanitation of wine equipment can lead to the development of unwanted MLF and result in wine faults. In cases of oak barrels, where full and complete sanitation is almost impossible, wineries often mark barrels that have contained wines going through MLF and keep them isolated from "clean" or brand-new barrels that they can use for wines not destined to go through MLF.
Schizosaccharomyces yeast
Several species in the genus Schizosaccharomyces use L-malic acid, and enologists have been exploring the potential of using these wine yeasts to deacidify wines instead of the traditional route of malolactic fermentation with bacteria. However, early results with Schizosaccharomyces pombe showed a tendency of the yeast to produce off odors and unpleasant sensory characteristics in the wine. In recent years, enologists have been experimenting with a mutant strain of Schizosaccharomyces malidevorans that has so far been shown to produce fewer potential wine flaws and off odors.
Influence of inoculation timing
Winemakers differ in when they choose to inoculate their must with LAB: some pitch the bacteria at the same time as the yeast, allowing alcoholic and malolactic fermentations to run concurrently; some wait until the end of fermentation, when the wine is racked off its lees and into barrel; and others do it somewhere in between. For practitioners of minimalist or "natural" winemaking who choose not to inoculate with cultured LAB, malolactic fermentation can happen at any time, depending on factors such as the microbiological flora of the winery and the competing influences of other microbes. All options have potential benefits and disadvantages.
The benefits of inoculating for MLF during alcoholic fermentation include:
More potential nutrients from the grape must (though the bacteria will be competing with the yeast for these)
Lower sulfur dioxide and ethanol levels which can otherwise inhibit the LAB
Higher fermentation temperatures, which are more conducive to LAB growth, and an earlier completion of MLF: malolactic fermentation proceeds best at relatively warm temperatures and is significantly inhibited at cool cellar temperatures. Wine stored in barrels in the cellar during the winter following fermentation will often have a very prolonged malolactic fermentation because of this.
Early completion of malolactic fermentation means the winemaker can make a post-fermentation SO2 addition earlier to protect the wine from oxidation and spoilage microbes (such as Acetobacter). Since sulfur dioxide can inhibit MLF, delaying LAB inoculation until after alcoholic fermentation may mean delaying the sulfur addition until early spring, when cellar temperatures warm up enough to encourage the completion of MLF.
Less diacetyl production
The disadvantages for early inoculation include:
Wine yeast and LAB competing for resources (including glucose) and potential antagonism between the microbes
Heterofermenters such as O. oeni metabolizing the glucose still present in the must and potentially creating undesirable byproducts such as acetic acid
Many of the advantages of post-alcoholic-fermentation inoculation answer the disadvantages of early inoculation (namely less antagonism and less potential for undesirable byproducts). Late inoculation also has the advantage that the lees can serve as a nutrient source through the autolysis of dead yeast cells, though that nutrient source may not always be enough to ensure MLF runs successfully to completion. Conversely, many of the disadvantages of late inoculation are the absence of the advantages that come from early inoculation (higher temperatures, potentially quicker completion, etc.).
Preventing MLF
For some wine styles, such as light, fruity wines or for low-acid wines from warm climates, malolactic fermentation is not desired. Winemakers can take several steps to prevent MLF from taking place, including:
Limited maceration, early pressing, and early racking to limit contact time of the LAB with potential nutrient sources
Maintain sulfur dioxide levels of at least 25 ppm of "free" (unbound) SO2; depending on the pH of the wine, this may mean an addition of 50–100 mg/L of SO2
Maintain pH levels below 3.3
Keep the wine cool at temperatures between 10 and 14 °C (50.0 to 57.2 °F)
Filter the wine at bottling with at least a 0.45-micron membrane filter to prevent any bacteria from making it into the bottle
In addition, winemakers can use chemical and biological inhibitors such as lysozyme, nisin, dimethyl dicarbonate (Velcorin), and fumaric acid, though some (like Velcorin) are restricted in winemaking countries outside the United States. Fining agents, such as bentonite, and putting the wine through cold stabilization will also remove potential nutrients for LAB, thus inhibiting malolactic fermentation. Some experimentation with the use of bacteriophages (viruses that infect bacteria) has been conducted to limit malolactic fermentations, but disappointing results in the cheesemaking industry have led to skepticism about the practical use of bacteriophages in winemaking.
Measuring malic content
Winemakers can track the progression of malolactic fermentation using paper chromatography or a spectrophotometer. The paper chromatography method involves using capillary tubes to spot small samples of the wine onto chromatography paper. The paper is then rolled and placed for several hours in a jar filled with a butanol solution containing bromocresol green indicator dye. After the paper is pulled out and dried, the distance of yellow-colored "splotches" from the baseline indicates the presence of the various acids, with tartaric closest to the baseline, followed by citric, malic, and finally lactic acid near the top of the paper.
A significant limitation of paper chromatography is that it will not show exactly how much malic acid remains in the wine; the size of the "splotch" on the paper has no quantitative correlation. The sensitivity of the paper is also limited to a detection threshold of 100–200 mg/L, while most measurements of "MLF stability" target a malic acid level of less than 0.03 g/L (30 mg/L).
The enzymatic method allows for a quantitative measurement of both malic and lactic acids, but requires the expense of reagent kits and a spectrophotometer that can measure absorbance values at 334, 340, or 365 nm.
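A minimal sketch of the calculation behind the enzymatic method, assuming a typical NAD+/malate-dehydrogenase kit in which each mole of L-malate oxidized yields one mole of NADH read at 340 nm (the volumes below are illustrative kit values, not from the text):

```python
MW_MALIC = 134.09    # g/mol
EPSILON_340 = 6300   # L/(mol·cm), molar absorptivity of NADH at 340 nm
PATH_CM = 1.0        # cuvette path length, cm
V_FINAL_ML = 3.0     # total assay volume (illustrative)
V_SAMPLE_ML = 0.1    # wine sample volume (illustrative)

def malic_g_per_l(delta_a: float) -> float:
    """Convert a blank-corrected absorbance change at 340 nm into g/L L-malic acid."""
    molar_nadh = delta_a / (EPSILON_340 * PATH_CM)             # mol/L in the cuvette
    return molar_nadh * MW_MALIC * (V_FINAL_ML / V_SAMPLE_ML)  # scale back to the wine

print(f"{malic_g_per_l(0.35):.2f} g/L")  # delta A = 0.35 gives about 0.22 g/L
# In this setup the 30 mg/L "MLF stability" target corresponds to delta A of about 0.05.
```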
Other products produced
The main products of malolactic fermentation are lactic acid, diacetyl, acetic acid, acetoin, and various esters. The amount and exact nature of these products depend on the species and strain of LAB conducting the malolactic fermentation and on the conditions in the wine (pH, available nutrients, oxygen levels, etc.).
Some strains of O. oeni can synthesize higher alcohols which can contribute to fruity notes in the aroma of the wine. Additionally, some strains of the bacterium have beta-glucosidase enzymes that can break down monoglucosides, aroma compounds attached to a sugar molecule. When the sugar component is cleaved, the rest of the compound becomes volatile, meaning it can potentially be detected in the aroma bouquet of the wine.
In the early 21st century, some strains of O. oeni were shown to consume acetaldehyde, breaking it down into ethanol or acetic acid. While this may help wines with excessive levels of acetaldehyde, in red wines it can also destabilize the color by interfering with acetaldehyde's reaction with anthocyanins to create the polymeric pigments that help create a wine's color.
Diacetyl
Diacetyl (or 2,3-butanedione) is the compound associated with the "buttery" aromas of Chardonnays, but it can affect any wine that has gone through malolactic fermentation. At its odor detection threshold of 0.2 mg/L in white wines and 2.8 mg/L in red wines, it can be perceived as slightly buttery or "nutty", while at concentrations greater than 5 to 7 mg/L (5–7 ppm) it can overwhelm other aroma notes in the wine.
Diacetyl can be produced by the LAB through metabolism of sugar or of citric acid. While citric acid is naturally present in grapes, it occurs in very small amounts, with most of it coming from deliberate addition by the winemaker to acidify the wine. In the presence of both malic and citric acids, the LAB use both, but use the malic much more quickly, with the rate of citric use and diacetyl formation influenced by the particular bacterial strain (most strains of O. oeni produce less diacetyl than Lactobacillus and Pediococcus species), as well as by the redox potential of the wine. In more oxidative conditions, such as in a barrel that is not fully topped up, more citric acid will be consumed and more diacetyl formed. In more reductive conditions, such as during alcoholic fermentation when yeast populations are at their peak and the wine is heavily saturated with carbon dioxide, the formation of diacetyl is much slower. The yeasts also help keep levels low by consuming diacetyl and reducing it to acetoin and butylene glycol.
Diacetyl production is favored in warm fermentations. It also tends to be produced at higher levels in wines with lower pH levels (under 3.5), though at levels below 3.2, most strains of LAB desirable for MLF tend to be inhibited. "Wild" (uninoculated) malolactic ferments have the potential to produce more diacetyl than inoculated ferments due to the lower initial populations during the lag phase, inoculated ferments usually having an initial inoculum of 10⁶ CFU/mL. Late MLF inoculations, after alcoholic fermentation, also tend to produce higher levels of diacetyl. Chardonnay producers desiring to make the high-diacetyl "buttery style" will often do a late or "wild" inoculation in the barrel after primary fermentation, allowing the wine to spend several weeks or even months sur lie in reductive conditions that promote diacetyl production. Some sources point out that diacetyl is actually decreased by sur lie aging, due to surviving yeast metabolizing diacetyl, and that malolactic fermentation is therefore best performed apart from the lees.
With wines that have excessive levels of diacetyl, some winemakers use sulfur dioxide to bind with the compound and reduce the perception of diacetyl by 30 to 60%. This binding is a reversible process and after only a few weeks aging in the bottle or tank, the high levels of diacetyl return. However, sulfur dioxide added earlier in the malolactic fermentation process limits diacetyl production by inhibiting the bacteria and limiting their activity in its entirety, including the conversion of malic to lactic acid.
Wine faults
The most common fault associated with malolactic fermentation is its occurrence when it is not desired. This could be for a wine that is meant to be acidic and fruity (such as Riesling) or it could be a wine that was previously thought to have gone through MLF and bottled only to have malolactic fermentation commence in the bottle. The outcome of this "in-bottle" fermentation is often gassy, hazy wine that can be unpalatable to consumers. Improvement in sanitation and control of lactic acid bacteria in the winery can limit the occurrence of these faults.
For early Vinho Verde producers, the slight effervescence that came from in-bottle malolactic fermentation was considered a distinguishing trait that consumers enjoyed in the wine. However, wineries had to market the wine in opaque bottles to mask the turbidity and sediment that the "in-bottle MLF" produced. Today, most Vinho Verde producers no longer follow this practice and instead complete malolactic fermentation prior to bottling, with the slight sparkle being added by artificial carbonation.
While not necessarily a fault, malolactic fermentation does have the potential of making a wine "protein unstable" due to the resulting change in pH which affects the solubility of proteins in wine. For this reason, protein fining and heat stability tests on wine usually take place after malolactic fermentation has run to completion.
Volatile acidity
While volatile acidity (VA) is usually measured in terms of acetic acid content, its sensory perception is a combination of acetic acid (vinegary aromas) and ethyl acetate (nail polish remover and model airplane glue aromas). High levels of VA can inhibit wine yeast and may lead to a sluggish or stuck fermentation. Several microbes can be a source of VA, including Acetobacter, Brettanomyces, and film yeasts such as Candida, as well as LAB. However, while LAB usually produce only acetic acid, these other microbes often produce ethyl acetate as well as acetic acid.
Most wine-producing countries have laws regulating the amount of volatile acidity permitted in wine available for sale and consumption. In the United States, the legal limit is 0.9 g/L for foreign wine exported to the United States, 1.2 g/L for white table wine, 1.4 g/L for red table wine, 1.5 g/L for white dessert wine, and 1.7 g/L for red dessert wine. European Union wine regulations limit VA to 1.08 g/L for white table wines and 1.20 g/L for red table wines.
Heterofermenting species of Oenococcus and Lactobacillus have the potential to produce high levels of acetic acid through the metabolism of glucose, though with most strains of O. oeni the amount is usually only 0.1 to 0.2 g/L. Several species of Pediococcus can also produce acetic acid through other pathways. Wines starting out with high pH levels (above 3.5) stand the greatest risk of excessive acetic acid production due to the more favorable conditions for Lactobacillus and Pediococcus species. L. kunkeei, one of the so-called "ferocious Lactobacillus" species, has been known to produce 3 to 5 g/L of acetic acid in wines, levels which can easily lead to stuck fermentations.
"Ferocious" Lactobacillus
In the late 20th century, among American winemakers, seemingly healthy fermentations were reported to become rapidly inundated with high levels of acetic acid that overcame the wine yeasts and led to stuck fermentations. While a novel species of Acetobacter or wine spoilage yeast was initially thought to be the culprit, the cause was eventually discovered to be several species of Lactobacillus, L. kunkeei, L. nagelii, and L. hilgardii, collectively nicknamed "ferocious" Lactobacillus for their aggressive acetic acid production, how quickly they multiply, and their high tolerance to sulfur dioxide and other microbiological controls.
Ferments of high-pH wines (greater than 3.5) that spent time cold soaking prior to yeast inoculation and received little to no sulfur dioxide during crushing seem to be at the greatest risk for "ferocious" Lactobacillus. While infection seems to be vineyard-specific, none of the implicated lactobacilli has yet been reported as being found on the surface of freshly harvested wine grapes.
Acrolein and mannitol taint
The degradation of glycerol by some strains of LAB can yield the compound acrolein. Glycerol is a sweet-tasting polyol present in all wines, but at higher levels in wines that have been infected with Botrytis cinerea. An "active aldehyde", acrolein can interact with some phenolic compounds in wine to create highly bitter-tasting wines, described as amertume by Pasteur. While at least one strain of O. oeni has been shown to produce acrolein, it is more commonly found in wines that have been infected by strains of Lactobacillus and Pediococcus species such as L. brevis, L. buchneri, and P. parvulus. Acrolein taint has also been shown to be more common in wines that have been fermented at high temperatures and/or made from grapes that have been harvested at high Brix levels.
Heterofermenting species from the genus Lactobacillus, as well as some wild strains of O. oeni, have the potential to metabolize fructose (one of the main sugars in wine) into the sugar alcohols mannitol and (less commonly) erythritol. These sweet-tasting compounds can add sweetness to a wine where it is not desired (such as Cabernet Sauvignon). Mannitol taint, described as mannite by Pasteur, is often accompanied by other wine faults, including the presence of excessive levels of acetic acid, diacetyl, lactic acid, and 2-butanol, which can contribute to a "vinegary-estery" aroma. The wine may also have a slimy sheen on the surface.
Fresno mold and ropiness
In the mid-20th century, a cottony mycelium-like growth began appearing in the bottles of some sweet fortified wines produced in California's Central Valley. Being fortified, these wines often had alcohol levels in excess of 20%, a level that usually discourages the growth of most spoilage organisms associated with winemaking. Nicknamed "Fresno mold" after the place where it was first discovered, the culprit of this growth was determined to be L. fructivorans, a species which can be controlled by sanitation and by maintaining adequate sulfur dioxide levels.
Some Lactobacillus and Pediococcus species (particularly P. damnosus and P. pentosaceus) have the potential to synthesize polysaccharides that add an oily viscosity to the wine. In the case of Lactobacillus, some of these saccharides may be glucans that can be synthesized from glucose present in the wine at levels as low as 50–100 mg/L (0.005 to 0.01% residual sugar), and can thus afflict seemingly "dry" wines. While "ropiness" can occur in the barrel or tank, it is often observed in the wines several months after they are bottled. Wines with pH levels above 3.5 and low sulfur dioxide levels are at the most risk for developing this fault.
Called graisse (or "grease") by the French and les vins filant by Pasteur, this fault has been observed in apple wines and cider. It can also potentially be caused by other spoilage microbes such as Streptococcus mucilaginous, Candida krusei, and Acetobacter rancens.
Mousiness and geranium taint
Wines infected with L. brevis, L. hilgardii, and L. fermentum have been known to occasionally develop an aroma reminiscent of rodent droppings. The aroma becomes more pronounced when the wine is rubbed between the fingers and, if consumed, can leave a long, unpleasant finish. The aroma can be very potent, detectable at a sensory threshold as low as 1.6 parts per billion (μg/l). The compounds behind this fault are derivatives of the amino acid lysine, created through an oxidation reaction with ethanol. While undesirable LAB species have been most commonly associated with this fault, wines infected by Brettanomyces yeast in the presence of ammonium phosphate and lysine have also been known to exhibit it.
Sorbate is often used as a yeast inhibitor by home winemakers to stop alcoholic fermentation in the production of sweet wines. Most species of lactic acid bacteria can metabolize sorbate to produce 2-ethoxyhexa-3,5-diene, which has the aroma of crushed geranium leaves.
Tourne
Compared to malic and citric acids, tartaric acid is usually considered microbiologically stable. However, some species of Lactobacillus (particularly L. brevis and L. plantarum) have the potential to degrade tartaric acid in wine, reducing a wine's total acidity by 3-50%. French winemakers had long observed this phenomenon and called it tourne (meaning "turn to brown") in reference to the color change that can occur in the wine at the same time likely due to other processes at work in addition to the tartaric loss. While Lactobacillus is the most common culprit of tourne, some species of the spoilage film yeast Candida can also metabolize tartaric acid.
Health-related faults
While the presence of ethyl carbamate is not a sensory wine fault, the compound is a suspected carcinogen which is subject to regulation in many countries. The compound is produced from the degradation of the amino acid arginine, which is present in grape must and is also released into the wine through the autolysis of dead yeast cells. While the use of urea as a source of yeast assimilable nitrogen (no longer legal in most countries) was the most common cause of ethyl carbamate in wine, both O. oeni and L. buchneri have been known to produce carbamyl phosphate and citrulline, which can be precursors to ethyl carbamate formation. L. hilgardii, one of the "ferocious Lactobacillus" species, has also been suspected of contributing to ethyl carbamate production. In the United States, the Alcohol and Tobacco Tax and Trade Bureau has established a voluntary target limit for ethyl carbamate in wine of less than 15 μg/L for table wines and less than 60 μg/L for dessert wines.
Biogenic amines have been implicated as a potential cause of red wine headaches. In wine, histamine, cadaverine, phenylethylamine, putrescine, and tyramine have all been detected. These amines are created by the degradation of amino acids found in grape must and left over from the breakdown of dead yeast cells after fermentation. Most LAB have the potential to create biogenic amines, even some strains of O. oeni, but high levels of biogenic amines are most often associated with species from the Lactobacillus and Pediococcus genera. In the European Union, the concentration of biogenic amines in wine is beginning to be monitored, while the United States currently does not have any regulations.
References
External links
Purdue University - "The Joy of Malolactic Fermentation" Accessed 27 Dec. 2007
Vintessential Articles for Winemakers - Successful Malolactic Fermentations
Fermentation in food processing
Winemaking | Malolactic fermentation | [
"Chemistry"
] | 9,445 | [
"Fermentation in food processing",
"Fermentation"
] |
1,045,127 | https://en.wikipedia.org/wiki/Landrace | A landrace is a domesticated, locally adapted, often traditional variety of a species of animal or plant that has developed over time, through adaptation to its natural and cultural environment of agriculture and pastoralism, and due to isolation from other populations of the species. Landraces are distinct from cultivars and from standard breeds.
A significant proportion of farmers around the world grow landrace crops, and most plant landraces are associated with traditional agricultural systems. Landraces of many crops have probably been grown for millennia. Increasing reliance upon modern plant cultivars that are bred to be uniform has led to a reduction in biodiversity, because most of the genetic diversity of domesticated plant species lies in landraces and other traditionally used varieties. Some farmers using scientifically improved varieties also continue to raise landraces for agronomic reasons that include better adaptation to the local environment, lower fertilizer requirements, lower cost, and better disease resistance. Cultural and market preferences for landraces include culinary uses and product attributes such as texture, color, or ease of use.
Plant landraces have been the subject of more academic research, and the majority of academic literature about landraces is focused on botany in agriculture, not animal husbandry. Animal landraces are distinct from ancestral wild species of modern animal stock, and are also distinct from separate species or subspecies derived from the same ancestor as modern domestic stock. Not all landraces derive from wild or ancient animal stock; in some cases, notably dogs and horses, domestic animals have escaped in sufficient numbers in an area to breed feral populations that form new landraces through evolutionary pressure.
Characteristics
There are differences between authoritative sources on the specific criteria which describe landraces, although there is broad consensus about the existence and utility of the classification. Individual criteria may be weighted differently depending on a given source's focus (e.g., governmental regulation, biological sciences, agribusiness, anthropology and culture, environmental conservation, pet-keeping and pet-breeding, etc.). Additionally, not all cultivars agreed to be landraces exhibit every characteristic of a landrace. General features that characterize a landrace may include:
It is morphologically distinctive and identifiable (i.e., has particular and recognizable characteristics or properties), yet remains "dynamic".
It is genetically adapted to, and has a reputation for being able to withstand, the conditions of the local environment, including climate, disease and pests, even cultural practices.
It is not the product of formal (governmental, organizational, or private) breeding programs, and may lack systematic selection, development and improvement by breeders.
It is maintained and fostered less deliberately than a standardized breed, with its genetic isolation principally a matter of geography acting upon whatever animals happened to be brought by humans to a given area.
It has a historical origin in a specific geographic area, will usually have its own local name(s), and will often be classified according to intended purpose.
Where yield (e.g. of a grain or fruit crop) can be measured, a landrace will show high stability of yield, even under adverse conditions, but a moderate yield, even under carefully managed conditions.
At the level of genetic testing, its heredity will show a degree of integrity, but still some genetic heterogeneity (i.e. genetic diversity).
Terminology
Landrace literally means 'country-breed' (German: Landrasse) and close cognates of it are found in various Germanic languages. The first known reference to the role of landraces as genetic resources was made in 1890 at an agriculture and forestry congress in Vienna, Austria. The term was first defined by Kurt von Rümker in 1908, and more clearly described in 1909 by U. J. Mansholt, who wrote that landraces have more stable characteristics and better resistance to adverse conditions, but have lower production capacity than cultivars, and are apt to change genetically when moved to another environment. H. Kiessling added in 1912 that a landrace is a mixture of phenotypic forms despite relative outward uniformity, and a great adaptability to its natural and human environment.
The word landrace entered non-academic English in the early 1930s, by way of the Danish Landrace pig, a particular breed of lop-eared swine. Many other languages do not use separate terms, like landrace and breed, but instead rely on extended description to convey such distinctions. Spanish is one such language.
Geneticist D. Phillip Sponenberg described animal breeds within these classes: the landrace, the standardized breed, modern "type" breeds, industrial strains, and feral populations. He describes landraces as an early stage of breed development, created by a combination of founder effect, isolation, and environmental pressures. Human selection for production goals is also typical of landraces.
As discussed in more detail in breed, that term itself has several definitions from various scientific and animal husbandry perspectives. Some of those senses of breed relate to the concept of landraces. A Food and Agriculture Organization of the United Nations (FAO) guideline defines landrace and landrace breed as "a breed that has largely developed through adaptation to the natural environment and traditional production system in which it has been raised." This is in contrast to its definition of a standardized breed: "a breed of livestock that was developed according to a strict programme of genetic isolation and formal artificial selection to achieve a particular phenotype."
In various domestic species (including pigs, goats, sheep and geese) some standardized breeds include "Landrace" in their names, but do not meet widely used definitions of landraces. For example, the British Landrace pig is a standardized breed, derived from earlier breeds with "Landrace" names.
Farmers' variety, usually applied to local cultivars, or seen as intermediate between a landrace and a cultivar, may also include landraces when referring to plant varieties not subjected to formal breeding programs.
Autochthonous and allochthonous landraces
A landrace native to, or produced for a long time within the agricultural system in which it is found is referred to as an autochthonous landrace, while a more recently introduced one is termed an allochthonous landrace.
Within academic agronomy, the term autochthonous landrace is sometimes used with a more technical, productivity-related definition, synthesized by A. C. Zeven from previous definitions beginning with Mansholt's: "an autochthonous landrace is a variety with a high capacity to tolerate biotic and abiotic stress, resulting in a high yield stability and an intermediate yield level under a low input agricultural system."
The terms autochthonous and allochthonous are most often applied to plants, with animals more often being referred to as indigenous or native. Examples of references in sources to long-term local landraces of livestock include constructions such as "indigenous landraces of sheep", and "Leicester Longwool sheep were bred to the native landraces of the region". Some usage of autochthonous does occur in reference to livestock, e.g. "autochthonous races of cattle such as the Asturian mountain cattle – Ratina and Casina – and Tudanca cattle."
Biodiversity and conservation
A significant proportion of farmers around the world grow landrace crops. However, as industrialized agriculture spreads, cultivars, which are selectively bred for high yield, rapid growth, disease and drought resistance, and other commercial production values, are supplanting landraces, putting more and more of them at risk of extinction.
In 1927 at the International Agricultural Congress, organized by the predecessor of the FAO, an extensive discussion was held on the need to conserve landraces. A recommendation that members organize nation-by-nation landrace conservation did not succeed in leading to widespread conservation efforts.
Landraces are often free from many intellectual property and other regulatory encumbrances. However, in some jurisdictions, a focus on their production may result in missing out on some benefits afforded to producers of genetically selected and homogenous organisms, including breeders' rights legislation, easier availability of loans and other business services, even the right to share seed or stock with others, depending on how favorable the laws in the area are to high-yield agribusiness interests.
As Regine Andersen of the Fridtjof Nansen Institute (Norway) and the Farmers' Rights Project puts it, "Agricultural biodiversity is being eroded. This trend is putting at risk the ability of future generations to feed themselves. In order to reverse the trend, new policies must be implemented worldwide. The irony of the matter is that the poorest farmers are the stewards of genetic diversity." Protecting farmer interests and protecting biodiversity is at the heart of the International Treaty on Plant Genetic Resources for Food and Agriculture (the "Plant Treaty" for short), under the Food and Agriculture Organization of the United Nations (FAO), though its concerns are not exclusively limited to landraces.
Landraces played a basic role in the development of the standardized breeds but are today threatened by the market success of the standardized breeds. In developing countries, landraces still play an important role, especially in traditional production systems. Specimens within an animal landrace tend to be genetically similar, though more diverse than members of a standardized or formal breed.
In situ and ex situ landrace conservation
Two approaches have been used to conserve plant landraces:
in situ where the landrace is grown and conserved by farmers on farms.
ex situ where the landrace is conserved in an artificial environment such as a gene-bank, using controls such as laminated packets kept frozen.
As the amount of agricultural land dedicated to growing landrace crops declines, such as in the example of wheat landraces in the Fertile Crescent, landraces can become extinct in cultivation. Therefore ex situ landrace conservation practices are considered a way to avoid losing the genetic diversity completely. Research published in 2020 suggested that existing ways of cataloging diversity within ex situ genebanks fall short of cataloging the appropriate information for landrace crops.
An in situ conservation effort to save the Berrettina di Lungavilla squash landrace made use of participatory plant breeding practices in order to incorporate the local community into the work.
Preserving cereal landraces
Preservation efforts for cereal strains are ongoing, both in situ and in online-searchable germplasm collections (seed banks), coordinated by Biodiversity International and the National Institute of Agricultural Botany (NIAB, UK). However, more may need to be done, because plant genetic variety, the source of crop health and seed quality, depends on a diversity of landraces and other traditionally used varieties. Efforts have mostly focused on Iberia, the Balkans, and European Russia, and have been dominated by species from mountainous areas. Despite their incompleteness, these efforts have been described as "crucial in preventing the extinction of many of these local ecotypes".
An agricultural study published in 2008 showed that landrace cereal crops began to decline in Europe in the 19th century such that cereal landraces "have largely fallen out of use" in Europe. Landrace cultivation in central and northwest Europe was almost eradicated by the early 20th century, due to economic pressure to grow improved, modern cultivars. While many in the region are already extinct, some have survived by being passed from generation to generation, and have also been revived by enthusiasts outside Europe to preserve European agriculture and food culture elsewhere. These survivals are usually for specific uses, such as thatch, and traditional European cuisine and craft beer brewing.
Plants
Plant landrace development
The label landrace includes regional cultigens that are genetically heterogeneous, but with enough characteristics in common to permit their recognition as a group. These characteristics are used by farmers to manage diversity and purity within landraces.
In some cultures, the development of new landraces is typically limited to members of specific social groups, such as women or shamans. Maintaining existing landraces, like developing new landraces, requires that farmers be able to identify crop-specific characteristics and that those characteristics are passed on to following generations.
Over time, the process of identifying the distinguishing characteristic or features of a new landrace is reinforced by cultivation processes; for example, descendants of a plant that is notably drought tolerant may become iteratively more so through selective breeding as farmers regard it as better for dry areas and prioritize planting it in those locations. This is one way in which farming systems can develop a portfolio of landraces over time that have specific ecological niches and uses.
Conversely, modern cultivars can also be developed into a landrace over time when farmers save seed and practice selective breeding.
Although landraces are often discussed once they have become endemic to a particular geographical region, landraces have always been moved over long and short distances. Some landraces can adapt to various environments, while others only thrive within specific conditions. Self-fertilizing and vegetatively populated species adapt by changing the frequencies of phenotypes. Outbreeding crops absorb new genotypes through intentional and unintentional hybridization, or through mutation.
A clear example of a plant landrace is found in the diverse adaptations of wheat to differing artificial selection pressures.
Cultivars developed from landraces
Members of a landrace variety, selected for uniformity with regard to a unique feature over a period of time, can be developed into a farmers' variety or cultivar. Traits from landraces are valuable for incorporation into elite lines. Crop disease resistance genes from landraces can provide much-needed resistance to more widely used, modern varieties.
Examples of plant landraces
Beans
Carrots
Maize
Okra
Peas
Peppers
Rice
Squash
Tomatillo
Tomatoes
Wheat
Animals
Animal landrace development
Some standardized animal breeds originate from attempts to make landraces more consistent through selective breeding, and a landrace may become a more formal breed with the creation of a breed registry or publication of a breed standard. In such a case, one may think of the landrace as a "stage" in breed development. However, in other cases, formalizing a landrace may result in the genetic resource of a landrace being lost through crossbreeding.
While many landrace animals are associated with farming, other domestic animals have been put to use as modes of transportation, as companion animals, for sporting purposes, and for other non-farming uses, so their geographic distribution may differ. For example, horse landraces are less common because human use of them for transport has meant that they have moved with people more commonly and constantly than most other domestic animals, reducing the incidence of populations locally genetically isolated for extensive periods of time.
Examples of animal landraces
Cats
Many standardized breeds have rather recently (within a century or less) been derived from landraces. Examples, often called natural breeds, include Arabian Mau, Egyptian Mau, Korat, Kurilian Bobtail, Maine Coon, Manx, Norwegian Forest Cat, Siberian, and Siamese.
In some cases, such as the Turkish Angora and Turkish Van breeds and their possible derivation from the Van cat landrace, the relationships are not entirely clear.
Cattle
Dogs
Dog landraces and the selectively bred dog breeds that follow breed standards vary widely depending on their origins and purpose.
Landraces are distinguished from dog breeds which have breed standards, breed clubs and registries.
Landrace dogs have more variety in their appearance than do standardized dog breeds. An example of a dog landrace with a related standardized breed with a similar name is the collie. The Scotch Collie is a landrace, while the Rough Collie and the Border Collie are standardized breeds. They can be very different in appearance, though the Rough Collie in particular was developed from the Scotch Collie by inbreeding to fix certain highly desired traits. In contrast to the landrace, in the various standardized Collie breeds, purebred individuals closely match a breed-standard appearance but might have lost other useful characteristics and have developed undesirable traits linked to inbreeding.
The ancient landrace dogs of the Fertile Crescent that led to the Saluki breed excel in running down game across open tracts of hot desert, but conformation-bred individuals of the breed are not necessarily able to chase and catch desert hares.
Goats
Some standardized breeds that are derived from landraces include the Dutch Landrace, Swedish Landrace and Finnish Landrace goats. The Danish Landrace is a modern mix of three different breeds, one of which was a "Landrace"-named breed.
Sheep
Horses
The wild progenitor of the domestic horse is extinct. It is rare for landraces among domestic horses to remain isolated, due to human use of horses for transportation, thus causing horses to move from one local population to another.
The heavy 'draft' type of domestic horse, developed in Europe, has differentiated into many separate landraces or breeds. Examples of horse landraces also include insular populations in Greece and Indonesia, and, on a broader scale, New World populations derived from the founder stock of Colonial Spanish horse.
The Yakutian and Mongolian Horses of Asia have "unimproved" characteristics.
Pigs
The standardized swine breeds named "Landrace" are often not actually landraces or derived from landraces. The Danish Landrace pig breed, pedigreed in 1896 from an actual local landrace, is the principal ancestor of the American Landrace (1930s). The Swedish Landrace, in turn, is derived from the Danish and from other Scandinavian breeds, as is the British Landrace breed.
Chicken
Ducks
Geese
Many standardized goose breeds named "Landrace", e.g. the Twente Landrace goose, are not actually true landraces, but may be derived from them.
Rabbits
See also
References
External links
Short DIVERSEEDS video on crop wild relatives and landraces in the fertile crescent in Israel
Biology terminology
Breeds
Domesticated plants
Domesticated animals
Rare breed conservation | Landrace | [
"Biology"
] | 3,719 | [
"nan"
] |
1,045,142 | https://en.wikipedia.org/wiki/Debris | Debris (, ) is rubble, wreckage, ruins, litter and discarded garbage/refuse/trash, scattered remains of something destroyed, or, as in geology, large rock fragments left by a melting glacier, etc. Depending on context, debris can refer to a number of different things. The first apparent use of the French word in English is in a 1701 description of the army of Prince Rupert upon its retreat from a battle with the army of Oliver Cromwell, in England.
Disaster
In disaster scenarios, tornadoes leave behind large pieces of houses and widespread destruction. Debris also flies around a tornado while it is in progress: the tornado's winds capture the debris they kick up and spin it inside the vortex, and the tornado's wind radius is larger than the funnel itself. Tsunamis and hurricanes also bring large amounts of debris, as with Hurricane Katrina in 2005 and Hurricane Sandy in 2012. Earthquakes can reduce cities to rubble.
Geological
In geology, debris usually applies to the remains of geological activity including landslides, volcanic explosions, avalanches, mudflows or Glacial lake outburst floods (Jökulhlaups) and moraine, lahars, and lava eruptions. Geological debris sometimes moves in a stream called a debris flow. When it accumulates at the base of hillsides, it can be called "talus" or "scree".
In mining, debris called attle usually consists of rock fragments which contain little or no ore.
Marine
Marine debris applies to floating garbage such as bottles, cans, styrofoam, cruise ship waste, offshore oil and gas exploration and production facilities pollution, and fishing paraphernalia from professional and recreational boaters. Marine debris is also called litter or flotsam and jetsam. Objects that can constitute marine debris include used automobile tires, detergent bottles, medical wastes, discarded fishing line and nets, soda cans, and bilge waste solids.
In addition to being unsightly, it can pose a serious threat to marine life, boats, swimmers, divers, and others. For example, each year millions of seabirds, sea turtles, fish, and marine mammals become entangled in marine debris, or ingest plastics which they have mistaken for food. As many as 30,000 northern fur seals per year get caught in abandoned fishing nets and either drown or suffocate. Whales mistake plastic bags for squid, and birds may mistake plastic pellets for fish eggs. At other times, animals accidentally eat the plastic while feeding on natural food.
The largest concentration of marine debris is the Great Pacific Garbage Patch.
Marine debris most commonly originates from land-based sources. Various international agencies are currently working to reduce marine debris levels around the world.
Meteorological
In meteorology, debris usually applies to the remains of human habitation and natural flora after storm related destruction. This debris is also commonly referred to as storm debris. Storm debris commonly consists of roofing material, downed tree limbs, downed signs, downed power lines and poles, and wind-blown garbage. Storm debris can become a serious problem immediately after a storm, in that it often blocks access to individuals and communities that may require emergency services. This material frequently exists in such large quantities that disposing of it becomes a serious issue for a community. In addition, storm debris is often hazardous by its very nature, since, for example, downed power lines annually account for storm-related deaths.
Space
Space debris usually refers to the remains of spacecraft that have either fallen to Earth or are still orbiting Earth. Space debris may also consist of natural components such as chunks of rock and ice. The problem of space debris has grown as various space programs have left legacies of launches, explosions, repairs, and discards in both low Earth orbit and more remote orbits. These orbiting fragments have reached a great enough proportion to constitute a hazard to future space launches of both satellite and crewed vehicles. Various government agencies and international organizations are beginning to track space debris and also research possible solutions to the problem. While many of these items, ranging in size from nuts and bolts to entire satellites and spacecraft, may fall to Earth, other items located in more remote orbits may stay aloft for centuries. The velocity of some of these pieces of space junk has been clocked in excess of 17,000 miles per hour (27,000 km/h). A piece of space debris falling to Earth leaves a fiery trail, just like a meteor.
A debris disk is a circumstellar disk of dust and debris in orbit around a star.
Surgical
In medicine, debris usually refers to biological matter that has accumulated or lodged in surgical instruments and is referred to as surgical debris. The presence of surgical debris can result in cross-infections or nosocomial infections if not removed and the affected surgical instruments or equipment properly disinfected.
War
In the aftermath of a war, large areas of the region of conflict are often strewn with war debris in the form of abandoned or destroyed hardware and vehicles, mines, unexploded ordnance, bullet casings and other fragments of metal.
Much war debris has the potential to be lethal and continues to kill and maim civilian populations for years after the end of a conflict. The risks from war debris may be sufficiently high to prevent or delay the return of refugees. In addition war debris may contain hazardous chemicals or radioactive components that can contaminate the land or poison civilians who come into contact with it. Many Mine clearance agencies are also involved in the clearance of war debris.
Land mines in particular are very dangerous as they can remain active for decades after a conflict, which is why they have been banned by international war regulations.
In November 2006 the Protocol on Explosive Remnants of War came into effect, with 92 countries subscribing to the treaty, which requires the parties involved in a conflict to assist with the removal of unexploded ordnance following the end of hostilities.
Some of the countries most affected by war debris are Afghanistan, Angola, Cambodia, Iraq and Laos.
Similarly military debris may be found in and around firing range and military training areas.
Debris can also be used as cover for military purposes, depending on the situation.
Culinary
In South Louisiana's Creole and Cajun cultures, debris (pronounced "DAY-bree") refers to chopped organs such as liver, heart, kidneys, tripe, spleen, brain, lungs and pancreas.
See also
Debris fallout
Woody debris
References
External links
United States Geological Survey: Debris Flows, Mudflows, Jökulhlaups, and Lahars
Matter
Pollution | Debris | [
"Physics"
] | 1,333 | [
"Matter"
] |
1,045,465 | https://en.wikipedia.org/wiki/Open%20Mobile%20Alliance | OMA SpecWorks, previously the Open Mobile Alliance (OMA), is a standards organization which develops open, international technical standards for the mobile phone industry. It is a nonprofit Non-governmental organization (NGO), not a formal government-sponsored standards organization as is the International Telecommunication Union (ITU): a forum for industry stakeholders to agree on common specifications for products and services.
History
The OMA was created in June 2002 as an answer to the proliferation of industry forums, each dealing with a few application protocols: the WAP Forum (focused on browsing and device provisioning protocols), the Wireless Village (focused on instant messaging and presence), the SyncML Initiative (focused on data synchronization), the Location Interoperability Forum, the Mobile Games Interoperability Forum, and the Mobile Wireless Internet Forum. Each of these forums had its own bylaws, decision-making procedures, and release schedules, and in some instances there was overlap in the specifications, causing duplication of work.
Members include traditional wireless industry players such as equipment and mobile systems manufacturers (Ericsson, ZTE, Nokia, Qualcomm, Rohde & Schwarz) and mobile operators (AT&T, NTT Docomo, Orange, T-Mobile, Verizon), and also software vendors (Gemalto, Mavenir and others).
In March, 2018, it merged with the IPSO Alliance to form OMA SpecWorks.
Related standards bodies include: 3rd Generation Partnership Project (3GPP), 3rd Generation Partnership Project 2 (3GPP2), Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C).
Its mission is to provide interoperability of services across countries, operators and mobile terminals.
The OMA only standardises applicative protocols; OMA specifications are intended to work with any cellular network technology being used to provide networking and data transport. These networking technologies are specified by outside parties. In particular, OMA specifications for a given function are the same with either GSM, UMTS, or CDMA2000 networks.
Adherence to the standards is entirely voluntary; the OMA has no power to mandate their adoption.
OMA members that own intellectual property rights (e.g. patents) on technologies that are essential to realizing a specification agree in advance to provide licenses to their technology on "fair, reasonable and non-discriminatory licensing" terms to other members.
OMA is incorporated in California, United States.
Standard specifications
The OMA maintains many specifications, including:
Browsing specifications, now named Browser and Content, formerly named WAP browsing; in current version, these specifications rely essentially on XHTML Mobile Profile
Multimedia Messaging Service (MMS) specifications
OMA DRM specifications for digital rights management
OMA Instant Messaging and Presence Service (OMA IMPS) specification, which is a system for instant messaging on mobile phones; formerly named Wireless Village
OMA SIMPLE IM instant messaging based on Session Initiation Protocol (SIP) SIMPLE
OMA CAB Converged Address Book, a social address book service standard
OMA CPM Converged IP Messaging, the underlying enabler for Rich Communication Services
OMA Lock and Wipe (LAWMO) specifications for those functions
OMA Lightweight M2M (LwM2M) OMA LWM2M specifications for machine to machine functions
OMA Client Provisioning (OMA CP) specification for provisioning
OMA Data Synchronization (OMA DS) specification for data synchronization using SyncML
OMA Device Management (OMA DM) specification for mobile device management using SyncML
OMA BCAST specification for Mobile Broadcast Services
OMA Rich Media Environment (RME) specification
OMA OpenCMAPI Connection Management APIs
OMA PoC specification for Push to talk Over Cellular (PoC)
OMA Presence SIMPLE specification for presence based on Session Initiation Protocol (SIP) SIMPLE
OMA Service Environment
FUMO Firmware update
Secure User Plane Location Protocol (SUPL), an IP-based service for assisted GPS on handsets
Mobile Location Protocol (MLP), an IP-based protocol for obtaining the position/location of mobile handset
Wireless Application Protocol 1 (WAP1), 5-layer stack of protocols
OMA LOCSIP Location in SIP/IP Core
Software Component Management Object (SCOMO), allows a management authority to perform software management on a remote device
The OMA specifications inspired or formed the base for the following:
NGSI-LD is an API and information model specified by ETSI based (with permission) on OMA specifications NGSI-09 and NGSI-10, extending them to provide bindings and to formally use property graphs, with node and relationship (edge) types that may play the role of labels in formerly-mentioned models and support semantic referencing by inheriting classes defined in shared ontologies.
See also
Linux Phone Standards Forum (LiPS)
LiMo Foundation
Content Management Interface
Open Handset Alliance
Mobile Platform
3GPP
European Telecommunications Standards Institute (ETSI)
List of wireless router firmware projects
Mobile Device Management
List of Mobile Device Management Software
References
External links
Mobile telecommunications standards
Open standards
Telecommunications organizations
Business organizations based in the United States
Mobile phone industry | Open Mobile Alliance | [
"Technology"
] | 1,064 | [
"Mobile telecommunications",
"Mobile telecommunications standards"
] |
1,045,467 | https://en.wikipedia.org/wiki/Travelers%27%20diarrhea | Travelers' diarrhea (TD) is a stomach and intestinal infection. TD is defined as the passage of unformed stool (one or more by some definitions, three or more by others) while traveling. It may be accompanied by abdominal cramps, nausea, fever, headache and bloating. Occasionally dysentery may occur. Most travelers recover within three to four days with little or no treatment. About 12% of people may have symptoms for a week.
Bacteria are responsible for more than half of cases, typically via foodborne illness and waterborne diseases. The bacteria enterotoxigenic Escherichia coli (ETEC) are typically the most common except in Southeast Asia, where Campylobacter is more prominent. About 10 to 20 percent of cases are due to norovirus. Protozoa such as Giardia may cause longer term disease. The risk is greatest in the first two weeks of travel and among young adults. People affected are more often from the developed world.
Recommendations for prevention include eating only properly cleaned and cooked food, drinking bottled water, and frequent hand washing. The oral cholera vaccine, while effective for cholera, is of questionable use for travelers' diarrhea. Preventive antibiotics are generally discouraged. Primary treatment includes rehydration and replacing lost salts (oral rehydration therapy). Antibiotics are recommended for significant or persistent symptoms, and can be taken with loperamide to decrease diarrhea. Hospitalization is required in less than 3 percent of cases.
Estimates of the percentage of people affected range from 20 to 50 percent among travelers to the developing world. TD is particularly common among people traveling to Asia (except for Japan and Singapore), the Middle East, Africa, Latin America, and Central and South America. The risk is moderate in Southern Europe and Russia. TD has been linked to later irritable bowel syndrome and Guillain–Barré syndrome. It has colloquially been known by a number of names, including "Montezuma's revenge", "mummy tummy" and "Delhi belly".
Signs and symptoms
The onset of TD usually occurs within the first week of travel, but may occur at any time while traveling, and even after returning home, depending on the incubation period of the infectious agent. Bacterial TD typically begins abruptly, but Cryptosporidium may incubate for seven days, and Giardia for 14 days or more, before symptoms develop. Typically, a traveler experiences four to five loose or watery bowel movements each day. Other commonly associated symptoms are abdominal cramping, bloating, fever, and malaise. Appetite may decrease significantly. Though unpleasant, most cases of TD are mild, and resolve in a few days without medical intervention.
Blood or mucus in the diarrhea, significant abdominal pain, or high fever suggests a more serious cause, such as cholera, characterized by a rapid onset of weakness and torrents of watery diarrhea with flecks of mucus (described as "rice water" stools). Medical care should be sought in such cases; dehydration is a serious consequence of cholera, and may trigger serious sequelae—including, in rare instances, death—as rapidly as 24 hours after onset if not addressed promptly.
Causes
Infectious agents are the primary cause of travelers' diarrhea. Bacterial enteropathogens cause about 80% of cases. Viruses and protozoans account for most of the rest.
The most common causative agent isolated in countries surveyed has been enterotoxigenic Escherichia coli (ETEC). Enteroaggregative E. coli is increasingly recognized. Shigella spp. and Salmonella spp. are other common bacterial pathogens. Campylobacter, Yersinia, Aeromonas, and Plesiomonas spp. are less frequently found. Mechanisms of action vary: some bacteria release toxins which bind to the intestinal wall and cause diarrhea; others damage the intestines themselves by their direct presence.
The pathogen Brachyspira pilosicoli also appears to be responsible for many cases of chronic intermittent watery diarrhea, and is only diagnosed through colonic biopsy and the microscopic discovery of a false brush border on H&E or Warthin silver stain; its brush border is stronger and longer than that of Brachyspira aalborgi. It often goes undiagnosed, as the organism does not grow in standard stool culture and 16S PCR panel primers do not match Brachyspira sequences.
While viruses are associated with less than 20% of adult cases of travelers' diarrhea, they may be responsible for nearly 70% of cases in infants and children. Diarrhea due to viral agents is unaffected by antibiotic therapy, but is usually self-limited. Protozoans such as Giardia lamblia, Cryptosporidium and Cyclospora cayetanensis can also cause diarrhea. Pathogens commonly implicated in travelers' diarrhea appear in the table in this section.
A subtype of travelers' diarrhea afflicting hikers and campers, sometimes known as wilderness diarrhea, may have a somewhat different frequency of distribution of pathogens.
Risk factors
The primary source of infection is ingestion of fecally contaminated food or water. Attack rates are similar for men and women.
The most important determinant of risk is the traveler's destination. High-risk destinations include developing countries in Latin America, Africa, the Middle East, and Asia. Among backpackers, additional risk factors include drinking untreated surface water and failure to maintain personal hygiene practices and clean cookware. Campsites often have very primitive (if any) sanitation facilities, making them potentially as dangerous as any developing country.
Although travelers' diarrhea usually resolves within three to five days (mean duration: 3.6 days), in about 20% of cases, the illness is severe enough to require bedrest, and in 10%, the illness duration exceeds one week. For those prone to serious infections, such as bacillary dysentery, amoebic dysentery, and cholera, TD can occasionally be life-threatening. Others at higher-than-average risk include young adults, immunosuppressed persons, persons with inflammatory bowel disease or diabetes, and those taking H2 blockers or antacids.
Immunity
Travelers often get diarrhea from eating and drinking foods and beverages that have no adverse effects on local residents. This is due to immunity that develops with constant, repeated exposure to pathogenic organisms. The extent and duration of exposure necessary to acquire immunity has not been determined; it may vary with each individual organism. A study among expatriates in Nepal suggests that immunity may take up to seven years to develop—presumably in adults who avoid deliberate pathogen exposure.
Conversely, immunity acquired by American students while living in Mexico disappeared, in one study, as quickly as eight weeks after cessation of exposure.
Prevention
Sanitation
Recommendations include avoidance of questionable foods and drinks, on the assumption that TD is fundamentally a sanitation failure, leading to bacterial contamination of drinking water and food. While the effectiveness of this strategy has been questioned, given that travelers have little or no control over sanitation in hotels and restaurants, and little evidence supports the contention that food vigilance reduces the risk of contracting TD, guidelines continue to recommend basic, common-sense precautions when making food and beverage choices:
Maintain good hygiene and use only safe water for drinking and brushing teeth.
Safe beverages include bottled water, bottled carbonated beverages, and water boiled or appropriately treated by the traveler (as described below). Caution should be exercised with tea, coffee, and other hot beverages that may be only heated, not boiled.
In restaurants, insist that bottled water be unsealed in your presence; reports of locals filling empty bottles with untreated tap water and reselling them as purified water have surfaced. When in doubt, a bottled carbonated beverage is the safest choice, since it is difficult to simulate carbonation when refilling a used bottle.
Avoid ice, which may not have been made with safe water.
Avoid green salads, because the lettuce and other uncooked ingredients are unlikely to have been washed with safe water.
Avoid eating raw fruits and vegetables unless cleaned and peeled personally.
If handled properly, thoroughly cooked fresh and packaged foods are usually safe. Raw or undercooked meat and seafood should be avoided. Unpasteurized milk, dairy products, mayonnaise, and pastry icing are associated with increased risk for TD, as are foods and beverages purchased from street vendors and other establishments where unhygienic conditions may be present.
Water
Although safe bottled water is now widely available in most remote destinations, travelers can treat their own water if necessary, or as an extra precaution.
Techniques include boiling, filtering, chemical treatment, and ultraviolet light; boiling is by far the most effective of these methods. Boiling rapidly kills all active bacteria, viruses, and protozoa. Prolonged boiling is usually unnecessary; most microorganisms are killed within seconds at high water temperatures. The second-most effective method is to combine filtration and chemical disinfection. Filters eliminate most bacteria and protozoa, but not viruses. Chemical treatment with halogens—chlorine bleach, tincture of iodine, or commercial tablets—has low-to-moderate effectiveness against protozoa such as Giardia, but works well against bacteria and viruses.
UV light is effective against both viruses and cellular organisms, but only works in clear water, and it is ineffective unless manufacturer's instructions are carefully followed for maximum water depth/distance from UV source, and for dose/exposure time. Other claimed advantages include short treatment time, elimination of the need for boiling, no taste alteration, and decreased long-term cost compared with bottled water. The effectiveness of UV devices is reduced when water is muddy or turbid; as UV is a type of light, any suspended particles create shadows that hide microorganisms from UV exposure.
Medications
Bismuth subsalicylate four times daily reduces rates of travelers' diarrhea. Though many travelers find a four-times-per-day regimen inconvenient, lower doses are not effective. Potential side effects include black tongue, black stools, nausea, constipation, and ringing in the ears. Bismuth subsalicylate should not be taken by those with aspirin allergy, kidney disease, or gout, nor concurrently with certain antibiotics such as the quinolones, and should not be taken continuously for more than three weeks. Some countries do not recommend it due to the risk of rare but serious side effects.
A hyperimmune bovine colostrum to be taken by mouth is marketed in Australia for prevention of ETEC-induced TD. As yet, no studies show efficacy under actual travel conditions.
Though effective, antibiotics are not recommended for the prevention of TD in most situations because of the risk of allergy or adverse reactions to the antibiotics, and because intake of preventive antibiotics may decrease the effectiveness of such drugs should a serious infection develop subsequently. Antibiotics can also cause vaginal yeast infections, or overgrowth of the bacterium Clostridioides difficile, leading to pseudomembranous colitis and its associated severe, unrelenting diarrhea.
Antibiotics may be warranted in special situations where benefits outweigh the above risks, such as immunocompromised travelers, chronic intestinal disorders, prior history of repeated disabling bouts of TD, or scenarios in which the onset of diarrhea might prove particularly troublesome. Options for prophylactic treatment include the fluoroquinolone antibiotics (such as ciprofloxacin), azithromycin, and trimethoprim/sulfamethoxazole, though the latter has proved less effective in recent years. Rifaximin may also be useful. Quinolone antibiotics may bind to metallic cations such as bismuth, and should not be taken concurrently with bismuth subsalicylate. Trimethoprim/sulfamethoxazole should not be taken by anyone with a history of sulfa allergy.
Vaccination
The oral cholera vaccine, while effective for prevention of cholera, is of questionable use for prevention of TD. A 2008 review found tentative evidence of benefit. A 2015 review stated it may be reasonable for those at high risk of complications from TD. Several vaccine candidates targeting ETEC or Shigella are in various stages of development.
Probiotics
One 2007 review found that probiotics may be safe and effective for prevention of TD, while another review found no benefit. A 2009 review confirmed that more study is needed, as the evidence to date is mixed.
Treatment
Most cases of TD are mild and resolve in a few days without treatment, but severe or protracted cases may result in significant fluid loss and dangerous electrolytic imbalance. Dehydration due to diarrhea can also alter the effectiveness of medicinal and contraceptive drugs. Adequate fluid intake (oral rehydration therapy) is therefore a high priority. Commercial rehydration drinks are widely available; alternatively, purified water or other clear liquids are recommended, along with salty crackers or oral rehydration salts (available in stores and pharmacies in most countries) to replenish lost electrolytes. Carbonated water or soda, left open to allow dissipation of the carbonation, is useful when nothing else is available. In severe or protracted cases, the oversight of a medical professional is advised.
Antibiotics
If diarrhea becomes severe (typically defined as three or more loose stools in an eight-hour period), especially if associated with nausea, vomiting, abdominal cramps, fever, or blood in stools, medical treatment should be sought. Such patients may benefit from antimicrobial therapy. A 2000 literature review found that antibiotic treatment shortens the duration and severity of TD; most reported side effects were minor, or resolved on stopping the antibiotic.
The antibiotic recommended varies based upon the destination of travel. Trimethoprim–sulfamethoxazole and doxycycline are no longer recommended because of high levels of resistance to these agents. Antibiotics are typically given for three to five days, but single doses of azithromycin or levofloxacin have been used. Rifaximin and rifamycin are approved in the U.S. for treatment of TD caused by ETEC. If diarrhea persists despite therapy, travelers should be evaluated for bacterial strains resistant to the prescribed antibiotic, possible viral or parasitic infections, bacterial or amoebic dysentery, Giardia, helminths, or cholera.
Antimotility agents
Antimotility drugs such as loperamide and diphenoxylate reduce the symptoms of diarrhea by slowing transit time in the gut. They may be taken to slow the frequency of stools, but not enough to stop bowel movements completely, which delays expulsion of the causative organisms from the intestines. They should be avoided in patients with fever, bloody diarrhea, and possible inflammatory diarrhea. Adverse reactions may include nausea, vomiting, abdominal pain, hives or rash, and loss of appetite. Antimotility agents should not, as a rule, be taken by children under age two.
Epidemiology
An estimated 10 million people—20 to 50% of international travelers—develop TD each year. It is more common in the developing world, where rates exceed 60%, but has been reported in some form in virtually every travel destination in the world.
Society and culture
Moctezuma's revenge is a colloquial term for travelers' diarrhea contracted in Mexico. The name refers to Moctezuma II (1466–1520), the Tlatoani (ruler) of the Aztec civilization who was overthrown by the Spanish conquistador Hernán Cortés in the early 16th century, thereby bringing large portions of what is now Mexico and Central America under the rule of the Spanish crown. The name's relevance lies in the fact that Cortés and his soldiers carried the smallpox virus, to which the people of Mexico had never been exposed. The resulting infection reduced the population of Tenochtitlan by 40 percent in 1520 alone.
Wilderness diarrhea
Wilderness diarrhea, also called wilderness-acquired diarrhea (WAD) or backcountry diarrhea, refers to diarrhea among backpackers, hikers, campers and other outdoor recreationalists in wilderness or backcountry situations, either at home or abroad. It is caused by the same fecal microorganisms as other forms of travelers' diarrhea, usually bacterial or viral. Since wilderness campsites seldom provide access to sanitation facilities, the infection risk is similar to that of any developing country. Water treatment, good hygiene, and dish washing have all been shown to reduce the incidence of WAD.
References
External links
Diarrhea
Foodborne illnesses
Waterborne diseases
Infectious diseases
Tourism
Conditions diagnosed by stool test
Wikipedia medicine articles ready to translate
Wikipedia emergency medicine articles ready to translate
Travel | Travelers' diarrhea | [
"Physics"
] | 3,554 | [
"Physical systems",
"Transport",
"Travel"
] |
1,045,553 | https://en.wikipedia.org/wiki/Multinomial%20distribution | In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a k-sided die rolled n times. For n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.
When k is 2 and n is 1, the multinomial distribution is the Bernoulli distribution. When k is 2 and n is bigger than 1, it is the binomial distribution. When k is bigger than 2 and n is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so n determines the suffix, and k the prefix).
The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing n independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of n experiments, where the outcome of each trial has a categorical distribution, such as rolling a k-sided die n times.
Let k be a fixed finite number. Mathematically, we have k possible mutually exclusive outcomes, with corresponding probabilities p1, ..., pk, and n independent trials. Since the k outcomes are mutually exclusive and one must occur, we have pi ≥ 0 for i = 1, ..., k and p1 + ... + pk = 1. Then if the random variables Xi indicate the number of times outcome number i is observed over the n trials, the vector X = (X1, ..., Xk) follows a multinomial distribution with parameters n and p, where p = (p1, ..., pk). While the trials are independent, their outcomes Xi are dependent because they must sum to n.
Definitions
Probability mass function
Suppose one does an experiment of extracting n balls of k different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color i (i = 1, ..., k) as Xi, and denote as pi the probability that a given extraction will be in color i. The probability mass function of this multinomial distribution is:

f(x1, ..., xk; n, p1, ..., pk) = Pr(X1 = x1 and ... and Xk = xk) = n! / (x1! ⋯ xk!) · p1^x1 ⋯ pk^xk,

for non-negative integers x1, ..., xk with x1 + ... + xk = n (and 0 otherwise).

The probability mass function can be expressed using the gamma function as:

f(x1, ..., xk; p1, ..., pk) = Γ(x1 + ... + xk + 1) / (Γ(x1 + 1) ⋯ Γ(xk + 1)) · p1^x1 ⋯ pk^xk.
This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior.
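A direct computation of this probability mass function needs only the standard library; the function name multinomial_pmf below is a name chosen here for illustration, not a standard API:

import math

def multinomial_pmf(x, p):
    # n! / (x1! ... xk!) * p1^x1 * ... * pk^xk
    n = sum(x)
    coef = math.factorial(n)
    for xi in x:
        coef //= math.factorial(xi)  # exact: multinomial coefficients are integers
    prob = float(coef)
    for xi, pi in zip(x, p):
        prob *= pi ** xi
    return prob

print(multinomial_pmf((1, 2, 3), (0.2, 0.3, 0.5)))  # 0.135, the election example below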
Example
Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample? The answer is

Pr(A = 1, B = 2, C = 3) = 6! / (1! 2! 3!) × (0.2)^1 (0.3)^2 (0.5)^3 = 60 × 0.2 × 0.09 × 0.125 = 0.135.
Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large in comparison to a fixed sample size.
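SciPy exposes the same computation as a distribution object, which can serve as a quick numerical check of the example above (a sketch assuming SciPy is installed):

from scipy.stats import multinomial

# 6!/(1! 2! 3!) * 0.2 * 0.3^2 * 0.5^3 = 60 * 0.2 * 0.09 * 0.125 = 0.135
print(multinomial.pmf([1, 2, 3], n=6, p=[0.2, 0.3, 0.5]))  # approx. 0.135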
Properties
Normalization
The multinomial distribution is normalized according to:

Σ n! / (x1! ⋯ xk!) · p1^x1 ⋯ pk^xk = 1,

where the sum is over all k-tuples of non-negative integers (x1, ..., xk) such that x1 + ... + xk = n.
Expected value and variance
The expected number of times the outcome i was observed over n trials is

E(Xi) = n pi.

The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore

Var(Xi) = n pi(1 − pi).

The off-diagonal entries are the covariances:

Cov(Xi, Xj) = −n pi pj

for i, j distinct.
All covariances are negative because for fixed n, an increase in one component of a multinomial vector requires a decrease in another component.
When these expressions are combined into a matrix with i, j element the result is a k × k positive-semidefinite covariance matrix of rank k − 1. In the special case where k = n and where the pi are all equal, the covariance matrix is the centering matrix.
The entries of the corresponding correlation matrix are

ρ(Xi, Xi) = 1,
ρ(Xi, Xj) = Cov(Xi, Xj) / √(Var(Xi) Var(Xj)) = −√(pi pj / ((1 − pi)(1 − pj))).

Note that the number of trials n drops out of this expression.
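The moment formulas above are easy to confirm by simulation; this sketch uses NumPy's built-in multinomial generator (the parameter values are chosen here for illustration):

import numpy as np

rng = np.random.default_rng(0)
n, p = 10, np.array([0.2, 0.3, 0.5])
draws = rng.multinomial(n, p, size=200_000)

print(draws.mean(axis=0))   # approx. n*p = [2, 3, 5]
print(np.cov(draws.T))      # diagonal approx. n*p_i(1 - p_i), off-diagonal approx. -n*p_i*p_j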
Each of the k components separately has a binomial distribution with parameters n and pi, for the appropriate value of the subscript i.
The support of the multinomial distribution is the set

{(x1, ..., xk) : each xi a non-negative integer, x1 + ... + xk = n}.

Its number of elements is the binomial coefficient C(n + k − 1, k − 1), the number of ways to distribute n trials among k categories.
Matrix notation
In matrix notation,

E(X) = n p,

and

Var(X) = n (diag(p) − p pᵀ),

with pᵀ = the row vector transpose of the column vector p.
Visualization
As slices of generalized Pascal's triangle
Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension—i.e. a simplex with a grid.
As polynomial coefficients
Similarly, just like one can interpret the binomial distribution as the (normalized) polynomial coefficients of (p + q)^n when expanded, one can interpret the multinomial distribution as the coefficients of (p1 + p2 + ... + pk)^n when expanded, noting that just the coefficients must sum up to 1.
Large deviation theory
Asymptotics
By Stirling's formula, in the limit n → ∞, we have

ln Pr(X1 = x1, ..., Xk = xk) = −n DKL(p̂ ∥ p) + O(ln n),

where the relative frequencies p̂i = xi/n in the data can be interpreted as probabilities from the empirical distribution p̂, and DKL is the Kullback–Leibler divergence.
This formula can be interpreted as follows.
Consider Δk, the space of all possible distributions over the categories {1, ..., k}. It is a simplex. After n independent samples from the categorical distribution (which is how we construct the multinomial distribution), we obtain an empirical distribution p̂.

By the asymptotic formula, the probability that the empirical distribution p̂ deviates from the actual distribution p decays exponentially, at a rate n DKL(p̂ ∥ p). The more experiments and the more different p̂ is from p, the less likely it is to see such an empirical distribution.
If A is a closed subset of Δk, then by dividing A up into pieces, and reasoning about the growth rate of the probability on each piece, we obtain Sanov's theorem, which states that

(1/n) ln Pr(p̂ ∈ A) → −inf { DKL(q ∥ p) : q ∈ A } as n → ∞.
Concentration at large n
Due to the exponential decay, at large n, almost all the probability mass is concentrated in a small neighborhood of p. In this small neighborhood, we can take the first nonzero term in the Taylor expansion of DKL, to obtain

ln Pr ≈ −(n/2) Σi (p̂i − pi)²/pi.

This resembles the gaussian distribution, which suggests the following theorem:

Theorem. At the n → ∞ limit, 2n DKL(p̂ ∥ p) converges in distribution to the chi-squared distribution χ²(k − 1).
The space of all distributions over the categories {1, ..., k} is a simplex: Δk = {(p1, ..., pk) : pi ≥ 0, p1 + ... + pk = 1}, and the set of all possible empirical distributions after n experiments is a subset of the simplex: Δk,n = Δk ∩ (ℤ/n)^k, where ℤ/n denotes the integer multiples of 1/n. That is, it is the intersection between Δk and the lattice (ℤ/n)^k.

As n increases, most of the probability mass is concentrated in a subset of Δk,n near p, and the probability distribution near p becomes well-approximated by exp(−(n/2) Σi (p̂i − pi)²/pi). From this, we see that the subset upon which the mass is concentrated has radius on the order of 1/√n, but the points in the subset are separated by distance on the order of 1/n, so at large n the points merge into a continuum.

To convert this from a discrete probability distribution to a continuous probability density, we need to multiply by the volume occupied by each point of Δk,n in Δk. However, by symmetry, every point occupies exactly the same volume (except a negligible set on the boundary), so we obtain a probability density ρ(p̂) = C exp(−(n/2) Σi (p̂i − pi)²/pi), where C is a constant.

Finally, since the simplex Δk is not all of ℝ^k, but only a (k − 1)-dimensional plane within it, we obtain the desired result.
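The chi-squared limit of the theorem above can be checked empirically; a sketch assuming NumPy and SciPy (scipy.special.xlogy handles the 0 log 0 = 0 convention for empty categories):

import numpy as np
from scipy import stats
from scipy.special import xlogy

rng = np.random.default_rng(1)
n, p = 1000, np.array([0.2, 0.3, 0.5])
phat = rng.multinomial(n, p, size=50_000) / n
stat = 2 * n * xlogy(phat, phat / p).sum(axis=1)   # 2n * D_KL(phat || p)

print(stat.mean())   # approx. 2, the mean of chi-squared with k - 1 = 2 degrees of freedom
print(stats.kstest(stat, stats.chi2(df=2).cdf).statistic)   # small value indicates a close fit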
Conditional concentration at large n
The above concentration phenomenon can be easily generalized to the case where we condition upon linear constraints. This is the theoretical justification for Pearson's chi-squared test.
Theorem. Given frequencies xi observed in a dataset with n points, we impose ℓ independent linear constraints (notice that the first constraint is simply the requirement that the empirical distributions sum to one), such that the empirical p̂ satisfies all these constraints simultaneously. Let q denote the I-projection of the prior distribution p on the sub-region of the simplex allowed by the linear constraints. At the n → ∞ limit, sampled counts n p̂ from the multinomial distribution conditional on the linear constraints are governed by 2n DKL(p̂ ∥ q), which converges in distribution to the chi-squared distribution χ²(k − ℓ).

An analogous proof applies in this Diophantine problem of coupled linear equations in count variables n p̂, but this time the allowed region is the intersection of the lattice of empirical distributions with ℓ hyperplanes, all linearly independent, so the probability density is restricted to a (k − ℓ)-dimensional plane. In particular, expanding the KL divergence around its minimum q (the I-projection of p on the constrained region) in the constrained problem ensures by the Pythagorean theorem for I-divergence that any constant and linear term in the counts vanishes from the conditional probability to multinomially sample those counts.
Notice that

by definition, every one of the empirical frequencies p̂i must be a rational number,

whereas each pi may be chosen from any real number in [0, 1] and need not satisfy the Diophantine system of equations.

Only asymptotically, as n → ∞, can the p̂i's be regarded as probabilities over [0, 1].
Away from empirically observed constraints (such as moments or prevalences) the theorem can be generalized:
Theorem.
Given functions f1, ..., fℓ, such that they are continuously differentiable in a neighborhood of p, and the vectors ∇f1(p), ..., ∇fℓ(p) are linearly independent;
given sequences , such that asymptotically for each ;
then for the multinomial distribution conditional on the constraints f1(p̂) = ε1(n), ..., fℓ(p̂) = εℓ(n), we have the quantity 2n DKL(p̂ ∥ p) converging in distribution to χ²(k − ℓ) at the n → ∞ limit.
In the case that all are equal, the Theorem reduces to the concentration of entropies around the Maximum Entropy.
Related distributions
In some fields such as natural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when a categorical distribution is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-k" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range ; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial.
When k = 2, the multinomial distribution is the binomial distribution.
Categorical distribution, the distribution of each trial; for k = 2, this is the Bernoulli distribution.
The Dirichlet distribution is the conjugate prior of the multinomial in Bayesian statistics.
Dirichlet-multinomial distribution.
Beta-binomial distribution.
Negative multinomial distribution
Hardy–Weinberg principle (a trinomial distribution with probabilities (θ², 2θ(1 − θ), (1 − θ)²))
Statistical inference
Equivalence tests for multinomial distributions
The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions.
Let p denote a theoretical multinomial distribution and let q be the true underlying distribution. The distributions p and q are considered equivalent if d(p, q) < ε for a distance d and a tolerance parameter ε > 0. The equivalence test problem is H0: d(p, q) ≥ ε versus H1: d(p, q) < ε. The true underlying distribution q is unknown. Instead, the counting frequencies qn are observed, where n is a sample size. An equivalence test uses qn to reject H0. If H0 can be rejected then the equivalence between p and q is shown at a given significance level. The equivalence test for Euclidean distance can be found in the textbook of Wellek (2010). The equivalence test for the total variation distance is developed in Ostrovski (2017). The exact equivalence test for the specific cumulative distance is proposed in Frey (2009).
The distance between the true underlying distribution q and a family M of the multinomial distributions is defined by d(q, M) = min { d(q, p) : p ∈ M }. Then the equivalence test problem is given by H0: d(q, M) ≥ ε and H1: d(q, M) < ε. The distance d(q, M) is usually computed using numerical optimization. The tests for this case were developed recently in Ostrovski (2018).
Confidence intervals for the difference of two proportions
In the setting of a multinomial distribution, constructing confidence intervals for the difference between the proportions of observations from two events, pA − pB, requires the incorporation of the negative covariance between the sample estimators p̂A = XA/n and p̂B = XB/n.
Some of the literature on the subject focused on the use-case of matched-pairs binary data, which requires careful attention when translating the formulas to the general case of for any multinomial distribution. Formulas in the current section will be generalized, while formulas in the next section will focus on the matched-pairs binary data use-case.
Wald's standard error (SE) of the difference of proportion can be estimated using:

SE = √((p̂A + p̂B − (p̂A − p̂B)²) / n)

For a 100(1 − α)% approximate confidence interval, the margin of error may incorporate the appropriate quantile from the standard normal distribution, as follows:

(p̂A − p̂B) ± z(α/2) · SE
As the sample size () increases, the sample proportions will approximately follow a multivariate normal distribution, thanks to the multidimensional central limit theorem (and it could also be shown using the Cramér–Wold theorem). Therefore, their difference will also be approximately normal. Also, these estimators are weakly consistent and plugging them into the SE estimator makes it also weakly consistent. Hence, thanks to Slutsky's theorem, the pivotal quantity approximately follows the standard normal distribution. And from that, the above approximate confidence interval is directly derived.
The SE can be constructed using the calculus of the variance of the difference of two random variables:

Var(p̂A − p̂B) = Var(p̂A) + Var(p̂B) − 2 Cov(p̂A, p̂B) = (pA(1 − pA) + pB(1 − pB) + 2 pA pB) / n = (pA + pB − (pA − pB)²) / n
A modification which includes a continuity correction adds 1/n to the margin of error as follows:

(p̂A − p̂B) ± (z(α/2) · SE + 1/n)
Another alternative is to rely on a Bayesian estimator using the Jeffreys prior, which leads to using a Dirichlet distribution with all parameters equal to 0.5 as a prior. The posterior will be the calculations from above, but after adding 1/2 to each of the k elements, leading to an overall increase of the sample size by k/2. This was originally developed for a multinomial distribution with four events, and is known as wald+2, for analyzing matched pairs data (see the next section for more details).
This leads to the following SE, with the adjusted estimators p̃A = (XA + 1/2)/(n + k/2) and p̃B = (XB + 1/2)/(n + k/2):

SE = √((p̃A + p̃B − (p̃A − p̃B)²) / (n + k/2))

Which can just be plugged into the original Wald formula as follows:

(p̃A − p̃B) ± z(α/2) · SE
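A minimal sketch of the plain Wald interval for p_i − p_j from a single multinomial sample; the function name and arguments are illustrative choices, not an established API:

import numpy as np
from scipy.stats import norm

def wald_ci_diff(counts, i, j, alpha=0.05):
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    pa, pb = counts[i] / n, counts[j] / n
    d = pa - pb
    se = np.sqrt((pa + pb - d**2) / n)   # the negative covariance is built into this form
    z = norm.ppf(1 - alpha / 2)
    return d - z * se, d + z * se

print(wald_ci_diff([25, 35, 40], 0, 1))   # approx. (-0.251, 0.051)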
Occurrence and applications
Confidence intervals for the difference in matched-pairs binary data (using multinomial with k=4)
For the case of matched-pairs binary data, a common task is to build the confidence interval of the difference of the proportion of the matched events. For example, we might have a test for some disease, and we may want to check the results of it for some population at two points in time (1 and 2), to check if there was a change in the proportion of the positives for the disease during that time.
Such scenarios can be represented using a two-by-two contingency table with the number of elements that had each of the combinations of events. We can use small f for sampling frequencies (f11, f10, f01, f00) and capital F for population frequencies (F11, F10, F01, F00), where the first index gives the outcome at time 1 and the second the outcome at time 2. These four combinations could be modeled as coming from a multinomial distribution (with four potential outcomes). The sizes of the sample and population can be n and N respectively. And in such a case, there is an interest in building a confidence interval for the difference of proportions from the marginals of the following (sampled) contingency table:

                  Time 2 positive   Time 2 negative
Time 1 positive        f11               f10
Time 1 negative        f01               f00

In this case, checking the difference in marginal proportions means we are interested in using the following definitions: p1+ = (F11 + F10)/N (positive at time 1) and p+1 = (F11 + F01)/N (positive at time 2).

And the difference we want to build confidence intervals for is:

p1+ − p+1 = (F11 + F10)/N − (F11 + F01)/N = (F10 − F01)/N

Hence, a confidence interval for the difference of the marginal positive proportions (p1+ − p+1) is the same as a confidence interval for the difference of the proportions from the secondary diagonal of the two-by-two contingency table (estimated from the sample as (f10 − f01)/n).
Calculating a p-value for such a difference is known as McNemar's test. A confidence interval around it can be constructed using the methods described above for confidence intervals for the difference of two proportions.
The Wald confidence intervals from the previous section can be applied to this setting, and appear in the literature using alternative notations. Specifically, the SE often presented is based on the contingency table frequencies instead of the sample proportions. For example, the Wald confidence intervals, provided above, can be written as:

(f10 − f01)/n ± z(α/2) · (1/n)√(f10 + f01 − (f10 − f01)²/n)
Further research in the literature has identified several shortcomings in both the Wald and the Wald with continuity correction methods, and other methods have been proposed for practical application.
One such modification includes Agresti and Min's Wald+2 (similar to some of their other works) in which each cell frequency had an extra 1/2 added to it. This leads to the Wald+2 confidence intervals. In a Bayesian interpretation, this is like building the estimators taking as prior a Dirichlet distribution with all parameters being equal to 0.5 (which is, in fact, the Jeffreys prior). The +2 in the name wald+2 can now be taken to mean that in the context of a two-by-two contingency table, which is a multinomial distribution with four possible events, then since we add 1/2 an observation to each of them, then this translates to an overall addition of 2 observations (due to the prior).
This leads to the following modified SE for the case of matched pairs data:

SE = (1/(n + 2)) √(f10 + f01 + 1 − (f10 − f01)²/(n + 2))

Which can just be plugged into the original Wald formula as follows:

(f10 − f01)/(n + 2) ± z(α/2) · SE
Other modifications include Bonett and Price’s Adjusted Wald, and Newcombe’s Score.
Computational methods
Random variate generation
First, reorder the parameters such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable X from a uniform (0, 1) distribution. The resulting outcome is the component

j = min { j′ ∈ {1, ..., k} : p1 + ... + pj′ ≥ X }.

{Xj = 1, Xk = 0 for k ≠ j} is one observation from the multinomial distribution with p1, ..., pk and n = 1. A sum of independent repetitions of this experiment is an observation from a multinomial distribution with n equal to the number of such repetitions.
Sampling using repeated conditional binomial samples
Given the parameters and a total for the sample such that , it is possible to sample sequentially for the number in an arbitrary state , by partitioning the state space into and not-, conditioned on any prior samples already taken, repeatedly.
Algorithm: Sequential conditional binomial sampling
S = n
rho = 1
for i in [1, k-1]:
    if rho != 0:
        X[i] ~ Binom(S, p[i] / rho)
    else:
        X[i] = 0
    S = S - X[i]
    rho = rho - p[i]
X[k] = S

Heuristically, each application of the binomial sample reduces the available number to sample from, and the conditional probabilities are likewise updated to ensure logical consistency.
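A runnable version of the pseudocode above (a sketch: the function name and the use of NumPy's binomial generator are choices made here; the min() guards against floating-point drift pushing p[i]/rho slightly above 1):

import numpy as np

def multinomial_via_binomials(n, p, rng=None):
    rng = rng or np.random.default_rng()
    x = np.zeros(len(p), dtype=int)
    s, rho = n, 1.0
    for i in range(len(p) - 1):
        x[i] = rng.binomial(s, min(1.0, p[i] / rho)) if rho > 0 else 0
        s -= x[i]       # fewer trials remain for the later states
        rho -= p[i]     # renormalize the remaining probability mass
    x[-1] = s           # whatever is left lands in the last state
    return x

print(multinomial_via_binomials(10, [0.2, 0.3, 0.5]))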
Software implementations
The MultinomialCI R package allows the computation of simultaneous confidence intervals for the probabilities of a multinomial distribution given a set of observations.
See also
Additive smoothing
References
Further reading
Discrete distributions
Multivariate discrete distributions
Factorial and binomial topics
Exponential family distributions | Multinomial distribution | [
"Mathematics"
] | 4,034 | [
"Factorial and binomial topics",
"Combinatorics"
] |
1,045,573 | https://en.wikipedia.org/wiki/133%20%28number%29 | 133 (one hundred [and] thirty-three) is the natural number following 132 and preceding 134.
In mathematics
133 is a number n whose sum of proper divisors (1 + 7 + 19 = 27) divides the Euler totient φ(n) = 108. It is an octagonal number and a happy number.
133 is a Harshad number, because it is divisible by the sum of its digits.
133 is a repdigit in base 11 (111) and base 18 (77), whilst in base 20 it is a cyclic number formed from the reciprocal of the number three.
133 is a semiprime: a product of two prime numbers, namely 7 and 19. Since those prime factors are Gaussian primes, this means that 133 is a Blum integer.
133 is the number of compositions of 13 into distinct parts.
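Each of these properties can be verified mechanically; a minimal sketch:

import math

n = 133
proper_divisors = [d for d in range(1, n) if n % d == 0]        # [1, 7, 19]
totient = sum(1 for m in range(1, n) if math.gcd(m, n) == 1)    # phi(133) = 108
print(totient % sum(proper_divisors) == 0)                      # True: 27 divides 108
print(n % sum(int(c) for c in str(n)) == 0)                     # Harshad: digit sum 7 divides 133

m, seen = n, set()
while m != 1 and m not in seen:                                 # happy-number iteration
    seen.add(m)
    m = sum(int(c) ** 2 for c in str(m))
print(m == 1)                                                   # True: 133 is happy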
References
Integers | 133 (number) | [
"Mathematics"
] | 167 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
1,045,761 | https://en.wikipedia.org/wiki/Minnesota%20nice | Minnesota nice is a cultural stereotype applied to the behavior of people from Minnesota, implying residents are unusually courteous, reserved, and mild-mannered compared to people from other states. The phrase also implies polite friendliness, an aversion to open confrontation, a tendency toward understatement, a disinclination to make a direct fuss or stand out, apparent emotional restraint, and self-deprecation. It is sometimes associated with passive-aggression.
Social norms
Playwright and corporate communications consultant Syl Jones suggested that Minnesota nice is not so much about being "nice" but is more about keeping up appearances, maintaining the social order, and keeping people (including non-natives of the state) in their place. He relates these social norms to the literary work of Danish-Norwegian novelist Aksel Sandemose, the fictional Law of Jante, and more generally, Scandinavian culture. Garrison Keillor's A Prairie Home Companion discusses "Wobegonics", the supposed language of Minnesotans, which includes "no confrontational verbs or statements of strong personal preference".
Examples
The generosity of state citizens has been commented on; the heavily reported influenza vaccine shortage of late 2004 did not strike the state as hard as elsewhere since many people willingly gave up injections for others. The concept has also received some support from the academic community; a national study by Peter Rentfrow, Samuel D. Gosling, and Jeff Potter done in 2008 found that Minnesota was the second most agreeable and fifth most extraverted state in the nation, traits associated with "nice".
The tradition of social progressivism in Minnesota politics has been linked to the Minnesota Nice culture. Tim Walz, Governor of Minnesota and the Democratic vice presidential nominee in the 2024 United States presidential election, has been described this way, along with other Minnesotan politicians such as Hubert Humphrey, Walter Mondale, Paul Wellstone, and Amy Klobuchar.
Since the 1960s and 1970s and continuing into the present, Minnesota has been a leading state in refugee resettlement, which can be linked to the traditions of progressivism and generosity associated with Minnesota nice. Various groups, especially Hmong from Laos and Somalis, as well as large numbers of Vietnamese, Burmese, Ethiopians, Laotians, Tibetans, and Liberians, have found homes in the state, particularly in the Twin Cities. Since 2002, Minnesota has harbored the largest population of Somalis in North America.
Minnesota nice was an influence on the Coen brothers movie Fargo, set in both Minnesota and neighboring North Dakota. A 2003 documentary about the making of the movie was entitled Minnesota Nice.
Criticism
History professor Annette Atkins suggests that the concept is a marketing myth, emerging from the work of Howard Mohr and Garrison Keillor in the 1980s. These authors may have created the myth in order to make Minnesota distinctive from neighboring states like Iowa.
Journalist and Minnesota native Michele Norris argued the phrase had acquired "undertones of irony and despair" following the 2020 murder of George Floyd in Minneapolis.
See also
Agreeableness
Beverly Hills 90210 (TV show with Minnesota Nice tropes)
Fargo (Movie with Minnesota Nice tropes)
Iowa nice
Seattle Freeze
Southern hospitality
References
Nice
Personality traits
Pleasure
Virtue
Kindness
Stereotypes | Minnesota nice | [
"Biology"
] | 658 | [
"Behavior",
"Human behavior",
"Kindness"
] |
1,045,839 | https://en.wikipedia.org/wiki/Continuous-rod%20warhead | A continuous-rod warhead is a specialized munition exhibiting an annular blast fragmentation pattern, thus when exploding it spreads into a large circle cutting through the target. It is used in anti-aircraft and anti-missile missiles.
Early anti-aircraft munitions
Rifle and machine-gun bullets were used against early military aircraft during World War I. Artillery was used when aircraft flew above the range of rifle and machine-gun cartridges. Since the probability of actually striking the aircraft was small, artillery shells were designed to explode at the approximate altitude of the aircraft to throw a shower of fragments in the vicinity of the explosion. Similar anti-aircraft weaponry with larger calibers, higher rates of fire, and improved fuzes continued to be used through World War II. These bullets and small fragments often made small holes in the airframe. Unless a bullet or fragment struck the pilot, or some critical part of the airframe, (like a fuel line, part of the engine coolant system, a critical wire or hydraulic line actuating control surfaces), the aircraft remained operational. Some anti-aircraft artillery projectiles were designed to fragment into long, thin pieces in an attempt to inflict damage on the airframe. Holes made by such fragments were more likely to cause destructive disruption of airflow around high-speed aircraft, but the hit probability was lowered for the smaller number of fragments from a warhead of equal size.
The problem became more significant as anti-aircraft missiles were developed to replace guns after World War II: A smaller number of missiles would require an improved warhead to match the aircraft destruction probability of the larger number of artillery projectiles potentially carried by a weapon of the same size and cost.
The concept of a folded continuous rod warhead was suggested in 1952. The Applied Physics Laboratory of Johns Hopkins University invented the Continuous Expanding Rod Warhead as part of its Navy-contracted development of the U. S. Navy's anti-air missile defense program. The details of the warhead were Top Secret until its use was no longer needed.
Construction
An even number of individual steel rods are arranged in parallel to form a cylinder. The ends of the rods are welded together—the first rod and the second rod are welded together at the top, the second and third at the bottom, and so on all the way around the form.
Operation
When detonated, the high explosive imparts momentum to the rods, thrusting them outward in an expanding circle. The pressure wave from the explosive needs to act evenly on the rods over their length, so some sort of tamper is used to shape the shock wave similar to an explosive lens. The rods are sufficiently soft (ductile) to allow the expansion without breaking the rods or the welded joints, and the detonation velocity is limited to under 1,150 m/s, allowing the rods to bend at these locations instead. At some intermediate point the ring will have a zig-zag (alternating direction) appearance within a cylindrical envelope. Upon ultimate expansion the ring is circular and contained within a plane. The ring will then break and ultimately tend to form one or more straight rods. Since the net momentum of the rod relative to the missile is roughly zero, its effectiveness will rapidly diminish as the broken ring expands.
This rapidly expanding ring, when hitting the aircraft, can be more effective than an equivalent fragmentation warhead: the ring of rods provides a larger surface area than a fragmented ballistic. Portions of the aircraft intercepted by the expanding ring of the continuous rod warhead will receive a continuous cut through the skin, flight or aerodynamic surfaces, underlying cables, hydraulic lines, and other flight or mission-critical structures. This may cause a structural failure, or, if not, can be sufficient for defeating the redundancy of aircraft systems. The effect is only pronounced as long as the ring is unbroken, so multiple layers of rods are employed in practical weapons to increase the effective radius.
When the ill-fated Mauler surface-to-air missile was being designed, Monte Carlo simulations on the then-state-of-the-art IBM 650 indicated that a continuous-rod warhead was likely to be less effective than blast fragmentation types. Subsequent implementations indicated the opposite.
References
Rocketry
Anti-aircraft weapons
Ammunition
"Engineering"
] | 864 | [
"Rocketry",
"Aerospace engineering"
] |
1,045,999 | https://en.wikipedia.org/wiki/1%2C000%2C000 | 1,000,000 (one million), or one thousand thousand, is the natural number following 999,999 and preceding 1,000,001. The word is derived from the early Italian millione (milione in modern Italian), from mille, "thousand", plus the augmentative suffix -one.
It is commonly abbreviated:
in British English as m (not to be confused with the metric prefix "m", milli, for 10^-3, or with metre),
M,
MM ("thousand thousands", from Latin "Mille"; not to be confused with the Roman numeral = 2,000),
mm (not to be confused with millimetre), or
mn, mln, or mio can be found in financial contexts.
In scientific notation, it is written as 1×10^6 or 10^6. Physical quantities can also be expressed using the SI prefix mega (M), when dealing with SI units; for example, 1 megawatt (1 MW) equals 1,000,000 watts.
The meaning of the word "million" is common to the short scale and long scale numbering systems, unlike the larger numbers, which have different names in the two systems.
The million is sometimes used in the English language as a metaphor for a very large number, as in "Not in a million years" and "You're one in a million", or a hyperbole, as in "I've walked a million miles" and "You've asked a million-dollar question".
1,000,000 is also the square of 1000 and also the cube of 100.
Visualizing one million
Even though it is often stressed that counting to precisely a million would be an exceedingly tedious task due to the time and concentration required, there are many ways to bring the number "down to size" in approximate quantities, ignoring irregularities or packing effects.
Information: Not counting spaces, the text printed on 136 pages of an Encyclopædia Britannica, or 600 pages of pulp paperback fiction contains approximately one million characters.
Length: There are one million millimetres in a kilometre, and roughly a million sixteenths of an inch in a mile (a mile is 63,360 inches, i.e. 1,013,760 sixteenths of an inch). A typical car tire might rotate a million times in a trip, while the engine would do several times that number of revolutions.
Fingers: If the width of a human finger is 22 mm, then a million fingers lined up would cover a distance of 22 km. If a person walks at a speed of 4 km/h, it would take them approximately five and a half hours to reach the end of the fingers.
Area: A square a thousand objects or units on a side contains a million such objects or square units, so a million holes might be found in less than three square yards of window screen, or similarly, in about one half square foot (400–500 cm²) of bed sheet cloth. A city lot 70 by 100 feet is about a million square inches.
Volume: The cube root of one million is one hundred, so a million objects or cubic units is contained in a cube a hundred objects or linear units on a side. A million grains of table salt or granulated sugar occupies the volume of a cube one hundred grains on a side. One million cubic inches would be the volume of a small room 8 1/3 feet long by 8 1/3 feet wide by 8 1/3 feet high.
Mass: A million cubic millimetres (small droplets) of water would have a volume of one litre and a mass of one kilogram. A million millilitres or cubic centimetres (one cubic metre) of water has a mass of a million grams or one tonne.
Weight: A million honey bees, at roughly 0.1 gram each, would weigh the same as a 100 kg (220 lb) person.
Landscape: A pyramidal hill 600 feet (180 m) wide at the base and 100 feet (30 m) high would weigh about a million short tons.
Computer: A display resolution of 1,280 by 800 pixels contains 1,024,000 pixels.
Money: A U.S. dollar bill of any denomination weighs approximately 1 gram. There are 454 grams in a pound. One million dollar bills would weigh about 2,200 pounds (1,000 kg), or 1 tonne (just over 1 short ton).
Time: A million seconds, 1 megasecond, is 11.57 days.
In Indian English and Pakistani English, it is also expressed as 10 lakh. Lakh is derived from lakṣa, the word for 100,000 in Sanskrit.
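Several of the equivalences above reduce to one-line computations; a quick sketch:

print(10**6 / 86_400)     # a million seconds: approx. 11.57 days
print(10**6 / 454)        # a million one-gram bills: approx. 2,203 pounds
print(1280 * 800)         # 1,024,000 pixels
print(100**3 == 10**6)    # the cube of 100 is a million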
Selected 7-digit numbers (1,000,001–9,999,999)
1,000,001 to 1,999,999
1,000,003 = Smallest 7-digit prime number
1,000,405 = Smallest triangular number with 7 digits and the 1,414th triangular number
1,002,001 = 1001^2, palindromic square
1,006,301 = First number of the first pair of prime quadruplets occurring thirty apart ({1006301, 1006303, 1006307, 1006309} and {1006331, 1006333, 1006337, 1006339})
1,024,000 = Sometimes, the number of bytes in a megabyte
1,030,301 = 101^3, palindromic cube
1,037,718 = Large Schröder number
1,048,576 = 1024^2 = 32^4 = 16^5 = 4^10 = 2^20, the number of bytes in a mebibyte (previously called a megabyte)
1,048,976 = smallest 7 digit Leyland number
1,058,576 = Leyland number
1,058,841 = 7^6 × 3^2
1,077,871 = the number of prime numbers between 0 and 16,777,216 (2^24)
1,081,080 = 39th highly composite number
1,084,051 = fifth Keith prime
1,089,270 = harmonic divisor number
1,111,111 = repunit
1,112,083 = logarithmic number
1,129,308^32 + 1 is prime
1,136,689 = Pell number, Markov number
1,174,281 = Fine number
1,185,921 = 1089^2 = 33^4
1,200,304 = 1^7 + 2^7 + 3^7 + 4^7 + 5^7 + 6^7 + 7^7
1,203,623 = smallest unprimeable number ending in 3
1,234,321 = 1111^2, palindromic square
1,246,863 = Number of 27-bead necklaces (turning over is allowed) where complements are equivalent
1,256,070 = number of reduced trees with 29 nodes
1,262,180 = number of triangle-free graphs on 12 vertices
1,278,818 = Markov number
1,290,872 = number of 26-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
1,296,000 = number of primitive polynomials of degree 25 over GF(2)
1,299,709 = 100,000th prime number
1,336,336 = 1156^2 = 34^4
1,346,269 = Fibonacci number, Markov number
1,367,631 = 111^3, palindromic cube
1,388,705 = number of prime knots with 16 crossings
1,413,721 = square triangular number
1,419,857 = 17^5
1,421,280 = harmonic divisor number
1,441,440 = 11th colossally abundant number, 11th superior highly composite number, 40th highly composite number
1,441,889 = Markov number
1,500,625 = 1225^2 = 35^4
1,539,720 = harmonic divisor number
1,563,372 = Wedderburn-Etherington number
1,594,323 = 3^13
1,596,520 = Leyland number
1,606,137 = number of ways to partition {1,2,3,4,5,6,7,8,9} and then partition each cell (block) into subcells.
1,607,521/1,136,689 ≈ √2
1,647,086 = Leyland number
1,671,800 = Initial number of first century xx00 to xx99 consisting entirely of composite numbers
1,679,616 = 1296^2 = 36^4 = 6^8
1,686,049 = Markov prime
1,687,989 = number of square (0,1)-matrices without zero rows and with exactly 7 entries equal to 1
1,719,900 = number of primitive polynomials of degree 26 over GF(2)
1,730,787 = Riordan number
1,741,725 = equal to the sum of the seventh powers of its digits
1,771,561 = 1331^2 = 121^3 = 11^6, also, Commander Spock's estimate for the tribble population in the Star Trek episode "The Trouble with Tribbles"
1,864,637 = k such that the sum of the squares of the first k primes is divisible by k.
1,874,161 = 1369^2 = 37^4
1,889,568 = 18^5
1,928,934 = 2 × 3^9 × 7^2
1,941,760 = Leyland number
1,953,125 = 125^3 = 5^9
1,978,405 = 1^6 + 2^6 + 3^6 + 4^6 + 5^6 + 6^6 + 7^6 + 8^6 + 9^6 + 10^6
2,000,000 to 2,999,999
2,000,002 = number of surface-points of a tetrahedron with edge-length 1000
2,000,376 = 126^3
2,012,174 = Leyland number
2,012,674 = Markov number
2,027,025 = double factorial of 15
2,085,136 = 1444^2 = 38^4
2,097,152 = 128^3 = 8^7 = 2^21
2,097,593 = Leyland prime using 2 & 21 (2^21 + 21^2)
2,118,107 = largest integer such that , where is the prime omega function for distinct prime factors. The corresponding sum for 2118107 is indeed 57.
2,124,679 = largest known Wolstenholme prime
2,144,505 = number of trees with 21 unlabeled nodes
2,177,399 = smallest pandigital number in base 8.
2,178,309 = Fibonacci number
2,222,222 = repdigit
2,266,502 = number of signed trees with 13 nodes
2,274,205 = the number of different ways of expressing 1,000,000,000 as the sum of two prime numbers
2,313,441 = 1521^2 = 39^4
2,356,779 = Motzkin number
2,405,236 = Number of 28-bead necklaces (turning over is allowed) where complements are equivalent
2,423,525 = Markov number
2,476,099 = 19^5
2,485,534 = number of 27-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
2,515,169 = number of reduced trees with 30 nodes
2,560,000 = 1600^2 = 40^4
2,567,284 = number of partially ordered sets with 10 unlabelled elements
2,646,723 = little Schroeder number
2,674,440 = Catalan number
2,692,537 = Leonardo prime
2,704,900 = initial number of fourth century xx00 to xx99 containing seventeen prime numbers {2,704,901, 2,704,903, 2,704,907, 2,704,909, 2,704,927, 2,704,931, 2,704,937, 2,704,939, 2,704,943, 2,704,957, 2,704,963, 2,704,969, 2,704,979, 2,704,981, 2,704,987, 2,704,993, 2,704,997}
2,744,210 = Pell number
2,796,203 = Wagstaff prime, Jacobsthal prime
2,825,761 = 1681^2 = 41^4
2,890,625 = 1-automorphic number
2,922,509 = Markov prime
2,985,984 = 1728^2 = 144^3 = 12^6 = 1,000,000 in base 12, AKA a great-great-gross
3,000,000 to 3,999,999
3,111,696 = 1764^2 = 42^4
3,200,000 = 20^5
3,263,442 = product of the first five terms of Sylvester's sequence
3,263,443 = sixth term of Sylvester's sequence
3,276,509 = Markov prime
3,294,172 = 2^2 × 7^7
3,301,819 = alternating factorial
3,333,333 = repdigit
3,360,633 = palindromic in 3 consecutive bases: 6281826 in base 9 = 3360633 in base 10 = 1995991 in base 11
3,418,801 = 1849^2 = 43^4
3,426,576 = number of free 15-ominoes
3,524,578 = Fibonacci number, Markov number
3,554,688 = 2-automorphic number
3,626,149 = Wedderburn–Etherington prime
3,628,800 = 10!
3,748,096 = 1936^2 = 44^4
3,880,899/2,744,210 ≈ √2
4,000,000 to 4,999,999
4,008,004 = 2002^2, palindromic square
4,037,913 = sum of the first ten factorials
4,084,101 = 21^5
4,100,625 = 2025^2 = 45^4
4,194,304 = 2048^2 = 4^11 = 2^22
4,194,788 = Leyland number
4,202,496 = number of primitive polynomials of degree 27 over GF(2)
4,208,945 = Leyland number
4,210,818 = equal to the sum of the seventh powers of its digits
4,213,597 = Bell number
4,260,282 = Fine number
4,297,512 = 12th derivative of x^x at x = 1
4,324,320 = 12th colossally abundant number, 12th superior highly composite number, pronic number
4,400,489 = Markov number
4,444,444 = repdigit
4,477,456 = 2116^2 = 46^4
4,636,390 = Number of 29-bead necklaces (turning over is allowed) where complements are equivalent
4,741,632 = number of primitive polynomials of degree 28 over GF(2)
4,782,969 = 2187^2 = 9^7 = 3^14
4,782,974 = n such that n | (3^n + 5)
4,785,713 = Leyland number
4,794,088 = number of 28-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
4,805,595 = Riordan number
4,826,809 = 2197^2 = 169^3 = 13^6
4,879,681 = 2209^2 = 47^4
4,913,000 = 170^3
4,937,284 = 2222^2
5,000,000 to 5,999,999
5,049,816 = number of reduced trees with 31 nodes
5,096,876 = number of prime numbers having eight digits
5,134,240 = the largest number that cannot be expressed as the sum of distinct fourth powers
5,153,632 = 22^5
5,221,225 = 2285^2, palindromic square
5,293,446 = Large Schröder number
5,308,416 = 2304^2 = 48^4
5,496,925 = first cyclic number in base 6
5,555,555 = repdigit
5,623,756 = number of trees with 22 unlabeled nodes
5,702,887 = Fibonacci number
5,761,455 = The number of primes under 100,000,000
5,764,801 = 2401^2 = 49^4 = 7^8
5,882,353 = 588^2 + 2353^2
6,000,000 to 6,999,999
6,250,000 = 2500^2 = 50^4
6,436,343 = 23^5
6,536,382 = Motzkin number
6,625,109 = Pell number, Markov number
6,666,666 = repdigit
6,765,201 = 2601^2 = 51^4
6,948,496 = 2636^2, palindromic square
7,000,000 to 7,999,999
7,109,376 = 1-automorphic number
7,311,616 = 2704^2 = 52^4
7,453,378 = Markov number
7,529,536 = 2744^2 = 196^3 = 14^6
7,652,413 = Largest n-digit pandigital prime
7,777,777 = repdigit
7,779,311 = A hit song written by Prince and released in 1982 by The Time
7,861,953 = Leyland number
7,890,481 = 2809^2 = 53^4
7,906,276 = pentagonal triangular number
7,913,837 = Keith number
7,962,624 = 24^5
8,000,000 to 8,999,999
8,000,000 = Used to represent infinity in Japanese mythology
8,053,393 = number of prime knots with 17 crossings
8,108,731 = repunit prime in base 14
8,388,607 = second composite Mersenne number with a prime exponent
8,388,608 = 2^23
8,389,137 = Leyland number
8,399,329 = Markov number
8,436,379 = Wedderburn-Etherington number
8,503,056 = 2916^2 = 54^4
8,675,309 = A hit song for Tommy Tutone (also a twin prime with 8,675,311)
8,675,311 = Twin prime with 8,675,309
8,877,691 = number of nonnegative integers with distinct decimal digits
8,888,888 = repdigit
8,946,176 = self-descriptive number in base 8
8,964,800 = Number of 30-bead necklaces (turning over is allowed) where complements are equivalent
9,000,000 to 9,999,999
9,000,000 = 3000^2
9,150,625 = 3025^2 = 55^4
9,227,465 = Fibonacci number, Markov number
9,256,396 = number of 29-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
9,261,000 = 210^3
9,369,319 = Newman–Shanks–Williams prime
9,647,009 = Markov number
9,653,449 = square Stella octangula number
9,581,014 = n such that n | (3^n + 5)
9,663,500 = Initial number of first century xx00 to xx99 that possesses an identical prime pattern to any century with four or fewer digits: its prime pattern of {9663503, 9663523, 9663527, 9663539, 9663553, 9663581, 9663587} is identical to {5903, 5923, 5927, 5939, 5953, 5981, 5987}
9,694,845 = Catalan number
9,699,690 = eighth primorial
9,765,625 = 3125^2 = 25^5 = 5^10
9,800,817 = equal to the sum of the seventh powers of its digits
9,834,496 = 3136^2 = 56^4
9,865,625 = Leyland number
9,926,315 = equal to the sum of the seventh powers of its digits
9,938,375 = 215^3, the largest 7-digit cube
9,997,156 = largest triangular number with 7 digits and the 4,471st triangular number
9,998,244 = 3162^2, the largest 7-digit square
9,999,991 = Largest 7-digit prime number
9,999,999 = repdigit
Prime numbers
There are 78,498 primes less than 10^6, where 999,983 is the largest prime number smaller than 1,000,000.
Increments of 10^6 from 1 million through 10 million have the following prime counts:
70,435 primes between 1,000,000 and 2,000,000.
67,883 primes between 2,000,000 and 3,000,000.
66,330 primes between 3,000,000 and 4,000,000.
65,367 primes between 4,000,000 and 5,000,000.
64,336 primes between 5,000,000 and 6,000,000.
63,799 primes between 6,000,000 and 7,000,000.
63,129 primes between 7,000,000 and 8,000,000.
62,712 primes between 8,000,000 and 9,000,000.
62,090 primes between 9,000,000 and 10,000,000.
In total, there are 586,081 prime numbers between 1,000,000 and 10,000,000.
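These counts can be reproduced with SymPy's prime-counting function (a sketch assuming SymPy is installed; primepi can be slow for large arguments):

from sympy import primepi

print(primepi(10**6))                        # 78498
print(primepi(2 * 10**6) - primepi(10**6))   # 70435
print(primepi(10**7) - primepi(10**6))       # 586081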
See also
Huh (god), depictions of whom were also used in hieroglyphs to represent 1,000,000
Megagon
Millionaire
Names of large numbers
Orders of magnitude (numbers) to help compare dimensionless numbers between 1,000,000 and 10,000,000 (106 and 107)
Notes
References
Integers
Large numbers
Powers of ten | 1,000,000 | [
"Mathematics"
] | 4,749 | [
"Mathematical objects",
"Elementary mathematics",
"Large numbers",
"Integers",
"Numbers"
] |
1,046,024 | https://en.wikipedia.org/wiki/Formal%20equivalence%20checking | Formal equivalence checking process is a part of electronic design automation (EDA), commonly used during the development of digital integrated circuits, to formally prove that two representations of a circuit design exhibit exactly the same behavior.
Equivalence checking and levels of abstraction
In general, there is a wide range of possible definitions of functional equivalence covering comparisons between different levels of abstraction and varying granularity of timing details.
The most common approach is to consider the problem of machine equivalence which defines two synchronous design specifications functionally equivalent if, clock by clock, they produce exactly the same sequence of output signals for any valid sequence of input signals.
Microprocessor designers use equivalence checking to compare the functions specified for the instruction set architecture (ISA) with a register transfer level (RTL) implementation, ensuring that any program executed on both models will cause an identical update of the main memory content. This is a more general problem.
A system design flow requires comparison between a transaction level model (TLM), e.g., written in SystemC and its corresponding RTL specification. Such a check is becoming of increasing interest in a system-on-a-chip (SoC) design environment.
Synchronous machine equivalence
The register transfer level (RTL) behavior of a digital chip is usually described with a hardware description language, such as Verilog or VHDL. This description is the golden reference model that describes in detail which operations will be executed during which clock cycle and by which pieces of hardware. Once the logic designers, by simulations and other verification methods, have verified the register transfer description, the design is usually converted into a netlist by a logic synthesis tool. Equivalence is not to be confused with functional correctness, which must be determined by functional verification.
The initial netlist will usually undergo a number of transformations such as optimization, addition of Design For Test (DFT) structures, etc., before it is used as the basis for the placement of the logic elements into a physical layout. Contemporary physical design software will occasionally also make significant modifications (such as replacing logic elements with equivalent similar elements that have a higher or lower drive strength and/or area) to the netlist. Throughout every step of a very complex, multi-step procedure, the original functionality and the behavior described by the original code must be maintained. When the final tape-out is made of a digital chip, many different EDA programs and possibly some manual edits will have altered the netlist.
In theory, a logic synthesis tool guarantees that the first netlist is logically equivalent to the RTL source code. All the programs later in the process that make changes to the netlist also, in theory, ensure that these changes are logically equivalent to a previous version.
In practice, programs have bugs and it would be a major risk to assume that all steps from RTL through the final tape-out netlist have been performed without error. Also, in real life, it is common for designers to make manual changes to a netlist, commonly known as Engineering Change Orders, or ECOs, thereby introducing a major additional error factor. Therefore, instead of blindly assuming that no mistakes were made, a verification step is needed to check the logical equivalence of the final version of the netlist to the original description of the design (golden reference model).
Historically, one way to check the equivalence was to re-simulate, using the final netlist, the test cases that were developed for verifying the correctness of the RTL. This process is called gate level logic simulation. However, the problem with this is that the quality of the check is only as good as the quality of the test cases. Also, gate-level simulations are notoriously slow to execute, which is a major problem as the size of digital designs continues to grow exponentially.
An alternative way to solve this is to formally prove that the RTL code and the netlist synthesized from it have exactly the same behavior in all (relevant) cases. This process is called formal equivalence checking and is a problem that is studied under the broader area of formal verification.
A formal equivalence check can be performed between any two representations of a design: RTL <> netlist, netlist <> netlist or RTL <> RTL, though the latter is rare compared to the first two. Typically, a formal equivalence checking tool will also indicate with great precision at which point there exists a difference between two representations.
Methods
There are two basic technologies used for boolean reasoning in equivalence checking programs:
Binary decision diagrams, or BDDs: A specialized data structure designed to support reasoning about boolean functions. BDDs have become highly popular because of their efficiency and versatility.
Conjunctive Normal Form Satisfiability: SAT solvers return an assignment to the variables of a propositional formula that satisfies it, if such an assignment exists. Almost any boolean reasoning problem can be expressed as a SAT problem.
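To make the combinational case concrete, the sketch below builds a "miter": the two representations are equivalent exactly when the XOR of their outputs is unsatisfiable. Real tools discharge that query with BDDs or SAT; here exhaustive enumeration stands in, and both circuit functions are invented for illustration:

from itertools import product

def rtl_version(a, b, c):
    return (a and b) or (a and c)      # a*b + a*c, as written in the RTL

def synthesized_version(a, b, c):
    return a and (b or c)              # a*(b + c), as factored by synthesis

def equivalent(f, g, n_inputs):
    # Miter check: f XOR g must be false for every input assignment
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n_inputs))

print(equivalent(rtl_version, synthesized_version, 3))   # True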
Commercial applications for equivalence checking
Major products in the Logic Equivalence Checking (LEC) area of EDA are:
FormalPro by Mentor Graphics
Questa SLEC by Mentor Graphics
Conformal by Cadence
Jasper by Cadence
Formality by Synopsys
VC Formal by Synopsys
360 EC by OneSpin Solutions
ATEC by ATEC
Generalizations
Equivalence Checking of Retimed Circuits: Sometimes it is helpful to move logic from one side of a register to another, and this complicates the checking problem.
Sequential Equivalence Checking: Sometimes, two machines are completely different at the combinational level, but should give the same outputs if given the same inputs. The classic example is two identical state machines with different encodings for the states. Since this cannot be reduced to a combinational problem, more general techniques are required.
Equivalence of Software Programs, i.e. checking if two well-defined programs that take N inputs and produce M outputs are equivalent: Conceptually, you can turn software into a state machine (that is what a program running on a computer is, since a computer plus its memory forms a very large state machine). Then, in theory, various forms of property checking can ensure they produce the same output. This problem is even harder than sequential equivalence checking, since the outputs of the two programs may appear at different times; but it is possible, and researchers are working on it.
See also
Formal methods
References
Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, A survey of the field. This article was derived, with permission, from Volume 2, Chapter 4, Equivalence Checking, by Fabio Somenzi and Andreas Kuehlmann.
R.E. Bryant, Graph-based algorithms for Boolean function manipulation, IEEE Transactions on Computers., C-35, pp. 677–691, 1986. The original reference on BDDs.
Sequential equivalence checking for RTL models. Nikhil Sharma, Gagan Hasteer and Venkat Krishnaswamy. EE Times.
External links
CADP – provides equivalence checking tools for asynchronous designs
OneSpin 360 EC-FPGA – Functional correctness of FPGA synthesis from RTL code to final netlist
Electronic circuit verification
Formal methods | Formal equivalence checking | [
"Engineering"
] | 1,461 | [
"Software engineering",
"Formal methods"
] |
1,046,120 | https://en.wikipedia.org/wiki/Canonicalization | In computer science, canonicalization (sometimes standardization or normalization) is a process for converting data that has more than one possible representation into a "standard", "normal", or canonical form. This can be done to compare different representations for equivalence, to count the number of distinct data structures, to improve the efficiency of various algorithms by eliminating repeated calculations, or to make it possible to impose a meaningful sorting order.
Usage cases
Filenames
Files in file systems may in most cases be accessed through multiple filenames. For instance in Unix-like systems, the string "/./" can be replaced by "/". In the C standard library, the function realpath() performs this task. Other operations performed by this function to canonicalize filenames are the handling of /.. components referring to parent directories, simplification of sequences of multiple slashes, removal of trailing slashes, and the resolution of symbolic links.
Canonicalization of filenames is important for computer security. For example, a web server may have a restriction that only files under the cgi directory C:\inetpub\wwwroot\cgi-bin may be executed. This rule is enforced by checking that the path starts with C:\inetpub\wwwroot\cgi-bin\ and only then executing it. While the file C:\inetpub\wwwroot\cgi-bin\..\..\..\Windows\System32\cmd.exe initially appears to be in the cgi directory, it exploits the .. path specifier to traverse back up the directory hierarchy in an attempt to execute a file outside of cgi-bin. Permitting cmd.exe to execute would be an error caused by a failure to canonicalize the filename to the simplest representation, C:\Windows\System32\cmd.exe, and is called a directory traversal vulnerability. With the path canonicalized, it is clear the file should not be executed.
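A sketch of the canonicalize-then-check idiom in Python, using a POSIX-style analogue of the Windows example above (os.path.realpath plays the role of the C realpath() already mentioned; the directory names are illustrative):

import os.path

ALLOWED = "/var/www/cgi-bin"          # stands in for C:\inetpub\wwwroot\cgi-bin

def safe_to_execute(requested):
    canonical = os.path.realpath(requested)   # resolves '..', '.', '//' and symlinks
    return os.path.commonpath([canonical, ALLOWED]) == ALLOWED

print(safe_to_execute("/var/www/cgi-bin/../../../usr/bin/sh"))   # False
print(safe_to_execute("/var/www/cgi-bin/search.cgi"))            # True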
Unicode
In Unicode, many accented letters can be represented in more than one way. For example, é can be represented in Unicode as the Unicode character U+0065 (LATIN SMALL LETTER E) followed by the character U+0301 (COMBINING ACUTE ACCENT), but it can also be represented as the precomposed character U+00E9 (LATIN SMALL LETTER E WITH ACUTE). This makes string comparison more complicated, since every possible representation of a string containing such glyphs must be considered. To deal with this, Unicode provides the mechanism of canonical equivalence. In this context, canonicalization is Unicode normalization.
Variable-width encodings in the Unicode standard, in particular UTF-8, may cause an additional need for canonicalization in some situations. Namely, by the standard, in UTF-8 there is only one valid byte sequence for any Unicode character, but some byte sequences are invalid, i.e., they cannot be obtained by encoding any string of Unicode characters into UTF-8. Some sloppy decoder implementations may accept invalid byte sequences as input and produce a valid Unicode character as output for such a sequence. If one uses such a decoder, some Unicode characters effectively have more than one corresponding byte sequence: the valid one and some invalid ones. This could lead to security issues similar to the one described in the previous section. Therefore, if one wants to apply some filter (e.g., a regular expression written in UTF-8) to UTF-8 strings that will later be passed to a decoder that allows invalid byte sequences, one should canonicalize the strings before passing them to the filter. In this context, canonicalization is the process of translating every string character to its single valid byte sequence. An alternative to canonicalization is to reject any strings containing invalid byte sequences.
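The precomposed/decomposed é example above is two lines of standard-library Python:

import unicodedata

decomposed = "e\u0301"     # U+0065 followed by U+0301
precomposed = "\u00e9"     # U+00E9, the precomposed character

print(decomposed == precomposed)                                  # False
print(unicodedata.normalize("NFC", decomposed) == precomposed)    # True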
URL
A canonical URL is a URL for defining the single source of truth for duplicate content.
Use by Google
A canonical URL is the URL of the page that Google thinks is most representative from a set of duplicate pages on your site. For example, if you have URLs for the same page, such as https://example.com/?dress=1234 and https://example.com/dresses/1234, Google chooses one as canonical. Note that the pages do not need to be absolutely identical; minor changes in sorting or filtering of list pages do not make the page unique (for example, sorting by price or filtering by item color).
The canonical can be in a different domain than a duplicate.
Internet
With the help of canonical URLs, a search engine knows which link should be provided in a query result.
A canonical link element can be used to define a canonical URL.
Intranet
In intranets, manual searching for information is predominant. In this case, canonical URLs can be defined in a non-machine-readable form, too; for example, in a guideline.
Misc
Canonical URLs are usually the URLs that are used for the share action.
Since the canonical URL is what appears in search engine results, it is in most cases a landing page.
Search engines and SEO
In web search and search engine optimization (SEO), URL canonicalization deals with web content that has more than one possible URL. Having multiple URLs for the same web content can cause problems for search engines - specifically in determining which URL should be shown in search results. Most search engines support the canonical link element as a hint to which URL should be treated as the true version. As indicated by John Mueller of Google, having other directives in a page, like the robots noindex element, can give search engines conflicting signals about how to handle canonicalization.
Example:
http://wikipedia.com
http://www.wikipedia.com
http://www.wikipedia.com/
http://www.wikipedia.com/?source=asdf
All of these URLs point to the homepage of Wikipedia, but a search engine will only consider one of them to be the canonical form of the URL.
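A sketch of one possible normalization policy for the example URLs (the specific rules here, lowercasing the host, adding "www.", and dropping the query string and trailing slash, are invented for illustration; real search engines apply far more elaborate heuristics):

from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if not host.startswith("www."):
        host = "www." + host
    path = parts.path.rstrip("/")
    return urlunsplit((parts.scheme.lower(), host, path, "", ""))

urls = ["http://wikipedia.com", "http://www.wikipedia.com/",
        "http://www.wikipedia.com/?source=asdf"]
print({canonicalize(u) for u in urls})   # a single canonical form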
XML
A Canonical XML document is by definition an XML document that is in XML Canonical form, defined by The Canonical XML specification. Briefly, canonicalization removes whitespace within tags, uses particular character encodings, sorts namespace references and eliminates redundant ones, removes XML and DOCTYPE declarations, and transforms relative URIs into absolute URIs.
A simple example would be the following two snippets of XML:
<node1 x='1' a='1'>Data</node1 >
<node1 a="1" x="1">Data</node1>
The first example contains extra spaces in the closing tag of the first node. The second example, which has been canonicalized, has had these spaces removed, its attributes sorted, and its attribute values delimited with double quotes. Note that only the spaces within the tags are removed under W3C canonicalization, not those between tags.
A full summary of canonicalization changes is listed below:
The document is encoded in UTF-8
Line breaks normalized to #xA on input, before parsing
Attribute values are normalized, as if by a validating processor
Character and parsed entity references are replaced
CDATA sections are replaced with their character content
The XML declaration and document type declaration are removed
Empty elements are converted to start-end tag pairs
Whitespace outside of the document element and within start and end tags is normalized
All whitespace in character content is retained (excluding characters removed during line feed normalization)
Attribute value delimiters are set to quotation marks (double quotes)
Special characters in attribute values and character content are replaced by character references
Superfluous namespace declarations are removed from each element
Default attributes are added to each element
Fixup of xml:base attributes is performed
Lexicographic order is imposed on the namespace declarations and attributes of each element
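These rules can be exercised directly. The sketch below uses the standard library's Canonical XML (C14N 2.0) implementation, xml.etree.ElementTree.canonicalize, available since Python 3.8; the input document is a made-up example:

```python
# Short sketch of XML canonicalization with the standard library.
# (Third-party lxml exposes the same idea via etree.tostring(..., method="c14n").)

from xml.etree.ElementTree import canonicalize

raw = """<?xml version="1.0"?>
<node1 a='1' b="2"   >Data</node1 >"""

# Canonicalization drops the XML declaration, normalizes attribute value
# delimiters to double quotes, and removes the extra whitespace inside the
# start and end tags; whitespace between tags is kept.
print(canonicalize(raw))
# Expected output (roughly): <node1 a="1" b="2">Data</node1>
```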
Computational linguistics
In morphology and lexicography, a lemma is the canonical form of a set of words. In English, for example, run, runs, ran, and running are forms of the same lexeme, so one of them (run, say) can be selected to represent all the forms. Lexical databases such as Unitex use this kind of representation.
Lemmatisation is the process of converting a word to its canonical form.
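As a brief illustration, a common lemmatizer maps all four forms to the same lemma; this sketch assumes NLTK with its WordNet data already downloaded (nltk.download("wordnet")):

```python
# Illustrative lemmatization of the example forms given above.

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
for form in ["run", "runs", "ran", "running"]:
    # pos="v" tells the lemmatizer to treat each form as a verb.
    print(form, "->", lemmatizer.lemmatize(form, pos="v"))
# All four forms map to the canonical form (lemma) "run".
```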
See also
References
External links
Canonical XML Version 1.0, W3C Recommendation
OWASP Security Reference for Canonicalization
Computing terminology | Canonicalization | [
"Technology"
] | 1,742 | [
"Computing terminology"
] |
1,046,155 | https://en.wikipedia.org/wiki/Projection-valued%20measure | In mathematics, particularly in functional analysis, a projection-valued measure (or spectral measure) is a function defined on certain subsets of a fixed set and whose values are self-adjoint projections on a fixed Hilbert space. A projection-valued measure (PVM) is formally similar to a real-valued measure, except that its values are self-adjoint projections rather than real numbers. As in the case of ordinary measures, it is possible to integrate complex-valued functions with respect to a PVM; the result of such an integration is a linear operator on the given Hilbert space.
Projection-valued measures are used to express results in spectral theory, such as the important spectral theorem for self-adjoint operators, in which case the PVM is sometimes referred to as the spectral measure. The Borel functional calculus for self-adjoint operators is constructed using integrals with respect to PVMs. In quantum mechanics, PVMs are the mathematical description of projective measurements. They are generalized by positive operator valued measures (POVMs) in the same sense that a mixed state or density matrix generalizes the notion of a pure state.
Definition
Let $H$ denote a separable complex Hilbert space and $(X, M)$ a measurable space consisting of a set $X$ and a Borel σ-algebra $M$ on $X$. A projection-valued measure $\pi$ is a map from $M$ to the set of bounded self-adjoint operators on $H$ satisfying the following properties:
$\pi(E)$ is an orthogonal projection for all $E \in M$.
$\pi(\emptyset) = 0$ and $\pi(X) = I$, where $\emptyset$ is the empty set and $I$ the identity operator.
If $E_1, E_2, E_3, \ldots$ in $M$ are disjoint, then for all $v \in H$, $\pi\left(\bigcup_{j=1}^{\infty} E_j\right) v = \sum_{j=1}^{\infty} \pi(E_j) v.$
$\pi(E_1 \cap E_2) = \pi(E_1)\pi(E_2)$ for all $E_1, E_2 \in M.$
The second and fourth property show that if $E_1$ and $E_2$ are disjoint, i.e., $E_1 \cap E_2 = \emptyset$, the images $\pi(E_1)$ and $\pi(E_2)$ are orthogonal to each other.
Let $V_E = \operatorname{im} \pi(E)$ and its orthogonal complement $V_E^{\perp} = \ker \pi(E)$ denote the image and kernel, respectively, of $\pi(E)$. If $V_E$ is a closed subspace of $H$ then $H$ can be written as the orthogonal decomposition $H = V_E \oplus V_E^{\perp}$ and $\pi(E) = I_E$ is the unique identity operator on $V_E$ satisfying all four properties.
For every $\xi, \eta \in H$ and $E \in M$ the projection-valued measure forms a complex-valued measure on $H$ defined as
$\mu_{\xi,\eta}(E) := \langle \pi(E)\xi \mid \eta \rangle$
with total variation at most $\|\xi\| \|\eta\|$. It reduces to a real-valued measure when
$\mu_{\xi}(E) := \langle \pi(E)\xi \mid \xi \rangle$
and a probability measure when $\xi$ is a unit vector.
Example. Let $(X, M, \mu)$ be a σ-finite measure space and, for all $E \in M$, let
$\pi(E) : L^2(X) \to L^2(X)$
be defined as
$\psi \mapsto \pi(E)\psi = 1_E \psi,$
i.e., as multiplication by the indicator function $1_E$ on L2(X). Then $\pi(E) = 1_E$ defines a projection-valued measure. For example, if $X = \mathbb{R}$, $E = (0,1)$, and $\varphi, \psi \in L^2(\mathbb{R})$, there is then the associated complex measure $\mu_{\varphi,\psi}$ which takes a measurable function $f : \mathbb{R} \to \mathbb{R}$ and gives the integral
$\int_E f \, d\mu_{\varphi,\psi} = \int_0^1 f(x)\,\psi(x)\,\overline{\varphi(x)} \, dx.$
Extensions of projection-valued measures
If $\pi$ is a projection-valued measure on a measurable space (X, M), then the map
$\chi_E \mapsto \pi(E)$
extends to a linear map on the vector space of step functions on X. In fact, it is easy to check that this map is a ring homomorphism. This map extends in a canonical way to all bounded complex-valued measurable functions on X, and we have the following: for every bounded measurable function $f$ on X there exists a unique bounded operator $T = \int_X f \, d\pi$ on $H$ such that $\langle T\xi \mid \eta \rangle = \int_X f \, d\mu_{\xi,\eta}$ for all $\xi, \eta \in H$.
The theorem is also correct for unbounded measurable functions $f$, but then $T$ will be an unbounded linear operator on the Hilbert space $H$.
This allows one to define the Borel functional calculus for such operators and then pass to measurable functions via the Riesz–Markov–Kakutani representation theorem. That is, if $g : \mathbb{R} \to \mathbb{C}$ is a measurable function, then a unique measure exists such that
$g(T) := \int_{\mathbb{R}} g(x) \, d\pi(x).$
Spectral theorem
Let $H$ be a separable complex Hilbert space, $A : H \to H$ be a bounded self-adjoint operator and $\sigma(A)$ the spectrum of $A$. Then the spectral theorem says that there exists a unique projection-valued measure $\pi^{A}$, defined on the Borel subsets $E \subset \sigma(A)$, such that
$A = \int_{\sigma(A)} \lambda \, d\pi^{A}(\lambda),$
where the integral extends to an unbounded function $\lambda$ when the spectrum of $A$ is unbounded.
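In finite dimensions the theorem reduces to the eigendecomposition of a Hermitian matrix, which the following sketch makes explicit; the matrix is an arbitrary example, and the rounding used to group eigenvalues is a numerical convenience, not part of the theorem:

```python
# Finite-dimensional sketch (illustrative only): for a Hermitian matrix A,
# the spectral PVM assigns to each eigenvalue the orthogonal projector onto
# its eigenspace, and A is recovered as a sum of eigenvalues times projectors.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # a bounded self-adjoint operator on C^2

eigvals, eigvecs = np.linalg.eigh(A)  # eigh guarantees a real spectrum

# pi({lambda}) = sum of |v><v| over the eigenvectors v with eigenvalue lambda.
# Grouping by rounded eigenvalue is safe here because the spectrum {1, 3}
# is well separated.
projectors = {}
for lam, v in zip(eigvals, eigvecs.T):
    key = round(float(lam), 9)
    projectors[key] = projectors.get(key, 0) + np.outer(v, v.conj())

# Defining properties: each pi(E) is an orthogonal projection.
for P in projectors.values():
    assert np.allclose(P @ P, P) and np.allclose(P, P.conj().T)

# Spectral theorem: A = sum over lambda of lambda * pi({lambda}).
assert np.allclose(A, sum(lam * P for lam, P in projectors.items()))
print(sorted(projectors))             # [1.0, 3.0]
```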
Direct integrals
First we provide a general example of a projection-valued measure based on direct integrals. Suppose (X, M, μ) is a measure space and let {Hx}x ∈ X be a μ-measurable family of separable Hilbert spaces. For every E ∈ M, let π(E) be the operator of multiplication by 1E on the Hilbert space
$\int_X^{\oplus} H_x \, d\mu(x).$
Then π is a projection-valued measure on (X, M).
Suppose π, ρ are projection-valued measures on (X, M) with values in the projections of H, K. π and ρ are unitarily equivalent if and only if there is a unitary operator U : H → K such that
$U \pi(E) U^{*} = \rho(E)$
for every E ∈ M.
Theorem. If (X, M) is a standard Borel space, then for every projection-valued measure π on (X, M) taking values in the projections of a separable Hilbert space, there is a Borel measure μ and a μ-measurable family of Hilbert spaces {Hx}x ∈ X , such that π is unitarily equivalent to multiplication by 1E on the Hilbert space
$\int_X^{\oplus} H_x \, d\mu(x).$
The measure class of μ and the measure equivalence class of the multiplicity function x → dim Hx completely characterize the projection-valued measure up to unitary equivalence.
A projection-valued measure π is homogeneous of multiplicity n if and only if the multiplicity function has constant value n. Clearly,
Theorem. Any projection-valued measure π taking values in the projections of a separable Hilbert space is an orthogonal direct sum of homogeneous projection-valued measures:
$\pi = \bigoplus_{1 \le n \le \omega} (\pi \mid H_n),$
where
$H_n = \int_{X_n}^{\oplus} H_x \, d(\mu \mid X_n)(x)$
and
$X_n = \{ x \in X : \dim H_x = n \}.$
Application in quantum mechanics
In quantum mechanics, given a projection-valued measure $\pi$ of a measurable space $X$ to the space of continuous endomorphisms upon a Hilbert space $H$,
the projective space of the Hilbert space $H$ is interpreted as the set of possible (normalizable) states of a quantum system,
the measurable space $X$ is the value space for some quantum property of the system (an "observable"),
the projection-valued measure $\pi$ expresses the probability that the observable takes on various values.
A common choice for $X$ is the real line, but it may also be
$\mathbb{R}^3$ (for position or momentum in three dimensions),
a discrete set (for angular momentum, energy of a bound state, etc.),
the 2-point set "true" and "false" for the truth-value of an arbitrary proposition about the system.
Let $E$ be a measurable subset of $X$ and $\varphi$ a normalized vector quantum state in $H$, so that its Hilbert norm is unitary, $\|\varphi\| = 1$. The probability that the observable takes its value in $E$, given the system in state $\varphi$, is
$P_{\pi}(\varphi)(E) = \langle \varphi \mid \pi(E)\,\varphi \rangle.$
We can parse this in two ways. First, for each fixed $E$, the projection $\pi(E)$ is a self-adjoint operator on $H$ whose 1-eigenspace consists of the states $\varphi$ for which the value of the observable always lies in $E$, and whose 0-eigenspace consists of the states $\varphi$ for which the value of the observable never lies in $E$.
Second, for each fixed normalized vector state $\varphi$, the association
$E \mapsto \langle \varphi \mid \pi(E)\,\varphi \rangle$
is a probability measure on $X$ making the values of the observable into a random variable.
A measurement that can be performed by a projection-valued measure is called a projective measurement.
If $X$ is the real number line, there exists, associated to $\pi$, a self-adjoint operator $A$ defined on $H$ by
$A(\varphi) = \int_{\mathbb{R}} \lambda \, d\pi(\lambda)(\varphi),$
which reduces to
$A(\varphi) = \sum_i \lambda_i \pi(\{\lambda_i\})(\varphi)$
if the support of $\pi$ is a discrete subset of $\mathbb{R}$.
The above operator is called the observable associated with the spectral measure.
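As a hedged numerical illustration of the discrete case, the probability and expectation formulas can be evaluated directly; the two-level observable, its projectors, and the state below are arbitrary choices made for demonstration:

```python
# Sketch: measurement probabilities P(E) = <phi | pi(E) phi> and the
# expectation of A = sum_i lambda_i * pi({lambda_i}) for a toy observable.

import numpy as np

lambdas = np.array([-1.0, 1.0])                       # discrete spectrum
P_minus = np.array([[1.0, 0.0], [0.0, 0.0]])          # pi({-1})
P_plus  = np.array([[0.0, 0.0], [0.0, 1.0]])          # pi({+1})

phi = np.array([3.0, 4.0]) / 5.0                      # normalized state, ||phi|| = 1

prob_minus = np.vdot(phi, P_minus @ phi).real         # <phi | pi(E) phi>
prob_plus  = np.vdot(phi, P_plus @ phi).real
print(prob_minus, prob_plus, prob_minus + prob_plus)  # 0.36 0.64 1.0

# Expectation value of the observable in state phi:
A = lambdas[0] * P_minus + lambdas[1] * P_plus
print(np.vdot(phi, A @ phi).real)                     # -0.36 + 0.64 = 0.28
```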
Generalizations
The idea of a projection-valued measure is generalized by the positive operator-valued measure (POVM), where the need for the orthogonality implied by projection operators is replaced by the idea of a set of operators that are a non-orthogonal partition of unity. This generalization is motivated by applications to quantum information theory.
See also
Spectral theorem
Spectral theory of compact operators
Spectral theory of normal C*-algebras
Notes
References
*
Mackey, G. W., The Theory of Unitary Group Representations, The University of Chicago Press, 1976
G. Teschl, Mathematical Methods in Quantum Mechanics with Applications to Schrödinger Operators, https://www.mat.univie.ac.at/~gerald/ftp/book-schroe/, American Mathematical Society, 2009.
Varadarajan, V. S., Geometry of Quantum Theory V2, Springer Verlag, 1970.
Linear algebra
Measures (measure theory)
Spectral theory | Projection-valued measure | [
"Physics",
"Mathematics"
] | 1,625 | [
"Physical quantities",
"Measures (measure theory)",
"Quantity",
"Size",
"Linear algebra",
"Algebra"
] |
1,046,654 | https://en.wikipedia.org/wiki/PDTV | PDTV is an abbreviation short for Pure Digital Television. Often seen as part of the filename of TV shows shared through P2P, The Scene, and FTP servers on the Internet. In this case, PDTV refers not to container, bitrate or dimensions of the video, but the digital nature of the capture source. Non Scene European rippers often use the label DVBRip or DVB-rip to specify a purely digital rip of a Digital Video Broadcast (DVB), however all Scene groups use standardized labeling.
PDTV encompasses a broad array of capture methods and sources, but generally it involves the capture of SD or non-HD digital television broadcasts without any analog-to-digital conversion, instead relying on directly ripping MPEG streams. PDTV sources can be captured by a variety of digital TV tuner cards from a digital feed such as ClearQAM unencrypted cable, Digital Terrestrial Television, Digital Video Broadcast or other satellite sources. Just as with Freeview (DVB-T) in the United Kingdom, broadcast television in the United States has no barriers to PDTV capture. Hardware such as the HDHomeRun when connected to an ATSC (Antenna) or unencrypted ClearQAM cable feed allows lossless digital capture of MPEG-2 streams (Pure Digital Television), without monthly fees or other restrictions normally implemented by a Set-top box. Although different from the analog hole, Pure Digital Television capture imposes no technological restriction on what is done with the stream; playback, Mash-Ups and even recompression/pirated distribution are possible without the permission of the rights holder.
A publisher of fan-made DVD releases also uses the name PDTV, but with no connection to the more common usage explained above. The "PD" in this case refers to "planet dust" with an additional connotation of Public Domain, even though the material offered is more often the video equivalent of abandonware as opposed to anything where copyright has actually expired. Whereas PDTV content online (as described above) is indiscriminate in terms of copyright, physical DVD releases from PDTV only exist to supply fans with material not officially published to the DVD format.
As of 2018, the latter PDTV has undergone somewhat of a "rebranding", shifting its focus slightly to further emphasize preservation of VHS, Beta and Laserdisc content. The meaning of the "PD" part of its name thus becoming more associated with "physical disc" rather than anything else.
References
Film and video technology
Digital television
Video
Multimedia
File sharing
Warez | PDTV | [
"Technology"
] | 529 | [
"Multimedia"
] |
1,046,687 | https://en.wikipedia.org/wiki/Equal-loudness%20contour | An equal-loudness contour is a measure of sound pressure level, over the frequency spectrum, for which a listener perceives a constant loudness when presented with pure steady tones. The unit of measurement for loudness levels is the phon and is arrived at by reference to equal-loudness contours. By definition, two sine waves of differing frequencies are said to have equal-loudness level measured in phons if they are perceived as equally loud by the average young person without significant hearing impairment.
The Fletcher–Munson curves are one of many sets of equal-loudness contours for the human ear, determined experimentally by Harvey Fletcher and Wilden A. Munson, and reported in a 1933 paper entitled "Loudness, its definition, measurement and calculation" in the Journal of the Acoustical Society of America. Fletcher–Munson curves have been superseded and incorporated into newer standards. The definitive curves are those defined in ISO 226 from the International Organization for Standardization, which are based on a review of modern determinations made in various countries.
Amplifiers often feature a "loudness" button, known technically as loudness compensation, that boosts low and high-frequency components of the sound. These are intended to offset the apparent loudness fall-off at those frequencies, especially at lower volume levels. Boosting these frequencies produces a flatter equal-loudness contour that appears to be louder even at low volume, preventing the perceived sound from being dominated by the mid-frequencies where the ear is most sensitive.
Fletcher–Munson curves
The first research on the topic of how the ear hears different frequencies at different levels was conducted by Fletcher and Munson in 1933. Until recently, it was common to see the term Fletcher–Munson used to refer to equal-loudness contours generally, even though a re-determination was carried out by Robinson and Dadson in 1956, which became the basis for an ISO 226 standard.
The generic term equal-loudness contours is now preferred, of which the Fletcher–Munson curves are now a sub-set, and especially since a 2003 survey by ISO redefined the curves in a new standard.
Experimental determination
The human auditory system is sensitive to frequencies from about 20 Hz to a maximum of around 20,000 Hz, although the upper hearing limit decreases with age. Within this range, the human ear is most sensitive between 2 and 5 kHz, largely due to the resonance of the ear canal and the transfer function of the ossicles of the middle ear.
Fletcher and Munson first measured equal-loudness contours using headphones (1933). In their study, test subjects listened to pure tones at various frequencies and at 10 dB increments in stimulus intensity. For each frequency and intensity, the listener also listened to a reference tone at 1000 Hz. Fletcher and Munson adjusted the reference tone until the listener perceived that it had the same loudness as the test tone. Loudness, being a psychological quantity, is difficult to measure, so Fletcher and Munson averaged their results over many test subjects to derive reasonable averages. The lowest equal-loudness contour represents the quietest audible tone—the absolute threshold of hearing. The highest contour is the threshold of pain.
Churcher and King carried out a second determination in 1937, but their results and Fletcher and Munson's showed considerable discrepancies over parts of the auditory diagram.
In 1956 Robinson and Dadson produced a new experimental determination that they believed was more accurate. It became the basis for a standard (ISO 226) that was considered definitive until 2003, when ISO revised the standard on the basis of recent assessments by research groups worldwide.
Recent revision aimed at more precise determination – ISO 226:2023
Perceived discrepancies between early and more recent determinations led the International Organization for Standardization (ISO) to revise the standard curves in ISO 226. They did this in response to recommendations in a study coordinated by the Research Institute of Electrical Communication, Tohoku University, Japan. The study produced new curves by combining the results of several studies—by researchers in Japan, Germany, Denmark, UK, and the US. (Japan was the greatest contributor with about 40% of the data.)
This has resulted in the recent acceptance of a new set of curves standardized as ISO 226:2003. The report comments on the surprisingly large differences, and the fact that the original Fletcher–Munson contours are in better agreement with recent results than the Robinson–Dadson, which appear to differ by as much as 10–15 dB, especially in the low-frequency region, for reasons not explained.
According to the ISO report, the Robinson–Dadson results were the odd one out, differing more from the current standard than did the Fletcher–Munson curves. The report states that it is fortunate that the 40-phon Fletcher–Munson curve on which the A-weighting standard was based turns out to have been in agreement with modern determinations.
The report also comments on the large differences apparent in the low-frequency region, which remain unexplained. Possible explanations are:
The equipment used was not properly calibrated.
The criteria used for judging equal loudness at different frequencies had differed.
Subjects were not properly rested for days in advance, or were exposed to loud noise in traveling to the tests, which tensed the tensor tympani and stapedius muscles controlling low-frequency mechanical coupling.
Side versus frontal presentation
Real-life sounds from a reasonably distant source arrive as planar wavefronts. If the source of sound is directly in front of the listener, then both ears receive equal intensity, but at frequencies above about 1 kHz the sound that enters the ear canal is partially reduced by the head shadow, and also highly dependent on reflection off the pinna (outer ear). Off-centre sounds result in increased head masking at one ear, and subtle changes in the effect of the pinna, especially at the other ear. This combined effect of head-masking and pinna reflection is quantified in a set of curves in three-dimensional space referred to as head-related transfer functions (HRTFs). Frontal presentation is now regarded as preferable when deriving equal-loudness contours, and the latest ISO standard is specifically based on frontal and central presentation.
Because no HRTF is involved in normal headphone listening, equal-loudness curves derived using headphones are valid only for the special case of what is called side-presentation, which is not how we normally hear.
The Robinson–Dadson determination used loudspeakers, and for a long time the difference from the Fletcher–Munson curves was explained partly on the basis that the latter used headphones. However, the ISO report actually lists the latter as using compensated headphones, though it doesn't make clear how Robinson–Dadson achieved compensation.
Headphones versus loudspeaker testing
Good headphones, well sealed to the ear, provide a flat low-frequency pressure response to the ear canal, with low distortion even at high intensities. At low frequencies, the ear is purely pressure-sensitive, and the cavity formed between headphones and ear is too small to introduce modifying resonances. Headphone testing is, therefore, a good way to derive equal-loudness contours below about 500 Hz, though reservations have been expressed about the validity of headphone measurements when determining the actual threshold of hearing, based on the observation that closing off the ear canal produces increased sensitivity to the sound of blood flow within the ear, which the brain appears to mask in normal listening conditions. At high frequencies, headphone measurement becomes unreliable, and the various resonances of pinnae (outer ears) and ear canals are severely affected by proximity to the headphone cavity.
With speakers, the opposite is true. A flat low-frequency response is hard to obtain—except in free space high above ground, or in a very large and anechoic chamber that is free from reflections down to 20 Hz. Until recently, it was not possible to achieve high levels at frequencies down to 20 Hz without high levels of harmonic distortion. Even today, the best speakers are likely to generate around 1 to 3% of total harmonic distortion, corresponding to 30 to 40 dB below fundamental. This is not good enough, given the steep rise in loudness (rising to as much as 24 dB per octave) with frequency revealed by the equal-loudness curves below about 100 Hz. A good experimenter must ensure that trial subjects really hear the fundamental and not harmonics—especially the third harmonic, which is especially strong as a speaker cone's travel becomes limited as its suspension reaches the limit of compliance. A possible way around the problem is to use acoustic filtering, such as by resonant cavity, in the speaker setup. A flat free-field high-frequency response up to 20 kHz, on the other hand, is comparatively easy to achieve with modern speakers on-axis. These effects must be considered when comparing results of various attempts to measure equal-loudness contours.
Relevance to sound level and noise measurements
The A-weighting curve—in widespread use for noise measurement—is said to have been based on the 40-phon Fletcher–Munson curve. However, research in the 1960s demonstrated that determinations of equal-loudness made using pure tones are not directly relevant to our perception of noise. This is because the cochlea in our inner ear analyzes sounds in terms of spectral content, each "hair-cell" responding to a narrow band of frequencies known as a critical band. The high-frequency bands are wider in absolute terms than the low-frequency bands, and therefore "collect" proportionately more power from a noise source. However, when more than one critical band is stimulated, the signals to the brain add the various bands to produce the impressions of loudness. For these reasons equal-loudness curves derived using noise bands show an upwards tilt above 1 kHz and a downward tilt below 1 kHz when compared to the curves derived using pure tones.
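For reference, the A-weighting response has a simple closed form. The sketch below evaluates it at a few frequencies using the commonly quoted rounded constants; it illustrates the curve's shape and is not a substitute for a standards-grade implementation:

```python
# Sketch of the A-weighting magnitude response using the commonly quoted
# (rounded) IEC 61672 pole frequencies; illustrative, not a calibrated meter.

import math

def a_weighting_db(f: float) -> float:
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    # The +2.00 dB offset normalizes the curve to 0 dB at 1 kHz.
    return 20.0 * math.log10(ra) + 2.00

for f in (31.5, 100.0, 1000.0, 4000.0, 16000.0):
    print(f"{f:>7.1f} Hz: {a_weighting_db(f):+6.1f} dB(A)")
```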
Various weighting curves were derived in the 1960s, in particular as part of the DIN 4550 standard for audio quality measurement, which differed from the A-weighting curve, showing more of a peak around 6 kHz. These gave a more meaningful subjective measure of noise on audio equipment, especially on the newly invented compact cassette tape recorders with Dolby noise reduction, which were characterized by a noise spectrum dominated by the higher frequencies.
BBC Research conducted listening trials in an attempt to find the best weighting curve and rectifier combination for use when measuring noise in broadcast equipment, examining the various new weighting curves in the context of noise rather than tones, confirming that they were much more valid than A-weighting when attempting to measure the subjective loudness of noise. This work also investigated the response of human hearing to tone-bursts, clicks, pink noise and a variety of other sounds that, because of their brief impulsive nature, do not give the ear and brain sufficient time to respond. The results were reported in BBC Research Report EL-17 1968/8 entitled The Assessment of Noise in Audio Frequency Circuits.
The ITU-R 468 noise weighting curve, originally proposed in CCIR recommendation 468, but later adopted by numerous standards bodies (IEC, BSI, JIS, ITU) was based on the research, and incorporates a special quasi-peak detector to account for our reduced sensitivity to short bursts and clicks. It is widely used by Broadcasters and audio professionals when they measure noise on broadcast paths and audio equipment, so they can subjectively compare equipment types with different noise spectra and characteristics.
See also
A-weighting
Audio quality measurement
Audiogram
CCIR (ITU) 468 Noise Weighting
dB(A)
ITU-R 468 noise weighting
Listener fatigue
Luminosity function, the same concept in vision
Mel scale
Pure tone audiometry
Robinson–Dadson curves
Sound level meter
Weighting filter
Notes
References
Audio Engineer's Reference Book, 2nd Ed., 1999, edited Michael Talbot Smith, Focal Press.
An Introduction to the Psychology of Hearing 5th ed, Brian C.J. Moore, Elsevier Press.
External links
ISO Standard
Precise and Full-range Determination of Two-dimensional Equal Loudness Contours
Fletcher–Munson is not Robinson–Dadson (PDF)
Full Revision of International Standards for Equal-Loudness Level Contours (ISO 226)
Test your hearing – A tool for measuring your equal-loudness contours
Equal-loudness contour measurements in detail
Evaluation of Loudness-level weightings and LLSEL JASA
A Model of Loudness Applicable to Time-Varying Sounds AESJ Article
Psychoacoustics
Audio engineering
ISO standards
Sound
Acoustics | Equal-loudness contour | [
"Physics",
"Engineering"
] | 2,604 | [
"Electrical engineering",
"Audio engineering",
"Classical mechanics",
"Acoustics"
] |
1,046,731 | https://en.wikipedia.org/wiki/Elfriede%20Jelinek | Elfriede Jelinek (; born 20 October 1946) is an Austrian playwright and novelist. She is one of the most decorated authors to write in German and was awarded the 2004 Nobel Prize in Literature for her "musical flow of voices and counter-voices in novels and plays that, with extraordinary linguistic zeal, reveal the absurdity of society's clichés and their subjugating power". She is considered to be among the most important living playwrights of the German language.
Biography
Elfriede Jelinek was born on 20 October 1946 in Mürzzuschlag, Styria, the daughter of Olga Ilona (née Buchner), a personnel director, and Friedrich Jelinek. She was raised in Vienna by her Romanian-German Catholic mother and a non-observant Czech Jewish father (whose surname Jelinek means "little deer" in Czech). Her mother's family came from Stájerlakanina, Krassó-Szörény County, Banat, Kingdom of Hungary (now Anina, Romania), and was of a bourgeois background, while her father was a working-class socialist.
Her father was a chemist, who managed to avoid persecution during the Second World War by working in strategically important industrial production. However, many of his relatives became victims of the Holocaust. Her mother, with whom she had a strained relationship, was from a formerly prosperous Vienna family. As a child, Elfriede attended a Roman Catholic convent school in Vienna. Her mother planned a career for her as a musical "Wunderkind". She was instructed in piano, organ, guitar, violin, viola, and recorder from an early age. Later, she went on to study at the Vienna Conservatory, where she graduated with an organist diploma; during this time, she tried to meet her mother's high expectations, while coping with her psychologically ill father. She studied art history and theater at the University of Vienna. However, she had to discontinue her studies due to an anxiety disorder, which resulted in self-isolation at her parents' house for a year. During this time, she began serious literary work as a form of therapy. After a year, she began to feel comfortable leaving the house, often with her mother. She began writing poetry at a young age. She made her literary debut with Lisas Schatten (Lisa's Shadow) in 1967, and received her first literary prize in 1969. During the 1960s, she became active politically, read a great deal, and "spent an enormous amount of time watching television".
She married Gottfried Hüngsberg on 12 June 1974.
Work and political engagement
Despite the author's own differentiation from Austria (due to her criticism of Austria's Nazi past), Jelinek's writing is deeply rooted in the tradition of Austrian literature, showing the influence of Austrian writers such as Ingeborg Bachmann, Marlen Haushofer, and Robert Musil.
Editor Friederike Eigler states that Jelinek has three major and inter-related "targets" in her writing: what she views as capitalist consumer society and its commodification of all human beings and relationships, what she views as the remnants of Austria's fascist past in public and private life, and what she views as the systematic exploitation and oppression of women in a capitalist-patriarchal society. Jelinek has claimed in multiple interviews that the Austrian-Jewish satirical tradition has been a formative influence on her writing, citing Karl Kraus, Elias Canetti, and Jewish cabaret in particular. In an interview with Sigrid Löffler, Jelinek claimed that her work is considered an oddity in contemporary Austria, where she claims satire is unappreciated and misunderstood, "because the Jews are dead." She has stressed her Jewish identity as the daughter of a Holocaust survivor, claiming a continuity with a Jewish-Viennese tradition that she believes has been destroyed by fascism and is dying out.
Work
Jelinek's output has included radio plays, poetry, theatre texts, polemical essays, anthologies, novels, translations, screenplays, musical compositions, libretti and ballets, film and video art.
Jelinek's work is multi-faceted, and highly controversial. It has been praised and condemned by leading literary critics. In the wake of the Fritzl case, for example, she was accused of "executing 'hysterical' portraits of Austrian perversity". Likewise, her political activism has encountered divergent and often heated reactions. Despite the controversy surrounding her work, Jelinek has won many distinguished awards; among them are the Georg Büchner Prize in 1998; the Mülheim Dramatists Prize in 2002 and 2004; the Franz Kafka Prize in 2004; and the Nobel Prize in Literature, also in 2004.
Female sexuality, sexual abuse, and the battle of the sexes in general are prominent topics in her work. Texts such as Wir sind Lockvögel, Baby! (We are Decoys, Baby!), Die Liebhaberinnen (Women as Lovers) and Die Klavierspielerin (The Piano Teacher) showcase the brutality and power play inherent in human relations in a style that is, at times, ironically formal and tightly controlled. According to Jelinek, power and aggression are often the principal driving forces of relationships. Likewise Ein Sportstück (Sports Play) explores the darker side of competitive sports. Her provocative novel Lust contains graphic description of sexuality, aggression and abuse. It received poor reviews by many critics, some of whom likened it to pornography. But others, who noted the power of the cold descriptions of moral failures, considered it to have been misunderstood and undervalued by them.
In April 2006, Jelinek spoke out to support Peter Handke, whose play Die Kunst des Fragens (The Art of Asking) was removed from the repertoire of the Comédie-Française for his alleged support of Slobodan Milošević. Her work is less known in English-speaking countries. However, in July and August 2012, a major English language premiere of her play Ein Sportstück by Just a Must theatre company brought her dramatic work to the attention of English-speaking audiences. The following year, in February and March 2013, the Women's Project in New York staged the North American premiere of Jackie, one of her Princess Dramas.
Political engagement
Jelinek was a member of Austria's Communist Party from 1974 to 1991. She became a household name during the 1990s due to her vociferous clash with Jörg Haider's Freedom Party. Following the 1999 National Council elections, and the subsequent formation of a coalition cabinet consisting of the Freedom Party and the Austrian People's Party, Jelinek became one of the new cabinet's more vocal critics.
Many foreign governments moved swiftly to ostracize Austria's administration, citing the Freedom Party's alleged nationalism and authoritarianism. The cabinet construed the sanctions against it as directed against Austria as such, and attempted to prod the nation into a national rallying (Nationaler Schulterschluss) behind the coalition parties.
This provoked a temporary heating of the political climate severe enough for dissidents such as Jelinek to be accused of treason by coalition supporters.
In the mid- to late-1980s, Jelinek was one of many Austrian intellectuals who signed a petition for the release of Jack Unterweger, who was imprisoned for the murder of a prostitute, and who was regarded by intellectuals and politicians as an example of successful rehabilitation. Unterweger was later found guilty of murdering nine more women within two years of his release, and committed suicide after his arrest.
Awards and honors
1996: Literaturpreis der Stadt Bremen for Die Kinder der Toten
1998: Georg Büchner Prize
2002: Mülheimer Dramatikerpreis for Macht Nichts
2003: Else Lasker-Schüler Dramatist Prize
2004: Hörspielpreis der Kriegsblinden for Jackie
2004: Franz Kafka Prize
2004: Nobel Prize in Literature
2004: Stig Dagerman Prize
2004: Mülheimer Dramatikerpreis for Das Werk
2009: Mülheimer Dramatikerpreis for Rechnitz (Der Würgeengel)
2011: Mülheimer Dramatikerpreis for Winterreise
2011: Honorary member of the American Academy of Arts and Letters
2017: Theatre prize Der Faust for lifetime achievement
2021: Honorary citizen of the City of Vienna
2021: Nestroy Theatre Prize for lifetime achievement
Publications
Poetry
Lisas Schatten; München 1967
ende: gedichte von 1966–1968; München 2000
Novels
bukolit.hörroman (written 1968, published by Rhombus Verlag, 1979). bukolit: audio novel.
wir sind lockvögel baby! (Rowohlt, 1970).
Michael. Ein Jugendbuch für die Infantilgesellschaft (Rowohlt, 1972).
Die Liebhaberinnen (Rowohlt, 1975). Women as Lovers, trans. Martin Chalmers (London: Serpent's Tail, 1994). .
Die Ausgesperrten (Rowohlt, 1980). Wonderful, Wonderful Times, trans. Michael Hulse (London: Serpent's Tail, 1990). .
Die Klavierspielerin (Rowohlt, 1983). The Piano Teacher, trans. Joachim Neugroschel (New York: Weidenfeld & Nicolson, 1988). .
Oh Wildnis, oh Schutz vor ihr (Rowohlt, 1985).
Lust (Rowohlt, 1989). Lust, trans. Michael Hulse (London: Serpent's Tail, 1992). .
Die Kinder der Toten (Rowohlt, 1995). The Children of the Dead, trans. Gitta Honegger (Yale, 2024).
Gier (Rowohlt, 2000). Greed, trans. Martin Chalmers (London: Serpent's Tail, 2006). .
Neid (2007). Envy. Private novel published on Jelinek's website.
rein GOLD. ein bühnenessay (Rowohlt, 2013). rein GOLD, trans. Gitta Honegger (Fitzcarraldo Editions, 2021).
Plays
Was geschah, nachdem Nora ihren Mann verlassen hatte; oder Stützen der Gesellschaften (1979). What Happened after Nora Left Her Husband; or Pillars of Society. Premiered at Graz, October 1979.
Clara S, musikalische Tragödie (1982). Clara S, a Musical Tragedy. Premiered at Bonn, 1982.
Krankheit oder Moderne Frauen. Wie ein Stück (1984). Illness or Modern Women. Like a Play. Premiered at Bonn, 1987.
Burgtheater. Posse mit Gesang (1985). Burgtheater. Farce with Songs. Premiered at Bonn, 1985.
Begierde und Fahrererlaubnis (eine Pornographie) (1986). Desire and Permission to Drive – Pornography. Premiered at the Styrian Autumn, Graz, 1986.
Wolken. Heim (1988). Clouds. Home. Premiered at Bonn, 1988.
Präsident Abendwind. Ein Dramolett, sehr frei nach Johann Nestroy (1992). President Abendwind. A dramolet, very freely after Johann Nestroy. Premiered at Tyrol Landestheater, Innsbruck, 1992.
Totenauberg (1992). Premiered at Burgtheater (Akademietheater), 1992.
Raststätte oder Sie machens alle. Eine Komödie (1994). Service Area or They're All Doing It. A Comedy. Premiered at Burgtheater, 1994.
Stecken, Stab und Stangl. Eine Handarbeit (1996). Rod, Staff, and Crook – Handmade. Premiered at Deutsches Schauspielhaus, 1996.
Ein Sportstück (1998). Sports Play, trans. Penny Black (Oberon Books, 2012). Premiered at Burgtheater, 1998; English-language premiere in Lancaster, 11 July 2012. Also translated by Lillian Banks as Sports Chorus for the Museum of Contemporary Art in Krakow.
er nicht als er (zu, mit Robert Walser) (1998). Her Not All Her: On/With Robert Walser, trans. Damion Searls (Sylph Editions, 2012). Premiered at Salzburg Festival in conjunction with Deutsches Schauspielhaus, 1998.
Das Lebewohl (2000). Les Adieux. Premiered at Berliner Ensemble, 2000.
Das Schweigen (2000). Silence. Premiered at Deutsches Schauspielhaus, 2000.
Der Tod und das Mädchen II (2000). Death and the Maiden II. Premiered at Expo 2000 in conjunction with the Saarbrücken Staatstheater and ZKM Karlsruhe.
MACHT NICHTS – Eine Kleine Trilogie des Todes (2001). NO PROBLEM – A Little Trilogy of Death. Premiered at Schauspielhaus Zürich, 2001.
In den Alpen (2002). In the Alps. Premiered at Munich Kammerspiele in conjunction with Schauspielhaus Zürich, 2002.
Prinzessinnendramen: Der Tod und das Mädchen I-III und IV-V (2002). Princess Dramas: Death and the Maiden I-III and IV-V. Parts I-III premiered at Deutsches Schauspielhaus, 2002; Parts IV-V premiered at Deutsches Theater, 2002.
Das Werk (2003). Premiered at Burgtheater (Akademietheater), 2003.
Bambiland (2003). Trans. Lilian Friedberg (2007). Premiered at Burgtheater, 2003.
Irm und Margit A part of "Attabambi Pornoland" (2004). Premiered at Schauspielhaus Zürich, 2004.
Ulrike Maria Stuart (2006). Premiered at Thalia Theater, 2006.
Über Tiere (2006).
Rechnitz (Der Würgeengel) (2008). Rechnitz (The Exterminating Angel).
Die Kontrakte des Kaufmanns. Eine Wirtschaftskomödie (2009). The Merchant's Contracts.
Das Werk / Im Bus / Ein Sturz (2010). Premiered at Schauspiel Köln, 2010.
Winterreise (2011). Premiered at Munich Kammerspiele, 2011.
Kein Licht (2011). Premiered at Schauspiel Köln, 2011
FaustIn and out (2011). Premiered at Schauspielhaus Zürich, 2012.
Die Straße. Die Stadt. Der Überfall (2012). Premiered at Munich Kammerspiele, 2012.
Schatten (Eurydike sagt) (2013). Shadow. Eurydice Says, trans. Gitta Honegger (2017). Premiered at Burgtheater, 2013.
Aber sicher! (2013). Premiered at Theater Bremen, 2013.
Die Schutzbefohlenen (2013). Charges (The Supplicants), trans. Gitta Honegger (Seagull Books, 2016). First read at Hamburg, 2013; first produced at Mannheim, 23 May 2014.
Das schweigende Mädchen (2014). Premiered at Munich, 27 September 2014.
Wut (2016). Fury, trans. Gitta Honegger (Seagull Books, 2022). Premiered at Munich, 16 April 2016.
Am Königsweg (2017). On the Royal Road: The Burgher King, trans. Gitta Honegger (Seagull Books, 2020). Premiered at Hamburg, 28 October 2017.
Schnee Weiss (2018). Premiered at Cologne, 21 December 2018.
Schwarzwasser (2020). Premiered at Vienna, 6 February 2020.
Sonne, los jetzt! (2022). Premiered at Zürich, 15 December 2022.
Angabe der Person (2022). Premiered at Berlin, 17 December 2022.
Sonne / Luft (2022). Premiered at Zürich, 15 December 2022.
Asche (2024). Premiered at Munich, 26 April 2024.
Opera libretto
Lost Highway (2003), adapted from the film by David Lynch, with music by Olga Neuwirth
Translations
Die Enden der Parabel (Gravity's Rainbow) novel by Thomas Pynchon; 1976
Herrenjagd drama by Georges Feydeau; 1983
Floh im Ohr drama by Georges Feydeau; 1986
Der Gockel drama by Georges Feydeau; 1986
Die Affaire Rue de Lourcine drama by Eugène Labiche; 1988
Die Dame vom Maxim drama by Georges Feydeau; 1990
Der Jude von Malta drama by Christopher Marlowe; 2001
Ernst sein ist alles drama by Oscar Wilde; 2004
Der ideale Mann drama by Oscar Wilde; 2011
Poetry and short stories from Latin American authors
Jelinek's works in English translation
The Piano Teacher, trans. Joachim Neugroschel (New York: Weidenfeld & Nicolson, 1988). .
Wonderful, Wonderful Times, trans. Michael Hulse (London: Serpent's Tail, 1990). .
Lust, trans. Michael Hulse (London: Serpent's Tail, 1992). .
Women as Lovers, trans. Martin Chalmers (London: Serpent's Tail, 1994). .
Greed, trans. Martin Chalmers (London: Serpent's Tail, 2006). .
Bambiland, trans. Lilian Friedberg (2009), in Theater 39.3, pp. 111–43.
Her Not All Her: On/With Robert Walser, trans. Damion Searls (Sylph Editions, 2012).
Sports Play, trans. Penny Black (Oberon Books, 2012).
Sports Chorus, trans. Lilian Banks (2012), in Sport in Art, commissioned by Museum of Contemporary Art in Kraków.
Rechnitz and The Merchant's Contracts, trans. Gitta Honegger (Seagull Books, 2015). .
Charges (The Supplicants), trans. Gitta Honegger (Seagull Books, 2016). .
Three Plays: Rechnitz, The Merchant's Contracts, Charges (The Supplicants), trans. Gitta Honegger (Seagull Books, 2019).
On the Royal Road: The Burgher King, trans. Gitta Honegger (Seagull Books, 2020).
rein GOLD, trans. Gitta Honegger (Fitzcarraldo Editions, 2021).
The Children of the Dead, trans. Gitta Honegger (Yale, 2024).
In popular culture
Her novel The Piano Teacher was the basis for the 2001 film of the same title by Austrian director Michael Haneke, starring Isabelle Huppert as the protagonist.
In 2022, a documentary about Jelinek was created by Claudia Müller, Elfriede Jelinek – Language Unleashed (German: Elfriede Jelinek – Die Sprache von der Leine lassen).
See also
List of female Nobel laureates
Gottfried Hüngsberg (German Wikipedia)
List of Jewish Nobel laureates
References
Further reading
Bethman, Brenda. 'Obscene Fantasies': Elfriede Jelinek's Generic Perversions. New York, NY: Peter Lang, 2011;
Fiddler, Allyson. Rewriting Reality: An Introduction to Elfriede Jelinek. Oxford: Berg, 1994;
Gérard Thiériot (dir.). Elfriede Jelinek et le devenir du drame, Toulouse, Presses universitaires du Mirail, 2006;
Flitner, Bettina. Frauen mit Visionen – 48 Europäerinnen (Women with Visions – 48 Europeans). With texts by Alice Schwarzer. Munich: Knesebeck, 2004; , 122–125 p.
Konzett, Matthias. The Rhetoric of National Dissent in Thomas Bernhard, Peter Handke, and Elfriede Jelinek. Rochester, NY: Camden House, 2000;
Lamb-Faffelberger, Margarete and Matthias Konzett, editors. Elfriede Jelinek: Writing Woman, Nation, and Identity—A Critical Anthology. Fairleigh Dickinson University Press, 2007;
Rosellini, Jay. "Haider, Jelinek, and the Austrian Culture Wars". CreateSpace.com, 2009. .
External links
Elfriede Jelinek-Forschungszentrum
including the Nobel Lecture on 7 December 2004 Sidelined
BBC synopsis
List of works
Elfriede Jelinek: Nichts ist verwirklicht. Alles muss jetzt neu definiert werden.
The Goethe-Institut's 70th Birthday Page for Elfriede Jelinek
Some of Jelinek's poems in English from the Poetry Foundation
Sound recordings with Elfriede Jelinek in the Online Archive of the Österreichische Mediathek (Literary readings, interviews and radio reports)
1946 births
21st-century Austrian Jews
Living people
Nobel laureates in Literature
Austrian Nobel laureates
Jewish Austrian writers
Jewish feminists
Jewish dramatists and playwrights
Jewish novelists
Jewish socialists
Jewish women writers
Austrian women dramatists and playwrights
Austrian communists
Austrian feminists
Austrian women novelists
Austrian people of Czech-Jewish descent
Austrian people of German descent
Austrian people of Romanian descent
BDSM writers
Communist Party of Austria politicians
Georg Büchner Prize winners
People from Mürzzuschlag
Women Nobel laureates
20th-century Austrian women writers
21st-century Austrian women writers
20th-century Austrian dramatists and playwrights
21st-century Austrian dramatists and playwrights
20th-century Austrian novelists
21st-century Austrian novelists
Austrian socialist feminists
Communist women writers
German-language poets | Elfriede Jelinek | [
"Technology"
] | 4,570 | [
"Women Nobel laureates",
"Women in science and technology"
] |
1,047,047 | https://en.wikipedia.org/wiki/American%20Nuclear%20Society | The American Nuclear Society (ANS) is an international, not-for-profit organization of scientists, engineers, and industry professionals that promote the field of nuclear engineering and related disciplines.
ANS is composed of three communities: professional divisions, local sections/plant branches, and student sections. Individual members consist of fellows, professional members, and student members. The Society also includes organization members such as corporations, governmental agencies, educational institutions, and associations.
As of spring 2024, ANS is composed of more than 10,000 members from more than 40 countries. ANS is also a member of the International Nuclear Societies Council (INSC).
Professional Divisions within the American Nuclear Society focus on specific technical domains, encompassing 18 areas and the Young Members Group. They provide members with specialized engagement opportunities in nuclear science and technology. ANS members can join any number of these divisions. Their activities are coordinated by the Professional Divisions Committee. Topics covered by the divisions range from Accelerator Applications to Fusion Energy and more.
The main objectives of ANS are to provide professional development opportunities for members, engage and inform the public and students about the benefits of nuclear technology, encourage innovation in the nuclear field, and advocate effectively for nuclear technology at both domestic and international levels.
History
The American Nuclear Society was founded in 1954 as a not-for-profit association to promote the growing nuclear field. Shortly thereafter in 1955, ANS held its first annual meeting and elected Walter Zinn as its first president. Originally headquartered in space provided by the Oak Ridge Institute of Nuclear Studies (ORINS), the Society's headquarters were moved to various locations over the years until 1977, when the Society settled into its own building in La Grange Park, Illinois. Since 2024, the Society has been headquartered in Westmont, Illinois.
The American Nuclear Society published "Fusion technology : a journal of the American Nuclear Society and the European Nuclear Society" from 1984 to 2001.
Divisions
Accelerator Applications
Aerospace Nuclear Science & Technology
Decommissioning & Environmental Sciences
Education, Training & Workforce Development
Fuel Cycle & Waste Management
Fusion Energy
Human Factors, Instrumentation & Controls
Isotopes & Radiation
Materials Science & Technology
Mathematics & Computation
Nuclear Criticality Safety
Nuclear Installations Safety
Nuclear Nonproliferation Policy
Operations & Power
Radiation Protection & Shielding
Reactor Physics
Robotics & Remote Systems
Thermal Hydraulics
Young Members Group
Publications
The American Nuclear Society publishes various journals, magazines, newsletters, and books.
Nuclear News
Radwaste Solutions
Nuclear Science and Engineering
Nuclear Technology
Fusion Science and Technology
Nuclear Newswire
Student sections
The American Nuclear Society consists of student sections at colleges and universities throughout the United States and abroad. As of spring 2020, the table below lists the active student sections of ANS.
Local sections
Throughout the US and the world, numerous Local sections constitute the foundation of ANS. Members are encouraged to affiliate with a Local Section or Plant Branch to expand their professional connections and contribute to public education and outreach in the nuclear sector.
See also
Alpha Nu Sigma
Institute of Nuclear Materials Management
Nuclear Energy Institute
Guy Tavernier (fr)
J. Ernest Wilkins, Jr.
Margaret K. Butler
European Nuclear Society
External links
ANS Official Website
ANS Young Members Group
References
Professional associations based in the United States
Nuclear organizations
1954 establishments in the United States
Organizations established in 1954
501(c)(3) organizations | American Nuclear Society | [
"Engineering"
] | 664 | [
"Nuclear organizations",
"Energy organizations"
] |
1,047,078 | https://en.wikipedia.org/wiki/Microalgae | Microalgae or microphytes are microscopic algae invisible to the naked eye. They are phytoplankton typically found in freshwater and marine systems, living in both the water column and sediment. They are unicellular species which exist individually, or in chains or groups. Depending on the species, their sizes can range from a few micrometers (μm) to a few hundred micrometers. Unlike higher plants, microalgae do not have roots, stems, or leaves. They are specially adapted to an environment dominated by viscous forces.
Microalgae, capable of performing photosynthesis, are important for life on earth; they produce approximately half of the atmospheric oxygen and use the greenhouse gas carbon dioxide to grow photoautotrophically. "Marine photosynthesis is dominated by microalgae, which together with cyanobacteria, are collectively called phytoplankton." Microalgae, together with bacteria, form the base of the food web and provide energy for all the trophic levels above them. Microalgae biomass is often measured with chlorophyll a concentrations and can provide a useful index of potential production.
The biodiversity of microalgae is enormous and they represent an almost untapped resource. It has been estimated that about 200,000–800,000 species in many different genera exist, of which about 50,000 species are described. Over 15,000 novel compounds originating from algal biomass have been chemically determined. Examples include carotenoids, fatty acids, enzymes, polymers, peptides, toxins and sterols. Besides providing these valuable metabolites, microalgae are regarded as a potential feedstock for biofuels and have also emerged as promising microorganisms in bioremediation.
An exception to the microalgae family is the colorless Prototheca, which is devoid of any chlorophyll. These achlorophyllous algae switch to parasitism and thus cause the disease protothecosis in humans and animals.
Characteristics and uses
The chemical composition of microalgae is not an intrinsic constant but varies over a wide range, depending both on species and on cultivation conditions. Some microalgae have the capacity to acclimate to changes in environmental conditions by altering their chemical composition in response to environmental variability. A particularly dramatic example is their ability to replace phospholipids with non-phosphorus membrane lipids in phosphorus-depleted environments. It is possible to accumulate the desired products in microalgae to a large extent by changing environmental factors, like temperature, illumination, pH, CO2 supply, salt and nutrients.
Microphytes also produce chemical signals which contribute to prey selection, defense, and avoidance. These chemical signals affect large-scale trophic structures such as algal blooms but propagate by simple diffusion and laminar advective flow. Microalgae such as microphytes constitute the basic foodstuff for numerous aquaculture species, especially filtering bivalves.
Photo- and chemosynthetic algae
Photosynthetic and chemosynthetic microbes can also form symbiotic relationships with host organisms. They provide them with vitamins and polyunsaturated fatty acids, necessary for the growth of the bivalves which are unable to synthesize it themselves. In addition, because the cells grow in aqueous suspension, they have more efficient access to water, CO2, and other nutrients.
Microalgae play a major role in nutrient cycling, fixing inorganic carbon into organic molecules and releasing oxygen into the marine biosphere.
While fish oil has become famous for its omega-3 fatty acid content, fish do not actually produce omega-3s, instead accumulating their omega-3 reserves by consuming microalgae. These omega-3 fatty acids can be obtained in the human diet directly from the microalgae that produce them.
Microalgae can accumulate considerable amounts of proteins depending on species and cultivation conditions. Due to their ability to grow on non-arable land microalgae may provide an alternative protein source for human consumption or animal feed. Microalgae proteins are also investigated as thickening agents or emulsion and foam stabilizers in the food industry to replace animal based proteins.
Some microalgae accumulate chromophores like chlorophyll, carotenoids, phycobiliproteins or polyphenols that may be extracted and used as coloring agents.
Cultivation of microalgae
A range of microalgae species are produced in hatcheries and are used in a variety of ways for commercial purposes, including for human nutrition, as biofuel, in the aquaculture of other organisms, in the manufacture of pharmaceuticals and cosmetics, and as biofertiliser. However, the low cell density is a major bottleneck in commercial viability of many microalgae derived products, especially low cost commodities.
Studies have identified the main factors in the success of a microalgae hatchery system as:
Geometry and scale of cultivation systems (referred as photobioreactors);
Light intensity;
Concentration of carbon dioxide (CO2) in the gas phase
Nutrient levels (mainly N, P, K)
Mixing of culture
See also
AlgaeBase
Raceway pond
References
Biological oceanography
Planktology
Aquatic ecology | Microalgae | [
"Biology"
] | 1,112 | [
"Aquatic ecology",
"Ecosystems"
] |
1,047,111 | https://en.wikipedia.org/wiki/HSAB%20theory | HSAB is an acronym for "hard and soft (Lewis) acids and bases". HSAB is widely used in chemistry for explaining the stability of compounds, reaction mechanisms and pathways. It assigns the terms 'hard' or 'soft', and 'acid' or 'base' to chemical species. 'Hard' applies to species which are small, have high charge states (the charge criterion applies mainly to acids, to a lesser extent to bases), and are weakly polarizable. 'Soft' applies to species which are big, have low charge states and are strongly polarizable.
The theory is used in contexts where a qualitative, rather than quantitative, description would help in understanding the predominant factors which drive chemical properties and reactions. This is especially so in transition metal chemistry, where numerous experiments have been done to determine the relative ordering of ligands and transition metal ions in terms of their hardness and softness.
HSAB theory is also useful in predicting the products of metathesis reactions. In 2005 it was shown that even the sensitivity and performance of explosive materials can be explained on basis of HSAB theory.
Ralph Pearson introduced the HSAB principle in the early 1960s as an attempt to unify inorganic and organic reaction chemistry.
Theory
Essentially, the theory states that soft acids prefer to form bonds with soft bases, whereas hard acids prefer to form bonds with hard bases, all other factors being equal. It can also be said that hard acids bind strongly to hard bases and soft acids bind strongly to soft bases. The HSAB classification in the original work was largely based on equilibrium constants of Lewis acid/base reactions with a reference base for comparison.
Borderline cases are also identified: borderline acids are trimethylborane, sulfur dioxide and the ferrous Fe2+, cobalt Co2+, caesium Cs+ and lead Pb2+ cations. Borderline bases are: aniline, pyridine, nitrogen N2 and the azide, chloride, bromide, nitrate and sulfate anions.
Generally speaking, acids and bases interact and the most stable interactions are hard–hard (ionogenic character) and soft–soft (covalent character).
An attempt to quantify the 'softness' of a base consists in determining the equilibrium constant for the following equilibrium:
BH + CH3Hg+ ⇌ H+ + CH3HgB
where CH3Hg+ (methylmercury ion) is a very soft acid and H+ (proton) is a hard acid, which compete for B (the base to be classified).
Some examples illustrating the effectiveness of the theory:
Bulk metals are soft acids and are poisoned by soft bases such as phosphines and sulfides.
Hard solvents such as hydrogen fluoride, water and the protic solvents tend to dissolve strong solute bases such as fluoride and oxide anions. On the other hand, dipolar aprotic solvents such as dimethyl sulfoxide and acetone are soft solvents with a preference for solvating large anions and soft bases.
In coordination chemistry soft–soft and hard–hard interactions exist between ligands and metal centers.
Chemical hardness
In 1983 Pearson together with Robert Parr extended the qualitative HSAB theory with a quantitative definition of the chemical hardness (η) as being proportional to the second derivative of the total energy of a chemical system with respect to changes in the number of electrons at a fixed nuclear environment:
$\eta = \frac{1}{2}\left(\frac{\partial^2 E}{\partial N^2}\right)_Z.$
The factor of one-half is arbitrary and often dropped as Pearson has noted.
An operational definition for the chemical hardness is obtained by applying a three-point finite difference approximation to the second derivative:
$\eta = \frac{I - A}{2},$
where I is the ionization potential and A the electron affinity. This expression implies that the chemical hardness is proportional to the band gap of a chemical system, when a gap exists.
The first derivative of the energy with respect to the number of electrons is equal to the chemical potential, μ, of the system,
$\mu = \left(\frac{\partial E}{\partial N}\right)_Z,$
from which an operational definition for the chemical potential is obtained from a finite difference approximation to the first order derivative as
$\mu = \frac{-(I + A)}{2},$
which is equal to the negative of the electronegativity (χ) definition on the Mulliken scale: μ = −χ.
The hardness and Mulliken electronegativity are related as
2η = ∂μ/∂N ≈ −∂χ/∂N,
and in this sense hardness is a measure of resistance to deformation or change. Likewise a value of zero denotes maximum softness, where softness is defined as the reciprocal of hardness.
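These finite-difference expressions are easy to evaluate once I and A are known. A minimal Python sketch; the hydrogen-atom values used below are standard experimental numbers, quoted only as an illustration:

def chemical_hardness(I, A):
    # η = (I − A) / 2, the three-point finite-difference hardness
    return (I - A) / 2.0

def chemical_potential(I, A):
    # μ = −(I + A) / 2, the negative of the Mulliken electronegativity
    return -(I + A) / 2.0

# hydrogen atom: I = 13.60 eV, A = 0.75 eV
eta = chemical_hardness(13.60, 0.75)    # about 6.4 eV
mu = chemical_potential(13.60, 0.75)    # about -7.2 eV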
In a compilation of hardness values only that of the hydride anion deviates. Another discrepancy noted in the original 1983 article is the apparently higher hardness of Tl3+ compared to Tl+.
Modifications
If the interaction between acid and base in solution results in an equilibrium mixture, the strength of the interaction can be quantified in terms of an equilibrium constant. An alternative quantitative measure is the heat (enthalpy) of formation of the Lewis acid–base adduct in a non-coordinating solvent. The ECW model is a quantitative model that describes and predicts the strength of Lewis acid–base interactions, −ΔH. The model assigns E and C parameters to many Lewis acids and bases. Each acid is characterized by an EA and a CA. Each base is likewise characterized by its own EB and CB. The E and C parameters refer, respectively, to the electrostatic and covalent contributions to the strength of the bonds that the acid and base will form. The equation is
-ΔH = EAEB + CACB + W
The W term represents a constant energy contribution for acid–base reactions such as the cleavage of a dimeric acid or base. The equation predicts reversals of acid and base strengths. The graphical presentations of the equation show that there is no single order of Lewis base strengths or Lewis acid strengths. The ECW model accommodates the failure of single-parameter descriptions of acid–base interactions.
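As a sketch of how the equation is applied, the Python snippet below evaluates −ΔH for two hypothetical acids against two hypothetical bases; the E and C values are placeholders, not Drago's published parameters, chosen only to show how the predicted order of base strengths can reverse between a predominantly electrostatic and a predominantly covalent acid:

def ecw_enthalpy(EA, CA, EB, CB, W=0.0):
    # −ΔH predicted by the ECW model
    return EA * EB + CA * CB + W

print(ecw_enthalpy(EA=2.0, CA=0.4, EB=1.5, CB=0.2))   # 3.08: electrostatic acid, electrostatic base
print(ecw_enthalpy(EA=2.0, CA=0.4, EB=0.3, CB=1.8))   # 1.32: electrostatic acid, covalent base
print(ecw_enthalpy(EA=0.4, CA=2.0, EB=1.5, CB=0.2))   # 1.00: covalent acid, electrostatic base
print(ecw_enthalpy(EA=0.4, CA=2.0, EB=0.3, CB=1.8))   # 3.72: covalent acid, covalent base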
A related method adopting the E and C formalism of Drago and co-workers quantitatively predicts the formation constants for complexes of many metal ions plus the proton with a wide range of unidentate Lewis bases in aqueous solution, and also offers insights into factors governing HSAB behavior in solution.
Another quantitative system has been proposed, in which Lewis acid strength toward the Lewis base fluoride is based on gas-phase affinity for fluoride. Additional one-parameter base strength scales have been presented. However, it has been shown that to define the order of Lewis base strength (or Lewis acid strength) at least two properties must be considered. For Pearson's qualitative HSAB theory the two properties are hardness and strength, while for Drago's quantitative ECW model the two properties are electrostatic and covalent.
Kornblum's rule
An application of HSAB theory is the so-called Kornblum's rule (after Nathan Kornblum) which states that in reactions with ambident nucleophiles (nucleophiles that can attack from two or more places), the more electronegative atom reacts when the reaction mechanism is SN1 and the less electronegative one in an SN2 reaction. This rule (established in 1954) predates HSAB theory but in HSAB terms its explanation is that in an SN1 reaction the carbocation (a hard acid) reacts with a hard base (high electronegativity) and that in an SN2 reaction tetravalent carbon (a soft acid) reacts with soft bases.
According to findings, electrophilic alkylations at free CN− occur preferentially at carbon, regardless of whether the SN1 or SN2 mechanism is involved and whether hard or soft electrophiles are employed. Preferred N attack, as postulated for hard electrophiles by the HSAB principle, could not be observed with any alkylating agent. Isocyano compounds are only formed with highly reactive electrophiles that react without an activation barrier because the diffusion limit is approached. It is claimed that the knowledge of absolute rate constants and not of the hardness of the reaction partners is needed to predict the outcome of alkylations of the cyanide ion.
Criticism
Reanalysis of a large number of the most typical ambident organic systems reveals that thermodynamic/kinetic control describes the reactivity of organic compounds perfectly, whereas the HSAB principle fails and should be abandoned in the rationalization of ambident reactivity of organic compounds.
See also
Acid-base reaction
Oxophilicity
References
Acid–base chemistry
Inorganic chemistry | HSAB theory | [
"Chemistry"
] | 1,741 | [
"Equilibrium chemistry",
"Acid–base chemistry",
"nan"
] |
1,047,161 | https://en.wikipedia.org/wiki/Chapters%20and%20verses%20of%20the%20Bible | Chapter and verse divisions did not appear in the original texts of Jewish or Christian bibles; such divisions form part of the paratext of the Bible. Since the early 13th century, most copies and editions of the Bible have presented all but the shortest of the scriptural books with divisions into chapters, generally a page or so in length. Since the mid-16th century, editors have further subdivided each chapter into verses – each consisting of a few short lines or of one or more sentences. Sometimes a sentence spans more than one verse, as in the case of Ephesians 2:8–9, and sometimes there is more than one sentence in a single verse, as in the case of Genesis 1:2.
The Jewish divisions of the Hebrew text differ at various points from those used by Christians. For instance, Jewish tradition regards the ascriptions to many Psalms as independent verses or as parts of the subsequent verses, whereas established Christian practice treats each Psalm ascription as independent and unnumbered, resulting in 116 more verses in Jewish versions than in the Christian texts. Some chapter divisions also occur in different places, e.g. Hebrew Bibles have 1 Chronicles 5:27–41 where Christian translations have 1 Chronicles 6:1–15.
History
Chapters
Early manuscripts of the biblical texts did not contain the chapter and verse divisions in the numbered form familiar to modern readers. In antiquity Hebrew texts were divided into paragraphs (parashot) that were identified by two letters of the Hebrew alphabet. Peh (פ) indicated an "open" paragraph that began on a new line, while Samekh (ס) indicated a "closed" paragraph that began on the same line after a small space. These two letters begin the Hebrew words open () and closed (), and are, themselves, open in shape (פ) and closed (ס). The earliest known copies of the Book of Isaiah from the Dead Sea Scrolls used parashot divisions, although they differ slightly from the Masoretic divisions.
The Hebrew Bible was also divided into some larger sections. In Israel, the Torah (its first five books) was divided into 154 sections so that it could be read through aloud in weekly worship over the course of three years. In Babylonia, it was divided into 53 or 54 sections (Parashat ha-Shavua) so it could be read through in one year. The New Testament was divided into topical sections known as kephalaia by the fourth century. Eusebius of Caesarea divided the gospels into parts that he listed in tables or canons. Neither of these systems corresponds with modern chapter divisions. (See fuller discussions below.)
Chapter divisions, with titles, are also found in the 9th-century Tours manuscript Paris Bibliothèque Nationale MS Lat. 3, the so-called Bible of Rorigo.
Cardinal archbishop Stephen Langton and Cardinal Hugo de Sancto Caro developed different schemas for systematic division of the Bible in the early 13th century. It is the system of Archbishop Langton on which the modern chapter divisions are based.
While chapter divisions have become nearly universal, editions of the Bible have sometimes been published without them. Such editions, which typically use thematic or literary criteria to divide the biblical books instead, include John Locke's Paraphrase and Notes on the Epistles of St. Paul (1707), Alexander Campbell's The Sacred Writings (1826), Daniel Berkeley Updike's fourteen-volume The Holy Bible Containing the Old and New Testaments and the Apocrypha, Richard Moulton's The Modern Reader's Bible (1907), Ernest Sutherland Bates's The Bible Designed to Be Read as Living Literature (1936), The Books of the Bible (2007) from the International Bible Society (Biblica), Adam Lewis Greene's five-volume Bibliotheca (2014), and the six-volume ESV Reader's Bible (2016) from Crossway Books.
Verses
Since at least 916 the Tanakh has contained an extensive system of multiple levels of section, paragraph, and phrasal divisions that were indicated in Masoretic vocalization and cantillation markings. One of the most frequent of these was a special type of punctuation, the sof passuq, symbol for a period or sentence break, resembling the colon (:) of English and Latin orthography. With the advent of the printing press and the translation of the Hebrew Bible into English, versifications were made that correspond predominantly with the existing Hebrew sentence breaks, with a few isolated exceptions. Most attribute these to Rabbi Isaac Nathan ben Kalonymus's work for the first Hebrew Bible concordance around 1440.
The first person to divide New Testament chapters into verses was the Italian Dominican biblical scholar Santes Pagnino (1470–1541), but his system was never widely adopted. His verse divisions in the New Testament were far longer than those known today. The Parisian printer Robert Estienne created another numbering in his 1551 edition of the Greek New Testament, which was also used in his 1553 publication of the Bible in French. Estienne's system of division was widely adopted, and it is this system which is found in almost all modern Bibles. Estienne produced a 1555 Vulgate that is the first Bible to include the verse numbers integrated into the text. Before this work, they were printed in the margins.
The first English New Testament to use the verse divisions was a 1557 translation by William Whittingham (c. 1524–1579). The first Bible in English to use both chapters and verses was the Geneva Bible published shortly afterwards by Sir Rowland Hill in 1560. These verse divisions soon gained acceptance as a standard way to notate verses, and have since been used in nearly all English Bibles and the vast majority of those in other languages.
Jewish tradition
The Masoretic Text of the Hebrew Bible notes several different kinds of subdivisions within the biblical books:
Passukim
Most important are the verses, or passukim (MH spelling; now pronounced pesukim by all speakers). According to Talmudic tradition, the division of the text into verses is of ancient origin. In Masoretic versions of the Bible, the end of a verse, or sof passuk, is indicated by a small mark in its final word called a silluq (which means "stop"). Less formally, verse endings are usually also indicated by two vertical dots following the word with a silluq.
Parashot
The Masoretic Text also contains sections, or portions, called parashot or parashiyot. The end of a parashah is usually indicated by a space within a line (a "closed" section) or a new line beginning (an "open" section). The division of the text reflected in the parashot is usually thematic. Unlike chapters, the parashot are not numbered, but some of them have special titles.
In early manuscripts (most importantly in Tiberian Masoretic manuscripts, such as the Aleppo codex), an "open" section may also be represented by a blank line, and a "closed" section by a new line that is slightly indented (the preceding line may also not be full). These latter conventions are no longer used in Torah scrolls and printed Hebrew Bibles. In this system, the one rule differentiating "open" and "closed" sections is that "open" sections must always start at the beginning of a new line, while "closed" sections never start at the beginning of a new line.
Sedarim
Another division of the biblical books found in the Masoretic Text is the division into sedarim. This division is not thematic, but is almost entirely based upon the quantity of text. For the Torah, this division reflects the triennial cycle of reading that was practiced by the Jews of the Land of Israel.
Christian versions
Christians also introduced a concept roughly similar to chapter divisions, called kephalaia (singular kephalaion, literally meaning heading).
Cardinal Hugo de Sancto Caro is often given credit for first dividing the Latin Vulgate into chapters in the real sense, but it is the arrangement of his contemporary and fellow cardinal Stephen Langton who in 1205 created the chapter divisions which are used today. They were then inserted into Greek manuscripts of the New Testament in the 16th century. Robert Estienne (Robert Stephanus) was the first to number the verses within each chapter, his verse numbers entering printed editions in 1551 (New Testament) and 1553 (Hebrew Bible).
Several modern publications of the Bible have eliminated numbering of chapters and verses. Biblica published such a version of the NIV in 2007 and 2011. In 2014, Crossway published the ESV Reader's Bible and Bibliotheca published a modified ASV.
See also
References
External links
How Many Words In Each Book of the Bible Sortable table of data about chapters, verses, words, and other info on each Bible book
STEP Documentation
OSIS Documentation
13th-century introductions
Referencing systems | Chapters and verses of the Bible | [
"Technology"
] | 1,862 | [
"Referencing systems",
"Information systems"
] |
1,047,173 | https://en.wikipedia.org/wiki/Integrator | An integrator in measurement and control applications is an element whose output signal is the time integral of its input signal. It accumulates the input quantity over a defined time to produce a representative output.
Integration is an important part of many engineering and scientific applications. Mechanical integrators are the oldest type and are still used for metering water flow or electrical power. Electronic analogue integrators, which have generally displaced mechanical integrators, are the basis of analog computers and charge amplifiers. Integration can also be performed by algorithms in digital computers.
Mechanical integrators
One simple kind of mechanical integrator is the disk-and-wheel integrator. This functions by placing a wheel on and perpendicular to a spinning disk, held there by means of a freely spinning shaft parallel to the disk. Because the speed at which a part of the disk turns is proportional to its distance from the center, the rate at which the wheel turns is proportional to its distance from the center of the disk. Therefore, the number of turns made by the integrating wheel is proportional to the definite integral of the integrating wheel's distance from the center, which is in turn controlled by the motion of the shaft relative to the disk.
In signal processing circuits
A current integrator is an electronic device performing a time integration of an electric current, thus measuring a total electric charge. In combination with time it can be used to determine the average current during an experiment. Feeding current into a capacitor (initialized with zero volts) and monitoring the capacitor's voltage has been used in nuclear physics experiments before 1953 to measure the number of ions received. Such a simple circuit works because the capacitor's current–voltage relation when written in integral form mathematically states that a capacitor's final voltage equals its initial voltage plus the time integral of its current divided by its capacitance:
V(t) = V(0) + (1/C) ∫ I dt
More sophisticated current integrator circuits build on this relation, such as the charge amplifier. A current integrator is also used to measure the electric charge on a Faraday cup in a residual gas analyzer to measure partial pressures of gases in a vacuum. Another application of current integration is in ion beam deposition, where the measured charge directly corresponds to the number of ions deposited on a substrate, assuming the charge state of the ions is known. The two current-carrying electrical leads must be connected to the ion source and the substrate, closing the electric circuit which in part is given by the ion beam.
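As an illustration of current integration in software, the Python sketch below accumulates sampled current into total charge with the trapezoidal rule and converts it to the voltage on an initially discharged capacitor; the sample data are made up:

def integrate_current(times, currents, capacitance):
    # Q = integral of i(t) dt, approximated by the trapezoidal rule
    charge = 0.0
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        charge += 0.5 * (currents[k] + currents[k - 1]) * dt
    return charge, charge / capacitance   # (Q, V = Q / C)

# 1 µA of constant current into 1 µF for 1 s gives Q = 1 µC, V = 1 V
t = [k * 0.01 for k in range(101)]
i = [1e-6] * 101
q, v = integrate_current(t, i, 1e-6)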
A voltage integrator is an electronic device performing a time integration of an electric voltage, thus measuring the total volt-second product. A first-order low-pass filter such as a resistor–capacitor circuit acts like a voltage integrator at high frequencies well above the filter's cutoff frequency.
Op amp integrator
See also Integrator at op amp applications and op amp integrator
An ideal op amp integrator is a voltage integrator that works over all frequencies (limited by the op amp's gain–bandwidth product) and provides gain.
Drawbacks of ideal op amp integrator
For DC input (f = 0), the capacitive reactance XC is infinite. Because of this, the op amp is effectively in an open-loop configuration, which has infinite open-loop gain (for an ideal op amp, or simply very large gain for real op amps). Hence, any small input offset voltage is also amplified and appears at the output as a large error. This is referred to as false triggering and must be avoided.
Thus, an ideal integrator needs to be modified with additional components to reduce the effect of an error voltage in practice. This modified integrator is referred to as a practical integrator.
Practical op amp integrator
The gain of an integrator at low frequency can be limited to avoid the saturation problem, by shunting the feedback capacitor with a feedback resistor. This practical integrator acts as a low-pass filter with constant gain in its low frequency pass band. It only performs integration in high frequencies, not in low frequencies, so bandwidth for integrating is limited.
Applications
Integrating circuits are most commonly used in analog-to-digital converters, ramp generators, and also in wave shaping applications.
Op-amp integrating amplifiers are used to perform calculus operations in analog computers.
A totalizer in the industrial instrumentation trade integrates a signal representing water flow, producing a signal representing the total quantity of water that has passed by the flow meter.
In software
Integrators may also be software components.
In some computational physics computer simulations, such as numerical weather prediction, molecular dynamics, flight simulators, reservoir simulation, noise barrier design, architectural acoustics, and electronic circuit simulation, an integrator is a numerical method for integrating trajectories from forces (and thereby accelerations) that are only calculated at discrete time steps.
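A minimal sketch of such a software integrator, here semi-implicit (symplectic) Euler stepping a one-dimensional harmonic oscillator; the force law, mass, and step size are illustrative choices:

def step(x, v, dt, force, mass=1.0):
    # update velocity from the force, then position from the new velocity
    v = v + (force(x) / mass) * dt
    x = x + v * dt
    return x, v

def spring_force(x, k=1.0):
    return -k * x   # Hooke's law, F = -kx

x, v = 1.0, 0.0
for _ in range(1000):   # integrate 1000 steps of 0.01 s
    x, v = step(x, v, 0.01, spring_force)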
Mechanical integrators
Mechanical integrators were key elements in the mechanical differential analyser, used to solve practical physical problems. Mechanical integration mechanisms were also used in control systems such as regulating flows or temperature in industrial processes. Mechanisms such as the ball-and-disk integrator were used both for computation in differential analysers and as components of instruments such as naval gun directors, flow totalizers and others. A planimeter is a mechanical device used for calculating the definite integral of a curve given in graphical form, or more generally finding the area of a closed curve. An integraph is used to plot the indefinite integral of a function given in graphical form.
See also
Differentiator
Digital differential analyzer
Fractional-order integrator
Integrating ADC
Low-pass filter
Operational amplifier
Signal processing
References
External links
Wolfram Online Integrator
Calc.Matthen Online Integrator, can do definite integrals
Further reading
Mathematical tools | Integrator | [
"Mathematics",
"Technology"
] | 1,209 | [
"Applied mathematics",
"Mathematical tools",
"History of computing",
"nan"
] |
1,047,175 | https://en.wikipedia.org/wiki/8%20Eyes | is a 2D action platform game developed by Thinking Rabbit for the Nintendo Entertainment System in 1988. The game features eight levels, and can be played by one or two players. It also features a large, diverse soundtrack, composed by Kenzou Kumei, consisting of three pieces for each of the eight levels, that are set in different parts of the world.
Piko Interactive acquired the rights of 8 Eyes and released it for Windows via Steam on August 14, 2019. 8 Eyes is also included in Evercade Piko Interactive Collection 1 that was released on June 18, 2020.
Story
English version
8 Eyes is set in a post-apocalyptic future. Mankind is recovering from hundreds of years of chaos and nuclear war, and civilization is being rebuilt by the Great King, who harnesses the power of eight jewels. The jewels, known as the 8 Eyes, were formed at the centers of eight nuclear explosions that came close to destroying Earth. The 8 Eyes have mysterious power which, in the wrong hands, could bring about the end of the world. The Great King's eight power-hungry dukes steal the jewels for themselves and banish the King to the nuclear wastelands, threatening to once again plunge the Earth into war.
The player controls Orin the Falconer and his fighting falcon Cutrus. His mission is to infiltrate the Dukes' eight castles and retrieve the 8 Eyes. With the help of Cutrus, Orin must fight the Dukes' soldiers, nuclear mutants, and the duke of each castle to retrieve the jewels. After the jewels have been recovered, Orin must return them to the Altar of Peace so that the Great King may return and finish rebuilding the Earth.
Japanese version
The Japanese version of the game is set in the Balkans in the late 19th century. The Ottoman Empire, which had barely survived thanks to the balance of power among the great European states, was beginning to sense a threat to its continued existence. The seeds of the various conflicts packed so densely into the region that it was called the "Powder Keg of the Balkans" were now igniting all at once. Terrorism and assassinations of important figures by the Free Armenian Army were already commonplace. With the government's authority in turmoil and pessimistic rumors and speculation abounding, an incident occurred that made restoring public order the most urgent task for the British Empire, which held Ottoman Turkey as its protectorate.
A research team was sent by the British Museum to investigate some newly discovered ruins. However, the entire team was brutally slaughtered, with their heads missing and their stomachs cut open. All of the artifacts they uncovered and their excavation journals were stolen.
The perpetrator of the murders was a group led by Ruth Grandier, a female bandit known for her demonic worship who claimed that she and her group were fallen children of Lucifer. The College of Arms learned that Ruth's group stole artifacts which concealed demonic secrets and could be used in a dark ritual to resurrect demons.
To stop Ruth, the College of Arms Seventh Division (which would eventually become the MI5) decided to send Baronet Sir Julian James Bond, the greatest swordsman in all of England. Sir Bond infiltrated the Balkans along with his pet eagle, Cutlass.
Gameplay
The game consists of eight levels, each set in the castle of one of the dukes. At the completion of each level, Orin receives a new sword. The player can choose to play each of the first seven castles in any order, though the boss at the end of each is vulnerable to only one sword. It is therefore easier to play levels in a particular order. There are hints about the correct order hidden throughout the game. Only after each has been completed can the House of Ruth be played.
After the House of Ruth has been cleared and the 8 Eyes recovered, the player must return the jewels to the Altar of Peace. At this point the jewels must be placed in a particular order, or the game is lost. Hints about the order of the jewels are also hidden throughout the game.
8 Eyes features a cooperative mode in which one player controls Orin and the other player controls Cutrus. In the single-player mode, the player has limited control of both characters simultaneously, making the game significantly more difficult. The gameplay and graphical style are noticeably similar to Castlevania.
Reception
8 Eyes received mediocre reviews upon its release. Power Play gave the game a 60/100. Electronic Gaming Monthly scored 8 Eyes 23/40.
References
External links
1988 video games
Action-adventure games
Asymmetrical multiplayer video games
Gothic video games
Multiplayer and single-player video games
Nintendo Entertainment System games
Piko Interactive games
Post-apocalyptic video games
Side-scrolling platformers
Thinking Rabbit games
Video games developed in Japan
Video games set in Egypt
Video games set in Germany
Video games set in India
Video games set in Italy
Video games set in Spain | 8 Eyes | [
"Physics"
] | 978 | [
"Asymmetrical multiplayer video games",
"Symmetry",
"Asymmetry"
] |
1,047,605 | https://en.wikipedia.org/wiki/Recursive%20definition | In mathematics and computer science, a recursive definition, or inductive definition, is used to define the elements in a set in terms of other elements in the set (Aczel 1977:740ff). Some examples of recursively-definable objects include factorials, natural numbers, Fibonacci numbers, and the Cantor ternary set.
A recursive definition of a function defines values of the function for some inputs in terms of the values of the same function for other (usually smaller) inputs. For example, the factorial function n! is defined by the rules
0! = 1
(n + 1)! = (n + 1) · n!
This definition is valid for each natural number n, because the recursion eventually reaches the base case of 0. The definition may also be thought of as giving a procedure for computing the value of the function n!, starting from n = 0 and proceeding onwards with n = 1, n = 2, n = 3 etc.
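A direct transcription of the two rules, with each recursive call made on a strictly smaller input so that the base case is always reached; a minimal Python sketch:

def factorial(n):
    # base case: 0! = 1
    if n == 0:
        return 1
    # recursive rule: n! = n * (n - 1)!
    return n * factorial(n - 1)

assert factorial(5) == 120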
The recursion theorem states that such a definition indeed defines a function that is unique. The proof uses mathematical induction.
An inductive definition of a set describes the elements in a set in terms of other elements in the set. For example, one definition of the set N of natural numbers is:
1 is in N.
If an element n is in N then n + 1 is in N.
N is the smallest set satisfying (1) and (2).
There are many sets that satisfy (1) and (2) – for example, the set {1, 1.5, 2, 2.5, 3, 3.5, …} satisfies the definition. However, condition (3) specifies the set of natural numbers by removing the sets with extraneous members.
Properties of recursively defined functions and sets can often be proved by an induction principle that follows the recursive definition. For example, the definition of the natural numbers presented here directly implies the principle of mathematical induction for natural numbers: if a property holds of the natural number 0 (or 1), and the property holds of n + 1 whenever it holds of n, then the property holds of all natural numbers (Aczel 1977:742).
Form of recursive definitions
Most recursive definitions have two foundations: a base case (basis) and an inductive clause.
The difference between a circular definition and a recursive definition is that a recursive definition must always have base cases, cases that satisfy the definition without being defined in terms of the definition itself, and that all other instances in the inductive clauses must be "smaller" in some sense (i.e., closer to those base cases that terminate the recursion) — a rule also known as "recur only with a simpler case".
In contrast, a circular definition may have no base case, and even may define the value of a function in terms of that value itself — rather than on other values of the function. Such a situation would lead to an infinite regress.
That recursive definitions are valid – meaning that a recursive definition identifies a unique function – is a theorem of set theory known as the recursion theorem, the proof of which is non-trivial. Where the domain of the function is the natural numbers, sufficient conditions for the definition to be valid are that the value of f(0) (i.e., the base case) is given, and that for n > 0, an algorithm is given for determining f(n) in terms of n and f(0), f(1), …, f(n − 1) (i.e., the inductive clause).
More generally, recursive definitions of functions can be made whenever the domain is a well-ordered set, using the principle of transfinite recursion. The formal criteria for what constitutes a valid recursive definition are more complex for the general case. An outline of the general proof and the criteria can be found in James Munkres' Topology. However, a specific case (domain is restricted to the positive integers instead of any well-ordered set) of the general recursive definition will be given below.
Principle of recursive definition
Let A be a set and let a₀ be an element of A. If ρ is a function which assigns to each function f mapping a nonempty section of the positive integers into A, an element of A, then there exists a unique function h from the positive integers into A such that
h(1) = a₀
h(n) = ρ(h restricted to {1, 2, …, n − 1}) for n > 1.
Examples of recursive definitions
Elementary functions
Addition is defined recursively based on counting as
0 + a = a,
(1 + n) + a = 1 + (n + a).
Multiplication is defined recursively as
0 · a = 0,
(1 + n) · a = a + n · a.
Exponentiation is defined recursively as
a^0 = 1,
a^(1 + n) = a · a^n.
Binomial coefficients can be defined recursively as
C(n, 0) = 1,  C(n, n) = 1,
C(n + 1, k + 1) = C(n, k) + C(n, k + 1) for 0 ≤ k < n.
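These definitions translate directly into code, each operation recursing on one argument until its base case is reached; a Python sketch for non-negative integers:

def add(n, a):
    # 0 + a = a;  (1 + n) + a = 1 + (n + a)
    return a if n == 0 else 1 + add(n - 1, a)

def mul(n, a):
    # 0 · a = 0;  (1 + n) · a = a + n · a
    return 0 if n == 0 else add(a, mul(n - 1, a))

def power(a, n):
    # a^0 = 1;  a^(1 + n) = a · a^n
    return 1 if n == 0 else mul(a, power(a, n - 1))

assert add(2, 3) == 5 and mul(2, 3) == 6 and power(2, 3) == 8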
Prime numbers
The set of prime numbers can be defined as the unique set of positive integers satisfying
2 is a prime number,
any other positive integer is a prime number if and only if it is not divisible by any prime number smaller than itself.
The primality of the integer 2 is the base case; checking the primality of any larger integer X by this definition requires knowing the primality of every integer between 2 and X, which is well defined by this definition. That last point can be proved by induction on X, for which it is essential that the second clause says "if and only if"; if it had just said "if", the primality of, for instance, the number 4 would not be clear, and the further application of the second clause would be impossible.
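The definition can be turned directly into a checker that, exactly as stated, tests a candidate only against the smaller primes; a Python sketch (inefficient, but a faithful transcription):

def is_prime(n):
    if n == 2:
        return True   # base case
    if n < 2:
        return False
    # n is prime iff it is divisible by no smaller prime
    return all(n % p != 0 for p in range(2, n) if is_prime(p))

assert [n for n in range(2, 20) if is_prime(n)] == [2, 3, 5, 7, 11, 13, 17, 19]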
Non-negative even numbers
The even numbers can be defined as consisting of:
0 is in the set E of non-negative evens (basis clause),
For any element x in the set E, x + 2 is in E (inductive clause),
Nothing is in E unless it is obtained from the basis and inductive clauses (extremal clause).
Well formed formula
The notion of a well-formed formula (wff) in propositional logic is defined recursively as the smallest set satisfying the three rules:
p is a wff if p is a propositional variable.
¬φ is a wff if φ is a wff.
(φ • ψ) is a wff if φ and ψ are wffs and • is one of the logical connectives ∨, ∧, →, or ↔.
The definition can be used to determine whether any particular string of symbols is a wff:
(p ∨ q) is a wff, because the propositional variables p and q are wffs and ∨ is a logical connective.
¬(p ∨ q) is a wff, because (p ∨ q) is a wff.
(¬(p ∨ q) → r) is a wff, because ¬(p ∨ q) and r are wffs and → is a logical connective.
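A recursive checker mirrors the three clauses one-for-one; a Python sketch using ASCII stand-ins (~ for ¬, and &, |, >, = for ∧, ∨, →, ↔):

def is_wff(s):
    # clause 1: a single letter is a propositional variable
    if len(s) == 1 and s.isalpha():
        return True
    # clause 2: ~phi is a wff if phi is a wff
    if s.startswith("~"):
        return is_wff(s[1:])
    # clause 3: (phi • psi), with the connective at nesting depth 1
    if s.startswith("(") and s.endswith(")"):
        depth = 0
        for k, ch in enumerate(s):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch in "&|>=" and depth == 1:
                return is_wff(s[1:k]) and is_wff(s[k + 1:-1])
    return False

assert is_wff("(p|q)") and is_wff("~(p|q)") and is_wff("(~(p|q)>r)")
assert not is_wff("pq)")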
Recursive definitions as logic programs
Logic programs can be understood as sets of recursive definitions. For example, the recursive definition of even number can be written as the logic program:
even(0).
even(s(s(X))) :- even(X).
Here :- represents if, and s(X) represents the successor of X, namely X+1, as in Peano arithmetic.
The logic programming language Prolog uses backward reasoning to solve goals and answer queries. For example, given the query ?- even(s(s(0))) it produces the answer true. Given the query ?- even(s(0)) it produces the answer false.
The program can be used not only to check whether a query is true, but also to generate answers that are true. For example:
?- even(X).
X = 0
X = s(s(0))
X = s(s(s(s(0))))
X = s(s(s(s(s(s(0))))))
.....
Logic programs significantly extend recursive definitions by including the use of negative conditions, implemented by negation as failure, as in the definition:
even(0).
even(s(X)) :- not(even(X)).
See also
Definition
Logic programming
Mathematical induction
Recursive data types
Recursion
Recursion (computer science)
Structural induction
Notes
References
Definition
Mathematical logic
Theoretical computer science
Recursion | Recursive definition | [
"Mathematics"
] | 1,559 | [
"Theoretical computer science",
"Applied mathematics",
"Mathematical logic",
"Recursion"
] |
2,259,426 | https://en.wikipedia.org/wiki/Juggling%20notation | Juggling notation is the written depiction of concepts and practices in juggling. Toss juggling patterns have a reputation for being "easier done than said" – while it might be easy to learn a given maneuver and demonstrate it for others, it is often much harder to communicate the idea accurately using speech or plain text. To circumvent this problem, various numeric or diagram-based notation systems have been developed to facilitate communication of patterns or tricks between jugglers, as well the investigation and discovery of new patterns.
A juggling notation system (based on music notation) was first proposed by Dave Storer in 1978. The earliest juggling diagram, a ladder diagram drawn by Claude Shannon around 1981, was not printed until 2010; the first printed diagram, together with the second-oldest notation system, was proposed by Jeff Walker in 1982.
Diagram-based
While diagrams are the most visual and reader-friendly way to notate many juggling patterns, they rely on images, so are complicated to produce and unwieldy to share via text or speech.
Ladder diagrams - Each rung on the "ladder" represents a point in time (or "beat"). The juggled objects are represented as lines, their paths through time and between a pair of hands.
Causal diagrams - Similar to the ladder diagram but doesn't show the props held in a juggler's hands. Instead it only shows each "problem" — an incoming prop — and what the juggler should do to make space in his or her hands to catch that incoming prop. It is usually used for club passing and can be displayed or edited in some juggling software.
Mills Mess State Transition Diagrams - Mills Mess is a popular pattern in which the arms cross and uncross. Mills Mess State Transition Diagrams can be used to track these basic arm movements.
Numeric
The following notation systems use only numbers and common characters. The patterns can easily be communicated by text. Most numeric systems are designed to be processed by software juggling simulators — for example, to view juggling patterns as computer animations.
Siteswap
Developed by mathematically inclined jugglers Bengt Magnusson and Bruce "Boppo" Tiemann in 1985, siteswap is by far the most common juggling notation.
A given juggling pattern is represented by a sequence of digits, like "333", "97531", or "744". Each digit represents the number of throws that occur by the time that same prop will be caught. For example, "333" represents a common three-ball cascade, where three props are thrown before the same prop will be caught and thrown again. Within the "531531" pattern, the prop thrown first, the '5' throw, will not be caught until five throws have been made, including itself, where it will be thrown again as a '1'. The prop thrown next, the '3', will be thrown again on the third throw afterwards, the next '3'. And the next prop is thrown with a '1' throw, which is a direct pass to the other hand and will be thrown on the very next throw as a '5'.
Because the number represents the number of throws that occur before that prop will be caught, it also can be thought to describe how high one throws the prop, or how long it remains in the air relative to the other throws, where even numbers inevitably come back to the same hand and odd numbers cross over to the other hand.
The number of props in a given juggling pattern can be determined by the average of one repeating group. "633633633", therefore describes a four-prop pattern, while "414414414" describes a three-prop juggling pattern.
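This averaging rule is easy to check mechanically, as is the standard validity test for vanilla siteswap (not described above): no two throws may land on the same beat, i.e. the landing positions (i + throwᵢ) mod n must all be distinct. A Python sketch for patterns with single-digit throws:

def siteswap_props(pattern):
    # the number of props is the average of the throw values
    throws = [int(c) for c in pattern]
    return sum(throws) / len(throws)

def is_valid_vanilla(pattern):
    # valid iff all landing beats (i + throw_i) mod n are distinct
    throws = [int(c) for c in pattern]
    n = len(throws)
    return len({(i + t) % n for i, t in enumerate(throws)}) == n

assert siteswap_props("531") == 3 and is_valid_vanilla("531")
assert not is_valid_vanilla("543")   # two props would land on the same beat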
"Vanilla" siteswap is the most basic form of siteswap and uses only a simple string of digits to describe patterns that throw only one prop at a time, alternating between hands. For slightly more complicated patterns, extra rules and syntax are added to create the following two siteswap extensions:
Synchronous Siteswap, or "Synch" Siteswap. This is used to notate patterns where both hands throw at the same time, rather than alternating left and right hands. The numbers for the two throws are combined in parentheses and separated by a comma. For example, "(4,4)(4,4)(4,4)".
Multiplex Siteswap. "Multiplex", in the world of juggling, means "more than one ball is in the hand at the time of the throw". Multiplex Siteswap allows you to notate such patterns, and also can be mixed with synchronous siteswap. A multiplex is described by a digit for each prop in the multiplex throw contained within square brackets. "23[43]23[43]" is a common four ball multiplex.
Vanilla, synch, and multiplex siteswap are the "standard" forms of siteswap. Not only are they understood by jugglers, there are also many computer programs capable of animating juggling patterns entered in siteswap notation.
Other extensions to siteswap have been developed for specific purposes. These are far less common than the "standard" forms of siteswap, understood by far fewer jugglers and only specialized software.
Passing siteswap - used for simple passing patterns and prechac transforms
Multi-Hand Notation (MHN) - Developed by Ed Carstens for use with his juggling program JugglePro, MHN can describe patterns with any number of hands and at any rhythm.
Generalised Siteswap (GS) - Developed by Ben Beever, GS places siteswap into a matrix that uses optional, additional rows to describe any desired attributes of the throws or catches within a pattern, such as timing issues (e.g. for synch patterns), number of spins (e.g. for clubs) and hand position/orientation (e.g. for backcrosses, claw catches etc.).
References
External links
Notation
Toss juggling | Juggling notation | [
"Mathematics"
] | 1,249 | [
"Symbols",
"Notation"
] |
2,259,967 | https://en.wikipedia.org/wiki/W49B | W49B (also known as SNR G043.3-00.2 or 3C 398) is a nebula in Westerhout 49 (W49). The nebula is a supernova remnant, probably from a type Ib or Ic supernova that occurred around 1,000 years ago. It may have produced a gamma-ray burst and is thought to have left a black hole remnant.
Nebula
W49B is a supernova remnant (SNR) located roughly 33,000 light-years from Earth. Radio wavelengths show a shell four arc minutes across. There are infrared "rings" (about 25 light-years in diameter) forming a "barrel", and intense X-ray radiation coming from forbidden emission of nickel and iron in a bar along its axis. W49B is also one of the most luminous SNRs in the galaxy at gamma-ray wavelengths. It is invisible at optical wavelengths.
W49B has a number of other unusual properties. It shows x-ray emission from chromium and manganese, something seen in only one other SNR. The iron in the nebula is seen only in the western half of the nebula, while other elements are distributed throughout the nebula.
The outer shell is interpreted as a wind-blown bubble of molecular hydrogen within the interstellar medium, commonly seen around hot luminous stars. Away from the galactic plane, there is little gas and it is very faint optically. The shell is around 10 parsecs across and 1.9 parsecs thick. Inside the shell are the x-ray jets. Where the southeastern jet reaches the shell there is a bow-shock.
Supernova
The quantity of iron and nickel within the SNR, and its asymmetric nature, imply a jet-driven type Ib or Ic supernova produced by a star with a large initial mass. Such supernovae are thought to be the source of some long-duration gamma-ray bursts. The properties of the SNR suggest that the supernova occurred about 1,000 years ago.
Due to large amounts of galactic dust, the supernova would have been invisible to Earthly viewers.
The quantities of heavy elements such as chromium and manganese, produced by the explosive nucleosynthesis of silicon during the supernova itself, suggest that the explosion was not sufficiently energetic to produce a gamma-ray burst, but do not rule it out entirely.
Remnant
The remnant from a core collapse supernova may be a neutron star or black hole. No neutron star can be detected within W49B although it would be expected to be clearly visible. This, and the models which best reproduce the nebula, imply that the remnant is a black hole.
See also
List of supernova remnants
References
External links
Aquila (constellation)
Supernova remnants
Gamma-ray bursts
398 | W49B | [
"Physics",
"Astronomy"
] | 573 | [
"Physical phenomena",
"Astronomical events",
"Constellations",
"Aquila (constellation)",
"Gamma-ray bursts",
"Stellar phenomena"
] |
2,260,019 | https://en.wikipedia.org/wiki/Catenin | Catenins are a family of proteins found in complexes with cadherin cell adhesion molecules of animal cells. The first two catenins that were identified became known as α-catenin and β-catenin. α-Catenin can bind to β-catenin and can also bind filamentous actin (F-actin). β-Catenin binds directly to the cytoplasmic tail of classical cadherins. Additional catenins such as γ-catenin and δ-catenin have been identified. The name "catenin" was originally selected ('catena' means 'chain' in Latin) because it was suspected that catenins might link cadherins to the cytoskeleton.
Types
α-catenin
β-catenin
γ-catenin
δ-catenin
All but α-catenin contain armadillo repeats. They exhibit a high degree of protein dynamics, alone or in complex.
Function
Several types of catenins work with N-cadherins to play an important role in learning and memory.
Cell-cell adhesion complexes are required for simple epithelia in higher organisms to maintain structure, function and polarity. These complexes, which help regulate cell growth in addition to creating and maintaining epithelial layers, are known as adherens junctions and they typically include at least cadherin, β-catenin, and α-catenin. Catenins played roles in cellular organization and polarity long before the development and incorporation of Wnt signaling pathways and cadherins.
The primary mechanical role of catenins is to connect cadherins to actin filaments, such as the adhesion junctions of epithelial cells. Most studies investigating catenin actions have focused on α-catenin and β-catenin. β-catenin is particularly interesting as it plays a dual role in the cell. First of all, by binding to cadherin receptor intracellular cytoplasmic tail domains, it can act as an integral component of a protein complex in adherens junctions that helps cells maintain epithelial layers. β-catenin acts by anchoring the actin cytoskeleton to the junctions, and may possibly aid in contact inhibition signaling within the cell. For instance, when an epithelial layer is complete and the adherens junctions indicate that the cell is surrounded, β-catenin may play a role in telling the cell to stop proliferating, as there is no room for more cells in the area. Secondly, β-catenin participates in the Wnt signaling pathway as a downstream target. While the pathway is very detailed and not completely understood, in general, when Wnt is not present, GSK-3B (a member of the pathway) is able to phosphorylate β-catenin as a result of a complex formation that includes β-catenin, AXIN1, AXIN2, APC (a tumor suppressor gene product), CSNK1A1, and GSK3B. Following phosphorylation of the N-terminal Ser and Thr residues of β-catenin, BTRC promotes its ubiquitination, which causes it to be degraded by the TrCP/SKP complex. On the other hand, when Wnt is present, GSK-3B is displaced from the previously mentioned complex, causing β-catenin to not be phosphorylated, and thus not ubiquitinated. As a result, its levels in the cell are stabilized as it builds up in the cytoplasm. Eventually, some of this accumulated β-catenin will move into the nucleus with the help of Rac1. At this point, β-catenin becomes a coactivator for TCF and LEF to activate Wnt genes by displacing Groucho and HDAC transcription repressors. These gene products are important in determining cell fates during normal development and in maintaining homeostasis, or they can lead to de-regulated growth in disorders like cancer by responding to mutations in β-catenin, APC or Axin, each of which can lead to this de-regulated β-catenin level stabilization in cells.
While less attention is directed at α-catenin in studies involving cell adhesion, it is nonetheless an important player in cellular organization, function and growth. α-catenin participates in the formation and stabilization of adherens junctions by binding to β-catenin-cadherin complexes in the cell. The exact protein dynamics by which α-catenin acts in adherens junctions is still unclear. It is believed however that α-catenin acts in concert with vinculin to bind to actin and stabilize the junctions.
Interaction with cadherins
F9 embryonal carcinoma cells are similar to P19 embryonal carcinoma cells and normally have cell-to-cell adhesion mediated by E-cadherin with β-catenin bound to the cytoplasmic domain of E-cadherin. F9 cells were genetically engineered to lack β-catenin, resulting in increased association of plakoglobin with E-cadherin. In F9 cells lacking both β-catenin and plakoglobin, very little E-cadherin and α-catenin accumulated at the cell surface. Mice lacking β-catenin have defective embryos. Mice engineered to specifically have vascular endothelium cells deficient in β-catenin showed disrupted adhesion between vascular endothelial cells. Mice lacking plakoglobin have cell adhesion defects in many tissues, although β-catenin substitutes for plakoglobin at many cellular junctions. Keratinocytes engineered to not express alpha-catenin have disrupted cell adhesion and activated NF-κB. A tumor cell line with defective δ-catenin, low levels of E-cadherin and poor cell-to-cell adhesion could be restored to normal epithelial morphology and increased E-cadherin levels by expression of normal levels of functional δ-catenin.
Clinical significance
As previously mentioned, the same properties of catenin that give it an important role in normal cell fate determination, homeostasis and growth, also make it susceptible to alterations that can lead to abnormal cell behavior and growth. Any changes in cytoskeletal organization and adhesion can lead to altered signaling, migration and a loss of contact inhibition that can promote cancer development and tumor formation. In particular, catenins have been identified to be major players in aberrant epithelial cell layer growth associated with various types of cancer. Mutations in genes encoding these proteins can lead to inactivation of cadherin cell adhesions and elimination of contact inhibition, allowing cells to proliferate and migrate, thus promoting tumorigenesis and cancer development. Catenins are known to be associated with colorectal and ovarian cancer, and they have been identified in pilomatrixoma, medulloblastoma, pleomorphic adenomas, and malignant mesothelioma.
While less is known about the exact mechanism of α-catenin, its presence in cancer is widely felt. Through the interaction of β-catenin and α-catenin, actin and E-cadherin are linked, providing the cell with a means of stable cell adhesion. However, decreases in this adhesion ability of the cell has been linked to metastasis and tumor progression. In normal cells, α-catenin may act as a tumor suppressor and can help prevent the adhesion defects associated with cancer. On the other hand, a lack of α-catenin can promote aberrant transcription, which can lead to cancer. As a result, it can be concluded, that cancers are most often associated with decreased levels of α-catenin.
β-catenin also likely plays a significant role in various forms of cancer development. However, in contrast to α-catenin, heightened β-catenin levels may be associated with carcinogenesis. In particular, abnormal interactions between epithelial cells and the extracellular matrix are associated with the over-expression of these β-catenins and their relationship with cadherins in some cancers. Stimulation of the Wnt/β-catenin pathway, and its role in promoting malignant tumor formations and metastases, has also been implicated in cancers.
The role of catenin in epithelial-mesenchymal transition (or EMT) has also received a lot of recent attention for its contributions to cancer development. It has been shown that HIF-1α can induce the EMT pathway, as well as the Wnt/β-catenin signaling pathway, thus enhancing the invasive potential of LNCaP cells (human prostate cancer cells). As a result, it is possible that the EMT associated with upregulated HIF-1α is controlled by signals from this Wnt/β-catenin pathway. Catenin and EMT interactions may also play a role in hepatocellular carcinoma. VEGF-B treatment of hepatoma carcinoma cells can cause α-catenin to move from its normal location on the membrane into the nucleus and E-cadherin expression to decrease, thus promoting EMT and tumor invasiveness.
There are other physiological factors that are associated with cancer development through their interactions with catenins. For instance, higher levels of collagen XXIII have been associated with higher levels of catenins in cells. These heightened levels of collagen helped facilitate adhesions and anchorage-independent cell growth and provided evidence of collagen XXIII's role in mediating metastasis. In another example, Wnt/β-catenin signaling has been identified as activating microRNA-181s in hepatocellular carcinoma that play a role in its tumorigenesis.
Recent clinical studies
Recently, there have been a number of studies in the lab and in the clinic investigating new possible therapies for cancers associated with catenin. Integrin antagonists and immunochemotherapy with 5-fluorouracil plus polysaccharide-K have shown promising results. Polysaccharide K can promote apoptosis by inhibiting NF-κB activation, which is normally up-regulated, and inhibiting apoptosis, when β-catenin levels are increased in cancer. Therefore, using polysaccharide K to inhibit NF-κB activation can be used to treat patients with high β-catenin levels.
In the short-term, combining current treatment techniques with therapeutics targeting catenin-associated elements of cancer might be most effective in treating the disease. By disrupting Wnt/β-catenin signaling pathways, short-term neoadjuvant radiotherapy (STNR) may help prevent clinical recurrence of the disease after surgery, but much more work is needed before an adequate treatment based on this concept can be determined.
Lab studies have also implicated potential therapeutic targets for future clinical studies. VEGFR-1 and EMT mediators may be ideal targets for preventing cancer development and metastasis. 5-aminosalicylate (ASA) has been shown to reduce β-catenin and its localization to the nucleus in colon cancer cells isolated from and in patients. As a result, it may be useful as a chemopreventative agent for colorectal cancer. Additionally, acyl hydrazones have been shown to inhibit the Wnt signaling characteristic of many cancers by destabilizing β-catenin, thus disrupting Wnt signaling and preventing the aberrant cell growth associated with cancer. On the other hand, some treatment concepts involve upregulating the E-cadherin/catenin adhesion system to prevent disruptions in adhesions and contact inhibition from promoting cancer metastasis. One possible way to achieve this, which has been successful in mouse models, is to use inhibitors of Ras activation in order to enhance the functionality of these adhesion systems. Other catenin, cadherin or cell cycle regulators may also be useful in treating a variety of cancers.
While recent studies in the lab and in the clinic have provided promising results for treating various catenin-associated cancers, the Wnt/β-catenin pathway may make finding a single correct therapeutic target difficult as the pathway has been shown to elicit a variety of different actions and functions, some of which may possibly even prove to be anti-oncogenic.
Catenins and cancer
Summary:
Associated Cancers: colorectal and ovarian cancer; pilomatrixoma; medulloblastoma; pleomorphic adenomas; malignant mesothelioma; glioblastomas.
Mutations in catenin genes can cause loss of contact inhibition that can promote cancer development and tumor formation.
Mutations associated with aberrant epithelial cell layer growth due to lack of adhesions and contact inhibition
Down-regulated levels of α-catenin
Up-regulated levels of β-catenin
Stimulation of the Wnt/β-catenin pathway
Catenin alteration (and Wnt/β-catenin pathway up-regulation) may help stimulate epithelial-mesenchymal transition (or EMT)
Mutations or aberrant regulation of catenins may also associate with other factors that promote metastasis and tumorigenesis
Treatments focus on correcting aberrant catenin levels or regulating catenin pathways that are associated with cancer development and progression
References
External links
Protein families | Catenin | [
"Biology"
] | 2,886 | [
"Protein families",
"Protein classification"
] |
2,260,106 | https://en.wikipedia.org/wiki/Epidemic%20Intelligence%20Service | The Epidemic Intelligence Service (EIS) is a program of the United States' Centers for Disease Control and Prevention (CDC). The modern EIS is a two-year, hands-on post-doctoral training program in epidemiology, with a focus on field work.
History
Alexander Langmuir, chief epidemiologist of the Communicable Disease Center (as the CDC was then known), an agency of the U.S. Public Health Service, proposed the creation of the Epidemic Intelligence Service on March 30, 1951. Langmuir argued that the agency could identify appropriate defense measures against biological warfare germs, develop new detection methods, and train laboratory workers to rapidly recognize biological warfare germs. This justification arose from biological warfare concerns during the Korean War.
The Epidemic Intelligence Service was organized on September 26, 1951, with the purpose of investigating disease outbreaks that are beyond the control of state and local health departments, enforcing interstate quarantine regulations, and providing epidemic aid at the request of state health agencies. The Epidemic Intelligence Service's first staff members were 21 medical officers of the Public Health Service.
Background
The EIS is operated by the CDC's Center for Surveillance, Epidemiology, and Laboratory Services (CSELS), in the Office of Public Health Scientific Services (OPHSS).
Program participants, known colloquially as "disease detectives", are formally called "EIS officers" (or EIS fellows) by the CDC and have been dispatched to investigate hundreds of possible epidemics created by natural and artificial causes. Since 1951, more than 3,000 EIS officers have been involved in domestic and international response efforts, including the anthrax attacks, hantavirus, and West Nile virus in the United States, and the 2013–2016 Ebola outbreak in West Africa.
EIS officers begin their fellowship with a one-month training program at CDC headquarters in Atlanta, Georgia; however, 95% of their two-year term consists of experiential rather than classroom training. For the remainder of their service, EIS officers are assigned to operational branches within the CDC or at state and local health departments around the country. Placement is determined via a highly competitive matching process. The CDC pairs EIS officers with a Public Health Advisor, forming a scientist (EIS officer) and operations (PHA) team. The EIS is a common recruiting pathway into the Public Health Service Commissioned Corps.
The EIS is the prototype for Field Epidemiology Training Programs (FETP), which operate in numerous countries with technical assistance provided by the CDC. However, attempts to establish FETPs in Indonesia, Hungary, Ivory Coast, and within the World Health Organization have failed due to insufficient long-term support.
History of responses
Since the inception of the EIS, officers have been involved with treatment, eradication, and disease-control efforts for a variety of medically related crises. Below is an abridged timeline of their work.
1950s: The EIS worked on polio, lead poisoning, and Asian influenza
1960s: Cancer clusters, and smallpox
1970s: Legionnaires' disease, Ebola, and Reye syndrome
1980s: Toxic shock syndrome, birth defects, and HIV/AIDS
1990s: Tobacco, West Nile virus, and contaminated water
2000s: Post 9/11 anthrax attacks, E. coli O157:H7, SARS, H1N1, and the aftermath of Hurricane Katrina
2010s: The aftermath of the Haiti earthquake, obesity, fungal meningitis, and Ebola
2020s: Zika virus, COVID-19 pandemic
EIS conference
EIS officers attend an annual conference in Atlanta, Georgia, to present components of their work from the preceding year.
During the conference, the Alexander D. Langmuir Prize is awarded "to a current officer or first-year alumnus of the EIS for the best scientific publication. The award consists of a $100 cash prize, an engraved paperweight, a case of ale or beer redolent of the John Snow Pub in London, and an inscription on the permanent plaque at CDC."
In popular culture
In the 2011 film Contagion, the character Doctor Erin Mears (portrayed by Kate Winslet) is a physician and investigator with the Epidemic Intelligence Service who was tasked by the CDC to discover the origin of a highly contagious and deadly virus known as MEV-1 which was rapidly spreading throughout the world following initial outbreaks in Kowloon, Hong Kong and Minneapolis, Minnesota.
References
Further reading
Gerard Gallagher (2017). "CDC's EIS program molds clinicians into public health professionals." (Healio.com)
External links
Epidemiology
Centers for Disease Control and Prevention
Epidemiology organizations
Organizations associated with the COVID-19 pandemic | Epidemic Intelligence Service | [
"Environmental_science"
] | 971 | [
"Epidemiology",
"Environmental social science"
] |
2,260,140 | https://en.wikipedia.org/wiki/Surface%20roughness | Surface roughness can be regarded as the quality of a surface of not being smooth and it is hence linked to human (haptic) perception of the surface texture. From a mathematical perspective it is related to the spatial variability structure of surfaces, and inherently it is a multiscale property. It has different interpretations and definitions depending on the disciplines considered.
In surface metrology
Surface roughness, often shortened to roughness, is a component of surface finish (surface texture). It is quantified by the deviations in the direction of the normal vector of a real surface from its ideal form. If these deviations are large, the surface is rough; if they are small, the surface is smooth. In surface metrology, roughness is typically considered to be the high-frequency, short-wavelength component of a measured surface. However, in practice it is often necessary to know both the amplitude and frequency to ensure that a surface is fit for a purpose.
Roughness plays an important role in determining how a real object will interact with its environment. In tribology, rough surfaces usually wear more quickly and have higher friction coefficients than smooth surfaces. Roughness is often a good predictor of the performance of a mechanical component, since irregularities on the surface may form nucleation sites for cracks or corrosion. On the other hand, roughness may promote adhesion. Generally speaking, rather than scale specific descriptors, cross-scale descriptors such as surface fractality provide more meaningful predictions of mechanical interactions at surfaces including contact stiffness and static friction.
Although a high roughness value is often undesirable, it can be difficult and expensive to control in manufacturing. Controlling the surface roughness of fused deposition modelling (FDM) parts, for example, is particularly difficult and costly. Decreasing the roughness of a surface usually increases its manufacturing cost. This often results in a trade-off between the manufacturing cost of a component and its performance in application.
Roughness can be measured by manual comparison against a "surface roughness comparator" (a sample of known surface roughness), but more generally a surface profile measurement is made with a profilometer. These can be of the contact variety (typically a diamond stylus) or optical (e.g.: a white light interferometer or laser scanning confocal microscope).
However, controlled roughness can often be desirable. For example, a gloss surface can be too shiny to the eye and too slippery to the finger (a touchpad is a good example) so a controlled roughness is required. This is a case where both amplitude and frequency are very important.
Parameters
A roughness value can be calculated either on a profile (line) or on a surface (area). The profile roughness parameters (Ra, Rq, ...) are more common. The areal roughness parameters (Sa, Sq, ...) give more significant values.
Profile roughness parameters
The profile roughness parameters are included in BS EN ISO 4287:2000 British standard, identical with the ISO 4287:1997 standard. The standard is based on the ″M″ (mean line) system.
There are many different roughness parameters in use, but Ra is by far the most common, though this is often for historical reasons and not for particular merit, as the early roughness meters could only measure Ra. Other common parameters include Rz, Rq, and Rsk. Some parameters are used only in certain industries or within certain countries. For example, the Rk family of parameters is used mainly for cylinder bore linings, and the Motif parameters are used primarily in the French automotive industry. The MOTIF method provides a graphical evaluation of a surface profile without filtering waviness from roughness. A motif consists of the portion of a profile between two peaks, and the final combinations of these motifs eliminate "insignificant" peaks and retain "significant" ones. Note that these parameters are dimensional quantities, expressed in micrometres or microinches.
Since these parameters reduce all of the information in a profile to a single number, great care must be taken in applying and interpreting them. Small changes in how the raw profile data is filtered, how the mean line is calculated, and the physics of the measurement can greatly affect the calculated parameter. With modern digital equipment, the scan can be evaluated to make sure there are no obvious glitches that skew the values.
Because it may not be obvious to many users what each of the measurements really means, a simulation tool allows a user to adjust key parameters, visualizing how surfaces which are obviously different to the human eye are differentiated by the measurements. For example, Ra fails to distinguish between two surfaces where one is composed of peaks on an otherwise smooth surface and the other is composed of troughs of the same amplitude. Such tools can be found in app format.
By convention every 2D roughness parameter is a capital R followed by additional characters in the subscript. The subscript identifies the formula that was used, and the R means that the formula was applied to a 2D roughness profile. Different capital letters imply that the formula was applied to a different profile. For example, Ra is the arithmetic average of the roughness profile, Pa is the arithmetic average of the unfiltered raw profile, and Sa is the arithmetic average of the 3D roughness.
Each of the standard parameter formulas assumes that the roughness profile has been filtered from the raw profile data and that the mean line has been calculated. The roughness profile contains n ordered, equally spaced points along the trace, and y_i is the vertical distance from the mean line to the i-th data point. Height is assumed to be positive in the up direction, away from the bulk material.
Amplitude parameters
Amplitude parameters characterize the surface based on the vertical deviations of the roughness profile from the mean line. Many of them are closely related to the parameters found in statistics for characterizing population samples. For example, Ra is the arithmetic average value of the filtered roughness profile, determined from deviations about the center line within the evaluation length, and Rt is the range of the collected roughness data points.
The arithmetic average roughness, Ra, is the most widely used one-dimensional roughness parameter.
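These amplitude definitions translate directly into code. The following minimal Python sketch (illustrative only: it assumes the profile has already been filtered, and the function and variable names are hypothetical) computes Ra, Rq and the range Rt from an array of mean-line deviations:

import numpy as np

def amplitude_parameters(y):
    # y: equally spaced vertical deviations from the mean line,
    # positive upward, away from the bulk material
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                 # re-reference to the mean line
    ra = np.mean(np.abs(y))          # arithmetic average roughness, Ra
    rq = np.sqrt(np.mean(y ** 2))    # root mean square roughness, Rq
    rt = y.max() - y.min()           # total range of the profile, Rt
    return ra, rq, rt

# example: synthetic sinusoidal profile with measurement noise
x = np.linspace(0.0, 1.0, 1000)
profile = 0.5 * np.sin(40 * np.pi * x) + 0.05 * np.random.randn(x.size)
print(amplitude_parameters(profile))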
Slope, spacing and counting parameters
Slope parameters describe characteristics of the slope of the roughness profile. Spacing and counting parameters describe how often the profile crosses certain thresholds. These parameters are often used to describe repetitive roughness profiles, such as those produced by turning on a lathe.
Other "frequency" parameters are Sm, λa and λq. Sm is the mean spacing between peaks. Just as with real mountains, it is important to define a "peak". For Sm, the surface must have dipped below the mean surface before rising again to a new peak. The average wavelength λa and the root mean square wavelength λq are derived from the mean absolute profile slope Δa. When trying to understand a surface that depends on both amplitude and frequency, it is not obvious which pair of metrics optimally describes the balance, so a statistical analysis of pairs of measurements can be performed (e.g. Rz and λa, or Ra and Sm) to find the strongest correlation.
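The peak-counting convention for Sm described above (a new peak counts only after the profile has dipped below the mean line) can be made concrete in a few lines. This is a rough sketch under that assumption, treating each upward crossing of the mean line as the start of a new peak cycle; names are illustrative:

import numpy as np

def mean_peak_spacing(y, dx):
    # y: profile heights; dx: horizontal spacing between samples
    y = np.asarray(y, dtype=float) - np.mean(y)
    # indices where the profile crosses the mean line going upward
    upward = np.flatnonzero((y[:-1] < 0) & (y[1:] >= 0))
    if len(upward) < 2:
        return float('nan')  # not enough complete peak cycles
    # Sm is the average distance between successive upward crossings
    return dx * float(np.mean(np.diff(upward)))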
Bearing ratio curve parameters
These parameters are based on the bearing ratio curve (also known as the Abbott–Firestone curve). They include the Rk family of parameters.
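The bearing ratio curve itself is simple to sample numerically: for each height level, it records the fraction of the profile lying at or above that level. A minimal Python sketch, with illustrative names only:

import numpy as np

def bearing_ratio_curve(y, levels=100):
    # returns (heights, ratios): for each height c, the fraction of
    # profile points at or above c (near 0 at the highest peak,
    # 1 at the deepest valley)
    y = np.asarray(y, dtype=float)
    heights = np.linspace(y.max(), y.min(), levels)
    ratios = np.array([(y >= c).mean() for c in heights])
    return heights, ratios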
Fractal theory
The mathematician Benoît Mandelbrot has pointed out the connection between surface roughness and fractal dimension. The description provided by a fractal at the microroughness level may allow the control of the material properties and of the type of chip formation that occurs. However, fractals cannot provide a full-scale representation of a typical machined surface affected by tool feed marks, as they ignore the geometry of the cutting edge (J. Paulo Davim, 2010, op. cit.). Fractal descriptors of surfaces nonetheless have an important role to play in correlating physical surface properties with surface structure. Across multiple fields, connecting physical, electrical and mechanical behavior with conventional surface descriptors of roughness or slope has been challenging. By employing measures of surface fractality together with measures of roughness or surface shape, certain interfacial phenomena, including contact mechanics, friction and electrical contact resistance, can be better interpreted with respect to surface structure.
Areal roughness parameters
Areal roughness parameters are defined in the ISO 25178 series. The resulting values are Sa, Sq, Sz,... Many optical measurement instruments are able to measure the surface roughness over an area. Area measurements are also possible with contact measurement systems. Multiple, closely spaced 2D scans are taken of the target area. These are then digitally stitched together using relevant software, resulting in a 3D image and accompanying areal roughness parameters.
Practical effects
Surface structure plays a key role in governing contact mechanics, that is to say the mechanical behavior exhibited at an interface between two solid objects as they approach each other and transition from conditions of non-contact to full contact. In particular, normal contact stiffness is governed predominantly by asperity structures (roughness, surface slope and fractality) and material properties.
In terms of engineering surfaces, roughness is considered to be detrimental to part performance. As a consequence, most manufacturing prints establish an upper limit on roughness, but not a lower limit. An exception is in cylinder bores where oil is retained in the surface profile and a minimum roughness is required.
Surface structure is often closely related to the friction and wear properties of a surface. A surface with a higher fractal dimension, a large Ra value, or a positive skewness Rsk will usually have somewhat higher friction and wear quickly. The peaks in the roughness profile are not always the points of contact. The form and waviness (i.e. both amplitude and frequency) must also be considered.
In Earth Sciences
In Earth Sciences (e.g., Shepard et al., 2001; Smith, 2014) and Ecology (e.g., Riley et al., 1999; Sappington et al., 2007), surface roughness has a quite broad meaning with multiple definitions (e.g., Smith, 2014), and it is generally considered a multi-scale property related to surface spatial variability. It is often referred to as surface texture (e.g., Trevisani et al., 2012), given the evident analogies to image texture (e.g., Haralick et al., 1973; Lucieer and Stein, 2005) when the analysis is performed on digital elevation models. From this perspective there are various interlinks with methodologies related to geostatistics (e.g., Herzfeld and Higginson, 1996), fractal analysis (e.g., Bez and Bertrand, 2011) and pattern recognition (e.g., Ojala et al., 2002), including many interrelations with remote sensing approaches. In the context of geomorphometry (or just morphometry; Pike, 2000), the applications cover many research topics in applied and environmental geology, geomorphology, geostructural studies and soil science. A non-exhaustive sample of the related literature can be found in the following articles:
Cavalli and Marchi, 2008
Dusséaux and Vannier, 2022
Evans et al., 2022
Frankel and Dolan 2007
Glenn et al. 2006
Grohmann et al., 2011
Guth, 1999
Lindsay, 2019
Misiuk et al., 2021
Pollyea and Fairley, 2011
Trevisani and Rocca, 2015
Trevisani et al. 2023
Woodcock, 1977
Soil-surface roughness
Soil-surface roughness (SSR) refers to the vertical variations present in the micro- and macro-relief of a soil surface, as well as their stochastic distribution. There are four distinct classes of SSR, each one of them representing a characteristic vertical length scale; the first class includes microrelief variations from individual soil grains to aggregates on the order of 0.053–2.0 mm; the second class consists of variations due to soil clods ranging between 2 and 100 mm; the third class of soil surface roughness is systematic elevation differences due to tillage, referred to as oriented roughness (OR), ranging between 100 and 300 mm; the fourth class includes planar curvature, or macro-scale topographic features.
The first two classes account for the so-called microroughness, which has been shown to be largely influenced on event and seasonal timescales by rainfall and tillage, respectively. Microroughness is most commonly quantified by means of the Random Roughness, which is essentially the standard deviation of bed surface elevation data around the mean elevation, after correction for slope using the best-fit plane and removal of tillage effects in the individual height readings. Rainfall impact can lead to either a decay or an increase in microroughness, depending upon initial microroughness conditions and soil properties. On rough soil surfaces, the action of rainsplash detachment tends to smoothen the edges of soil surface roughness, leading to an overall decrease in RR. However, a recent study which examined the response of smooth soil surfaces to rainfall showed that RR can considerably increase for low initial microroughness length scales on the order of 0–5 mm. It was also shown that the increase or decrease is consistent among various SSR indices.
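As a concrete illustration, the Random Roughness defined above amounts to detrending the elevation readings with a best-fit plane and taking the standard deviation of the residuals. The Python sketch below omits the removal of tillage effects and uses hypothetical names; it shows only the core computation:

import numpy as np

def random_roughness(x, y, z):
    # x, y: horizontal coordinates of the height readings; z: elevations
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    # least-squares best-fit plane z = a*x + b*y + c (slope correction)
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    # RR: standard deviation of elevations about the fitted plane
    return float(residuals.std())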
See also
Discontinuity (Geotechnical engineering)
Rugosity
Normal contact stiffness
Surface finish
Surface metrology
Surface roughness measurement ISO 25178
Waviness
Asperity (materials science)
References
External links
Surface Metrology Guide
Roughness terminology
Ra and Rz description
Surface Roughness (Finish) Review and Equations
SPE (Surface Profile Explorer)
Online calculator to convert roughness parameters Ra and Rz
Enache, Ştefănuţă, La qualité des surfaces usinées (Transl.: Quality of machined surfaces).Dunod, Paris, 1972, 343 pp.
Husu, A.P., Vitenberg, Iu., R., Palmov, V. A., Sherohovatost poverhnostei (Teoretiko-veroiatnostnii podhod) (Transl.: Surface roughness (theoretical-probabilistic approach)), Izdatelstvo "Nauka", Moskva, 1975, 342 pp.
Davim, J. Paulo, Surface Integrity in Machining, Springer-Verlag London Limited 2010,
Whitehouse, D. Handbook of Surface Metrology, Institute of Physics Publishing for Rank Taylor-Hobson Co., Bristol 1996
Geostatistical-based tools for surface roughness or image texture analysis:https://doi.org/10.5281/zenodo.7132160
Tribology
Metalworking terminology
Mechanical engineering | Surface roughness | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,048 | [
"Tribology",
"Applied and interdisciplinary physics",
"Materials science",
"Surface science",
"Mechanical engineering"
] |
2,260,252 | https://en.wikipedia.org/wiki/Hercynian%20Forest | The Hercynian Forest was an ancient and dense forest that stretched across Western Central Europe, from Northeastern France to the Carpathian Mountains, including most of Southern Germany, though its boundaries are a matter of debate. It formed the northern boundary of that part of Europe known to writers of Antiquity. The ancient sources are equivocal about how far east it extended. Many agree that the Black Forest, which extended east from the Rhine valley, formed the western side of the Hercynian, except, for example, Lucius of Tongeren. According to him, it included many massifs west of the Rhine.
Across the Rhine to the west extended the Silva Carbonaria, the forest of the Ardennes and the forest of the Vosges. All these old-growth forests of antiquity represented the original post-glacial temperate broadleaf forest ecosystem of Europe.
Relict tracts of this once-continuous forest exist with many local names: the Black Forest, the Ardennes, the Bavarian Forest, the Vosges, the Eifel, the Jura Mountains, the Swabian Jura, the Franconian Jura, the Polish Jura, the Palatinate Forest, the Teutoburg Forest, the Argonne Forest, the Morvan, the Langres plateau, the Odenwald, the Spessart, the Rhön, the Thuringian Forest, the Harz, the Rauhe Alb, the Steigerwald, the Fichtel Mountains, the Ore Mountains, the Giant Mountains, the Bohemian Forest and the Sudetes. In the present-day Czech Republic and southern Poland, it joined the forested Carpathians. The Mittelgebirge seem to correspond more or less to a stretch of the Hercynian mountains. Many present-day smaller forests were also included, like the Bienwald and the Haguenau Forest. The Hercynian Forest may have extended northwest to the Veluwe and east to the Białowieża Forest.
Etymology
Hercynian has a Proto-Celtic derivation, from ɸerkuniā, later erkunia. Julius Pokorny lists Hercynian as being derived from the root *perkʷu- "oak" (compare Latin quercus). He further identifies the name as Celtic. Proto-Celtic regularly loses initial *p preceding a vowel, hence the earliest attestations in Greek as Ἀρκόνια (Aristotle; the e~a interchange is common in Celtic names), later Ὀρκύνιος (Ptolemy, with the o unexplained) and Ἑρκύνιος δρυμός (Strabo). The latter form first appears in Latin as Hercynia in Julius Caesar, inheriting the aspiration and the letter y from a Greek source.
The Germanic forms appear with an f for *p by Grimm's Law, perhaps indicating an early borrowing from Celtic before the latter lost the initial consonant: Gothic faírguni = "mountain, mountain range", Old English firgen = "mountain, mountain-woodland". Still, the Celtic and Germanic words could also be old cognates, or the Celtic word could be borrowed from Germanic.
The assimilated *kʷ...kʷ (from earlier *p...kʷ) would be regular in Italo-Celtic, and Pokorny associates the ethnonym Querquerni, found in Hispania in Galicia, which features an Italic-Venetic name. In fact, it is not directly associated with the Hercynian Forest's name: a Proto-European *perkʷuniā explains ɸerkuniā, later erkunia, with the regular shift kʷ > ku occurring before the assimilation could take place.
The name of the Hercynian Forest is also considered to be etymologically related to Lithuanian thunder god Perkūnas. He is also known as Pērkons in Latvian; Perkūns or Perkunos in Old Prussian; Parkuns in Yotvingian and Pārkiuņs in Latgalian.
It is possible that the name of the Harz Mountains in Germany is derived from Hercynian, as Harz derives from the Middle High German Hart, meaning "mountain forest". Also, the Old High German name Fergunna apparently refers to the Ore Mountains, and Virgundia (cf. the modern Virngrund forest) to a range between Ansbach and Ellwangen.
Hercyne was the classical name (modern Libadia) of a small rapid stream in Boeotia that issued from two springs near Lebadea, modern Livadeia, and emptied into Lake Copais.
Ancient references
The name is cited dozens of times in several classical authors, but most of the references are non-definitive; for example, the Hercynian Forest is Pomponius Mela's silvis ac paludibus invia, "trackless forest and swamps" (Mela, De Chorographia, iii.29), as the author assumes the reader will know where the forest is. The earliest reference is in Aristotle's Meteorologica. He refers to the Arkýnia (or Orkýnios) mountains of Europe, but tells us only that, remarkably in his experience, rivers flow north from there.
During the time of Julius Caesar, the forest blocked the advance of the Roman legions into Germania. His few statements are the most definitive. In De Bello Gallico he says that the forest stretches along the Danube from the territory of the Helvetii (present-day Switzerland) to Dacia (present-day Romania). Its implied northern boundary is nine days' march, while its eastern boundary is indefinitely more than sixty days' march. The region fascinated him, even the old tales of unicorns (which may have represented reindeer). Caesar's references to moose and aurochs, and to elk without joints which leaned against trees to sleep in the endless forests of Germania, were probably later interpolations in his Commentaries. Caesar's name for the forest is the one most used: Hercynia Silva.
Pliny the Elder, in Natural History, places the eastern regions of the Hercynium jugum, the "Hercynian mountain chain", in Pannonia (present-day Hungary and Croatia) and Dacia (present-day Romania). He also gives a dramatised description of its composition, in which the close proximity of the forest trees causes competitive struggle among them (inter se rixantes). He mentions its gigantic oaks. But even he (if the passage in question is not an interpolated marginal gloss) is subject to the legends of the gloomy forest. He mentions unusual birds, which have feathers that "shine like fires at night". Medieval bestiaries named these birds the Ercinee. The impenetrable nature of the Hercynia Silva hindered the last concerted Roman foray into the forest, by Drusus, during 12–9 BCE: Florus asserts that Drusus invisum atque inaccessum in id tempus Hercynium saltum (Hercynia saltus, the "Hercynian ravine-land") patefecit.
The isolated modern remnants of the Hercynian Forest identify its flora as a mixed one; Oscar Drude identified its Baltic elements associated with North Alpine flora, and North Atlantic species with circumpolar representatives. Similarly, Edward Gibbon noted the presence of reindeer—pseudo-Caesar's bos cervi figura—and elk—pseudo-Caesar's alces—in the forest. The wild bull which the Romans named the urus was present also, and the European bison and the now-extinct aurochs, Bos primigenius.
In the Roman sources, the Hercynian Forest was part of ethnographic Germania. There is an indication that this circumstance was fairly recent; that is, Posidonius states that the Boii were once there (as well as in Bohemia, which is named for them).
It is believed that before the Boii the Hercuniates tribe inhabited the area, later migrating to Pannonia in Illyria. By the middle of the first century BC, the Hercuniates were a minor tribe that was located along a narrow band of Celtic settlement close to the Danube, on the western side of the river a little way west of modern Budapest. Their name comes from an ancient proto-Indo-European word for an oak. The tribe is referred to by Pliny and Ptolemy as a civitas peregrina, a wandering tribe that had travelled to Pannonia from foreign parts. Little else is known of them save that they were issuing their own coins by the second century BC. By AD 40 the tribe was eventually subdued by Rome.
Medieval period
Monks sent out from Niederaltaich Abbey (founded in the eighth century) brought under cultivation for the first time great forested areas of Lower Bavaria as far as the territory of the present Czech Republic, and founded 120 settlements in the Bavarian Forest, as that stretch of the ancient forest came to be known. The forest is also mentioned in Hypnerotomachia Poliphili as the setting for the dream allegory of the work.
Modern references
The German journal Hercynia, published by the Universities and Landesbibliothek of Sachsen-Anhalt, pertains to ecology and environmental biology.
Some geographers apply the term Hercynian Forest to the complex of mountain ranges, mountain groups, and plateaus which stretch from Westphalia across Middle Germany and along the northern borders of Austria to the Carpathians.
See also
Białowieża Forest
Myrkviðr
Broceliande
Notes
Forests and woodlands of Germany
Former forests
Historical regions in Germany
Old-growth forests
Italo-Celtic | Hercynian Forest | [
"Biology"
] | 1,996 | [
"Old-growth forests",
"Ecosystems"
] |
2,260,316 | https://en.wikipedia.org/wiki/Okeechobee%20Waterway | The Okeechobee Waterway or Okeechobee Canal is a relatively shallow artificial waterway in the United States, stretching across Florida from Fort Myers on the west coast to Stuart on Florida's east coast. The waterway can support tows such as barges or private vessels up to wide x long which draw less than , as parts of the system, especially the locks may have low water depths of just ten feet. The system of channels runs through Lake Okeechobee and consists of the Caloosahatchee River to the west of the lake and the St. Lucie Canal east of the lake.
Geologically and geographically, the north bank of the canal is the official southern limit of the Eastern Continental Divide.
History
The waterway was completed in 1937 to provide a water route across Florida, allowing boats to pass east–west across the state rather than traveling the long route around its southern end.
Management
Lake Okeechobee and the Okeechobee Waterway Project are part of the complex water-management system known as the Central and Southern Florida Flood Control Project. The project covers an area starting just south of Orlando and extending southward through the Kissimmee River Basin to Everglades National Park and Florida Bay.
The U.S. Army Corps of Engineers manages five locks and dams along the Okeechobee Waterway.
Locks and dams
St. Lucie Lock and Dam
The St. Lucie lock was built in 1941 for navigation and flood-control purposes. In 1944, the connecting spillway structure was built for flood and regulatory flow control through the St. Lucie Canal to manage the water level in Lake Okeechobee.
Port Mayaca Lock and Dam
The Port Mayaca Lock and Dam was built in 1977 for navigation purposes, to permit the raising of water levels in Lake Okeechobee, and to moderate the effects of higher lake stages along the St. Lucie Canal.
Ortona Lock and Dam
The Ortona Lock and Dam were constructed in 1937 for navigation purposes.
In 1934, the locks were dredged by Captain James B. Cox, who had worked on the Hoover Dike, with Robert Pierce as engineer. The first lockmaster was Jack O'Day, followed by Captain Cox.
Moore Haven Lock and Dam
The Moore Haven Lock and Dam were constructed in 1935 for navigation and flood-control purposes. The lock was renamed the Julian Keen Jr. Lock and Dam, effective June 18, 2021 (Notice to Navigation 2021-014).
W.P. Franklin Lock and Dam
The W.P. Franklin Lock and Dam were constructed in 1965 for flood control, water control, prevention of saltwater intrusion, and navigation purposes.
See also
List of canals in the United States
References
External links
Cruising the Okeechobee Waterway - BlueSeas
Okeechobee Waterway - U.S. Army Corps of Engineers Jacksonville District
Lake Okeechobee Watershed - Florida DEP
1937 establishments in Florida
Canals in Florida
Canals opened in 1937
Transportation buildings and structures in Glades County, Florida
Transportation buildings and structures in Hendry County, Florida
Transportation buildings and structures in Lee County, Florida
Transportation buildings and structures in Martin County, Florida
Transportation buildings and structures in Palm Beach County, Florida
Historic American Engineering Record in Florida
Indian River Lagoon
Lake Okeechobee
United States Army Corps of Engineers | Okeechobee Waterway | [
"Engineering"
] | 662 | [
"Engineering units and formations",
"United States Army Corps of Engineers"
] |
2,260,583 | https://en.wikipedia.org/wiki/Tris%28benzyltriazolylmethyl%29amine | Tris((1-benzyl-4-triazolyl)methyl)amine (TBTA) is a tertiary amine containing the 1,2,3-triazole moiety. When used as a ligand, complexed to copper(I), it allows for quantitative, regioselective formal Huisgen 1,3-dipolar cycloadditions between alkynes and azides, in a variety of aqueous and organic solvents.
It is believed that the ligand promotes catalysis by stabilizing the copper(I) oxidation state while still allowing the catalytic cycle of the CuAAC reaction to proceed.
Single-crystal X-ray diffraction of the Cu(I) complex of tris((1-benzyl-4-triazolyl)methyl)amine revealed an unusual dinuclear dication with one triazole unit bridging two metal centers; this complex is an effective catalyst for the 'click' cycloaddition reaction. The structure of the complex of TBTA with Cu(II) in the crystalline state is trigonal bipyramidal, and it can be reduced to the active 'click' catalyst form by sodium ascorbate, copper metal, or other reducing agents.
In the literature, it has gained widespread use as a biochemical tool for the tagging of proteins and enzymes. The compound is now commercially available through Sigma-Aldrich and Invitrogen. It may be prepared by the click reaction between tripropargylamine and benzyl azide.
References
Reagents for organic chemistry
Benzyl compounds | Tris(benzyltriazolylmethyl)amine | [
"Chemistry"
] | 340 | [
"Reagents for organic chemistry"
] |
2,260,612 | https://en.wikipedia.org/wiki/Griess%20test | The Griess test is an analytical chemistry test which detects the presence of nitrite ion in solution. One of its most important uses is the determination of nitrite in drinking water. The Griess diazotization reaction, on which the Griess reagent relies, was first described in 1858 by Peter Griess. The test has also been widely used for the detection of nitrates (N-oxidation state = 5+), which are a common component of explosives, as they can be reduced to nitrites (N-oxidation state = 3+) and detected with the Griess test.
Method
Nitrite is detected and analyzed by the formation of a pink-red colour upon treatment of a nitrite-containing sample with the Griess reagent, which consists of two components in an acidic solution: an aniline derivative and a coupling agent. The most common arrangements use sulfanilamide and N-(1-naphthyl)ethylenediamine: a typical commercial Griess reagent contains 0.2% N-(1-naphthyl)ethylenediamine dihydrochloride and 2% sulfanilamide in 5% phosphoric acid. This diamine is used in place of the simpler and cheaper 1-naphthylamine because the latter is a potent carcinogen, and moreover the diamine forms a more polar, and hence much more soluble, dye in acidic aqueous medium. Other aniline derivatives that have been used include sulfanilic acid, nitroaniline, and p-aminoacetophenone.
The Griess test involves two subsequent reactions. When sulfanilamide is added, the nitrite ion reacts with it in the Griess diazotization reaction to form a diazonium salt, which then reacts with N-(1-naphthyl)ethylenediamine in an azo coupling reaction, forming a pink-red azo dye.
Using a spectrophotometer, it is possible to quantitatively determine the nitrite concentration. The detection limit of the Griess test generally ranges between 0.02 and 2 μM, depending on the exact details of the specific components used in the Griess reagent.
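In practice the quantitative step is a linear (Beer-Lambert) calibration: absorbance readings from nitrite standards of known concentration are fitted to a straight line, which is then inverted for unknown samples. A minimal Python sketch; the standards, wavelength and numbers below are hypothetical:

import numpy as np

# absorbance of nitrite standards, read near the azo dye's
# absorption maximum (around 540 nm)
standards_uM = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
absorbance   = np.array([0.002, 0.051, 0.103, 0.149, 0.201])

# fit A = slope * C + intercept
slope, intercept = np.polyfit(standards_uM, absorbance, 1)

def nitrite_concentration(a):
    # invert the calibration for an unknown sample's absorbance
    return (a - intercept) / slope

print(nitrite_concentration(0.125))  # roughly 1.2 micromolar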
Forensics
The test was used in forensics for many years to detect traces of nitroglycerine. Caustic soda is used to break down samples containing nitroglycerine, producing nitrite ions.
The test involves taking a sample with ether and dividing it between two bowls. Caustic soda is added to the first bowl, followed by the Griess reagent; if the solution turns pink within ten seconds, this indicates the presence of nitrites. Only the Griess reagent is added to the second bowl, which serves as a control; the test as a whole is positive only if that solution remains clear.
The convictions of Judith Ward and the Birmingham Six were assisted by Frank Skuse's flawed interpretation of Griess test results.
See also
Nitrite test
Nitrate test
References
Chemical tests
Forensic techniques | Griess test | [
"Chemistry"
] | 647 | [
"Chemical tests"
] |
2,260,696 | https://en.wikipedia.org/wiki/Oxymonad | The Oxymonads (or Oxymonadida) are a group of flagellated protists found exclusively in the intestines of animals, mostly termites and other wood-eating insects. Along with the similar parabasalid flagellates, they harbor the symbiotic bacteria that are responsible for breaking down cellulose. There is no evidence for presence of mitochondria (not even anaerobic mitochondrion-like organelles like hydrogenosomes or mitosomes) in oxymonads and three species have been shown to completely lack any molecular markers of mitochondria.
The group includes, for example, Dinenympha, Pyrsonympha, Oxymonas, Streblomastix, Monocercomonoides, and Blattamonas.
Characteristics
Most oxymonads are around 50 μm in size and have a single nucleus associated with four flagella. Their basal bodies give rise to several long sheets of microtubules, which form an organelle called an axostyle, though one different in structure from the axostyles of parabasalids. The cell may use the axostyle to swim, as the sheets slide past one another and cause it to undulate. An associated fiber called the preaxostyle separates the flagella into two pairs. A few oxymonads have multiple nuclei, flagella, and axostyles.
Relationship to Trimastix and Paratrimastix
The free-living flagellates Trimastix and Paratrimastix are closely related to the oxymonads. They lack aerobic mitochondria and have four flagella separated by a preaxostyle, but unlike the oxymonads have a feeding groove. This character places the Oxymonads, Trimastix, and Paratrimastix among the Excavata, and in particular they may belong to the metamonads. Molecular phylogenetic studies indeed place Preaxostyla (oxymonads, Trimastix, and Paratrimastix) in Metamonada.
Taxonomy
Order Oxymonadida Grassé 1952 emend. Cavalier-Smith 2003
Family Oxymonadidae Kirby 1928 [Oxymonadaceae; Oxymonadinae Cleveland 1934]
Genus ?Metasaccinobaculus Freitas 1945
Genus Barroella Zeliff 1944 [Kirbyella Zeliff 1930 non Kirkaldy 1906 non Bolivar 1909]
Genus Microrhopalodina Grassé & Foa 1911 [Proboscidiella Kofoid & Swezy 1926]
Genus Opisthomitus Grassé 1952 non Duboscq & Grassé 1934
Genus Oxymonas Janicki 1915
Genus Sauromonas Grassé & Hollande 1952
Family Polymastigidae Bütschli 1884 [Polymastiginae Kirby 1931; Polymastigaceae; Streblomastigaceae; Streblomastigidae Kofoid & Swezy 1919]
Genus ?Paranotila Cleveland 1966
Genus ?Tubulimonoides Krishnamurthy & Sultana 1976
Genus Blattamonas Treitli et al. 2018
Genus Brachymonas (Grassé 1952) Treitli et al. 2018 non Hiraishi et al. 1995
Genus Monocercomonoides Travis 1932
Genus Polymastix Bütschli 1884 non Gruber 1884
Genus Streblomastix Kofoid & Swezy 1920
Family Pyrsonymphidae Grassé 1892 [Pyrsonymphaceae; Pyrsonymphinae Kirby 1937 nom. nud.; Dinenymphidae Grassé 1911; Dinenymphinae Cleveland et al. 1934; Dinenymphaceae]
Genus Dinenympha Leidy 1877 [Pyrsonympha (Dinenympha) (Leidy 1877) Koidzumi 1921]
Genus Pyrsonympha Leidy 1877 [Pyrsonema Kent 1881; Lophophora Comes 1910 non Coulter 1894 non Kraatz 1895 non Moeschler 1890]
Family Saccinobaculidae Brugerolle & Lee 2002 ex Cavalier-Smith 2012 [Saccinobaculinae Cleveland et al. 1934]
Genus Notila Cleveland 1950
Genus Saccinobaculus Cleveland-Hall & Sanders & Collier 1934
References
Flagellates
Metamonads
Anaerobes | Oxymonad | [
"Biology"
] | 907 | [
"Bacteria",
"Anaerobes"
] |
2,260,743 | https://en.wikipedia.org/wiki/Homopolar%20motor | A homopolar motor is a direct current electric motor with two magnetic poles, the conductors of which always cut unidirectional lines of magnetic flux by rotating a conductor around a fixed axis so that the conductor is at right angles to a static magnetic field. The resulting force being continuous in one direction, the homopolar motor needs no commutator but still requires slip rings. The name homopolar indicates that the electrical polarity of the conductor and the magnetic field poles do not change (i.e., that it does not require commutation).
History
The homopolar motor was the first electrical motor to be built. Its operation was demonstrated by Michael Faraday in 1821 at the Royal Institution in London.
In 1821, soon after the Danish physicist and chemist Hans Christian Ørsted discovered the phenomenon of electromagnetism, Humphry Davy and the British scientist William Hyde Wollaston tried, but failed, to design an electric motor. Faraday, having been challenged by Davy as a joke, went on to build two devices to produce what he called "electromagnetic rotation". One of these, now known as the homopolar motor, caused a continuous circular motion, engendered by the circular magnetic force around a wire that extended into a pool of mercury in which a magnet was placed. The wire would rotate around the magnet if supplied with current from a chemical battery. These experiments and inventions formed the foundation of modern electromagnetic technology. In his excitement, Faraday published the results. This strained his relationship with his mentor, owing to Davy's jealousy of Faraday's achievement, and is the reason for Faraday's subsequent assignment to other activities, which prevented his involvement in electromagnetic research for several years.
B. G. Lamme described in 1913 a homopolar machine rated 2,000 kW, 260 V, 7,700 A and 1,200 rpm with 16 slip rings operating at a peripheral velocity of 67 m/s. A unipolar generator rated 1,125 kW, 7.5 V 150,000 A, 514 rpm built in 1934 was installed in a U.S. steel mill for pipe welding purposes.
Principle of operation
The homopolar motor is driven by the Lorentz force. A conductor carrying a current, placed in a magnetic field perpendicular to that current, feels a force in the direction perpendicular to both the magnetic field and the current. This force produces a torque about the axis of rotation. Because the axis of rotation is parallel to the magnetic field, and the magnetic field does not change polarity, no commutation is required for the conductor to keep turning. This simplicity is most readily achieved with single-turn designs, which produce very low voltages at very high currents, making homopolar motors unsuitable for most practical applications.
Like most electro-mechanical machines, a homopolar motor is reversible: if the conductor is turned mechanically, then it will operate as a homopolar generator, producing a direct current voltage between the two terminals of the conductor. The direct current produced is an effect of the homopolar nature of the design. Homopolar generators (HPGs) were extensively researched in the late 20th century as low voltage but very high current DC power supplies and have achieved some success powering experimental railguns.
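For a Faraday-disc geometry, the Lorentz-force picture above reduces to simple closed-form expressions: integrating the force on the radial current from the inner to the outer radius gives the motor torque, and the corresponding integral of the motional EMF gives the generator voltage. A small Python sketch of these textbook relations (the numbers in the example are made up):

def homopolar_torque(B, I, r_outer, r_inner=0.0):
    # radial current I crossing an axial field B: each element dr at
    # radius r feels a tangential force B*I*dr, so the torque is the
    # integral of B*I*r dr = B*I*(r_outer^2 - r_inner^2)/2
    return 0.5 * B * I * (r_outer ** 2 - r_inner ** 2)

def homopolar_emf(B, omega, r_outer, r_inner=0.0):
    # the same disc spun at omega (rad/s) as a generator produces
    # EMF = integral of omega*r*B dr = B*omega*(r_outer^2 - r_inner^2)/2
    return 0.5 * B * omega * (r_outer ** 2 - r_inner ** 2)

# example: 0.5 T field, 10 A through a 5 cm disc
print(homopolar_torque(0.5, 10.0, 0.05))  # about 6.3e-3 N*m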
Building a simple homopolar motor
A homopolar motor is very easy to build. A permanent magnet is used to provide the external magnetic field in which the conductor will turn, and a battery causes a current to flow along a conducting wire. It is not necessary for the magnet to move, or even to be in contact with the rest of the motor; its sole purpose is to provide a magnetic field that will interact with the magnetic field induced by the current in the wire. One can attach the magnet to the battery and allow the conducting wire to rotate freely while closing the electric circuit by touching both the top of the battery and the magnet attached to its bottom. The wire and the battery may become hot if operated continuously, as the wire forms a near short circuit across the battery.
Gallery
Examples
Railgun
Ball bearing motor
Magnet
See also
Homopolar generators
Barlow's wheel
Lorentz Force
References
DC motors
Electric motors
Michael Faraday
Articles containing video clips | Homopolar motor | [
"Technology",
"Engineering"
] | 857 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
2,260,854 | https://en.wikipedia.org/wiki/Divide%20and%20choose | Divide and choose (also Cut and choose or I cut, you choose) is a procedure for fair division of a continuous resource, such as a cake, between two parties. It involves a heterogeneous good or resource ("the cake") and two partners who have different preferences over parts of the cake (both want as much of it as possible). The procedure proceeds as follows: one person ("the cutter") cuts the cake into two pieces; the other person ("the chooser") selects one of the pieces; the cutter receives the remaining piece.
Since ancient times some have used the procedure to divide land, food and other resources between two parties. Currently, there is an entire field of research, called fair cake-cutting, devoted to various extensions and generalizations of cut-and-choose.
History
Divide and choose is mentioned in the Bible, in the Book of Genesis (chapter 13). When Abraham and Lot came to the land of Canaan, Abraham suggested that they divide it among them. Then Abraham, coming from the south, divided the land to a "left" (western) part and a "right" (eastern) part, and let Lot choose. Lot chose the eastern part, which contained Sodom and Gomorrah, and Abraham was left with the western part, which contained Beer Sheva, Hebron, Bethel, and Shechem.
The United Nations Convention on the Law of the Sea applies a procedure similar to divide-and-choose for allocating areas in the ocean among countries. A developed state applying for a permit to mine minerals from the ocean must prepare two areas of approximately similar value, let the UN authority choose one of them for reservation to developing states, and get the other area for mining:
Each application... shall cover a total area... sufficiently large and of sufficient estimated commercial value to allow two mining operations... of equal estimated commercial value... Within 45 days of receiving such data, the Authority shall designate which part is to be reserved solely for the conduct of activities by the Authority through the Enterprise or in association with developing States... The area designated shall become a reserved area as soon as the plan of work for the non-reserved area is approved and the contract is signed.
Analysis
Divide and choose is envy-free in the following sense: each of the two partners can act in a way that guarantees that, according to their own subjective taste, their allocated share is at least as valuable as the other share, regardless of what the other partner does. Here is how each partner can act:
The cutter can cut the cake to two pieces that they consider equal. Then, regardless of what the chooser does, they are left with a piece that is as valuable as the other piece.
The chooser can select the piece they consider more valuable. Then, even if the cutter divided the cake to pieces that are unequal (in the chooser's mind), the chooser has no reason to complain because they chose the piece they wanted.
To an external viewer, the division might seem unfair, but to the two involved partners, the division is fair — no partner has cause to envy the other's share.
If the value functions of the partners are additive functions, then divide and choose is also proportional in the following sense: each partner can act in a way that guarantees that their allocated share has a value of at least 1/2 of the total cake value. This is because, with additive valuations, every envy-free division is also proportional.
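Both guarantees are easy to see in a toy model in which the cake is a row of discrete segments and each player's valuation is additive over segments. The following Python sketch (the valuations and names are made up for illustration) has the cutter cut as evenly as possible by her own values and the chooser take the piece he prefers:

def divide_and_choose(cutter_values, chooser_values):
    # each list holds one player's value for each cake segment
    n = len(cutter_values)
    total = sum(cutter_values)
    # cutter: pick the cut index that makes the two pieces as close
    # to equal as possible in her own eyes
    cut = min(range(n + 1),
              key=lambda k: abs(sum(cutter_values[:k]) - total / 2))
    left, right = list(range(cut)), list(range(cut, n))
    # chooser: take whichever piece he values more
    if sum(chooser_values[i] for i in left) >= \
       sum(chooser_values[i] for i in right):
        return right, left   # (cutter's piece, chooser's piece)
    return left, right

# half-chocolate, half-vanilla cake in four segments: the cutter values
# everything equally; the chooser only values the vanilla half
print(divide_and_choose([1, 1, 1, 1], [0, 0, 5, 5]))
# -> ([0, 1], [2, 3]): each side gets at least half by its own values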
The protocol works both for dividing a desirable resource (as in fair cake-cutting) and for dividing an undesirable resource (as in chore division).
Divide and choose assumes that the parties have equal entitlements and wish to decide the division themselves or use mediation rather than arbitration. The goods are assumed to be divisible in any way, but each party may value the bits differently.
The cutter has an incentive to divide as fairly as possible: if they do not, they will likely receive an undesirable portion. This rule is a concrete application of the veil of ignorance concept.
The divide and choose method does not guarantee that each person gets exactly half the cake by their own valuation: the cutter may value parts of the cake differently from the chooser, and the chooser in any case takes whichever half he thinks is better. So the divide and choose procedure does not produce an exact division. There is no finite procedure for exact division, but it can be done using two moving knives; see the Austin moving-knife procedure.
Generalizations and improvements
Dividing among more than two parties
Divide-and-choose works only for two parties. When there are more parties, other procedures such as last diminisher or Even–Paz protocol can be used. Martin Gardner popularized the problem of designing a similarly fair procedure for larger groups in his May 1959 "Mathematical Games column" in Scientific American. See also proportional cake-cutting. A newer method was reported in Scientific American. It was developed by Aziz and Mackenzie. While faster in principle than the earlier method, it is still potentially very slow. See envy-free cake-cutting.
Efficient allocations
Divide-and-choose might yield inefficient allocations. One commonly used example is a cake that is half vanilla and half chocolate. Suppose Bob likes only chocolate, and Carol only vanilla. If Bob is the cutter and he is unaware of Carol's preference, his safe strategy is to divide the cake so that each half contains an equal amount of chocolate. But then, regardless of Carol's choice, Bob gets only half the chocolate, and the allocation is clearly not Pareto efficient. It is entirely possible that Bob, in his ignorance, would put all the vanilla (and some amount of chocolate) in one larger portion, so Carol gets everything she wants while he would receive less than what he could have gotten by negotiating. If Bob knew Carol's preference and liked her, he could cut the cake into an all-chocolate piece and an all-vanilla piece, Carol would choose the latter, and Bob would get all the chocolate. On the other hand, if he does not like Carol, he can cut the cake with slightly less than half the vanilla part in one portion and the rest of the vanilla and all the chocolate in the other. Carol might also be motivated to take the portion with the chocolate to spite Bob. There is a procedure to solve even this, but it is very unstable in the face of a small error in judgement. More practical solutions that can't guarantee optimality but are much better than divide and choose have been devised, in particular the adjusted winner procedure (AW) and the surplus procedure (SP). See also Efficient cake-cutting.
Dividing with a single point
Wagener studies a variant of Divide and Choose on a two-dimensional cake, in which the divider is disadvantaged: instead of making a cut, he can only mark a point on the cake. The chooser can then make a straight cut through that point, and choose the piece he prefers. He proves that, if the cake is bounded, the divider can always secure at least 1/3 of the cake. If the cake is both bounded and convex, the divider can secure 4/9 of the cake.
See also
Market makers, players in financial markets who offer to either buy or sell at a given price (plus a spread)
Notes and references
Fair division protocols
Welfare economics
Non-cooperative games
Cake-cutting | Divide and choose | [
"Mathematics"
] | 1,536 | [
"Game theory",
"Non-cooperative games"
] |
2,260,887 | https://en.wikipedia.org/wiki/Politics%20of%20climate%20change | The politics of climate change results from different perspectives on how to respond to climate change. Global warming is driven largely by the emissions of greenhouse gases due to human economic activity, especially the burning of fossil fuels, certain industries like cement and steel production, and land use for agriculture and forestry. Since the Industrial Revolution, fossil fuels have provided the main source of energy for economic and technological development. The centrality of fossil fuels and other carbon-intensive industries has resulted in much resistance to climate friendly policy, despite widespread scientific consensus that such policy is necessary.
Climate change first emerged as a political issue in the 1970s. Efforts to mitigate climate change have been prominent on the international political agenda since the 1990s, and are also increasingly addressed at national and local level. Climate change is a complex global problem. Greenhouse gas (GHG) emissions contribute to global warming across the world, regardless of where the emissions originate. Yet the impact of global warming varies widely depending on how vulnerable a location or economy is to its effects. Global warming is on the whole having negative impact, which is predicted to worsen as heating increases. Ability to benefit from both fossil fuels and renewable energy sources vary substantially from nation to nation.
Different responsibilities, benefits and climate related threats faced by the world's nations contributed to early climate change conferences producing little beyond general statements of intent to address the problem, and non-binding commitments from the developed countries to reduce emissions. In the 21st century, there has been increased attention to mechanisms like climate finance in order for vulnerable nations to adapt to climate change. In some nations and local jurisdictions, climate friendly policies have been adopted that go well beyond what was committed to at international level. Yet local reductions in GHG emission that such policies achieve have limited ability to slow global warming unless the overall volume of GHG emission declines across the planet.
Since entering the 2020s, the feasibility of replacing energy from fossil fuels with renewable energy sources has significantly increased, with some countries now generating almost all their electricity from renewables. Public awareness of the climate change threat has risen, in large part due to youth-led social movements and the visibility of the impacts of climate change, such as extreme weather events and flooding caused by sea level rise. Many surveys show a growing proportion of voters support tackling climate change as a high priority, making it easier for politicians to commit to policies that include climate action. The COVID-19 pandemic and economic recession led to widespread calls for a "green recovery", with some polities, like the European Union, successfully integrating climate action into policy change. Outright climate change denial had become a much less influential force by 2019, and opposition has pivoted to strategies of encouraging delay or inaction.
Policy debate
Like all policy debates, the political debate on climate change is fundamentally about action. Various distinct arguments underpin the politics of climate change - such as different assessments of the urgency of the threat, and on the feasibility, advantages and disadvantages of various responses. But essentially, these all relate to potential responses to climate change.
The statements that form political arguments can be divided into two types: positive and normative statements. Positive statements can generally be clarified or refuted by careful definition of terms, and scientific evidence. Whereas normative statements about what one "ought" to do often relate at least partly to morality, and are essentially a matter of judgement. Experience has indicated that better progress is often made at debates if participants attempt to disentangle the positive and normative parts of their arguments, reaching agreement on the positive statements first. In the early stages of a debate, the normative positions of participants can be strongly influenced by perceptions of the best interests of whatever constituency they represent. In achieving exceptional progress at the 2015 Paris conference, Christiana Figueres and others noted it was helpful that key participants were able to move beyond a competitive mindset concerning competing interests, to normative statements that reflected a shared abundance based collaborative mindset.
Actions in response to climate change can be divided into three classes: mitigation – actions to reduce greenhouse gas emissions and to enhance carbon sinks, adaptation – actions to defend against the negative results of global warming, and solar geoengineering – a technology in which sunlight would be reflected back to outer space.
Most 20th century international debate on climate change focused almost entirely on mitigation. It was sometimes considered defeatist to pay much attention to adaptation. Also, compared to mitigation, adaptation is more a local matter, with different parts of the world facing vastly different threats and opportunities from climate change. By the early 21st century, while mitigation still receives most attention in political debates, it is no longer the sole focus. Some degree of adaptation is now widely considered essential, and is discussed internationally at least at high level, though which specific actions to take remain mostly a local matter. A commitment to provide $100 billion per year worth of funding to developing countries was made at the 2009 Copenhagen Summit. At Paris, it was clarified that allocation of the funding should involve a balanced split between adaptation and mitigation, though not all of the promised funding had been provided, and what had been delivered was going mainly to mitigation projects. By 2019, possibilities for geoengineering were also increasingly being discussed, and were expected to become more prominent in future debates.
Political debate on how to mitigate tends to vary depending on the scale of governance concerned. Different considerations apply for international debate, compared with national and municipal level discussion. In the 1990s, when climate change first became prominent on the political agenda, there was optimism that the problem could be successfully tackled. The then recent signing of the 1987 Montreal Protocol to protect the ozone layer had indicated that the world was able to act collectively to address a threat warned about by scientists, even when it was not yet causing significant harm to humans. Yet by the early 2000s GHG emissions had continued to rise, with little sign of agreement to penalise emitters or reward climate friendly behaviour. It had become clear that achieving global agreement for effective action to limit global warming would be much more challenging. Some politicians, such as Arnold Schwarzenegger with his slogan "terminate pollution", say that activists should generate optimism by focusing on the health co-benefits of climate action.
Multilateral
Climate change became a fixture on the global political agenda in the early 1990s, with United Nations Climate Change conferences set to run yearly. These annual events are also called Conferences of the Parties (COPs). Major landmark COPs were the 1997 Kyoto Protocol, the 2009 Copenhagen Summit and the 2015 Paris conference. Kyoto was initially considered promising, yet by the early 2000s its results had proved disappointing. Copenhagen saw a major attempt to move beyond Kyoto with a much stronger package of commitments, yet largely failed. Paris was widely considered successful, yet how effective it will be at reducing long term global warming remains to be seen.
At international level, there are three broad approaches to emissions reduction that nations can attempt to negotiate. Firstly, the adoption of emissions reductions targets. Secondly, setting a carbon price. Lastly, creating a largely voluntary set of processes to encourage emission reduction, which include the sharing of information and progress reviews. These approaches are largely complementary, though at various conferences much of the focus has often been on a single approach. Until about 2010, international negotiations focused largely on emissions targets. The success of the Montreal treaty in reducing emissions that damaged the ozone layer suggested that targets could be effective. Yet in the case of greenhouse gas reductions, targets have not in general led to substantial cuts in emissions. Ambitious targets have usually not been met. Attempts to impose severe penalties that would incentivize more determined efforts to meet challenging targets, have always been blocked by at least one or two nations.
In the 21st century, there is widespread agreement that a carbon price is the most effective way to reduce emissions, at least in theory. Generally though, nations have been reluctant to adopt a high carbon price, or in most cases any price at all. One of the main reasons for this reluctance is the problem of carbon leakage: the phenomenon whereby activities producing GHG emissions move out of the jurisdiction that imposes the carbon price, depriving that jurisdiction of jobs and revenue to no climate benefit, as the emissions are simply released elsewhere. Nonetheless, the percentage of the world's emissions that are covered by a carbon price rose from 5% in 2005 to 15% by 2019, and should reach over 40% once China's carbon price comes fully into force. Existing carbon price regimes have been implemented mostly independently by the European Union, nations and sub-national jurisdictions acting autonomously.
The largely voluntary pledge and review system where states make their own plans for emissions reduction was introduced in 1991, but abandoned before the 1997 Kyoto treaty, where the focus was on securing agreement for "top down" emissions targets. The approach was revived at Copenhagen, and gained further prominence with the 2015 Paris Agreement, though pledges came to be called nationally determined contributions (NDCs). These are meant to be re-submitted in enhanced form every 5 years. How effective this approach is remains to be seen. Some countries submitted elevated NDCs in 2021, around the time of the Glasgow conference. Accounting rules for carbon trading were agreed at the 2021 Glasgow COP meeting.
Regional, national and sub-national
Policies to reduce GHG emissions are set by either national or sub-national jurisdictions, or at regional level in the case of the European Union. Many of the emission reduction policies that have been put in place go beyond those required by international agreements. Examples include the introduction of a carbon price by some individual US states, or Costa Rica reaching 99% renewable electrical power generation in the 2010s.
Actual decisions to reduce emissions or deploy clean technologies are mostly not made by governments themselves, but by individuals, businesses and other organizations. Yet it is national and local governments that set policies to encourage climate friendly activity. Broadly these policies can be divided into four types: firstly, the implementation of a carbon price mechanism and other financial incentives; secondly prescriptive regulations, for example mandating that a certain percentage of electricity generation must be from renewables; thirdly, direct government spending on climate friendly activity or research; and fourthly, approaches based on information sharing, education and encouraging voluntary climate friendly behavior. Local politics is sometimes combined with air pollution, for example the politics of creating low emission zones in cities may also aim to reduce carbon emissions from road transport.
Non-governmental actors
Individuals, businesses and NGOs can affect the politics of climate change both directly and indirectly. Mechanisms include individual rhetoric, aggregate expression of opinion by means of polls, and mass protests. Historically, a significant proportion of these protests have been against climate friendly policies. Since the 2000 UK fuel protests there have been dozens of protests across the world against fuel taxes or the ending of fuel subsidies. Since 2019 and the advent of the school strike and Extinction Rebellion, pro climate protests have become more prominent. Indirect channels for apolitical actors to affect the politics of climate change include funding or working on green technologies, and the fossil fuel divestment movement.
Special interests and lobbying by non-country actors
There are numerous special interest groups, organizations, and corporations who have public and private positions on the multifaceted topic of global warming. The following is a partial list of the types of special interest parties that have shown an interest in the politics of global warming:
Fossil fuel companies: Traditional fossil fuel corporations stand to lose from stricter global warming regulations, though there are exceptions. The fact that fossil fuel companies are engaged in energy trading might mean that their participation in trading schemes and other such mechanisms could give them a unique advantage, so it is unclear whether every traditional fossil fuel company would always be against stricter global warming policies. As an example, Enron, a traditional gas pipeline company with a large trading desk, heavily lobbied the United States government to regulate carbon dioxide: they thought that they would dominate the energy industry if they could be at the center of energy trading.
Farmers and agribusiness are an important lobby but vary in their views on the effects of climate change on agriculture, on greenhouse gas emissions from agriculture and, for example, on the role of the EU Common Agricultural Policy.
Financial Institutions: Financial institutions generally support policies against global warming, particularly the implementation of carbon trading schemes and the creation of market mechanisms that associate a price with carbon. These new markets require trading infrastructures, which banking institutions can provide. Financial institutions are also well positioned to invest, trade and develop various financial instruments that they could profit from through speculative positions on carbon prices and the use of brokerage and other financial functions like insurance and derivative instruments.
Environmental groups: Environmental advocacy groups generally favor strict restrictions on emissions. Environmental groups, as activists, engage in raising awareness.
Renewable energy and energy efficiency companies: companies in wind, solar and energy efficiency generally support stricter global warming policies. They expect their share of the energy market to expand as fossil fuels are made more expensive through trading schemes or taxes.
Nuclear power companies: support and benefit from carbon pricing or subsidies of low-carbon energy production, as nuclear power produces minimal greenhouse gas emissions.
Electricity distribution companies: may lose from solar panels but benefit from electric vehicles.
Traditional retailers and marketers: traditional retailers, marketers, and general corporations respond by adopting policies that resonate with their customers. If "being green" provides customer appeal, then they may undertake modest programs to please and better align with their customers. However, since a general corporation does not make a profit from any particular position on climate, it is unlikely to lobby strongly either for or against a stricter global warming policy position.
Medics: often say that climate change and air pollution can be tackled together and so save millions of lives.
Information and communications technology companies: say their products help others combat climate change, tend to benefit from reductions in travel, and many purchase green electricity.
The various interested parties sometimes align with one another to reinforce their message, for example electricity companies fund the purchase of electric school buses to benefit medics by reducing the load on the health service whilst at the same time selling more electricity. Sometimes industries will fund specialty nonprofit organizations to raise awareness and lobby on their behalf.
Collective action
Current climate politics are influenced by a number of social and political movements focused on different parts of building political will for climate action. This includes the climate justice movement, youth climate movement and movements to divest from fossil fuel industries.
Divestment movement
Youth movement
Outlook
Historical political attempts to agree on policies to limit global warming have largely failed to mitigate climate change. Commentators have expressed optimism that the 2020s can be more successful, due to various recent developments and opportunities that were not present during earlier periods. Other commentators have expressed warnings that there is now very little time to act in order to have any chance of keeping warming below 1.5 °C, or even to have a good chance of keeping global heating under 2 °C.
According to Torsten Lichtenau, a leading expert on the global carbon transition, there was a huge peak in corporate climate action in 2021–2022 at the time of COP26, but by 2024 "it's dropped back to 2019 levels". In 2024, issues like geopolitics, inflation and artificial intelligence became more important for corporations, even though the number of climate-concerned consumers rose. 2024 was also the first year in which the amount of money given to ESG funds declined.
Opportunities
In the late 2010s, various developments conducive to climate friendly politics saw commentators express optimism that the 2020s might see good progress in addressing the threat of global heating.
Tipping point in public opinion
The year 2019 has been described as "the year the world woke up to climate change", driven by factors such as growing recognition of the global warming threat resulting from recent extreme weather events, the Greta effect and the IPCC 1.5 °C report.
In 2019, the secretary general of OPEC recognized the school strike movement as the greatest threat faced by the fossil fuel industry. According to Christiana Figueres, once about 3.5% of a population start participating in nonviolent protest, it is always successful in sparking political change, and the success of Greta Thunberg's Fridays for Future movement suggests that reaching this threshold may be attainable.
A 2023 review study published in One Earth stated that opinion polls show that most people perceive climate change as occurring now and close by. The study concluded that seeing climate change as more distant does not necessarily result in less climate action, and reducing psychological distancing does not reliably increase climate action.
Reduced influence of climate change denial
By 2019, outright climate change denial had become a much less influential force than it had been in previous years. Reasons for this include the increasing frequency of extreme weather events, more effective communication on the part of climate scientists, and the Greta effect. As an example, in 2019 the Cato Institute closed down its climate shop.
Growth of renewable energy
Renewable energy is an inexhaustible source of naturally replenishing energy. The major renewable energy sources are wind, hydropower, solar, geothermal, and biomass. In 2020, renewable energy generated 29% of world electricity.
In the wake of the Paris Agreement, adopted by 196 Parties, 194 of these Parties have submitted their Nationally Determined Contributions (NDCs), i.e., climate pledges, as of November 2021. These countries use many different policy instruments to encourage renewable energy investment: 102 countries have implemented tax credits, 101 have some form of public investment, and 100 use tax reductions. The largest emitters tend to be industrialized countries like the US, China, UK, and India. These countries have implemented relatively few industrial policies (188) compared to deployment policies (more than 1,000).
In November 2021, the 26th United Nations Conference of the Parties (COP26) took place in Glasgow, Scotland. Almost 200 nations agreed to accelerate the fight against climate change and commit to more effective climate pledges. Some of the new pledges included reforms on methane gas pollution, deforestation, and coal financing. The US and China (the two largest carbon emitters) also agreed to work together on efforts to prevent global warming from surpassing 1.5 degrees Celsius. Some scientists, politicians, and activists say that not enough was done at this summit and that we will still reach that 1.5 degree tipping point. An independent report by Climate Action Tracker said the commitments were "lip service" and that "we will emit roughly twice as much in 2030 as required for 1.5 degrees."
As of 2020, the feasibility of replacing energy from fossil fuel with nuclear and especially renewable energy has much increased, with dozens of countries now generating more than half of their electricity from renewable sources.
Green recovery
Challenges
Despite various promising conditions, commentators tend to warn that several difficult challenges remain, which need to be overcome if climate change politics is to result in a substantial reduction of greenhouse gas emissions. For example, increasing tax on meat can be politically difficult.
Urgency
As of 2021, atmospheric CO2 levels have already increased by about 50% since the pre-industrial era, with billions of tons more being released each year. Global warming has already passed the point where it is beginning to have a catastrophic impact in some localities. So major policy changes need to be implemented very soon if the risk of escalating environmental impact is to be avoided.
Centrality of fossil fuel
Energy from fossil fuels remains central to the world's economy, accounting for about 80% of its energy generation as of 2019. Suddenly removing fossil fuel subsidies from consumers has often been found to cause riots. While clean energy can sometimes be cheaper, provisioning large amounts of renewable energy in a short period of time tends to be challenging. According to a 2023 report by the International Energy Agency, CO2 emissions from coal grew by 243 Mt to a new all-time high of almost 15.5 Gt. This 1.6% increase was faster than the 0.4% annual average growth over the past decade. In 2022 the European Central Bank argued that high energy prices were accelerating the energy transition away from fossil fuel, but that governments should take steps to prevent energy poverty without hindering the move to low carbon energy.
Inactivism
While outright denial of climate change is much less prevalent in the 2020s compared to the preceding decades, many arguments continue to be made against taking action to limit GHG emissions. Such arguments include the view that there are better ways to spend available funds (such as adaptation), that it would be better to wait until new technology is developed as that would make mitigation cheaper, that technology and innovation will render climate change moot or resolve certain aspects, and that the future negative effects of climate change should be heavily discounted compared to current needs.
Fossil fuel lobby and political spending
The largest oil and gas corporations that comprise Big Oil and their industry lobbyist arm, the American Petroleum Institute (API), spend large amounts of money on lobbying and political campaigns, and employ hundreds of lobbyists, to obstruct and delay government action to address climate change. The fossil fuel lobby has considerable clout in Washington, D.C. and in other political centers, including the European Union and the United Kingdom. Fossil fuel industry interests spend many times as much on advancing their agenda in the halls of power than do ordinary citizens and environmental activists, with the former spending $2 billion in the years 2000–2016 on climate change lobbying in the United States. The five largest Big Oil corporations spent hundreds of millions of euros to lobby for their agenda in Brussels.
Big Oil companies often adopt "sustainability principles" that are at odds with the policy agenda their lobbyists advocate, which often entails sowing doubt about the reality and impacts of climate change and forestalling government efforts to address them. API launched a public relations disinformation campaign with the aim of creating doubt in the public mind so that "climate change becomes a non-issue." This industry also spends lavishly on American political campaigns, with approximately 2/3 of its political contributions over the past several decades fueling Republican Party politicians, and outspending many-fold political contributions from renewable energy advocates. Fossil fuel industry political contributions reward politicians who vote against environmental protections. According to a study published by the Proceedings of the National Academy of Sciences of the United States of America, as voting by a member of United States Congress turned more anti-environment, as measured by his/her voting record as scored by the League of Conservation Voters (LCV), the fossil fuel industry contributions that this member of Congress received increased. On average, a 10% decrease in the LCV score was correlated with an increase of $1,700 in campaign contributions from the fossil fuel industry for the campaign following the Congressional term.
Suppression of climate science
Big Oil companies, starting as early as the 1970s, suppressed their own scientists' reports of major climate impacts of the combustion of fossil fuels. ExxonMobil launched a corporate propaganda campaign promoting false information about the issue of climate change, a tactic that has been compared to Big Tobacco's public relations efforts to hoodwink the public about the dangers of smoking. Fossil fuel industry-funded think tanks harassed climate scientists who were publicly discussing the dire threat of climate change. As early as the 1980s, when larger segments of the American public began to become aware of the climate change issue, the administrations of some United States presidents scorned scientists who spoke publicly of the threat fossil fuels posed for the climate. Other U.S. administrations have silenced climate scientists and muzzled government whistleblowers. Political appointees at a number of federal agencies prevented scientists from reporting their findings regarding aspects of the climate crisis, changed data modeling to arrive at conclusions they had set out a priori to prove, and shut out the input of career scientists at the agencies.
Targeting of climate activists
Climate and environmental activists, including, increasingly, those defending woodlands against the logging industry, have been killed in several countries, such as Colombia, Brazil and the Philippines. The perpetrators of most such killings have not been punished. A record number of such killings was recorded for the year 2019. Indigenous environmental activists are disproportionately targeted, comprising as many as 40% of fatalities worldwide. Domestic intelligence services of several governments, such as those of the U.S. government, have targeted environmental activists and climate change organizations as "domestic terrorists," surveilling them, investigating them, questioning them, and placing them on national "watchlists" that could make it more difficult for them to board airplanes and could instigate local law enforcement monitoring. Other U.S. tactics have included preventing media coverage of American citizen assemblies and protests against climate change, and partnering with private security companies to monitor activists.
Doomism
In the context of climate change politics, doomism refers to pessimistic narratives that claim that it is now too late to do anything about climate change. Doomism can include exaggeration of the probability of cascading climate tipping points, and of their likelihood of triggering runaway global heating beyond human ability to control, even if humanity were able to immediately stop all burning of fossil fuels. In the US, polls found that for people who did not support further action to limit global warming, a belief that it is too late to do so was a more common reason than skepticism about man-made climate change.
Lack of compromise
Several climate friendly policies have been blocked in the legislative process by environmental and/or left-leaning pressure groups and parties. For example, in 2009, the Australian Greens voted against the Carbon Pollution Reduction Scheme, as they felt it did not impose a high enough carbon price. In the US, the Sierra Club helped defeat a 2016 climate tax bill which they saw as lacking in social justice. Some of the attempts to impose a carbon price in US states have been blocked by left-wing politicians because they were to be implemented by a cap and trade mechanism, rather than a tax.
Multi-sector governance
The issue of climate change usually fits into various sectors, which means that the integration of climate change policies into other policy areas is frequently called for. Thus the problem is difficult, as it needs to be addressed at multiple scales with diverse actors involved in the complex governance process.
Maladaptation
Successful adaptation to climate change requires balancing competing economic, social, and political interests. In the absence of such balancing, harmful unintended consequences can undo the benefits of adaptation initiatives. For example, efforts to protect coral reefs in Tanzania forced local villagers to shift from traditional fishing activities to farming that produced higher greenhouse gas emissions.
Wars and tensions
"Conflict sensitivity and peacebuilding" are a "key for climate policy-making." Wars and geopolitical tensions harm climate action, including by preventing just distribution of needed resources. Climate change can increase conflicts, creating a vicious cycle. The war in Ukraine seriously disturbed climate action. Military forces are responsible for 5.5% of global emissions and wars diverte resources from climate action.
Technology
The promise of technology is seen as both a threat and a potential boon. New technologies can open up possibilities for new and more effective climate policies. Most models that indicate a path to limiting warming to 2 °C give a big role to carbon dioxide removal, one of the approaches to climate change mitigation. Commentators from across the political spectrum tend to welcome removal. But some are skeptical that it will ever be able to remove enough carbon dioxide to slow global warming without there also being rapid cuts in emissions, and they warn that too much optimism about such technology may make it harder for mitigation policies to be enacted.
Solar radiation management is another technology aiming to reduce global warming. At least with stratospheric aerosol injection, there is broad agreement that it would be effective in bringing down average global temperatures. Yet the prospect is considered unwelcome by many climate scientists. They warn that side effects would include possible reductions in agricultural yields due to reduced sunlight and rainfall, and possible localized temperature rises and other weather disruptions. According to Michael Mann, the prospect of using solar radiation management to reduce temperatures is another argument used to reduce willingness to enact emissions reduction policy.
Just transition
Economic disruption due to phaseout of carbon-intensive activities, such as coal mining, cattle farming or bottom trawling, can be politically sensitive due to the high political profile of coal miners, farmers and fishers in some countries. Many labor and environmental groups advocate for a just transition that minimizes the harm and maximizes the benefits associated with climate-related changes to society, for example by providing job training.
Different responses on the political spectrum
Climate friendly policies are generally supported across the political spectrum, though there have been many exceptions among voters and politicians leaning towards the right, and even politicians on the left have rarely made addressing climate change a top priority. In the 20th century, right wing politicians led much of the significant action against climate change, both internationally and domestically, with Richard Nixon and Margaret Thatcher being prominent examples. Yet by the 1990s, especially in some English speaking countries and most especially in the US, the issue began to be polarized. Right wing media started arguing that climate change was being invented, or at least exaggerated, by the left to justify an expansion in the size of government. As of 2020, some right wing governments have enacted more climate friendly policies. Various surveys indicate a slight trend for even U.S. right wing voters to become less skeptical of global warming, and groups like the American Conservation Coalition indicate that young Republican voters embrace climate as a central policy field. Though in the view of Anatol Lieven, for some right wing US voters, being skeptical of climate change has become part of their identity, so their position on the matter cannot easily be shifted by rational argument.
A 2014 study from the University of Dortmund concluded that countries with center and left-wing governments had higher emission reductions than those with right-wing governments among OECD countries during 1992–2008. Historically, nationalist governments have been among the worst performers in enacting climate friendly policies. Though according to Lieven, as climate change is increasingly seen as a threat to the ongoing existence of nation states, nationalism is likely to become one of the most effective forces to drive determined mitigation efforts. The growing trend to securitize the climate change threat may be especially effective for increasing support among nationalists and conservatives.
A 2024 analysis found 100 U.S. representatives and 23 U.S. senators—23% of the 535 members of Congress—to be climate change deniers, all the deniers being Republicans.
History
Relationship to climate science
In the scientific literature, there is an overwhelming consensus that global surface temperatures have increased in recent decades and that the trend is caused primarily by human-induced emissions of greenhouse gases.
The politicization of science, in the sense of a manipulation of science for political gains, is a part of the political process. It features in the controversies about intelligent design (compare the Wedge strategy) and in Merchants of Doubt, about scientists suspected of willingly obscuring findings on issues like tobacco smoke, ozone depletion, global warming and acid rain. However, in the case of ozone depletion, global regulation based on the Montreal Protocol was successful, in a climate of high uncertainty and against strong resistance, while in the case of climate change, the Kyoto Protocol failed.
While the IPCC process tries to find and orchestrate the findings of global climate change research to shape a worldwide consensus on the matter, it has itself been the object of strong politicization. Anthropogenic climate change evolved from a mere science issue to a top global policy topic.
The IPCC process having built a broad science consensus does not stop governments following different, if not opposing, goals. For ozone depletion, global regulation was already being put into place before a scientific consensus was established. So a linear model of policy-making, based on the view that the more knowledge we have, the better the political response will be, is not necessarily accurate. Instead, knowledge policy (successfully managing knowledge and uncertainties as a foundation for political decision making) requires a better understanding of the relation between science, public (lack of) understanding, and policy.
Most of the policy debate concerning climate change mitigation has been framed by projections for the twenty-first century. Academics have criticized this as short term thinking, as decisions made in the next few decades will have environmental consequences that will last for many millennia.
It has been estimated that only 0.12% of all funding for climate-related research is spent on the social science of climate change mitigation. Vastly more funding is spent on natural science studies of climate change and considerable sums are also spent on studies of the impact of and adaptation to climate change. It has been argued that this is a misallocation of resources, as the most urgent challenge is to work out how to change human behavior to mitigate climate change, whereas the natural science of climate change is already well established and there will be decades and centuries to handle adaptation.
Political economy of climate change
Political economy of climate change is an approach that applies the political economy thinking concerning social and political processes to study the critical issues surrounding decision-making on climate change.
The ever-increasing awareness and urgency of climate change has led scholars to explore a better understanding of the multiple actors and influencing factors that affect climate change negotiation, and to seek more effective solutions to tackle climate change. Analyzing these complex issues from a political economy perspective helps to explain the interactions between different stakeholders in response to climate change impacts, and provides opportunities to achieve better implementation of climate change policies.
Introduction
Background
Climate change has become one of the most pressing environmental concerns and global challenges in society today. As the issue rises in prominence on the international agenda, researchers from different academic sectors have long been devoting great efforts to exploring effective solutions to climate change. Technologists and planners have been devising ways of mitigating and adapting to climate change; economists estimating the cost of climate change and the cost of tackling it; and development experts exploring the impact of climate change on social services and public goods. However, Cammack (2007) points out two problems with many of the above discussions, namely the disconnection between the proposed solutions to climate change from different disciplines, and the absence of politics in addressing climate change at the local level. Further, the issue of climate change faces various other challenges, such as the problem of elite resource capture, the resource constraints in developing countries and the conflicts that frequently result from such constraints, which have often received too little attention in suggested solutions. In recognition of these problems, it is advocated that "understanding the political economy of climate change is vital to tackling it".
Meanwhile, the unequal distribution of the impacts of climate change, and the resulting inequity and unfairness towards the poor who contribute least to the problem, have linked the issue of climate change with development studies, giving rise to various programs and policies that aim at addressing climate change and promoting development. Although great efforts have been made in international negotiations concerning the issue of climate change, it is argued that much of the theory, debate, evidence-gathering and implementation linking climate change and development assumes a largely apolitical and linear policy process. In this context, Tanner and Allouche (2011) suggest that climate change initiatives must explicitly recognize the political economy of their inputs, processes and outcomes so as to find a balance between effectiveness, efficiency and equity.
Definition
In its earliest manifestations, the term "political economy" was basically a synonym of economics, while it is now a rather elusive term that typically refers to the study of the collective or political processes through which public economic decisions are made. In the climate change domain, Tanner and Allouche (2011) define the political economy as "the processes by which ideas, power and resources are conceptualized, negotiated and implemented by different groups at different scales". While a substantial literature has emerged on the political economy of environmental policy, which explains the "political failure" of environmental programmes to efficiently and effectively protect the environment, systematic analysis of the specific issue of climate change using the political economy framework is relatively limited.
Characteristics of climate change
The urgent need to consider and understand the political economy of climate change is based on the specific characteristics of the problem.
The key issues include:
The cross-sectoral nature of climate change: The issue of climate change usually fits into various sectors, which means that the integration of climate change policies into other policy areas is frequently called for. Thus the problem is complicated as it needs to be tackled at multiple scales, with diverse actors involved in the complex governance process. The interaction of these facets leads to political processes with multiple and overlapping conceptualizations, negotiation and governance issues, which requires the understanding of political economy processes.
The problematic perception of climate change as simply a “global” issue: Climate change initiatives and governance approaches have tended to be driven from a global scale. While the development of international agreements has witnessed a progressive step of global political action, this globally-led governance of climate change issue may be unable to provide adequate flexibility for specific national or sub-national conditions. Besides, from the development point of view, the issue of equity and global environmental justice would require a fair international regime within which the impact of climate change and poverty could be simultaneously prevented. In this context, climate change is not only a global crisis that needs the presence of international politics, but also a challenge for national or sub-national governments. The understanding of the political economy of climate change could explain the formulation and translation of international initiatives to specific national and sub-national policy context, which provides an important perspective to tackle climate change and achieve environmental justice.
The growth of climate change finance: Recent years have witnessed a growing number of financial flows and the development of financing mechanisms in the climate change arena. The 2010 United Nations Climate Change Conference in Cancun, Mexico, committed a significant amount of money from developed countries to the developing world in support of adaptation and mitigation technologies. In the short term, fast-start finance will be transferred through various channels including bilateral and multilateral official development assistance, the Global Environment Facility, and the UNFCCC. Besides, a growing number of public funds have provided greater incentives to tackle climate change in developing countries. For instance, the Pilot Program for Climate Resilience aims at creating an integrated and scaled-up approach to climate change adaptation in some low-income countries and preparing for future finance flows. In addition, climate change finance in developing countries could potentially change the traditional aid mechanisms, through the differential interpretations of 'common but differentiated responsibilities' by developing and developed countries. As a result, governance structures will inevitably have to change for developing countries to break out of the traditional donor-recipient relationships. Within these contexts, understanding the political economy processes of financial flows in the climate change arena is crucial to effectively governing the resource transfer and tackling climate change.
Different ideological worldviews of responding to climate change: Nowadays, because of the perception of science as a dominant policy driver, much of the policy prescription and action in the climate change arena has concentrated on assumptions around standardized governance and planning systems, linear policy processes, readily transferable technology, economic rationality, and the ability of science and technology to overcome resource gaps. As a result, there tends to be a bias towards technology-led and managerial approaches that address climate change in apolitical terms. Besides, a wide range of different ideological worldviews leads to highly divergent perceptions of climate change solutions, which also has a great influence on decisions made in response to climate change. Exploring these issues from a political economy perspective provides the opportunity to better understand the "complexity of politics and decision-making processes in tackling climate change, the power relations mediating competing claims over resources, and the contextual conditions for enabling the adoption of technology".
Unintended negative consequences of adaptation policies that fail to factor in environmental-economic trade-offs: Successful adaptation to climate change requires balancing competing economic, social, and political interests. In the absence of such balancing, harmful unintended consequences can undo the benefits of adaptation initiatives. For example, efforts to protect coral reefs in Tanzania forced local villagers to shift from traditional fishing activities to farming that produced higher greenhouse gas emissions.
Socio-political constraints
The role of political economy in understanding and tackling climate change is also founded upon the key issues surrounding the domestic socio-political constraints:
The problems of fragile states: Fragile states—defined as poor performers, conflict and/or post-conflict states—are usually incapable of using the aid for climate change effectively. The issues of power and social equity have exacerbated the climate change impacts, while insufficient attention has been paid to the dysfunction of fragile states. Considering the problems of fragile states, the political economy approach could improve the understanding of the long-standing constraints upon capacity and resilience, through which the problems associated with weak capacity, state-building and conflicts could be better addressed in the context of climate change.
Informal governance: In many poorly performing states, decision-making around the distribution and use of state resources is driven by informal relations and private incentives rather than formal state institutions that are based on equity and law. This informal governance nature that underlies in the domestic social structures prevents the political systems and structures from rational functioning and thus hinders the effective response towards climate change. Therefore, domestic institutions and incentives are critical to the adoption of reforms.
The difficulty of social change: Developmental change in underdeveloped countries is painfully slow because of a series of long-term collective problems, including societies' incapacity to work collectively to improve wellbeing, the lack of technical and social ingenuity, and resistance to innovation and change. In the context of climate change, these problems significantly hinder the promotion of the climate change agenda. Taking a political economy view in underdeveloped countries could help to understand and create incentives to promote transformation and development, which lays a foundation for implementing a climate change adaptation agenda.
Research focuses and approaches
Brandt and Svendsen (2003) introduce a political economy framework, based on the political support function model by Hillman (1982), into the analysis of the choice of instruments to control climate change in European Union policy to implement its Kyoto Protocol target level. In this political economy framework, the climate change policy is determined by the relative strength of stakeholder groups. By examining the different objectives of different interest groups, namely industry groups, consumer groups and environmental groups, the authors explain the complex interaction between the choices of an instrument for the EU climate change policy, specifically the shift from green taxation to a grandfathered permit system.
A 2011 report by the European Bank for Reconstruction and Development (EBRD) takes a political economy approach to explain why some countries adopt climate change policies while others do not, specifically among the countries in the transition region. This work analyzes the different political economy aspects of the characteristics of climate change policies so as to understand the likely factors driving climate change mitigation outcomes in many transition countries. The main conclusions are listed below:
The level of democracy alone is not a major driver of climate change policy adoption, which means that the expectations of contribution to global climate change mitigation are not necessarily limited by the political regime of a given country.
Public knowledge, shaped by various factors including the threat of climate change in a particular country, the national level of education and the existence of free media, is a critical element in climate change policy adoption, as countries whose public is more aware of the causes of climate change are significantly more likely to adopt climate change policies. The focus should, therefore, be on promoting public awareness of the urgent threat of climate change and on preventing information asymmetries in many transition countries.
The relative strength of the carbon-intensive industry is a major deterrent to the adoption of climate change policies, as it partly accounts for the information asymmetries. The carbon-intensive industries often influence governments' decision-making on climate change policy, which calls for a change in the incentives perceived by these industries and for their transition to a low-carbon production pattern. Efficient means include energy price reform and the introduction of international carbon trading mechanisms.
The competitive edge gained by national economies in the transition region in a global economy, where increasing international pressure is put on reducing emissions, would enhance their political regimes' domestic legitimacy, which could help to address the inherent economic weaknesses underlying the lack of economic diversification and the global economic crisis.
Tanner and Allouche (2011) propose a new conceptual and methodological framework for analyzing the political economy of climate change in their latest work, which focuses on climate change policy processes and outcomes in terms of ideas, power and resources. The new political economy approach is expected to go beyond the dominant political economy tools formulated by international development agencies to analyze climate change initiatives, which have ignored the way that ideas and ideologies determine policy outcomes. The authors assume that each of the three lenses, namely ideas, power and resources, tends to be predominant at one stage of the policy process of the political economy of climate change, with "ideas and ideologies predominant in the conceptualization phase, power in the negotiation phase and resources, institutional capacity and governance in the implementation phase". It is argued that these elements are critical in the formulation of international climate change initiatives and their translation to national and sub-national policy contexts.
See also
Business action on climate change
Carbon Disclosure Project
Carbon emission trading
Climate target
Clean Development Mechanism
Economic analysis of climate change
List of international environmental agreements
Project 2025
Notes
References
Further reading
Naomi Klein (2019). On Fire: The Burning Case for a Green New Deal. Allen Lane.
The Climate Transparency Project, The National Security Archive
Climate change
Environmental social science
Political economy
Environmental terminology | Politics of climate change | [
"Environmental_science"
] | 9,272 | [
"Environmental social science"
] |
2,260,933 | https://en.wikipedia.org/wiki/Bijective%20numeration | Bijective numeration is any numeral system in which every non-negative integer can be represented in exactly one way using a finite string of digits. The name refers to the bijection (i.e. one-to-one correspondence) that exists in this case between the set of non-negative integers and the set of finite strings using a finite set of symbols (the "digits").
Most ordinary numeral systems, such as the common decimal system, are not bijective because more than one string of digits can represent the same positive integer. In particular, adding leading zeroes does not change the value represented, so "1", "01" and "001" all represent the number one. Even though only the first is usual, the fact that the others are possible means that the decimal system is not bijective. However, the unary numeral system, with only one digit, is bijective.
A bijective base-k numeration is a bijective positional notation. It uses a string of digits from the set {1, 2, ..., k} (where k ≥ 1) to encode each positive integer; a digit's position in the string defines its value as a multiple of a power of k. This notation is sometimes called k-adic, but it should not be confused with the p-adic numbers: bijective numerals are a system for representing ordinary integers by finite strings of nonzero digits, whereas the p-adic numbers are a system of mathematical values that contain the integers as a subset and may need infinite sequences of digits in any numerical representation.
Definition
The base-k bijective numeration system uses the digit-set {1, 2, ..., k} (k ≥ 1) to uniquely represent every non-negative integer, as follows:
The integer zero is represented by the empty string.
The integer represented by the nonempty digit-string
a_n a_(n−1) ... a_1 a_0
is
a_n k^n + a_(n−1) k^(n−1) + ... + a_1 k^1 + a_0 k^0.
The digit-string representing the integer m > 0 is
where
and
being the least integer not less than x (the ceiling function).
In contrast, standard positional notation can be defined with a similar recursive algorithm where
a_i = m_i − k⌊m_i/k⌋ and m_(i+1) = ⌊m_i/k⌋,
⌊x⌋ being the greatest integer not greater than x (the floor function).
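As a concrete illustration of these recurrences, here is a minimal Python sketch (the function names are illustrative, not standard) that converts a positive integer to its bijective base-k digit string and back:

```python
def to_bijective(m: int, k: int) -> list[int]:
    """Digits of m in bijective base-k, most significant first.
    Digits are drawn from {1, ..., k}; zero maps to the empty list."""
    digits = []
    while m > 0:
        q = (m - 1) // k           # ceil(m/k) - 1, in exact integer arithmetic
        digits.append(m - q * k)   # a_i = m_i - k*q, always in 1..k
        m = q                      # m_(i+1)
    return digits[::-1]

def from_bijective(digits: list[int], k: int) -> int:
    """Value of a bijective base-k digit string (most significant first)."""
    m = 0
    for d in digits:
        assert 1 <= d <= k
        m = m * k + d
    return m

# The examples worked later in the article:
assert to_bijective(2427, 5) == [3, 4, 1, 5, 2]    # "34152" in bijective base-5
assert to_bijective(1200, 10) == [1, 1, 9, 10]     # "119A" in bijective base-10
assert from_bijective([3, 4, 1, 5, 2], 5) == 2427
```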
Extension to integers
For base k > 1, the bijective base-k numeration system can be extended to negative integers in the same way as the standard base-k numeral system, by allowing a left-infinite sequence of repeating digits. This rests on the Euler summation
(k − 1) + (k − 1)k + (k − 1)k^2 + ... = −1,
which is valid in the k-adic sense, meaning that the left-infinite digit string ...(k−1)(k−1)(k−1) represents −1; a negative integer −n is then represented by that left-infinite prefix followed by a finite digit string whose value is k^m − n, where m is the length of that string. (For base k = 2 the repeating digit 2 is needed instead for numbers below −1, since ...222 = −2 in the 2-adic sense.) This is similar to how, in signed-digit representations, every integer is represented by a finite digit string preceded by an infinite repetition of a fixed digit. This representation is no longer bijective, as the entire set of left-infinite sequences of digits is used to represent the k-adic integers, of which the integers are only a subset.
Properties of bijective base-k numerals
For a given base k ≥ 1,
the number of digits in the bijective base-k numeral representing a nonnegative integer n is ⌊log_k((n + 1)(k − 1))⌋, in contrast to ⌊log_k(n)⌋ + 1 for ordinary base-k numerals; if k = 1 (i.e., unary), then the number of digits is just n;
the smallest nonnegative integer representable in a bijective base-k numeral of length ℓ is (k^ℓ − 1)/(k − 1);
the largest nonnegative integer representable in a bijective base-k numeral of length ℓ is (k^(ℓ+1) − k)/(k − 1), equivalent to k(k^ℓ − 1)/(k − 1), or (k^(ℓ+1) − 1)/(k − 1) − 1;
the bijective base-k and ordinary base-k numerals for a nonnegative integer n are identical if and only if the ordinary numeral does not contain the digit 0 (or, equivalently, the bijective numeral is neither the empty string nor contains the digit k).
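The digit-count formula above is easy to sanity-check numerically; a brief Python sketch, reusing the recurrence from the Definition section:

```python
import math

k = 10
for n in [1, 9, 10, 11, 109, 110, 111, 1200]:
    predicted = math.floor(math.log((n + 1) * (k - 1), k))
    # Count digits directly via the recurrence m_(i+1) = ceil(m_i/k) - 1.
    m, count = n, 0
    while m > 0:
        m = (m - 1) // k
        count += 1
    assert predicted == count, (n, predicted, count)
```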
For a given base k ≥ 1,
there are exactly k^ℓ bijective base-k numerals of length ℓ;
a list of bijective base-k numerals, in natural order of the integers represented, is automatically in shortlex order (shortest first, lexicographical within each length). Thus, using λ to denote the empty string, the base 1, 2, 3, 8, 10, 12, and 16 numerals are as follows (where the ordinary representations are listed for comparison, and A, B, C denote the digit values ten, eleven, twelve):

n (ordinary)  base 1         base 2  base 3  base 8  base 10  base 12  base 16
0             λ              λ       λ       λ       λ        λ        λ
1             1              1       1       1       1        1        1
2             11             2       2       2       2        2        2
3             111            11      3       3       3        3        3
4             1111           12      11      4       4        4        4
5             11111          21      12      5       5        5        5
6             111111         22      13      6       6        6        6
7             1111111        111     21      7       7        7        7
8             11111111       112     22      8       8        8        8
9             111111111      121     23      11      9        9        9
10            1111111111     122     31      12      A        A        A
11            11111111111    211     32      13      11       B        B
12            111111111111   212     33      14      12       C        C
Examples
34152 (in bijective base-5) = 3×5^4 + 4×5^3 + 1×5^2 + 5×5^1 + 2×1 = 2427 (in decimal).
119A (in bijective base-10, with "A" representing the digit value ten) = 1×10^3 + 1×10^2 + 9×10^1 + 10×1 = 1200 (in decimal).
A typical alphabetic list with more than 26 elements is bijective, using the order of A, B, C...X, Y, Z, AA, AB, AC...ZX, ZY, ZZ, AAA, AAB, AAC...
The bijective base-10 system
The bijective base-10 system is a base ten positional numeral system that does not use a digit to represent zero. It instead has a digit to represent ten, such as A.
As with conventional decimal, each digit position represents a power of ten, so for example 123 is "one hundred, plus two tens, plus three units." All positive integers that are represented solely with non-zero digits in conventional decimal (such as 123) have the same representation in the bijective base-10 system. Those that use a zero must be rewritten, so for example 10 becomes A, conventional 20 becomes 1A, conventional 100 becomes 9A, conventional 101 becomes A1, conventional 302 becomes 2A2, conventional 1000 becomes 99A, conventional 1110 becomes AAA, conventional 2010 becomes 19AA, and so on.
Addition and multiplication in this system are essentially the same as with conventional decimal, except that carries occur when a position exceeds ten, rather than when it exceeds nine. So to calculate 643 + 759, there are twelve units (write 2 at the right and carry 1 to the tens), ten tens (write A with no need to carry to the hundreds), thirteen hundreds (write 3 and carry 1 to the thousands), and one thousand (write 1), to give the result 13A2 rather than the conventional 1402.
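A minimal sketch of this carry rule in Python, assuming digits are held as integer values 1 to 10 (with 10 printed as "A"); the function name is illustrative:

```python
def add_bijective_decimal(x: list[int], y: list[int]) -> list[int]:
    """Add two bijective base-10 numbers given as digit lists
    (most significant first, digit values 1..10)."""
    result, carry = [], 0
    x, y = x[::-1], y[::-1]            # work from the least significant digit
    for i in range(max(len(x), len(y))):
        s = (x[i] if i < len(x) else 0) + (y[i] if i < len(y) else 0) + carry
        carry = (s - 1) // 10          # carry whenever a position's total exceeds ten
        result.append(s - 10 * carry)  # remaining digit is always in 1..10
    if carry:
        result.append(carry)
    return result[::-1]

digits = add_bijective_decimal([6, 4, 3], [7, 5, 9])
print("".join("A" if d == 10 else str(d) for d in digits))  # prints 13A2
```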
The bijective base-26 system
In the bijective base-26 system one may use the Latin alphabet letters "A" to "Z" to represent the 26 digit values one to twenty-six. (A=1, B=2, C=3, ..., Z=26)
With this choice of notation, the number sequence (starting from 1) begins A, B, C, ..., X, Y, Z, AA, AB, AC, ..., AX, AY, AZ, BA, BB, BC, ...
Each digit position represents a power of twenty-six, so for example, the numeral WI represents the value 23×26^1 + 9×26^0 = 607 in base 10.
Many spreadsheets including Microsoft Excel use this system to assign labels to the columns of a spreadsheet, starting A, B, C, ..., Z, AA, AB, ..., AZ, BA, ..., ZZ, AAA, etc. For instance, in Excel 2013, there can be up to 16384 columns (2^14), labeled from A to XFD. Malware variants are also named using this system: for example, the first widespread Microsoft Word macro virus, Concept, is formally named WM/Concept.A, its 26th variant WM/Concept.Z, the 27th variant WM/Concept.AA, et seq. A variant of this system is used to name variable stars. It can be applied to any problem where a systematic naming using letters is desired, while using the shortest possible strings.
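Converting between such column labels and ordinary integers is a common programming exercise; a minimal Python sketch (function names illustrative) might look like this:

```python
def column_label(n: int) -> str:
    """Spreadsheet-style label for column n >= 1 (bijective base-26)."""
    label = ""
    while n > 0:
        q = (n - 1) // 26                               # ceil(n/26) - 1
        label = chr(ord("A") + n - 26 * q - 1) + label  # digit 1..26 -> A..Z
        n = q
    return label

def column_number(label: str) -> int:
    """Inverse: 'A' -> 1, 'Z' -> 26, 'AA' -> 27, 'XFD' -> 16384."""
    n = 0
    for ch in label:
        n = n * 26 + (ord(ch) - ord("A") + 1)
    return n

assert column_label(16384) == "XFD" and column_number("XFD") == 16384
```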
Historical notes
The fact that every non-negative integer has a unique representation in bijective base-k (k ≥ 1) is a "folk theorem" that has been rediscovered many times. Early instances are Foster (1947) for the case k = 10, and Smullyan (1961) and Böhm (1964) for all k ≥ 1. Smullyan uses this system to provide a Gödel numbering of the strings of symbols in a logical system; Böhm uses these representations to perform computations in the programming language P′′. Knuth (1969) mentions the special case of k = 10, and Salomaa (1973) discusses the cases k ≥ 2. Forslund (1995) appears to be another rediscovery, and hypothesizes that if ancient numeration systems used bijective base-k, they might not be recognized as such in archaeological documents, due to general unfamiliarity with this system.
Notes
References
Böhm, Corrado (1964). "On a family of Turing machines and the related programming language". ICC Bulletin 3.
Forslund, Robert R. (1995). "A logical alternative to the existing positional number system". Southwest Journal of Pure and Applied Mathematics.
Foster, J. E. (1947). "A number system without a zero symbol". Mathematics Magazine.
Knuth, D. E. (1969). The Art of Computer Programming, Vol. 2: Seminumerical Algorithms. Addison-Wesley. (Discusses bijective base-10.)
Salomaa, Arto (1973). Formal Languages. Academic Press. (Discusses bijective base-k for all k ≥ 2.)
Smullyan, Raymond M. (1961). Theory of Formal Systems. Annals of Mathematics Studies. Princeton University Press.
Numeral systems
Non-standard positional numeral systems | Bijective numeration | [
"Mathematics"
] | 1,881 | [
"Numeral systems",
"Mathematical objects",
"Numbers"
] |
2,260,942 | https://en.wikipedia.org/wiki/Shear%20strength | In engineering, shear strength is the strength of a material or component against the type of yield or structural failure when the material or component fails in shear. A shear load is a force that tends to produce a sliding failure on a material along a plane that is parallel to the direction of the force. When a paper is cut with scissors, the paper fails in shear.
In structural and mechanical engineering, the shear strength of a component is important for designing the dimensions and materials to be used for the manufacture or construction of the component (e.g. beams, plates, or bolts). In a reinforced concrete beam, the main purpose of reinforcing bar (rebar) stirrups is to increase the shear strength.
Equations
For shear stress τ, the maximum value satisfies
τ = (σ1 − σ3) / 2
where
σ1 is the major principal stress and
σ3 is the minor principal stress.
In general: ductile materials (e.g. aluminum) fail in shear, whereas brittle materials (e.g. cast iron) fail in tension.
To calculate:
Given total force at failure (F) and the force-resisting area (e.g. the cross-section of a bolt loaded in shear), ultimate shear strength (τ) is:
τ = F / A
where A is the force-resisting area.
For average shear stress
τ_avg = V / A
where
τ_avg is the average shear stress,
V is the shear force applied to each section of the part, and
A is the area of the section.
In other words, the average shear stress is the total shear force V divided by the sheared area A. This is only the average stress; the actual stress distribution is not uniform. In real-world applications this equation gives only an approximation, and the maximum shear stress will be higher, so the design shear strength needs a margin above this estimate.
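As a worked illustration of the average-shear-stress formula, with load and bolt diameter chosen purely for example, a minimal Python sketch:

```python
import math

def average_shear_stress(force_n: float, diameter_m: float) -> float:
    """Average shear stress (Pa) on a circular cross-section:
    tau_avg = V / A, with A = pi * d**2 / 4."""
    area = math.pi * diameter_m ** 2 / 4.0
    return force_n / area

# Assumed illustrative values: a 10 mm bolt carrying 15 kN in single shear.
tau = average_shear_stress(15e3, 0.010)
print(f"average shear stress = {tau / 1e6:.0f} MPa")  # about 191 MPa
```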
Comparison
As a very rough guide relating tensile, yield, and shear strengths, typical ratios are about USS ≈ 0.75×UTS and SYS ≈ 0.58×TYS for steels, and about USS ≈ 0.65×UTS and SYS ≈ 0.55×TYS for aluminum alloys.
USS: Ultimate Shear Strength, UTS: Ultimate Tensile Strength, SYS: Shear Yield Stress, TYS: Tensile Yield Stress
There are no published standard values for shear strength as there are for tensile and yield strength. Instead, it is commonly estimated as 60% of the ultimate tensile strength. Shear strength can be measured by a torsion test, where it is equal to the torsional strength.
When values measured from physical samples are desired, a number of testing standards are available, covering different material categories and testing conditions. In the US, ASTM standards for measuring shear strength include ASTM B769, B831, D732, D4255, D5379, and D7078. Internationally, ISO testing standards for shear strength include ISO 3597, 12579, and 14130.
See also
Shear modulus
Shear stress
Shear strain
Shear strength (soil)
Shear strength (Discontinuity)
Strength of materials
Tensile strength
References
Shear strength | Shear strength | [
"Engineering"
] | 560 | [
"Structural engineering",
"Shear strength",
"Mechanical engineering"
] |
2,261,087 | https://en.wikipedia.org/wiki/Sign%20of%20the%20horns | The sign of the horns is a hand gesture with a variety of meanings and uses in various cultures. It is formed by extending the index and little fingers while holding the middle and ring fingers down with the thumb.
Religious and superstitious meaning
In Hatha Yoga, a similar hand gesture, with the tips of the middle and ring fingers touching the thumb, is known as the Apana Mudra, a gesture believed to rejuvenate the body. In Indian classical dance forms, it symbolizes the lion. In Buddhism, the Karana Mudra is seen as an apotropaic gesture to expel demons, remove negative energy, and ward off evil. It is commonly found on depictions of Gautama Buddha. It is also found on the Song dynasty statue of Laozi, the founder of Taoism, on Mount Qingyuan, China.
An apotropaic usage of the sign can be seen in Italy and in other Mediterranean cultures where, when confronted with unfortunate events, or simply when these events are mentioned, the sign of the horns may be given to ward off further bad luck. It is also used traditionally to counter or ward off the "evil eye" (malocchio). In Italy specifically, the gesture is known as the corna ('horns'). With fingers pointing down, it is a common Mediterranean apotropaic gesture, by which people seek protection in unlucky situations (a Mediterranean equivalent of knocking on wood). The President of the Italian Republic, Giovanni Leone, startled the media when, while in Naples during an outbreak of cholera, he shook the hands of patients with one hand while with the other behind his back he superstitiously made the corna, presumably to ward off the disease or in reaction to being confronted by such misfortune. Very often it is accompanied by a characteristic superstitious invocation: "Tèee!", a slang form derived from "Tiè!", "Tieni!" ("Hold it!"), second person of the imperative of the verb "Tenere" ("To Hold").
In Italy and other parts of the Mediterranean region, the gesture must usually be performed with the fingers tilting downward or in a leveled position not pointed at someone and without movement to signify the warding off of bad luck; in the same region and elsewhere, the gesture may take a different, offensive, and insulting meaning if it is performed with fingers upward or if directed aggressively towards someone especially in a swiveling motion (see section below).
The sign of the horns is used during religious rituals in Wicca, to invoke or represent the Horned God.
In LaVeyan Satanism, the sign of the horns is used as a traditional salutation, either for informal or ritual purposes.
Offensive gesture
In many Mediterranean and Latin countries, such as Colombia, Greece, Italy, Portugal, Spain and Mexico, when directed towards someone, pointed upward, and/or swiveled back and forth, the sign offensively implies cuckoldry in regard to the targeted individual; the common words for cuckolded in Greek, Italian, Spanish, and Portuguese are, respectively, keratás (κερατάς), cornuto, cornudo, and corno, literally meaning "horned [one]". In this particular case, in Italy, the gesture is often accompanied by the invocation: "Cornuto!" ("Cuckold!"). As previously stated above, in Italy and certain other Mediterranean countries, the sign, often when pointing downwards, but occasionally also upwards, can serve also as a talismanic gesture to ward off bad luck. However, the positioning of the hand sign and the context in which it is used generally renders obvious to Italian and other Mediterranean people the meaning of the sign in a particular situation. During a European Union meeting in February 2002, Italian prime minister Silvio Berlusconi was photographed performing in a jocular manner the offensive "cornuto" version of the gesture behind the back of Josep Piqué, the Spanish foreign minister.
Northwestern European and North American popular culture
Contemporary use by musicians and actors
There is a 1927 jazz recording by the New Orleans Owls, "Throwin' the Horns", on 78 rpm, Columbia 1261-D. It has a humorous vocal by two of the band members.
Ike Turner said in an interview that he used the sign in his piano playing on Howlin' Wolf's blues song "How Many More Years" in 1951.
Marlon Brando makes the sign whilst singing "Luck Be a Lady" in the 1955 film Guys and Dolls, seeming to indicate it was a sign for snake eyes in the craps game he is playing for the gamblers' souls.
The 1969 back album cover for Witchcraft Destroys Minds & Reaps Souls on Mercury Records by Chicago-based psychedelic-occult rock band Coven, led by singer Jinx Dawson, pictured Coven band members giving the "sign of the horns". According to a Facebook post by Dawson, she used the sign as early as late 1967 when Coven started, to which she posted a photo showing her giving the sign on stage.
Beginning in the early 1970s, the horns were known as the "P-Funk sign" to fans of Parliament-Funkadelic. It was used by George Clinton and Bootsy Collins as the password to the Mothership, a central element in Parliament's science-fiction mythology, and fans used it in return to show their enthusiasm for the band. Collins is depicted showing the P-Funk sign on the cover of his 1977 album Ahh... The Name Is Bootsy, Baby!.
Heavy metal culture
Ronnie James Dio was known for popularizing the sign of the horns in heavy metal. He claimed his Italian grandmother used it to ward off the evil eye (which is known in Italy as malocchio). Dio began using the sign soon after joining the metal band Black Sabbath in 1979. The previous singer in the band, Ozzy Osbourne, was rather well known for using the "peace" sign at concerts, raising the index and middle finger in the form of a V. Dio, in an attempt to connect with the fans, wanted to similarly use a hand gesture. However, not wanting to copy Osbourne, he chose to use the sign his grandmother always made. The horns became famous in metal concerts very soon after Black Sabbath's first tour with Dio. The sign would later be appropriated by heavy metal fans.
Geezer Butler of Black Sabbath can be seen "raising the horns" in a photograph taken in 1969. The photograph is included in the CD booklet of the Symptom of the Universe: The Original Black Sabbath 1970–1978 2002 compilation album. This would indicate that there had been some association between the "horns" and heavy metal before Dio's popularization of it. Although The Beatles are not directly associated with heavy metal, John Lennon can be seen making the horn sign in a photograph taken two years before Butler's, shot in late 1967 to promote the band's upcoming animated film Yellow Submarine. The official 1968 movie poster, showing the Beatles in cartoon form, depicts Lennon performing the same gesture.
When asked in a 2001 interview if he was the one who introduced the hand gesture to the metal subculture, Dio said he doubted he was the first person ever to make the sign.
Gene Simmons of the rock group KISS attempted to claim the "devil horns" hand gesture for his own. According to CBS News, Simmons filed an application on June 16, 2017, with the United States Patent and Trademark Office for a trademark on the hand gesture he regularly shows during concerts and public appearances—thumb, index, and pinky fingers extended, with the middle and ring fingers folded down (like the ILY sign meaning "I love you" in the American Sign Language). According to Simmons, this hand gesture was first commercially used—by him—on November 14, 1974. He claimed the hand gesture should be trademarked for "entertainment, namely live performances by a musical artist [and] personal appearances by a musical artist." Simmons abandoned this application on June 21, 2017.
The Japanese kawaii metal band Babymetal uses the kitsune sign, their own variation of the sign of the horns, symbolizing their personal deity, the Fox God. The middle finger, ring finger, and thumb join at the tips to form the snout, while the extended index and pinky fingers form the ears. This gesture is similar in appearance to the salute of the Turanist Grey Wolves movement.
Electronic communication
In text-based electronic communication, the sign of the horns is represented with the \../, \m/ or |m| emoticon and sometimes with /../. The Unicode character U+1F918 🤘 SIGN OF THE HORNS was introduced in Unicode 8.0 as an emoji, on June 17, 2015.
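Since the character is ordinary Unicode, it can be emitted programmatically; a minimal Python sketch, for illustration only:

```python
# U+1F918 SIGN OF THE HORNS, added in Unicode 8.0
print("\U0001F918")             # the emoji by code point: 🤘
print("\N{SIGN OF THE HORNS}")  # the same character by its official Unicode name
```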
Gang hand signal
The "sign of the horns" hand gesture is used in criminal gang subcultures to indicate membership or affiliation with Mara Salvatrucha. The significance is both the resemblance of an inverted "devil horns" to the Latin letter 'M', and in the broader demonic connotation, of fierceness and nonconformity.
Sports culture
Hook 'em Horns is the slogan and hand signal of the University of Texas at Austin (UT). Students and alumni of the university employ a greeting consisting of the phrase "Hook 'em" or "Hook 'em Horns" and also use the phrase as a parting good-bye or as the closing line in a letter or story. The gesture is meant to approximate the shape of the head and horns of the UT mascot, the Texas Longhorn Bevo. Rival schools such as the Oklahoma Sooners or Texas A&M Aggies will turn the horns upside down meaning "Horns Down" as an insult.
Fans of the University of South Florida Bulls use the same hand sign at their athletic events, except that the hand is turned around to face the other way, with the middle and ring fingers extending towards the person presenting the "Go Bulls" sign.
Fans of North Dakota State University Bison athletics also use a similar hand gesture, known as "Go Bison!" The pinky and index fingers are usually slightly bent, however, to mimic the shape of a bison's horns.
Fans of North Carolina State University Wolfpack athletics use a similar gesture with the middle and ring fingers moving up and down over the thumb to mimic a wolf's jaw.
Fans of University of California, Irvine Anteaters use a similar sign with the middle and ring fingers out to resemble the head of an anteater.
Fans of University of Nevada, Reno Wolf Pack athletics use a similar sign with the middle and ring fingers out to resemble the wolf's snout.
A variation of this hand gesture is also used in the professional wrestling industry, which fans dub the "Too Sweet". It was possibly innovated by Scott Hall and the other members of The Kliq based on the Turkish Grey Wolves organization hand gesture according to Sean Waltman, and has since been attributed to other wrestling groups such as the nWo and Bullet Club, as well as individual wrestlers such as Finn Bálor.
Fans of University of Utah athletics, particularly football and gymnastics, use a gesture where the index and pinky finger are straight and parallel to each other, forming a block "U."
Fans of Northwestern State University Demon athletics also use a similar hand gesture, known as "Fork 'em!" The pinky and index fingers are extended but a little more parallel to each other resembling the horns on a demon.
Arizona State University Sun Devil fans make a pitchfork sign by extending the index and middle fingers, as well as the pinky. The thumb holds down the ring finger to complete the gesture.
Fans of the Wichita State University Shockers frequently hold up their middle finger in addition to the pointer and pinky fingers as a reference to the comic sexual act known as "the shocker".
Fans of the Grand Canyon University Antelopes use this hand gesture with a slight variation by touching the tips of the ring and middle finger with the thumb to form the shape of an antelope and its horns. Often followed by the phrase "Lopes up".
Fans of the Universidad de Chile soccer team use this gesture to represent their support for the team by forming a U-shaped hand gesture, often followed by the phrase "Grande la U".
Fans of University at Buffalo Buffalo Bulls athletics use the same hand sign at their athletic events. This gesture is meant to resemble a bull's horns.
Russian culture
In Russian children's folklore the sign of the horns (called koza, "goat") is associated with the nursery rhyme ("Here comes a horned goat"). When telling the rhyme to a toddler, the narrator tickles the child with the "horns" at the end of the rhyme.
See also
Cornicello
ILY sign is sometimes confused with this gesture because many users tend to do the "horns" improperly by extending their thumb.
Shaka sign
References
External links
President George W. Bush gestures the "Hook 'em Horns" salute of the University of Texas Longhorns. (...)
Odyssey of the Devil Horns: Who is responsible for bringing metal's famous hand signal to the tribe? Los Angeles CityBeat, September 9, 2004
Hand gestures
Heavy metal subculture
Superstitions of Italy
Symbols
Mediterranean | Sign of the horns | [
"Mathematics"
] | 2,707 | [
"Symbols"
] |
2,261,176 | https://en.wikipedia.org/wiki/Hepatocyte%20nuclear%20factor%204 | HNF4 (Hepatocyte Nuclear Factor 4) is a nuclear receptor protein mostly expressed in the liver, gut, kidney, and pancreatic beta cells that is critical for liver development. In humans, there are two paralogs of HNF4, HNF4α and HNF4γ, encoded by two separate genes, HNF4A and HNF4G, respectively.
Ligands
HNF4 was originally classified as an orphan receptor that exhibits constitutive transactivation activity apparently by being continuously bound to a variety of fatty acids. The existence of a ligand for HNF4 has been somewhat controversial, but linoleic acid (LA) has been identified as the endogenous ligand of native HNF4 expressed in mouse liver; the binding of LA to HNF4 is reversible.
The ligand binding domain of HNF4, as with other nuclear receptors, adopts a canonical alpha helical sandwich fold and interacts with co-activator proteins.
HNF4 binds to the consensus sequence AGGTCAaAGGTCA in order to activate transcription.
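As an illustration of how such a consensus can be located, the short Python sketch below scans a DNA string for the direct-repeat motif; the promoter fragment, the function name, and the relaxed single-base spacer (matching any nucleotide where the consensus shows the lowercase 'a') are illustrative assumptions, and real motif scanning typically uses position weight matrices rather than exact matching:

```python
import re

# HNF4 consensus from the text: AGGTCAaAGGTCA (lowercase 'a' treated here as a
# weakly conserved spacer, so any nucleotide is accepted at that position)
HNF4_CONSENSUS = re.compile(r"AGGTCA[ACGT]AGGTCA", re.IGNORECASE)

def find_hnf4_sites(sequence: str):
    """Return (start_position, matched_site) pairs for every consensus hit."""
    return [(m.start(), m.group()) for m in HNF4_CONSENSUS.finditer(sequence)]

# Hypothetical promoter fragment, used purely for demonstration
promoter = "TTGCAGGTCAAAGGTCATTCG"
print(find_hnf4_sites(promoter))  # [(4, 'AGGTCAAAGGTCA')]
```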
Pathology
Mutations in the HNF4A gene have been linked to maturity onset diabetes of the young 1 (MODY1).
This appears to be due to HNF4α's role in the synthesis of sex hormone-binding globulin (SHBG), which is known to be severely diminished in patients with insulin resistance.
See also
Hepatocyte nuclear factors
Hepatocyte nuclear factor 4A
References
External links
Intracellular receptors
Transcription factors | Hepatocyte nuclear factor 4 | [
"Chemistry",
"Biology"
] | 319 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
2,261,362 | https://en.wikipedia.org/wiki/Legacy%20port | In computing, a legacy port is a computer port or connector that is considered by some to be fully or partially superseded. The replacement ports usually provide most of the functionality of the legacy ports with higher speeds, more compact design, or plug and play and hot swap capabilities for greater ease of use. Modern PC motherboards use separate Super I/O controllers to provide legacy ports, since current chipsets do not offer direct support for them. A category of computers called legacy-free PCs omits these ports, typically retaining only USB for external expansion.
USB adapters are often used to provide legacy ports if they are required on systems not equipped with them.
Common legacy ports
Ports commonly considered legacy include the PS/2 keyboard and mouse connectors, the RS-232 serial port, the parallel (printer) port, the game port, and the VGA video connector.
See also
Legacy encoding
Legacy system
References
Computer buses
Legacy hardware | Legacy port | [
"Technology"
] | 146 | [
"Computing stubs",
"Computer hardware stubs"
] |
2,261,503 | https://en.wikipedia.org/wiki/Aluminium%20bronze | Aluminium bronze is a type of bronze in which aluminium is the main alloying metal added to copper, in contrast to standard bronze (copper and tin) or brass (copper and zinc). A variety of aluminium bronzes of differing compositions have found industrial use, with most ranging from 5% to 11% aluminium by weight, the remaining mass being copper; other alloying agents such as iron, nickel, manganese, and silicon are also sometimes added to aluminium bronzes.
Compositions
The most common standard aluminium bronze wrought alloy compositions are designated under ISO 428. Compositions are specified as proportions of the alloy by weight, with copper making up the remainder in each case.
Material properties
Aluminium bronzes are most valued for their higher strength and corrosion resistance as compared to other bronze alloys. These alloys are tarnish-resistant and show low rates of corrosion in atmospheric conditions, low oxidation rates at high temperatures, and low reactivity with sulfurous compounds and other exhaust products of combustion. They are also resistant to corrosion in sea water. Aluminium bronzes' resistance to corrosion results from the aluminium in the alloys, which reacts with atmospheric oxygen to form a thin, tough surface layer of alumina (aluminium oxide) which acts as a barrier to corrosion of the copper-rich alloy. The addition of tin can improve corrosion resistance.
Another notable property of aluminium bronzes is their biostatic effect. The copper component of the alloy prevents colonization by marine organisms including algae, lichens, barnacles, and mussels, and therefore can be preferable to stainless steel or other non-cupric alloys in applications where such colonization would be unwanted.
Aluminium bronzes tend to have a golden color.
Applications
Aluminium bronzes are most commonly used in applications where their resistance to corrosion makes them preferable to other engineering materials. These applications include plain bearings and landing gear components on aircraft, guitar strings, valve components, engine components (especially for seagoing ships), underwater fastenings in naval architecture, and ship propellers. Aluminium bronze is also used to fulfil the ATEX directive for Zones 1, 2, 21, and 22. The attractive gold-toned coloration of aluminium bronzes has also led to their use in jewellery.
Aluminium bronzes are in the highest demand from the following industries and areas:
General sea water-related service
Water supply
Oil and petrochemical industries (i.e. tools for use in non-sparking environments)
Specialised anti-corrosive applications
Certain structural retrofit building applications
Aluminium bronze can be welded using the MIG welding technique with an aluminium bronze core and pure argon gas.
Aluminium bronze is used to replace gold for the casting of dental crowns. The alloys used are chemically inert and have the appearance of gold.
The Doehler Die Casting Co. of Toledo, Ohio were known for the production of Brastil, a high tensile corrosion resistant bronze alloy.
Italy pioneered the use for coinage of an aluminium-bronze alloy called bronzital (literally "Italian bronze") in its 5- and 10-centesimi from 1939. Its alloy was finalized in 1967 to 92% copper, 6% aluminium, and 2% nickel, and was since used in the 20, 200 and 500 Italian Lira coins until 2001. Bronzital has since been used for the Australian and New Zealand 1- and 2-dollar coins, the pre-2009 Mexican 20- and 50-centavo coins, the inner cores of the bi-metallic Mexican 1-, 2- and 5-peso coins, the pre-2017 Philippine 10-peso coin, the Canadian 2 dollar coin (a.k.a. the 'toonie'), and the outer rings of the Mexican 10-, 20-, 50- and 100-peso coins.
Nordic Gold, composed of 89% copper, 5% aluminium, 5% zinc, and 1% tin, is a more recently developed aluminium-bronze alloy for coinage. It was first used for the Swedish 10-kronor coin in 1991, and became widespread after the introduction of Nordic gold 10, 20 and 50-cent Euro coins in 2002.
Aluminium bronze is used in marine applications due to its excellent corrosion resistance in seawater. It is found in marine hardware like propellers, pumps, and valves, as well as in shipbuilding components and hull fittings. For its non-magnetic properties, it is also used in naval vessels, particularly in sonar equipment.
References
External links
Copper Development Association. "Publication Number 80: Aluminium Bronze Alloys Corrosion Resistance Guide", PDF . Retrieved April 9, 2014.
Copper Development Association. "Publication Number 82: Aluminium Bronze Alloys Technical Data". Retrieved April 9, 2014
Bronze
Copper alloys
Aluminium–copper alloys | Aluminium bronze | [
"Chemistry"
] | 964 | [
"Alloys",
"Copper alloys",
"Aluminium alloys"
] |
2,261,519 | https://en.wikipedia.org/wiki/User%20interface%20design | User interface (UI) design or user interface engineering is the design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. In computer or software design, user interface (UI) design primarily focuses on information architecture. It is the process of building interfaces that clearly communicate to the user what's important. UI design refers to graphical user interfaces and other forms of interface design. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals (user-centered design). User-centered design is typically accomplished through the execution of modern design thinking which involves empathizing with the target audience, defining a problem statement, ideating potential solutions, prototyping wireframes, and testing prototypes in order to refine final interface mockups.
User interfaces are the points of interaction between users and designs.
Three types of user interfaces
Graphical user interfaces (GUIs)
Users interact with visual representations on a computer's screen. The desktop is an example of a GUI.
Interfaces controlled through voice
Users interact with these through their voices. Most smart assistants, such as Siri on smartphones or Alexa on Amazon devices, use voice control.
Interactive interfaces utilizing gestures
Users interact with 3D design environments through their bodies, e.g., in virtual reality (VR) games.
Interface design is involved in a wide range of projects, from computer systems, to cars, to commercial planes; all of these projects involve much of the same basic human interactions yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered on their expertise, whether it is software design, user research, web design, or industrial design.
Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design and typography are utilized to support its usability, influencing how the user performs certain interactions and improving the aesthetic appeal of the design; design aesthetics may enhance or detract from the ability of users to use the functions of the interface. The design process must balance technical functionality and visual elements (e.g., mental model) to create a system that is not only operational but also usable and adaptable to changing user needs.
UI design vs. UX design
Compared to UX design, UI design is more about the surface and overall look of a design. User interface design is a craft in which designers perform an important function in creating the user experience. UI design should keep users informed about what is happening, giving appropriate feedback in a timely manner. The visual look and feel of UI design sets the tone for the user experience. On the other hand, the term UX design refers to the entire process of creating a user experience.
Don Norman and Jakob Nielsen have stressed that the total user experience must be distinguished from the user interface, even though the UI is an extremely important part of the design.
Design thinking
User interface design requires a good understanding of user needs. It mainly focuses on the needs of the platform and its user expectations. There are several phases and processes in user interface design, some of which are relied on more heavily than others, depending on the project. The modern design thinking framework was created in 2004 by David M. Kelley, the founder of Stanford's d.school, formally known as the Hasso Plattner Institute of Design. EDIPT is a common acronym used to describe Kelley's design thinking framework; it stands for empathize, define, ideate, prototype, and test. Notably, the EDIPT framework is non-linear, so a UI designer may jump from one stage to another when developing a user-centric solution. Iteration is a common practice in the design thinking process; successful solutions often require testing and tweaking to ensure that the product fulfills user needs.
EDIPT
Empathize
Conducting user research to better understand the needs and pain points of the target audience. UI designers should avoid developing solutions based on personal beliefs and instead seek to understand the unique perspectives of various users. Qualitative data is often gathered in the form of semi-structured interviews.
Common areas of interest include:
What would the user want the system to do?
How would the system fit in with the user's normal workflow or daily activities?
How technically savvy is the user and what similar systems does the user already use?
What interface aesthetics and functionalities styles appeal to the user?
Define
Solidifying a problem statement that focuses on user needs and desires; effective problem statements are typically one sentence long and include the user, their specific need, and their desired outcome or goal.
Ideate
Brainstorming potential solutions to address the refined problem statement. The proposed solutions should ideally align with the stakeholders' feasibility and viability criteria while maintaining user desirability standards.
Prototype
Designing potential solutions of varying fidelity (low, mid, and high) while applying user experience principles and methodologies. Prototyping is an iterative process where UI designers should explore multiple design solutions rather than settling on the initial concept.
Test
Presenting the prototypes to the target audience to gather feedback and gain insights for improvement. Based on the results, UI designers may need to revisit earlier stages of the design process to enhance the prototype and user experience.
Usability testing
The Nielsen Norman Group, co-founded by Jakob Nielsen and Don Norman in 1998, promotes user experience and interface design education. Jakob Nielsen pioneered the interface usability movement and created the "10 Usability Heuristics for User Interface Design." Usability is aimed at defining an interface’s quality when considering ease of use; an interface with low usability will burden a user and hinder them from achieving their goals, resulting in the dismissal of the interface. To enhance usability, user experience researchers may conduct usability testing—a process that evaluates how users interact with an interface. Usability testing can provide insight into user pain points by illustrating how efficiently a user can complete a task without error, highlighting areas for design improvement.
Usability inspection
Letting an evaluator inspect a user interface. This is generally considered to be cheaper to implement than usability testing (see step below), and can be used early on in the development process since it can be used to evaluate prototypes or specifications for the system, which usually cannot be tested on users. Some common usability inspection methods include the cognitive walkthrough, which focuses on how simple it is for new users to accomplish tasks with the system; heuristic evaluation, in which a set of heuristics is used to identify usability problems in the UI design; and the pluralistic walkthrough, in which a selected group of people step through a task scenario and discuss usability issues.
Usability testing
Testing of the prototypes on an actual user—often using a technique called think aloud protocol where the user is asked to talk about their thoughts during the experience. User interface design testing allows the designer to understand the reception of the design from the viewer's standpoint, and thus facilitates creating successful applications.
Requirements
The dynamic characteristics of a system are described in terms of the dialogue requirements contained in seven principles of Part 10 of the ergonomics standard ISO 9241. This standard establishes a framework of ergonomic "principles" for the dialogue techniques with high-level definitions and illustrative applications and examples of the principles. The principles of the dialogue represent the dynamic aspects of the interface and can be mostly regarded as the "feel" of the interface.
Seven dialogue principles
Suitability for the task
The dialogue is suitable for a task when it supports the user in the effective and efficient completion of the task.
Self-descriptiveness
The dialogue is self-descriptive when each dialogue step is immediately comprehensible through feedback from the system or is explained to the user on request.
Controllability
The dialogue is controllable when the user is able to initiate and control the direction and pace of the interaction until the point at which the goal has been met.
Conformity with user expectations
The dialogue conforms with user expectations when it is consistent and corresponds to the user characteristics, such as task knowledge, education, experience, and to commonly accepted conventions.
Error tolerance
The dialogue is error-tolerant if, despite evident errors in input, the intended result may be achieved with either no or minimal action by the user.
Suitability for individualization
The dialogue is capable of individualization when the interface software can be modified to suit the task needs, individual preferences, and skills of the user.
Suitability for learning
The dialogue is suitable for learning when it supports and guides the user in learning to use the system.
The concept of usability is defined in the ISO 9241 standard by the effectiveness, efficiency, and satisfaction of the user.
Part 11 gives the following definition of usability:
Usability is measured by the extent to which the intended goals of use of the overall system are achieved (effectiveness).
The resources that have to be expended to achieve the intended goals (efficiency).
The extent to which the user finds the overall system acceptable (satisfaction).
Effectiveness, efficiency, and satisfaction can be seen as quality factors of usability. To evaluate these factors, they need to be decomposed into sub-factors, and finally, into usability measures.
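As a concrete illustration of such a decomposition, the Python sketch below turns raw observations from a usability test into three simple measures; the particular metric definitions (completion rate, time-based efficiency, mean rating) are conventional examples assumed here, not ones mandated by ISO 9241:

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    completed: bool    # did the participant achieve the intended goal?
    seconds: float     # time expended on the task
    satisfaction: int  # e.g., a 1-7 post-task rating

def usability_measures(results: list[TaskResult]) -> dict:
    n = len(results)
    # Effectiveness: share of attempts in which the intended goal was achieved
    effectiveness = sum(r.completed for r in results) / n
    # Efficiency: goals achieved per unit of expended resource (here, time)
    efficiency = sum((1.0 if r.completed else 0.0) / r.seconds for r in results) / n
    # Satisfaction: mean of the subjective ratings
    satisfaction = sum(r.satisfaction for r in results) / n
    return {"effectiveness": effectiveness,
            "efficiency_goals_per_second": efficiency,
            "satisfaction": satisfaction}

print(usability_measures([TaskResult(True, 40.0, 6),
                          TaskResult(False, 90.0, 3),
                          TaskResult(True, 55.0, 5)]))
```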
The information presented is described in Part 12 of the ISO 9241 standard for the organization of information (arrangement, alignment, grouping, labels, location), for the display of graphical objects, and for the coding of information (abbreviation, colour, size, shape, visual cues) by seven attributes. The "attributes of presented information" represent the static aspects of the interface and can be generally regarded as the "look" of the interface. The attributes are detailed in the recommendations given in the standard. Each of the recommendations supports one or more of the seven attributes.
Seven presentation attributes
Clarity
The information content is conveyed quickly and accurately.
Discriminability
The displayed information can be distinguished accurately.
Conciseness
Users are not overloaded with extraneous information.
Consistency
A unique design, conformity with user's expectation.
Detectability
The user's attention is directed towards information required.
Legibility
Information is easy to read.
Comprehensibility
The meaning is clearly understandable, unambiguous, interpretable, and recognizable.
Usability
The user guidance in Part 13 of the ISO 9241 standard describes that the user guidance information should be readily distinguishable from other displayed information and should be specific for the current context of use.
User guidance can be given by the following five means:
Prompts indicating explicitly (specific prompts) or implicitly (generic prompts) that the system is available for input.
Feedback informing the user about their input in a timely, perceptible, and non-intrusive way.
Status information indicating the continuing state of the application, the system's hardware and software components, and the user's activities.
Error management including error prevention, error correction, user support for error management, and error messages.
On-line help for system-initiated and user-initiated requests with specific information for the current context of use.
Research
User interface design has been a topic of considerable research, including on its aesthetics. Standards have been developed as far back as the 1980s for defining the usability of software products.
One of the structural bases has become the IFIP user interface reference model.
The model proposes four dimensions to structure the user interface:
The input/output dimension (the look)
The dialogue dimension (the feel)
The technical or functional dimension (the access to tools and services)
The organizational dimension (the communication and co-operation support)
This model has greatly influenced the development of the international standard ISO 9241 describing the interface design requirements for usability.
The desire to understand application-specific UI issues early in software development, even as an application was being developed, led to research on GUI rapid prototyping tools that might offer convincing simulations of how an actual application might behave in production use. Some of this research has shown that a wide variety of programming tasks for GUI-based software can, in fact, be specified through means other than writing program code.
Research in recent years is strongly motivated by the increasing variety of devices that can, by virtue of Moore's law, host very complex interfaces.
See also
Chief experience officer (CXO)
Cognitive dimensions
Discoverability
Experience design
Gender HCI
Human interface guidelines
Human-computer interaction
Icon design
Information architecture
Interaction design
Interaction design pattern
Interaction Flow Modeling Language (IFML)
Interaction technique
Knowledge visualization
Look and feel
Mobile interaction
Natural mapping (interface design)
New Interfaces for Musical Expression
Participatory design
Principles of user interface design
Process-centered design
Progressive disclosure
T Layout
User experience design
User-centered design
References
Usability
Design
Graphic design
Industrial design
Information architecture
Design | User interface design | [
"Technology",
"Engineering"
] | 2,590 | [
"User interfaces",
"Industrial design",
"Design engineering",
"Interfaces",
"Design"
] |
2,261,795 | https://en.wikipedia.org/wiki/Newer%20Technology | Newer Technology is an American technology company headquartered in Woodstock, Illinois, that designs and manufactures accessories primarily for Apple, Inc. products.
History
Founded in 1988, Newer Technology initially focused on manufacturing computer upgrades and accessories such as processor cards, CPU caches, memory, and PowerBook batteries. At its peak in the mid-1990s, the company was known in Apple Macintosh communities for its end-of-life-extending upgrades. These included user-installable MAXpowr CPU cards that could upgrade an obsolete Macintosh computer across architectures, such as from Motorola 68000 to PowerPC, thereby extending the usable life of the computer by years.
NewerTech filed for Chapter 11 bankruptcy in June 1996 due to a rapidly declining Apple Macintosh market and a sharp decrease in memory prices. An effort to diversify revenue by developing Windows products was announced, but by the end of 2000 the company was dissolved. The name and remaining intellectual property were purchased in 2001 by Rick Estes, NewerTech's former Vice President of Operations.
In 2002 the name and products were acquired by Other World Computing, and the brand now does business as "NewerTech, an OWC brand". As of 2019 it continues to supply Macintosh, iPad, and iPhone accessories.
References
External links
Newer Technology Homepage
Computer companies established in 1988
Macintosh platform
Macintosh peripherals
Computer companies of the United States
Computer hardware companies | Newer Technology | [
"Technology"
] | 268 | [
"Computer hardware companies",
"Computing platforms",
"Macintosh platform",
"Computers"
] |
2,262,154 | https://en.wikipedia.org/wiki/Doppler%20broadening | In atomic physics, Doppler broadening is broadening of spectral lines due to the Doppler effect caused by a distribution of velocities of atoms or molecules. Different velocities of the emitting (or absorbing) particles result in different Doppler shifts, the cumulative effect of which is the emission (absorption) line broadening.
This resulting line profile is known as a Doppler profile.
A particular case is the thermal Doppler broadening due to the thermal motion of the particles. Then, the broadening depends only on the frequency of the spectral line, the mass of the emitting particles, and their temperature, and therefore can be used for inferring the temperature of an emitting (or absorbing) body being spectroscopically investigated.
Derivation (non-relativistic case)
When a particle moves (e.g., due to the thermal motion) towards the observer, the emitted radiation is shifted to a higher frequency. Likewise, when the emitter moves away, the frequency is lowered. In the non-relativistic limit, the Doppler shift is

$f = f_0 \left(1 + \frac{v}{c}\right),$

where $f$ is the observed frequency, $f_0$ is the frequency in the rest frame, $v$ is the velocity of the emitter towards the observer, and $c$ is the speed of light.
Since there is a distribution of speeds both toward and away from the observer in any volume element of the radiating body, the net effect will be to broaden the observed line. If $P_v(v)\,dv$ is the fraction of particles with velocity component $v$ to $v + dv$ along a line of sight, then the corresponding distribution of the frequencies is

$P_f(f)\,df = P_v(v_f)\,\frac{dv}{df}\,df,$

where $v_f = c\left(\frac{f}{f_0} - 1\right)$ is the velocity towards the observer corresponding to the shift of the rest frequency $f_0$ to $f$. Therefore,

$P_f(f)\,df = \frac{c}{f_0}\,P_v\!\left(c\left(\frac{f}{f_0} - 1\right)\right)df.$
We can also express the broadening in terms of the wavelength $\lambda$. Since $\frac{f}{f_0} = \frac{\lambda_0}{\lambda}$, $v_\lambda = c\left(\frac{\lambda_0}{\lambda} - 1\right)$, and so, in the non-relativistic limit ($\lambda \approx \lambda_0$), $v_\lambda \approx -c\,\frac{\lambda - \lambda_0}{\lambda_0}$. Therefore,

$P_\lambda(\lambda)\,d\lambda = \frac{c}{\lambda_0}\,P_v\!\left(-c\,\frac{\lambda - \lambda_0}{\lambda_0}\right)d\lambda.$
Thermal Doppler broadening
In the case of the thermal Doppler broadening, the velocity distribution is given by the Maxwell distribution

$P_v(v)\,dv = \sqrt{\frac{m}{2\pi k T}}\,\exp\!\left(-\frac{m v^2}{2 k T}\right)dv,$

where $m$ is the mass of the emitting particle, $T$ is the temperature, and $k$ is the Boltzmann constant.
Then

$P_f(f)\,df = \frac{c}{f_0}\sqrt{\frac{m}{2\pi k T}}\,\exp\!\left(-\frac{m c^2 (f - f_0)^2}{2 k T f_0^2}\right)df.$

We can simplify this expression as

$P_f(f)\,df = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(f - f_0)^2}{2\sigma^2}\right)df,$

which we immediately recognize as a Gaussian profile with the standard deviation

$\sigma = \sqrt{\frac{k T}{m c^2}}\,f_0$

and full width at half maximum (FWHM)

$\Delta f_{\mathrm{FWHM}} = \sqrt{\frac{8 k T \ln 2}{m c^2}}\,f_0.$
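To give a sense of scale, the Python sketch below evaluates the FWHM formula above; the choice of the hydrogen Hα line at 656.3 nm and a temperature of 6000 K is an arbitrary illustrative assumption:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
C   = 2.99792458e8   # speed of light, m/s
M_H = 1.6735575e-27  # mass of a hydrogen atom, kg

def doppler_fwhm(lambda0_m: float, temperature_k: float, mass_kg: float) -> float:
    """Thermal Doppler FWHM in wavelength units: dλ = λ0·sqrt(8·k·T·ln2 / (m·c²))."""
    return lambda0_m * math.sqrt(8 * K_B * temperature_k * math.log(2) / (mass_kg * C**2))

# Hydrogen Hα line at a solar-photosphere-like temperature
width = doppler_fwhm(656.3e-9, 6000.0, M_H)
print(f"FWHM ≈ {width * 1e9:.4f} nm")  # roughly 0.04 nm
```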
Applications and caveats
In astronomy and plasma physics, the thermal Doppler broadening is one of the explanations for the broadening of spectral lines, and as such gives an indication for the temperature of observed material. Other causes of velocity distributions may exist, though, for example, due to turbulent motion. For a fully developed turbulence, the resulting line profile is generally very difficult to distinguish from the thermal one.
Another cause could be a large range of macroscopic velocities resulting, e.g., from the receding and approaching portions of a rapidly spinning accretion disk. Finally, there are many other factors that can also broaden the lines. For example, a sufficiently high particle number density may lead to significant Stark broadening.
Doppler broadening can also be used to determine the velocity distribution of a gas given its absorption spectrum. In particular, this has been used to determine the velocity distribution of interstellar gas clouds.
Doppler broadening, the physical phenomenon driving the fuel temperature coefficient of reactivity, has also been used as a design consideration in high-temperature nuclear reactors. In principle, as the reactor fuel heats up, the neutron absorption spectrum will broaden due to the relative thermal motion of the fuel nuclei with respect to the neutrons. Given the shape of the neutron absorption spectrum, this has the result of reducing the neutron absorption cross section, reducing the likelihood of absorption and fission. The end result is that reactors designed to take advantage of Doppler broadening will decrease their reactivity as temperature increases, creating a passive safety measure. This tends to be more relevant to gas-cooled reactors, as other mechanisms are dominant in water-cooled reactors.
Saturated absorption spectroscopy, also known as Doppler-free spectroscopy, can be used to find the true frequency of an atomic transition without cooling a sample down to temperatures at which the Doppler broadening is negligible.
See also
Mössbauer effect
Dicke effect
References
Doppler effects
Physical phenomena | Doppler broadening | [
"Physics"
] | 844 | [
"Doppler effects",
"Physical phenomena",
"Astrophysics"
] |
2,262,238 | https://en.wikipedia.org/wiki/Nanaerobe | Nanaerobes are organisms that cannot grow in the presence of micromolar concentrations of oxygen, but can grow with and benefit from the presence of nanomolar concentrations of oxygen (e.g. Bacteroides fragilis). Like other anaerobes, these organisms do not require oxygen for growth. This growth benefit requires the expression of an oxygen respiratory chain that is typically associated with microaerophilic respiration. Recent studies suggest that respiration in low concentrations of oxygen is an ancient process which predates the emergence of oxygenic photosynthesis.
References
Cellular respiration | Nanaerobe | [
"Chemistry",
"Biology"
] | 120 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
2,262,244 | https://en.wikipedia.org/wiki/Satellite%20Business%20Systems | Satellite Business Systems (SBS) was a company founded by IBM, Aetna, COMSAT (and later wholly purchased by IBM and then subsequently sold to MCI), that provided private professional satellite communications through its SBS fleet of FSS geosynchronous satellites, and was the first company to do so.
SBS was founded on December 15, 1975 by the aforementioned companies with the goal of providing a digital satellite communications network for business and other professional clients.
History
In late 1970, MCI Communications created a subsidiary company named MCI Satellite, Inc. The idea was that satellites could provide 'long distance' service from anywhere to anywhere without having to build thousands of miles of terrestrial network facilities. In early 1971, MCI and Lockheed Missiles and Space Company created a joint venture named MCI Lockheed Satellite Corp. which was the first company to request FCC authorization as a Specialized Common Carrier using satellite based communications. A year later, MCI and Lockheed sought an additional source of funding and Comsat Corp. entered the venture which was renamed CML Satellite Corp. In need of cash, MCI sold its share of the venture to IBM Corporation in 1974 (Lockheed also subsequently sold its share to IBM).
IBM owned one third of the company by 1975. It and Comsat brought in Aetna Insurance Company as a third partner and renamed the company Satellite Business Systems (SBS).
Marketing
The original concept was for a large corporation to install SBS earth stations at each of its major sites. This strategy limited SBS' addressable market to corporations with enough concentrated voice and data traffic to justify the installation of multiple earth stations. Earth stations were generally not shared by SBS customers.
Although the SBS technology provided switched digital, leased digital, switched analog, and leased analog connections between a customer's earth stations, most customers used it mainly for intra-corporate voice calls. Data communications protocols of the period were not efficient over satellite links.
One SBS customer, ISACOMM, extended the business model to smaller corporate customers and provided offnet connectivity as well.
The high initial costs of deploying earth stations, along with the rapid success and expansion of terrestrial competitors like MCI and Sprint, prevented SBS from attaining its commercial targets. Excess transponders on SBS satellites were leased to other companies, and SBS adopted some of ISACOMM's marketing tactics and even pursued the consumer long-distance market on a limited basis.
Technology
SBS' fleet of satellites were the first in orbit to offer transponders in the Ku band, meaning that smaller, less expensive dishes and equipment could be used to receive and transmit to the satellites, making SBS' satellite system attractive for business customers. This contrasted with then-current satellites using the C band of the RF spectrum, which required larger (and more costly) dishes of 8 feet and up. However, Ku signals suffered more from rain fade.
The SBS earth station was designed by IBM. It consisted of a highly modified IBM 1800 and a TDMA modem. All earth stations were managed from the SBS central network operations facility located in McLean, VA, which was also the headquarters location for the corporation.
Historical note
The first use of the NASA Shuttle for commercial purposes was the deployment of the SBS 3 satellite in November 1982 from STS-5. SBS engineers designed the cradle that sat in the cargo bay of the shuttle and spun up to 50 RPM, then ejected the spinning satellite with the use of explosive bolts.
SBS satellites in orbit
Over its existence as a company, SBS had six satellites in orbit, SBS 1 through SBS 6.
SBS 1-5 were built by Hughes using the HS-376 platform. SBS-6 used the HS-393 platform.
SBS 1-6 are no longer in service and have been placed in graveyard orbits. SBS-6 was the last SBS satellite in operation at 74 degrees west longitude. It was decommissioned in July 2007 and replaced by Intelsat Galaxy 17.
The end of SBS
In July 1984, Comsat left SBS, and exactly a year later, Satellite Business Systems was sold to MCI. MCI migrated the voice and data traffic of most SBS customers to its terrestrial network. During the sale of SBS to MCI, four satellites (SBS 1-4) were then in orbit.
In 1987, SBS' fleet was sold off. SBS 1 and 2 were sold to Comsat, SBS 3 remained with MCI, and SBS 4 was sold to IBM's Satellite Transponder Leasing Corporation (STLC) together with the SBS 5 and 6 satellites, which were then still on the ground.
In April 1990, Hughes Communications Inc (HCI), a subsidiary of Hughes Aircraft (who built the satellites) bought STLC from IBM. Sometime later (possibly around 1992) SBS 3 was sold to Comsat. Comsat was later bought by Lockheed Martin.
Following the divestiture of its fleet (to MCI and HCI, as well as to Comsat and IBM, former founders of SBS), SBS no longer exists as an entity; the last remaining satellite of its fleet, SBS 6, was decommissioned in July 2007, last owned by Intelsat.
References
External links
List of feeds on SBS 6 from lyngsat.com
Defunct telecommunications companies of the United States
Communications satellite operators
Derelict satellites orbiting Earth
Telecommunications companies established in 1975
American companies established in 1975
Companies based in McLean, Virginia
Aerospace companies of the United States
Space technology | Satellite Business Systems | [
"Astronomy"
] | 1,088 | [
"Space technology",
"Outer space"
] |
2,262,290 | https://en.wikipedia.org/wiki/NGC%201569 | NGC 1569 is a dwarf irregular galaxy in Camelopardalis. The galaxy is relatively nearby and consequently, the Hubble Space Telescope can easily resolve the stars within the galaxy. The distance to the galaxy was previously believed to be only 2.4 Mpc (7.8 Mly). However, in 2008 scientists studying images from Hubble calculated the galaxy's distance at nearly 11 million light-years away, about 4 million light-years farther than previously thought, meaning it is a member of the IC 342 group of galaxies.
Physical characteristics
NGC 1569 is smaller than the Small Magellanic Cloud, but brighter than the Large Magellanic Cloud.
Starburst
NGC 1569 is characterized by a large starburst. It has formed stars at a rate 100 times greater than that of the Milky Way during the last 100 million years. It contains two prominent super star clusters with different histories.
Both clusters have experienced episodic star formation. Super star cluster A, located in the northwest of the galaxy and actually formed of two close clusters (NGC 1569 A1 and NGC 1569 A2), contains young stars (including Wolf-Rayet stars) that formed less than 5 million years ago (in NGC 1569 A1) as well as older red stars (in NGC 1569 A2).
Super star cluster B, located near the center of the galaxy, contains an older stellar population of red giants and red supergiants. Both of these star clusters are thought to have masses equivalent to the masses of the globular clusters in the Milky Way (approximately (6–7) × 10⁵ solar masses). Numerous smaller star clusters, some of them having masses similar to those of small globular clusters or R136 in the Large Magellanic Cloud, with relatively young ages (between 2 million years and 1 billion years), have also been identified. These results, along with the results from other dwarf galaxies such as the Large Magellanic Cloud and NGC 1705, demonstrate that star formation in dwarf galaxies does not occur continuously but instead occurs in a series of short, nearly instantaneous bursts.
The numerous supernovae produced in the galaxy as well as the strong stellar winds of its stars have produced filaments and bubbles of ionized hydrogen with respective sizes of up to 3,700 and 380 light years that shine excited by the light of the young stars contained within them and that are conspicuous on images taken with large telescopes.
The NGC 1569 starburst is believed to have been triggered by interactions with other galaxies of the IC 342 group, in particular a nearby cloud of neutral hydrogen. A 2013 study suggested the presence of tidal tails linking this galaxy with IC 342 and the dwarf galaxy UGCA 92 (see below); the nature of these features is unclear, however, and they may actually be structures within our own galaxy.
Blueshift
The spectrum of NGC 1569 is blueshifted. This means that the galaxy is moving towards the Earth. In contrast, the spectra of most other galaxies are redshifted because of the expansion of the universe.
Environment
The dwarf irregular galaxy UGCA 92 is often assumed to be a companion of NGC 1569; however, its relationship to the latter's starburst is unclear. Some authors suggest UGCA 92 did not trigger the starburst, while others suggest it has interacted with NGC 1569 and is connected to it by a tidal tail and several filaments of neutral hydrogen. It remains unclear, though, whether those structures are associated with the two galaxies or actually lie within the Milky Way, unrelated to either.
References
External links
NGC 1569 at ESA/Hubble
NGC 1569 at Constellation Guide
Dwarf irregular galaxies
Camelopardalis
1569
03056
15345
210
IC 342/Maffei Group | NGC 1569 | [
"Astronomy"
] | 764 | [
"Camelopardalis",
"Constellations"
] |
2,262,293 | https://en.wikipedia.org/wiki/Rheometer | A rheometer is a laboratory device used to measure the way in which a viscous fluid (a liquid, suspension or slurry) flows in response to applied forces. It is used for those fluids which cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured than is the case for a viscometer. It measures the rheology of the fluid.
There are two distinctively different types of rheometers. Rheometers that control the applied shear stress or shear strain are called rotational or shear rheometers, whereas rheometers that apply extensional stress or extensional strain are extensional rheometers.
Rotational or shear type rheometers are usually designed as either a native strain-controlled instrument (control and apply a user-defined shear strain which can then measure the resulting shear stress) or a native stress-controlled instrument (control and apply a user-defined shear stress and measure the resulting shear strain).
Meanings and origin
The word rheometer comes from the Greek, and means a device for measuring flow. In the 19th century it was commonly used for devices to measure electric current, until the word was supplanted by galvanometer and ammeter. It was also used for the measurement of the flow of liquids, in medical practice (flow of blood) and in civil engineering (flow of water). This latter use persisted into the second half of the 20th century in some areas. Following the coining of the term rheology, the word came to be applied to instruments for measuring the character rather than the quantity of flow, and the other meanings are obsolete. (Principal source: Oxford English Dictionary.) The principle and working of rheometers is described in several texts.
Types of shear rheometer
Shearing geometries
Four basic shearing planes can be defined according to their geometry:
Couette drag plate flow
Cylindrical flow
Poiseuille flow in a tube and
Plate-plate flow
The various types of shear rheometers then use one or a combination of these geometries.
Linear shear
One example of a linear shear rheometer is the Goodyear linear skin rheometer, which is used to test cosmetic cream formulations, and for medical research purposes to quantify the elastic properties of tissue.
The device works by attaching a linear probe to the surface of the tissue under test, a controlled cyclical force is applied, and the resultant shear force measured using a load cell. Displacement is measured using a Linear variable differential transformer (LVDT). Thus the basic stress–strain parameters are captured and analysed to derive the dynamic spring rate of the tissue under tests.
Pipe or capillary
Liquid is forced through a tube of constant cross-section and precisely known dimensions under conditions of laminar flow. Either the flow-rate or the pressure drop are fixed and the other measured. Knowing the dimensions, the flow-rate can be converted into a value for the shear rate and the pressure drop into a value for the shear stress. Varying the pressure or flow allows a flow curve to be determined. When a relatively small amount of fluid is available for rheometric characterization, a microfluidic rheometer with embedded pressure sensors can be used to measure pressure drop for a controlled flow rate.
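For a Newtonian fluid in laminar tube flow, the conversions described above follow from the Hagen–Poiseuille relations; the Python sketch below assumes Newtonian behaviour (real instrument software applies corrections, such as Weissenberg–Rabinowitsch, for non-Newtonian fluids), and the example numbers are hypothetical:

```python
import math

def capillary_wall_values(flow_rate_m3_s: float, pressure_drop_pa: float,
                          radius_m: float, length_m: float):
    """Wall shear stress, Newtonian wall shear rate, and apparent viscosity."""
    tau_w = pressure_drop_pa * radius_m / (2.0 * length_m)        # shear stress, Pa
    gamma_dot_w = 4.0 * flow_rate_m3_s / (math.pi * radius_m**3)  # shear rate, 1/s
    return tau_w, gamma_dot_w, tau_w / gamma_dot_w                # viscosity, Pa·s

# Example: 1 mL/min through a 0.5 mm radius, 100 mm long capillary at 20 kPa
tau, rate, eta = capillary_wall_values(1e-6 / 60.0, 20e3, 0.5e-3, 0.1)
print(f"stress {tau:.1f} Pa, shear rate {rate:.1f} 1/s, viscosity {eta*1e3:.1f} mPa·s")
```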
Capillary rheometers are especially advantageous for characterization of therapeutic protein solutions since it determines the ability to be syringed. Additionally, there is an inverse relationship between the rheometry and solution stability, as well as thermodynamic interactions.
Dynamic shear rheometer
A dynamic shear rheometer, commonly known as a DSR, is used for research and development as well as for quality control in the manufacturing of a wide range of materials. Dynamic shear rheometers have been used since 1993, when Superpave began using them for characterising and understanding the high-temperature rheological properties of asphalt binders in both the molten and solid states; such characterisation is fundamental for formulating the chemistry and predicting the end-use performance of these materials.
Rotational cylinder
The liquid is placed within the annulus of one cylinder inside another. One of the cylinders is rotated at a set speed. This determines the shear rate inside the annulus. The liquid tends to drag the other cylinder round, and the force it exerts on that cylinder (torque) is measured, which can be converted to a shear stress.
One version of this is the Fann V-G Viscometer, which runs at two speeds, (300 and 600 rpm) and therefore only gives two points on the flow curve. This is sufficient to define a Bingham plastic model which was once widely used in the oil industry for determining the flow character of drilling fluids. In recent years rheometers that spin at 600, 300, 200, 100, 6 & 3 RPM have become more commonplace. This allows for more complex fluids models such as Herschel–Bulkley to be used. Some models allow the speed to be continuously increased and decreased in a programmed fashion, which allows the measurement of time-dependent properties.
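In the conventional oilfield data reduction assumed here, the two Fann dial readings give the Bingham plastic parameters directly: plastic viscosity PV = θ600 − θ300 (in centipoise) and yield point YP = θ300 − PV (in lbf/100 ft²). A minimal Python sketch with hypothetical readings:

```python
def bingham_from_fann(theta_600: float, theta_300: float):
    """Bingham plastic parameters from Fann dial readings at 600 and 300 rpm."""
    pv = theta_600 - theta_300  # plastic viscosity, cP
    yp = theta_300 - pv         # yield point, lbf/100 ft^2
    return pv, yp

pv, yp = bingham_from_fann(theta_600=64.0, theta_300=40.0)
print(f"PV = {pv} cP, YP = {yp} lbf/100 ft^2")  # PV = 24.0 cP, YP = 16.0 lbf/100 ft^2
```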
Cone and plate
The liquid is placed on horizontal plate and a shallow cone placed into it. The angle between the surface of the cone and the plate is around 1–2 degrees but can vary depending on the types of tests being run. Typically the plate is rotated and the torque on the cone measured. A well-known version of this instrument is the Weissenberg rheogoniometer, in which the movement of the cone is resisted by a thin piece of metal which twists—known as a torsion bar. The known response of the torsion bar and the degree of twist give the shear stress, while the rotational speed and cone dimensions give the shear rate. In principle the Weissenberg rheogoniometer is an absolute method of measurement providing it is accurately set up. Other instruments operating on this principle may be easier to use but require calibration with a known fluid.
Cone and plate rheometers can also be operated in an oscillating mode to measure elastic properties, or in combined rotational and oscillating modes.
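Because a shallow cone gives a nearly uniform shear rate across the gap, the raw torque and rotation speed convert simply; the Python sketch below uses the standard small-angle relations (shear rate = Ω/α, shear stress = 3T/(2πR³)) with hypothetical example numbers:

```python
import math

def cone_plate_shear(torque_n_m: float, omega_rad_s: float,
                     radius_m: float, cone_angle_rad: float):
    """Shear stress, shear rate, and viscosity for a small-angle cone-and-plate fixture."""
    tau = 3.0 * torque_n_m / (2.0 * math.pi * radius_m**3)  # shear stress, Pa
    gamma_dot = omega_rad_s / cone_angle_rad                # shear rate, 1/s
    return tau, gamma_dot, tau / gamma_dot                  # viscosity, Pa·s

# Example: 25 mm radius, 1-degree cone, 10 rad/s, 1 mN·m of measured torque
tau, rate, eta = cone_plate_shear(1e-3, 10.0, 25e-3, math.radians(1.0))
print(f"stress {tau:.1f} Pa, rate {rate:.0f} 1/s, viscosity {eta*1e3:.1f} mPa·s")
```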
Basic concepts of shear rheometer
In the past, devices with controlled strain or strain rate (CR rheometers) were distinguished from rheometers with controlled stress (CS rheometers) depending on the measuring principle.
In a controlled strain (CR) rheometer, the sample is subjected to displacement or speed (strain or strain rate) using a DC motor, and the resulting torque (stress) is measured separately using an additional force-torque sensor (torque compensation transducer). The electric current used to generate the displacement or speed of the motor is not used as a measure of the torque acting in the sample. This mode of operation is also referred to as separate motor transducer mode (SMT).
Deflection angle/strain and shear rate are set by the motor based on the position control of the optical encoder in the lower part.
Sample reaction (the stress acting within the sample) is measured by an additional force-torque transducer (torque re-balance transducer)
The separation of drive and torque measurement has advantages in strain-controlled tests, since the motor's moment of inertia has no influence on the measured torque.
Limitations of the SMT mode can be found in stress-controlled measurements (e.g. creep tests)
In a controlled-stress (CS) rheometer, the torque acting in the sample is determined directly from the electrical torque generated in the motor. With such a design, no separate torque sensor is required. Usually, this mode of operation is described as combined motor-transducer mode (CMT).
The stress acting in the sample is determined directly from the torque generated in the motor, which is required to deform the sample.
Deflection angle/strain and shear rate are determined by the use of an optical encoder.
Single-motor rheometers allow characterization of samples in either strain/shear rate or shear stress-controlled tests
Since only one actor (motor) is required, the single-motor rheometer can be easily combined with additional application-specific accessories that enable the study of material properties in a variety of different applications.
Limitations may occur from less precise data evaluation in the transient regime of start-up shear tests.
Nowadays, there are device concepts that allow both working modes, the combined motor transducer mode and the separate motor transducer mode, by using two motors in one device. The use of only one motor enables measurements to be made in the combined motor transducer mode. Using both motors allows working in the separate motor transducer mode, where one motor is used to deform the sample while the other motor is used to record the torque acting in the sample. Furthermore, this concept allows for additional modes of operation, such as counter-rotating mode, where both motors can rotate or oscillate in opposite directions. This mode of operation is used, for example, to increase the maximum achievable shear rate range or for advanced rheooptical characterization of samples.
Types of extensional rheometer
The development of extensional rheometers has proceeded more slowly than shear rheometers, due to the challenges associated with generating a homogeneous extensional flow. Firstly, interactions of the test fluid or melt with solid interfaces will result in a component of shear flow, which will compromise the results. Secondly, the strain history of all the material elements must be controlled and known. Thirdly, the strain rates and strain levels must be high enough to stretch the polymeric chains beyond their normal radius of gyration, requiring instrumentation with a large range of deformation rates and a large travel distance.
Commercially available extensional rheometers have been segregated according to their applicability to viscosity ranges. Materials with a viscosity range from approximately 0.01 to 1 Pa·s (most polymer solutions) are best characterized with capillary breakup rheometers, opposed jet devices, or contraction flow systems. Materials with a viscosity range from approximately 1 to 1000 Pa·s are used in filament stretching rheometers. Materials with a high viscosity >1000 Pa·s, such as polymer melts, are best characterized by constant-length devices.
Extensional rheometry is commonly performed on materials that are subjected to a tensile deformation. This type of deformation can occur during processing, such as injection molding, fiber spinning, extrusion, blow-molding, and coating flows. It can also occur during use, such as decohesion of adhesives, pumping of hand soaps, and handling of liquid food products.
Currently and previously marketed commercially available extensional rheometers are described below.
Commercially available extensional rheometers
Rheotens
Rheotens is a fiber spinning rheometer, suitable for polymeric melts. The material is pumped from an upstream tube, and a set of wheels elongates the strand. A force transducer mounted on one of the wheels measures the resultant extensional force. Because of the pre-shear induced as the fluid is transported through the upstream tube, a true extensional viscosity is difficult to obtain. However, the Rheotens is useful to compare the extensional flow properties of a homologous set of materials.
CaBER
The CaBER is a capillary breakup rheometer. A small quantity of material is placed between plates, which are rapidly stretched to a fixed level of strain. The midpoint diameter is monitored as a function of time as the fluid filament necks and breaks up under the combined forces of surface tension, gravity, and viscoelasticity. The extensional viscosity can be extracted from the data as a function of strain and strain rate. This system is useful for low viscosity fluids, inks, paints, adhesives, and biological fluids.
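A sketch of how an apparent extensional viscosity could be extracted from a CaBER midpoint-diameter record, using the commonly quoted approximation η_app(t) ≈ −γ/(dD_mid/dt), which follows from balancing the capillary stress 2γ/D against the extensional viscous stress; the surface tension value and the diameter trace are invented for illustration.

```python
import numpy as np

gamma = 0.06  # surface tension, N/m (hypothetical value for the test fluid)

# Hypothetical midpoint-diameter record: time (s) and diameter (m),
# exponential thinning as is typical of elastic fluids
t = np.linspace(0.0, 0.5, 200)
D = 1e-3 * np.exp(-t / 0.15)

dDdt = np.gradient(D, t)
strain_rate = -2.0 * dDdt / D   # Hencky strain rate at the midplane, 1/s
eta_app = -gamma / dDdt         # apparent extensional viscosity, Pa*s

print(f"strain rate ~ {strain_rate[50]:.1f} 1/s, "
      f"apparent extensional viscosity ~ {eta_app[50]:.1f} Pa*s")
```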
FiSER
The FiSER (filament stretching extensional rheometer) is based on the work of Sridhar et al. and Anna et al. In this instrument, a set of linear motors drives a fluid filament apart at an exponentially increasing velocity while measuring force and diameter as a function of time and position. By deforming at an exponentially increasing rate, a constant strain rate can be achieved in the sample (barring endplate flow limitations). This system can monitor the strain-dependent extensional viscosity, as well as stress decay following flow cessation. A detailed presentation on the various uses of filament stretching rheometry can be found on the MIT website.
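A sketch of the exponential stretching profile described above: holding the Hencky strain rate ε̇ constant requires the filament length to grow as L(t) = L0·exp(ε̇t), so the endplate velocity must also grow exponentially. All values are hypothetical.

```python
import numpy as np

L0 = 3e-3          # initial filament length, m (hypothetical)
strain_rate = 5.0  # target constant Hencky strain rate, 1/s
t = np.linspace(0.0, 0.6, 7)

L = L0 * np.exp(strain_rate * t)  # required plate separation
V = strain_rate * L               # required endplate velocity
hencky = np.log(L / L0)           # accumulated Hencky strain = strain_rate * t

for ti, Li, Vi, ei in zip(t, L, V, hencky):
    print(f"t={ti:.2f} s  L={Li*1e3:6.2f} mm  V={Vi*1e3:7.2f} mm/s  eps={ei:.2f}")
```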
Sentmanat
The Sentmanat extensional rheometer (SER) is actually a fixture that can be field installed on shear rheometers. A film of polymer is wound on two rotating drums, which apply constant or variable strain rate extensional deformation on the polymer film. The stress is determined from the torque exerted by the drums.
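A sketch of SER drum kinematics, using the commonly cited relations for this fixture: Hencky strain rate ε̇ = 2ΩR/L0 for drum radius R, rotation rate Ω, and fixed axis separation L0, with the sample cross-section shrinking as A(t) = A0·exp(−ε̇t). The conversion of measured torque to tangential force, taken here as F = T/(2R), and all numerical values are assumptions for illustration only.

```python
import math

R = 5.2e-3           # drum radius, m (hypothetical)
L0 = 12.7e-3         # fixed distance between drum axes, m (hypothetical)
A0 = 0.7e-3 * 18e-3  # initial film cross-section, m^2 (hypothetical)

def ser_quantities(omega, torque, t):
    """Hencky strain rate and tensile stress for an SER-type drum fixture.
    The force-from-torque step F = T/(2R) is an illustrative assumption."""
    strain_rate = 2.0 * omega * R / L0
    area = A0 * math.exp(-strain_rate * t)  # cross-section decays exponentially
    force = torque / (2.0 * R)
    return strain_rate, force / area

rate, stress = ser_quantities(omega=1.0, torque=2e-4, t=1.0)
print(f"strain rate = {rate:.3f} 1/s, tensile stress = {stress/1e3:.1f} kPa")
```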
Other types of extensional rheometers
Acoustic rheometer
Acoustic rheometers employ a piezoelectric crystal that launches a succession of extensions and contractions into the fluid. This non-contact method applies an oscillating extensional stress. Acoustic rheometers measure the sound speed and the attenuation of ultrasound for a set of frequencies in the megahertz range. Sound speed is a measure of the system's elasticity and can be converted into the fluid's compressibility. Attenuation is a measure of viscous properties and can be converted into a viscous longitudinal modulus. In the case of a Newtonian liquid, attenuation yields information on the volume viscosity. This type of rheometer works at much higher frequencies than others, so it is suitable for studying effects with much shorter relaxation times than any other rheometer.
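A sketch of the conversions mentioned above for a Newtonian liquid: the sound speed gives the compressibility via κ = 1/(ρc²), and the classical (Stokes) attenuation relation α = ω²·(4η/3 + η_v)/(2ρc³) can be inverted for the volume viscosity η_v once the shear viscosity η is known from a conventional measurement. All numerical values are hypothetical.

```python
import math

rho = 998.0         # density, kg/m^3 (water-like, hypothetical sample)
c = 1482.0          # measured sound speed, m/s
eta_shear = 1.0e-3  # shear viscosity from a conventional rheometer, Pa*s

f = 10e6            # measurement frequency, 10 MHz
omega = 2.0 * math.pi * f
alpha = 2.5        # measured attenuation at this frequency, Np/m (hypothetical)

kappa = 1.0 / (rho * c**2)                      # compressibility, 1/Pa
eta_long = 2.0 * rho * c**3 * alpha / omega**2  # equals 4*eta/3 + eta_v
eta_vol = eta_long - 4.0 * eta_shear / 3.0      # volume (bulk) viscosity

print(f"compressibility = {kappa:.3e} 1/Pa")
print(f"volume viscosity = {eta_vol*1e3:.2f} mPa*s")
```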
Falling plate
A simpler version of the filament stretching rheometer, the falling plate rheometer sandwiches liquid between two solid surfaces. The top plate is fixed, and the bottom plate falls under the influence of gravity, drawing out a filament of the liquid.
Capillary/contraction flow
Other systems involve liquid passing through an orifice, expanding from a capillary, or being sucked up from a surface into a column by a vacuum. A pressurized capillary rheometer can be used to design thermal treatments of fluid food. This instrumentation could help prevent over- and under-processing of fluid food, because extrapolation to high temperatures would not be necessary.
See also
Acoustic rheometer
Dynamic shear rheometer
Food rheology
Piezometer
Rheometry
References
K. Walters (1975). Rheometry. Chapman & Hall.
A. S. Dukhin and P. J. Goetz (2002). Ultrasound for Characterizing Colloids. Elsevier.
External links
Dynamic Shear Rheometer by Cooper Research Technology
Presentation on alternative uses of rheometers
Fluid dynamics
Measuring instruments
Tribology | Rheometer | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 3,091 | [
"Tribology",
"Chemical engineering",
"Materials science",
"Measuring instruments",
"Surface science",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
2,262,333 | https://en.wikipedia.org/wiki/Software%20as%20a%20service | Software as a service (SaaS) is a cloud computing service model where the provider offers use of application software to a client and manages all needed physical and software resources. SaaS is usually accessed via a web application. Unlike other software delivery models, it separates "the possession and ownership of software from its use". SaaS use began around 2000, and by 2023 was the main form of software application deployment.
Unlike most self-hosted software products, only one version of the software exists and only one operating system and configuration is supported. SaaS products typically run on rented infrastructure as a service (IaaS) or platform as a service (PaaS) systems including hardware and sometimes operating systems and middleware, to accommodate rapid increases in usage while providing instant and continuous availability to customers. SaaS customers have the abstraction of limitless computing resources, while economy of scale drives down the cost. SaaS architectures are typically multi-tenant; usually they share resources between clients for efficiency, but sometimes they offer a siloed environment for an additional fee. Common SaaS revenue models include freemium, subscription, and usage-based fees. Unlike traditional software, it is rarely possible to buy a perpetual license for a certain version of the software.
There are no specific software development practices that distinguish SaaS from other application development, although there is often a focus on frequent testing and releases.
Cloud computing
Infrastructure as a service (IaaS) is the most basic form of cloud computing, where infrastructure resources—such as physical computers—are not owned by the user but instead leased from a cloud provider. As a result, infrastructure resources can be increased rapidly, instead of waiting weeks for computers to ship and set up. IaaS requires time and expertise to make use of the infrastructure in the form of operating systems and applications. Platform as a service (PaaS) includes the operating system and middleware, but not the applications. SaaS providers typically use PaaS or IaaS services to run their applications.
Without IaaS, it would be extremely difficult to make an SaaS product scalable for a variable number of users while providing the instant and continual availability that customers expect. Most end users consume only the SaaS product and do not have to worry about the technical complexity of the physical hardware and operating system. Because cloud resources can be accessed without any human interaction, SaaS customers are provided with the abstraction of limitless computing resources, while economy of scale drives down the cost. Another key feature of cloud computing is that software updates can be rolled out and made available to all customers nearly instantaneously. In 2019, SaaS was estimated to make up the plurality (43 percent) of the cloud computing market, while IaaS and PaaS combined accounted for approximately 25 percent.
History
In the 1960s, multitasking was invented, enabling mainframe computers to serve multiple users simultaneously. Over the next decade, timesharing became the main business model for computing, and cluster computing enabled multiple computers to work together. Cloud computing emerged in the late 1990s with companies like Amazon (1994), Salesforce (1999), and Concur (1993) offering Internet-based applications on a pay-per-use basis. All of these focused on a single product to seize a high market share. Beginning with Gmail in 2004, email services were some of the first SaaS products to be mass-marketed to consumers. The market for SaaS grew rapidly throughout the early twenty-first century. Initially viewed as a technological innovation, SaaS has come to be perceived more as a business model. By 2023, SaaS had become the primary method that companies deliver applications.
Popular consumer SaaS products include all social media websites, email services like Gmail and its associated Google Docs Editors, Skype, Dropbox, and entertainment products like Netflix and Spotify. Enterprise SaaS products include Salesforce's customer relationship management (CRM) software, SAP Cloud Platform, and Oracle Cloud Enterprise Resource Planning.
Revenue models
Some SaaS providers offer free services to consumers that are funded by means such as advertising, affiliate marketing, or selling consumer data. One of the most popular models for Internet start-ups and mobile apps is freemium, where the company charges for continued use or a higher level of service. Even if the user never upgrades to the paid version, the free tier helps the company capture a higher market share and displace customers from rivals. However, the company's hosting cost increases with the number of users, regardless of whether it succeeds in enticing them to use the paid version. Another common model offers a free version that serves only as a demonstration (crippleware). Online marketplaces may charge a fee on transactions to cover the SaaS provider's costs. It used to be more common for SaaS products to be offered for a one-time cost, but this model is declining in popularity. A few SaaS products have open source code, called open SaaS. This model can provide advantages such as reduced deployment cost, less vendor commitment, and more portable applications.
The most common SaaS revenue models involve subscription and pay-per-usage fees. For customers, the advantages include reduced upfront cost, increased flexibility, and lower overall cost compared to traditional software with perpetual software licenses. In some cases, the steep one-time cost demanded by sellers of traditional software was out of the reach of smaller businesses, but pay-per-use SaaS models make the software affordable. Usage may be charged based on the number of users, transactions, amount of storage space used, or other metrics. Many buyers prefer pay-per-usage because they believe that they are relatively light users of the software, and the seller benefits by reaching occasional users who would otherwise not buy the software. However, it can cause revenue uncertainty for the seller and increases the overhead for billing.
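A sketch of the metering logic behind a usage-based fee, assuming a hypothetical tariff with per-user, per-transaction, and per-gigabyte components; the rates, fee names, and figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Usage:
    active_users: int
    transactions: int
    storage_gb: float

# Hypothetical tariff: flat platform fee plus three metered components
PLATFORM_FEE = 49.00
PER_USER = 8.00
PER_1K_TRANSACTIONS = 0.25
PER_GB_MONTH = 0.10

def monthly_invoice(usage: Usage) -> float:
    """Compute a monthly charge from metered usage (illustrative tariff)."""
    return (PLATFORM_FEE
            + PER_USER * usage.active_users
            + PER_1K_TRANSACTIONS * usage.transactions / 1000
            + PER_GB_MONTH * usage.storage_gb)

print(monthly_invoice(Usage(active_users=12, transactions=480_000,
                            storage_gb=200)))  # 49 + 96 + 120 + 20 = 285.0
```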
The subscription model of SaaS offers a continuing and renewable revenue stream to the provider, although it is vulnerable to cancellations. If a significant number of subscriptions are cancelled, the viability of the business can be placed in jeopardy. The ease of cancelling a subscription and switching to a competitor leaves customers with the leverage to win concessions from the seller. While recurring revenues can help the business and attract investors, the need for customer-service skills in convincing the customer to renew their subscription is a challenge for providers switching to subscription from other revenue models.
Adoption
SaaS products are typically accessed via a web browser as a publicly available web application, so customers can access the application anywhere from any device without needing to install or update it. SaaS providers often try to minimize the difficulty of signing up for the product. Many capitalize on the service-oriented structure to respond to customer feedback and evolve their product quickly to meet demands. This can persuade customers that the product will keep improving, and it can help the SaaS provider win customers from an established traditional software company that can likely offer a deeper feature set.
Although on-premises software is often less secure than SaaS alternatives, security and privacy are among the main reasons cited by companies that do not adopt SaaS products. SaaS companies have to protect their publicly available offerings from abuse, including denial-of-service attacks and hacking. They often use technologies such as access control, authentication, and encryption to protect data confidentiality. Nevertheless, not all companies trust SaaS providers to keep sensitive data secured. The vendor is responsible for software updates, including security patches, and for protecting the customers' data. SaaS systems inherently have a greater latency than software run on-premises due to the time for network packets to be delivered to the cloud facility. This can be prohibitive for some uses, such as time-sensitive industrial processes or warehousing.
The rise of SaaS products is one factor that has led many companies to switch from budgeting for IT as a capital expenditure to treating it as an operating expenditure. The process of migrating to SaaS and supporting it can also be a significant cost that must be accounted for.
Development
A challenge for SaaS providers is that demand is not known in advance. Their system must have enough slack to handle all users without turning any away, yet without paying for excess resources that go unused. If resources are static, they are guaranteed to be wasted during off-peak times. Sometimes cheaper off-peak rates are offered to balance the load and reduce waste. The expectation of continuous service is so high that outages in SaaS software are often reported in the news.
There are no specific software development practices that differentiate SaaS from other application development. SaaS products are often released early and often to take advantage of the flexibility of the SaaS delivery model. Agile software development is commonly used to support this release schedule. Many SaaS developers use test-driven development, or otherwise emphasize frequent software testing, because of the need to ensure availability of their service and rapid deployment. Domain-driven design, in which business goals drive development, is popular because SaaS products must sell themselves to the customer by being useful. SaaS developers do not know in advance which devices customers will try to access the product from, such as a desktop computer, tablet, or smartphone, and supporting a wide range of devices is often an important concern for the front-end development team. Progressive web applications allow some functionality to be available even if the device is offline.
SaaS applications predominantly offer integration protocols and application programming interfaces (APIs) that operate over a wide area network.
Architecture
SaaS architecture varies significantly from product to product. Nevertheless, most SaaS providers offer a multi-tenant architecture. With this model, a single version of the application, with a single configuration (hardware, network, operating system), is used for all customers ("tenants"). This means that the company does not need to support multiple versions and configurations. The architectural shift from each customer running their own version of the software on their own hardware affects many aspects of the application's design and security features. In a multi-tenant architecture, many resources can be used by different tenants or shared between multiple tenants.
The structure of a typical SaaS application can be separated into application and control planes. SaaS products differ in how these planes are separated, which might be closely integrated or loosely coupled in an event- or message-driven model. The control plane is in charge of directing the system and covers functionality such as tenant onboarding, billing, and metrics, as well as the system used by the SaaS provider to configure, manage, and operate the service. Many SaaS products are offered at different levels of service for different prices, called tiering. Tiering can affect the architecture of both planes, although the tiering logic is commonly placed in the control plane. Unlike the application plane, the services in the control plane are not designed for multitenancy.
The application plane—which varies a great deal depending on the nature of the product—implements the core functionality of the SaaS product. Key design issues include separating different tenants so they cannot view or change other tenants' data or resources. Except for the simplest SaaS applications, some microservices and other resources are allocated on a per-tenant basis, rather than shared between all tenants. Routing functionality is necessary to direct tenant requests to the appropriate services.
Some SaaS products do not share any resources between tenants—called siloing. Although this negates many of the efficiency benefits of SaaS, it makes it easier to migrate legacy software to SaaS and is sometimes offered as a premium offering at a higher price. Pooling all resources might make it possible to achieve higher efficiency, but an outage affects all customers so availability must be prioritized to a greater extent. Many systems use a combination of both approaches, pooling some resources and siloing others. Other companies group multiple tenants into pods and share resources between them.
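A sketch of tenant-aware request routing spanning the two preceding ideas, assuming a hypothetical deployment in which the tenant is identified from a request header, premium tenants are mapped to siloed backends, and all remaining tenants share a pooled backend; every name and URL is invented.

```python
# Hypothetical tenant-routing sketch: siloed vs. pooled resources.
SILOED_TENANTS = {"acme-corp": "https://acme.internal.example/api"}
POOLED_BACKEND = "https://pool.internal.example/api"

def resolve_backend(tenant_id: str) -> str:
    """Route a request to a tenant's silo if one exists, else the shared pool."""
    return SILOED_TENANTS.get(tenant_id, POOLED_BACKEND)

def handle_request(headers: dict) -> str:
    tenant = headers.get("X-Tenant-ID")
    if tenant is None:
        raise PermissionError("request must identify its tenant")
    # Tenant isolation: downstream work is always scoped by tenant_id so
    # one tenant can never view or change another tenant's data.
    return resolve_backend(tenant)

print(handle_request({"X-Tenant-ID": "acme-corp"}))   # siloed backend
print(handle_request({"X-Tenant-ID": "small-shop"}))  # pooled backend
```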
Legal issues
In the United States, constitutional search warrant laws do not protect all forms of SaaS dynamically stored data. The result is that governments may be able to request data from SaaS providers without the owner's consent.
Certain open-source licenses such as GPL-2.0 do not explicitly grant rights permitting distribution as a SaaS product in Germany.
References
Sources
Further reading
As a service
Cloud applications
Software delivery methods
Software distribution
Software industry
Revenue models | Software as a service | [
"Technology",
"Engineering"
] | 2,490 | [
"Computer industry",
"Software industry",
"Software engineering"
] |
2,262,337 | https://en.wikipedia.org/wiki/Signalman | A signalman is a person (in the armed forces, a rank or trade) whose job is to send and receive signals, traditionally using flags and lights. The role has evolved and now usually relies on electronic communication equipment. Signalmen typically work in rail transport networks, the armed forces, or construction, where they direct heavy equipment such as cranes.
Transport occupations | Signalman | [
"Physics"
] | 54 | [
"Physical systems",
"Transport",
"Transport stubs"
] |
2,262,347 | https://en.wikipedia.org/wiki/Bacillus%20safensis | Bacillus safensis is a gram-positive, spore-forming, rod-shaped bacterium, originally isolated from spacecraft and assembly-facility surfaces in Florida and California. B. safensis may have been transported to the planet Mars on the spacecraft Opportunity and Spirit in 2004. There are several known strains of this bacterium, all of which belong to the Bacillota phylum of Bacteria. This bacterium also belongs to the large, pervasive genus Bacillus. B. safensis is an aerobic chemoheterotroph and is highly resistant to salt and UV radiation. B. safensis affects plant growth, since it is a powerful plant hormone producer, and it also acts as a plant growth-promoting rhizobacterium, enhancing plant growth after root colonization. Strain B. safensis JPL-MERTA-8-2 is (so far) the only bacterial strain shown to grow noticeably faster in micro-gravity environments than on the Earth's surface.
Discovery and importance
Thirteen strains of the novel bacterium Bacillus safensis were first isolated from spacecraft surfaces and assembly-facility surfaces at the Kennedy Space Center in Florida and the Jet Propulsion Laboratory in California. The bacterium gets its name from the JPL Spacecraft Assembly Facility (SAF). Researchers used customary swabbing techniques to detect and collect the bacteria from the cleanrooms where the spacecraft were assembled at the Jet Propulsion Laboratory. The bacterium may have been accidentally carried to Mars during space missions as a result of clean-room contamination. Such contamination is an area of concern for planetary protection, as it can threaten microbial experimentation and give false positives for other microbial life forms on other planets.
V.V. Kothari and his colleagues from Saurashtra University in Gujarat, India, first isolated another strain, B. safensis VK. Strain VK was collected from Cuminum cyminum, a cumin plant in the desert area of Gujarat, India. Specifically, the bacteria were collected from the rhizosphere of the cumin plant.
Ram S. Singh and colleagues discovered one of the strains, AS-08, in soil samples of root tubers of asparagus plants in a botanical garden at Punjabi University in India. B. safensis AS-08 was found to have inulase activity, which is used for the production of fructooligosaccharides and high-fructose corn syrup. Fructooligosaccharides are used as artificial sweeteners and can be found in many commercial food products. Corn syrup is also found in many processed foods.
Davender Kumar and colleagues from Kurukshetra University in India isolated strain DVL-43 from soil samples. This strain was found to possess lipase, which is an important enzyme for fat digestion. Lipases are a class of chemicals that are abundant in nature amongst plants, animals and microorganisms that are widely used in industry for production of food, paper products, detergents and biodiesel fuel.
P. Ravikumar of the Government Arts College at Bharathiar University in India isolated strain PR-2 from explosive-laden soil samples. This strain was identified by its 16S rDNA sequence using the Sanger dideoxy sequencing method and deposited in GenBank in Maryland, U.S. It carries the accession number KP261381, with 885 base pairs of linear DNA and a base count of 175 A, 295 C, 199 G, and 216 T.
Physical characteristics and metabolism
Bacillus safensis is a gram-positive, spore-forming rod bacterium and an aerobic chemoheterotroph. Cell size ranges from 0.5 to 0.7 μm in diameter and 1.0–1.2 μm in length. The species is motile and uses polar flagella for locomotion. Many B. safensis strains are considered mesophilic, growing optimally at moderate temperatures, though certain strains exhibit extremophilic tendencies in tolerating salinity, UV exposure, low moisture, and temperatures commonly considered extreme. B. safensis FO-036b grows within a restricted optimal temperature range and cannot grow above its upper temperature limit. B. safensis FO-036b prefers 0–10% salt and a pH of 5.6. This strain was also found to produce spores that are resistant to hydrogen peroxide and UV radiation.
Strain VK of B. safensis is a salt-tolerant microorganism and can grow beyond the 0–10% salt range tolerated by most microbial species. This strain can grow in 14% NaCl, at pH values ranging from 4 to 8. Strain VK also contains genes that encode the enzyme 1-aminocyclopropane-1-carboxylate (ACC) deaminase. This enzyme generates 2-oxobutanoate and ammonia by cleaving ACC, the precursor of the plant hormone ethylene. This enables the plant to tolerate salt, heavy metals, and polyaromatic hydrocarbons. Because of these features, B. safensis VK is a powerful plant hormone producer.
Genomics
The genome of Bacillus safensis strain FO-036b shows a GC-content of 41.0-41.4 mol%.
The B. safensis VK genomic DNA was obtained from a 24-hour-old nutrient broth culture. Isolation of this strain was performed using a GenElute commercial DNA isolation kit, and whole-genome shotgun sequencing was carried out. Thirty-nine contigs (overlapping DNA fragments) larger than 200 base pairs were observed in strain VK. The strain displays a GC-content of 46.1% in a circular chromosome of 3.68 Mbp. 3,928 protein-coding sequences were identified, and 1,822 of them were assigned to one of the 457 RAST subsystems. RAST (Rapid Annotation using Subsystem Technology) is a server that generates bacterial and archaeal genome annotations. The genome also displays 73 tRNA genes. The B. safensis VK genome sequence can be found in GenBank under the accession number AUPF00000000. Another strain, DVL-43, can also be found in GenBank under the accession number KC156603, and strain PR-2 can be found under accession number KP261381. A detailed whole-genome phylogenetic analysis of the genomes of B. safensis, B. pumilus, and other Bacillota species showed them to be separated into three distinct clusters. One of the large sub-clusters includes not only strains classified in the literature as belonging to B. safensis but also some B. pumilus strains, suggesting that phylogenetic profiling may enable re-examining strain designations.
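A sketch of how a GC-content figure such as the 46.1% quoted above is computed from assembled sequence data; the toy contig is invented, and real input would be a FASTA assembly.

```python
def gc_content(seq: str) -> float:
    """Percentage of G and C bases in a DNA sequence (ambiguous bases ignored)."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    acgt = sum(seq.count(b) for b in "ACGT")
    return 100.0 * gc / acgt

# Toy contig, invented for illustration
contig = "ATGGCGTTACCGGATAACGTGGCATTAGCC"
print(f"GC content = {gc_content(contig):.1f}%")
```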
Strains
Listed below are currently identified Bacillus safensis strains, including where they were discovered, and the year discovered (if available).
Bacillus safensis subsp. safensis FO-36B – clean room – California (1999)
Bacillus safensis NH21E_2 – sediment – South China Sea
Bacillus safensis B204-B1-5 – sediment – South China Sea
Bacillus safensis EMJ-O3-B1-22 – sediment – South China Sea
Bacillus safensis CJWT7 – sediment – South China Sea
Bacillus safensis SLN29 – sediment – South China Sea
Bacillus safensis BMO4-13 – surface water – Pacific Ocean
Bacillus safensis D21 – sediment – Arctic Ocean
Bacillus safensis HYg-9 – intestinal tract contents of fish – Xiamen Island
Bacillus safensis NP-4 – surface water – Arctic Ocean
Bacillus safensis 15-BO4 10-15-3 – Sediment – Bering Sea
Bacillus safensis DW3-7 – aquaculture water – shrimp farm
Bacillus safensis FO-33 – clean room – California (1999)
Bacillus safensis SAFN-001 – entrance floor of Jet Propulsion Lab (2001)
Bacillus safensis SAFN-027 – ante room of Jet Propulsion Lab (2001)
Bacillus safensis SAFN-036 – clean room of Jet Propulsion Lab (2001)
Bacillus safensis SAFN-037 – clean room floor of JPL (2001)
Bacillus safensis KL-052 – clean room cabinet top of JPL (2001)
Bacillus safensis 51-3C – Mars Odyssey spacecraft surface (2002)
Bacillus safensis 81-4C – Mars Odyssey assembly facility floor (2002)
Bacillus safensis A2-2C – Mars Odyssey assembly facility floor (2002)
Bacillus safensis 84-1C – Mars Odyssey assembly facility floor (2002)
Bacillus safensis 84-3C – Mars Odyssey assembly facility floor (2002)
Bacillus safensis 84-4C – Mars Odyssey assembly facility floor (2002)
Bacillus safensis JPL-MERTA-8-2 - Mars Exploration Rover clean room of JPL (2004)
Bacillus safensis DVL-43 – India
Bacillus safensis VK – Gujarat, India
Bacillus safensis AS08 – botanical garden - Punjabi University, India
Bacillus safensis PR-2 – explosive laden soil - Tamil Nadu, India (2015)
Bacillus safensis subsp. safensis HCM-06 – rhizospheric soil – India(2022)
Bacillus safensis BB2 – bee bread (fermented bee pollen) - Malaysia (2018)
Bacillus safensis MAE 17 - soil - Egypt (2019)
Bacillus safensis subsp. osmophilus BC09 - condensed milk - Spain (2019)
Differentiation between related species
Several isolates of the genus Bacillus are nearly identical to Bacillus pumilus. The group of isolates related to B. pumilus contains five related species: B. pumilus, B. safensis, B. stratosphericus, B. altitudinis, and B. aerophilus. These species are difficult to distinguish due to the 99.5% similarity of their 16S rRNA gene sequences. Recently, scientists have discovered an alternative way to differentiate between these closely related species, especially B. pumilus and B. safensis.
DNA gyrase is an important enzyme that introduces negative supercoils into DNA and is involved in the biological processes of DNA replication and transcription. DNA gyrase is made of two subunits, A and B, denoted gyrA and gyrB. The gyrB gene encodes the B subunit of this type II topoisomerase, which is essential for DNA replication, and the gene is conserved among bacterial species. Molecular evolution proceeds at a faster rate in gyrB-related gene sequences than in 16S rRNA gene sequences, giving finer phylogenetic resolution. These subunits have provided a way to phylogenetically distinguish the diversity of species related to B. pumilus, which includes B. safensis. Strain B. safensis DSM19292 shares 90.2% gyrA sequence similarity with B. pumilus strain .
In 1952, a strain of B. pumilus was discovered, deposited in the DSMZ culture collection, and labeled as strain . The strain was identified before B. safensis was discovered. In 2012, gyrA sequence similarity was tested between the B. pumilus strain and B. pumilus strain , as well as against B. safensis strain (type strain FO-36b). Strain showed 90.4% and 98% sequence similarity with B. pumilus strain and B. safensis strain , respectively. These results indicated that may in fact be a B. safensis strain rather than a B. pumilus strain, and they supported the use of gyrA sequences to differentiate closely related bacteria.
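A sketch of the kind of pairwise percent-identity calculation behind figures such as 90.4% and 98% above, assuming the two sequences have already been aligned and glossing over gap-treatment conventions; the toy fragments are invented.

```python
def percent_identity(a: str, b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences
    (positions where either sequence has a gap '-' are skipped)."""
    assert len(a) == len(b), "sequences must be aligned to the same length"
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    matches = sum(1 for x, y in pairs if x == y)
    return 100.0 * matches / len(pairs)

# Invented aligned fragments standing in for gyrA sequences
s1 = "ATGGCTGAACGTATCGG-ACCTTAG"
s2 = "ATGGCTGAGCGTATCGGTACCTTGG"
print(f"identity = {percent_identity(s1, s2):.1f}%")
```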
See also
Interplanetary contamination
Tersicoccus phoenicis
References
External links
B. safensis type strain at BacDive
B. safensis type strain at NamesforLife.com
safensis
Extremophiles
Bacteria described in 2006 | Bacillus safensis | [
"Biology",
"Environmental_science"
] | 2,589 | [
"Organisms by adaptation",
"Extremophiles",
"Environmental microbiology",
"Bacteria"
] |
2,262,550 | https://en.wikipedia.org/wiki/Thermodynamic%20limit | In statistical mechanics, the thermodynamic limit or macroscopic limit of a system is the limit for a large number of particles N (e.g., atoms or molecules) in which the volume V is taken to grow in proportion with the number of particles.
The thermodynamic limit is defined as the limit N → ∞, V → ∞ with the particle density N/V held fixed.
In this limit, macroscopic thermodynamics is valid. There, thermal fluctuations in global quantities are negligible, and all thermodynamic quantities, such as pressure and energy, are simply functions of the thermodynamic variables, such as temperature and density. For example, for a large volume of gas, the fluctuations of the total internal energy are negligible and can be ignored, and the average internal energy can be predicted from knowledge of the pressure and temperature of the gas.
Note that not all types of thermal fluctuations disappear in the thermodynamic limit: only the fluctuations in global system variables cease to be important.
Detectable fluctuations remain (typically at microscopic scales) in some physically observable quantities, such as:
microscopic spatial density fluctuations in a gas, which scatter light (Rayleigh scattering)
the motion of visible particles suspended in a fluid (Brownian motion)
electromagnetic field fluctuations (blackbody radiation in free space, Johnson–Nyquist noise in wires)
Mathematically, taking the thermodynamic limit amounts to performing an asymptotic analysis.
Origin
The thermodynamic limit is essentially a consequence of the central limit theorem of probability theory. The internal energy of a gas of N molecules is the sum of order N contributions, each of which is approximately independent, so the central limit theorem predicts that the ratio of the size of the fluctuations to the mean is of order 1/√N. Thus for a macroscopic volume with perhaps Avogadro's number of molecules, fluctuations are negligible, and so thermodynamics works. In general, almost all macroscopic volumes of gases, liquids, and solids can be treated as being in the thermodynamic limit.
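A numerical sketch of this 1/√N scaling: the total energy of N independent, identically distributed contributions is sampled repeatedly, and the relative fluctuation is compared with 1/√N. The exponential single-particle energy distribution is an arbitrary stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_fluctuation(n_particles, n_samples=200_000):
    """Std/mean of the total energy of n_particles i.i.d. exponential
    contributions. The sum of N unit exponentials is Gamma(N, 1), so it
    can be sampled directly without building an N-column array."""
    totals = rng.gamma(shape=n_particles, scale=1.0, size=n_samples)
    return totals.std() / totals.mean()

for n in (10, 1_000, 100_000):
    print(f"N={n:>7}: relative fluctuation = {relative_fluctuation(n):.5f}, "
          f"1/sqrt(N) = {n**-0.5:.5f}")
```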
For small microscopic systems, different statistical ensembles (microcanonical, canonical, grand canonical) permit different behaviours. For example, in the canonical ensemble the number of particles inside the system is held fixed, whereas particle number can fluctuate in the grand canonical ensemble. In the thermodynamic limit, these global fluctuations cease to be important.
It is at the thermodynamic limit that the additivity property of macroscopic extensive variables is obeyed. That is, the entropy of two systems or objects taken together (like their energy and volume) is the sum of the two separate values. In some models of statistical mechanics, the thermodynamic limit exists but depends on boundary conditions. For example, this happens in the six-vertex model: the bulk free energy is different for periodic boundary conditions and for domain-wall boundary conditions.
Inapplicability
A thermodynamic limit does not exist in all cases. Usually, a model is taken to the thermodynamic limit by increasing the volume together with the particle number while keeping the particle number density constant. Two common regularizations are the box regularization, where matter is confined to a geometrical box, and the periodic regularization, where matter is placed on the surface of a flat torus (i.e. box with periodic boundary conditions). However, the following three examples demonstrate cases where these approaches do not lead to a thermodynamic limit:
Particles with an attractive potential that (unlike the van der Waals force between molecules) does not become repulsive even at very short distances: in such a case, matter tends to clump together instead of spreading out evenly over all the available space. This is the case for gravitational systems, where matter tends to clump into filaments, galactic superclusters, galaxies, stellar clusters, and stars.
A system with a nonzero average charge density: In this case, periodic boundary conditions cannot be used because there is no consistent value for the electric flux. With a box regularization, on the other hand, matter tends to accumulate along the boundary of the box instead of being spread more or less evenly with only minor fringe effects.
Certain quantum mechanical phenomena near absolute zero temperature present anomalies; e.g., Bose–Einstein condensation, superconductivity and superfluidity.
Any system that is not H-stable; this case is also called catastrophic.
References
Concepts in physics
Statistical mechanics
Thermodynamics | Thermodynamic limit | [
"Physics",
"Chemistry",
"Mathematics"
] | 948 | [
"Statistical mechanics",
"Thermodynamics",
"nan",
"Dynamical systems"
] |