| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
42,177,410 | https://en.wikipedia.org/wiki/DNA%20base%20flipping | DNA base flipping, or nucleotide flipping, is a mechanism in which a single nucleotide base, or nucleobase, is rotated outside the nucleic acid double helix. This occurs when a nucleic acid-processing enzyme needs access to the base to perform work on it, such as its excision for replacement with another base during DNA repair. It was first observed in 1994 using X-ray crystallography in a methyltransferase enzyme catalyzing methylation of a cytosine base in DNA. Since then, it has been shown to be used by different enzymes in many biological processes such as DNA methylation, various DNA repair mechanisms, and DNA replication. It can also occur in RNA double helices or in the DNA:RNA intermediates formed during RNA transcription.
DNA base flipping occurs by breaking the hydrogen bonds between the bases and unstacking the base from its neighbors. This could occur through an active process, where an enzyme binds to the DNA and then facilitates rotation of the base, or a passive process, where the
base rotates out spontaneously, and this state is recognized and bound by an enzyme. It can be detected using
X-ray crystallography, NMR spectroscopy, fluorescence spectroscopy, or hybridization probes.
Discovery
Base flipping was first observed in 1994 when researchers Klimasauskas, Kumar, Roberts, and Cheng used X-ray crystallography to view an intermediate step in the chemical reaction of a methyltransferase bound to DNA. The methyltransferase they used was the C5-cytosine methyltransferase from Haemophilus haemolyticus (M. HhaI). This enzyme recognizes a specific sequence of the DNA (5'-GCGC-3') and methylates the first cytosine base of the sequence at its C5 location. Upon crystallization of the M. HhaI-DNA complex, they saw the target cytosine base was rotated completely out of the double helix and was positioned in the active site of the M. HhaI. It was held in place by numerous interactions between the M. HhaI and DNA.
The authors theorized that base flipping was a mechanism used by many other enzymes, such as helicases, recombination enzymes, RNA polymerases, DNA polymerases, and Type II topoisomerases. Much research has been done in the years subsequent to this discovery and it has been found that base flipping is a mechanism used in many of the biological processes the authors suggest.
Mechanism
Complementary DNA bases are held together by hydrogen bonds, which are relatively weak and can be easily broken. Base flipping occurs on a millisecond timescale by breaking these hydrogen bonds and unstacking the base from its neighbors. The base is rotated out of the double helix by 180 degrees, typically via the major groove, and into the active site of an enzyme. This opening leads to small conformational changes in the DNA backbone, which are quickly stabilized by increased enzyme-DNA interactions. Studies of the free-energy profiles of base flipping have shown that the free-energy barrier to flipping can be lowered by 17 kcal/mol for M.HhaI in the closed conformation.
There are two mechanisms of DNA base flipping: active and passive. In the active mechanism, an enzyme binds to the DNA and then actively rotates the base, while in the passive mechanism a damaged base rotates out spontaneously first, then is recognized and bound by the enzyme. Research has demonstrated both mechanisms: uracil-DNA glycosylase follows the passive mechanism and Tn10 transposase follows the active mechanism.
Furthermore, studies have shown that DNA base flipping is used by many different enzymes in a variety of biological processes such as DNA methylation, various DNA repair mechanisms, RNA transcription, and DNA replication.
Biological processes
DNA modification and repair
DNA bases can become damaged, and to ensure the genetic integrity of the DNA, enzymes must repair any damage. There are many types of DNA repair. Base excision repair utilizes base flipping to flip the damaged base out of the double helix and into the specificity pocket of a glycosylase, which hydrolyzes the glycosidic bond and removes the base. DNA glycosylases interact with DNA, flipping bases to detect a mismatch. An example of base excision repair occurs when a cytosine base is deaminated and becomes a uracil base. This causes a U:G mispair, which is detected by uracil-DNA glycosylase. The uracil base is flipped out into the glycosylase active pocket, where it is removed from the DNA strand. Base flipping is also used to repair lesions such as 8-oxoguanine (oxoG) and thymine dimers created by UV radiation.
Replication, transcription and recombination
DNA replication and RNA transcription both make use of base flipping. DNA polymerase is an enzyme that carries out replication. It can be thought of as a hand that grips the DNA single strand template. As the template passes across the palm region of the polymerase, the template bases are flipped out of the helix and away from the dNTP binding site. During transcription, RNA polymerase catalyzes RNA synthesis. During the initiation phase, two bases in the -10 element flip out from the helix and into two pockets in RNA polymerase. These new interactions stabilize the -10 element and promote the DNA strands to separate or melt.
Base flipping occurs during latter stages of recombination. RecA is a protein that promotes strand invasion during homologous recombination. Base flipping has been proposed as the mechanism by which RecA can enable a single strand to recognize homology in duplex DNA. Other studies indicate that it is also involved in V(D)J Recombination.
DNA methylation
DNA methylation is the process in which a methyl group is added to either a cytosine or an adenine. This process activates or inactivates gene expression, thereby resulting in gene regulation in eukaryotic cells. The DNA methylation process is also known to be involved in the formation of certain types of cancer. In order for this chemical modification to occur, the target base must flip out of the DNA double helix to allow the methyltransferase to catalyze the reaction.
Target recognition by restriction endonucleases
Restriction endonucleases, also known as restriction enzymes, are enzymes that cleave the sugar-phosphate backbone of DNA at specific nucleotide sequences that are usually four to six nucleotides long. Studies performed by Horton and colleagues have shown that the mechanism by which these enzymes cleave the DNA involves base flipping, as well as bending of the DNA and expansion of the minor groove. In 2006, Horton and colleagues presented X-ray crystallography evidence showing that the restriction endonuclease HinP1I utilizes base flipping in order to recognize its target sequence. This enzyme is known to cleave the DNA at the palindromic tetranucleotide sequence G↓CGC.
Experimental approaches for detection
X-ray crystallography
X-ray crystallography is a technique that measures the angles and intensities of X-rays diffracted by a crystal in order to determine the atomic and molecular structure of the crystal of interest. Crystallographers are then able to produce a three-dimensional picture in which the positions of the atoms, the chemical bonds, and other important characteristics can be determined. Klimasauskas and colleagues used this technique to observe the first base flipping phenomenon; their experimental procedure involved several steps:
Purification
Crystallization
Data Collection
Structure determination and refinement
During purification, Haemophilus haemolyticus methyltransferase was overexpressed and purified using a high-salt back-extraction step to selectively solubilize M.HhaI, followed by fast protein liquid chromatography (FPLC), as done previously by Kumar and colleagues. The authors utilized a Mono-Q anion exchange column to remove the small quantity of proteinaceous material and unwanted DNA prior to the crystallization step. Once M.HhaI was successfully purified, crystals of the M.HhaI-DNA complex were grown at 16 °C using the hanging-drop vapor diffusion technique. The authors were then able to collect the X-ray data according to a technique used by Cheng and colleagues in 1993. This technique involved the measurement of the diffraction intensities on a FAST detector, where the exposure times for a 0.1° rotation were 5 or 10 seconds. For the structure determination and refinement, Klimasauskas and colleagues used molecular replacement with the refined apo structure described by Cheng and colleagues in 1993, where the programs X-PLOR, MERLOT, and TRNSUM were used to solve the rotation and translation functions. This part of the study involved the use of a variety of software and computer algorithms to solve the structure and characteristics of the crystal of interest.
NMR spectroscopy
NMR spectroscopy is a technique that has been used over the years to study important dynamic aspects of base flipping. This technique allows researchers to determine the physical and chemical properties of atoms and other molecules by utilizing the magnetic properties of atomic nuclei. NMR can provide a variety of information including structure, reaction states, the chemical environment of molecules, and dynamics. To investigate the enzyme-induced base flipping of the HhaI methyltransferase, researchers incorporated two 5-fluorocytosine residues into the target and reference positions within the DNA substrate so that 19F chemical shift analysis could be performed. The 19F chemical shift analysis indicated that the DNA complexes existed with multiple forms of the target 5-fluorocytosine along the base flipping pathway.
Fluorescence spectroscopy
Fluorescence spectroscopy is a technique that is used to assay a sample using a fluorescent probe. DNA nucleotides themselves are not good candidates for this technique because they do not readily re-emit light upon light excitation, so a fluorescent marker is needed to detect base flipping. 2-Aminopurine is a base that is structurally similar to adenine but is highly fluorescent when flipped out from the DNA duplex. It is commonly used to detect base flipping and has an excitation at 305-320 nm and an emission at 370 nm, so that it is well separated from the excitation of proteins and DNA. Other fluorescent probes used to study DNA base flipping are 6MAP (4-amino-6-methyl-7(8H)-pteridone) and Pyrrolo-C (3-[β-D-2-ribofuranosyl]-6-methylpyrrolo[2,3-d]pyrimidin-2(3H)-one). Time-resolved fluorescence spectroscopy is also employed to provide a more detailed picture of the extent of base flipping as well as the conformational dynamics occurring during base flipping.
Hybridization probing
Hybridization probes can be used to detect base flipping. This technique uses a molecule with a sequence complementary to the target sequence, such that it binds to a single strand of DNA or RNA. Several chemical probes have also been used to detect flipped-out bases: potassium permanganate is used to detect thymine residues that have been flipped out by cytosine-C5 and adenine-N6 methyltransferases, and chloroacetaldehyde is used to detect cytosine residues flipped out by the HhaI DNA cytosine-5 methyltransferase (M.HhaI).
See also
DNA repair
Base excision repair
DNA replication
RNA transcription
DNA methylation
DNA methyltransferase
Genetic recombination
Homologous recombination
DNA
Epigenetics
Epigenomics
References
Molecular biology
Base flipping | DNA base flipping | Chemistry,Biology | 2,499 |
25,394,740 | https://en.wikipedia.org/wiki/Spaghetti%20bridge | A spaghetti bridge is an architectural model of a bridge, made of uncooked spaghetti or other hard, dry, straight noodles. Bridges are constructed for both educational experiments and competitions. The aim is usually to construct a bridge with a specific quantity of materials over a specific span, that can sustain a load. In competitions, the bridge that can hold the greatest load for a short period of time wins the contest. There are many contests around the world, usually held by schools and colleges.
Okanagan College contest
The original Spaghetti Bridge competition has run at Okanagan College in British Columbia since 1983, and is open to international entrants who are full-time secondary or post-secondary students.
The winners of the 2009 competition were Norbert Pozsonyi and Aliz Totivan of the Szechenyi Istvan University of Győr in Hungary. They won $1,500 with a bridge that weighed 982 grams and held 443.58 kg. Second place went to Brendon Syryda and Tyler Pearson of Okanagan College with a bridge that weighed 982 grams and held 98.71 kg.
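For a sense of scale, the load-to-weight ratio implied by these results can be computed directly. A short Python sketch (this ratio is an illustrative comparison, not the contest's official scoring metric):

```python
# Load-to-weight ratios from the 2009 Okanagan College results quoted above.
bridges = {
    "Pozsonyi & Totivan": (0.982, 443.58),  # (bridge mass in kg, load held in kg)
    "Syryda & Pearson":   (0.982, 98.71),
}

for team, (mass_kg, load_kg) in bridges.items():
    # The winning bridge held roughly 450 times its own weight.
    print(f"{team}: held {load_kg / mass_kg:.0f} times its own weight")
```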
Contests
Spaghetti bridge building contests around the world include:
Abbotsford School District
Australian Maritime College
Budapest Technical University
Camosun College
Coonabarabran High School
Delft University of Technology
Ferris State University
George Brown College
Institute of Machine Design and Security Technology
Instituto GayLussac - Ensino Fundamental e Médio
Universidade da Coruña, Escola Politécnica de Enxeñaría
Italy High School
James Cook University
Johns Hopkins University
McGill University
Monash University
Nathan Hale High School
Okanagan College
Riga Technical University
Rowan University
Technical University of Madrid
Universidad del Valle de Guatemala
Universidade Federal do Rio Grande do Sul
University of Architecture, Civil Engineering and Geodesy
University of British Columbia
University of Maribor
University of Salento
University of South Australia
University of Southern California
University of Technology Sydney
University of Tehran
University of the Andes
Vilnius Gediminas Technical University
Winston Science
Woodside Elementary School
Bezalel Academy of Arts and Design
Universidad de Buenos Aires - Facultad de Arquitectura Diseño y Urbanismo
Instituto Federal de Educação, Ciência e Tecnologia de Santa Catarina - Joinville, Brazil
See also
Architectural engineering
Balsa wood bridge
Civil engineering
Physics
Problem-based learning
Statics
Truss
References
Winston Science http://www.winston-school.org/?PageName=LatestNews&Section=Highlights&ItemID=106650&ISrc=School&Itype=Highlights&SchoolID=4831
Further reading
Estimating the weight and the failure load of a spaghetti bridge: a deep learning approach. DOI: 10.1080/0952813X.2019.1694590
External links
Resources
Bridges
Scale modeling | Spaghetti bridge | Physics,Engineering | 571 |
69,426,301 | https://en.wikipedia.org/wiki/Japanese%20Federation%20of%20Chemical%20and%20General%20Workers%27%20Unions | The Japanese Federation of Chemical and General Workers' Unions (; Zenka Domei) was a trade union representing workers in various industries, especially the chemical industry, in Japan.
The union was established in 1951, affiliated with the Japanese Federation of Labour, and later, with the Japanese Confederation of Labour. In 1958, it had 31,801 members, growing to 88,233 by 1967. It was later a founding affiliate of the Japanese Trade Union Confederation. In 1995, it merged with the National Federation of General Workers' Unions to form the Japanese Federation of Chemical, Service and General Trade Unions.
References
Chemical industry trade unions
Trade unions established in 1951
Trade unions disestablished in 1995
Trade unions in Japan | Japanese Federation of Chemical and General Workers' Unions | Chemistry | 143 |
5,565,333 | https://en.wikipedia.org/wiki/Squircle | A squircle is a shape intermediate between a square and a circle. There are at least two definitions of "squircle" in use, one based on the superellipse, the other arising from work in optics. The word "squircle" is a portmanteau of the words "square" and "circle". Squircles have been applied in design and optics.
Superellipse-based squircle
In a Cartesian coordinate system, the superellipse is defined by the equation
$$\left|\frac{x-a}{r_a}\right|^n + \left|\frac{y-b}{r_b}\right|^n = 1$$
where $r_a$ and $r_b$ are the semi-major and semi-minor axes, $a$ and $b$ are the $x$ and $y$ coordinates of the centre of the ellipse, and $n$ is a positive number. The squircle is then defined as the superellipse with $r_a = r_b = r$ and $n = 4$. Its equation is:
$$(x-a)^4 + (y-b)^4 = r^4$$
where $r$ is the minor radius of the squircle, and the major radius, $r\sqrt[4]{2}$, is the geometric mean between the corresponding radii of the square ($r\sqrt{2}$, to a corner) and the circle ($r$). Compare this to the equation of a circle, $(x-a)^2 + (y-b)^2 = r^2$. When the squircle is centred at the origin, then $a = b = 0$, its equation is $x^4 + y^4 = r^4$, and it is called Lamé's special quartic.
The area inside the squircle can be expressed in terms of the gamma function $\Gamma$ as
$$A = 4r^2\,\frac{\Gamma\!\left(1+\tfrac{1}{4}\right)^2}{\Gamma\!\left(1+\tfrac{1}{2}\right)} = \sqrt{2}\,\varpi\,r^2 \approx 3.708\,r^2$$
where $r$ is the minor radius of the squircle, and $\varpi \approx 2.622$ is the lemniscate constant.
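A quick numerical check of this formula, written as a Python sketch using only the standard library:

```python
import math

r = 1.0  # minor radius of the squircle

# 4 r^2 * Gamma(1 + 1/4)^2 / Gamma(1 + 1/2)
area_gamma = 4 * r**2 * math.gamma(1.25)**2 / math.gamma(1.5)

# sqrt(2) * lemniscate constant * r^2
lemniscate = 2.6220575542921198
area_lemniscate = math.sqrt(2) * lemniscate * r**2

print(area_gamma, area_lemniscate)  # both ~ 3.7081
```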
p-norm notation
In terms of the $p$-norm $\|\cdot\|_p$ on $\mathbb{R}^2$, the squircle can be expressed as:
$$\left\|\mathbf{x} - \mathbf{x}_c\right\|_p = r$$
where $p = 4$, $\mathbf{x}_c = (a, b)$ is the vector denoting the centre of the squircle, and $\mathbf{x} = (x, y)$. Effectively, this is still a "circle" of points at a distance $r$ from the centre, but distance is defined differently. For comparison, the usual circle is the case $p = 2$, whereas the square is given by the case $p \to \infty$ (the supremum norm), and a rotated square is given by $p = 1$ (the taxicab norm). This allows a straightforward generalization to a spherical cube, or sphube, in $\mathbb{R}^3$, or hypersphube in higher dimensions.
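The following minimal Python sketch illustrates this unified view; the helper name is illustrative:

```python
import numpy as np

def p_norm_distance(x, y, p, center=(0.0, 0.0)):
    """Distance from `center` in the p-norm; p = np.inf gives the supremum norm."""
    dx, dy = abs(x - center[0]), abs(y - center[1])
    if np.isinf(p):
        return max(dx, dy)
    return (dx**p + dy**p) ** (1.0 / p)

# Points at p-norm distance r from the origin trace out:
#   p=1 a rotated square, p=2 a circle, p=4 a squircle, p=inf an axis-aligned square.
for p in (1, 2, 4, np.inf):
    d = p_norm_distance(1.0, 1.0, p)
    print(f"p={p}: distance of (1,1) from origin = {d:.4f}")
# Output: 2.0000, 1.4142, 1.1892, 1.0000 -- the "corner" moves inward as p grows.
```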
Fernández-Guasti squircle
Another squircle comes from work in optics. It may be called the Fernández-Guasti squircle or FG squircle, after one of its authors, to distinguish it from the superellipse-related squircle above. This kind of squircle, centered at the origin, is defined by the equation:
$$x^2 + y^2 - \frac{s^2 x^2 y^2}{r^2} = r^2$$
where $r$ is the minor radius of the squircle, $s \in [0, 1]$ is the squareness parameter, and $x$ and $y$ are in the interval $[-r, r]$. If $s = 0$, the equation is a circle; if $s = 1$, it is a square. This equation allows a smooth parametrization of the transition to a square from a circle, without involving infinity.
Polar form
The FG squircle's radial distance $\rho$ from center to edge can be described parametrically in terms of the circle radius $r$ and rotation angle $\theta$:
$$\rho(\theta) = \frac{r\sqrt{2}}{s\left|\sin 2\theta\right|}\sqrt{1 - \sqrt{1 - s^2\sin^2 2\theta}}$$
In practice, when plotting on a computer, a small value like 0.001 can be added to the angle argument to avoid the indeterminate $0/0$ form when $\theta = \frac{k\pi}{2}$ for any integer $k$, or one can set $\rho = r$ for these cases.
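A minimal Python sketch of this evaluation, using the convention $\rho = r$ at the indeterminate angles (the function name is illustrative):

```python
import numpy as np

def fg_squircle_radius(theta, r=1.0, s=0.8):
    """Radial distance of the Fernandez-Guasti squircle at angle theta.

    Handles the 0/0 indeterminate form at theta = k*pi/2, where rho = r.
    """
    u = s * np.abs(np.sin(2 * theta))
    if u < 1e-12:               # on an axis: the curve touches the circle
        return r
    return (r * np.sqrt(2) / u) * np.sqrt(1 - np.sqrt(1 - u**2))

for t in np.linspace(0, np.pi / 2, 5):
    print(f"theta={t:.3f}  rho={fg_squircle_radius(t):.4f}")
# rho equals r on the axes and peaks between them (~1.118 at 45 deg for s=0.8).
```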
Linearizing squareness
The squareness parameter $s$ in the FG squircle, while bounded between 0 and 1, results in a nonlinear interpolation of the squircle "corner" between the inner circle and the square corner. A conversion from a linear squareness parameter $q \in [0, 1]$ to $s$ can be obtained by requiring the corner radius $\rho(\pi/4)$ to grow linearly from $r$ (at $q = 0$) to $r\sqrt{2}$ (at $q = 1$): writing $k = 1 + q\left(\sqrt{2} - 1\right)$ and solving $\rho(\pi/4) = kr$ for $s$ gives
$$s = \frac{2\sqrt{k^2 - 1}}{k^2},$$
which can then be used in the squircle formula to obtain correctly interpolated squircles.
Periodic squircle
Another type of squircle arises from trigonometry. This type of squircle is periodic in $x$ and $y$, with minor radius $r$ and squareness parameter $s$, where $x$ and $y$ are in the interval $[-r, r]$. As $s$ approaches 0 in the limit, the equation becomes a circle. When $s = 1$, the equation is a square. This shape can be visualized using online graphing calculators such as Desmos.
Similar shapes
Rounded square
A shape similar to a squircle, called a rounded square, may be generated by separating four quarters of a circle and connecting their loose ends with straight lines, or by separating the four sides of a square and connecting them with quarter-circles. Such a shape is very similar but not identical to the squircle. Although constructing a rounded square may be conceptually and physically simpler, the squircle has a simpler equation and can be generalised much more easily. One consequence of this is that the squircle and other superellipses can be scaled up or down quite easily. This is useful where, for example, one wishes to create nested squircles.
Truncated circle
Another similar shape is a truncated circle, the boundary of the intersection of the regions enclosed by a square and by a concentric circle whose diameter is both greater than the length of the side of the square and less than the length of the diagonal of the square (so that each figure has interior points that are not in the interior of the other). Such shapes lack the tangent continuity possessed by both superellipses and rounded squares.
Rounded cube
A rounded cube can be defined in terms of superellipsoids.
Sphube
Similar to the name squircle, a sphube is a portmanteau of sphere and cube. It is the three-dimensional counterpart to the squircle. The equation for the FG-squircle in three dimensions is:
$$x^2 + y^2 + z^2 - \frac{s^2 x^2 y^2}{r^2} - \frac{s^2 y^2 z^2}{r^2} - \frac{s^2 x^2 z^2}{r^2} + \frac{s^4 x^2 y^2 z^2}{r^4} = r^2$$
In polar coordinates, the sphube is expressed parametrically as
While the squareness parameter $s$ in this case does not behave identically to its squircle counterpart, nevertheless the surface is a sphere when $s = 0$ and approaches a cube with sharp corners as $s \to 1$.
Uses
Squircles are useful in optics. If light is passed through a two-dimensional square aperture, the central spot in the diffraction pattern can be closely modelled by a squircle or supercircle. If a rectangular aperture is used, the spot can be approximated by a superellipse.
Squircles have also been used to construct dinner plates. A squircular plate has a larger area (and can thus hold more food) than a circular one with the same radius, but still occupies the same amount of space in a rectangular or square cupboard.
Many Nokia phone models have been designed with a squircle-shaped touchpad button, as was the second generation Microsoft Zune. Apple uses an approximation of a squircle (actually a quintic superellipse) for icons in iOS, iPadOS, macOS, and the home buttons of some Apple hardware. One of the shapes for adaptive icons introduced in the Android "Oreo" operating system is a squircle. Samsung uses squircle-shaped icons in their Android software overlay One UI, and in Samsung Experience and TouchWiz.
Italian car manufacturer Fiat used numerous squircles in the interior and exterior design of the third generation Panda.
See also
Astroid
Ellipse
Ellipsoid
Lp spaces
Oval
Squround
Superegg
References
External links
by Matt Parker
Online Calculator for supercircle and super-ellipse
Web based supercircle generator
Geometric shapes
Plane curves
Quartic curves | Squircle | Mathematics | 1,439 |
56,628,682 | https://en.wikipedia.org/wiki/Subfield%20of%20an%20algebra | In algebra, a subfield of an algebra A over a field F is an F-subalgebra that is also a field. A maximal subfield is a subfield that is not contained in a strictly larger subfield of A.
If A is a finite-dimensional central simple algebra, then a subfield E of A is called a strictly maximal subfield if $[E : F]^2 = \dim_F A$.
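For example (a standard illustration): the real quaternions $\mathbb{H}$ form a four-dimensional central simple algebra over $F = \mathbb{R}$, and the copy of $\mathbb{C}$ inside $\mathbb{H}$ is a strictly maximal subfield, since
$$[\mathbb{C} : \mathbb{R}]^2 = 2^2 = 4 = \dim_{\mathbb{R}} \mathbb{H}.$$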
References
Richard S. Pierce. Associative algebras. Graduate texts in mathematics, Vol. 88, Springer-Verlag, 1982,
Abstract algebra | Subfield of an algebra | Mathematics | 108 |
36,135,842 | https://en.wikipedia.org/wiki/NOTT-202 | NOTT-202 is a two-part chemical compound that is capable of selectively absorbing carbon dioxide. It is a metal–organic framework (MOF) that functions like a sponge, adsorbing selected gases at high pressures. Its creation was announced by scientists in 2012. The researchers claimed this structure was an entirely new class of porous material.
References
Carbon capture and storage
Metal-organic frameworks
Indium compounds | NOTT-202 | Chemistry,Materials_science,Engineering | 86 |
5,564,995 | https://en.wikipedia.org/wiki/Montignac%20diet | The Montignac diet is a high-protein low-carbohydrate fad diet that was popular in the 1990s, mainly in Europe. It was invented by Frenchman Michel Montignac (1944–2010), an international executive for the pharmaceutical industry, who, like his father, was overweight in his youth. His method is aimed at people wishing to lose weight efficiently and lastingly, reduce risks of heart failure, and prevent diabetes.
The Montignac diet is based on the glycemic index (GI) and forbids high‐carbohydrate foods that stimulate secretion of insulin.
Principle
Carbohydrate-rich foods are classified according to their glycemic index (GI), a ranking system for carbohydrates based on their effect on blood glucose levels after meals. High-GI carbohydrates are considered "bad", with the exception of foodstuffs like carrots that, despite a high GI, have such a low carbohydrate content that they do not significantly affect blood sugar levels (they have a low glycemic load, or GL). The glycemic index was devised by Jenkins et al. at the University of Toronto as a way of conveniently classifying foods according to the way they affect blood sugar, and was developed for patients with diabetes mellitus. Montignac was the first to recommend using the glycemic index for slimming rather than for managing blood sugar levels, advising anyone wishing to lose weight to avoid sharp increases in blood glucose (as opposed to gradual increases), rather than treating this only as a strategy for diabetics to stabilize blood sugar levels.
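For reference, the glycemic load mentioned above combines the GI with the carbohydrate content of an actual serving. A small illustrative Python sketch (the GI and carbohydrate figures in the comments are rough example values, not authoritative nutrition data):

```python
def glycemic_load(gi, carbs_grams):
    """Glycemic load = GI x available carbohydrate (g) per serving / 100."""
    return gi * carbs_grams / 100.0

# Carrots illustrate the point above: a fairly high GI but little
# carbohydrate per serving gives a low glycemic load.
print(glycemic_load(gi=47, carbs_grams=6))    # ~2.8  -> low GL
print(glycemic_load(gi=75, carbs_grams=50))   # ~37.5 -> high GL
```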
Montignac's diet was followed by the South Beach Diet that also used the GI principle, and Michael Mosley's 5:2 diet incorporates a recommendation to select foods with a low glycemic index or glycemic load.
"Bad carbohydrates", such as those in sweets, potatoes, rice and white bread, may not be taken together with fats, especially during Phase 1 of the Method. According to Montignac's theory, these combinations will lead to the fats in the food being stored as body fat. (Some kinds of pasta, such as "al dente" durum wheat spaghetti; some varieties of rice, such as long-grain basmati; whole grains; and foods rich in fiber, have a lower GI.)
Another aspect of the diet regards the choice of fats: the desirability of fatty foods depends on the nature of their fatty acids: polyunsaturated omega 3 acids (fish fat) as well as monounsaturated fatty acids (olive oil) are the best choice, while saturated fatty acids (butter and animal fat) should be restricted. Fried foods and butter used in cooking should be avoided.
The Montignac Method is divided into two phases.
Phase I: the weight-losing phase. This phase consists chiefly of eating the appropriate carbs, namely those with glycemic index ranked at 35 or lower (pure glucose is 100 by definition). A higher protein intake, such as 1.3–1.5 grams per kg of body weight, especially from fish and legumes, can help weight loss, but people with kidney disease should ask their doctor.
Phase II: stabilization and prevention phase. Montignac states on his website that we "can even enhance our ability to choose by applying a new concept, the glycemic outcome (synthesis between glycemic index and pure carbohydrate content) and the blood sugar levels which result from the meals. Under these conditions, we can eat whatever carbohydrate we want, even those with high glycemic indexes".
In his books, Montignac also provides a good number of filling French and Mediterranean style recipes. The pleasure of food and the feeling of fullness are key concepts in the Method as they are believed to help dieters stick to the rules in the long term and not go on a binge. Montignac also recommends that dieters should never miss a meal, and have between-meal snacks if that helps to eat less at meals.
Scientific studies
Montignac's theory is disputed by nutrition experts who claim that any calorie intake that exceeds the amount that the body needs will be converted into body fat. It has been argued that Montignac confuses the direction of causality between obesity and hyperinsulinemia and that the weight loss is simply due to the hypocaloric character of the diet.
Kathleen Melanson and Johanna Dwyer in the Handbook of Obesity Treatment have noted that:
The scientific literature refutes the hypotheses of Montignac regarding the metabolic effects of carbohydrates and fatty acids.
Critics also point out that the Glycemic Index is not easy to use, as it depends on the exact variety of the food; how it was cooked; combinations with other foods in the same meal, and so on. Despite these scientific doubts, there are other serious scientific studies which endorse this method. Although a review concluded that low glycemic index diets do not achieve greater weight loss than low-fat diets, the former might lead to greater reductions in cardiovascular risk factors.
Popularity
Montignac sold 15 million books about his diet, and his method has been made famous by the celebrities who adopted it, including Gérard Depardieu and others.
See also
Diabetic diet
Glycemic efficacy
Glycemic index#Weight control
Glycemic load
Insulin index
List of diets
Low glycemic index diet
References
External links
Official Montignac website
Explanation of the Montignac Method
Diets
Fad diets
Low-carbohydrate diets | Montignac diet | Chemistry | 1,213 |
25,599,293 | https://en.wikipedia.org/wiki/Constellation%20family | Constellation families are collections of constellations sharing some defining characteristic, such as proximity on the celestial sphere, common historical origin, or common mythological theme. In the Western tradition, most of the northern constellations stem from Ptolemy's list in the Almagest (which in turn has roots that go back to Mesopotamian astronomy), and most of the far southern constellations were introduced by sailors and astronomers who traveled to the south in the 16th to 18th centuries. Separate traditions arose in India and China.
Menzel's families
Donald H. Menzel, director of the Harvard Observatory, gathered several traditional groups in his popular account, A Field Guide to the Stars and Planets (1975), and adjusted and regularized them so that his handful of groups covered all 88 of the modern constellations.
Of these families, one (Zodiac) straddles the ecliptic which divides the sky into north and south; one (Hercules) has nearly equal portions in the north and south; two are primarily in one hemisphere (Heavenly Waters in the south and Perseus in the north); and four are entirely in one hemisphere (La Caille, Bayer, and Orion in the south and Ursa Major in the north).
Ursa Major Family
The Ursa Major Family includes 10 northern constellations in the vicinity of Ursa Major: Ursa Major itself, Ursa Minor, Draco, Canes Venatici, Boötes, Coma Berenices, Corona Borealis, Camelopardalis, Lynx, and Leo Minor. The eponymous constellation Ursa Major contains the famous Big Dipper.
Zodiac
The Zodiac is a group of 12 constellations: Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpius, Sagittarius, Capricornus, Aquarius, Pisces. Some version of these constellations are found in traditions around the world, for this band around the celestial sphere includes the ecliptic, the apparent path of the sun through the year. These constellations therefore are all associated with zodiac signs. (The ecliptic also passes through the constellation Ophiuchus, which does not have an associated zodiac sign.)
Perseus Family
The Perseus Family includes several constellations associated with the Perseus myth: Cassiopeia, Cepheus, Andromeda, Perseus, Pegasus, and Cetus (representing the monster sent to devour Andromeda). Menzel also included a few neighboring constellations: Auriga, Lacerta, and Triangulum. Except for Cetus, these constellations all lie north of the ecliptic. The group reaches from near the north celestial pole to declination −30°.
Hercules Family
The Hercules Family is a group of constellations connected mainly by their adjacency on the celestial sphere. It is Menzel's largest grouping, and extends from declination +60° to −70°, mostly in the western hemisphere. It includes Hercules, Sagitta, Aquila, Lyra, Cygnus, Vulpecula, Hydra, Sextans, Crater, Corvus, Ophiuchus, Serpens, Scutum, Centaurus, Lupus, Corona Australis, Ara, Triangulum Australe, and Crux.
Orion Family
The Orion Family, on the opposite side of the sky from the Hercules Family, includes Orion, Canis Major, Canis Minor, Lepus, and Monoceros. This group of constellations draws from Greek myth, representing the hunter (Orion) and his two dogs (Canis Major and Canis Minor) chasing the hare (Lepus). Menzel added the unicorn (Monoceros) for completeness.
Heavenly Waters
The Heavenly Waters draws from the Mesopotamian tradition associating the dim area between Sagittarius and Orion with the god Ea and the Waters of the Abyss. Aquarius and Capricornus, derived from Mesopotamian constellations, would have been natural members had they not already been assigned to the Zodiac group. Instead, Menzel expanded the area and included several disparate constellations, most associated with water in some form: Delphinus, Equuleus, Eridanus, Piscis Austrinus, Carina, Puppis, Vela, Pyxis, and Columba. Carina, Puppis, and Vela historically formed part of the former constellation Argo Navis, which in Greek tradition represented the ship of Jason.
Bayer Family
The Bayer Family collects several southern constellations first introduced by Petrus Plancius on several celestial globes in the late 16th century, based on astronomical observations by the Dutch explorers Pieter Dirkszoon Keyser and Frederick de Houtman. The constellations were named mostly for exotic animals reported in the travel journals of that period, and were copied in Johann Bayer's influential celestial atlas Uranometria in 1603. The group includes Hydrus, Dorado, Volans, Apus, Pavo, Grus, Phoenix, Tucana, Indus, Chamaeleon, and Musca. Bayer labeled Musca as "Apis" (the Bee), but over time it was renamed. (Bayer's twelfth new southern constellation, Triangulum Australe, was placed by Menzel in the Hercules Family.) The Bayer Family circles the south celestial pole, forming an irregular contiguous band. Because these constellations are located in the far southern sky, their stars were not visible to the ancient Greeks and Romans.
La Caille Family
The La Caille Family comprises 12 of the 13 constellations introduced by Nicolas-Louis de Lacaille in 1756 to represent scientific instruments, together with Mensa, which commemorates Table Mountain ("Mons Mensa") in South Africa, where he set up his telescope. The group includes Norma, Circinus, Telescopium, Microscopium, Sculptor, Fornax, Caelum, Horologium, Octans, Mensa, Reticulum, Pictor, and Antlia. These dim constellations are scattered throughout the far southern sky, and their stars were mostly not visible to the ancient Greeks and Romans. (Menzel assigned Pyxis, the remaining Lacaille instrument, to the Heavenly Waters group.)
See also
List of constellations
Quadrant (astronomy)
References
Other sources
Majumdar, R. C., et al. (1951), The Vedic Age (vol. 1), The History and Culture of the Indian People (11 vols.), Bharatiya Vidya Bhavan (publisher), 1951, Delhi, India.
Sundaramoorthy, G. (1974), "The Contribution of the Cult of Sacrifice to the Development of Indian Astronomy", Indian Journal of the History of Science, Indian National Science Academy, Vol. 9, No. 1, pp. 100–106, 1974, Bombay, India.
Das, S. R. (1930), "Some Notes on Indian Astronomy", Isis (journal), University of Chicago, Vol. 14, No. 2, October, 1930, pp. 388–402.
Neugebauer, Otto, & Parker, Richard A. (1960), Egyptian Astronomical Texts (4 vols.), Lund Humphries (publisher), London.
Clagett, Marshall (1989), Calendars, Clocks, and Astronomy (vol. 2), Ancient Egyptian Science – A Source Book (3 vols.), [Memoirs of the American Philosophical Society], American Philosophical Society, Philadelphia, 1989.
Condos, Theony (1997), Star Myths of the Greeks and Romans: A Sourcebook, Phanes Press, Grand Rapids, Michigan, 1997.
Young, Charles Augustus (1888), A Text-Book of General Astronomy for Colleges and Scientific Schools, Ginn & Company (publisher), Boston, 1888.
Schaaf, Fred (2007), The 50 Best Sights in Astronomy and How to See Them – Observing Eclipses, Bright Comets, Meteor Showers, and Other Celestial Wonders, John Wiley & Sons, Inc.; 2007.
Olcott, William Tyler (1911), Star Lore of All Ages, G. P. Putnam's Sons, New York, 1911 | Constellation family | Astronomy | 1,702 |
14,526,992 | https://en.wikipedia.org/wiki/Adenosine%203%27%2C5%27-bisphosphate | Adenosine 3',5'-bisphosphate is a form of an adenosine nucleotide with two phosphate groups attached to different carbons in the ribose ring. This is distinct from adenosine diphosphate, where the two phosphate groups are attached in a chain to the 5' carbon atom in the ring.
Adenosine 3',5'-bisphosphate is produced as a product of sulfotransferase enzymes from the donation of a sulfate group from the coenzyme 3'-phosphoadenosine-5'-phosphosulfate. This product is then hydrolysed by 3'(2'),5'-bisphosphate nucleotidase to give adenosine monophosphate, which can then be recycled into adenosine triphosphate.
See also
Adenine
Sulfur metabolism
Acetyl-CoA
References
Nucleotides
Sulfur metabolism | Adenosine 3',5'-bisphosphate | Chemistry | 199 |
67,720,453 | https://en.wikipedia.org/wiki/Photuris%20bethaniensis | Photuris bethaniensis, also known as the Bethany Beach firefly, is a species of firefly in the genus Photuris. It is found in interdunal swale habitats along a 25 kilometre stretch of shoreline in Sussex County, Delaware. It is extremely rare and in decline. The main threats to this species include habitat loss due to coastal development, sea level rise, light pollution, and the lowering of groundwater aquifers. This species has an estimated extent of occurrence of 33 km2, and the entire population occurs within one location, as the main threat (sea level rise) will probably impact all sites within the current known distribution by the end of the century. Various historical collection sites no longer contain this species, and the occurrence locality thought to hold the largest number of individuals, has recently been lost to a housing development. Therefore, continuing decline in the area of occupancy has been observed; continuing decline in the extent of occurrence is projected, as all remaining occurrences contain few individuals and face myriad threats; continuing decline in the area, extent, and quality of habitat has been observed; continuing decline in locations is projected as the sea level rises; and, as a result of the recent loss of a site, a decline in the number of mature individuals is inferred. As such, this species is listed as Critically Endangered under criteria B1ab (i, ii, iii, iv, v).
References
Lampyridae
Bioluminescent insects
Night | Photuris bethaniensis | Biology | 302 |
15,947,535 | https://en.wikipedia.org/wiki/6-APA | 6-APA ((+)-6-aminopenicillanic acid) is a chemical compound used as an intermediate in the synthesis of β–lactam antibiotics. The major commercial source of 6-APA is still natural penicillin G. The semi-synthetic penicillins derived from 6-APA are also referred to as penicillins and are considered part of the penicillin family of antibiotics.
In 1958, Beecham scientists from Brockham Park, Surrey, found a way to obtain 6-APA from penicillin. Other β-lactam antibiotics could then be synthesized by attaching various side-chains to the nucleus. The reason why this was achieved so many years after the commercial development of penicillin by Howard Florey and Ernst Chain lies in the fact that penicillin itself is very susceptible to hydrolysis, so direct replacement of the side-chain was not a practical route to other β-lactam antibiotics.
References
Beta-lactam antibiotics
Sulfur heterocycles
Amines | 6-APA | Chemistry | 212 |
439,282 | https://en.wikipedia.org/wiki/Gully | A gully is a landform created by running water, mass movement, or commonly a combination of both eroding sharply into soil or other relatively erodible material, typically on a hillside or in river floodplains or terraces.
Gullies resemble large ditches or small valleys, but are metres to tens of metres in depth and width, are characterized by a distinct 'headscarp' or 'headwall' and progress by headward (i.e., upstream) erosion. Gullies are commonly related to intermittent or ephemeral water flow, usually associated with localised intense or protracted rainfall events or snowmelt.
Gullies can be formed and accelerated by cultivation practices on hillslopes (often gentle gradients) in farmland, and they can develop rapidly in rangelands from existing natural erosion forms subject to vegetative cover removal and livestock activity.
Etymology
The earliest known usage of the term is from 1657. It originates from the French word goulet, a diminutive form of goule which means throat. The term may be connected to the name of a type of knife used at the time, a gully-knife.
Water erosion is more likely to occur on steep terrain because of erosive pressures, splashes, scour, and transport. Slope characteristics, such as slope length and steepness, affect soil erosion. Relief and soil erosion are positively correlated in southeast Nigeria. There are three types of topography: mountains, cuesta landscapes, and plains and lowlands. While highlands with stable lithology resist gullying yet allow for vigorous runoff, uplands with friable sandstones are more prone to erosion.
Formation and consequences
Gully erosion can progress through a variety and combination of processes. The erosion processes include incision and bank erosion by water flow, mass movement of saturated or unsaturated bank or wall material, groundwater seepage - sapping the overlying material, collapse of soil pipes or tunnels in dispersive soils, or a combination of these to a greater or lesser degree. Hillsides are more prone to gully erosion when they are cleared of vegetation cover through deforestation, over-grazing, or other means. Gullies in rangelands can be initiated by concentrated water flow down tracks worn by livestock or vehicle tracks. The flowing water easily carries the eroded soil after being dislodged from the ground, typically when rainfall falls during short, intense storms such as thunderstorms.
A gully may grow in length through headward (i.e., upstream) erosion at a knick point. This erosion can result from interflow and soil piping (internal erosion) as well as surface runoff. Gully erosion may also advance laterally through similar methods, including mass movement, acting on the gully walls (banks), and the development of 'branches' (a type of tributary).
Gullies reduce the productivity of farmlands where they incise into the land and produce sediment that may choke downstream waterbodies and reduce water quality within the drainage system and lake or coastal system. Because of this, much effort is invested into the study of gullies within the scope of geomorphology and soil science, in the prevention of gully erosion, and the in remediation and rehabilitation of gullied landscapes. The total soil loss from gully formation and subsequent downstream river sedimentation can be substantial, especially from unstable soil materials prone to dispersion.
When water is directed over exposed ground, gully erosion removes soil near drainage lines. This may result in divided properties, loss of arable land, diminished amenities, and decreased property values. Additionally, it can lead to sedimentation, discoloration of the water supply, and creating a haven for rodents.
Water rushing over exposed, naked soil creates gullies and ridges that erode rock and soil. When water rushes across exposed terrain, it erodes or pushes dirt away, creating rills. Gravity causes rift erosion on a downward slope, with steeper slopes generating greater water flow. Sandier terrains are more commonly affected by rills most prevalent during the rainier months. Gullies develop when a rill is neglected for an extended time, thickening and expanding as soil erosion persists.
The factors influencing gully erosion were investigated in Zaria, Kaduna State, Nigeria, utilizing SRTM data, soil samples, rainfall data, and satellite imagery. The findings indicated that the factors with the biggest effects on gully erosion were slope (56%), rainfall (26%), land cover (12%), and soil (6%). The investigation concluded that each particular component significantly influenced soil loss.
Effects of gullies
The loss of fertile farmland due to gully erosion is a severe environmental problem that lowers crop quality and may cause famine and food shortages. It also causes the soil to lose organic content, which has an impact on plant viability. As items washed from fields end up in rivers, streams, or vacant land, erosion also contaminates the ecosystem. Because of increased population expansion and increasing land demand, erosion also threatens the natural ecosystem, encroaching on natural forests. Important assets including homes, power poles, and water pipelines may also be destroyed.
Prevention of gullies
Effective land management techniques can prevent gullies. These techniques include keeping vegetation along drainage lines, using more water, classifying drainage lines as distinct land classes, stabilizing erosion, preventing vermin, distributing runoff evenly, keeping soil organic matter levels high, and avoiding over-cultivation. These tactics guarantee uniform rates of penetration and robust plant coverage.
One serious environmental problem endangering sustainable development is gully erosion. Gullying prevention and control methods are dispersed and lacking, and they have low success and efficacy rates. This review attempts to make a valuable contribution to effective gully prevention and management techniques by combining information from previous research. It is possible to stop the creation of gullies by changing how land is used, conserving water and soil, or implementing specific actions in areas with concentrated flow. Plant leftovers and other vegetation barriers can prevent erosion, although their usefulness is limited. The biophysical environment, terrain, climate, and geomorphology are examples of external elements that affect gully prevention and control.
Stabilising gullies
Stabilizing gullies entails altering water flow to lessen scouring, sediment buildup, and revegetation. Water can be securely moved from the natural level to the gully floor using a variety of structures, including drop structures, pipe structures, grass chutes, and rock chutes. Structural modifications can be required along steep gully floors. Vegetation can reestablish itself thanks to sediments deposited over flatter gradients. Until the restoration is finished, damaged areas should be walled off.
Gully remediation in Eastern Nigeria
Eastern Nigeria's people and ecology are seriously threatened by gully erosion. A research project focused on 370 families and nine risk regions evaluated the region's gully erosion issues. The greatest perceived problem, according to the results, was biodiversity loss. In contrast, damage to properties, roads, and walkways was ranked as the least important issue. This implies a notable variation in the average evaluations across impacted individuals, underscoring the necessity for long-term repair approaches. Reducing soil loss, raising public knowledge of environmental issues, passing environmental legislation, and giving residents funds to strengthen their coping mechanisms are all advised by the study.
In Agulu-Nanka, Southeast Nigeria, a study examined the geoenvironmental causes driving gully erosion. It focuses on catchment management for gully erosion and geotechnical analysis. Through fieldwork, data was gathered utilizing GIS and GPS methods. According to the study, gully erosion occurs throughout, with Nanka/Oko having the highest concentration. The gully characteristic map shows variations in length and depth, emphasizing the necessity of considering gully vulnerability and giving erosion hazards immediate attention.
Artificial gullies
Gullies can be formed or enlarged by several human activities.
Artificial gullies are formed during hydraulic mining when jets or streams of water are projected onto soft alluvial deposits to extract gold or tin ore. The remains of such mining methods are very visible landform features in old goldfields such as in California and northern Spain. The badlands at Las Médulas, for example, were created during the Roman period by hushing, or hydraulic mining, of the gold-rich alluvium with water supplied by numerous aqueducts tapping nearby rivers. Each aqueduct produced large gullies below by erosion of the soft deposits. The effluvium was carefully washed with smaller streams of water to extract the nuggets and gold dust.
Termination of gullies
Gully initiation results from localized erosion by surface runoff, often focusing on areas where forest cover has been removed for agricultural purposes, uneven compaction of surface soils by foot and wheeled traffic, and poorly designed road culverts and gutters. Termination of gully processes requires water-resource management, soil conservation, and community migration. Gully erosion is localized in the Coastal Plain Sands, Nanka Sands, and Nsukka Sandstone of the Anambra-Imo basin region. The most affected deposits are unconsolidated or poorly consolidated and have short dispersion times. Public education is essential for a sustainable termination strategy, and collaboration between the government, donors, the private sector, and rural people is crucial.
On Mars
Gullies are widespread at mid-to-high latitudes on the surface of Mars and are some of the youngest features observed on that planet, probably forming within the last few hundred thousand years. There, they are one of the best lines of evidence for the presence of liquid water on Mars in the recent geological past, probably resulting from the slight melting of snowpacks on the surface or ice in the shallow subsurface on the warmest days of the Martian year. Flow as springs from deeper-seated liquid water aquifers in the deeper subsurface is also a possible explanation for the formation of some Martian gullies.
See also
Couloir – a narrow gully with a steep gradient in a mountainous terrain
Gill (ravine) – gully in Scotland or Northern England in rock
Rill – a shallow channel cut into soil by erosion from flowing water
Terrace Crossing – a geographical zone between the sedimentation (downstream) part and the erosion (upstream) part of a river
References
Oxford English Dictionary
External links
Canyons and gorges
Environmental soil science
Erosion landforms
Fluvial landforms
Slope landforms
Soil erosion
Soil landforms
Valleys | Gully | Environmental_science | 2,168 |
5,005,903 | https://en.wikipedia.org/wiki/%27t%20Hooft%20symbol | The t Hooft symbol is a collection of numbers which allows one to express the generators of the SU(2) Lie algebra in terms of the generators of Lorentz algebra. The symbol is a blend between the Kronecker delta and the Levi-Civita symbol. It was introduced by Gerard 't Hooft. It is used in the construction of the BPST instanton.
Definition
$\eta^a_{\mu\nu}$ is the 't Hooft symbol:
$$\eta^a_{\mu\nu} = \begin{cases} \epsilon^{a\mu\nu}, & \mu,\nu \in \{1,2,3\} \\ \delta^{a\mu}, & \nu = 4 \\ -\delta^{a\nu}, & \mu = 4 \\ 0, & \mu = \nu = 4 \end{cases}$$
where $a = 1, 2, 3$ labels the SU(2) generators, $\mu, \nu = 1, \dots, 4$, the $\delta$ are instances of the Kronecker delta, and $\epsilon$ is the Levi–Civita symbol.
In other words, they are defined by
$$\eta^a_{\mu\nu} = \epsilon^{a\mu\nu 4} + \delta^{a\mu}\delta^{\nu 4} - \delta^{a\nu}\delta^{\mu 4}$$
$$\bar\eta^a_{\mu\nu} = \epsilon^{a\mu\nu 4} - \delta^{a\mu}\delta^{\nu 4} + \delta^{a\nu}\delta^{\mu 4}$$
where the latter are the anti-self-dual 't Hooft symbols.
Matrix Form
In matrix form, the 't Hooft symbols are
$$\eta^1_{\mu\nu} = \begin{pmatrix} 0&0&0&1\\ 0&0&1&0\\ 0&-1&0&0\\ -1&0&0&0 \end{pmatrix},\quad \eta^2_{\mu\nu} = \begin{pmatrix} 0&0&-1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&-1&0&0 \end{pmatrix},\quad \eta^3_{\mu\nu} = \begin{pmatrix} 0&1&0&0\\ -1&0&0&0\\ 0&0&0&1\\ 0&0&-1&0 \end{pmatrix}$$
and their anti-self-duals are the following:
$$\bar\eta^1_{\mu\nu} = \begin{pmatrix} 0&0&0&-1\\ 0&0&1&0\\ 0&-1&0&0\\ 1&0&0&0 \end{pmatrix},\quad \bar\eta^2_{\mu\nu} = \begin{pmatrix} 0&0&-1&0\\ 0&0&0&-1\\ 1&0&0&0\\ 0&1&0&0 \end{pmatrix},\quad \bar\eta^3_{\mu\nu} = \begin{pmatrix} 0&1&0&0\\ -1&0&0&0\\ 0&0&0&-1\\ 0&0&1&0 \end{pmatrix}$$
Properties
They satisfy the self-duality and the anti-self-duality properties:
$$\eta^a_{\mu\nu} = \tfrac{1}{2}\epsilon_{\mu\nu\rho\sigma}\,\eta^a_{\rho\sigma}\,, \qquad \bar\eta^a_{\mu\nu} = -\tfrac{1}{2}\epsilon_{\mu\nu\rho\sigma}\,\bar\eta^a_{\rho\sigma}$$
Some other properties are
$$\epsilon^{abc}\,\eta^b_{\mu\nu}\,\eta^c_{\rho\sigma} = \delta_{\mu\rho}\,\eta^a_{\nu\sigma} - \delta_{\mu\sigma}\,\eta^a_{\nu\rho} + \delta_{\nu\sigma}\,\eta^a_{\mu\rho} - \delta_{\nu\rho}\,\eta^a_{\mu\sigma}$$
$$\eta^a_{\mu\nu}\,\eta^a_{\rho\sigma} = \delta_{\mu\rho}\,\delta_{\nu\sigma} - \delta_{\mu\sigma}\,\delta_{\nu\rho} + \epsilon_{\mu\nu\rho\sigma}$$
$$\eta^a_{\mu\nu}\,\eta^b_{\mu\rho} = \delta^{ab}\,\delta_{\nu\rho} + \epsilon^{abc}\,\eta^c_{\nu\rho}$$
$$\eta^a_{\mu\nu}\,\eta^b_{\mu\nu} = 4\,\delta^{ab}\,, \qquad \eta^a_{\mu\nu}\,\eta^a_{\mu\nu} = 12$$
The same holds for $\bar\eta^a_{\mu\nu}$ except for
$$\bar\eta^a_{\mu\nu}\,\bar\eta^a_{\rho\sigma} = \delta_{\mu\rho}\,\delta_{\nu\sigma} - \delta_{\mu\sigma}\,\delta_{\nu\rho} - \epsilon_{\mu\nu\rho\sigma}$$
and
$$\epsilon_{\mu\nu\rho\theta}\,\bar\eta^a_{\sigma\theta} = -\left(\delta_{\sigma\mu}\,\bar\eta^a_{\nu\rho} + \delta_{\sigma\rho}\,\bar\eta^a_{\mu\nu} + \delta_{\sigma\nu}\,\bar\eta^a_{\rho\mu}\right)$$
Obviously $\eta^a_{\mu\nu}\,\bar\eta^b_{\mu\nu} = 0$ due to different duality properties.
Many properties of these are tabulated in the appendix of 't Hooft's paper and also in the article by Belitsky et al.
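The definition and several of the identities above can be verified numerically. The following Python sketch builds the symbols from the delta/epsilon definition (array indices 0-3 stand for tensor indices 1-4) and checks the duality and contraction properties:

```python
import numpy as np
from itertools import permutations

# 4D Levi-Civita symbol; array indices 0..3 stand for tensor indices 1..4.
eps4 = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    inversions = sum(1 for i in range(4) for j in range(i + 1, 4)
                     if perm[i] > perm[j])
    eps4[perm] = (-1) ** inversions

delta = np.eye(4)
eta = np.zeros((3, 4, 4))      # self-dual 't Hooft symbols
etabar = np.zeros((3, 4, 4))   # anti-self-dual 't Hooft symbols
for a in range(3):
    for mu in range(4):
        for nu in range(4):
            eps_term = eps4[a, mu, nu, 3]
            delta_term = delta[a, mu] * delta[nu, 3] - delta[a, nu] * delta[mu, 3]
            eta[a, mu, nu] = eps_term + delta_term
            etabar[a, mu, nu] = eps_term - delta_term

# dual(t)[a, mu, nu] = (1/2) eps_{mu nu rho sigma} t[a, rho, sigma]
dual = lambda t: 0.5 * np.einsum('mnrs,ars->amn', eps4, t)

assert np.allclose(dual(eta), eta)            # eta is self-dual
assert np.allclose(dual(etabar), -etabar)     # etabar is anti-self-dual
assert np.allclose(np.einsum('amn,bmn->ab', eta, eta), 4 * np.eye(3))
assert np.allclose(np.einsum('amn,bmn->ab', eta, etabar), 0)
print("'t Hooft symbol identities verified")
```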
See also
Instanton
't Hooft anomaly
't Hooft–Polyakov monopole
't Hooft loop
References
Gauge theories
Mathematical symbols | 't Hooft symbol | Mathematics | 255 |
59,605,831 | https://en.wikipedia.org/wiki/Pyramid%20wavefront%20sensor | A pyramid wavefront sensor is a type of a wavefront sensor. It measures the optical aberrations of an optical wavefront. This wavefront sensor uses a pyramidal prism with a large apex angle to split the beam into multiple parts at the geometric focus of a lens. A four-faceted prism, with its tip centered at the peak of the point spread function, will generate four identical pupil images in the absence of optical aberrations. In the presence of optical aberrations, the intensity distribution among the pupils will change. The local wavefront gradients can be obtained by recording the distribution of intensity in the pupil images. The wavefront aberrations can be evaluated from the estimated wavefront gradients.
It has potential applications in astronomy and ophthalmology.
Modulation
The prism is often modulated (mechanically moved in a circle/square) for averaging purposes and to make sure that the ray spends an equal fraction of the total time on every face of the pyramidal prism. This makes the wavefront sensor slightly inconvenient to use due to the need for mechanically moving parts – either the prism or the beam is modulated. Using a light diffusing plate, mechanically moving parts can be eliminated. Alternatively, it has been shown that the need for mechanically moving parts can be overcome in a digital pyramid wavefront sensor with the spatial light modulator.
References
Observational astronomy
Optical instruments
Sensors | Pyramid wavefront sensor | Astronomy,Technology,Engineering | 287 |
39,586,666 | https://en.wikipedia.org/wiki/Graphite-like%20zinc%20oxide%20nanostructure | Most of the synthesized Zinc oxide (ZnO) nanostructures in different geometric configurations such as nanowires, nanorods, nanobelts and nanosheets are usually in the wurtzite crystal structure. However, it was found from density functional theory calculations that for ultra-thin films of ZnO, the graphite-like structure was energetically more favourable as compared to the wurtzite structure. The stability of this phase transformation of wurtzite lattice to graphite-like structure of the ZnO film is only limited to the thickness of about several Zn-O layers and was subsequently verified by experiment. Similar phase transition was also observed in ZnO nanowire when it was subjected to uniaxial tensile loading. However, with the use of the first-principles all electron full-potential method, it was observed that the wurtzite to graphite-like phase transformation for ultra-thin ZnO films will not occur in the presence of a significant amount of oxygen vacancies (Vo) at the Zn-terminated (0001) surface of the thin film. The absence of the structural phase transformation was explained in terms of the Coulomb attraction at the surfaces. The graphitic ZnO thin films are structurally similar to the multilayer of graphite and are expected to have interesting mechanical and electronic properties for potential nanoscale applications. In addition, density functional theory calculations and experimental observations also indicate that the concentration of the Vo is the highest near the surfaces as compared to the inner parts of the nanostructures. This is due to the lower Vo defect formation energies in the interior of the nanostructures as compared to their surfaces.
References
II-VI semiconductors
Phase transitions
Nanomaterials
Crystallographic defects | Graphite-like zinc oxide nanostructure | Physics,Chemistry,Materials_science,Engineering | 368 |
55,647,358 | https://en.wikipedia.org/wiki/UT-88 | The UT-88 () is a DIY educational computer designed in the Soviet Union. Its description was published in YT dlya umelykh ruk (Young technical designer for skilled hands, ) — a supplement to Yunij Technik (Young technical designer, ) magazine in 1989. It was intended for building by school children of extracurricular hobby groups at Pioneers Palaces.
Description
At the time of publication, there were several DIY computers: Micro-80, Radio-86RK, and Specialist. The main feature of UT-88 was the possibility to build a computer in stages while getting a workable construction at each step. This approach made it easier to build by less skilled hobbyists.
The minimal configuration of the computer includes a power supply, CPU, 1 KiB of ROM and 1 KiB of RAM, 6 seven-segment displays, a 17-key keyboard, and a tape interface. This computer can be used as a scientific calculator.
Full configuration adds a display module with a TV interface, a full keyboard, and a 64 KiB dynamic RAM module.
References
Soviet computer systems | UT-88 | Technology | 226 |
32,093,342 | https://en.wikipedia.org/wiki/HAT-P-32b | HAT-P-32b is a planet orbiting the G-type or F-type star HAT-P-32, which is approximately 950 light years away from Earth. HAT-P-32b was first recognized as a possible planet by the planet-searching HATNet Project in 2004, although difficulties in measuring its radial velocity prevented astronomers from verifying the planet until after three years of observation. The Blendanal program helped to rule out most of the alternatives that could explain what HAT-P-32b was, leading astronomers to determine that HAT-P-32b was most likely a planet. The discovery of HAT-P-32b and of HAT-P-33b was submitted to a journal on 6 June 2011.
The planet is considered a hot Jupiter, and although it is slightly less massive than Jupiter, it is bloated to nearly twice Jupiter's size. At the time of its discovery, HAT-P-32b had one of the largest radii known amongst extrasolar planets. This phenomenon, which has also been observed in planets like WASP-17b and HAT-P-33b, has shown that something more than temperature is influencing why these planets become so large.
Discovery
It had been suggested that a planet was orbiting HAT-P-32 as early as 2004; these observations were collected by the six-telescope HATNet Project, an organization in search of transiting planets, or planets that cross in front of their host stars as seen from Earth. However, attempts to confirm the planetary candidate were extremely difficult because of a high level of jitter (a random, shaky deviation in the measurements of HAT-P-32's radial velocity) present in the star's observations. High-level jitter prevented the most common technique, that of bisector analysis, from revealing the star's radial velocity with enough certainty to confirm the planet's existence.
The spectrum of HAT-P-32 was collected using the digital speedometer on Arizona's Fred Lawrence Whipple Observatory (FLWO). Analysis of the data found that HAT-P-32 was a single, moderately rotating dwarf star. Some of its parameters were also derived, including its effective temperature and surface gravity.
Between August 2007 and December 2010, twenty-eight spectra were collected using the High Resolution Echelle Spectrometer (HIRES) at the W.M. Keck Observatory in Hawaii. Twenty-five of these spectra were used to deduce HAT-P-32's radial velocity. To compensate for jitter, a greater number of spectra than usual for planetary candidates was collected. From this, it was concluded that stellar activity (and not the presence of yet-undiscovered planets) was the cause of the jitter.
Because astronomers concluded that the use of radial velocity could not, alone, establish the existence of planet HAT-P-32b, the KeplerCam CCD instrument on FLWO's 1.2m telescope was used to take photometric observations of HAT-P-32. The data collected using the KeplerCam CCD helped astronomers construct HAT-P-32's light curve. The light curve displayed a slight dimming at a point where HAT-P-32b was believed to transit its star.
The astronomers utilized Blendanal, a program used to eliminate the possibilities of false positives. This process serves a similar purpose to the Blender technique, which was used to verify some planets discovered by the Kepler spacecraft. In doing so, HAT-P-32's planet-like signature was found to not be caused by either a hierarchical triple star system or by a mixture of light between a bright single star and that of a binary star in the background. Although the possibility that HAT-P-32 is actually a binary star with a dim secondary companion nearly indistinguishable from the primary companion could not be ruled out, HAT-P-32b was confirmed as a planet based on the Blendanal analysis.
Because of the high jitter of the star, the best way to collect more data on HAT-P-32b would be to observe an occultation of HAT-P-32b behind its star using the Spitzer Space Telescope.
HAT-P-32b's discovery was reported with that of HAT-P-33b in the Astrophysical Journal.
Host star
HAT-P-32, or GSC 3281–00800, is a double star; the primary is a G-type or F-type dwarf star, and the secondary is an M-type dwarf star. The system is located approximately 950 light-years from Earth. With 1.176 solar masses and 1.387 solar radii, HAT-P-32A is both larger and more massive than the Sun. HAT-P-32A's effective temperature is 6,001 K, making it slightly hotter than the Sun, although it is younger, with an estimated age of 3.8 billion years, meaning it began nuclear fusion in its core not long after the start of Earth's Archean eon, roughly 4 billion years ago. HAT-P-32A is metal-poor; its measured metallicity is [Fe/H] = -0.16, which means that it has 69% of the iron content of the Sun. The star's surface gravity (log g) is determined to be 4.22, while its luminosity suggests that it emits 2.43 times the amount of energy that the Sun emits. These parameters are adopted under the assumption that the planet HAT-P-32b has an eccentric orbit.
HAT-P-32 has an apparent magnitude of 11.197, which makes it invisible to the naked eye. A search for a binary companion star using adaptive optics at the MMT Observatory discovered a companion at a distance of 2.9 arcseconds that is 3.4 magnitudes dimmer than the primary star.
A very high level of jitter has been detected in the star's spectrum. There is a possibility that the jitter could be induced by the dimmer secondary companion. HAT-P-32's dimmer constituent probably has a mass under half of the Sun's mass and a correspondingly cooler surface temperature.
Other planets with orbital periods that are smaller than that of HAT-P-32b's orbit may be present in this system. However, when the discovery of the planet was published, not enough radial velocity measurements had been collected to determine if this was the case.
Characteristics
HAT-P-32b is a hot Jupiter with 0.941 Jupiter masses and 2.037 Jupiter radii; in other words, it is slightly less massive than Jupiter, although it is nearly twice Jupiter's size. The planet's average distance from its host star is 0.0344 AU, or approximately 3% of the mean distance between the Earth and the Sun. It completes an orbit every 2.150009 days (51.6 hours). HAT-P-32b has an equilibrium temperature of 1,888 K, roughly fifteen times Jupiter's equilibrium temperature. Nonetheless, the limb temperature measured in 2020 was much cooler, at 1,248 K.
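The quoted orbital elements are mutually consistent, as a quick check with Kepler's third law in solar units shows (a sketch using only values from the text; the planet's own mass is neglected):

import math

# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[M_sun].
a = 0.0344          # semi-major axis in AU (from the text)
m_star = 1.176      # stellar mass in solar masses (from the text)

period_days = math.sqrt(a**3 / m_star) * 365.25
print(f"{period_days:.3f} days")   # -> 2.149, matching the quoted 2.150009 d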
Many of the described characteristics are derived on the assumption that HAT-P-32b has an orbit that is elliptical (eccentric). The best fit for HAT-P-32b's orbital eccentricity is 0.163, denoting a slightly elliptical orbit, although the jitter effect observed in its host star has made the planet's eccentricity difficult to accurately find. The discoverers have also derived the planet's characteristics assuming that the planet has a circular orbit, although they have given preference to the elliptical model.
Because HAT-P-32b's orbital inclination is 88.7° as seen from Earth, the planet appears almost edge-on. It has been found to transit its host star.
A 2012 study utilizing the Rossiter–McLaughlin effect determined that the planet is on a nearly polar orbit relative to the rotation of the star, with a misalignment of 85°.
HAT-P-32b had one of the largest radii known amongst planets at the time of its discovery. As with the similarly inflated planets HAT-P-33b and WASP-17b, the mechanism behind this inflation is unknown; it is not solely related to temperature, although temperature is known to have an effect. This is especially clear in comparison with WASP-18b, a planet that is hotter than the aforementioned HAT and WASP planets but whose radius is far smaller than those of its counterparts.
The planetary spectrum shows evidence of Roche lobe overflow and rapid mass loss of 13 million tons per second.
It was also found that the planet's radius, observed during planetary transits, varies with wavelength. Different radii at different wavelengths could arise from an atmosphere in which a Rayleigh-scattering haze is combined with a grey cloud deck. A thick cloud deck (with clouds up to pressure levels of 0.4–33 kPa) and a haze above it were indeed confirmed in 2020, along with the detection of water in the atmosphere of HAT-P-32b.
References
Andromeda (constellation)
Exoplanets discovered in 2011
Transiting exoplanets
Hot Jupiters
Giant planets
Exoplanets discovered by HATNet | HAT-P-32b | Astronomy | 1,918 |
1,543,903 | https://en.wikipedia.org/wiki/Ribbon%20development | Ribbon development refers to the building of houses along the routes of communications radiating from a human settlement. The resulting linear settlements are clearly visible on land use maps and aerial photographs, giving cities and the countryside a particular character. Such development generated great concern in the United Kingdom during the 1920s and the 1930s as well as in numerous other countries during the decades since.
Normally the first ribbons are focused on roads. Following the Industrial Revolution, ribbon development became prevalent along railway lines, predominantly in Russia, the United Kingdom, and the United States. However, the investment required to build railway stations, the ensuing attractiveness of easy rail access, and the need for accompanying roads often led to new small settlements outside of the center city. Ribbon developments yielded attractive home locations on isolated roads, as increasing motor car ownership meant that houses could be sold easily even if they were remote from workplaces and urban centres. Developers were pleased not to have to construct additional roads, thereby saving money and plot space. Ribbon developments also filled spaces in the interstices between urban areas, and as a result appealed to potential buyers needing access to one or more of these locations.
The extent of this development practice around roads led to several problems becoming more intense. Ribbon developments were ultimately recognized as an inefficient use of resources, requiring bypass roads to be built, and often served as a precursor to untrammelled urban sprawl. Thus a key aim for the United Kingdom's post-war planning system was to implement a presumption and convention that rendered new ribbon developments undesirable. Urban sprawl/suburbanization of large areas led to the introduction of green belt policies, new towns, planned suburbs and garden cities.
History
Following the Industrial Revolution, ribbon development became prevalent along railway lines, predominantly in Russia, the United Kingdom, and the United States. The deliberate promotion of Metro-land along London's Metropolitan Railway serves as a strong example of this form of development. Similar examples can be found on Long Island (where Frederick W Dunton bought much real estate to encourage New Yorkers to settle along the Long Island Rail Road lines), in Boston, and across the American Midwest.
Ribbon development is not restricted to construction along road or rail corridors, as it can also occur along ridge lines, canals and coastlines, the last of which occurs especially as people seeking seachange lifestyles build their houses for an optimal view.
The resulting towns and cities are often difficult to service efficiently due to their remoteness and lack of density. Often, the first problem noticed by residents is increased traffic congestion, as an increased number of people move along the narrow urban corridor while development continues at the lengthening end of the corridor. Urban consolidation and smart growth are often proposed as solutions that encourage a more compact urban form.
Ribbon development can also be compared with a linear village – a village that grows linearly along a transportation route as part of a city's expansion into the frontier. Ribbon developments also lead to a dispersion of functions, as the need for pockets of dense, mutually dependent development becomes less important.
Ribbon development has long been viewed as a special problem in the Republic of Ireland, where "one-off houses" proliferate on rural roads. This causes difficulties in the efficient supply of water, sewerage, broadband, electricity, telephones and public transport. In 1998, Frank McDonald contrasted development in the Republic with that in Northern Ireland: "Enniskillen [in Northern Ireland] is well defined with clear boundaries to the town and well-laid-out shopping streets. Letterkenny, [in the Republic] by contrast, appears as just one long street with bungalow development trailing off over all the surrounding hills." The houses (often disparaged as "McMansions") are also criticised for spoiling countryside scenery: Monaghan County Council in 2013 declared that "The Council will resist development that would create or extend ribbon development." Tipperary County Council and many other councils have adopted similar policies.
Recently, in places such as Flanders, Belgium, regional zoning policy has resulted in ribbon development patterns. Various spatial policies embedded in these plans help predict where ribbon developments may occur and at what rate.
Criticisms
Increased congestion
Due to the main road being flanked by homes or commercial establishments, stoppages in traffic may frequently occur as a result of deliveries or vehicles entering or exiting driveways. This can pose danger for other vehicles that may not see entering traffic, especially if the road is bordered by garages. Residents may also choose to walk alongside the road, an activity made more dangerous by fast-moving traffic.
Utility access
However simple linear construction emanating from a city may be, the length of a ribbon corridor can pose financial concerns for utility companies serving its buildings. Utility grids favour dense development, so buildings far along the corridor risk poor access to service.
Disruptions during construction
Construction of a new home or building within a ribbon development may severely disrupt the flow of vehicles along the road because there are no feeder streets for construction vehicles to station on. Traffic may be forced into a singular lane or subjected to an alternating pattern.
Obstruction of countryside
Because most ribbon developments exist in rural areas outside of cities, properties can disturb or obstruct the natural landscape along the road. For example, a house may be constructed along an overlook, removing the public's ability to enjoy the landscape in favor of a single property owner.
Municipal boundaries
Elongated ribbon developments also pose challenges for municipal governments as they partition rural areas into townships and school districts. Unlike development concentrated in small towns where schools and other public amenities reside, certain locations within a ribbon development may be difficult for a government to serve and, in turn, cost more in public expenditure.
See also
Green belt
Linear village
One-off housing
Towards an Urban Renaissance
Urban sprawl
References
Urban planning
Urban studies and planning terminology | Ribbon development | Engineering | 1,174 |
7,992,036 | https://en.wikipedia.org/wiki/Integrodifference%20equation | In mathematics, an integrodifference equation is a recurrence relation on a function space, of the following form: $n_{t+1}(x) = \int_{\Omega} k(x, y)\, f(n_t(y))\, dy,$
where $\{n_t\}$ is a sequence in the function space and $\Omega$ is the domain of those functions. In most applications, for any $y \in \Omega$, $k(\cdot, y)$ is a probability density function on $\Omega$. Note that in the definition above, $n_t$ can be vector valued, in which case each element of $n_t$ has a scalar valued integrodifference equation associated with it. Integrodifference equations are widely used in mathematical biology, especially theoretical ecology, to model the dispersal and growth of populations. In this case, $n_t(x)$ is the population size or density at location $x$ at time $t$, $f(n_t(x))$ describes the local population growth at location $x$, and $k(x, y)$ is the probability of moving from point $y$ to point $x$, often referred to as the dispersal kernel. Integrodifference equations are most commonly used to describe univoltine populations, including, but not limited to, many arthropod and annual plant species. However, multivoltine populations can also be modeled with integrodifference equations, as long as the organism has non-overlapping generations. In this case, $t$ is not measured in years, but rather the time increment between broods.
Convolution kernels and invasion speeds
In one spatial dimension, the dispersal kernel often depends only on the distance between the source and the destination, and can be written as $k(x - y)$. In this case, some natural conditions on $f$ and $k$ imply that there is a well-defined spreading speed for waves of invasion generated from compact initial conditions. The wave speed is often calculated by studying the linearized equation
$n_{t+1}(x) = R \int_{-\infty}^{\infty} k(x - y)\, n_t(y)\, dy,$
where $R = f'(0)$. This can be written as the convolution
$n_{t+1} = R\, (k * n_t).$
Using a moment-generating-function transformation
$M(s) = \int_{-\infty}^{\infty} k(x)\, e^{sx}\, dx,$
it has been shown that the critical wave speed is
$c^* = \min_{s > 0} \left[ \frac{1}{s} \ln\big( R\, M(s) \big) \right].$
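As a numerical illustration of the last formula (not from the article): for a Gaussian kernel with standard deviation $\sigma$, the moment-generating function is $M(s) = e^{\sigma^2 s^2 / 2}$, and the minimization has the closed form $c^* = \sigma \sqrt{2 \ln R}$. A Python sketch, with $R$ and $\sigma$ chosen arbitrarily:

import numpy as np
from scipy.optimize import minimize_scalar

R, sigma = 1.5, 1.0   # illustrative growth rate f'(0) and kernel width

def speed(s):
    # (1/s) * ln(R * M(s)) with the Gaussian kernel's MGF M(s) = exp(sigma^2 s^2 / 2)
    return (np.log(R) + 0.5 * (sigma * s) ** 2) / s

res = minimize_scalar(speed, bounds=(1e-6, 10.0), method="bounded")
print(f"numerical c* = {res.fun:.4f}")                        # ~0.9006
print(f"closed form  = {sigma * np.sqrt(2 * np.log(R)):.4f}")  # ~0.9006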
Other types of equations used to model population dynamics through space include reaction–diffusion equations and metapopulation equations. However, diffusion equations do not as easily allow for the inclusion of explicit dispersal patterns and are only biologically accurate for populations with overlapping generations. Metapopulation equations differ from integrodifference equations in that they break the population down into discrete patches rather than a continuous landscape.
References
Mathematical and theoretical biology
Recurrence relations | Integrodifference equation | Mathematics | 444 |
32,182,651 | https://en.wikipedia.org/wiki/Clinical%20biophysics | Clinical biophysics is the branch of medical science that studies the mechanism of action and the effects of non-ionising physical energies utilised for therapeutic purposes.
Physical energy can be applied for diagnostic or therapeutic aims.
The principles on which clinical biophysics is based are the recognizability and the specificity of the applied physical signal:
recognizability: the capacity of the biological target to recognise the presence of the physical energy; this aspect becomes more important as the applied energy decreases.
specificity: the capacity of the physical agent applied to the biological target to elicit a response that depends on its physical characteristics: frequency, wavelength, energy, etc. The effects do not necessarily depend on the quantity of energy applied to the biological target.
Definition
Several papers show that the response of a biological system exposed to non-ionizing physical stimuli is not necessarily dependent on the amount of energy applied.
Specific combinations of amplitude, frequency and waveform may trigger the most intense response, for example cell proliferation or the activation of metabolic pathways.
This has been demonstrated for:
a) mechanical strains directly applied to the cells or tissue;
b) mechanical energy applied by ultrasound;
c) electromagnetic field exposure;
d) electric field exposure.
Several pre-clinical experiences have laid the foundation to identify exposure conditions that may be used in humans to treat diseases or to promote tissue healing.
The identification of the best parameters to apply in any particular circumstance is the current goal of research activities in the field.
Medical applications
Orthopaedics
PEMF (pulsed electromagnetic fields)
LIPUS (low-intensity pulsed ultrasound)
CCEF (capacitively coupled electric fields)
Direct current
Neurology
Plastic surgery
Oncology
References
Biophysics
Medical physics | Clinical biophysics | Physics,Biology | 329 |
1,009,552 | https://en.wikipedia.org/wiki/GNU%20Linear%20Programming%20Kit | The GNU Linear Programming Kit (GLPK) is a software package intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library. The package is part of the GNU Project and is released under the GNU General Public License.
GLPK uses the revised simplex method and the primal-dual interior point method for non-integer problems and the branch-and-bound algorithm together with Gomory's mixed integer cuts for (mixed) integer problems.
History
GLPK was developed by Andrew O. Makhorin (Андрей Олегович Махорин) of the Moscow Aviation Institute. The first public release was in October 2000.
Version 1.1.1 contained a library for a revised primal and dual simplex algorithm.
Version 2.0 introduced an implementation of the primal-dual interior point method.
Version 2.2 added branch and bound solving of mixed integer problems.
Version 2.4 added a first implementation of the GLPK/L modeling language.
Version 4.0 replaced GLPK/L by the GNU MathProg modeling language, which is a subset of the AMPL modeling language.
Interfaces and wrappers
Since version 4.0, GLPK problems can be modeled using GNU MathProg (GMPL), a subset of the AMPL modeling language used only by GLPK. However, GLPK is most commonly called from other programming languages; a small example using a Python wrapper appears after the list below. Wrappers exist for:
Julia and the JuMP modeling package
Java (using OptimJ)
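For illustration, a tiny linear program solved through GLPK from Python; this assumes the third-party PuLP wrapper and the glpsol executable are installed, neither of which is part of GLPK itself:

from pulp import LpMaximize, LpProblem, LpVariable, GLPK_CMD

prob = LpProblem("toy_lp", LpMaximize)
x = LpVariable("x", lowBound=0)
y = LpVariable("y", lowBound=0)

prob += 3 * x + 2 * y        # objective: maximize 3x + 2y
prob += x + y <= 4           # subject to these two constraints
prob += x + 3 * y <= 6

prob.solve(GLPK_CMD(msg=False))   # delegates to GLPK's simplex solver
print(x.value(), y.value())       # optimum at x = 4, y = 0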
References
External links
GLPK official site
GLPK Wikibook
Linear Programming Kit
Mathematical optimization software
Free mathematics software
Free software programmed in C
Mathematics software for Linux | GNU Linear Programming Kit | Mathematics | 393 |
1,390,580 | https://en.wikipedia.org/wiki/Banana%20paper | Banana paper is a type of paper created from banana plant bark or banana peel fibers. Banana paper has a lower density, higher stiffness, higher disposability, higher renewability, and higher tensile strength compared to traditional paper. These qualities are due to the cellular composition of banana fiber, which consists of cellulose, hemicellulose, and lignin.
During the manufacturing process of banana paper, the fibers are ground until they resemble sawdust. Then, the fiber is washed to remove natural resins, producing agricultural fiber; if the natural resins were not washed away, they would compromise the integrity of the paper. The process of pulping produces pulp to be used in the manufacturing of paper. This pulp is used to create post-consumer (processed) fiber. The post-consumer fiber is combined with the agricultural fiber to make banana paper.
Development
The earliest evidence of the use of banana stems as a source of fiber dates back to 13th century Japan. However, its popularity declined with the upsurge of silk and cotton fibers imported from China and India.
Banana paper was first patented in the United States by Charles M. Taylor and Howard Kay Cook, who had learned that cellulose fiber can be easily removed from the waste of the banana plant and that the fiber is well adapted to making durable paper. Taylor and Cook applied for the patent on March 16, 1912. The application was granted on May 2, 1916, and they received a lifetime patent. The patent has since expired.
Properties
Raw banana paper has a coarse surface due to the presence of hemicellulose, lignin, and other waxy components in the fiber. Hemicellulose is located between and within the cellulose fibrils and is incorporated into the cellulose structure. Fiber or pulp with high hemicellulose content has a high maximum tensile strength and a low maximum tear index. These cellulosic components enclose the outside of the cellulose fibers, acting as natural binders. Long wrapped fiber bundles are a key component of banana paper. Length is also a significant fiber property, as longer fibers contain more fiber joints. These fiber joints contribute to a stronger network of fibers. Papers manufactured from long fibers usually have better strength properties than those made from short fibers.
Banana fiber can vary in weight and thickness depending on the specific part of the banana stem used. Sturdy, thick fibers can be taken from the outer sheaths, and softer fibers can be extracted from the inner sheaths.
The properties of banana paper overall include a lower density, higher stiffness, higher disposability, higher renewability, and higher tensile strength compared to traditional paper.
Manufacturing process
The paper can be handmade or produced by machinery. Both the handmade and machine processes have similar steps. First, banana stems are collected as they contain more than 4% fiber which can be used to manufacture banana paper. The fiber from the banana is removed and washed in order to eliminate natural resins that can decrease the strength and durability of the paper. The washed fibers are used to form a stronger fiber (agricultural fiber). Then, the process of pulping makes pulp used in the production of paper. This pulp is used to produce the post-consumer fiber and is mixed with the agricultural fiber. Lastly, the mixed fibers are either molded together by a deckle (a tool used for handmade processes of molding fibers) or a machine.
Environmental impact
After bananas are harvested from plantations, the stems and trunks are usually discarded. However, these parts contain available sources of fibers. If the scrapped stems and trunks are utilized, this can lead to a decrease in synthetic fiber production. Synthetic fiber production requires extra energy, fertilizer, and chemicals. Banana paper does not require any chemicals to be used during manufacturing. Banana paper is also more durable and has a longer lifetime than conventional paper. Therefore, the manufacturing of banana paper does not add to environmental pollution. Banana paper reduces pollution by having lower disposal costs and less agricultural waste enter landfills and rivers. The production of banana paper uses less energy compared to traditional paper production as the traditional paper industry is one of the largest sources of energy consumption. Therefore, banana paper is less impactful on natural resources, such as forests.
Future of industry
The global banana paper market size was approximately $490 million in 2021, and is projected to reach $760 million by 2031, according to Business Research Insights, a global market research firm. The banana paper market is expanding because of a growing number of uses for banana paper such as paper pens, business cards, greeting cards, notebooks, and other stationery items. The market is specifically expanding in Europe, North America, South America, and APAC (Asia-Pacific). The expanding banana paper market is further supported by its low production cost. Factors contributing to the low production cost include relatively inexpensive banana fiber extraction machinery and ease of operation of these machines by unskilled workers.
References
External links
How Banana Paper is made
Paper | Banana paper | Physics | 1,033 |
22,136,046 | https://en.wikipedia.org/wiki/June%202094%20lunar%20eclipse | A total lunar eclipse will occur at the Moon's descending node of orbit on Monday, June 28, 2094, with an umbral magnitude of 1.8249. It will be a central lunar eclipse, in which part of the Moon will pass through the center of the Earth's shadow. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A total lunar eclipse occurs when the Moon's near side entirely passes into the Earth's umbral shadow. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. A total lunar eclipse can last up to nearly two hours, while a total solar eclipse lasts only a few minutes at any given place, because the Moon's shadow is smaller. Because the eclipse occurs about 1.9 days before perigee (on June 30, 2094, at 7:50 UTC), the Moon's apparent diameter will be larger than average.
While the visual effect of a total eclipse is variable, the Moon may be stained a deep orange or red color at maximum eclipse. With a gamma value of only 0.0288 and an umbral eclipse magnitude of 1.8249, this is the greatest eclipse in Lunar Saros 131 as well as the second largest and darkest lunar eclipse of the 21st century.
Visibility
The eclipse will be completely visible over eastern Australia, Antarctica, and the central and eastern Pacific Ocean, seen rising over east Asia and western Australia and setting over North and South America.
Eclipse details
Shown below is a table displaying details about this particular lunar eclipse. It describes various parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. The first and last eclipse in this sequence is separated by one synodic month.
Related eclipses
Eclipses in 2094
A partial lunar eclipse on January 1.
A total solar eclipse on January 16.
A partial solar eclipse on June 13.
A total lunar eclipse on June 28.
A partial solar eclipse on July 12.
A partial solar eclipse on December 7.
A total lunar eclipse on December 21.
Metonic
Preceded by: Lunar eclipse of September 8, 2090
Followed by: Lunar eclipse of April 15, 2098
Tzolkinex
Preceded by: Lunar eclipse of May 17, 2087
Followed by: Lunar eclipse of August 9, 2101
Half-Saros
Preceded by: Solar eclipse of June 22, 2085
Followed by: Solar eclipse of July 4, 2103
Tritos
Preceded by: Lunar eclipse of July 29, 2083
Followed by: Lunar eclipse of May 28, 2105
Lunar Saros 131
Preceded by: Lunar eclipse of June 17, 2076
Followed by: Lunar eclipse of July 9, 2112
Inex
Preceded by: Lunar eclipse of July 17, 2065
Followed by: Lunar eclipse of June 9, 2123
Triad
Preceded by: Lunar eclipse of August 28, 2007
Followed by: Lunar eclipse of April 29, 2181
Lunar eclipses of 2092–2096
This eclipse is a member of a semester series. An eclipse in a semester series of lunar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit.
The penumbral lunar eclipses on February 23, 2092 and August 17, 2092 occur in the previous lunar year eclipse set, and the penumbral lunar eclipses on May 7, 2096 and October 31, 2096 occur in the next lunar year eclipse set.
Saros 131
Half-Saros cycle
A lunar eclipse will be preceded and followed by solar eclipses by 9 years and 5.5 days (a half saros). This lunar eclipse is related to two annular solar eclipses of Solar Saros 138.
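This date arithmetic is easy to check: a half saros is about 3292.66 days, and adding it to the dates above reproduces the eclipses quoted in this article (a sketch; datetime.date arithmetic ignores the fractional day, so results land within a day):

from datetime import date, timedelta

HALF_SAROS = timedelta(days=3292.66)   # about 9 years and 5.5 days

print(date(2085, 6, 22) + HALF_SAROS)  # -> 2094-06-27, this eclipse (within a day)
print(date(2094, 6, 28) + HALF_SAROS)  # -> 2103-07-04, the following solar eclipse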
See also
List of lunar eclipses and List of 21st-century lunar eclipses
Notes
External links
2094 in science
2094-06 | June 2094 lunar eclipse | Astronomy | 892 |
58,797,296 | https://en.wikipedia.org/wiki/Nanocar%20Race | The Nanocar Race is an international scientific competition in which teams use a scanning tunneling microscope to drive a single large molecule across a solid surface, with the aim of covering the greatest distance.
The first race required covering a distance of 100 nanometers and was held for the first time in Toulouse on 28 and 29 April 2017. A second race was held in 2022, with the winners covering several hundred nanometers.
History
The idea for the race was formulated by the scientist Christian Joachim and a co-author in Toulouse, France, in January 2013 in the journal ACS Nano. A call for applications was launched to give the participating teams time to prepare appropriate nanocars. The race was officially announced by the French National Centre for Scientific Research (CNRS) in November 2015 in Toulouse during Futurapolis. On this occasion, five teams presented their prototype projects on November 27, 2015.
The first race of this type in the world, between four vehicles, started on 28 April 2017 at the CEMES-CNRS in Toulouse and lasted 36 hours. The Toulouse organizers also agreed to let two more vehicles compete remotely: their teams drove the microscopes in their own laboratories over the Internet from the CEMES-CNRS race room. This applied to the vehicles from Ohio and Graz–Rice.
Competition
The track
The track of the first competition was a gold surface, equipped with grooves to define race lanes in order to avoid losing vehicles. It was about 100 nanometres long and included two bends. It was located in a small enclosure cooled to -269 °C under ultra-high vacuum (10⁻¹⁰ mbar) and was observed simultaneously by four scanning tunneling microscopes (STMs), miniaturized for this event and operating on the same surface. Each microscope was responsible for driving a single vehicle (a single nanocar).
During this competition, the nanocars had to move as far as possible along the gold track during the 36-hour race. Speeds of 5 nanometers per hour were expected.
Nanocars
Nanocars are a new class of molecular machines that can roll across solid surfaces with structurally defined direction. They are molecules essentially composed of a few tens or hundreds of hydrogen and carbon atoms, measuring one to three nanometers.
The nanocar is propelled step by step by electrical impulses and electron transfer from the tip of the STM. The resulting tunnel current flows through the nanocar between the tip of the microscope and the common metal track. There is no direct mechanical contact with the tip. The nanocar is therefore neither pushed nor deformed by the tip of the microscope during the race. Some of the electrons that pass through the nanocar release energy as small intramolecular vibrations that activate the nanocar's motor.
Editions
2017 Nanocar Race I
Teams
Switzerland: Swiss Nano Dragster, University of Basel
France: Toulouse nanomobile club, Paul Sabatier University
Austria/United States: NanoPrix Team University of Graz / Rice University
Germany: Nano-windmill Company Technical University of Dresden (TU Dresden)
Japan: Nano-Vehicle NIMS-MANA National Institute for Materials Science
United States: Ohio Bobcat Nano-Wagon, Ohio University
Results
The race on the gold surface was won by the Swiss team that crossed the finish line first after covering a distance of 133 nanometers.
On the silver surface, the vehicle of the Austrian–American team from Rice University and the University of Graz set the first speed record, with a peak speed of 95 nanometers per hour, and was ranked equal with the Swiss team that raced on the gold surface, given that motion of the same nanocar is slower on silver surfaces than on gold. This vehicle was remotely controlled from the Toulouse race hall on the University of Graz microscope. Specific properties of its chemical structure, as well as a completely new manipulation technique (without time-consuming imaging steps), rendered this nanocar very fast. These properties even allowed it to cover a distance of more than 1000 nm after completing the official race track.
The American team's vehicle from Ohio University turned back for no apparent reason after 20 nanometers, the German team broke two vehicles without being able to restart, and the Japanese team ended up giving up. The French team lost sight of its vehicle on its area of the surface and was also obliged to abandon, comforting itself with the symbolic prize of "the most elegant car in the competition".
2022 Nanocar Race II
Teams
France–Japan: TOULOUSE–NARA, Toulouse III - Paul Sabatier University. CEMES (CNRS) and Nara Institute of Science and Technology.
United States–Austria: Rice–Graz nanoprix, Rice University and University of Graz
Germany: GAZE, Technische Universitat Dresden
United States: Ohio Bobcat Nanowagon, Ohio University
France: StrasNanocar, University of Strasbourg and Strasbourg Institute of Material Physics and Chemistry (IPCMS)
Spain: SAN SEBASTIAN, Donostia International Physics Center and University of Santiago de Compostela
Japan: NIMS-MANA, National Institute for Materials Science (Tsukuba)
Spain–Sweden: NANOHISPA, IMDEA Nanoscience Institute (University of Madrid) and Linköping University
Results
NANOHISPA and NIMS-MANA were both ranked first, each making about 54 turns and covering 678 nm and 1054 nm, respectively. The former demonstrated a lane change for overtaking, while the latter crossed a trench and came back. StrasNanocar ranked third, covering 476 nm and performing 28 turns.
Scientific interest
To make this kind of race possible, a considerable number of problems had to be solved beforehand, such as the choice of the track and its preparation, the improvement of monitoring and control devices (in particular the sensitivity of current measurements), the evaporation of a large number of very different molecules onto the same surface, and microscope validation.
Among the benefits, the CNRS cites the development of molecular motors and Tech-Atoms, which could in the future make possible the preparation of quantum electronic circuits on the surface of an insulator, atom by atom, with calculating parts measuring less than 1 nm.
References
2017 establishments in France
Molecular machines
Nanotechnology
2017 in science | Nanocar Race | Physics,Chemistry,Materials_science,Technology,Engineering | 1,249 |
391,283 | https://en.wikipedia.org/wiki/Neutron%20emission | Neutron emission is a mode of radioactive decay in which one or more neutrons are ejected from a nucleus. It occurs in the most neutron-rich/proton-deficient nuclides, and also from excited states of other nuclides as in photoneutron emission and beta-delayed neutron emission. As only a neutron is lost by this process the number of protons remains unchanged, and an atom does not become an atom of a different element, but a different isotope of the same element.
Neutrons are also produced in the spontaneous and induced fission of certain heavy nuclides.
Spontaneous neutron emission
As a consequence of the Pauli exclusion principle, nuclei with an excess of protons or neutrons have a higher average energy per nucleon. Nuclei with a sufficient excess of neutrons have a greater energy than the combination of a free neutron and a nucleus with one less neutron, and therefore can decay by neutron emission. Nuclei which can decay by this process are described as lying beyond the neutron drip line.
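Equivalently, such a nuclide has a negative one-neutron separation energy. Stated in standard textbook notation (not specific to this article), with $m(Z, N)$ the mass of the nuclide with $Z$ protons and $N$ neutrons and $m_n$ the neutron mass:

$S_n = \left[\, m(Z, N-1) + m_n - m(Z, N) \,\right] c^2,$

and a nuclide with $S_n < 0$ lies beyond the neutron drip line and can emit a neutron.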
Two examples of isotopes that emit neutrons are beryllium-13 (decaying to beryllium-12) and helium-5 (decaying to helium-4), both with extremely short mean lives.
In tables of nuclear decay modes, neutron emission is commonly denoted by the abbreviation n.
{| class="wikitable" align="left"
|+ Neutron emitters to the left of lower dashed line (see also: Table of nuclides)
|-
|-
|-
|-
|-
|-
|-
|-
|-
|-
|-
|-
|-
|-
|-
|-
|-
|}
Double neutron emission
Some neutron-rich isotopes decay by the emission of two or more neutrons. For example, hydrogen-5 and helium-10 decay by the emission of two neutrons, hydrogen-6 by the emission of 3 or 4 neutrons, and hydrogen-7 by emission of 4 neutrons.
Photoneutron emission
Some nuclides can be induced to eject a neutron by gamma radiation. One such nuclide is 9Be; its photodisintegration is significant in nuclear astrophysics, pertaining to the abundance of beryllium and the consequences of the instability of 8Be. This also makes this isotope useful as a neutron source in nuclear reactors. Another nuclide, 181Ta, is also known to be readily capable of photodisintegration; this process is thought to be responsible for the creation of 180mTa, the only primordial nuclear isomer and the rarest primordial nuclide.
Beta-delayed neutron emission
Neutron emission usually happens from nuclei that are in an excited state, such as the excited 17O* produced from the beta decay of 17N. The neutron emission process itself is controlled by the nuclear force and therefore is extremely fast, sometimes referred to as "nearly instantaneous". This process allows unstable atoms to become more stable. The ejection of the neutron may be as a product of the movement of many nucleons, but it is ultimately mediated by the repulsive action of the nuclear force that exists at extremely short-range distances between nucleons.
Delayed neutrons in reactor control
Most neutron emission outside prompt neutron production associated with fission (either induced or spontaneous) is from neutron-rich isotopes produced as fission products. These neutrons are sometimes emitted with a delay, giving them the term delayed neutrons, but the actual delay in their production is a delay waiting for the beta decay of fission products to produce the excited-state nuclear precursors that immediately undergo prompt neutron emission. Thus, the delay in neutron emission is not from the neutron-production process itself, but rather from its precursor beta decay, which is controlled by the weak force and thus requires a far longer time. The beta decay half-lives of the precursors of delayed neutron-emitter radioisotopes are typically fractions of a second to tens of seconds.
Nevertheless, the delayed neutrons emitted by neutron-rich fission products aid control of nuclear reactors by making reactivity change far more slowly than it would if it were controlled by prompt neutrons alone. About 0.65% of neutrons are released in a nuclear chain reaction in a delayed way due to the mechanism of neutron emission, and it is this fraction of neutrons that allows a nuclear reactor to be controlled on human reaction time-scales, without proceeding to a prompt critical state, and runaway melt down.
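A back-of-the-envelope sketch of this effect: the 0.65% delayed fraction is from the text, while the prompt generation time and precursor lifetime below are typical textbook values, not values from this article:

beta = 0.0065       # delayed-neutron fraction (from the text)
l_prompt = 1e-4     # prompt generation time, seconds (assumed, typical)
tau = 12.5          # mean precursor lifetime, seconds (assumed, typical)

# Average time between neutron generations once the small delayed
# fraction is weighted by its long precursor lifetime:
l_eff = (1 - beta) * l_prompt + beta * (l_prompt + tau)
print(f"{l_eff:.3f} s")   # ~0.081 s, vs 0.0001 s for prompt neutrons alone

The delayed fraction thus lengthens the effective time between generations by roughly three orders of magnitude, which is what puts reactivity changes on human reaction time-scales.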
Neutron emission in fission
Induced fission
Such neutron emission is also called "prompt neutron" production, best known to occur simultaneously with induced nuclear fission. Induced fission happens only when a nucleus is bombarded with neutrons, gamma rays, or other carriers of energy. Many heavy isotopes, most notably californium-252, also emit prompt neutrons among the products of a similar spontaneous radioactive decay process, spontaneous fission.
Spontaneous fission
Spontaneous fission happens when a nucleus splits into two (occasionally three) smaller nuclei and generally one or more neutrons.
See also
Neutron radiation
Neutron source
Proton emission
References
External links
"Why Are Some Atoms Radioactive?" EPA. Environmental Protection Agency, n.d. Web. 31 Oct. 2014
The LIVEChart of Nuclides - IAEA with filter on delayed neutron emission decay
Nuclear Structure and Decay Data - IAEA with query on Neutron Separation Energy
Emission
Nuclear physics
Radioactivity | Neutron emission | Physics,Chemistry | 1,096 |
50,702 | https://en.wikipedia.org/wiki/Environmental%20engineering | Environmental engineering is a professional engineering discipline related to environmental science. It encompasses broad scientific topics like chemistry, biology, ecology, geology, hydraulics, hydrology, microbiology, and mathematics to create solutions that will protect and also improve the health of living organisms and improve the quality of the environment. Environmental engineering is a sub-discipline of civil engineering and chemical engineering; within civil engineering, it has historically been focused mainly on sanitary engineering.
Environmental engineering applies scientific and engineering principles to improve and maintain the environment to protect human health, protect nature's beneficial ecosystems, and improve environmental-related enhancement of the quality of human life.
Environmental engineers devise solutions for wastewater management, water and air pollution control, recycling, waste disposal, and public health. They design municipal water supply and industrial wastewater treatment systems, and design plans to prevent waterborne diseases and improve sanitation in urban, rural and recreational areas. They evaluate hazardous-waste management systems to evaluate the severity of such hazards, advise on treatment and containment, and develop regulations to prevent mishaps. They implement environmental engineering law, as in assessing the environmental impact of proposed construction projects.
Environmental engineers study the effect of technological advances on the environment, addressing local and worldwide environmental issues such as acid rain, global warming, ozone depletion, water pollution and air pollution from automobile exhausts and industrial sources.
Most jurisdictions impose licensing and registration requirements for qualified environmental engineers.
Etymology
The word environmental has its root in the late 19th-century French word environ (verb), meaning to encircle or to encompass. The word environment was used by Carlyle in 1827 to refer to the aggregate of conditions in which a person or thing lives. The meaning shifted again in 1956, when it began to be used in the ecological sense; ecology is the branch of science dealing with the relationship of living things to their environment.
The second part of the phrase environmental engineer originates from Latin roots and was used in 14th-century French as engignour, meaning a constructor of military engines such as trebuchets, harquebuses, longbows, cannons, catapults, ballistas, stirrups, and armour, as well as other deadly or bellicose contraptions. The word engineer was not used to refer to public works until the 16th century, and it likely entered the popular vernacular as meaning a contriver of public works during John Smeaton's time.
History
Ancient civilizations
Environmental engineering is a name for work that has been done since early civilizations, as people learned to modify and control the environmental conditions to meet needs. As people recognized that their health was related to the quality of their environment, they built systems to improve it. The ancient Indus Valley Civilization (3300 B.C.E. to 1300 B.C.E.) had advanced control over their water resources. The public work structures found at various sites in the area include wells, public baths, water storage tanks, a drinking water system, and a city-wide sewage collection system. They also had an early canal irrigation system enabling large-scale agriculture.
From 4000 to 2000 B.C.E., many civilizations had drainage systems and some had sanitation facilities, including the Mesopotamian Empire, Mohenjo-Daro, Egypt, Crete, and the Orkney Islands in Scotland. The Greeks also had aqueducts and sewer systems that used rain and wastewater to irrigate and fertilize fields.
The first aqueduct in Rome was constructed in 312 B.C.E., and the Romans continued to construct aqueducts for irrigation and safe urban water supply during droughts. They also built an underground sewer system as early as the 7th century B.C.E. that fed into the Tiber River, draining marshes to create farmland as well as removing sewage from the city.
Modern era
Very little change was seen from the decline of the Roman Empire until the 19th century, where improvements saw increasing efforts focused on public health. Modern environmental engineering began in London in the mid-19th century when Joseph Bazalgette designed the first major sewerage system following the Great Stink. The city's sewer system conveyed raw sewage to the River Thames, which also supplied the majority of the city's drinking water, leading to an outbreak of cholera. The introduction of drinking water treatment and sewage treatment in industrialized countries reduced waterborne diseases from leading causes of death to rarities.
The field emerged as a separate academic discipline during the middle of the 20th century in response to widespread public concern about water and air pollution and other environmental degradation. As society and technology grew more complex, they increasingly produced unintended effects on the natural environment. One example is the widespread application of the pesticide DDT to control agricultural pests in the years following World War II. The story of DDT as vividly told in Rachel Carson's Silent Spring (1962) is considered to be the birth of the modern environmental movement, which led to the modern field of "environmental engineering."
Education
Many universities offer environmental engineering programs through either the department of civil engineering or the department of chemical engineering; some programs also include electronics projects for monitoring and controlling environmental conditions. Environmental engineers in a civil engineering program often focus on hydrology, water resources management, bioremediation, and water and wastewater treatment plant design. Environmental engineers in a chemical engineering program tend to focus on environmental chemistry, advanced air and water treatment technologies, and separation processes. Some subdivisions of environmental engineering include natural resources engineering and agricultural engineering.
Courses for students fall into a few broad classes:
Mechanical engineering courses oriented towards designing machines and mechanical systems for environmental use such as water and wastewater treatment facilities, pumping stations, garbage segregation plants, and other mechanical facilities.
Environmental engineering or environmental systems courses oriented towards a civil engineering approach in which structures and the landscape are constructed to blend with or protect the environment.
Environmental chemistry, sustainable chemistry or environmental chemical engineering courses oriented towards understanding the effects of chemicals in the environment, including any mining processes, pollutants, and also biochemical processes.
Environmental technology courses oriented towards producing electronic or electrical graduates capable of developing devices and artifacts able to monitor, measure, model and control environmental impact, including monitoring and managing energy generation from renewable sources.
Curriculum
The following topics make up a typical curriculum in environmental engineering:
Mass and energy transfer
Environmental chemistry
Inorganic chemistry
Organic chemistry
Nuclear chemistry
Growth models
Resource consumption
Population growth
Economic growth
Risk assessment
Hazard identification
Dose-response assessment
Exposure assessment
Risk characterization
Comparative risk analysis
Water pollution
Water resources and pollutants
Oxygen demand
Pollutant transport
Water and waste water treatment
Air pollution
Industry, transportation, commercial and residential emissions
Criteria and toxic air pollutants
Pollution modelling (e.g. Atmospheric dispersion modeling)
Pollution control
Air pollution and meteorology
Global change
Greenhouse effect and global temperature
Carbon, nitrogen, and oxygen cycle
IPCC emissions scenarios
Oceanic changes (ocean acidification, other effects of global warming on oceans) and changes in the stratosphere (see Physical impacts of climate change)
Solid waste management and resource recovery
Life cycle assessment
Source reduction
Collection and transfer operations
Recycling
Waste-to-energy conversion
Landfill
Applications
Water supply and treatment
Environmental engineers evaluate the water balance within a watershed and determine the available water supply, the water needed for various needs in that watershed, the seasonal cycles of water movement through the watershed and they develop systems to store, treat, and convey water for various uses.
Water is treated to achieve water quality objectives for the end uses. In the case of a potable water supply, water is treated to minimize the risk of infectious disease transmission, the risk of non-infectious illness, and to create a palatable water flavor. Water distribution systems are designed and built to provide adequate water pressure and flow rates to meet various end-user needs such as domestic use, fire suppression, and irrigation.
Wastewater treatment
There are numerous wastewater treatment technologies. A wastewater treatment train can consist of a primary clarifier system to remove solid and floating materials, a secondary treatment system consisting of an aeration basin followed by flocculation and sedimentation or an activated sludge system and a secondary clarifier, a tertiary biological nitrogen removal system, and a final disinfection process. The aeration basin/activated sludge system removes organic material by growing bacteria (activated sludge). The secondary clarifier removes the activated sludge from the water. The tertiary system, although not always included due to costs, is becoming more prevalent to remove nitrogen and phosphorus and to disinfect the water before discharge to a surface water stream or ocean outfall.
Air pollution management
Scientists have developed air pollution dispersion models to evaluate the concentration of a pollutant at a receptor, or the impact on overall air quality, from vehicle exhausts and industrial flue gas stack emissions. To some extent, this field overlaps with efforts to decrease carbon dioxide and other greenhouse gas emissions from combustion processes.
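A minimal sketch of the classic Gaussian plume formula that underlies many such dispersion models; the emission rate, wind speed, stack height and dispersion widths below are illustrative assumptions, not values from this article:

import math

def plume_concentration(Q, u, y, z, sigma_y, sigma_z, H):
    """Steady-state concentration (g/m^3) at crosswind offset y and height z,
    for emission rate Q (g/s), wind speed u (m/s), effective stack height
    H (m), and dispersion widths sigma_y, sigma_z (m). Includes the usual
    ground-reflection term."""
    crosswind = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

# Example: 100 g/s source, 5 m/s wind, receptor on the plume centerline
# at ground level, with sigma_y = 60 m, sigma_z = 30 m, H = 50 m.
print(plume_concentration(100, 5, y=0, z=0, sigma_y=60, sigma_z=30, H=50))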
Environmental impact assessment and mitigation
Environmental engineers apply scientific and engineering principles to evaluate if there are likely to be any adverse impacts to water quality, air quality, habitat quality, flora and fauna, agricultural capacity, traffic, ecology, and noise. If impacts are expected, they then develop mitigation measures to limit or prevent such impacts. An example of a mitigation measure would be the creation of wetlands in a nearby location to mitigate the filling in of wetlands necessary for a road development if it is not possible to reroute the road.
In the United States, the practice of environmental assessment was formally initiated on January 1, 1970, the effective date of the National Environmental Policy Act (NEPA). Since that time, more than 100 developing and developed nations have either planned specific analogous laws or adopted procedures used elsewhere. NEPA is applicable to all federal agencies in the United States.
Regulatory agencies
Environmental Protection Agency
The U.S. Environmental Protection Agency (EPA) is one of the many agencies that work with environmental engineers to solve critical issues. An essential component of EPA's mission is to protect and improve air, water, and overall environmental quality to avoid or mitigate the consequences of harmful effects.
See also
Associations
References
Further reading
Davis, M. L. and D. A. Cornwell, (2006) Introduction to environmental engineering (4th ed.) McGraw-Hill
Chemical engineering
Civil engineering
Environmental science
Engineering disciplines
Environmental terminology | Environmental engineering | Chemistry,Engineering,Environmental_science | 2,085 |
49,489,032 | https://en.wikipedia.org/wiki/GloVe | GloVe, coined from Global Vectors, is a model for distributed word representation. The model is an unsupervised learning algorithm for obtaining vector representations for words. This is achieved by mapping words into a meaningful space where the distance between words is related to semantic similarity. Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space. As a log-bilinear regression model for unsupervised learning of word representations, it combines the features of two model families, namely global matrix factorization and local context window methods.
It is developed as an open-source project at Stanford and was launched in 2014. It was designed as a competitor to word2vec, and the original paper noted multiple improvements of GloVe over word2vec. Both approaches have since been superseded: Transformer-based models, such as BERT, which add multiple neural-network attention layers on top of a word embedding model similar to Word2vec, have come to be regarded as the state of the art in NLP.
Definition
You shall know a word by the company it keeps (Firth, J. R. 1957:11)

The idea of GloVe is to construct, for each word $i$, two vectors $w_i, \tilde{w}_i$, such that the relative positions of the vectors capture part of the statistical regularities of the word $i$. The statistical regularity is defined as the co-occurrence probabilities. Words that resemble each other in meaning should also resemble each other in co-occurrence probabilities.
Word counting
Let the vocabulary $V$ be the set of all possible words (aka "tokens"). Punctuation is either ignored, or treated as vocabulary, and similarly for capitalization and other typographical details.
If two words occur close to each other, then we say that they occur in the context of each other. For example, if the context length is 3, then we say that in the following sentence

GloVe1, coined2 from3 Global4 Vectors5, is6 a7 model8 for9 distributed10 word11 representation12

the word "model8" is in the context of "word11" but not the context of "representation12".
A word is not in the context of itself, so "model8" is not in the context of the word "model8", although, if a word appears again in the same context, then it does count.
Let $X_{ij}$ be the number of times that the word $j$ appears in the context of the word $i$ over the entire corpus. For example, if the corpus is just "I don't think that that is a problem." we have $X_{\text{that},\text{that}} = 2$, since the first "that" appears in the second one's context, and vice versa.
Let $X_i = \sum_j X_{ij}$ be the number of words in the context of all instances of word $i$. By counting, we have $X_i = 2 \times \text{(context length)} \times \#(\text{occurrences of word } i)$ (except for words occurring right at the start and end of the corpus).
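A minimal sketch of these counts; the function and its window handling are illustrative, not GloVe's reference implementation:

from collections import defaultdict

def cooccurrence_counts(tokens, window=3):
    """X[i][j] = how often word j appears within `window` tokens of word i."""
    X = defaultdict(lambda: defaultdict(int))
    for pos, word in enumerate(tokens):
        lo = max(0, pos - window)
        hi = min(len(tokens), pos + window + 1)
        for ctx_pos in range(lo, hi):
            if ctx_pos != pos:            # a word is not its own context
                X[word][tokens[ctx_pos]] += 1
    return X

X = cooccurrence_counts("I don't think that that is a problem".split())
print(X["that"]["that"])   # -> 2, as in the example above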
Probabilistic modelling
Let $P(j \mid i) := X_{ij} / X_i$ be the co-occurrence probability. That is, if one samples a random occurrence of the word $i$ in the entire document, and a random word within its context, that word is $j$ with probability $P(j \mid i)$. Note that $P(j \mid i) \neq P(i \mid j)$ in general. For example, in a typical modern English corpus, $P(\text{much} \mid \text{ado})$ is close to one, but $P(\text{ado} \mid \text{much})$ is close to zero. This is because the word "ado" is almost only used in the context of the archaic phrase "much ado about", but the word "much" occurs in all kinds of contexts.
For example, in a 6 billion token corpus, we have the following co-occurrence probabilities and ratios (values as reported in the original GloVe paper):

  Probability and ratio        k = solid    k = gas      k = water    k = fashion
  P(k | ice)                   1.9 × 10^-4  6.6 × 10^-5  3.0 × 10^-3  1.7 × 10^-5
  P(k | steam)                 2.2 × 10^-5  7.8 × 10^-4  2.2 × 10^-3  1.8 × 10^-5
  P(k | ice) / P(k | steam)    8.9          8.5 × 10^-2  1.36         0.96
Inspecting the table, we see that the words "ice" and "steam" are indistinguishable along the "water" dimension (water often co-occurs with both) and the "fashion" dimension (fashion rarely co-occurs with either), but distinguishable along the "solid" dimension (solid co-occurs more with "ice") and the "gas" dimension (gas co-occurs more with "steam").
The idea is to learn two vectors $w_i$ and $\tilde w_i$ for each word $i$, such that we have a multinomial logistic regression:
$$w_i^\top \tilde w_j + b_i + \tilde b_j \approx \ln P_{ij},$$
where the terms $b_i$ and $\tilde b_j$ are unimportant bias parameters.
This means that if two words $i$ and $j$ have similar co-occurrence probabilities, $P_{ik} \approx P_{jk}$ for all words $k$, then their vectors should also be similar: $w_i \approx w_j$.
Logistic regression
Naively, the regression can be fitted by minimizing the squared loss:
$$L = \sum_{i,j} \left( w_i^\top \tilde w_j + b_i + \tilde b_j - \ln P_{ij} \right)^2.$$
However, this would be noisy for rare co-occurrences. To fix the issue, the squared loss is weighted so that the loss is slowly ramped up as the absolute number of co-occurrences increases:
$$L = \sum_{i,j} f(X_{ij}) \left( w_i^\top \tilde w_j + b_i + \tilde b_j - \ln P_{ij} \right)^2,$$
where
$$f(x) = \min\!\left( (x / x_{\max})^{\alpha},\, 1 \right)$$
and $x_{\max}$ and $\alpha$ are hyperparameters. In the original paper, the authors found that $x_{\max} = 100$ and $\alpha = 3/4$ seem to work well in practice.
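The weighted objective above is simple enough to sketch directly. The following is a minimal illustrative trainer, assuming a small dense co-occurrence matrix; all names are our own, and a production implementation would use sparse counts and stochastic optimization (the original paper used AdaGrad). It regresses on ln X_ij rather than ln P_ij; the two differ by ln X_i, which the bias b_i can absorb.

import numpy as np

def train_glove(X, dim=50, x_max=100.0, alpha=0.75, lr=0.05, epochs=200, seed=0):
    # X: dense (V, V) array of co-occurrence counts X_ij.
    V = X.shape[0]
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=(V, dim))    # word vectors w_i
    wt = rng.normal(scale=0.1, size=(V, dim))   # context vectors w~_j
    b, bt = np.zeros(V), np.zeros(V)            # biases b_i, b~_j
    ii, jj = np.nonzero(X)                      # only observed pairs contribute
    x = X[ii, jj].astype(float)
    f = np.minimum((x / x_max) ** alpha, 1.0)   # weighting function f(X_ij)
    logx = np.log(x)
    for _ in range(epochs):
        # Residual of the log-bilinear model for each observed pair (i, j).
        r = (w[ii] * wt[jj]).sum(axis=1) + b[ii] + bt[jj] - logx
        g = f * r                               # weighted residual; factor 2 folded into lr
        np.add.at(w, ii, -lr * g[:, None] * wt[jj])
        np.add.at(wt, jj, -lr * g[:, None] * w[ii])
        np.add.at(b, ii, -lr * g)
        np.add.at(bt, jj, -lr * g)
    return w + wt                               # the recommended final representation (see below)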
Use
Once a model is trained, we have four trained parameters for each word $i$: $w_i$, $\tilde w_i$, $b_i$, and $\tilde b_i$. The bias parameters $b_i$ and $\tilde b_i$ are irrelevant, and only the vectors $w_i$ and $\tilde w_i$ are relevant.
The authors recommended using $w_i + \tilde w_i$ as the final representation vector for word $i$, because empirically it worked better than $w_i$ or $\tilde w_i$ alone.
Applications
GloVe can be used to find relations between words, such as synonyms, company-product relations, and zip codes and cities. However, the unsupervised learning algorithm is not effective in identifying homographs, i.e., words with the same spelling and different meanings, because it calculates a single vector for each surface form of a word. GloVe is also used by the spaCy library to build semantic word-embedding features, computing, for a given word, the list of closest words under distance measures such as cosine similarity and Euclidean distance. GloVe has also been used as the word-representation framework for online and offline systems designed to detect psychological distress in patient interviews.
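To illustrate the similarity queries mentioned above, here is a minimal sketch for loading pretrained vectors and ranking neighbours by cosine similarity. The file-format assumption (one word followed by its vector components per line, as in the Stanford "glove.6B" downloads) and all function names are our own.

import numpy as np

def load_glove(path):
    """Load vectors from a plain-text file of lines: word v1 v2 ... vd."""
    words, vecs = [], []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.rstrip().split(" ")
            words.append(parts[0])
            vecs.append(np.asarray(parts[1:], dtype=np.float32))
    M = np.vstack(vecs)
    M /= np.linalg.norm(M, axis=1, keepdims=True)  # unit-normalize rows
    return words, M

def nearest(words, M, query, k=5):
    """Return the k words most similar to query (raises if it is unknown)."""
    i = words.index(query)
    sims = M @ M[i]                     # cosine similarity, since rows are unit-norm
    best = np.argsort(-sims)[1:k + 1]   # skip the query word itself
    return [(words[j], float(sims[j])) for j in best]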
See also
ELMo
BERT
Word2vec
fastText
Natural language processing
References
External links
GloVe
Deeplearning4j GloVe
Computational linguistics
Natural language processing toolkits | GloVe | Technology | 1,182 |
31,282,761 | https://en.wikipedia.org/wiki/OnApp | OnApp was a London, UK-based software company. Its software enabled service providers to build, operate and sell IaaS public cloud, private cloud and content delivery network services. OnApp also operated the OnApp Federation, a wholesale cloud infrastructure marketplace, which enabled service providers to buy and sell cloud infrastructure managed by OnApp software, and enabled enterprises to adopt a hybrid cloud model by combining their on-premises cloud infrastructure with public cloud resources.
OnApp was founded in 2010. On 16 August 2021, OnApp became a division of Virtuozzo.
List of products
OnApp Cloud enables service providers to manage and sell different types of cloud hosting service. It includes a range of tools for server orchestration and virtual appliance management, and for managing associated functions such as metering, monitoring, failover, backups, security, billing, user permissions and limits; plus software-defined networking and software-defined storage capabilities. OnApp Cloud is managed via a graphical user interface, which is available for web browsers, iOS and Android devices, or via its REST API.
Cloud.net provides OnApp cloud software and infrastructure as a service
OnApp CDN enables service providers to create their own content delivery network services
OnApp Edge Accelerator is a patented, automated content optimization and distribution tool for web applications running on a virtual server in OnApp clouds
OnApp for VMware Cloud Director enables OnApp to be used as a management, provisioning and billing portal for VMware Cloud Director
OnApp for VMware vCenter enables OnApp to be used as a management, provisioning and billing portal for VMware vCenter environments
OnApp Federation is a wholesale marketplace where service providers can buy and sell cloud infrastructure on demand
Partnerships
OnApp partnerships include:
VMware - OnApp is a VMware Technology Alliance Partner and a recommended VMware portal provider
Acquisitions
On 8 August 2011 OnApp announced the acquisition of Aflexi, a CDN management software company
On 16 September 2014 OnApp announced the acquisition of SolusVM, a virtual server management software company. On 7 June 2018, SolusVM was acquired from OnApp by Plesk
On 16 August 2021, Virtuozzo announced that it had acquired OnApp.
References
External links
Official OnApp website
Cloud computing providers
Cloud infrastructure | OnApp | Technology | 467 |
299,619 | https://en.wikipedia.org/wiki/Ketogenesis | Ketogenesis is the biochemical process through which organisms produce ketone bodies by breaking down fatty acids and ketogenic amino acids. The process supplies energy to certain organs, particularly the brain, heart and skeletal muscle, under specific scenarios including fasting, caloric restriction, sleep, or others. (In rare metabolic diseases, insufficient gluconeogenesis can cause excessive ketogenesis and hypoglycemia, which may lead to the life-threatening condition known as non-diabetic ketoacidosis.)
Ketone bodies are not obligately produced from fatty acids; rather a meaningful amount of them is synthesized only in a situation of carbohydrate and protein insufficiency, where only fatty acids are readily available as fuel for their production.
Recent evidence suggests that glial cells are ketogenic, supplying neurons with locally synthesized ketone bodies to sustain cognitive processes.
Production
Ketone bodies are produced mainly in the mitochondria of liver cells, and synthesis can occur in response to an unavailability of blood glucose, such as during fasting. Other cells, e.g. human astrocytes, are capable of carrying out ketogenesis, but they are not as effective at doing so. Ketogenesis occurs constantly in a healthy individual. Ketogenesis in healthy individuals is ultimately under the control of the master regulatory protein AMPK, which is activated during times of metabolic stress, such as carbohydrate insufficiency. Its activation in the liver inhibits lipogenesis, promotes fatty acid oxidation, switches off acetyl-CoA carboxylase, turns on malonyl-CoA decarboxylase, and consequently induces ketogenesis. Ethanol is a potent AMPK inhibitor and therefore can cause significant disruptions in the metabolic state of the liver, including halting of ketogenesis, even in the context of hypoglycemia.
Ketogenesis takes place in the setting of low glucose levels in the blood, after exhaustion of other cellular carbohydrate stores, such as glycogen. It can also take place when there is insufficient insulin (e.g. in type 1 (and less commonly type 2) diabetes), particularly during periods of "ketogenic stress" such as intercurrent illness.
The production of ketone bodies is then initiated to make available energy that is stored as fatty acids. Fatty acids are enzymatically broken down in β-oxidation to form acetyl-CoA. Under normal conditions, acetyl-CoA is further oxidized by the citric acid cycle (TCA/Krebs cycle) and then by the mitochondrial electron transport chain to release energy. However, if the amounts of acetyl-CoA generated in fatty-acid β-oxidation exceed the processing capacity of the TCA cycle (i.e. if activity in the TCA cycle is low due to low amounts of intermediates such as oxaloacetate), acetyl-CoA is then used instead in the biosynthesis of ketone bodies via acetoacetyl-CoA and β-hydroxy-β-methylglutaryl-CoA (HMG-CoA). Furthermore, since there is only a limited amount of coenzyme A in the liver, the onset of ketogenesis allows some of the coenzyme to be freed to continue fatty-acid β-oxidation. Depletion of glucose and oxaloacetate can be triggered by fasting, vigorous exercise, high-fat diets or other medical conditions, all of which enhance ketone production. Deaminated amino acids that are ketogenic, such as leucine, also feed the TCA cycle, forming acetoacetate and acetyl-CoA and thereby producing ketones. Besides its role in the synthesis of ketone bodies, HMG-CoA is also an intermediate in the synthesis of cholesterol, but the steps are compartmentalised. Ketogenesis occurs in the mitochondria, whereas cholesterol synthesis occurs in the cytosol, hence both processes are independently regulated.
Ketone bodies
The three ketone bodies, each synthesized from acetyl-CoA molecules, are:
Acetoacetate, which can be converted by the liver into β-hydroxybutyrate, or spontaneously turn into acetone. Most acetoacetate is reduced to beta-hydroxybutyrate, which serves to additionally ferry reducing electrons to the tissues, especially the brain, where they are stripped back off and used for metabolism.
Acetone, which is generated through the decarboxylation of acetoacetate, either spontaneously or through the enzyme acetoacetate decarboxylase. It can then be further metabolized either by CYP2E1 into hydroxyacetone (acetol) and then via propylene glycol to pyruvate, lactate and acetate (usable for energy) and propionaldehyde, or via methylglyoxal to pyruvate and lactate.
β-hydroxybutyrate (not technically a ketone according to IUPAC nomenclature) is generated through the action of the enzyme D-β-hydroxybutyrate dehydrogenase on acetoacetate. Upon entering the tissues, beta-hydroxybutyrate is converted by D-β-hydroxybutyrate dehydrogenase back to acetoacetate along with a proton and a molecule of NADH, the latter of which goes on to power the electron transport chain and other redox reactions. β-Hydroxybutyrate is the most abundant of the ketone bodies, followed by acetoacetate and finally acetone.
β-Hydroxybutyrate and acetoacetate can pass through membranes easily, and are therefore a source of energy for the brain, which cannot directly metabolize fatty acids. The brain receives 60-70% of its required energy from ketone bodies when blood glucose levels are low. These bodies are transported into the brain by monocarboxylate transporters 1 and 2. Therefore, ketone bodies are a way to move energy from the liver to other cells. The liver does not have the critical enzyme, succinyl CoA transferase, to process ketone bodies, and therefore cannot undergo ketolysis. The result is that the liver only produces ketone bodies, but does not use a significant amount of them.
Regulation
Ketogenesis may or may not occur, depending on levels of available carbohydrates in the cell or body. This is closely related to the paths of acetyl-CoA:
When the body has ample carbohydrates available as energy source, glucose is completely oxidized to CO2; acetyl-CoA is formed as an intermediate in this process, first entering the citric acid cycle followed by complete conversion of its chemical energy to ATP in oxidative phosphorylation.
When the body has excess carbohydrates available, some glucose is fully metabolized, and some of it is stored in the form of glycogen or, upon citrate excess, as fatty acids (see lipogenesis). Coenzyme A is recycled at this step.
When the body has no free carbohydrates available, fat must be broken down into acetyl-CoA in order to get energy. Under these conditions, acetyl-CoA cannot be metabolized through the citric acid cycle because the citric acid cycle intermediates (mainly oxaloacetate) have been depleted to feed the gluconeogenesis pathway. The resulting accumulation of acetyl-CoA activates ketogenesis.
Insulin and glucagon are key regulating hormones of ketogenesis, with insulin being the primary regulator. Both hormones regulate hormone-sensitive lipase and acetyl-CoA carboxylase. Hormone-sensitive lipase produces diglycerides from triglycerides, freeing a fatty acid molecule for oxidation. Acetyl-CoA carboxylase catalyzes the production of malonyl-CoA from acetyl-CoA. Malonyl-CoA reduces the activity of carnitine palmitoyltransferase I, an enzyme that brings fatty acids into the mitochondria for β-oxidation. Insulin inhibits hormone-sensitive lipase and activates acetyl-CoA carboxylase, thereby reducing the amount of starting materials for fatty acid oxidation and inhibiting their capacity to enter the mitochondria. Glucagon activates hormone-sensitive lipase and inhibits acetyl-CoA carboxylase, thereby stimulating ketone body production, and making passage into the mitochondria for β-oxidation easier. Insulin also inhibits HMG-CoA lyase, further inhibiting ketone body production. Similarly, cortisol, catecholamines, epinephrine, norepinephrine, and thyroid hormones can increase the amount of ketone bodies produced, by activating lipolysis (the mobilization of fatty acids out of fat tissue) and thereby increasing the concentration of fatty acids available for β-oxidation. Unlike glucagon, catecholamines are capable of inducing lipolysis even in the presence of insulin for use by peripheral tissues during acute stress.
Peroxisome Proliferator Activated Receptor alpha (PPARα) also has the ability to upregulate ketogenesis, as it has some control over a number of genes involved in ketogenesis. For example, monocarboxylate transporter 1, which is involved in transporting ketone bodies over membranes (including the blood–brain barrier), is regulated by PPARα, thus affecting ketone body transportation into the brain. Carnitine palmitoyltransferase is also upregulated by PPARα, which can affect fatty acid transportation into the mitochondria.
Pathology
Both acetoacetate and beta-hydroxybutyrate are acidic, and, if levels of these ketone bodies are too high, the pH of the blood drops, resulting in ketoacidosis. Ketoacidosis is known to occur in untreated type I diabetes (see diabetic ketoacidosis) and in alcoholics after prolonged binge-drinking without intake of sufficient carbohydrates (see alcoholic ketoacidosis).
The production and use of ketones can be ineffective in people with defects in the pathway for beta-oxidation, in the genes for ketogenesis (HMGCS2 and HMGCL), for ketolysis (OXCT1, ACAT1). Defects in this pathway can cause varying degrees of inability to cope with fasting. HMGCS2 deficiency, for example, can cause hypoglycemic crises that lead to brain damage, and death.
Individuals with diabetes mellitus can experience overproduction of ketone bodies due to a lack of insulin. Without insulin to help move glucose from the blood into the tissues, the levels of malonyl-CoA are reduced, and it becomes easier for fatty acids to be transported into mitochondria, causing the accumulation of excess acetyl-CoA. The accumulation of acetyl-CoA in turn produces excess ketone bodies through ketogenesis. The result is a rate of ketone production higher than the rate of ketone disposal, and a decrease in blood pH. In extreme cases the resulting acetone can be detected in the patient's breath as a faint, sweet odor.
There are some health benefits to ketone bodies and ketogenesis as well. It has been suggested that a low-carb, high fat ketogenic diet can be used to help treat epilepsy in children. Additionally, ketone bodies can be anti-inflammatory. Some kinds of cancer cells are unable to use ketone bodies, as they do not have the necessary enzymes to engage in ketolysis. It has been proposed that actively engaging in behaviors that promote ketogenesis could help manage the effects of some cancers.
See also
Ketone bodies
Fatty acid metabolism
Ketosis
Ketogenic diet
References
External links
Fat metabolism at University of South Australia
James Baggott. (1998) Synthesis and Utilization of Ketone Bodies at University of Utah Retrieved 23 May 2005.
Richard A. Paselk. (2001) Fat Metabolism 2: Ketone Bodies at Humboldt State University Retrieved 23 May 2005.
Lipid metabolism | Ketogenesis | Chemistry | 2,580 |
58,948,209 | https://en.wikipedia.org/wiki/Dennis%20G.%20Peters | Dennis Gail Peters (April 17, 1937 – April 13, 2020) was an American analytical chemist who specialized in electrochemistry and was named the Herman T. Briscoe Professor at Indiana University in 1975. Peters led his own research group at Indiana University in Bloomington, Indiana until his death in 2020. Peters' research focused on the electrochemical behavior of halogenated organic compounds, later shifting toward transition metal catalysts for the oxidation and reduction of organic species. He authored or co-authored over 210 publications and 5 analytical chemistry textbooks.
Early life and education
Dennis Peters was born on April 17, 1937, in Los Angeles, California. He completed his Bachelor of Science in chemistry from the California Institute of Technology in 1958 and graduated cum laude before completing his PhD in analytical chemistry at Harvard University under James J. Lingane. After completing his PhD in 1962, Peters went to Indiana University in Bloomington, Indiana.
Career
Peters served as the chemistry department's graduate student advisor from 1969 to 1971, during which time he recruited the department's largest incoming class. His research focused on the mechanistic and synthetic aspects of the oxidation and reduction of halogenated organic compounds and on electrocatalysis in organic synthesis. Peters was still actively teaching up to the time he suffered a fall during spring break 2020 and was taken to a hospital.
Death
Peters died from hospital-acquired COVID-19 on April 13, 2020, four days before his 83rd birthday. He contracted the virus while in a Bloomington hospital recovering from a fall.
Peters' name was included in the May 2020 New York Times tribute U.S. Deaths Near 100,000, An Incalculable Loss to the 100,000 Americans who lost their lives as a direct result of the pandemic. A reporter for the Indiana Daily Student wrote that "Peters had a roaring voice that filled lecture halls".
Awards and honors
1990, American Chemical Society Division of Analytical Chemistry J. Calvin Giddings Award for Excellence in Teaching
2002, Electrochemical Society Henry B. Linford Award for Distinguished Teaching
2006, W. George Pinnell Award for Outstanding Service, Indiana University Bloomington
2007, Elected Fellow of the Electrochemical Society
2012, Electrochemical Society Division of Organic and Biological Electrochemistry, Manuel M. Baizer Award
2012, Elected Fellow of the American Association for the Advancement of Science
2017, Elected Fellow of the American Chemical Society
2020, American Chemical Society Division of Analytical Chemistry, Roland F. Hirsch Award for Distinguished Service
Publications
Books
References
External links
http://www.indiana.edu/~echem/
Indiana University Bloomington faculty
California Institute of Technology alumni
Harvard University alumni
1937 births
2020 deaths
Writers from Los Angeles
Analytical chemists
20th-century American chemists
Deaths from the COVID-19 pandemic in Indiana | Dennis G. Peters | Chemistry | 563 |
4,121,649 | https://en.wikipedia.org/wiki/Halazepam | Halazepam is a benzodiazepine derivative that was marketed under the brand names Paxipam in the United States, Alapryl in Spain, and Pacinone in Portugal.
Medical uses
Halazepam was used for the treatment of anxiety.
Adverse effects
Adverse effects include drowsiness, confusion, dizziness, and sedation. Gastrointestinal side effects have also been reported including dry mouth and nausea.
Pharmacokinetics and pharmacodynamics
Pharmacokinetic and pharmacodynamic data for halazepam were listed in Current Psychotherapeutic Drugs, published on June 15, 1998.
Regulatory Information
Halazepam is classified by the Drug Enforcement Administration (DEA) as a Schedule IV controlled substance, with the corresponding code 2762.
Commercial production
Halazepam was invented by Schlesinger Walter in the U.S. and was marketed as an anti-anxiety agent in 1981. However, halazepam is no longer commercially available in the United States, as it was withdrawn by its manufacturer because of poor sales.
See also
Benzodiazepines
Nordazepam
Diazepam
Chlordiazepoxide
Quazepam, fletazepam, triflubazam — benzodiazepines with trifluoromethyl group attached
References
External links
Inchem - Halazepam
Withdrawn drugs
Benzodiazepines
Chloroarenes
Lactams
Trifluoromethyl compounds | Halazepam | Chemistry | 307 |
27,365,300 | https://en.wikipedia.org/wiki/Etalocib | Etalocib is a drug candidate that was under development for the treatment of various types of cancer. It acts as a leukotriene B4 receptor antagonist and a PPARγ agonist.
Clinical trials were conducted measuring efficacy for treatment of non-small cell lung cancer and pancreatic cancer and the inflammatory conditions asthma, psoriasis, and ulcerative colitis, but were suspended due to lack of efficacy.
References
Leukotriene antagonists
Salicylic acids
Phenol ethers
4-Fluorophenyl compounds
Abandoned drugs
Biphenyls | Etalocib | Chemistry | 123 |
5,560,489 | https://en.wikipedia.org/wiki/Retrofitting | Retrofitting is the addition of new technology or features to older systems. Retrofits can happen for a number of reasons: for example, with big capital expenditures like naval vessels, military equipment or manufacturing plants, businesses or governments may retrofit in order to avoid replacing a system entirely. Other retrofits may be driven by changing codes or requirements, such as seismic retrofits, which are designed to strengthen older buildings and make them earthquake-resistant.
Retrofitting is also an important part of climate change mitigation and climate change adaptation, because society invested in built infrastructure, housing and other systems before the magnitude of the changes anticipated from climate change was understood. Retrofits to increase building efficiency, for example, help reduce the overall negative impacts of climate change by reducing building emissions and environmental impacts, while also allowing the building to remain healthy during extreme weather events. Retrofitting is also part of a circular economy, reducing the amount of newly manufactured goods and thus reducing lifecycle emissions and environmental impacts.
In different contexts
Building efficiency and greening
Manufacturing
Principally, retrofitting describes the measures taken in the manufacturing industry to allow new or updated parts to be fitted to old or outdated assemblies (like new blades to wind turbines).
Retrofit parts are necessary for manufacture when the design of a large assembly is changed or revised. If, after the changes have been implemented, a customer (with an old version of the product) wishes to purchase a replacement part, then retrofit parts and assembly techniques will have to be used so that the revised parts fit suitably onto the older assembly.
Retrofitting is an important process used with valves and actuators to ensure optimal operation of an industrial plant. One example is retrofitting a 3-way valve into a 2-way valve by closing one of the three openings, so that the valve can continue to be used in certain industrial systems.
Retrofitting can improve a machine or system's overall functionality by using advanced and updated equipment and technology—such as integrating Human Machine Interfaces into older factories.
Benefits of manufacturing retrofits
Saving on capital expenditure while benefiting from new technologies
Optimization of existing plant components
Adaptation of the plant for new or changed products
Increase in piece number and cycle time
Guaranteed spare parts availability
Reduced maintenance costs and increased reliability
Vehicles
Car customizing is a form of retrofitting, where older vehicles are fitted with new technologies: power windows, cruise control, remote keyless systems, electric fuel pumps, driverless systems, etc.
Trucks and agricultural machines can also be given retrofits to make them driverless.
Military equipment
Many naval vessels have undergone retrofitting and refitting, sometimes entire classes at once. For instance, the New Threat Upgrade program of the US Navy saw many vessels retrofitted for improved anti-air capability. Naval vessels are often retrofitted for one of three reasons: to incorporate new technology, to compensate for performance gaps or weaknesses in design, or to change the ship's classification.
Militaries of the world are often ardent adopters of the latest technology, and many technological advances have been spurred by warfare, especially in fields such as radar and radio communications. Because of this, and the significant investment that a ship hull represents, it is common for retrofitting to be performed whenever new systems are developed. This may be as small as replacing one type of radio with another, or replacing out-dated cryptography equipment with more secure methods of communication, or as major as replacing entire guns and turrets, adding armor plate, or new propulsion systems.
Other ships are retrofitted to compensate for weaknesses perceived in their operational capabilities. This was the secondary purpose of the US Navy's New Threat Upgrade program, for instance. Major changes in doctrine or the art of warfare also necessitate changes, such as the anti-aircraft upgrades performed on many World War Two-era vessels as air power became a dominant part of naval strategy and tactics.
Additionally, because of the investment a hull represents, few navies scrap front-line warships. Many times smaller ships are retrofitted for patrol, coast guard, or specialized roles when they are no longer fit for duty as part of a war fleet. The Japanese Momi class from the interwar period, for example, was converted from destroyers to patrol boats in 1939, as the ships were no longer capable enough to serve as destroyers. Other times classes are retrofitted because they are no longer needed in warfare, due to changes in tactics. For instance, the USS Langley was an aircraft carrier converted from a Jupiter-class collier (a coal-carrying ship used to supply coal-fired steamships with fuel).
Because of the heavy use of retrofitting and refitting, fictional navies also include the concept. As an example, in the Star Trek MMORPG Star Trek Online, players can purchase retrofitted ships of famous Star Trek ship classes, such as those crewed by the protagonists of the Star Trek TV series. This allows players to pilot iconic ships from old series of the show that would not naturally be top-of-the-line vessels, due to their obsolescence or size, but that have been retrofitted to be suitable for a maximum-level player-character admiral.
Environmental management
The term is also used in the field of environmental engineering, particularly to describe construction or renovation projects on previously built sites, to improve water quality in nearby streams, rivers or lakes. The concept has also been applied to changing the output mix of energy from power plants to cogeneration in urban areas with a potential for district heating.
Sites with extensive impervious surfaces (such as parking lots and rooftops) can generate high levels of stormwater runoff during rainstorms, and this can damage nearby water bodies. These problems can often be addressed by installing new stormwater management features on the site, a process that practitioners refer to as stormwater retrofitting. Stormwater management practices used in retrofit projects include rain gardens, permeable paving and green roofs. (See also stream restoration.)
See also
References
External links
Diesel Retrofit in Europe.
Diesel retrofit glossary
Diesel Retrofits Help Clean Regions' Air – Maryland Department of Environment
Diesel Emission Control Strategies Verification – California Air Resources Board
Electric vehicle conversion
Environmental engineering
Production and manufacturing
Vehicle modifications
Water pollution | Retrofitting | Chemistry,Engineering,Environmental_science | 1,281 |
501,787 | https://en.wikipedia.org/wiki/GNU/Linux%20naming%20controversy | Since the 1990s there has been an ongoing controversy over whether computer operating systems that use GNU software and the Linux kernel should be referred to as "GNU/Linux" or "Linux" systems.
Proponents of the term Linux argue that it is far more commonly used by the public and media and that it serves as a generic term for systems that combine that kernel with software from multiple other sources, while proponents of the term GNU/Linux note that GNU alone would be just as good a name for GNU variants which combine the GNU operating system software with software from other sources.
The term GNU/Linux is promoted by the Free Software Foundation (FSF) and its founder Richard Stallman. Their reasoning is that the operating system is seen as a modified version of the GNU operating system: Linux as a kernel is just a part of an operating system, whereas the whole operating system is basically the GNU system. Several distributions of operating systems containing the Linux kernel use the name that the FSF prefers, such as Debian, Trisquel and Parabola GNU/Linux-libre. Others claim that GNU/Linux is a useful name for distinguishing between those and Linux distributions such as Android and Alpine Linux.
History
In 1983, Richard Stallman, founder of the Free Software Foundation, set forth plans of a complete Unix-like operating system, called GNU, composed entirely of free software. In September of that year, Stallman published a manifesto in Dr. Dobb's Journal detailing his new project publicly, outlining his vision of free software. Software development work began in January 1984. By 1991, the GNU mid-level portions of the operating system were almost complete, and the upper level could be supplied by the X Window System, but the lower level (kernel, device drivers, system-level utilities and daemons) was still mostly lacking.
The kernel officially developed by GNU was called GNU Hurd. The Hurd followed an ambitious microkernel design, which proved unexpectedly difficult to implement early on. However, in 1991, Linus Torvalds independently released the first version of the Linux kernel. Early Linux developers ported GNU code, including the GNU C Compiler, to run with Linux, while the free software community adopted the use of the Linux kernel as the missing kernel for the GNU operating system. This work filled the remaining gaps in providing a completely free operating system.
Over the next few years, several suggestions arose for naming operating systems using the Linux kernel and GNU components. In 1992, the Yggdrasil Linux distribution adopted the name "Linux/GNU/X". In Usenet and mailing-list discussions, one can find usages of "GNU/Linux" as early as 1992, and of "GNU+Linux" as early as 1993. The Debian project, which was at one time sponsored by the Free Software Foundation, switched to calling its product "Debian GNU/Linux" in early 1994.
This change followed a request by Richard Stallman (who initially proposed "LiGNUx," but suggested "GNU/Linux" instead after hearing complaints about the awkwardness of the former term). GNU's June 1994 Bulletin described "Linux" as a "free Unix system for 386 machines" (with "many of the utilities and libraries" from GNU), but the January 1995 Bulletin switched to the term "GNU/Linux" instead.
Stallman's and the FSF's efforts to include "GNU" in the name started around 1994, but were reportedly mostly via private communications (such as the above-mentioned request to Debian) until 1996. In May 1996, Stallman released Emacs 19.31 with the Autoconf system target "linux" changed to "lignux" (shortly thereafter changed to "linux-gnu" in emacs 19.32), and included an essay "Linux and the GNU system" suggesting that people use the terms "Linux-based GNU system" (or "GNU/Linux system" or "Lignux" for short). He later used "GNU/Linux" exclusively, and the essay was superseded by Stallman's 1997 essay, "Linux and the GNU System".
Composition of operating systems
Modern free software and open-source software operating systems are composed of software by many different authors, including the Linux kernel developers, the GNU project, and other vendors such as those behind the X Window System. Desktop and server-based distributions use GNU software such as the GNU C Library (glibc), GNU Core Utilities (coreutils), GNU Compiler Collection, GNU Binutils, GNU gzip, GNU tar, GNU gettext, GNU grep, GNU awk, GNU sed, GNU Findutils, gnupg, libgcrypt, gnutls, GRUB, GNU readline, GNU ncurses, and the Bash shell.
In a 2002 analysis of the source code for Red Hat Linux 7.1, a typical Linux distribution, the total size of the packages from the GNU project was found to be much larger than the Linux kernel. Later, a 2011 analysis of the Ubuntu distribution's "Natty" release main repository found that 8% to 13% of it consisted of GNU components (the range depending on whether GNOME is considered part of GNU), while only 6% is taken by the Linux kernel (9% when including its direct dependencies). Determining exactly what constitutes the "operating system" per se is a matter of continuing debate.
On the other hand, some embedded systems, such as handheld devices and smartphones (like Google's Android), residential gateways (routers), and Voice over IP devices, are engineered with space efficiency in mind and use a Linux kernel with few or no components of GNU, due to perceived issues surrounding bloat, and impeded performance. A system running μClinux is likely to substitute uClibc for glibc, and BusyBox for coreutils. Google's Linux-based Android operating system does not use any GNU components or libraries, using Google's own BSD-based Bionic C library in place of glibc. The FSF agrees that "GNU/Linux" is not an appropriate name for these systems.
There are also systems that use a GNU userspace and/or C library on top of a non-Linux kernel, for example Debian GNU/Hurd (GNU userland on the GNU kernel) or Debian GNU/kFreeBSD (which uses the GNU coreutils and C library with the kernel from FreeBSD).
Opinions
GNU/Linux
The FSF justifies the name "GNU/Linux" primarily on the grounds that the GNU project was specifically developing a complete system, of which they argue that the Linux kernel filled one of the final gaps; the large number of GNU components and GNU source code used in such systems is a secondary argument:
Other arguments include that the name "GNU/Linux" recognizes the role that the free-software movement played in building modern free and open source software communities, that the GNU project played a larger role in developing packages and software for GNU/Linux or Linux distributions, and that using the word "Linux" to refer to the Linux kernel, the operating system and entire distributions of software leads to confusion on the differences about the three. Because of this confusion, legal threats and public relations campaigns apparently directed against the kernel, such as those launched by the SCO Group or the Alexis de Tocqueville Institution (AdTI), have been misinterpreted by many commentators who assume that the whole operating system is being targeted. SCO and the AdTI have even been accused of deliberately exploiting this confusion.
Regarding suggestions that renaming efforts stem from egotism or personal pique, Stallman has responded that his interest is not in giving credit to himself but to the GNU Project: "Some people think that it's because I want my ego to be fed. Of course, I'm not asking you to call it 'Stallmanix'." In response to another common suggestion that many people have contributed to the system and that a short name cannot credit all of them, the FSF has argued that this cannot justify calling the system "Linux", since they believe that the GNU project's contribution was ultimately greater than that of the Linux kernel in these related systems.
In 2010, Stallman stated that naming is not simply a matter of giving equal mention to the GNU Project, saying that because the system is more widely referred as "Linux", people tend to "think it's all Linux, that it was all started by Mr. Torvalds in 1991, and they think it all comes from his vision of life, and that's the really bad problem."
Ariadne Conill, developer and security chair of Alpine Linux, has stated that in her opinion GNU/Linux is the correct name when referring to Linux distributions that are based on glibc and GNU coreutils, such as Debian and Fedora Linux. This can be contrasted to other Linux distributions which are based on bionic (Android) and musl (Alpine).
Linux
Proponents of naming the operating systems "Linux" state that "Linux" is used far more often than "GNU/Linux".
Eric S. Raymond writes (in the "Linux" entry of the Jargon File):
When Linus Torvalds was asked in the documentary Revolution OS whether the name "GNU/Linux" was justified, he replied:
An earlier comment by Torvalds on the naming controversy was:
The name "GNU/Linux," particularly when using Stallman's preferred pronunciation, has been criticized for its perceived clumsiness and verbosity, a factor that Torvalds has cited as the downfall of operating systems such as 386BSD.
The Linux Journal speculated that Stallman's advocacy of the combined name stems from frustration that "Linus got the glory for what [Stallman] wanted to do."
Others have suggested that, regardless of the merits, Stallman's persistence in what sometimes seems a lost cause makes him and GNU look bad. For example, Larry McVoy (author of BitKeeper, once used to manage Linux kernel development) opined that "claiming credit only makes one look foolish and greedy".
Many users and vendors who prefer the name "Linux," such as Jim Gettys, one of the original developers of the X Window System, point to the inclusion of non-GNU, non-kernel tools, such as KDE, LibreOffice, and Firefox, in end-user operating systems based on the Linux kernel:
See also
Alternative terms for free software
GNU variants
List of GNU packages
History of free software
References
External links
"Why GNU/Linux?" (or "What's in a name?"), by Richard Stallman
GNU Users Who Have Never Heard of GNU, also by Richard Stallman
GNU/Linux FAQ by Richard Stallman
The "Say Lignux" Campaign, by Richard Stallman, 2013
David A. Wheeler on why he mostly says "GNU/Linux"
Stallman explaining the relationship of GNU and Linux, Zagreb, 2006
Who wrote Linux?, by Josh Mehlman, ZDNet Australia, 7 July 2004
"What is GNU/Linux" , Debian Project
1994 controversies
Free Software Foundation
Linus Torvalds
Linux
Naming controversies
Computing terminology | GNU/Linux naming controversy | Technology | 2,328 |
49,113,314 | https://en.wikipedia.org/wiki/Judith%20Lowe | Judith Ann Lowe is the former deputy chair of the Construction Industry Training Board.
Honours
She was appointed Officer of the Most Excellent Order of the British Empire (OBE) in the 2015 Birthday Honours for services to the construction industry, particularly women in construction.
References
British civil engineers
Date of birth missing (living people)
Living people
Officers of the Order of the British Empire
Place of birth missing (living people)
Year of birth missing (living people) | Judith Lowe | Engineering | 89 |
49,969,406 | https://en.wikipedia.org/wiki/Gentex%20%28standard%29 | Gentex is an international standard (ITU F.20) for the transmission of telegrams over the Telex network. It replaced fixed telegraph connections between stations: instead, the telegraph station transmitting a telegram connects directly to the receiving station and delivers the telegram via teleprinter.
The first official Gentex traffic was introduced in 1956 between the Netherlands, Switzerland, West Germany and Austria. In 1960 Sweden introduced a trial Gentex service with the Netherlands, and in 1963 the Nordic countries decided to introduce Gentex service between their countries.
References
Telegraphy
Telecommunications systems | Gentex (standard) | Technology | 126 |
279,836 | https://en.wikipedia.org/wiki/W%20Ursae%20Majoris | W Ursae Majoris (W UMa) is the variable star designation for a binary star system in the northern constellation of Ursa Major. It has an apparent visual magnitude of about 7.9, which is too faint to be seen with the naked eye. However, it can be viewed with a small telescope. Parallax measurements place it at a distance of roughly 169 light years (52 parsecs) from Earth.
In 1903, the German astronomers Gustav Müller and Paul Kempf found that the luminosity of this system varies. It has since become the prototype and eponym for a class of variable stars called W Ursae Majoris variables. This system consists of a pair of stars in a tight, circular orbit with a period of 0.3336 days, or eight hours and 26 seconds. During every orbital cycle, each star eclipses the other, resulting in a decrease in magnitude. The maximum magnitude of the pair is 7.75 mag. During the eclipse of the primary, the net magnitude drops by 0.73 mag, while the eclipse of the secondary causes a magnitude decrease of 0.68 mag.
The two stars in W Ursae Majoris are so close together that their outer envelopes are in direct contact, making them a contact binary system. As a result, they have the same stellar classification of F8Vp, which matches the spectrum of a main-sequence star that is generating energy through the nuclear fusion of hydrogen. However, the primary component has a larger mass and radius than the secondary, with 1.14 times the Sun's mass and 1.09 times the Sun's radius. The secondary has 0.55 solar masses and 0.79 solar radii.
The orbital period of the system has changed since 1903, which may be the result of mass transfer or the braking effects of magnetic fields. Star spots have been observed on the surface of the stars and strong X-ray emissions have been detected, indicating a high level of magnetic activity that is common to W UMa variables. This magnetic activity may play a role in regulating the timing and magnitude of the mass transfer.
W Ursae Majoris has a 12th magnitude companion star with the designation ADS 7494B, not to be confused with W UMa B, the secondary of the close eclipsing pair. They may be moving together through space.
References
External links
AAVSO Variable Star of the Season, January 2010: W Ursae Majoris
W Ursae Majoris variables
Ursa Major
F-type main-sequence stars
Ursae Majoris, W
083950
Durchmusterung objects
047727 | W Ursae Majoris | Astronomy | 545 |
32,753,428 | https://en.wikipedia.org/wiki/FEZ-like%20protein | In molecular biology, the FEZ-like protein family is a family of eukaryotic proteins thought to be involved in axonal outgrowth and fasciculation. The N-terminal regions of these sequences are less conserved than the C-terminal regions, and are highly acidic. The Caenorhabditis elegans homologue, UNC-76, may play structural and signalling roles in the control of axonal extension and adhesion (particularly in the presence of adjacent neuronal cells) and these roles have also been postulated for other FEZ family proteins. Certain homologues have been definitively found to interact with the N-terminal variable region (V1) of PKC-zeta, and this interaction causes cytoplasmic translocation of the FEZ family protein in mammalian neuronal cells. The C-terminal region probably participates in the association with the regulatory domain of PKC-zeta. The members of this family are predicted to form coiled-coil structures which may interact with members of the RhoA family of signalling proteins, but are not thought to contain other characteristic protein motifs. Certain members of this family are expressed almost exclusively in the brain, whereas others (such as FEZ2) are expressed in other tissues, and are thought to perform similar but unknown functions in these tissues.
References
Protein families | FEZ-like protein | Biology | 278 |
9,826,766 | https://en.wikipedia.org/wiki/Acicular%20ferrite | Acicular ferrite is a microstructure of ferrite in steel that is characterised by needle-shaped crystallites or grains when viewed in two dimensions. The grains, which are actually three-dimensional, have a thin lenticular shape. This microstructure is advantageous over other microstructures for steel because of its chaotic ordering, which increases toughness.
Acicular ferrite is formed in the interior of the original austenitic grains by direct nucleation on the inclusions, resulting in randomly oriented short ferrite needles with a 'basket weave' appearance. Acicular ferrite is also characterised by high angle boundaries between the ferrite grains. This further reduces the chance of cleavage, because these boundaries impede crack propagation.
In C-Mn steel weld metals, it is reported that nucleation of various ferrite morphologies is aided by non-metallic inclusion; in particular oxygen-rich inclusions of a certain type and size are associated with the intragranular nucleation of acicular ferrite. Acicular ferrite is a fine Widmanstätten constituent, which is nucleated by an optimum intragranular dispersion of oxide/sulfide/silicate particles. The interlocking nature of acicular ferrite, together with its fine grain size (0.5 to 5 μm with aspect ratio from 3:1 to 10:1), provides maximum resistance to crack propagation by cleavage.
Composition control of weld metal is often performed to maximise the volume fraction of acicular ferrite, owing to the toughness it imparts. During continuous cooling, higher alloy contents or faster cooling generally delay transformation, which will then take place at lower temperatures, below the bainite start temperature, and lead to higher hardness. The efficacy of inclusions as nucleation sites in modern low-alloy steel weld metals is such that fine-scale intragranular bainite can nucleate on them, both during continuous cooling and during isothermal transformation below the bainite start temperature. Some confusion has arisen in the literature because this fine-scale intragranular bainite, which can resemble acicular ferrite in the optical microscope, has been called acicular ferrite by some researchers.
See also
Eutectic
Bainite
Martensite
References
External links
http://www.msm.cam.ac.uk/phase-trans/2007/acicular.html
Iron
Welding
Metallurgy | Acicular ferrite | Chemistry,Materials_science,Engineering | 527 |
15,931,153 | https://en.wikipedia.org/wiki/Model%20complete%20theory | In model theory, a first-order theory is called model complete if every embedding of its models is an elementary embedding.
Equivalently, every first-order formula is equivalent, modulo the theory, to a universal formula.
This notion was introduced by Abraham Robinson.
Model companion and model completion
A companion of a theory T is a theory T* such that every model of T can be embedded in a model of T* and vice versa.
A model companion of a theory T is a companion of T that is model complete. Robinson proved that a theory has at most one model companion. Not every theory is model-companionable; for example, the theory of groups has no model companion. However, if T is an $\aleph_0$-categorical theory, then it always has a model companion.
A model completion for a theory T is a model companion T* such that for any model M of T, the theory of T* together with the diagram of M is complete. Roughly speaking, this means every model of T is embeddable in a model of T* in a unique way.
If T* is a model companion of T then the following conditions are equivalent:
T* is a model completion of T
T has the amalgamation property.
If T also has a universal axiomatization, both of the above are also equivalent to:
T* has elimination of quantifiers
Examples
Any theory with elimination of quantifiers is model complete; see the illustration after this list.
The theory of algebraically closed fields is the model completion of the theory of fields. It is model complete but not complete.
The model completion of the theory of equivalence relations is the theory of equivalence relations with infinitely many equivalence classes, each containing an infinite number of elements.
The theory of real closed fields, in the language of ordered rings, is a model completion of the theory of ordered fields (or even ordered domains).
The theory of real closed fields, in the language of rings, is the model companion for the theory of formally real fields, but is not a model completion.
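To illustrate the first example above, in the theory of algebraically closed fields every formula has a quantifier-free equivalent. Two standard instances, a sketch with the coefficients as free variables, are:

\[
\exists x\,(a x + b = 0) \;\longleftrightarrow\; (a \neq 0) \lor (b = 0),
\qquad
\exists x\,(x^{2} + b x + c = 0) \;\longleftrightarrow\; \top .
\]

The second equivalence holds because every monic polynomial of positive degree has a root in an algebraically closed field; in general, the quantifier-free equivalents are Boolean combinations of polynomial conditions on the coefficients.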
Non-examples
The theory of dense linear orders with a first and last element is complete but not model complete.
The theory of groups (in a language with symbols for the identity, product, and inverses) has the amalgamation property but does not have a model companion.
Sufficient condition for completeness of model-complete theories
If T is a model complete theory and there is a model of T that embeds into any model of T, then T is complete.
Notes
References
Mathematical logic
Model theory | Model complete theory | Mathematics | 500 |
8,465,350 | https://en.wikipedia.org/wiki/Search/Retrieve%20via%20URL | Search/Retrieve via URL (SRU) is a standard search protocol for Internet search queries, utilizing Contextual Query Language (CQL), a standard query syntax for representing queries.
SRU, along with the related Search/Retrieve via Web (SRW) service, was created as part of the ZING (Z39.50 International: Next Generation) initiative as a successor to the Z39.50 protocol.
Example usage
A sample complete response for this SRU query URL, with the URL query string version=1.1&operation=searchRetrieve&query=dc.title=Darwinism (i.e., the CQL query dc.title=Darwinism):
<?xml version="1.0"?>
<sru:searchRetrieveResponse xmlns:sru="http://www.loc.gov/zing/srw/" xmlns:diag="http://www.loc.gov/zing/srw/diagnostic/" xmlns:xcql="http://www.loc.gov/zing/cql/xcql/" xmlns:dc="http://purl.org/dc/elements/1.1/">
<sru:version>1.1</sru:version>
<sru:numberOfRecords>4</sru:numberOfRecords>
<sru:records>
<sru:record>
<sru:recordSchema>info:srw/schema/1/dc-v1.1</sru:recordSchema>
<sru:recordPacking>xml</sru:recordPacking>
<sru:recordData>
<srw_dc:dc xmlns:srw_dc="info:srw/schema/1/dc-schema">
<dc:title>Darwinism</dc:title>
<dc:creator>Dennett</dc:creator>
<dc:subject>The rule of the Local is a basic principle of Darwinism - it corresponds to the principle that there is no Creator, no intelligent foresight. I 262</dc:subject>
</srw_dc:dc>
</sru:recordData>
<sru:recordNumber>1</sru:recordNumber>
</sru:record>
<sru:record>
<sru:recordSchema>info:srw/schema/1/dc-v1.1</sru:recordSchema>
<sru:recordPacking>xml</sru:recordPacking>
<sru:recordData>
<srw_dc:dc xmlns:srw_dc="info:srw/schema/1/dc-schema">
<dc:title>Darwinism</dc:title>
<dc:creator>McGinn</dc:creator>
<dc:subject>Design argument/William Paley: organisms have a brilliant design: We have not designed them, so we have to assume that a foreign intelligence did it. Let's call this intelligence "God". So God exists. II 98
DarwinVsPaley: intelligent design does not require a Creator. Selection is sufficient. II 98
Mind/consciousness/evolution/McGinn: evolution does not explain consciousness, nor sensations. II 99
Reason: sensation and consciousness cannot be explained through the means of Darwinian principles and physics, because if selection were to explain how sensations are supposed to be created by it, it must be possible to mold the mind from matter. II 100
(s) Consciousness or sensations would have to be visible for selection! (Similar GouldVsDawkins)</dc:subject>
</srw_dc:dc>
</sru:recordData>
<sru:recordNumber>2</sru:recordNumber>
</sru:record>
<sru:record>
<sru:recordSchema>info:srw/schema/1/dc-v1.1</sru:recordSchema>
<sru:recordPacking>xml</sru:recordPacking>
<sru:recordData>
<srw_dc:dc xmlns:srw_dc="info:srw/schema/1/dc-schema">
<dc:title>Darwinism</dc:title>
<dc:creator>Putnam</dc:creator>
<dc:subject>Rorty: Darwinism/Putnam: he does not like the image of man as a more complicated animal (scientistic and reductionist physicalism).
Rorty VI 63</dc:subject>
</srw_dc:dc>
</sru:recordData>
<sru:recordNumber>3</sru:recordNumber>
</sru:record>
<sru:record>
<sru:recordSchema>info:srw/schema/1/dc-v1.1</sru:recordSchema>
<sru:recordPacking>xml</sru:recordPacking>
<sru:recordData>
<sru:dc>
<sru:title>Darwinism</sru:title>
<sru:creator>Rorty</sru:creator>
<sru:subject>Darwinism/Rorty provides a useful vocabulary. "Darwinism": for me is a fable about human beings as animals with special skills and organs. But these organs and skills are just as little in a representational relation to the world as the anteater s snout. VI 69 ff
Darwinism/Rorty: it demands that we consider our doing and being as part of the same continuum, which also includes the existence of amoebae, spiders and squirrels. One way to do that is to say that our experience is just more complex. VI 424</sru:subject>
</sru:dc>
</sru:recordData>
<sru:recordNumber>4</sru:recordNumber>
</sru:record>
</sru:records>
</sru:searchRetrieveResponse>
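For illustration, a minimal SRU 1.1 client can be written with only the Python standard library. This is a sketch under stated assumptions: the endpoint URL in the usage comment is a placeholder, the function name is our own, and it simply issues a searchRetrieve request like the one above and extracts numberOfRecords and any Dublin Core titles.

from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

SRW_NS = "{http://www.loc.gov/zing/srw/}"
DC_NS = "{http://purl.org/dc/elements/1.1/}"

def search_retrieve(base_url, cql_query, max_records=10):
    """Issue an SRU 1.1 searchRetrieve request and parse the XML response."""
    params = urlencode({
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": cql_query,
        "maximumRecords": max_records,
    })
    with urlopen(f"{base_url}?{params}") as resp:
        tree = ET.parse(resp)
    total = tree.findtext(f"{SRW_NS}numberOfRecords")
    titles = [t.text for t in tree.iter(f"{DC_NS}title")]
    return total, titles

# Hypothetical usage (substitute a real SRU endpoint):
# total, titles = search_retrieve("https://example.org/sru", "dc.title=Darwinism")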
See also
Search/Retrieve Web Service
External links
Search/Retrieve via URL at Library of Congress
CQL at Library of Congress
Sample Page from the Dictionary of Arguments
A complete example with query and answer
Internet search
Information retrieval techniques
URL | Search/Retrieve via URL | Technology | 1,350 |
2,570,207 | https://en.wikipedia.org/wiki/Bioplastic | Bioplastics are plastic materials produced from renewable biomass sources. Historically, bioplastics made from natural materials like shellac or cellulose were the first plastics. Since the end of the 19th century they have been increasingly superseded by fossil-fuel plastics derived from petroleum or natural gas (fossilized biomass is not considered renewable within a reasonably short time). Today, in the context of the bioeconomy and circular economy, bioplastics are gaining interest again. Conventional petroleum-based polymers are increasingly blended with bioplastics to manufacture "bio-attributed" or "mass-balanced" plastic products, so the difference between bio- and other plastics can be difficult to define.
Bioplastics can be produced by:
processing directly from natural biopolymers including polysaccharides (e.g., corn starch or rice starch, cellulose, chitosan, and alginate) and proteins (e.g., soy protein, gluten, and gelatin),
chemical synthesis from sugar derivatives (e.g., lactic acid) and lipids (such as vegetable fats and oils) from either plants or animals,
fermentation of sugars or lipids,
biotechnological production in microorganisms or genetically modified plants (e.g., polyhydroxyalkanoates (PHA).
One advantage of bioplastics is their independence from fossil fuel as a raw material, which is a finite and globally unevenly distributed resource linked to petroleum politics and environmental impacts. Bioplastics can utilize previously unused waste materials (e.g., straw, woodchips, sawdust, and food waste). Life cycle analysis studies show that some bioplastics can be made with a lower carbon footprint than their fossil counterparts, for example when biomass is used as raw material and also for energy production. However, the production processes of other bioplastics are less efficient and result in a higher carbon footprint than those of fossil plastics.
Whether any kind of plastic is degradable or non-degradable (durable) depends on its molecular structure, not on whether or not the biomass constituting the raw material is fossilized. Both durable bioplastics, such as Bio-PET or biopolyethylene (bio-based analogues of fossil-based polyethylene terephthalate and polyethylene), and degradable bioplastics, such as polylactic acid, polybutylene succinate, or polyhydroxyalkanoates, exist. Bioplastics must be recycled similarly to fossil-based plastics to avoid plastic pollution; "drop-in" bioplastics (such as biopolyethylene) fit into existing recycling streams. On the other hand, recycling biodegradable bioplastics in the current recycling streams poses additional challenges, as it may raise the cost of sorting and decrease the yield and the quality of the recyclate. However, biodegradation is not the only acceptable end-of-life disposal pathway for biodegradable bioplastics, and mechanical and chemical recycling are often the preferred choice from the environmental point of view.
Biodegradability may offer an end-of-life pathway in certain applications, such as agricultural mulch, but the concept of biodegradation is not as straightforward as many believe. Susceptibility to biodegradation is highly dependent on the chemical backbone structure of the polymer, and different bioplastics have different structures, thus it cannot be assumed that bioplastic in the environment will readily disintegrate. Conversely, biodegradable plastics can also be synthesized from fossil fuels.
As of 2018, bioplastics represented approximately 2% of the global plastics output (>380 million tons). In 2022, the commercially most important types of bioplastics were PLA and products based on starch. With continued research on bioplastics, investment in bioplastic companies and rising scrutiny on fossil-based plastics, bioplastics are becoming more dominant in some markets, while the output of fossil plastics also steadily increases.
IUPAC definition
The International Union of Pure and Applied Chemistry defines a biobased polymer as:
Proposed applications
Few commercial applications exist for bioplastics. Cost and performance remain problematic. Typical is the example of Italy, where biodegradable plastic bags have been compulsory for shoppers since 2011, when a specific law introduced the requirement. Beyond structural materials, electroactive bioplastics are being developed that promise to carry electric current.
Bioplastics are used for disposable items, such as packaging, crockery, cutlery, pots, bowls, and straws.
Biopolymers are available as coatings for paper rather than the more common petrochemical coatings.
Bioplastics called drop-in bioplastics are chemically identical to their fossil-fuel counterparts but made from renewable resources. Examples include bio-PE, bio-PET, bio-propylene, bio-PP, and biobased nylons. Drop-in bioplastics are easy to implement technically, as existing infrastructure can be used. By contrast, a dedicated bio-based pathway makes it possible to produce products that cannot be obtained through traditional chemical reactions, with unique and superior properties compared to fossil-based alternatives.
Types
Polysaccharide-based bioplastics
Starch-based plastics
Thermoplastic starch represents the most widely used bioplastic, constituting about 50 percent of the bioplastics market. Simple starch bioplastic film can be made at home by gelatinizing starch and solution casting. Pure starch is able to absorb humidity, and is thus a suitable material for the production of drug capsules by the pharmaceutical sector. However, pure starch-based bioplastic is brittle. Plasticizers such as glycerol, glycol, and sorbitol can be added so that the starch can also be processed thermoplastically. The characteristics of the resulting bioplastic (also called "thermoplastic starch") can be tailored to specific needs by adjusting the amounts of these additives. Conventional polymer processing techniques can be used to process starch into bioplastic, such as extrusion, injection molding, compression molding and solution casting. The properties of starch bioplastics are largely influenced by the amylose/amylopectin ratio. Generally, high-amylose starch results in superior mechanical properties. However, high-amylose starch is less processable because of its higher gelatinization temperature and higher melt viscosity.
Starch-based bioplastics are often blended with biodegradable polyesters to produce starch/polylactic acid, starch/polycaprolactone or starch/Ecoflex (polybutylene adipate-co-terephthalate produced by BASF) blends. These blends are used for industrial applications and are also compostable. Other producers, such as Roquette, have developed other starch/polyolefin blends. These blends are not biodegradable, but have a lower carbon footprint than petroleum-based plastics used for the same applications.
Starch is cheap, abundant, and renewable.
Starch-based films (mostly used for packaging purposes) are made mainly from starch blended with thermoplastic polyesters to form biodegradable and compostable products. These films are seen specifically in consumer goods packaging of magazine wrappings and bubble films. In food packaging, these films are seen as bakery or fruit and vegetable bags. Composting bags made of these films are used in the selective collection of organic waste. Further, starch-based films can be used as paper.
Starch-based nanocomposites have been widely studied, showing improved mechanical properties, thermal stability, moisture resistance, and gas barrier properties.
Cellulose-based plastics
Cellulose bioplastics are mainly the cellulose esters (including cellulose acetate and nitrocellulose) and their derivatives, including celluloid.
Cellulose can become thermoplastic when extensively modified. An example of this is cellulose acetate, which is expensive and therefore rarely used for packaging. However, cellulosic fibers added to starches can improve mechanical properties, permeability to gas, and water resistance due to being less hydrophilic than starch.
Protein-based plastics
Bioplastics can be made from proteins from different sources. For example, wheat gluten and casein show promising properties as a raw material for different biodegradable polymers.
Additionally, soy protein is being considered as another source of bioplastic. Soy proteins have been used in plastic production for over one hundred years. For example, body panels of an original Ford automobile were made of soy-based plastic.
There are difficulties with using soy protein-based plastics due to their water sensitivity and relatively high cost. Therefore, producing blends of soy protein with some already-available biodegradable polyesters improves the water sensitivity and cost.
Some aliphatic polyesters
The aliphatic biopolyesters are mainly polyhydroxyalkanoates (PHAs) like the poly-3-hydroxybutyrate (PHB), polyhydroxyvalerate (PHV) and polyhydroxyhexanoate (PHH).
Polylactic acid (PLA)
Polylactic acid (PLA) is a transparent plastic produced from maize or dextrose. Superficially, it is similar to conventional petrochemical-based mass plastics like PS. It is derived from plants, and it biodegrades under industrial composting conditions. Unfortunately, it exhibits inferior impact strength, thermal robustness, and barrier properties (blocking air transport across the membrane) compared to non-biodegradable plastics. PLA and PLA blends generally come in the form of granulates. PLA is used on a limited scale for the production of films, fibers, plastic containers, cups, and bottles. PLA is also the most common type of plastic filament used for home fused deposition modeling in 3D printers.
Poly-3-hydroxybutyrate
The biopolymer poly-3-hydroxybutyrate (PHB) is a polyester produced by certain bacteria processing glucose, corn starch or wastewater. Its characteristics are similar to those of the petroplastic polypropylene (PP). PHB production is increasing. The South American sugar industry, for example, has decided to expand PHB production to an industrial scale. PHB is distinguished primarily by its physical characteristics. It can be processed into a transparent film with a melting point higher than 130 degrees Celsius, and is biodegradable without residue.
Polyhydroxyalkanoates
Polyhydroxyalkanoates (PHA) are linear polyesters produced in nature by bacterial fermentation of sugar or lipids. They are produced by the bacteria to store carbon and energy. In industrial production, the polyester is extracted and purified from the bacteria by optimizing the conditions for the fermentation of sugar. More than 150 different monomers can be combined within this family to give materials with extremely different properties. PHA is more ductile and less elastic than other plastics, and it is also biodegradable. These plastics are being widely used in the medical industry.
Polyamide 11
PA 11 is a biopolymer derived from natural oil. It is also known under the tradename Rilsan B, commercialized by Arkema. PA 11 belongs to the technical polymers family and is not biodegradable. Its properties are similar to those of PA 12, although emissions of greenhouse gases and consumption of nonrenewable resources are reduced during its production. Its thermal resistance is also superior to that of PA 12. It is used in high-performance applications like automotive fuel lines, pneumatic airbrake tubing, electrical cable antitermite sheathing, flexible oil and gas pipes, control fluid umbilicals, sports shoes, electronic device components, and catheters.
A similar plastic is Polyamide 410 (PA 410), derived 70% from castor oil, under the trade name EcoPaXX, commercialized by DSM.
PA 410 is a high-performance polyamide that combines the benefits of a high melting point (approx. 250 °C), low moisture absorption and excellent resistance to various chemical substances.
Bio-derived polyethylene
The basic building block (monomer) of polyethylene is ethylene. Ethylene is chemically similar to, and can be derived from, ethanol, which can be produced by fermentation of agricultural feedstocks such as sugar cane or corn. Bio-derived polyethylene is chemically and physically identical to traditional polyethylene – it does not biodegrade but can be recycled. The Brazilian chemicals group Braskem claims that using its method of producing polyethylene from sugar cane ethanol captures (removes from the environment) 2.15 tonnes of CO2 per tonne of Green Polyethylene produced.
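A back-of-the-envelope stoichiometric check puts this figure in context: polyethylene's CH2 repeat unit is about 86% carbon by mass, so one tonne of the polymer embodies the carbon of at most roughly 3.1 tonnes of CO2, and Braskem's lower 2.15-tonne claim is plausibly a net life-cycle figure rather than the stoichiometric ceiling. The short Python sketch below computes only that ceiling and is an illustration, not Braskem's accounting:

# Stoichiometric upper bound on CO2 embodied per tonne of polyethylene.
C, H, O = 12.011, 1.008, 15.999      # atomic masses, g/mol
carbon_fraction = C / (C + 2 * H)    # CH2 repeat unit: ~0.856 carbon by mass
co2_per_tonne_pe = carbon_fraction * (C + 2 * O) / C  # tonnes CO2 per tonne PE
print(round(carbon_fraction, 3), round(co2_per_tonne_pe, 2))  # 0.856 3.14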
Genetically modified feedstocks
With GM corn being a common feedstock, it is unsurprising that some bioplastics are made from it.
Under the bioplastics manufacturing technologies there is the "plant factory" model, which uses genetically modified crops or genetically modified bacteria to optimize efficiency.
Polyhydroxyurethanes
The condensation of polyamines and cyclic carbonates produces polyhydroxyurethanes. Unlike traditional cross-linked polyurethanes, cross-linked polyhydroxyurethanes are in principle amenable to recycling and reprocessing through dynamic transcarbamoylation reactions.
Lipid derived polymers
A number of bioplastic classes have been synthesized from plant- and animal-derived fats and oils. Polyurethanes, polyesters, epoxy resins and a number of other types of polymers have been developed with properties comparable to crude oil-based materials. The recent development of olefin metathesis has opened a wide variety of feedstocks to economical conversion into biomonomers and polymers. With the growing production of traditional vegetable oils as well as low-cost microalgae-derived oils, there is huge potential for growth in this area.
In 2024, Lamanna et al. introduced oleogels based on ethyl cellulose and vegetable oils as a novel bioplastic, named OleoPlast. This bioplastic exhibits thermoplastic behavior, offering both recyclability and biodegradability. The key advantages of OleoPlast include the ability to customize its mechanical and physical properties, as well as its compatibility with different processing techniques, such as injection molding, hot pressing, extrusion, and fused filament fabrication.
Environmental impact
Materials such as starch, cellulose, wood, sugar and biomass are used as a substitute for fossil fuel resources to produce bioplastics; this makes the production of bioplastics a more sustainable activity compared to conventional plastic production. The environmental impact of bioplastics is often debated, as there are many different metrics for "greenness" (e.g., water use, energy use, deforestation, biodegradation, etc.). Hence bioplastic environmental impacts are commonly categorized into nonrenewable energy use, climate change, eutrophication and acidification. Bioplastic production significantly reduces greenhouse gas emissions and decreases non-renewable energy consumption. Firms worldwide would also be able to increase the environmental sustainability of their products by using bioplastics.
Although bioplastics save more nonrenewable energy than conventional plastics and emit fewer greenhouse gases, bioplastics also have negative environmental impacts such as eutrophication and acidification. Bioplastics induce higher eutrophication potentials than conventional plastics. Biomass production during industrial farming practices causes nitrate and phosphate to leach into water bodies; this causes eutrophication, the process in which a body of water gains excessive richness of nutrients. Eutrophication is a threat to water resources around the world since it causes harmful algal blooms that create oxygen dead zones, killing aquatic animals. Bioplastics also increase acidification. The high eutrophication and acidification associated with bioplastics are largely caused by the use of chemical fertilizer in the cultivation of the renewable raw materials from which bioplastics are produced.
Other environmental impacts of bioplastics include exerting lower human and terrestrial ecotoxicity and carcinogenic potentials compared to conventional plastics. However, bioplastics exert higher aquatic ecotoxicity than conventional materials. Bioplastics and other bio-based materials increase stratospheric ozone depletion compared to conventional plastics; this is a result of nitrous oxide emissions during fertilizer application in industrial farming for biomass production. Artificial fertilizers increase nitrous oxide emissions especially when the crop does not need all the nitrogen. Minor environmental impacts of bioplastics include toxicity from using pesticides on the crops used to make bioplastics, and carbon dioxide emissions from harvesting vehicles. Other minor environmental impacts include high water consumption for biomass cultivation, soil erosion, soil carbon losses and loss of biodiversity, mainly as a result of land use associated with bioplastics. Land use for bioplastics production leads to lost carbon sequestration and increases carbon costs while diverting land from its existing uses.
Although bioplastics are advantageous because they reduce non-renewable energy consumption and GHG emissions, they also negatively affect the environment through land and water consumption, pesticide and fertilizer use, eutrophication and acidification; hence one's preference for either bioplastics or conventional plastics depends on what one considers the most important environmental impact.
Another issue with bioplastics is that some are made from the edible parts of crops. This makes bioplastics compete with food production, because the crops that produce bioplastics can also be used to feed people. These bioplastics are called "1st generation feedstock bioplastics".
2nd generation feedstock bioplastics use non-food crops (cellulosic feedstock) or waste materials from 1st generation feedstock (e.g. waste vegetable oil). Third generation feedstock bioplastics use algae as the feedstock.
Biodegradation of Bioplastics
Biodegradation of any plastic is a process that happens at the solid/liquid interface, whereby enzymes in the liquid phase depolymerize the solid phase. Certain types of bioplastics, as well as conventional plastics containing additives, are able to biodegrade. Bioplastics are able to biodegrade in different environments, hence they are more acceptable than conventional plastics. Biodegradability of bioplastics occurs under various environmental conditions, including soil, aquatic environments and compost. Both the structure and the composition of a biopolymer or bio-composite affect the biodegradation process, so changing the composition and structure might increase biodegradability. Soil and compost are the more efficient environments for biodegradation due to their high microbial diversity. Composting not only biodegrades bioplastics efficiently but also significantly reduces the emission of greenhouse gases. Biodegradability of bioplastics in compost environments can be improved by adding more soluble sugar and increasing the temperature. Soil environments likewise have a high diversity of microorganisms, making it easier for biodegradation of bioplastics to occur; however, bioplastics in soil environments need higher temperatures and a longer time to biodegrade. Some bioplastics biodegrade more efficiently in water bodies and marine systems, but this poses a danger to marine and freshwater ecosystems: biodegradation of bioplastics in water bodies can lead to the death of aquatic organisms and to unhealthy water, and can be counted among the negative environmental impacts of bioplastics.
Bioplastics for construction materials
The concept of bioplastics dates back to the early 20th century. However, significant advancements occurred in the 1980s and 1990s when researchers began developing biodegradable plastics from natural sources. The construction industry started to take notice of bioplastics' potential in the late 2000s, driven by the global push for greener building practices.
In recent years, bioplastics have seen considerable advancements in terms of durability, cost-effectiveness, and performance. Innovations in biopolymer blends and composites have made bioplastics more suitable for construction applications, ranging from insulation to structural components.
Applications in Construction
Insulation
Bioplastics can be used to create effective and eco-friendly insulation materials. Polylactic acid (PLA) and polyhydroxyalkanoates (PHA) are commonly used for this purpose due to their thermal properties and biodegradability.
Flooring
Bioplastic composites, such as those made from PLA and natural fibers, offer durable and sustainable alternatives to traditional flooring materials. They are particularly valued for their low carbon footprint and recyclability.
Panels and Cladding
Bioplastic panels, made from blends of natural fibers and biopolymers, provide an eco-friendly option for wall cladding and partitioning. These materials are lightweight, durable, and can be designed to mimic traditional materials like wood or stone.
Formwork
Bioplastics are increasingly used in formwork for concrete casting. They offer advantages in terms of reusability, weight reduction, and reduced environmental impact compared to conventional materials.
Reinforcement
Bioplastic composites reinforced with natural fibers or other materials can be used in structural applications, offering a sustainable alternative to steel or fiberglass.
Benefits of Bioplastics in Construction
Environmental Impact
Reduced Carbon Footprint
Bioplastics are derived from renewable sources, significantly reducing the carbon footprint of construction materials.
Biodegradability
Many bioplastics are biodegradable, which helps to reduce waste and environmental pollution at the end of their lifecycle.
Energy Efficiency
The production of bioplastics generally requires less energy compared to conventional plastics, further reducing their environmental impact.
Economic Benefits
Resource Efficiency
Using bioplastics can reduce dependence on fossil fuels and contribute to more efficient use of natural resources.
Market Growth
The bioplastics market is expanding, driven by increasing demand for sustainable construction materials. This growth presents new economic opportunities for manufacturers and suppliers.
Challenges and Limitations
Cost
Bioplastics are often more expensive to produce than traditional plastics, which can be a barrier to widespread adoption in the cost-sensitive construction industry. However, ongoing research and technological advancements are expected to reduce costs over time.
Performance
While bioplastics have made significant strides, some types still lag behind traditional materials in terms of strength, durability, and resistance to environmental factors like UV exposure and moisture.
Limited Applications
Currently, bioplastics are suitable for a limited range of applications within construction. Expanding their use to more demanding structural roles will require further development and testing.
Future Prospects
The future of bioplastics in construction looks promising, with continued research and innovation likely to expand their applications and improve their performance. As the construction industry increasingly embraces sustainability, bioplastics are poised to play a critical role in the development of eco-friendly building materials.
Bioplastics offer a sustainable and versatile alternative to traditional construction materials, with significant environmental and economic benefits. While challenges remain, particularly in terms of cost and performance, the ongoing advancements in bioplastic technology hold the potential to transform the construction industry and contribute to a more sustainable future.
Industry and markets
While plastics based on organic materials were manufactured by chemical companies throughout the 20th century, the first company solely focused on bioplastics—Marlborough Biopolymers—was founded in 1983. However, Marlborough and other ventures that followed failed to find commercial success, with the first such company to secure long-term financial success being the Italian company Novamont, founded in 1989.
Bioplastics remain less than one percent of all plastics manufactured worldwide. Most bioplastics do not yet save more carbon emissions than are required to manufacture them. It is estimated that replacing 250 million tons of the plastic manufactured each year with bio-based plastics would require 100 million hectares of land, or 7 percent of the arable land on Earth. And when bioplastics reach the end of their life cycle, those designed to be compostable and marketed as biodegradable are often sent to landfills due to the lack of proper composting facilities or waste sorting, where they then release methane as they break down anaerobically.
COPA (Committee of Agricultural Organisation in the European Union) and COGEGA (General Committee for the Agricultural Cooperation in the European Union) have made an assessment of the potential of bioplastics in different sectors of the European economy:
History and development of bioplastics
1855: First (inferior) version of linoleum produced
1862: At the Great London Exhibition, Alexander Parkes displays Parkesine, the first thermoplastic. Parkesine was made from nitrocellulose and had very good properties, but was extremely flammable. (White 1998)
1897: Still produced today, Galalith is a milk-based bioplastic that was created by German chemists in 1897. Galalith is primarily found in buttons. (Thielen 2014)
1907: Leo Baekeland invented Bakelite, which received the National Historic Chemical Landmark for its non-conductivity and heat-resistant properties. It is used in radio and telephone casings, kitchenware, firearms and many more products. (Pathak, Sneha, Mathew 2014)
1912: Brandenberger invents Cellophane out of wood, cotton, or hemp cellulose. (Thielen 2014)
1920s: Wallace Carothers discovers polylactic acid (PLA). PLA is incredibly expensive to produce and is not mass-produced until 1989. (Whiteclouds 2018)
1925–1926: Polyhydroxybutyrate (PHB), the first bioplastic made from bacteria, is isolated and characterised by French microbiologist Maurice Lemoigne. (Thielen 2014)
1930s: The first bioplastic car was made from soy beans by Henry Ford. (Thielen 2014)
1940-1945: During World War II, an increase in plastic production is seen as it is used in many wartime materials. Due to government funding and oversight the United States production of plastics (in general, not just bioplastics) tripled during 1940-1945 (Rogers 2005). The 1942 U.S. government short film The Tree in a Test Tube illustrates the major role bioplastics played in the World War II victory effort and the American economy of the time.
1950s: Amylomaize (>50% amylose content corn) was successfully bred and commercial bioplastics applications started to be explored. (Liu, Moult, Long, 2009) A decline in bioplastic development is seen due to the cheap oil prices, however the development of synthetic plastics continues.
1970s: The environmental movement spurred more development in bioplastics. (Rogers 2005)
1983: The first bioplastics company, Marlborough Biopolymers, is started, using a bacteria-based bioplastic. (Feder 1985)
1989: PLA is developed further by Dr. Patrick R. Gruber, who figures out how to create it from corn. (Whiteclouds 2018) The leading bioplastic company Novamont is founded; it uses Mater-Bi, a bioplastic, in multiple different applications. (Novamont 2018)
Late 1990s: The development of thermoplastic (TP) starch and BIOPLAST from research and production at the company BIOTEC leads to BIOFLEX film, which is processed on blown film extrusion, flat film extrusion, and injection moulding lines. These three classes of processing have applications as follows: blown films – sacks, bags, trash bags, mulch foils, hygiene products, diaper films, air bubble films, protective clothing, gloves, double rib bags, labels, barrier ribbons; flat films – trays, flower pots, freezer products and packaging, cups, pharmaceutical packaging; injection moulding – disposable cutlery, cans, containers, preformed pieces, CD trays, cemetery articles, golf tees, toys, writing materials. (Lorcks 1998)
1992: It is reported in Science that PHB can be produced by the plant Arabidopsis thaliana. (Poirier, Dennis, Klomparens, Nawrath, Somerville 1992)
2001: Metabolix inc. purchases Monsanto's biopol business (originally Zeneca) which uses plants to produce bioplastics. (Barber and Fisher 2001)
2001: Nick Tucker uses elephant grass as a bioplastic base to make plastic car parts. (Tucker 2001)
2005: Cargill and Dow Chemical's joint venture is rebranded as NatureWorks and becomes the leading PLA producer. (Pennisi 2016)
2007: Metabolix inc. market tests its first 100% biodegradable plastic called Mirel, made from corn sugar fermentation and genetically engineered bacteria. (Digregorio 2009)
2012: A bioplastic is developed from seaweed proving to be one of the most environmentally friendly bioplastics based on research published in the journal of pharmacy research. (Rajendran, Puppala, Sneha, Angeeleena, Rajam 2012)
2013: A patent is filed for a bioplastic derived from blood and a crosslinking agent such as sugars or proteins (iridoid derivatives, diimidates, diones, carbodiimides, acrylamides, dimethylsuberimidates, aldehydes, Factor XIII, dihomo bifunctional NHS esters, carbonyldiimide, proanthocyanidin, reuterin). This invention can be applied by using the bioplastic as tissue, cartilage, tendons, ligaments, or bones, and in stem cell delivery. (Campbell, Burgess, Weiss, Smith 2013)
2014: A study published in 2014 finds that blending vegetable waste (parsley and spinach stems, cocoa husks, rice hulls, etc.) with TFA solutions of pure cellulose creates a bioplastic. (Bayer, Guzman-Puyol, Heredia-Guerrero, Ceseracciu, Pignatelli, Ruffilli, Cingolani, and Athanassiou 2014)
2016: An experiment finds that a car bumper that passes regulation can be made from nano-cellulose based bioplastic biomaterials using banana peels. (Hossain, Ibrahim, Aleissa 2016)
2017: A new proposal for bioplastics made from Lignocellulosics resources (dry plant matter). (Brodin, Malin, Vallejos, Opedal, Area, Chinga-Carrasco 2017)
2018: Many developments occur including Ikea starting industrial production of bioplastics furniture (Barret 2018), Project Effective focusing on replacing nylon with bio-nylon (Barret 2018), and the first packaging made from fruit (Barret 2018).
2019: Five different types of chitin nanomaterials were extracted and synthesized by the Korea Research Institute of Chemical Technology, which verified their characteristics and antibacterial effects. When buried underground, 100% biodegradation was achieved within six months.
*This is not a comprehensive list. These inventions show the versatility of bioplastics and important breakthroughs. New applications and bioplastics inventions continue to occur.
Testing procedures
Industrial compostability – EN 13432, ASTM D6400
The EN 13432 industrial standard must be met in order to claim that a plastic product is compostable in the European marketplace. In summary, it requires multiple tests and sets pass/fail criteria, including disintegration (physical and visual break down) of the finished item within 12 weeks, biodegradation (conversion of organic carbon into CO2) of polymeric ingredients within 180 days, plant toxicity, and heavy metals. The ASTM 6400 standard is the regulatory framework for the United States and has similar requirements.
Many starch-based plastics, PLA-based plastics and certain aliphatic-aromatic co-polyester compounds, such as succinates and adipates, have obtained these certificates. Additive-based bioplastics sold as photodegradable or Oxo Biodegradable do not comply with these standards in their current form.
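The biodegradation criterion in these tests is respirometric: the CO2 evolved by the test material is compared with the theoretical maximum obtainable by fully oxidizing its carbon. The Python sketch below shows the underlying stoichiometry (every 12 g of carbon can yield at most 44 g of CO2); the function names and the example figures are illustrative assumptions, not values taken from the standards.

def theoretical_co2_g(sample_mass_g, carbon_mass_fraction):
    # Complete oxidation: each 12 g of carbon can yield at most 44 g of CO2.
    return sample_mass_g * carbon_mass_fraction * 44.0 / 12.0

def percent_biodegradation(co2_test_g, co2_blank_g, sample_mass_g, carbon_mass_fraction):
    # Net CO2 attributed to the sample (test vessel minus blank vessel),
    # expressed as a percentage of the theoretical maximum.
    return 100.0 * (co2_test_g - co2_blank_g) / theoretical_co2_g(sample_mass_g, carbon_mass_fraction)

# PLA's repeat unit (C3H4O2, 72 g/mol) is 50% carbon by mass, so a 15 g
# sample could yield at most 27.5 g of CO2; 25 g net CO2 is about 90.9%.
print(percent_biodegradation(26.0, 1.0, 15.0, 0.5))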
Compostability – ASTM D6002
The ASTM D 6002 method for determining the compostability of a plastic defined the word compostable as follows:
that which is capable of undergoing biological decomposition in a compost site such that the material is not visually distinguishable and breaks down into carbon dioxide, water, inorganic compounds and biomass at a rate consistent with known compostable materials.
This definition drew much criticism because, contrary to the way the word is traditionally defined, it completely divorces the process of "composting" from the necessity of it leading to humus/compost as the end product. The only criterion this standard describes is that a compostable plastic must appear to break down as fast as something else one has already established to be compostable under the traditional definition.
Withdrawal of ASTM D 6002
In January 2011, the ASTM withdrew standard ASTM D 6002, which had provided plastic manufacturers with the legal credibility to label a plastic as compostable. Its description is as follows:
This guide covered suggested criteria, procedures, and a general approach to establish the compostability of environmentally degradable plastics.
The ASTM has yet to replace this standard.
Biobased – ASTM D6866
The ASTM D6866 method has been developed to certify the biologically derived content of bioplastics. Cosmic rays colliding with the atmosphere mean that some of the carbon is the radioactive isotope carbon-14. CO2 from the atmosphere is used by plants in photosynthesis, so new plant material will contain both carbon-14 and carbon-12. Under the right conditions, and over geological timescales, the remains of living organisms can be transformed into fossil fuels. After ~100,000 years all the carbon-14 present in the original organic material will have undergone radioactive decay leaving only carbon-12. A product made from biomass will have a relatively high level of carbon-14, while a product made from petrochemicals will have no carbon-14. The percentage of renewable carbon in a material (solid or liquid) can be measured with an accelerator mass spectrometer.
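Two facts drive the method: fossil carbon has essentially no carbon-14 left, and a sample's biobased content can be read off as the ratio of its carbon-14 activity to that of a fully modern reference. The Python sketch below applies the decay law with carbon-14's half-life of roughly 5,730 years; the function names and the plain ratio (which ignores the reference-correction factors used in practice) are illustrative assumptions, not part of the ASTM method.

HALF_LIFE_C14 = 5730.0  # years, approximate half-life of carbon-14

def remaining_c14_fraction(age_years):
    # Exponential decay: half of the remaining carbon-14 is lost per half-life.
    return 2.0 ** (-age_years / HALF_LIFE_C14)

def biobased_carbon_percent(sample_activity, modern_reference_activity):
    # Biobased content as the ratio of the sample's carbon-14 activity
    # to that of a fully modern (100% biomass) reference material.
    return 100.0 * sample_activity / modern_reference_activity

print(remaining_c14_fraction(100_000))       # ~5.6e-06: fossil carbon is C14-free
print(biobased_carbon_percent(30.0, 100.0))  # 30.0: a 30/100 ratio -> 30% biobased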
There is an important difference between biodegradability and biobased content. A bioplastic such as high-density polyethylene (HDPE) can be 100% biobased (i.e. contain 100% renewable carbon), yet be non-biodegradable. Such bioplastics nonetheless play an important role in greenhouse gas abatement, particularly when they are combusted for energy production. The biobased component of these bioplastics is considered carbon-neutral since it originates from biomass.
Anaerobic biodegradability – ASTM D5511-02 and ASTM D5526
The ASTM D5511-12 and ASTM D5526-12 are testing methods that comply with international standards such as the ISO DIS 15985 for the biodegradability of plastic.
See also
Alkane
Biofuel
Biopolymer
BioSphere Plastic
Organisms breaking down plastic
Celluloid
Cutlery
Edible tableware
Food vs. fuel
Galalith
Health concerns of certain non-biodegradable (fossil fuel-based) plastic food packaging
Plastic bans
Organic photovoltaics
Sustainable packaging
References
Further reading
Plastics Without Petroleum: History and Politics of 'Green' Plastics in the United States
Plastics and the environment
"The Social construction of Bakelite: Toward a theory of invention" in The Social Construction of Technological Systems, pp. 155–182
External links
Assessment of China's Market for Biodegradable Plastics , May 2017, GCiS China Strategic Research
Biodegradable waste management
Polymer chemistry | Bioplastic | Chemistry,Materials_science,Engineering | 7,681 |
25,284,384 | https://en.wikipedia.org/wiki/Workspace%20virtualization | Workspace virtualization is a way of distributing applications to client computers using application virtualization; however, it also bundles several applications together into one complete workspace.
Overview
Workspace virtualization is an approach that encapsulates and isolates an entire computing workspace. At a minimum, the workspace consists of everything above the operating system kernel – applications, data, settings, and any non-privileged operating system subsystems required to provide a functional desktop computing environment. By doing this, applications within the workspace can interact with each other, enabling them to do some of the things users are accustomed to and which are missing in application virtualization, such as embedding a Microsoft Excel worksheet into a Microsoft Word document. Further, the workspace contains application settings and user data, enabling the user to move to a different operating system or to a different computer and still preserve applications, settings and data in one complete working unit. For deeper workspace virtualization, the virtualization engine virtualizes privileged code modules and full operating system subsystems through a kernel-mode Workspace Virtualization Engine (WVE).
Advantages and disadvantages
Workspace Virtualization vs Application Virtualization
Workspace virtualization enables individual applications to interact with each other, and keeps user settings/configuration and user data within the workspace; application virtualization does not. On the other hand, application virtualization shields independent applications from each other better, should one of them prove to be hostile (i.e., contain a virus of some sort).
Workspace Virtualization vs Desktop Virtualization
Workspace Virtualization runs directly on the client computer hardware whereas Desktop Virtualization in many cases runs on a remote computer somewhere over a corporate LAN/WAN or over the Internet (called Hosted Desktop Virtualization).
In other cases, desktop virtualization can be run on the client directly through a virtual machine environment such as VMware Workstation, VirtualBox or Thincast Workstation. The applications in a desktop virtualization environment run at a different location, on a remote machine or in a local virtual machine, and the technology simply presents the graphics interface locally using technologies such as Remote Desktop. As a result, the graphics system is much slower, and access to local data services such as USB or FireWire-connected cameras, scanners, and hard drives is much slower. Workspace virtualization, on the other hand, offers the advantage that the amount of time required to move from one client computer to another is small, because applications, settings and data are stored locally on the client. When it comes to system resources, workspace virtualization requires fewer resources than desktop virtualization because it does not contain a complete copy of the operating system running in a virtualization environment.
See also
Application virtualization
Desktop virtualization
Platform virtualization
References
External links
Shared Office Spaces
Centralized computing | Workspace virtualization | Technology | 568 |
953,440 | https://en.wikipedia.org/wiki/Radiation%20exposure%20%28disambiguation%29 | Radiation exposure may refer to:
Exposure (radiation), caused by ionizing photons, namely X-rays and gamma rays; or ionizing particles, usually alpha particles, neutrons, protons, or electrons
Humans being subjected to an ionizing radiation hazard, either by irradiation or contamination
In modern radiology, and in scientific papers from the early 20th century, radiation exposure may refer to kerma (physics)
Exposure (photography), photographic film exposure to ionizing radiation
Any material being subjected to even everyday levels of any type of radiation, such as heat or light
Radiation Exposure Compensation Act, a federal statute of the United States providing for the monetary compensation of people who contracted cancer and a number of other specified diseases as a direct result of their exposure to radiation under certain circumstances
Radiation Exposure Monitoring, a system for monitoring exposure
Radiobiology
Radiation effects | Radiation exposure (disambiguation) | Physics,Chemistry,Materials_science,Engineering,Biology | 172 |
2,082,759 | https://en.wikipedia.org/wiki/Beck%27s%20theorem%20%28geometry%29 | In discrete geometry, Beck's theorem is any of several different results, two of which are given below. Both appeared, alongside several other important theorems, in a well-known paper by József Beck. The two results described below primarily concern lower bounds on the number of lines determined by a set of points in the plane. (Any line containing at least two points of point set is said to be determined by that point set.)
Erdős–Beck theorem
The Erdős–Beck theorem is a variation of a classical result by L. M. Kelly and W. O. J. Moser involving configurations of n points of which at most n − k are collinear, for some 0 < k < O(√n). They showed that if n is sufficiently large, relative to k, then the configuration spans at least kn − (1/2)(3k + 2)(k − 1) lines.
Elekes and Csaba Toth noted that the Erdős–Beck theorem does not easily extend to higher dimensions. Take for example a set of 2n points in R3 all lying on two skew lines. Assume that these two lines are each incident to n points. Such a configuration of points spans only 2n planes. Thus, a trivial extension to the hypothesis for point sets in Rd is not sufficient to obtain the desired result.
This result was first conjectured by Erdős, and proven by Beck (see Theorem 5.2 in Beck's paper).
Statement
Let S be a set of n points in the plane. If no more than n − k points lie on any line for some 0 ≤ k < n − 2, then there exist Ω(nk) lines determined by the points of S.
Proof
Beck's theorem
Beck's theorem says that finite collections of points in the plane fall into one of two extremes; one where a large fraction of points lie on a single line, and one where a large number of lines are needed to connect all the points.
Although not mentioned in Beck's paper, this result is implied by the Erdős–Beck theorem.
Statement
The theorem asserts the existence of positive constants C, K such that given any n points in the plane, at least one of the following statements is true:
There is a line which contains at least n/C of the points.
There exist at least n^2/K lines, each of which contains at least two of the points.
In Beck's original argument, C is 100 and K is an unspecified constant; it is not known what the optimal values of C and K are.
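The dichotomy can be checked by brute force on small examples. The following Python sketch (an illustration of the statement, not Beck's argument) counts the distinct lines determined by the 25 points of a 5 × 5 integer grid and the largest number of them on any single line; in a k × k grid the largest collinear subset has only about √n points, so as the grid grows the second alternative must hold, with the number of connecting lines growing on the order of n^2.

from itertools import combinations
from functools import reduce
from math import gcd

def canonical_line(p, q):
    # Integer line a*x + b*y + c = 0 through p and q, reduced to lowest terms
    # and sign-normalized so that each geometric line gets exactly one key.
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    g = reduce(gcd, (abs(a), abs(b), abs(c))) or 1
    a, b, c = a // g, b // g, c // g
    if a < 0 or (a == 0 and b < 0):
        a, b, c = -a, -b, -c
    return a, b, c

points = [(x, y) for x in range(5) for y in range(5)]  # n = 25 grid points
lines = {canonical_line(p, q) for p, q in combinations(points, 2)}
max_collinear = max(sum(a * x + b * y + c == 0 for x, y in points)
                    for a, b, c in lines)
print(len(lines), max_collinear)  # many distinct lines; at most 5 points per line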
Proof
A proof of Beck's theorem can be given as follows. Consider a set P of n points in the plane. Let j be a positive integer. Let us say that a pair of points A, B in the set P is j-connected if the line connecting A and B contains between 2^j and 2^(j+1) points of P (including A and B).
From the Szemerédi–Trotter theorem, the number of such lines is O(n^2/2^(3j) + n/2^j), as follows: Consider the set P of n points, and the set L of all those lines spanned by pairs of points of P that contain at least 2^j points of P. Since no two points can lie on two distinct lines, the Szemerédi–Trotter theorem implies that the number of incidences between P and L is at most O(n^2/2^(2j) + n). All the lines connecting j-connected points also lie in L, and each contributes at least 2^j incidences. Therefore, the total number of such lines is O(n^2/2^(3j) + n/2^j).
Since each such line connects together O(2^(2j)) pairs of points, we thus see that at most O(n^2/2^j + n·2^j) pairs of points can be j-connected.
Now, let C be a large constant. By summing the geometric series, we see that the number of pairs of points which are j-connected for some j satisfying C ≤ 2^j ≤ n/C is at most O(n^2/C).
On the other hand, the total number of pairs is n(n − 1)/2. Thus if we choose C to be large enough, we can find at least n^2/4 pairs (for instance) which are not j-connected for any j with C ≤ 2^j ≤ n/C. The lines that connect these pairs either pass through fewer than 2C points, or pass through more than n/C points. If the latter case holds for even one of these pairs, then we have the first conclusion of Beck's theorem. Thus we may assume that all of the pairs are connected by lines which pass through fewer than 2C points. But each such line can connect at most C(2C − 1) pairs of points. Thus there must be at least n^2/(4C(2C − 1)) lines connecting at least two points, and the claim follows by taking K = 4C(2C − 1).
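For reference, the incidence bound invoked above can be stated compactly. The display below is a standard formulation of the Szemerédi–Trotter theorem for n points and m lines, together with the corollary on rich lines used in the proof; it is supplied here for convenience and is not quoted from Beck's paper.

\[
  I(P, L) = O\big((mn)^{2/3} + m + n\big),
  \qquad
  \#\{\ell : |\ell \cap P| \ge k\} = O\!\left(\frac{n^2}{k^3} + \frac{n}{k}\right).
\]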
References
Euclidean plane geometry
Theorems in discrete geometry
Articles containing proofs | Beck's theorem (geometry) | Mathematics | 938 |
71,869,796 | https://en.wikipedia.org/wiki/Sanggai%20Yumpham | The Sanggāi Yumpham was the citadel, a fortified royal residence within the Kangla Fort, Imphal. It is preserved as an archaeological site as well as a tourist attraction.
History
The construction of the citadel of the Kangla Fort in Imphal started in 1611, during the reign of King Khagemba.
The Kangla Fort was destroyed and abandoned multiple times during Burmese invasions, especially during the Chahi-Taret Khuntakpa, or Seven Years' Devastation (1819-1826). Later, the citadel was re-constructed during the reign of Chandrakirti Singh in 1873.
As a result of the Anglo-Manipur War of 1891, General Maxwell annexed the Kangla on 27 April 1891, and the citadel was demolished at the same time.
Features
The citadel is inside the fort's inner brick wall. It measures around , and its walls are high. There is a guard pillar at each of the four corners, facing the four directions.
The southern passageway door leads to the Govindajee Temple.
It is built of bricks. It houses many holy sites, including the coronation site of Pakhangba. It has three main entrance gates: two on the western side, one facing the Royal Coronation Hall and one facing the darbar hall, and one on the southern side that leads to a passageway to the Shree Govindajee Temple.
It is surrounded by the five walls of the Kangla Fort, of which the innermost wall is the only one still standing today. There is an octagonal watchtower at every corner of the wall, serving as a sentry post. Its entire perimeter had military emplacements for around 500 defending soldiers.
There was an old bridge built over the Imphal River from the passageway between the Sanggai Yumpham and the Govindajee Temple inside the Kangla; it deteriorated in 1891. During the 28th meeting of the Kangla Fort Board, on 15 December 2018, Nongthombam Biren Singh, the Chief Minister of Manipur, decided that a new bridge would be constructed with features similar to those of the old bridge.
See also
Kangla Sanathong
Statue of Meidingu Nara Singh
Hijagang
Pakhangba Temple, Kangla
Manung Kangjeibung
Museums in Kangla
Kangla Nongpok Torban
Notes
References
External links
Meitei architecture
Royal residences in India
Tourist attractions in Manipur
Buildings and structures in Imphal
Monuments and memorials to Meitei royalty | Sanggai Yumpham | Engineering | 501 |
1,725,063 | https://en.wikipedia.org/wiki/George%20Allman%20%28natural%20historian%29 | George James Allman FRS FRSE (181224 November 1898) was an Irish ecologist, botanist and zoologist who served as Emeritus Professor of Natural History at Edinburgh University in Scotland.
Life
Allman was born in Cork, Ireland, the son of James C. Allman of Bandon, and received his early education at the Royal Academical Institution, Belfast. For some time he studied for the Irish Bar, but ultimately gave up law in favour of natural science. In 1843, he graduated in medicine at Trinity College, Dublin, and in the following year was appointed professor of botany in that university, succeeding the botanist William Allman (1776–1846), the father of George Johnston Allman; both were distant relations of George.
This position he held for about twelve years until he moved to Edinburgh as Regius Professor of natural history. There he remained until 1870, when considerations of health induced him to resign his professorship and retire to Dorset, where he devoted himself to his favourite pastime of horticulture.
The scientific papers which came from his pen are very numerous. His most important work was upon the gymnoblast group of the hydrozoa, on which he published in 1871-1872, through the Ray Society, an exhaustive monograph, based largely on his own researches and illustrated with drawings of remarkable excellence from his own hand. Biological science is also indebted to him for several convenient terms which have come into daily use, e.g. endoderm and ectoderm for the two cellular layers of the body-wall in Coelenterata. He contributed articles to the Irish Naturalist.
He became a fellow of the Royal Society in 1854, and received a Royal medal in 1873. He received the Cunningham Medal of the Royal Irish Academy in 1878.
In 1859–60, he was President of the Botanical Society of Edinburgh; for several years (1874–1881) he was President of the Linnean Society; and in 1879 he presided over the Sheffield meeting of the British Association.
He died in Ardmore, Parkstone in Dorset and is buried in Poole Cemetery.
Family
Allman married Hannah Louisa Shaen. They had no children. George Allman's family ran Allman's Bandon Distillery, his brother, the Liberal MP Richard Allman, was a partner in the Distillery.
Select bibliography
Allman G. J. 1843. On a new genus of terrestrial gasteropod. The Athenaeum 1843 (829): 851. London.
References
External links
1812 births
1898 deaths
Academics of the University of Edinburgh
Alumni of Trinity College Dublin
Ecologists
Fellows of the Royal Society
Fellows of the Royal Society of Edinburgh
19th-century Irish botanists
Irish naturalists
Irish Unitarians
19th-century Irish zoologists
Scientists from County Cork
Presidents of the British Science Association
Presidents of the Linnean Society of London
Royal Medal winners | George Allman (natural historian) | Environmental_science | 577 |
1,914,268 | https://en.wikipedia.org/wiki/Olga%20Ladyzhenskaya | Olga Aleksandrovna Ladyzhenskaya (; 7 March 1922 – 12 January 2004) was a Russian mathematician who worked on partial differential equations, fluid dynamics, and the finite-difference method for the Navier–Stokes equations. She received the Lomonosov Gold Medal in 2002. She authored more than two hundred scientific publications, including six monographs.
Biography
Ladyzhenskaya was born and grew up in the small town of Kologriv, the daughter of a mathematics teacher who is credited with her early inspiration and love of mathematics. The artist Gennady Ladyzhensky was her grandfather's brother, who was also born in this town. In 1937 her father, Aleksandr Ivanovich Ladýzhenski, was arrested by the NKVD and executed as an "enemy of the people".
Ladyzhenskaya completed high school in 1939, unlike her older sisters who weren't permitted to do the same. She was not admitted to the Leningrad State University due to her father's status and attended a pedagogical institute. After the German invasion of June 1941, she taught school in Kologriv. She was eventually admitted to Moscow State University in 1943 and graduated in 1947.
She began teaching in the Physics department of the university in 1950 and defended her PhD there, in 1951, under Sergei Sobolev and Vladimir Smirnov. She received a second doctorate from the Moscow State University in 1953. In 1954, she joined the mathematical physics laboratory of the Steklov Institute and became its head in 1961.
Ladyzhenskaya had a love of arts and storytelling, counting writer Aleksandr Solzhenitsyn and poet Anna Akhmatova among her friends. Like Solzhenitsyn she was religious. She was once a member of the city council, and engaged in philanthropic activities, repeatedly risking her personal safety and career to aid people opposed to the Soviet regime. Ladyzhenskaya suffered from various eye problems in her later years and relied on special pencils to do her work.
Two days before a trip to Florida, she died in her sleep in Russia on 12 January 2004.
Mathematical accomplishments
Ladyzhenskaya is known for her work on partial differential equations (especially Hilbert's nineteenth problem) and fluid dynamics. She provided the first rigorous proofs of the convergence of a finite difference method for the Navier–Stokes equations.
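Her name is also attached to the Ladyzhenskaya inequality, a key estimate behind her global solvability results for the two-dimensional Navier–Stokes equations. The display below gives its standard planar form for sufficiently smooth, decaying u; the unspecified constant C is generic.

\[
  \|u\|_{L^4(\mathbb{R}^2)}^2 \le C \, \|u\|_{L^2(\mathbb{R}^2)} \, \|\nabla u\|_{L^2(\mathbb{R}^2)}
\]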
She analyzed the regularity of parabolic equations, with Vsevolod A. Solonnikov and her student Nina Ural'tseva, and the regularity of quasilinear elliptic equations.
She wrote a student thesis under Ivan Petrovsky and was on the shortlist for the 1958 Fields Medal, ultimately awarded to Klaus Roth and René Thom.
Publications
Awards and recognitions
P. L. Chebyshev Prize (with Nina Nikolayevna Ural'tseva) (1966) for the work "Linear and quasilinear equations of elliptic type"
USSR State Prize (1969)
Member of Lincei National Academy in Rome (1989)
Member of the Russian Academy of Sciences (1990)
Kovalevskaya Prize (1992) for the series of works "Attractors for Semigroups and Evolution Equations"
ICM Emmy Noether Lecture (1994)
John von Neumann Lecture (1998)
Order of Friendship (1999)
Lomonosov Gold Medal (2002) for outstanding achievements in the field of the theory of partial differential equations and mathematical physics
On 7 March 2019, the 97th anniversary of Ladyzhenskaya's birth, the search engine Google released a Google Doodle commemorating her. The accompanying comment read, "Today's Doodle celebrates Olga Ladyzhenskaya, a Russian mathematician who triumphed over personal tragedy and obstacles to become one of the most influential thinkers of her generation."
In 2022, the "Ladyzhenskaya Prize in Mathematical Physics" is created in her honor. It has been awarded for the first time on 2 July 2022 to Svetlana Jitomirskaya in a joint session at (WM)², World Meeting for Women in Mathematics and at the Probability and Mathematical Physics conference OAL Prize Winner 2022.
Notes
See also
Projection method (fluid dynamics)
References
Some recollections of the authors about Olga Ladyzhenskaya and Olga Oleinik.
A biography in the Biographies of Women Mathematicians, Agnes Scott College.
Some recollections of the author about Olga Ladyzhenskaya and Olga Oleinik.
External links
The schedule of a workshop in honour of Olga A. Ladyzhenskaya.
The proceedings of a workshop in honour of Olga Ladyzhenskaya and Olga Oleinik.
Memorial page at the Saint Petersburg Mathematical Pantheon.
1922 births
2004 deaths
People from Kologrivsky District
Writers from Kostroma Oblast
20th-century Russian mathematicians
20th-century women scientists
Mathematical analysts
Full Members of the USSR Academy of Sciences
Full Members of the Russian Academy of Sciences
Fluid dynamicists
PDE theorists
Recipients of the Lomonosov Gold Medal
20th-century women mathematicians
Moscow State University alumni
Saint Petersburg State University alumni
Academic staff of the Steklov Institute of Mathematics
Russian Christians
Soviet mathematicians
Russian scientists | Olga Ladyzhenskaya | Chemistry,Mathematics,Technology | 1,063 |
40,550,153 | https://en.wikipedia.org/wiki/DriveSavers | DriveSavers, Inc. is a computer hardware data recovery company located in Novato, California. It was founded by former CEO Jay Hagan and former company President Scott Gaidano in 1985.
History
In 1985, former Jasmine Technologies executives Jay Hagan and Scott Gaidano founded DriveSavers, operating from Gaidano’s condo with $1,400. DriveSavers originally offered both hard drive repair and data recovery services, but the company dropped its drive repair services within its first eight months.
In 1992, DriveSavers signed an agreement with SuperMac Technology to assume technical support and warranty obligations for SuperMac Mass Storage Products.
The company merged with Data Recovery Disk Repair in 1994 and retained the DriveSavers name. In 2008, DriveSavers invested two million dollars to build a series of five ISO-certified cleanrooms to disassemble and rebuild damaged hard drives.
From 2004-2009, the company grew from 35 to 85 employees.
DriveSavers also works with "the more secretive" branches of government and with celebrities. In order to provide comfort and assistance to clients in a data loss situation, DriveSavers has an individual "data crisis counselor" on staff, who has experience working for a suicide hotline.
DriveSavers is the only recovery firm licensed with every major hard-drive manufacturer, so its work on a drive does not void the warranty. It can recover data from hard disk drives, solid state drives, smartphones, servers, digital camera media and iOS devices, including T2 and M1-powered Macs with embedded SSD storage. Even with cloud backup, personal data loss is still possible, but such data can often be recovered. The company recovered data from old floppy disks belonging to the late Star Trek creator Gene Roddenberry, potentially containing lost episodes of the franchise.
DriveSavers is certified HIPAA-compliant, undergoes annual SOC2 Type II reviews and has encryption training certificates from GuardianEdge, PGP, PointSec and Utimaco.
Security certifications and practices
DriveSavers' facility is made up of cleanrooms. The cleanrooms come in different ratings depending on the application, ranging from federal-standard Class 100,000 to Class 100. The class rating is the maximum number of airborne particles 0.5 microns or larger allowed per cubic foot of air.
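These federal-standard class numbers map onto the newer ISO 14644-1 classes. As a sketch (assuming the ISO concentration limit C_N = 10^N × (0.1/D)^2.08 particles per cubic metre for particle size D in microns), the Python below recovers the familiar correspondences Class 100 ≈ ISO 5 and Class 100,000 ≈ ISO 8; the function name is illustrative.

from math import log10

CUBIC_FEET_PER_M3 = 35.31  # one cubic metre is about 35.31 cubic feet

def iso_class_from_fed(fed_class, particle_um=0.5):
    # FED-STD-209E class = particles >= 0.5 um per cubic foot; convert the
    # count to per m^3, then invert the ISO limit C_N = 10**N * (0.1/D)**2.08.
    per_m3 = fed_class * CUBIC_FEET_PER_M3
    return log10(per_m3 / (0.1 / particle_um) ** 2.08)

print(round(iso_class_from_fed(100), 1))      # ~5.0: Class 100 is about ISO 5
print(round(iso_class_from_fed(100_000), 1))  # ~8.0: Class 100,000 is about ISO 8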
DriveSavers employees have to go through background checks because of contracts with state and federal government agencies. The company also has to meet the same data-security standards as its clients, such as HIPAA certification to work with hospitals and GLBA certification to work with financial institutions.
Awards
Storage Visions, Visionary Company Award, 2014
Flash Memory Summit, Most Innovative Flash Memory Consumer Application Award, 2018
Better Business Bureau, A+ rating
See also
List of data recovery companies
References
External links
Official site
Computer forensics
Data recovery companies
American companies established in 1985
Novato, California
Companies based in Marin County, California | DriveSavers | Engineering | 594 |
64,284,153 | https://en.wikipedia.org/wiki/Wildlife%20of%20North%20Macedonia | Over 22,500 species of wildlife have been recorded in North Macedonia. Over 10,000 of these are insects, which include 3,000 beetle species and large numbers of Lepidoptera, flies, and Hymenoptera. Aside from insects, other large arthropod groups include Chelicerata (mostly spiders) and crustaceans. Among vertebrates, more than 300 species of birds have been recorded, although not all nest in the country. There are over 80 species each of fish and mammals, 32 reptiles, and 14 amphibians.
Over 4,200 plants have been identified, of which more than 3,700 are vascular plants. The majority of existing forest is deciduous, and the amount of forest has expanded slightly in recent years. Over 2,000 species of algae have been found, most of them within lakes. There are also 2,000 species of identified fungi, with 90% of these being Basidiomycota, and at least 450 lichens.
The country covers , with much of the terrain being mountainous. Significant variation in topography has contributed to large variation within local climates, which together with the presence of ice age refugia has resulted in significant diversity and endemism. Although North Macedonia is landlocked, it has numerous rivers and lakes supporting aquatic wildlife. The forest and caves in the west of the country support a number of unique and endangered animals. The three largest lakes, Lake Ohrid, Lake Prespa, and Lake Dojran, are further areas of relatively high diversity.
There is substantial interaction between people and wildlife, especially in rural areas. Various plant and fungi species are foraged for local use and for export, and many animal species are hunted. Other threats to wildlife include land use change, pollution, and climate change. A number of animal and plant species have become locally extinct. A variety of laws protect some species and habitats and regulate their use, and action plans have been developed by the government, covering topics including the environment, biodiversity, and water. Four national parks have been established, and North Macedonia is party to a number of European and International conventions that relate to wildlife and the environment.
Environment
North Macedonia spans in the middle of the Balkan Peninsula. The landlocked country is centred around the Vardar river valley, with the national borders being marked by mountain ranges. Its diverse landscape creates a variety of local climates, which can be divided into eight biomes. The varied habitats of the country, including both natural and man-made environments, can be divided into 28 broad types, which are further subdivided into 120 at the third level of EUNIS classification. These are spread across 38 classes of landscapes.
The overall climate is temperate, with the area undergoing four seasons throughout the year. However, the varied geography means that there is significant local climatic variety. Precipitation also varies significantly. Up to of snow can fall on some mountains.
North Macedonia's 35 rivers fall into three river basins, one each flowing into the Aegean Sea, the Adriatic Sea, and the Black Sea. The majority of the country falls within the Aegean Basin, which contains Lake Dojran; most of this basin is drained by the Vardar river and its tributaries, and the remainder by the Strumica river and its tributaries. The main river of the Adriatic basin is the Black Drim, which begins at Lake Ohrid. Lake Prespa also falls within this basin. Under some accounting there are also three minor drainage basins, which combined cover just over 1% of North Macedonia. Altogether, lakes cover about 2% of the country. The three large lakes mentioned above are tectonic lakes. Lake Ohrid, the largest and deepest of the three, is thought to be 2–3 million years old, while Lake Dojran is the smallest and shallowest. These three lakes are split between North Macedonia and neighbouring countries. Around 43 glacial lakes exist in the mountains. There are also 111 artificial lakes, including large reservoirs.
A complex geological history means the country's surface rocks originate from many periods of Earth's history. There are 13 mountains exceeding 2,000 m in height, with the highest peak, Golem Korab, at 2,764 m above sea level. Around 50% of the country is considered mountainous, containing many gorges and valleys. Caves, especially in the limestone mountains of the west, support a variety of cave fauna and contain their own lakes. Forests cover a substantial part of the country, 90% of which is within state-owned land. Agricultural land includes both actively used land and land that can be quickly brought into cultivation; pastures make up 757,000 ha of this agricultural land.
Plants
At least 3700 species of vascular plants have been identified in North Macedonia, of which 116 are endemic or near-endemic. 3200 of these are flowering plants. Additionally, there are 573 non-vascular plant species, none of which are endemic. Over 400 of these species are mosses, while 100 are liverworts.
The country's wood mass was last estimated in 2008. Forested land expanded by 3.5% in the decade leading up to 2013, reflecting a decrease in annual logging, deliberate efforts to expand forested area, and a reduction in livestock. However, this recovery is not shared by all forest types, with coniferous and mixed forest coverage shrinking. As of 2013, 58% of the forest was deciduous, 30% was mixed, 7% was coniferous, and 5% was considered degraded. Some deliberate forestation has been undertaken with the invasive species Robinia pseudoacacia and Ailanthus altissima.
Broadleaf tree woodlands, dominated by alder, birch, poplar, and willow species, are some of the country's most endangered habitats. Swamp woodlands are dominated by Alnus glutinosa. Beech forests are dominated by oaks and beech species. Oaks, chestnuts, hophornbeams, and Carpinus orientalis dominate deciduous woodlands. Other habitats contain many conifer species.
314 species of flowering plants (11% of all flowering plants in the country) are protected. 14 species of vascular plants assessed by the United Nations Environment Programme are considered at risk of extinction: four are critically endangered, five are endangered, and five are vulnerable. Eight moss species are protected. Only 11 Macedonian species appear in the Bern Convention, whereas five are listed in European Union Habitats Directive appendices.
Plant species that have become locally extinct in North Macedonia include Acorus calamus, Sagittaria sagittifolia, Lysimachia thyrsiflora, Aldrovanda vesiculosa, and Nymphaea alba. Others, such as Jacobaea paludosa, Ranunculus lingua, and Gentiana pneumonanthe, are near locally extinct if not extinct already. Carex elata is also on the brink of extinction.
Fungi
Of the 2000 identified fungi taxa in North Macedonia, none are endemic. Around 1800 belong to Basidiomycota (including 550 within Agaricales and 450 within Aphyllophorales), while 200 species are Ascomycota. In addition, at least 450 species of lichens have been identified. Entomophaga maimaiga, an entomopathogenic fungus, is found on gypsy moths, following its introduction in neighbouring Bulgaria as a way to control the moth.
So far, 122 species have been identified as threatened, and 213 species from the ascomycetes and basidiomycetes have been preliminarily assessed for IUCN Red List status. Of these, 21 were critically endangered, 30 endangered, 71 vulnerable, 40 near threatened, 9 least concern, and 42 data deficient. Ten of the critically endangered species have been found in just a single location.
Animals
Invertebrates
There are 13,493 recorded invertebrate species in North Macedonia. The vast majority of these (90%) are arthropods. Six of the 10 sponges found in the three major lakes are endemic. Eunapius carteri dojranensis is endemic to Lake Dojran. Ochridaspongia rotunda, Ochridospongia interlithonis, Ochridospongilla stankovici, and Spongilla stankovici are endemic to Lake Ohrid. Spongilla prespensis is endemic to Lake Prespa. Of the three Cnidaria species, one is invasive. The 229 flatworms are made up of 158 Neodermata and 71 Turbellaria. The two known Nematomorpha species are Gordius nonmaculatus and Gordius aquaticus. The only known Nemertea species is Prostoma graecense, which was found in Lake Ohrid. Annelids consist of 140 Oligochaeta, 30 leeches, and 5 Branchiobdellida. 53 of the 180 identified annelid species are endemic, all in Lake Ohrid. 38 of these endemic species are oligochaetes, and 11 are Hirudinea.
There are at least 870 nematode species, including 450 forest-dwelling species, 80 plant parasites, and many living in lakes. The Mollusca are mostly represented by Gastropoda, with 301 species; at least 195 of these are land snails, and 107 are aquatic snails. The remaining 19 mollusc species are Bivalvia. Of the 92 molluscs considered endemic, 88 are gastropods and four are bivalves. One gastropod species in Lake Prespa is invasive.
Arthropods
At least 11,800 arthropod species have been identified. Of the 1126 Chelicerata species present, at least 60 are considered endemic. The most numerous order within Chelicerata is that of the spiders, of which there are 767 species (nine of which are endemic). The next largest taxon is the Acari, whose 250 species include 164 Hydrachnidia and four Halacaridae. Within Halacaroidea, the monotypic genus Stygohalacarus, represented by Stygohalacarus scupiensis, is endemic to North Macedonia, as is Copidognathus profundus. The fen raft spider, living in marshes near Mount Belasica, is known to eat fish. The remaining identified Chelicerata species are 54 pseudoscorpions, 50 Opiliones, three scorpions, and two Solifugae (Galeodes elegans and Galeodes graecus). 19 Opiliones are endemic, along with 16 pseudoscorpions.
Insects
The at least 10,081 species of insects make up around 85% of the country's arthropod species. At least 3,145 beetle species have been identified, split between 84 families. The ground beetles make up a large number of these, represented by 573 species. Within the Staphylinoidea superfamily there are 516 species, including representatives of the Staphylinidae (383 species), Cholevinae (57), Hydraenidae (42), Silphidae (14), Ptiliidae (4), and Agyrtidae (1) families. Water beetles have been identified from the Dytiscidae (62), Haliplidae (11), Gyrinidae (6), and Noteridae (2) families. Within Hydrophiloidea, there are 68 Histeridae species and 33 Hydrophilidae species. There are 14 families of Scarabaeoidea represented, which altogether contain 172 species. Within Cucujoidea, there are perhaps over 200 species, of which 116 are Nitidulidae and 33 are Coccinellidae. Within Chrysomeloidea, there are 340 leaf beetles and 176 longhorn beetles. The 88 species of Bostrichiformia are split between four families: Bostrichidae, Dermestidae, Ptinidae, and Nosodendridae. There are 100 species of Elateroidea. Most are click beetles (74), with the rest being soldier beetles (15), fireflies (7), and net-winged beetles (4). There are 44 identified Cleroidea species. 17 each are from Malachiinae and Dasytinae. One Dasytidae species is endemic to the Šar Mountains. The remaining 10 are from the Cleridae family. Other beetles include 290 Curculionidae and 45 darkling beetles.
A total of 2,638 species of Lepidoptera have been identified. These are split between 68 families, with family size ranging from a single species to 528 species (the Noctuidae). 67 species have been provisionally assessed for IUCN Red List status within North Macedonia. One has been identified as endangered, with 15 being vulnerable and 24 being near threatened.
There are over 1,500 species of flies, representing 54 families. The largest family is that of the hoverflies, represented by 262 species. Some flies are considered potential disease vectors, and one, Obolodiplosis robiniae, is invasive.
Hymenoptera is another diverse order, represented by about 1077 species. One relatively well-studied group is the parasitoid wasps. 150 species of Braconidae have been found. The 129 Ichneumonidae species (21 of which fall within Cryptinae) include three endemics: Hadrodactylus tiphae balcanicus, Mesochorus venerandus, and Gelis balcanicus. Eulophidae has 40 species. Non-parasitic wasps include 75 members of the family Vespidae and 37 members of the family Mutillidae. In addition to wasps, there are at least 113 Apidae bee species and 99 ant species.
Within Hemiptera, 776 species of Heteroptera (15 endemic) have been identified. However, local expertise in this taxon is lacking. It has been suggested that the discovered number makes up only 62% of the total species present, taking into account the rate of new discoveries, the diversity of habitats, and studies from neighbouring countries. Knowledge about Homoptera is also very limited, with most studies covering a small number of species from specific locations. The known Homoptera consist of 134 Sternorrhyncha, 15 Cicadomorpha, and 13 Fulgoromorpha.
There are 175 identified Orthoptera species. One, Bradyporus macrogaster macrogaster, is critically endangered. Four are considered endangered, while another eight are considered vulnerable (another 10 are data deficient). Some species are known only from a single location, or even a single record. There are perhaps 106 caddisfly species, although estimates vary. The 68 species of identified Neuropterida consist of 59 Neuroptera and nine Raphidioptera. The 13 known flea species include parasites of the Balkan snow vole. The country has 68 identified mayflies, including seven Rhithrogena species, all found in the Pena river. Known Dictyoptera include 12 cockroaches, four mantises, and two termites. Of the 14 lice species known, two, Enderleinellus ferrisi and Schizophthirus gliris, are parasites of European ground squirrels and edible dormice respectively. Haploembia solieri, found in Valandovo, is the only recorded Embioptera species. Other insect species include 97 Plecoptera (10 of which are endemic), 64 Odonata, 49 Psocoptera, 42 Thysanoptera, five Dermaptera, three Archaeognatha, two Mecoptera, one Strepsiptera (Hylecthrus rubi), and one Zygentoma. There is no data for Megaloptera within the country.
Crustaceans
Of the 490 crustacean species found, 133 are considered endemic. Within the class Branchiopoda, the most numerous order is Diplostraca, which has 86 species. Alona smirnovi is endemic to Lake Ohrid. Of the other orders, there are six species of Anostraca, including Chirocephalus pelagonicus which is endemic to wetlands in the Pelagonia plain. The two species of order Notostraca (which contains only the Triopsidae family) present are Lepidurus apus and Triops cancriformis.
All 172 representatives of the ostracods are from the order Podocopida. Numerous genera within this order have high levels of endemism.
Of the class Maxillopoda, 146 species are copepods: 60 Cyclopoida, 54 Harpacticoida, 30 Calanoida, and two Poecilostomatoida. Both Cyclopoida and Harpacticoida contain several endemic genera. 12 of these copepods are known to be parasitic, found infecting fish in the major lakes. Other Maxillopoda include three Branchiura, all from the genus Argulus within the family Argulidae. One, Argulus foliaceus, is a parasite of at least three fish species. There is one known representative of the Pentastomida: Linguatula serrata, which is a human parasite.
There are representatives of four orders of Malacostraca. Isopoda is represented by 50 species, of which about a third are endemic. Amphipoda is represented by 47 species, with five genera (Bogidiella, Gammarus, Niphargus, Hadzia, and Ingolfiella) containing many endemic species. Decapoda is represented by five species: two Astacidae (Austropotamobius torrentium and Astacus astacus), two Potamidae (Potamon fluviatile and Potamon ibericum), and one Atyidae (Atyaephyra stankoi, found in Lake Dojran). Bathynellacea is represented by three species: Bathynella natans, Parabathynella stygia, and Bathynella chappuisi.
There are around 100 species of Myriapoda, including 18 endemic millipedes. There are 21 identified species of Entognatha, including 11 Collembola, eight Protura and two Diplura.
Fish
The total number of fish species differs between sources, potentially due to different classifications regarding what counts as a species. One calculation identifies 85 Actinopterygii and two lampreys. Of these, 27 are endemic, and 19 are invasive. Some fish are endemic to certain lakes or rivers. Lake Ohrid has 21 native species, of which eight are endemic, and seven are introduced species. Lake Prespa has 11 native species, of which eight are endemic, and 12 introduced species. Lake Dojran has 12 native species, one of which is endemic, along with two introduced species.
Of the Actinopterygii, the European sea sturgeon, the European eel, and Alburnus macedonicus are considered critically endangered, and the Prespa minnow and Salmo peristericus are considered endangered. A further 10 species are vulnerable, one is near threatened, and 10 are data deficient. Salmonidae species in Lake Ohrid are endangered, while the common carp is endangered in Lake Prespa. The two lamprey species are the Ukrainian brook lamprey and Eudontomyzon stankokaramani. The first lives in the Vardar's drainage basin, while the second lives in the Adriatic Sea basin. Both are protected by law.
Amphibians
The amphibian species found comprise nine frogs and five salamanders. Three of these have been categorized as endangered, three as vulnerable, three as near threatened, and five as least concern. That proportion of threatened amphibian species, reflecting the effects of disease and habitat loss, is roughly in line with the state of amphibians worldwide. None of these species are endemic to the country, although two are endemic to the Balkans.
Reptiles
The 32 species of reptiles present are made up of 16 snakes, 12 lizards, and four tortoises. Within North Macedonia, 11 of these species are widespread, 10 are restricted to certain habitats, and 11 have very limited range. While none of these are endemic to the country, two are endemic to the Balkans. One snake species present is the Caspian whipsnake, Europe's largest snake.
Of those with IUCN Red List assessments, seven are endangered, six are vulnerable, eight are near threatened, and ten are of least concern. The globally vulnerable Vipera ursinii, one of three Viperidae in North Macedonia, has strict protection, while another 22 species have some level of protection. Hermann's tortoise, found in forests and meadows, is expected to lose at least one third of its population over the next 75 years.
Birds
There are 318 bird species with confirmed sightings, and an additional 16 for which there are less reliable claims. None are endemic. 215 of these species nest in the country. 106 bird species have some level of protection under law. At least eight nesting species have become locally extinct, while another 7–15 no longer nest in the country. For many species, fewer than 100 nesting pairs remain.
Mammals
At least 85 mammal species inhabit North Macedonia, of which four are endemic to the Balkans: the western broad-toothed field mouse, the Balkan snow vole, Felten's vole, and the Balkan mole. The chamois subspecies Rupicapra rupicapra balcanica and the Balkan lynx, a lynx subspecies, are also endemic to the Balkans and have core populations within North Macedonia. It has been suggested that the country's European ground squirrel population be considered its own subspecies. Eight mammal species are considered invasive.
The Balkan lynx persists around the Albanian-Macedonian border. Its low population means it is rarely seen, and it is the subject of recovery efforts. The lynx has a reputation for killing livestock, despite there being very few recorded incidents. There are perhaps fewer than 100 individuals spread across North Macedonia and Albania. Brown bears live in mountainous forests in the west of the country. Around 160–200 individuals inhabit these forests, part of the larger Dinaric-Pindos population. About 70 are found in Mavrovo National Park. The population of wolves numbered around 800–1000 as of 2013.
Eurasian otters can be found throughout most waterways, although they are absent from a couple of areas with particularly high levels of pollution. Golden jackals were considered extinct in the area by the 1960s, but became re-established in forested mountainous areas, especially in the west, during the 21st century. An individual raccoon dog, a species which was introduced to European Russia in the 20th century, was found dead in the north of the country, indicating the species is spreading southwards. Herbivores include roe deer, chamois, wild boar, and European hares. European ground squirrels are found on Mount Mokra.
The European wildcat, Balkan lynx, Eurasian otter, brown bear, European ground squirrel, and Balkan snow vole have strict protection under national legislation, and another 10 mammal species have lesser levels of protection. Three bats, two rodents, and one mustelid are listed as threatened in the IUCN Red List.
Algae
Knowledge of algal species in North Macedonia remains patchy, with inconsistent research and several taxa not yet fully identified. There are 2095 identified forms of algae, the vast majority being diatoms. It is thought at least 10% of these are endemic, with at least 194 so far identified as such. Most research on algae has taken place in Lakes Ohrid and Prespa. Lake Ohrid alone holds 798 identified taxa, 158 of them endemic to that lake.
Diversity and endemism hotspots
Within North Macedonia, there are multiple areas that are likely high in endemism. Lake Ohrid is an extensively studied area, and its diversity is reflected in a high number of endemic species. Lake Prespa lies close to Lake Ohrid, and the two lakes are connected hydrologically. Although Lake Prespa has fewer species than Lake Ohrid and the two lakes share similar species compositions, Lake Prespa has its own endemic species, as well as species more closely related to those of water bodies further west than to those of Lake Ohrid.
Within Lake Ohrid alone there are over 100 insects, 75 flatworms (35 endemic to the lake and nearby waterbodies), 72 gastropods (56 endemic), 52 ostracods (33 endemic), 49 rotifers, 43 Acari, 36 oligochaetes (17 endemic), 36 copepods (six endemic), 31 Cladocera (one endemic), 30 endemic ciliates, 24 leeches (12 endemic), 24 nematodes (three endemic), 14 amoebas, 13 bivalves (two endemic), 10–11 Amphipoda (9 endemic), four isopods (3 endemic), four sponges, and two decapods. The Ancylus genus species in this lake are monophyletic, likely evolving within the lake itself. The same situation is true for leeches from the genus Dina.
Lake Prespa is less well studied, but is known to have over 100 insects, 90 crustaceans, 60 rotifers, 50 flatworms, 36 molluscs (27 of which are snails), 35 annelids, and three sponges. Seven of these snails are endemic, as is the mollusc Pisidium maasseni. Radix snail species in Lakes Ohrid and Prespa are related, and there are endemic species within springs near the Monastery of Saint Naum that are unrelated to the species found in the lakes. Lake Dojran has 17 rotifers.
It is expected that mountainous areas will also contain a number of endemic species. North Macedonia contains many areas of ice age refugia, which retain significant plant diversity. Plant endemism and sub-endemism is high around mountainous areas. Mammal diversity is highest in the mountainous west of the country. Caves in the west, especially within the drainage basins of the Radika, Galichica, Jakupica, and Poreche rivers, are thought to have rates of invertebrate endemism of around 90%. 57 species of stygofauna are known, including 14 pseudoscorpions, 12 beetles, and 10 isopods.
Human influence
There is significant interaction between the people of North Macedonia and wildlife, especially in rural areas. This continues despite rural areas decreasing in population by 2.2% from 2005 to 2010. Foraging for wild plants remains a common activity for those in rural areas. Around 700 plant species are considered medicinal or aromatic, with 220 frequently used. A quarter of known fungi species are edible, and some fungi and lichens are collected for use and export.
Carnivores such as bears, wolves, and lynx are viewed negatively in some areas. Bears and wolves are known to attack livestock such as sheep, cattle, and goats. In some instances wolves have been reported to kill up to 38% of a sheep flock, and an even higher percentage of goats. Foxes and wild cats are reported to kill poultry. There are reports of bear, wolf, and even lynx and jackal attacks on humans, although none of these attacks resulted in deaths. Such conflict is expected to decrease given that the number of sheep in the country halved following a 1996 foot-and-mouth disease outbreak and a resultant drop in the value of sheep products. Bears and wild boars are also known to damage agricultural produce.
Some species have cultural relevance. The lynx is depicted on a denar coin. Storks are considered to bring luck in local folklore, especially with regard to child-bearing.
Threats
Data deficiency is a significant issue for assessing environmental threats. For example, knowledge about invasive species is lacking. The invasive species that are relatively well studied are generally those that directly affect commercial resources, such as the Colorado potato beetle affecting crops and the Prussian carp reducing native fish populations in Lake Dojran.
Marsh draining, agricultural land use change, and mining have significantly altered the landscape. Hydropower dams have been placed in gorges that host rare and endemic plants. Lysimachia thyrsiflora became locally extinct due to the creation of Mavrovo Lake. Dams in the Black Drim have blocked eel migration to the Sargasso Sea. Smaller watercourses are at risk of diversion and depletion due to demand for irrigation.
Water pollution remains an issue. The most polluted river, the Vardar, receives around 75,000 tons of solid particles, 5,000 tons of nitrogen, and 1,000 tons of phosphorus in waste annually. Heavy industry contributes to such pollution in both surface and groundwater. Marshes are a habitat under considerable threat, not just from land use changes but from climate change, and lowland marshes may disappear completely from the country. This wetland conversion is threatening local plant species. In addition to water pollution and water flow modification, fish are threatened by illegal fishing and invasive species. On land, many plant species are threatened by over-harvesting. Gentiana lutea is rare due to over-collection for medical purposes. Illegal logging is also a threat, reducing the habitat of animals such as lynx.
Poaching can be an issue, with traps and poison used to catch animals such as wild boar. Poaching increased after independence, due to perceived high fees for hunting within one of the 249 hunting grounds and a lack of punishment, despite poaching a lynx theoretically being punishable by eight years in jail. Wolf hunting is legal. As they are considered pests, the government pays a bounty of 700 denar per wolf killed. Between 100 and 200 wolves are killed per year, mostly through chance encounters during the hunting of other animals. Brown bears are still sometimes poached, and while hunting them was banned in 1996, their population has not increased significantly since then. Poaching has also caused a decline in the chamois population.
Conservation
Of the 22,500 species that occur in the country, over 800 are considered to be endemic. Some species listed as endemic nonetheless have ranges which may stretch slightly outside North Macedonia's borders. For example, species found only in the three major lakes are considered endemic, despite the lakes being shared with neighbouring countries.
North Macedonia contains 33 pan-European habitat types regarded as endangered under the Bern Convention's Emerald network. The network covers 35 areas encompassing 29% of the country. The same network includes 167 species that require specific conservation measures: 7 invertebrates, 13 fish, 3 amphibians, 7 reptiles, 115 birds, 17 mammals, and 5 plants.
The government of North Macedonia has classified threats to biological diversity into 249 items, 17 of which were considered to have a very high priority. The root causes behind these 17 priority threats include poor policymaking, inconsistent enforcement of laws and regulations, poverty, low public awareness, and climate change. 18 habitats are thought to be vulnerable to climate change, along with 58 plant species and 224 animal species.
65 bird species are listed under the European Union's Birds Directive Annex I, while 15 migratory bird species are listed under Annex I of the Convention on the Conservation of Migratory Species of Wild Animals. 24 reptiles are listed in the Bern Convention and 25 in the European Union Habitats directive appendices. Eight amphibian species are protected by law, while also falling under the Bern Convention and European Union Habitats directive. 34 fish species have protection under Macedonian law. BirdLife International has identified 24 Important Bird and Biodiversity Areas, while Plantlife International has identified 42 Important Plant Areas.
The Vulture Conservation Project has been operated by the Macedonian Ecological Society since 2003. The Balkan Lynx Recovery Programme (or Programme for Balkan Lynx Recovery) began in 2006.
From 2007 to 2014, the government allocated between 4 and 9 million denar each year for environmental protection. However, most funding comes from external bodies, such as the Global Environment Facility, the European Union (including pre-accession assistance), and bilateral contributions. Cross-border projects exist with Albania, Bulgaria, and Greece. Species that have recovered in number from near local extirpation include Gentiana pneumonanthe, Ranunculus lingua, Salvinia natans, Nuphar lutea, and Menyanthes trifoliata.
Legal framework
Environmental protection became part of national law for the first time in 1963, under Article 32 of the Constitution of the Socialist Republic of Macedonia. This was maintained in the 1991 Constitution of the Republic of Macedonia under Article 56. The constitution obliges citizens to promote and protect the environment.
Most environmental laws are set at a national level, with the Assembly of North Macedonia having a commission for transport, communications, and environment. The Ministry of Environment and Physical Planning (MoEPP) handles government implementation of laws and regulations. In 2007, the Administration of Environment was established within the MoEPP. In 2014 the State Inspectorate of Environment and Nature became a separate legal entity within the MoEPP. Other bodies also play a role in ensuring the environment is utilised sustainably.
The country is working to integrate European Union environmental legislation into its national laws, and to meet the Aichi targets. IUCN Red List classification and methodology is specifically referenced by the Law on Nature Protection of the Republic of North Macedonia ("Official Gazette" no. 67/04, as amended).
While the MoEPP is responsible for nature protection and monitoring, natural resources such as forests, waters, and game animals remain under the jurisdiction of other ministries. This overlap in jurisdiction has complicated ecosystem management and the implementation of the European Union Acquis communautaire into Macedonian law. Different aspects of environmental monitoring are carried out by a number of different ministries. However, the development of a biodiversity strategy was an early example of effective cooperation between ministries.
The Law on Protection of Lakes Ohrid, Prespa and Dojran provided specific protection for the three large lakes starting in 1977. The Law on Environment and Nature Protection and Promotion was instituted in 1996. The Law on Nature Protection was established in 2004 to consolidate previous laws, such as the Law on Natural Rarities (1973), the Law on Protection of Lakes Ohrid, Prespa and Dojran (1977), and the Law on Protection of National Parks (1980). It also transposes parts of the Acquis communautaire, namely the Habitats Directive, the Birds Directive, and the Regulation on the Protection of Species of Wild Fauna and Flora by Regulating Trade Therein. The use of natural resources such as wild plants and animal parts is regulated under the Law on Nature Protection. National parks issue permits for the collection of wild resources from their forests. This law also legislates to protect the environment from invasive species.
Forests are regulated under the Spatial Plan of the Republic of Macedonia (2004), the Strategy for Sustainable Forestry Development in the Republic of Macedonia (2006), the Law on Reproductive Material of Forest Tree Species (2007), and the Law on Forests (2009). Species important for agriculture have their own protection under Article 78 of the Law on Agriculture and Rural Development. 2666 samples from 89 species were stored in a dedicated Gene Bank at the Institute of Agriculture as of 2013. This seed bank is regulated under the Law on Seeds and Seedlings. The government is considering establishing a gene bank for native flora, after a previous attempt at the Botanical Garden of the Institute of Biology at the Faculty of Natural Sciences and Mathematics, Ss. Cyril and Methodius University of Skopje, ended after one year.
The Law on Waters transfers the European Union Water Framework Directive into national law, and directs the management of waters, shorelands, and wetlands. Other laws relevant to water management include the Law on Water Management Companies and the Law on Water Communities. The government has published a National Strategy for Waters to guide its actions, and is preparing specific action plans for various lakes and river systems. Commercial fishing is regulated under the Law on Fishery and Aquaculture. A transboundary plan for fisheries management in Lake Prespa is under development. A joint management plan is being put in place for Lake Ohrid.
The first National Biodiversity Strategy and Action Plan was adopted in 2004, alongside the Law on Nature Protection. Only 56% of its proposed actions were implemented. The 2018–2023 National Biodiversity Strategy and Action Plan was adopted on 13 March 2020 during the government's 58th session, and contains 19 specific targets. Another current plan is the National Strategy for Nature Protection for 2017–2027. The first National Environment Action Plan was put in place in 2006. It and the subsequent Second National Environmental Action Plan direct the government's anti-pollution efforts.
Pelister National Park, established in 1948, was the first national park in Yugoslavia. Mavrovo National Park followed in 1949, and Mount Galičica became the third in 1958. The Law on Protection of National Parks was implemented in 1980. Other protected areas include "National Monuments" (Smolare Falls, Markovi Kuli, Kuklica, Lovki-Golemo, Slatinski Izvor cave, Lake Prespa, Lake Dojran, and the Vevchani Springs), "Strict Nature Reserves" (only Ploche Litotelmi), "Nature Parks" (Ezerani at Lake Prespa), and "Natural Rarities" (Dona Duka Cave). Individual species can also be declared natural rarities, and this status has been granted to the sycamore tree. Protected areas have been established in an ad hoc manner, and for a variety of purposes, resulting in no coherent network or strategy. The Law on Nature Protection divided them into six categories. Altogether, the 86 protected sites cover 230,083 ha, or 8.9% of the country (4.5% national parks, 3.0% monuments, 1.4% other protected areas). While the national parks have established management systems, other protected areas lack specific plans and dedicated scientific studies. A fourth park, Shar Mountain National Park, was created in 2021.
There are two classes of protected species under Macedonian law, "Strictly Protected" and "Protected". National protection measures exist for "Strictly Protected" species, but not yet for species identified as "Protected". Legislation has been enacted to control the trade of wildlife and wildlife parts.
The Constitution treats game animals as common goods, which are given special protection. Current regulation is established under the 2012 Law on Hunting. 133 wild species are classified as game: 110 birds and 23 mammals. There are 112 hunting grounds for large game, and 144 for small game. These are all divided between 11 hunting management areas under the General Hunting Management Master Plan. The most hunted species include hares, partridges, pheasants, wild boar, and chamois. Previously unprotected, bears were classified as a game species in 1988, gaining hunting law protections in 1996. 762 wolves, 76 deer, 521 chamois, over 6000 wild boar, over 32,000 rabbits, 252 birds of prey, and 1500 waterfowl were recorded as having been shot between 2003 and 2012. A few hunted species have seen steep reductions in their population. Some deer populations have been reduced by up to 93%. The legal list of protected and strictly protected species has not kept up to date with taxonomy, for example by double-counting species which had previously been assigned multiple names.
International conventions
North Macedonia ratified the Convention on Biological Diversity in 1997, the United Nations Framework Convention on Climate Change in 1997, the United Nations Convention to Combat Desertification in 2002, and the Kyoto Protocol in 2004. North Macedonia is also a party to the Convention on the Conservation of Migratory Species of Wild Animals, along with the specific Agreement on the Conservation of Populations of European Bats and Agreement on the Conservation of African-Eurasian Migratory Waterbirds.
Two areas have been designated wetlands of international importance under the Ramsar Convention: Lake Prespa in 1995 and Lake Dojran in 2007. However, there is currently no national programme for wetlands conservation, despite obligations under the convention. Lake Ohrid was designated a UNESCO World Heritage Site under the criteria for nature in 1979. Slatinski Izvor cave and the Markovi Kuli landscape were tentatively enrolled in 2004.
References
External links
Balkan lynx recovery programme
Biodiversity Strategy and Action Plan of the Republic of Macedonia (2004)
Biota of North Macedonia
North Macedonia | Wildlife of North Macedonia | Biology | 8,348 |
40,164,004 | https://en.wikipedia.org/wiki/Todd%20Weather%20Folios | The Todd Weather Folios are a collection of continental Australian synoptic charts that were published from 1879 to 1909.
The charts were created by Sir Charles Todd's office at the Adelaide Observatory. In addition to the charts, the folios include clippings of newspaper articles and telegraphic and handwritten information about the weather. The area covered is mainly the east and south-east of Australia, with occasional reference to other parts of Australasia and the world.
The maps are bound into approximately six-month folios, 63 of which cover the entire period. There are approximately 10,000 continental weather maps along with 750 rainfall maps for South Australia, 10 million printed words of news text, and innumerable handwritten observations and correspondences about the weather.
The folios are an earlier part of the National Archives of Australia listed collection series number D1384.
The History of the Folios
With the advent of the telegraph it became possible to collect simultaneous observations, such as surface temperature and sea-level pressure, from which synoptic weather charts could be drawn. After his appointment as Postmaster General of the Colony, Charles Todd trained not only his telegraph operators but also his postmasters as weather observers. These observers provided valuable data points that, in combination with telegraphed observations from the other colonies (including New Zealand), showed the development and progress of weather activity across a large part of the Southern Hemisphere. Todd's best known feat was his construction management of the Overland Telegraph from Adelaide to Port Darwin. This line of communication was critical to his capacity to create continent-wide synoptic charts, as the telegraphic observations from the Outback enabled the connection of data points on the east coast of Australia with similar data points on the west and southern coasts. The resulting continent-scale isobaric lines allowed Todd and his staff to draw synoptic charts that in the early 1880s had a greater breadth than any (known) synoptic charts drawn elsewhere in the world.
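As an illustration of the underlying idea (not a description of Todd's actual hand-drawing method), the following minimal Python sketch shows how scattered, simultaneous station pressure readings might be interpolated onto a regular grid so that isobaric lines can be traced along the boundaries between pressure bands. All station coordinates and pressure values here are invented for the example.

# Illustrative sketch only: scattered, simultaneous station observations
# turned into the raw material of a synoptic chart's isobars.
# (longitude, latitude, sea-level pressure in hPa) -- values are hypothetical.
observations = [
    (138.6, -34.9, 1012.3),  # e.g. an Adelaide-like station (invented value)
    (130.8, -12.5, 1008.1),  # e.g. a Port Darwin-like station (invented value)
    (151.2, -33.9, 1015.6),
    (144.9, -37.8, 1013.9),
    (115.9, -31.9, 1010.4),
]

def idw_pressure(lon, lat, power=2.0):
    """Inverse-distance-weighted pressure estimate at (lon, lat)."""
    num = den = 0.0
    for olon, olat, p in observations:
        d2 = (lon - olon) ** 2 + (lat - olat) ** 2
        if d2 < 1e-9:
            return p  # exactly at a station, use its reading
        w = 1.0 / d2 ** (power / 2)
        num += w * p
        den += w
    return num / den

# Sample a coarse grid and assign each cell an isobar "band" index;
# drawing lines between cells with different indices yields isobars.
isobar_spacing = 2.0  # hPa between successive isobaric lines
for lat in range(-12, -40, -4):
    row = []
    for lon in range(112, 156, 4):
        row.append(int(idw_pressure(lon, lat) // isobar_spacing))
    print(row)

Adjacent grid cells with different band indices mark where an isobar would be drawn; Todd's staff achieved the equivalent result by eye when hand-drawing their charts.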
The folios grew out of Todd's desire to show the colonists of South Australia the immense size of weather systems, and that in southern Australia these systems generally progressed from west to east, not from east to west as commonly assumed by the early colonists. To accomplish this, Todd displayed the latest six synoptic charts daily for public viewing, then bound and stored them in the folios.
The Todd weather folios consist not only of synoptic charts, but also include clippings from newspapers detailing weather statistics and events for all the eastern colonies of Australia. Newspapers from Brisbane, Sydney and Melbourne were collected as they came off the inter-colonial trains and were processed for pasting up next to the corresponding synoptic chart.
The collection from 1879 includes the earliest use of isobaric maps. It then develops through to the first maps posted for public consumption in the mid 1880s, and finishes with the ‘production maps’ of pre-Federalised weather observations and forecasting. The maps are accompanied by other information including the first in-house forecasts (and later published forecasts), early rainfall maps, weather observations from the logs of sailing ships, and telegrams and letters about significant weather events.
Digitising the Collection
As the original documents are in a fragile state and not easily accessible, a team of volunteers from the Australian Meteorological Association (AMETA), hosted by the Australian Bureau of Meteorology, has digitally imaged the full 31-year run of Todd's charts and accompanying text. The digital images have been handed to the National Archives of Australia for inclusion in the Australian Digital Heritage collection. Access to the 26,000 high-quality images is also available online.
The volunteer group has also digitised data from the Todd folios, which has been forwarded for inclusion in the International Surface Pressure Databank (ISPD). This has been done as part of Project ACRE (Atmospheric Circulation Reconstructions over the Earth) of the Climate Monitoring and Attribution Group, Met Office Hadley Centre, UK. ACRE exists to gather data to fuel a weather 'backcasting' model extending back to 1750. The Todd folios contain data of value to this initiative, data that is no longer available through other records. In many cases, the original documents containing the data recorded by weather observers no longer exist or are irretrievably lost, which gives significance to their recording in Todd's synoptic charts and ancillary documents.
Three key concerns have driven the project: making this historical archive discoverable, accessible, and future-proof. In an electronic format on the internet, discoverability and accessibility are greatly enhanced, and with the National Archives' agreement to store the images, the future-proofing of the electronic images is assured.
References
External links
Todd Weather Folios
AMETA The Australian Meteorological Association Inc.
International Surface Pressure Databank (ISPD)
Historical climatology
Climate of Australia
Climate and weather statistics
Meteorological data and networks | Todd Weather Folios | Physics | 1,009 |
1,429,661 | https://en.wikipedia.org/wiki/Khmer%20people | The Khmer people are an Austroasiatic ethnic group native to Cambodia. They comprise over 95% of Cambodia's population of 17 million. They speak the Khmer language, which is part of the larger Austroasiatic language family alongside Mon and Vietnamese.
The majority of Khmer people follow Theravada Buddhism. Significant populations of Khmers reside in adjacent areas of Thailand (Northern Khmer) and the Mekong Delta region of neighboring Vietnam (Khmer Krom), while there are over one million Khmers in the Khmer diaspora living mainly in France, the United States, and Australia.
Distribution
The majority of the world's Khmers live in Cambodia, the population of which is over 95% Khmer.
Thailand, Vietnam and Laos
There are also significant Khmer populations native to Thailand and Vietnam. In Thailand, there are over one million Khmers (known as the Khmer Surin), mainly in Surin (Sorin), Buriram (Borei Rom) and Sisaket (Srei Saket) provinces. Estimates for the number of Khmers in Vietnam (known as the Khmer Krom) vary from the 1.3 million given by government data to the 7 million advocated by the Khmers Kampuchea-Krom Federation. The Khmer population native to Laos is smaller than those of Thailand and Vietnam; these communities reside in the southwestern tip of Laos, at the borders with Thailand and Cambodia.
Western nations
Due to migration as a result of the Cambodian civil war and Cambodian genocide, there is a large Khmer diaspora residing in the United States, Canada, Australia, and France.
History
Origin myths
According to one Khmer legend attributed by George Coedes to a tenth century inscription, the Khmers arose from the union of the Brahmana Kambu Swayambhuva and the apsara ("celestial nymph") Mera. Their marriage is said to have given rise to the name Khmer and founded the Varman dynasty of ancient Cambodia.
A more popular legend, reenacted to this day in the traditional Khmer wedding ceremony and taught in elementary school, holds that Cambodia was created when a merchant named Kaundinya I (commonly referred to as Preah Thong) married Princess Soma, a Nāga (Neang Neak) princess. Kaundinya sailed to Southeast Asia following an arrow he saw in a dream. Upon arrival he found an island called Kok Thlok and, after conquering Soma's Naga army, he fell in love with her. As a dowry, the father of Princess Soma drank the waters around the island, revealing it to be the top of a mountain; the land that was uncovered became Cambodia. Kaundinya and Soma and their descendants became known as the Khmers and are said to have been the rulers of Funan, Chenla and the Khmer Empire. This myth further explains why the oldest Khmer wats, or temples, were always built on mountaintops, and why today mountains themselves are still revered as holy places.
Arrival in Southeast Asia
The Khmers, an Austroasiatic people, are one of the oldest ethnic groups in the area, having filtered into Southeast Asia from southern China, possibly Yunnan, or from Northeast India around the same time as the Mon, who settled further west on the Indochinese Peninsula and to whom the Khmer are ancestrally related. Most archaeologists and linguists, and other specialists like Sinologists and crop experts, believe that they arrived no later than 2000 BCE (over four thousand years ago) bringing with them the practice of agriculture and in particular the cultivation of rice. This region is also one of the first places in the world to use bronze. They were the builders of the later Khmer Empire, which dominated Southeast Asia for six centuries beginning in 802, and now form the mainstream of political, cultural, and economic Cambodia.
The Khmers developed the Khmer alphabet, which in turn gave birth to the later Thai and Lao alphabets. The Khmers are considered by archaeologists and ethnologists to be indigenous to the contiguous regions of Isan, southern Laos, Cambodia and South Vietnam. That is to say, the Cambodians have historically been a lowland people who lived close to one of the tributaries of the Mekong River. The reasons they migrated into Southeast Asia are not well understood, but scholars believe that Austroasiatic speakers were pushed south by invading Tibeto-Burman speakers from the north, as evidenced by Austroasiatic vocabulary in Chinese; that they moved south for agricultural purposes, as evidenced by their migration routes along major rivers; or some combination of these and other factors.
The Khmer are considered part of the Indian cultural sphere, owing to their adoption of Indian culture, traditions and religious identities. The first powerful trading kingdom in Southeast Asia, the Kingdom of Funan, was established in southeastern Cambodia and the Mekong Delta in the first century, although extensive archaeological work in Angkor Borei District near the modern Vietnamese border has unearthed brickworks, canals, cemeteries and graves dating to the fifth century BCE.
During the Funan period (1st–6th centuries CE) the Khmer also acquired Buddhism, the concept of the Shaiva imperial cult of the devaraja and the great temple as a symbolic world mountain. The rival Khmer Chenla Kingdom emerged in the fifth century and later conquered the Kingdom of Funan. Chenla was an upland state whose economy was reliant on agriculture whereas Funan was a lowland state with an economy dependent on maritime trade.
These two states, even after conquest by Chenla in the sixth century, were constantly at war with each other and smaller principalities. During the Chenla period (5th–8th centuries), Khmers left the world's earliest known zero in one of their temple inscriptions. Only when King Jayavarman II declared an independent and united Cambodia in 802 was there relative peace between the two lands, upper and lowland Cambodia.
Jayavarman II (802–830) revived Khmer power and built the foundation for the Khmer Empire, founding three capitals—Indrapura, Hariharalaya, and Mahendraparvata—the archeological remains of which reveal much about his times. After winning a long civil war, Suryavarman I (reigned 1002–1050) turned his forces eastward and subjugated the Mon kingdom of Dvaravati. Consequently, he ruled over the greater part of present-day Thailand and Laos, as well as the northern half of the Malay Peninsula. This period, during which Angkor Wat was constructed, is considered the apex of Khmer civilization.
Khmer Empire (802–1431)
The Khmer kingdom became the Khmer Empire, and the great temples of Angkor, considered an archeological treasure replete with detailed stone bas-reliefs showing many aspects of the culture, including some musical instruments, remain as monuments to Khmer culture. After the death of Suryavarman II (1113–1150), Cambodia lapsed into chaos until Jayavarman VII (1181–1218) ordered the construction of a new city. He was a Buddhist, and for a time, Buddhism became the dominant religion in Cambodia. As a state religion, however, it was adapted to suit the Deva Raja cult, with a Buddha Raja being substituted for the former Shiva Raja or Vishnu Raja.
The rise of the Tai kingdoms of Sukhothai (1238) and Ayutthaya (1350) resulted in almost ceaseless wars with the Khmers and led to the destruction of Angkor in 1431. The Siamese are said to have carried off 90,000 prisoners, many of whom were likely dancers and musicians. The period following 1432, with the Khmer people bereft of their treasures, documents, and human culture bearers, was one of precipitous decline.
Post-empire (1431–present)
In 1434, King Ponhea Yat made Phnom Penh his capital, and Angkor was abandoned to the jungle. Due to continued Siamese and Vietnamese aggression, Cambodia appealed to France for protection in 1863 and became a French protectorate in 1864. During the 1880s, along with southern Vietnam and Laos, Cambodia was drawn into the French-controlled Indochinese Union. For nearly a century, the French exploited Cambodia commercially, and demanded power over politics, economics, and social life.
During the second half of the twentieth century, the political situation in Cambodia became chaotic. King Norodom Sihanouk (later Prince, then again King) proclaimed Cambodia's independence in 1949 (granted in full in 1953) and ruled the country until March 18, 1970, when he was overthrown by General Lon Nol, who established the Khmer Republic. On April 17, 1975, the Khmer Rouge, who under the leadership of Pol Pot combined Khmer nationalism and extreme Communism, came to power and virtually destroyed the Cambodian people, their health, morality, education, physical environment, and culture in the Cambodian genocide.
On January 7, 1979, Vietnamese forces ousted the Khmer Rouge. After more than ten years of painfully slow rebuilding, with only meager outside help, the United Nations intervened resulting in the Paris Peace Accord on October 23, 1992, and created conditions for general elections in May 1993, leading to the formation of the current government and the restoration of Prince Sihanouk to power as King in 1993.
Culture and society
The culture of the ethnic Khmers is fairly homogeneous throughout their geographic range. Regional dialects exist, but are mutually intelligible. The standard is based on the dialect spoken throughout the Central Plain, a region encompassed by the northwest and central provinces. The varieties of Khmer spoken in this region are representative of the speech of the majority of the population. A unique and immediately recognizable dialect has developed in Phnom Penh that, due to the city's status as the national capital, has been modestly affected by recent French and Vietnamese influence. Other dialects are Northern Khmer dialect, called Khmer Surin by Khmers, spoken by over a million Khmer native to Northeast Thailand; and Khmer Krom spoken by the millions of Khmer native to the Mekong Delta regions of Vietnam adjacent to Cambodia and their descendants abroad. A little-studied dialect known as Western Khmer, or Cardamom Khmer, is spoken by a small, isolated population in the Cardamom Mountain range extending from Cambodia into eastern Central Thailand. Although little studied, it is unique in that it maintains a definite system of vocal register that has all but disappeared in other dialects of modern Khmer.
The modern Khmer strongly identify their ethnic identity with their religious beliefs and practices, which combine the tenets of Theravada Buddhism with elements of indigenous ancestor-spirit worship, animism and shamanism. Most Cambodians, whether or not they profess to be Buddhists or followers of other faiths, believe in a rich supernatural world. Several types of supernatural entities are believed to exist; they make themselves known by means of inexplicable sounds or happenings. Among these phenomena are kmaoch ខ្មោច (ghosts), pret ប្រែត (spirits whose forms depend on their punishments) and beisach បិសាច (monsters); these are usually the spirits of people who have died violent, untimely, or unnatural deaths. Others include arak អារក្ស (evil spirits, devils), ahp krasue, neak ta អ្នកតា (tutelary spirits or entities residing in inanimate objects: land, water, trees, and so on), chomneang/mneang phteah ជំនាងផ្ទះ/ម្នាងផ្ទះ (house guardians), meba មេបា (ancestral spirits), and mrenh kongveal ម្រេញគង្វាល (little mischievous spirit guardians dressed in red). All spirits must be shown proper respect, and, with the exception of the mneang phteah and mrenh kongveal, they can cause trouble ranging from mischief to serious life-threatening illnesses.
The majority of Cambodians live in rural villages as either rice farmers or fishermen. Their life revolves around the wat (temple) and the various Buddhist ceremonies throughout the year.
However, if Cambodians become ill, they will frequently see a kru khmae (shaman/healer), whom they believe can diagnose which of the many spirits has caused the illness and recommend a course of action to propitiate the offended spirit, thereby curing the illness. The kru khmae is also learned in herb lore and is often sought out to prepare various "medicines" and potions, or to apply a magical tattoo, all believed to endow one with special prowess and ward off evil spirits or general bad luck. Khmer beliefs also rely heavily on astrology, a remnant of Hinduism. A fortune teller, called hao-ra (astrologer) or kru teay in Khmer, is often consulted before major events, such as choosing a spouse, beginning an important journey or business venture, setting the date for a wedding, or determining the proper location for building new structures. Throughout the year, Cambodians celebrate many holidays, most of a religious or spiritual nature, some of which are also observed as public holidays. The two most important are Chol Chhnam (Cambodian New Year) and Pchum Ben ("Ancestor Day"). The Cambodian Buddhist calendar is divided into 12 months, with the traditional new year beginning on the first day of khae chaet, which coincides with the first new moon of April in the western calendar. The modern celebration has been standardized to coincide with April 13. Dance occupies a central place for the Khmer people; one of its earliest records dates back to the 7th century, when performances were used as a funeral rite for kings. In the 20th century, the use of dancers is also attested in funerary processions, such as that for King Sisowath Monivong. During the Angkor period, dance was ritually performed at temples. The temple dancers came to be considered apsaras, who served as entertainers and messengers to divinities. Ancient stone inscriptions describe thousands of apsara dancers assigned to temples, performing divine rites as well as performing for the public. Khmer classical dance was proclaimed a UNESCO Masterpiece of the Oral and Intangible Heritage of Humanity in 2003.
Cambodian culture has influenced Thai and Lao cultures and vice versa. Many Khmer loanwords are found in Thai and Lao, while many Lao and Thai loanwords are found in Khmer. The Thai and Lao alphabets are also derived from the Khmer script.
The Khmer people are genetically closely related to other Southeast Asian populations. They show strong genetic relation to other Austroasiatic people in Southeast Asia and East Asia and have a minor genetic influence from Indian people. Cambodians trace about 16% of their ancestry to a Eurasian population that is equally related to both Europeans and East Asians, while the remaining 84% of their ancestry is related to other Southeast Asians, particularly to a source similar to the Dai people. Another study suggests that Cambodians trace about 19% of their ancestry to a similar Eurasian population related to modern-day Central Asians, South Asians, and East Asians, while the remaining 81% of their ancestry is related specifically to modern-day Dai and Han people.
The genetic testing website 23andMe groups Khmer people under the "Indonesian, Khmer, Thai & Myanmar" reference population. This reference population contains people who have had recent ancestors from Cambodia, Indonesia, Laos, Malaysia, Myanmar and Thailand.
Immunoglobulin G
Hideo Matsumoto, professor emeritus at Osaka Medical College tested Gm types, genetic markers of immunoglobulin G, of Khmer people for a 2009 study. The study found that the Gm afb1b3 is a southern marker gene possibly originating in southern China and found at high frequencies across southern China, Southeast Asia, Taiwan, Sri Lanka, Bangladesh, Nepal, Assam and parts of the Pacific Islands. The study found that the average frequency of Gm afb1b3 was 76.7% for the Khmer population.
See also
Anvaya (organization)
Cambodian cuisine
Khmer Krom
Northern Khmers
References
Benjamin Walker, Angkor Empire: A History of the Khmer of Cambodia, Signet Press, Calcutta, 1995.
Notes
External links
Center For Khmer Studies
The Khmers via Tamtofi
Khmer people
Ethnic groups in Cambodia
Genetics | Khmer people | Biology | 3,294 |
47,471,090 | https://en.wikipedia.org/wiki/Pyrenean%20ibex | The Pyrenean ibex (Capra pyrenaica pyrenaica), Aragonese and Spanish common name bucardo, Basque common name bukardo, Catalan common name herc and French common name bouquetin, was one of the four subspecies of the Iberian ibex or Iberian wild goat, a species endemic to the Pyrenees. Pyrenean ibex were most common in the Cantabrian Mountains, Southern France, and the northern Pyrenees. The subspecies was common during the Holocene and Upper Pleistocene; skeletal remains from these periods, primarily skulls, show the Pyrenean ibex to have been larger than the other Capra subspecies of southwestern Europe from the same time.
In January 2000, the last Pyrenean ibex died, making the subspecies extinct. Two other subspecies have survived, the western Spanish or Gredos ibex and the southeastern Spanish or Beceite ibex, while the Portuguese ibex had already become extinct. Because the last of the Pyrenean ibex died before scientists could adequately analyze them, the taxonomy of this particular subspecies is controversial.
Following several failed attempts to revive the subspecies through cloning, a living specimen was born in July 2003. The cloned Pyrenean ibex was born in Spain through genetic cloning techniques, with the research article published in 2009. However, she died several minutes after birth due to a lung defect. The Pyrenean ibex thus remains the only animal ever to have been brought back from extinction, and also the only one to go extinct twice.
History
Multiple theories are given regarding the evolution and historical migration of C. pyrenaica into the Iberian Peninsula, and the relationship between the different subspecies.
One possibility is that C. pyrenaica evolved from an ancestor related to C. caucasica from the Middle East at the beginning of the last glacial period (120–80 kya). C. pyrenaica probably moved from the northern Alps through southern France into the Pyrenees area at the beginning of the Magdalenian period, about 18 kya. If this is the case, then C. caucasica praepyrenaica may have been more distinct from the other three ibex subspecies that lived in the Iberian Peninsula than scientists currently recognize. For example, this would mean that C. pyrenaica (possible migration about 18 kya) and C. ibex (whose migration came some 300 ky earlier) evolved from different ancestors and were morphologically more distinct because of their separate lineages. All four subspecies are known to have coexisted during the Upper Pleistocene, but scientists are unsure how much genetic exchange could have occurred. The problem with this theory is that genetics suggest that C. pyrenaica and C. ibex may have shared a common origin, possibly C. camburgensis.
Several related theories address when C. pyrenaica or C. ibex first migrated to and evolved in the Iberian Peninsula. C. pyrenaica may already have been living in the Iberian Peninsula when the ibex began to migrate through the Alps. Genetic evidence also supports the theory that multiple Capra subspecies migrated to the Iberian region around the same time. Hybridization may have been possible, but the results are not conclusive.
Behaviour and physical characteristics
The Pyrenean ibex had short hair that varied according to the seasons: during the summer its hair was short, while in winter the hair grew longer and thicker. The hair on the ibex's neck remained long through all seasons. Males and females could be distinguished by color, fur, and horn differences. The male was a faded grayish brown during the summer, marked with black in several places on the body, such as the mane, forelegs, and forehead. In the winter the male was less colorful, changing from grayish brown to a dull gray, and the once-black markings became dull and faded. The female, by contrast, could be mistaken for a deer, since her coat was brown throughout the summer and, unlike the male's, lacked black coloring. Young ibex were colored like the female for the first year of life.
The male had large, thick horns curving outwards and backwards, then outwards and downwards, then inwards and upwards. The surface of the horn was ridged, with the ridges developing progressively with age; each ridge was said to represent a year, so the total would correspond to the ibex's age. The female had short, cylindrical horns. The ibex fed on vegetation such as grasses and herbs.
Pyrenean ibex migrated according to the seasons. In spring, the ibex would move to more elevated parts of the mountains, where males and females would mate; the females would then normally separate from the males so they could give birth in more isolated areas. Kids were typically born during May, usually singly. During the winter, the ibex would migrate to valleys that were not covered in snow, which allowed them to feed regardless of the change in season.
Habitat
The species was often seen in parts of France, Portugal, Spain and Andorra, but less so in the northern areas of the Iberian Peninsula; it became extinct first in mainland areas such as Andorra and France, at the northern tip of the peninsula. The Pyrenean ibex was estimated to have had a peak population of 50,000 individuals, with more than 50 subgroups ranging from the Sierra Nevada to the Sierra Morena and Muela de Cortes. Many of these subgroups lived in mountainous terrain extending into Spain and Portugal. The last remaining Pyrenean ibex were seen at lower altitudes in the Middle and Eastern Pyrenees, although in southern France and surrounding areas ibex were found across a range of elevations.
The Pyrenean ibex was abundant until the 14th century, and its numbers did not dwindle in the region until the mid-19th century. Pyrenean ibex tended to live in rocky habitats with cliffs and trees interspersed with scrub or pine trees, although small patches of rock in farmland or along the Iberian coast also formed suitable habitat. The ibex thrived as long as appropriate habitat was available, and was able to disperse rapidly and colonize quickly. Pyrenean ibex formed a useful resource for humans, which may have been a cause of their eventual extinction. Researchers suggest that continuous hunting, and perhaps an inability to compete with other livestock in the area, contributed to the animal's downfall, but the definite reasons for its extinction are still unknown.
The subspecies once ranged across the Pyrenees in France and Spain and the surrounding area, including the Basque Country, Navarre, north Aragon, and north Catalonia. A few hundred years ago, they were numerous, but by 1900, their numbers had fallen to fewer than 100. From 1910 onwards, their numbers never rose above 40, and the subspecies was found only in a small part of Ordesa National Park, in Huesca.
Extinction
The Pyrenean ibex was one of four subspecies of the Iberian ibex. The first to become extinct was the Portuguese ibex (Capra pyrenaica lusitanica) in 1892. The Pyrenean ibex was the second, with the last individual, a female called Celia, found dead in 2000.
In the Middle Ages, Pyrenean ibex were very abundant in the Pyrenees region, but decreased rapidly in the 19th and 20th centuries due to hunting pressure. In the second half of the 20th century, only a small population survived in the Ordesa National Park situated in the Spanish Central Pyrenees.
Competition with domestic and wild ungulates also contributed to the extinction of the Pyrenean ibex. Much of its range was shared with sheep, domestic goats, cattle, and horses, especially in summer when it was in the high mountain pastures. This led to interspecific competition and overgrazing, which particularly affected the ibex in dry years. In addition, the introduction of non-native wild ungulate species in areas occupied by the ibex (e.g. fallow deer and mouflon in the Sierras de Cazorla, Segura y Las Villas Natural Park) increased the grazing pressure, as well as the risk of transmission of both native and exotic diseases.
The last natural Pyrenean ibex, a female named Celia, was found dead on January 6, 2000; she had been killed by a falling tree. The reasons for the subspecies' decline and extinction are not fully understood. Hypotheses include the inability to compete with other species for food, infections and diseases, and poaching.
The Pyrenean ibex became the first taxon ever to become "unextinct" on July 30, 2003, when a cloned female ibex was born alive and survived for several minutes, before dying from lung defects.
Cloning project
Celia, the last ibex, was captured in Ordesa y Monte Perdido National Park in Huesca, Spain; skin biopsies were taken and cryopreserved in liquid nitrogen. She died a year after the tissue was harvested from her ear. The American biotechnology company Advanced Cell Technology, Inc. announced in 2000 that the Spanish government would let it try to clone her from those samples. ACT intended to work alongside other scientists to clone Celia by nuclear transfer.
It was expected to be easier than the cloning experiment of endangered gaur (Bos gaurus), as the reproductive biology of goats is better known and the normal gestation period is only five months. In addition, only certain extinct animals are candidates for cloning because of the need for a suitable proxy surrogate to carry the clone to term. ACT agreed with the government of Aragon that the future cloned Pyrenean ibex would be returned to their original habitat.
Celia provided suitable tissue samples for cloning. However, attempts to clone her highlighted a major problem: even if it were possible to produce another healthy Pyrenean ibex, no males were available for the female clone to breed with. To produce a viable population of a previously extinct animal, genetic samples from many individuals would be needed to create genetic diversity in the cloned population. This is a major obstacle to re-establishing an extinct species population through cloning. One solution could be to cross Celia's clones with males of another subspecies, although the offspring would not be pure Pyrenean ibex. A more ambitious plan would be to remove one X chromosome and add a Y chromosome from another still-existing subspecies, creating a male Pyrenean ibex, but such technology does not yet exist, and it is not known whether this will be feasible at all without irreparable damage to the cell.
Three teams of scientists, two Spanish and one French, were involved in the cloning project. One of the Spanish teams was led by Dr. Jose Folch of Zaragoza, from the Centre of Food Technology and Research of Aragon. The other teams had researchers from the National Research Institute of Agriculture and Food in Madrid.
The project was coordinated by the Food and Agricultural Investigation Service of the Government of Aragon (Spanish: Servicio de Investigación Agroalimentaria del Gobierno de Aragón) and by the National Institute of Investigation and Food and Agrarian Technology (Instituto Nacional de Investigación y Tecnología Agraria y Alimentaria). The French National Institute for Agricultural Research (INRA) was also involved in the project.
Researchers took adult somatic cells from the tissue and fused them with oocytes from goats that had their nuclei removed. The purpose of removing the nuclei from the goats' oocytes was to extract all the DNA of the goat, so there would be no genetic contribution to the clone from the egg donor. The resultant embryos were transferred into a domestic goat (Capra hircus), to act as a surrogate mother. The first cloning attempts failed. Of the 285 embryos reconstructed, 54 were transferred to 12 ibex and ibex-goat hybrids, but only two survived the initial two months of gestation before they, too, died.
On July 30, 2003, one clone was born alive but died several minutes later due to physical defects in the lungs: there was atelectasis and an extra lobe in the left lung. Despite this setback, Celia's cloned cells, taken by a research team, are still being studied in an attempt to create hybrids. The cells remain alive and frozen, an advantage over efforts to revive long-extinct species such as mammoths, whose DNA is very ancient. Still, reproductive biotechnology has a long way to go before populations can be replicated or brought back.
This was the first attempt to revive an extinct subspecies, although the process technically began before the extinction of the subspecies.
See also
De-extinction
List of resurrected species
List of cloned animals
Notes
References
External links
Profile at The Sixth Extinction Website
Capra (genus)
Fauna of Spain
Extinct mammals of Europe
Mammal extinctions since 1500
Fauna of the Pyrenees
Cloned animals
Habitats Directive species
| Pyrenean ibex | Biology | 2,725 |
37,089,603 | https://en.wikipedia.org/wiki/North-South%20Carrier | The North-South Carrier (NSC) is a pipeline in Botswana that carries raw water south to the capital city of Gaborone. Phase 1 was completed in 2000. Phase 2 of the NSC, under construction, will duplicate the pipeline to carry water from the Dikgatlhong Dam, which was completed in 2012. A proposed extension to deliver water from the Zambezi would add considerably to the total pipeline length.
The NSC is the largest engineering project ever undertaken in Botswana.
Climate
Botswana has an arid climate, with little in the way of surface water supplies. Until recently, groundwater wells were used to meet about 80% of the demand for water. Some of the groundwater accumulated long ago when the climate was wetter. "Groundwater mining" is not sustainable in areas where the water is not being renewed from the surface. The more populous eastern portion of Botswana lies in the Limpopo River basin, which is considered "closed". In the South African portion of the basin, water usage exceeds the potential water yield from the basin by 800,000,000 cubic metres (650,000 acre-feet) annually. Water has to be imported from the Vaal River to make up the shortfall.
Almost all rainfall occurs in the summer months of October through April, when high temperatures cause high levels of evaporation. Rainfall is undependable, and a drought period may last for several years. Precipitation is highest in the northeast and lowest in the southwest, while average annual potential evaporation is high. Botswana has a flat terrain that is mostly unsuitable for reservoirs.
Requirements
In 2008 Botswana had a population of 1,921,000. GDP per capita on a purchasing power parity (PPP) basis was $13,415.
83% of the people were literate.
The percentage of people with access to safe drinking water rose from 77% to 96% between 1996 and 2006.
The economy of Botswana is growing fast, as is the population, particularly in the Gaborone area. This is causing growth in per-capita demand for water, and rapid growth in total demand.
The Gaborone region accounts for over 75% of water demand in eastern Botswana.
The local Gaborone and Bokaa dams cannot meet the growing demand even with the help of reclamation from the Gaborone Water Treatment Works at Glen Valley.
Morupule Colliery uses three boreholes for water, but takes water from the NSC when needed through a pipeline from Palapye.
Exploitation of coal deposits in Botswana related to the South African Waterberg coalfield will also contribute to demand for water.
Water from the Dikgatlhong Dam, completed in 2012, will be used in part to supply the large coalfield and power station at Mmamabula via the NSC pipeline.
Plan
The Botswana National Water Master Plan (NWMP) identified promising sites for reservoirs in the northeast on the small, ephemeral Motloutse and Shashe tributaries of the Limpopo River. The North-South Carrier Water Project was launched to build a pipeline that would carry water from these sites to the area of highest demand around Gaborone in the southeast.
A 1994 review of environmental assessments conducted for the Norwegian Agency for Development Cooperation, which provided some of the funding for the project, concluded that the impact of the pipeline would be tolerable. The pipe would be buried. Native vegetation would soon regenerate along the route if the topsoil and subsoil were carefully removed and replaced without mixing. Plans for construction of the high rock-filled Letsibogo Dam on the Motloutse River also included careful environmental impact assessment studies.
The impact of the Letsibogo reservoir on an ecology that has not been carefully studied would be greater.
It would both destroy and create habitat. The review was cautious in its conclusions about the net impact.
The review said "the socio-economic and archaeological issues seem to have been handled in a particularly outstanding way".
The plan was divided into two phases. The Letsibogo Dam would be built in Phase 1, with a pipeline to carry the raw water south to a treatment plant and master balancing reservoir at Mmamashia, northeast of Gaborone. An early version of the plan used the existing Bokaa Dam as the reservoir, but it was decided instead to build a covered reservoir closer to Gaborone to minimise the loss of water through evaporation. A second dam, the Dikgatlhong Dam, would be built on the Shashe River in Phase 2.
A second pipeline running parallel to the first would carry the water to the same treatment plant and reservoir near Gaborone.
The Phase 1 pipeline transported water from the Letsibogo Dam along the eastern road and rail corridor to Gaborone.
The pipeline plan included four pumping stations and a water treatment plant at the terminus just north of Gaborone.
The pipeline was to have pumping stations at Letsibogo, Moralane, Palapye and Serorome Valley.
The Serorome station was later deferred to a future upgrade.
There would be break-pressure tanks at Moralane, Thoti Hill, Mameno and Lose Hill.
Towns and large villages along the route would be fed by raw water taken from the pipeline at Palapye, supplying Moropule and Serowe, and at Mahalapye, supplying Kalamare and Shoshong. Water from wellfields would be injected into the pipeline at Palla Road and Mmamabula, and water would also be injected from the Bokaa Dam.
Construction
NSC-1
The Letsibogo Dam was designed for the Ministry of Minerals, Energy and Water Resources by Arup, who also supervised construction of the water storage embankment and central clay-core dam. Letsibogo has a storage capacity of 100,000,000 cubic metres (3.5×10⁹ cu ft).
J. Burrow provided engineering services including designs, contract documents, managing the tendering process and managing construction of the NSC-1 pipeline.
NSC-1 was built with a range of pipe diameters.
The pipe was made of alternating sections of glass-reinforced plastic (fiberglass) and steel.
It was placed in a trench, bedded in sand and buried, within a wide easement corridor.
The project included installing the pipeline itself, as well as pumping stations, water treatment plants, storage and balancing reservoirs, measurement and control systems and infrastructure. Construction took five years.
The North South Carrier Scheme cost about US$350 million, and started operation in 2000.
There were problems in laying the glass-reinforced piping, which caused the original January 1999 target completion date to be missed. A revised target date of June 2000 was also missed, with further delays caused by failures of the pipeline and pumping station equipment. These caused cost increases from the original estimate of P1,200 million to around P1,500 million.
Since opening, NSC-1 has had ongoing reliability problems.
In April 2012 a man who was prospecting for minerals entered the pipeline corridor and caused the pipe to burst,
sending a stream of water pouring into the surrounding land to form a deep crater.
Water supplies in the region were cut off until repairs could be made.
NSC-2
In the original plans, NSC-2 would deliver 45,000,000 cubic metres (1.6×10⁹ cu ft) annually at a cost of P5.5 billion.
Construction of the Dikgatlhong Dam on the lower Shashe River began in March 2008 and was completed slightly ahead of schedule in December 2011.
This is a zoned earthfill structure, 41 metres (135 ft) high and 4.5 kilometres (2.8 mi) long, with a potential storage capacity of 400,000,000 cubic metres (1.4×10¹⁰ cu ft),
almost three times that of the Gaborone Dam.
The dam will start impounding the Shashe River during the 2012–2013 rainy season.
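The imperial equivalents quoted for these storage capacities can be checked with simple arithmetic. The short sketch below is illustrative only: the conversion factor (1 cubic metre ≈ 35.3147 cubic feet) is standard, but the Gaborone Dam capacity is an assumed figure used solely to test the "almost three times" comparison, since the text does not state it.

```python
# Rough arithmetic check of the reservoir figures quoted above.
CU_FT_PER_M3 = 35.3147  # standard conversion: cubic feet per cubic metre

letsibogo = 100_000_000    # m3, Letsibogo Dam capacity as stated
dikgatlhong = 400_000_000  # m3, Dikgatlhong Dam capacity as stated
gaborone = 141_000_000     # m3, Gaborone Dam capacity -- ASSUMED figure,
                           # not given in the text, for the ratio check only

print(f"Letsibogo:   {letsibogo * CU_FT_PER_M3:.2e} cu ft")    # ~3.53e9, i.e. 3.5x10^9
print(f"Dikgatlhong: {dikgatlhong * CU_FT_PER_M3:.2e} cu ft")  # ~1.41e10, i.e. 1.4x10^10
print(f"Dikgatlhong / Gaborone: {dikgatlhong / gaborone:.2f}") # ~2.84, "almost three times"
```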
The first portion of the NSC-2 pipeline, NSC-2A, will connect the Dikgatlhong Dam to the NSC 1 Break Pressure Tank 1 at Moralane.
With a troubled world economy, the Botswana government decided that between 2010 and 2016 it would focus on completing the NSC-2.1 section and upgrading NSC-1.
Construction of NSC-2.2 from Moralane up to Palapye would be deferred to the 2017–2022 budget period.
NSC-2.1 delivers water from a new storage reservoir at Palapye to a new reservoir at Mmamashia via two new pumping stations.
The NSC-1 upgrades would include introducing variable-speed drives at the existing pumping stations and installing a new pumping station, as well as upgrades to transfer links and treatment works at the south end of the pipeline.
Initial planning also started for NSC-3, another pipeline in the same corridor. The three independent pipelines would provide greater security and redundancy, although they would be operated using an integrated communication and control system.
In June 2012, stakeholders were told that construction of the NSC-2A pipeline to connect the Dikgatlhong Dam to the NSC was behind schedule.
This part of the project had started in October 2011 and was due for completion in October 2013.
The contractors, China State Construction Engineering Corp and the local Excavator Hire, had 350 employees, 75 of whom were Chinese.
The delay was caused by failure of a factory in Palapye to produce pipes of acceptable quality.
There were some concerns that further delays could occur if there were problems with blasting along the section from the Letsibogo Dam to the Moralane break-pressure tank and pumping station. Along this stretch, the new pipeline runs parallel to the NSC-1 pipeline, and great care must be taken to ensure no damage is done to the existing pipeline.
Zambezi potential
In the 1980s and early 1990s the Botswana and South African governments began discussing the possibility of drawing water from the Zambezi River and feeding it into the North-South Carrier. Some of this water could be passed on to South Africa. The two countries even speculated about "diverting the Zambezi River at Kazungula", a prospect that was not welcomed by the other members of the Southern African Development Community (SADC). Eventually the question of claims on the Zambezi water were settled by the 1995 SADC protocol on shared Watercourse Systems and establishment of the Zambezi River Authority. However, the commitment of member governments to honor the agreement seems weak and may not stand up to the pressures of climate change.
Under the agreement, the Botswana government has a large allocation of water from the Zambezi near Kasane.
The NWMP included plans for the Chobe/Zambezi Transfer scheme, taking about 495,000,000 cubic metres (400,000 acre-feet) annually from the Zambezi for use in agriculture by 2022. In a 2010 report, the Ministry of Minerals, Energy and Water Resources noted that Botswana might need more Zambezi water to meet expected urban demand by 2020. The ministry expected to implement the Chobe/Zambezi Transfer scheme earlier, and to link it up to the NSC.
Botswana had discussed plans to extract the water at various Zambezi Watercourse Commission meetings, and had received no objections.
The first phase of the project would deliver the water to the Pandamatenga area for agricultural use, and the second phase would carry water from Pandamatenga south to the NSC.
The pipeline would run via Francistown to join the NSC at Break Pressure Tank 1 (Moralane).
Its length would depend on the route selected.
The Botswana government notes that the pipeline development could serve the needs of neighboring countries.
The station that extracts water from the Zambezi could also supply a pipeline to Namibia.
Some of the water could be pumped from Francistown to Bulawayo in Zimbabwe.
Criticism
Transfer of water to meet the needs of thirsty regions like that around Gaborone may have negative impacts on the poor riparian communities that will lose water. It is possible that transferring water-intensive industries to water-rich regions may be a more cost-effective approach with lower impact on the environment.
The 1996 SADC agreement on power pooling may be seen as a model for this alternative approach.
Botswana's diamond reserves will not last forever, and international demand and prices are unpredictable.
Botswana must diversify the economy to make other businesses more profitable and to become more competitive in the regional economic zone.
The estimated US$120 million spent on Phase 1 of the North-South Carrier could perhaps have been better allocated to other projects, with the government charging more realistic rates to encourage consumers to reduce their water usage, and with more emphasis on efficient use of existing supplies.
Still, spending some of Botswana's diamond revenues on improved water supply is clearly popular among voters.
References
Notes
Citations
Sources
Infrastructure in Botswana
Freshwater pipelines
Interbasin transfer
Buildings and structures in Botswana
Water in Botswana | North-South Carrier | Environmental_science | 2,680 |
3,823,981 | https://en.wikipedia.org/wiki/Psychosocial | The psychosocial approach looks at individuals in the context of the combined influence that psychological factors and the surrounding social environment have on their physical and mental wellness and their ability to function. This approach is used in a broad range of helping professions in health and social care settings as well as by medical and social science researchers.
Background
Psychiatrist Dr Adolf Meyer in the late 19th century stated that: "We cannot understand the individual presentation of mental illness, [and perpetuating factors] without knowing how that person functions in the environment." Psychosocial assessment stems from this idea. The relationship between mental and emotional wellbeing and the environment was first commonly applied by Freudian ego-psychologist Professor Erik Erikson in his description of the stages of psychosocial development in his 1950 book Childhood and Society. Mary Richmond considered there to be a strict relationship between cause and effect, in a diagnostic process. In 1941 Gordon Hamilton renamed the existing (1917) concept of "social diagnosis" as "psychosocial study".
Psychosocial study was further developed by psychosocial therapist Professor Florence Hollis in 1964, with an emphasis on a treatment model. It stands in tension with social psychology, which attempts to explain social patterns within the individual. Problems that occur in one's psychosocial functioning can be referred to as "psychosocial dysfunction" or "psychosocial morbidity", referring to the lack of development, or atrophy, of the psychosocial self, often occurring alongside other dysfunctions that may be physical, emotional, or cognitive in nature. There is now a cross-disciplinary field of study, with organisations such as the Transcultural Psychosocial Organization (United Nations High Commissioner for Refugees) and the Association for Psychosocial Studies.
Psychosocial assessment and intervention
Psychosocial assessment considers several key areas related to psychological, biological, and social functioning and the availability of supports. It is a systematic inquiry that arises from the introduction of dynamic interaction; it is an ongoing process that continues throughout a treatment, and is characterized by the circularity of cause-effect/effect-cause. In assessment, the clinician/health care professional identifies the problem with the client, takes stock of the resources that are available for dealing with it, and considers the ways in which it might be solved from an educated hypothesis formed by data collection. This hypothesis is tentative in nature and goes through a process of elimination, refinement, or reconstruction in the light of newly obtained data.
There are five internal steps in assessment:
Data collection (relevant and current) of the problem presented.
Integrating collected facts with relevant theories.
Formulating a hypothesis (case theory) that gives the presented problem more clarity.
Hypothesis substantiation through exploration of the problem: life history of the client, etiology, personality, environment, stigmas, etc.
Further integration of newer facts identified in the treatment period and preparing a psychosocial report for psychosocial intervention.
Assessment includes psychiatric, psychological and social functioning, risks posed to the individual and others, problems requiring attention from any co-morbidity, and personal circumstances including family or other carers. Other factors are the person's housing, financial and occupational status, and physical needs. When categorized, assessments particularly include the life history of the client: data collection on living situation and finances, social history and supports, family history, coping skills, religious/cultural factors, trauma from systemic issues or abuse, and medico-legal factors (assessment of the client's awareness of legal documents, surrogate decision-making, power of attorney and consent). Components include: the resource assessment of psycho-spiritual strengths; substance abuse; coping mechanisms, styles and patterns (individual, family level, workplace, and use of social support systems); sleeping patterns; and the needs and impacts of the problem. Advanced clinicians incorporate individual scales, batteries and testing instruments in their assessments. In the late 1980s, psychologist Professor Hans Eysenck, in an issue of Psychological Inquiry, raised controversies about the assessment methods then in use, and this gave way to comprehensive bio-psycho-social assessment. This theoretical model sees behavior as a function of biological factors, psychological issues and the social context. Qualified healthcare professionals conduct the physiological part of these assessments. This emphasis on biology expands the field of approach for the client, and with the client, through the interaction of these disciplines, in a domain where mental illnesses are physical just as physical conditions have mental components. Likewise, the emotional is both psychological and physical.
The clinician's comprehension and set of judgments about the client's situation, the assessment through a theory of each case, predicts the intervention. Hence a good psychosocial assessment leads to a good psychosocial intervention that aims to reduce complaints and improve functioning related to mental disorders and/or social problems (e.g., problems with personal relationships, work, or school) by addressing the different psychological and social factors influencing the individual. For example, a psychosocial intervention for an older adult client with a mental disorder might include psychotherapy and a referral to a psychiatrist while also addressing the caregiver's needs in an effort to reduce stress for the entire family system as a method of improving the client's quality of life. Treatment for psychosocial disorders in a medical model usually only involve using drugs and talk therapy.
Psychosocial adaptation and support
Psychosocial adaptation is the process a person goes through to achieve good person-environment congruence, known as adjustment: a state of wisdom-oriented activity and psychosocial equilibrium. Psychosocial support is the provision of psychological and social resources to a person by a supporter, intended to benefit the receiver's ability to cope with the problems they face. The allocentric principle within social relationships that promote health and well-being moves individuals to aid victims of terminal illness, disaster, war, catastrophe or violence, and to foster the resilience of communities and individuals. It aims at easing the resumption of normal life, facilitating affected people's participation in their convalescence, and preventing pathological consequences of potentially traumatic situations. This may extend to informational and instrumental support.
See also
Psychosocial needs
Psychological trauma
Mental status examination
"Psychosocial", a song by Slipknot on their 2008 album All Hope Is Gone
References
Further reading
Haley, J. Problem-solving therapy. New York: Harper & Row. 1978.
Edward S. Neukrug, & R. Charles Fawcett (2006). Essentials of Testing and Assessment: A Practical Guide for Counselors, Social Workers, and Psychologists, 3rd Edition.
Froggett, Lynn and Richards, Barry (2002). Exploring the Bio-psychosocial. European Journal of Psychotherapy & Counselling, Vol. 5 (3). pp. 321–326. ISSN 1364-2537. DOI: 10.1080/1364253031000140115.
Manley, Julian (2010) From Cause and Effect to Effectual Causes: Can we talk of a philosophical background to psycho-social studies?. Journal of Psycho-Social Studies, 4 (1). pp. 65–87. ISSN 1478-6737
Hodge, D. (2001). Spiritual assessment: A review of major qualitative methods and a new framework for assessing spirituality. Social Work, 46(3), 203–214.
Karls, J., & Wandrei, K.E. (1992). The person-in-environment system for classifying client problems. Journal of Case Management, 1(3), 90–95.
David E. Ross, A Method for Developing a Biopsychosocial Formulation. Journal of Child and Family Studies 9(1):1-6. March 2000. DOI: 10.1023/A:1009435613679.
External links
Psychosocial assessment - Michigan State University
The Journal of Psychosocial Studies
Assessment Report Guidelines for Individuals - Veterans Affairs Canada
The Science of Success
The International Red Cross Reference Center for Psychosocial Support
Biopsychosocial Assessment Samples: 1, 2, 3 , 4.
Social psychology
Social work
Conformity | Psychosocial | Biology | 1,663 |
43,161,091 | https://en.wikipedia.org/wiki/Central%20polynomial | In algebra, a central polynomial for n-by-n matrices is a polynomial in non-commuting variables that is non-constant but yields a scalar matrix whenever it is evaluated at n-by-n matrices. That such polynomials exist for any square matrices was discovered in 1970 independently by Formanek and Razmyslov. The term "central" comes from the fact that the evaluation of a central polynomial has its image lying in the center of the matrix ring over any commutative ring. The notion has an application to the theory of polynomial identity rings.
Example: (xy − yx)² is a central polynomial for 2-by-2 matrices. Indeed, by the Cayley–Hamilton theorem, one has (xy − yx)² = −det(xy − yx)·I for any 2-by-2 matrices x and y, since the commutator xy − yx has trace zero.
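A quick numerical check can illustrate the example. The sketch below (illustrative only, not part of the article's sources) uses NumPy to evaluate (xy − yx)² at random 2-by-2 matrices and confirms that the result is always the scalar matrix −det(xy − yx)·I, which commutes with every 2-by-2 matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(5):
    x = rng.standard_normal((2, 2))
    y = rng.standard_normal((2, 2))
    c = x @ y - y @ x  # the commutator xy - yx, which has trace zero
    value = c @ c      # the central polynomial (xy - yx)^2 evaluated at x, y
    # Cayley-Hamilton for a trace-zero 2x2 matrix M gives M^2 = -det(M) * I
    expected = -np.linalg.det(c) * np.eye(2)
    assert np.allclose(value, expected)
    # A scalar matrix commutes with every 2x2 matrix, hence lies in the center
    z = rng.standard_normal((2, 2))
    assert np.allclose(value @ z, z @ value)

print("(xy - yx)^2 evaluated to a scalar matrix in every trial")
```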
See also
Generic matrix ring
References
Ring theory | Central polynomial | Mathematics | 166 |
26,950,856 | https://en.wikipedia.org/wiki/Registration%20of%20architects%20in%20the%20United%20Kingdom | In the United Kingdom, the Architects Act 1997 imposes restrictions on the use of the name, style or title "architect" in connection with a business or a professional practice, and for that purpose requires a statutory Register of Architects to be maintained. The Architects Registration Board constituted under the Act is responsible for Architects Registration in the United Kingdom and is required to publish the current version of the Register annually. Every person who is entitled to be registered under the Act has the right to be entered in the register. The act consolidated previous enactments originating with the Architects (Registration) Act 1931 (21 & 22 Geo. 5. c. 33) as amended by the Architects Registration Act 1938. It applies to England, Wales, Scotland and Northern Ireland.
Section 2 of the act prescribes that the board shall appoint and regulate the functions ascribed to the Registrar. The Act refers to the Registrar by the masculine pronoun in the singular, but by the usual rules of statutory interpretation, this is not limited to an individual male person.
An amendment under the European Communities Act 1972 came into force on 20 June 2008.
The recurring controversy about whether statutory protection of title serves useful purposes has been intensified by the legislative impact of the EU Directive on Unfair Commercial Practices implemented in May 2008 by two Statutory Instruments under the European Communities Act 1972, namely No 1276 (Trade Descriptions) and No 1277 (Consumer Protection).
For the purposes of the Legislative and Regulatory Reform Act 2006, "regulatory function" is defined in subsection 32(2).
Use of the title "architect"
Under subsection 20(1) of the Architects Act 1997, a person in the United Kingdom may only practise or carry on business under any name, style or title containing the word "architect" if registered. There is no restriction on its use in any other circumstance. The words in the current Act follow those of the 1938 Architects Registration Act under which it was decided that the use of the suffix "FRIBA" in business notepaper constituted an infringement.
By subsection 20(3) corporate bodies, firms or partnerships can carry on business under a name, style or title containing the word "architect" provided that (in broad terms) the architectural business is run by a registered person. However the statutory registration Board may (by rules made under subsection 20(4) – see General Rules, Rule 25) effectively limit the application of subsection 20(3) to those corporate bodies firms or partnerships who have supplied information necessary for determining whether the architectural business is run by a registered person.
The rule-making power under subsection 20(4) appears to be limited to prescribing particular information to be provided to the Board, viz. "such information necessary for determining whether [subsection 20(3)] applies". The subsection makes no provision for levying any fee.
Background to legislation
Up to the 1990s
Opinions had been divided for well over a century about the merits of statutory registration of architects in the United Kingdom. The result was that Parliament, as the legislator and guided by the government of the day, has had to maintain a state of benevolent neutrality among the holders of these contending views, consistent with more general public policies for business competition, employment, professional education and so on. In relation to statutory protection of title, three aspects of the field in which architects practise invite examination. In summary:
The design quality of the built environment: this is essentially a cultural concern which was and remains one of the principal reasons for the formation and continuance of the Royal Institute of British Architects as a chartered body. It has connotations not only for the United Kingdom but world wide. It is beyond the ambit of statutory protection of title.
The technical sufficiency of buildings: the public interest is secured in the United Kingdom under Building Regulations and other enactments. This too is beyond the statutory protection of the title "architect".
The business of architectural practice: contracts of engagement for professional services are always between a business entity (whether individual, firm, partnership, or company) and the client, and are governed by the general law, including consumer protection legislation where applicable. Protection of the title ‘architect’ for business entities is of no practical relevance for securing the performance of architectural services.
In the light of experience since the inception of the Register under the 1931 Act, and more particularly under the Architects Registration Board’s regime from 1997, the recurring question has been whether protection of title serves useful purposes in respect of the three aspects mentioned above. The question of obsolescence has been further intensified by the EU Unfair Commercial Practices Directive, effective from 2007.
Statutory registration had its origin within the architectural profession in the latter part of the nineteenth century. It was then (as now) a matter of controversy. However, by 1905 the RIBA had established a policy to secure satisfactory training of architects by statutory means.
The basis of the policy (on registration) had always been that the profession was governed by voluntary associations of practising architects and that the profession would retain control of registration. This was reflected in the composition of the registration body (the Architects' Registration Council of the United Kingdom – ARCUK) established by the 1931 Act. Shortly after, in the book published on the occasion of the Institute's centenary celebration in 1934, in the concluding paragraphs of the chapter on statutory registration, Harry Barnes FRIBA, Chairman of the Registration Committee, wrote:
" ... I do not conceive the purpose of the Registration Act to be that of protecting the Architectural profession. The interests of the Profession are of course legitimate but are best served by the Architectural Associations in which some 80 per cent of those practising architecture are to be found.
"The object of the Registration Act is to ensure to the public that the architects they employ possess capacity and character.
"Under the purview of the Board of Architectural Education no one will enjoy the title of 'Registered Architect' without giving evidence of his capacity, and under that of the Discipline Committee no one will retain the title whose character has been weighed in the balance and found wanting.
"The Architects' Registration Council of the United Kingdom can never, therefore, on this view be a rival of any Architectural Association and least of all of the Royal Institute of British Architects.
"The Architects' Registration Council stands at the gateway of the realm of Architectural practice, but within that realm the affairs of the Architect are best administered by those voluntary Associations to which he has allied himself and over the actions of which he has complete control."
After more than half a century times have changed and a regime of quite another kind has been installed under the 1997 Act.
From the 1990s
By the 1990s it was almost universally accepted that the time had come to bring the statutory Architects' Registration Council as it then was to an end. Opinion within the profession was divided among those who held that statutory registration should be discontinued altogether and those who held that the registration body should be reconstituted.
In the event the body was reconstituted as the Architects Registration Board (1996/1997 Acts). But it was only after the event that many in the profession came to appreciate the effect of the new requirement that the majority of the Board should be non-architects and appointed by the government, as the following quotation shows:
"Crucially, professional control of the Register was taken away by the government's decision which was realised in the 1996/97 Act. This had not been generally expected by those of the membership who before then had been in favour of continuing protection of the title 'architect'. The significance and effect of the change is now becoming more widely understood." (Report of the Royal Institute of British Architects Council's Task Group on the Architects Registration Board, September 2004.)
Statutory Registration – chronology of key events
1834 – The body which was to become the Royal Institute of British Architects (originally known as the Institute of British Architects in London) was formed by Thomas de Grey, 2nd Earl de Grey and several prominent architects. (After the grant of the charter it had become known as the Royal Institute of British Architects in London, eventually dropping the reference to London in 1892.)
1837 – It was granted its Royal Charter under William IV.
1884 – Society of Architects formed, after a campaign by a group of ARIBA to be allowed to vote on RIBA affairs had been resisted by FRIBA.
1887 – Architects and Engineers Registration Act Committee formed as an independent committee to promote a bill for registration of architects, engineers and surveyors. The bill was withdrawn after chief bodies representing engineers petitioned against it.
1889 & 1891 – Architects Registration Bill Committee put forward bills for registration of architects, which were strongly supported by the Society of Architects but opposed by an independent group of prominent architects and artists.
1892 – Papers published, defining the profession of architecture:
Norman Shaw and T G Jackson (eds.) "Architecture, A Profession or an Art".
William H White "The Architect and his artists, an essay to assist the public in considering the question is architecture a profession or an art".
1902 – Architects Registration Bill Committee amalgamated with the Society of Architects as a joint Registration Committee.
1905 – RIBA Education Policy was adopted for statutory powers to secure satisfactory training for architects by way of registration of title, by and through the RIBA.
1908 – RIBA Licentiate Class formed, for architects who could show evidence of competence, without exams. On closure in 1913, over 2000 had been accepted.
1924–1959 – RIBA Standing Registration Committee.
1925 – Amalgamation of RIBA and Society of Architects: most of the Society of Architects members transferred to Licentiate class, which was reopened.
1927 – RIBA Registration Committee has draft bill introduced in Parliament, but opposed by the Incorporated Association of Architects and Surveyors and the Faculty of Architects and Surveyors.
1931 – Bill recast and enacted as the Architects (Registration) Act 1931, enabling the Register of Architects to be established under a statutory body called the Architects' Registration Council of the United Kingdom (ARCUK). The Council was to be made up of representatives of all architectural bodies in the United Kingdom in proportion to the numbers of their memberships on the Register, and representatives from government departments and related professional bodies. Under ARCUK, the RIBA system of exams etc. was accepted for registration. (The provisions of the Act constituting the Board of Architectural Education were repealed when ARCUK was reconstituted as ARB in 1996/7.)
1937 – A letter is sent by "the president of the council" [sic] to the Institute of Chartered Surveyors recognizing the fact that there was nothing in the Bill which the council was then promoting (and which subsequently became the Architects Registration Act 1938) to interfere with the activities of registered architects. The letter is mentioned by Lord Goddard, Lord Chief Justice, in the course of his judgment in the Queen's Bench Divisional Court in 1957, allowing the appeal of an architect (Hughes) against a professional misconduct decision of the Discipline Committee of ARCUK (later renamed the Architects Registration Board), a case which came to be cited in later cases and in legal textbooks as a judicial precedent.
1938 – The Architects Registration Act 1938 changed the protected title from "Registered Architect" to "Architect".
1957 – Resulting from the 1938 legislation, an appeal by an architect against ARCUK is allowed by the High Court in a case which becomes a judicial precedent, Hughes v. Architects' Registration Council of the United Kingdom: "It is not of itself disgraceful to disagree with a majority view and to act accordingly. It is only if a man has bound himself in honour to accept that view and to act according to the code that a deliberate breach of the code for his own profit can be called disgraceful."—Devlin J. (Patrick Devlin, Baron Devlin)
1992 – Government, in response to a request from ARCUK, commissioned review of the Architects Registration Acts by an independent assessor (John Warne).
1993 – Warne Report published – principal recommendation: abolition of protection of title "architect" and disbanding of ARCUK. RIBA Council initially supported this recommendation, but this was resisted by the RIBA membership. As a result RIBA campaigned for the retention of protection of title with a "stream-lined" registration board.
1996 – Part III of Housing Grants, Construction and Regeneration Act 1996, among other things, reconstituted the registration body as the Architects Registration Board (ARB).
1997 – Architects Act 1997, a consolidating act, brought together the provisions of Part III of the 1996 act and previous registration legislation. The Architects Registration Board then established with a majority of appointed lay members and a minority of elected Architect members.
2008 – Amendments made in June 2008 by Statutory Instrument established rules for the recognition of professional qualifications enabling migrants from the European Economic Area or Switzerland to register as architects in the United Kingdom. It also set out provisions for facilitating temporary and occasional professional services cross-border.
The present legislation
Summary of legislative history
The following analysis of the operative and other parts of the Architects Act 1997 as it was before the amendment of June 2008 pays attention to details which sometimes go unnoticed.
The Act is fairly short. That is partly due to its conciseness, but this quality makes it all the more necessary to remember that the Act must be read as a whole to ascertain the meaning and effect of its various parts. Care is needed not to read into it what is not there (whatever conventional wisdom may have supposed or desired), and not to fail to notice what actually is there. In particular, like many such Acts, it can be better understood by looking at its beginning (Arrangement of Sections and long title) and its end (derivations), as well as the operative part in between.
Its long title is "An Act to consolidate the enactments relating to architects". The Table of Derivations printed at the end lists the enactments which it consolidated; and Schedule 3 lists the originating and two amending Acts which it repealed, namely: the Architects (Registration) Act 1931, the Architects Registration Act 1938 and parts of the Housing Grants, Construction and Regeneration Act 1996.
The unbroken continuity of these enactments is shown by paragraph 19(2)(a) of Schedule 2:
"the Council" means the Architects Registration Council of the United Kingdom established under the 1931 Act, which was renamed as the Board by section 118(1) of the 1996 Act.
The changes made by the 1996 Act to the originating Act as amended can be deduced from the Table of Derivations. This also shows that, for the purpose of the consolidation, certain definitions were inserted in the "Interpretation" section. These included one to make clear that where there is a reference to "unacceptable professional conduct", it has the same meaning as it has in section 14 (not vice versa). In subsection 14(1) the phrase is expanded as: "conduct which falls short of the standard required of a registered person".
The "burdens" and "choices"
The scheme of the consolidation Act of 1997 is identical with that of the originating Act of 1931, as amended. It is as follows:
It operates by imposing (under Part IV) one kind of burden, backed by threat of penal sanction, on persons carrying on business in the United Kingdom generally, but at the same time giving to one particular group of persons a statutory right to choose instead to submit to another kind of burden, and to another group of persons the statutory right to choose to submit to a third kind of burden.
Here, the first is described as the "general burden"; the other two as the "voluntary burdens"; and the freedom to choose the "statutory choices".
The general burden is a prohibition imposed on all persons, firms, partnerships and bodies corporate carrying on business in the United Kingdom, including architects and Chartered Architects. The prohibition is against practising or carrying on business under the title of "architect", with two exceptions, viz.:
In the case of an individual, s/he has opted for one of the voluntary burdens and is registered or enrolled under the Act.
In the case of a business entity (body corporate, firm or partnership) it has opted for the other of the voluntary burdens, in that its business so far as it relates to architecture satisfies the statutory requirement to be under the control, management and supervision of a registered person.
Here, the first of these is described as a "practice volunteer", and the second as a "business volunteer".
The statutory choice is exercisable:
In the case of a practice volunteer, by applying for registration in the Register of Architects and paying the annual retention fee (under sections 4 to 11), or by enrolling in the statutory list of visiting EEA architects (under section 12).
In the case of a business volunteer, by complying with the control, management and supervision requirements (of subsection 20(3)).
The voluntary burdens which result from the exercise of the statutory choice are as follows:
In the case of a registered person, to submit to the regime imposed on registered persons under sections 4 to 11 of Part II (payment of fees and satisfying requirements about qualifications and competence) and Part III – Discipline.
In the case of a person enrolled in the list of visiting EEA architects, to submit to the regime of section 12 and of Part III – Discipline.
In the case of a business volunteer, to submit to the regime for control, management and supervision imposed by subsection 20(3) and be liable to supply the Board with information showing compliance.
In consequence:
any person or business entity is free to supply services of the same kind as a registered person, but may only use the title "architect" in this connection subject to one of the voluntary burdens; and
a Chartered Architect may only use this title in the course of professional practice if s/he has opted to submit to one of the voluntary burdens, normally as a registered person.
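Taken together, the general burden and the statutory choices amount to a small decision rule about who may use the title. The following sketch encodes that rule schematically; it is a simplification for illustration only, not a statement of law, and the type and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    """Schematic model of a person or business entity (names are illustrative)."""
    is_individual: bool
    is_registered: bool = False          # on the Register of Architects (Part II)
    enrolled_visiting_eea: bool = False  # enrolled in the list of visiting EEA
                                         # architects (section 12)
    architecture_run_by_registered_person: bool = False  # subsection 20(3)

def may_use_title_architect(e: Entity) -> bool:
    """Schematic reading of section 20 of the Architects Act 1997."""
    if e.is_individual:
        # A "practice volunteer": registered or enrolled under the Act
        return e.is_registered or e.enrolled_visiting_eea
    # A "business volunteer": the architectural side of the business must be
    # under the control, management and supervision of a registered person
    return e.architecture_run_by_registered_person

# Anyone may supply services of the same kind; only use of the title is restricted.
print(may_use_title_architect(Entity(is_individual=True)))  # False: unregistered
print(may_use_title_architect(Entity(is_individual=True, is_registered=True)))  # True
print(may_use_title_architect(
    Entity(is_individual=False, architecture_run_by_registered_person=True)))  # True
```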
Side effects
A side effect of the Act is the imposition of burdens on third parties under Part II, namely, Schools of Architecture, but the effect of the changes made by the 1996 Act is that Schools of Architecture have disappeared from the legislation without trace. The result has been a certain amount of wrangling between the Schools, the ARB and the RIBA which is the principal professional body, whose concerns inevitably include architectural education.
Another side effect has been a claim by the ARB to be able to impose on registered persons certain requirements about Professional Indemnity Insurance.
The Board and its duties
The membership of the Board is the result of one of the changes made by the 1996 Act to the registration body's previous constitution when its name had been the Architects' Registration Council of the United Kingdom. The Act abolished the Board of Architectural Education, renamed the Council as the Board, and made this Board consist predominantly of persons appointed by the Privy Council.
Subsection 3(4) states "The Board shall publish the current version of the Register annually...". Provisions of Part II of the Act prescribe how the Register shall be kept up to date, and who shall be entitled to be registered; other provisions of Parts II and III prescribe for the Board the circumstances, events or conditions when a person's name shall be removed from the Register; and other provisions prescribe for the Board certain ancillary, or derivative and secondary, duties in connection with the Board's primary responsibility for the maintenance and regular publication of the Register of Architects.
Apart from officers, employees and agents of the Board, the Act creates no duties or obligations towards the Board which fall on any one else at all; and nothing in the Act itself creates any obligation which an architect owes to the Board.
Not delegated
Section 20 of the Architects Act 1997 mentions "architecture" (subsection (3)(b)) and the "services" of a person enrolled under the Act (subsection (5)). These are not defined by the Act, and it is quite obvious that Parliament has not delegated to the Board the power to define them. In practice, as technology and the building process continue to evolve new specialisms, the concept of architecture can be seen as becoming ever more fluid, extensive and comprehensive, and at the same time becoming narrower while ever more ancillary specialities are identified.
The Board's Professional Conduct Committee
Sanctions are available to the Board under Part III of the Act against any person on the register who is found guilty under section 15 of unacceptable professional conduct or serious professional incompetence, or else has been convicted of a criminal offence which has relevance to fitness to practise.
Although a criminal conviction is an objective criterion, no statutory definition is given of the level of professional misconduct or incompetence that will attract a sanction, judgment in the matter being given to the Board's Professional Conduct Committee, subject to common sense, reason and judicial review.
The PCC is constituted under the Act, Schedule 1 Part II, as amended. The Committee includes members of the Board, both elected and appointed, as well as persons appointed by the Board and nominees of the President of the Law Society. A bare quorum of the Committee meets for a disciplinary hearing, comprising a nominee of the President of the Law Society (invariably a solicitor), a person from the Register and another not on the Register. PCC members are paid for their service.
Categorisation
The Architects Registration Board is admittedly not a professional body or society in the sense explained in Wikipedia article "British Professional Bodies". The question whether or not it is a "regulatory agency" was publicly considered in 2003. The June 2003 issue of the RIBA Journal included the following:
"The words 'regulator' and 'regulation' are not used in the Act and that status is not conferred upon the Board. It could be argued that the Board's assumption of this role is adverse to the public interest... A contrary proposition to the Board's claim is that an essential characteristic of a regulatory body in this context is to have jurisdiction or control over particular functions or activities in the supply of goods or services... whether or not the regulatory method is in conjunction with a system of certificating or licensing (such as applies to solicitors or places of entertainment)... The Board's claim to regulatory status appears to be the result of want of understanding how it can usefully go about fulfilling the services that have been assigned to it by statute for the public benefit..."''
In November 2003 the Architects Registration Board published a summary of a barrister's opinion which included the following:
"The Board as 'Regulator'. It has been suggested that the Board is not a 'regulator' of the architects’ profession... The precise generic description that any individual chooses to give to the collection of statutory duties imposed upon, and the powers available to, the Board under the 1997 Act is in any event irrelevant for the purposes of the questions asked," [by the Board when obtaining this information for its own use] "for they largely involve issues of statutory interpretation which require the legislation to be construed and not given epithets."
Use of "Chartered Practice"
The Royal Institute of British Architects, which is a professional body (see Wikipedia category list British Professional Bodies), operates a voluntary "Chartered Practice" scheme. The use of the title "Chartered Practice" is authorised under Article 4.7 of the RIBA Charter (see reference below). The Institute's website explains that the scheme requires all architectural work of a Chartered Practice to be under the supervision of a Chartered Architect. A directory listing Chartered Practices has been published annually by the Institute since 2007.
Qualification for registration
The standard of qualification is not equal for all persons applying to be included in the Register of Architects.
The Registrar is bound, under section 4 of the Architects Act, to register any person who applies for registration if that person has "such qualifications and has gained such practical experience as may be prescribed".
An alternative route to registration is to satisfy the Board that an equivalent standard of competence has been achieved.
The disparity arises from the European Directive on Mutual Recognition of Qualifications in Architecture (85/384/EEC; see Article 4 of the main text for the required duration of training; the original Directive has since been updated). It is clearly stated in that Directive that "the total length of education and training shall consist of a minimum of either four years of full-time studies at a university or comparable educational establishment or at least six years of study at a university or comparable educational establishment of which at least three must be full time".
As this is a minimum requirement there is nothing to stop a country applying higher standards to those obtaining qualifications and experience within its own jurisdiction. However it is widely held (and expressed in the report of Michael Highton to the RIBA Council) that any challenge to this disparity is likely to succeed on the grounds of irrationality. The report stated:
"In our view it is only a matter of time before a UK student, who has been denied registration on the grounds that he or she has not passed the Part 3 examination, yet who has been educated in the UK and achieved two years practical experience in the UK, successfully challenges such a decision on the basis that it is irrational to require a UK based student to possess a higher level of qualification and experience than is required of a non-UK based student or architect."
If the Architects Registration Board reduced the registration requirement to four years of full-time study, there would be no reason for the RIBA to lower its own entry standard.
Break in annually produced copies of the Register
Printed copies of the Register had been published annually under the legislation since 1933. After a hiatus in that series, the Architects Registration Board made a new departure: in 2012 it announced an online version of the Register, which registered persons were invited to use for demonstrating their professionalism to members of the public and to non-registered competitors.
References
External links
ARB web site
Royal Institute of British Architects
Architectural education
Law of the United Kingdom | Registration of architects in the United Kingdom | Engineering | 5,318 |
29,468,127 | https://en.wikipedia.org/wiki/UDF%20423 | UDF 423 is the Hubble Ultra Deep Field (UDF) identifier for a distant spiral galaxy. With an apparent magnitude of 20, UDF 423 is one of the brightest galaxies in the HUDF and also has one of the largest apparent sizes in the HUDF.
Distance measurements
The "distance" of a far away galaxy depends on how it is measured. With a redshift of 1, light from this galaxy is estimated to have taken around 7.7 billion years to reach Earth. However, since this galaxy is receding from Earth, the present comoving distance is estimated to be around 10 billion light-years away. In context, Hubble is observing this galaxy as it appeared when the Universe was around 4.9 billion years old.
See also
List of Deep Fields
References
Fornax
Spiral galaxies
Hubble Space Telescope
00423 | UDF 423 | Astronomy | 177 |
42,418,733 | https://en.wikipedia.org/wiki/Hay%27s%20test | Hay's test, also known as Hay's sulphur powder test, is a chemical test used for detecting the presence of bile salts in urine.
Procedure
Sulphur powder is sprinkled onto about three millilitres of urine in a test tube; if the test is positive, the sulphur powder sinks to the bottom of the test tube. The powder sinks because bile salts decrease the surface tension of the urine, which would otherwise support it at the surface.
References
Chemical tests | Hay's test | Chemistry | 93 |
4,859,035 | https://en.wikipedia.org/wiki/Drag%20area | In mechanics and aerodynamics, the drag area of an object represents the effective size of the object as it is "seen" by the fluid flow around it. The drag area is usually expressed as the product \(C_d A\), where \(A\) is a representative area of the object and \(C_d\) is the drag coefficient, which represents what shape it has and how streamlined it is.
The drag coefficient plays a role in the drag equation,
\[
F_d = \tfrac{1}{2}\,\rho\,v^2\,C_d A .
\]
Here, \(F_d\) is the drag force, \(\rho\) the density of the fluid, and \(v\) the speed of the object relative to the fluid.
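As a worked example (the numbers are illustrative, not drawn from any particular object), the equation can be evaluated directly; a sketch in Python:

rho = 1.2        # fluid density in kg/m^3, roughly that of air at sea level
v = 30.0         # speed relative to the fluid in m/s (illustrative)
drag_area = 0.6  # the product C_d * A in m^2 (illustrative)

# Drag equation: F_d = (1/2) * rho * v^2 * C_d * A
F_d = 0.5 * rho * v**2 * drag_area
print(F_d)       # 324.0 newtons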
See also
Drag (physics)
Automobile drag coefficient#Drag area
Zero-lift drag coefficient
Drag (physics) | Drag area | Chemistry | 130 |
6,139,325 | https://en.wikipedia.org/wiki/2%20euro%20coin | The 2 euro coin (€2) is the highest-value euro coin and has been used since the introduction of the euro (in its cash form) in 2002. The coin is made of two alloys: the inner part of nickel brass, the outer part of copper-nickel. All coins have a common reverse side and country-specific national sides. The coin has been used since 2002, with the present common side design dating from 2007.
The €2 coin is the only euro coin subject to legal-tender commemorative issues, and hence there is a large number of national sides, including several issues of identical commemorative sides by all eurozone members.
History
The coin dates from 2002, when euro coins and notes were introduced in the 12-member eurozone and its related territories. The common side was designed by Luc Luycx, a Belgian artist who won a Europe-wide competition to design the new coins. The designs of the one- and two-euro coins were intended to show the European Union (EU) as a whole with the then-15 countries more closely joined together than on the 10 to 50-cent coins (the 1-cent to 5-cent coins showed the EU as one, though intending to show its place in the world).
The national sides, then 15 (the eurozone states plus Monaco, San Marino and the Vatican, which could mint their own), were each designed according to national competitions, though to specifications which applied to all coins, such as the requirement to include twelve stars. National designs were not allowed to change until the end of 2008, unless a monarch (whose portrait usually appears on the coins) dies or abdicates. This happened in Monaco and the Vatican City, resulting in three new designs in circulation (the Vatican had an interim design until the new Pope was selected). National designs have seen some changes due to a new rule stating that national designs should include the name of the issuing country (neither Finland nor Belgium showed a country name, and hence both have made minor changes).
In 2004 the commemorative coins were allowed to be minted in six states (a short interim period was set aside so citizens could get used to the new currency). By 2007 nearly all states had issued a commemorative issue, and the first eurozone-wide commemorative was issued to celebrate the Treaty of Rome.
As the EU's membership has since expanded in 2004 and 2007, with further expansions envisaged, the common face of all euro coins from the value of 10 euro cents and above was redesigned in 2007 to show a new map. This map showed Europe, not just the EU, as one continuous landmass; however, Cyprus was moved west, as the map cut off after the Bosphorus (which was seen as excluding Turkey for political reasons). The redesign in 2007, rather than in 2004, was due to the fact that 2007 saw the first enlargement of the eurozone: the entry of Slovenia. Hence, the Slovenian design was added to the designs in circulation.
Cyprus and Malta joined in 2008, Slovakia in 2009 and Estonia in 2011, bringing four more designs. Also in 2009, the second eurozone-wide issue of a 2-euro commemorative coin was issued, celebrating ten years of the introduction of the euro. In 2012, the third eurozone-wide issue of a 2-euro commemorative coin was issued, celebrating ten years of euro coins and notes. In 2015, the fourth eurozone-wide issue for this denomination was issued, commemorating the 30th anniversary of the flag of Europe. Latvia, Lithuania and Croatia subsequently joined the eurozone in 2014, 2015 and 2023 respectively. Andorra began minting its own designs in 2014 after winning the right to do so.
Design
The coins are composed of two alloys. The inner circle is composed of three layers (nickel brass, nickel, nickel brass) and the outer ring of copper-nickel giving them a two colour (silver outer and gold inner) appearance. The diameter of the coins is 25.75 mm, the thickness is 2.20 mm and the mass is 8.5 grams. The coins' edges are finely milled with lettering, though the exact design of the edge can vary between states with some choosing to write the issuing state's name or denomination around the edge (see "edges" below). The coins have been used from 2002, though some are dated 1999 which is the year the euro was created as a currency, but not put into general circulation.
Reverse (common) side
The reverse (used from 2007 onwards) was designed by Luc Luycx and displays a map of Europe, not including Iceland and cutting off, in a semicircle, at the Bosporus, north through the middle of Ukraine and Belarus and through northern Scandinavia. Cyprus is located further west than it should be and Malta is shown disproportionally large so it appears on the map. The map has numerous indentations giving an appearance of geography rather than a flat design. Six fine lines cut across the map except where there is landmass and have a star at each end – reflecting the twelve stars on the flag of Europe. Across the map is the word EURO, and a large number 2 appears to the left hand side of the coin. The designer's initials, LL, appear next to Cyprus.
Luc Luycx designed the original coin, which was much the same except that the map depicted only the then 15 member states in their entirety, showing borders but no geographic features. The map was less detailed, and the lines the stars sit upon cut through where there would be landmass in eastern Europe if it were shown.
Obverse (national) sides
The obverse side of the coin depends on the issuing country. All have to include twelve stars (in most cases a circle around the edge), the engraver's initials, and the year of issue. New designs also have to include the name or initials of the issuing country. The side cannot repeat the denomination of the coin unless the issuing country uses an alphabet other than Latin (currently, Greece is the only such country, hence it engraves "2 ΕΥΡΩ" upon its coins). Austria also engraves "2 EURO" on the reverse of its coins.
Edges
The edges of the 2 euro coin vary according to the issuing state.
Planned designs
Austria, Germany and Greece will also at some point need to update their designs to comply with guidelines stating they must include the issuing state's name or initial, and not repeat the denomination of the coin.
In addition, there are several EU states that have not yet adopted the euro; some of them have already agreed upon their coin designs, but it is not known exactly when they will adopt the currency, and therefore these are not yet minted. See enlargement of the eurozone for expected entry dates of these countries. Latvia officially introduced the euro on 1 January 2014; its design for the 2 euro coin is similar to that of the 5 lati coin minted from 1929 to 1932.
Commemorative issues
Each state allowed to issue coins may also mint two commemorative coins each year (until 2012, it was one a year). Only €2 coins may be used in this way (for them to be legal tender) and there is a limit on the number that can be issued. The coin must show the normal design criteria, such as the twelve stars, the year and the issuing country. Not all states have issued their own commemorative coins, except in 2007, 2009 and 2012, when every then-eurozone state issued a common coin (with only different languages and country names used) to celebrate the 50th anniversary of the Treaty of Rome (1957–2007), the 10th anniversary of the euro (1999–2009) and the 10th anniversary of euro coins (2002–2012). Eurozone-wide issues do not count towards a state's two-a-year allowance. Germany has begun issuing one coin a year for each of its states (the German Bundesländer series), which will take it up to 2021.
Types of Commemorative €2 coins
There are several types of Commemorative €2 Coins:
Commemorative coins issued jointly by all eurozone countries
Commemorative coins issued by a single country
Commemorative coins issued by a number of countries
Commemorative coins that are issued jointly by all eurozone countries
So far, there have been five commemorative coins that the eurozone countries have issued jointly: the first, in March 2007, to commemorate the 50th anniversary of the Treaty of Rome; the second, in January 2009, to commemorate the tenth anniversary of the euro, with a coin called the "10th anniversary of Economic and Monetary Union of the European Union"; the third, in 2012, to commemorate ten years of euro coins and notes; the fourth, in 2015, to commemorate 30 years of the flag of Europe; and the fifth, in 2022, to commemorate 35 years of the Erasmus Programme.
There are €2 commemorative coins that have been issued on the same topic by different member states, two (by Belgium and Italy) to celebrate Louis Braille's 200th birthday, four (by Italy, Belgium, Portugal and Finland) to celebrate 60th anniversary of the Universal Declaration of Human Rights, two (by Germany and France) to commemorate 50 years of the Elysee Treaty (1963–2013) and three (by Lithuania, Latvia and Estonia) to commemorate the 100th anniversary of the foundation of the independent Baltic states.
Commemorative coins issued by a single country
As a rule, euro countries may each issue only two €2 commemorative coins per year. Exceptionally, they are allowed to issue another, provided that it is a joint issuance and commemorates events of European-wide importance.
Proposing a topic for a €2 Commemorative Coin
Role of the European Central Bank
Designing and issuing the coins is the competence of the individual euro countries. The ECB's role regarding the commemorative but also all other coins is to approve the maximum volumes of coins that the individual countries may issue.
"Unlike banknotes, euro coins are still a national competence and not the ECB's. If a euro area country intends to issue a €2 commemorative coin it has to inform the European Commission. There is no reporting by euro area countries to the ECB. The Commission publishes the information in the multilingual Official Journal of the EU (C series).
The Official Journal is the authoritative source upon which the ECB bases its website updates on euro coins. The reporting process, the translation into 22 languages and publishing lead to unavoidable delays. The coin pages on the ECB’s website cannot therefore always be updated as timely as users might wish.
If the ECB learns of a euro coin that has not yet featured in the Official Journal, only its image will be posted on the ECB’s website, with a brief statement that confirmation by the European Commission is pending."
Role of the Directorate-General for Economic and Financial Affairs
The website of the EU – DG for Economic and Financial Affairs is not specific on the topic of proposing themes for €2 commemorative coins.
The website of the European Central Bank, where the euro coins are described, is likewise not specific on the topic of proposing themes for €2 commemorative coins. It does not mention how the €2 commemorative coins that are in circulation today came about.
Similar coins
The coins were minted in several of the participating countries, many using blanks produced at the Birmingham Mint in Birmingham, England. A problem has arisen in differentiating coins made using similar blanks and minting techniques (a sketch of the kind of check a coin mechanism performs follows the list below).
The Turkish 1 New Lira coin (which was in circulation from 2005 until 2008) closely resembled the €2 coin in both weight and size, and both coins seem to be recognized and accepted by coin-operated machines as being a €2 coin; however, €2 is worth roughly 20 times as much as 1 Turkish lira. There are now some vending machines which have been upgraded to reject the 1 lira coin.
The 10 Thai baht coin, first minted in 1988, which is of similar shape and size to a €2 coin but worth around one-eighth of the value has recently been appearing in the coin boxes of vending machines throughout Europe and being given back as change in some smaller establishments.
The new 50 gapik coin of the Azerbaijani manat also looks like a €2 coin. The new coin set of the country contains other coins similar to some euro coins.
The Philippine ₱10 coin of the BSP series is also similar to the €2 coin, making it easy to pass off as a €2 coin in some establishments in the eurozone.
The Egyptian one pound coin is also similar to the €2 coin, making it easy to pass off as a €2 coin in some establishments in the eurozone. It is worth around 12–13 euro cents (about 1/16 of a €2 coin). It is slightly thicker, with a marginally smaller diameter, so in everyday exchanges the similarity is effectively misleading. Its use has been attested in Amsterdam.
The Mexican $5 coin is also similar to the 2 Euro coin. It is worth around 28 Euro Cents (1/7 of the 2 Euro coin).
The Canadian $2 coin or 'toonie', first minted in 1996, also bears a slight similarity to the €2 coin. The toonie, however, is 2 mm larger in diameter, 0.40 mm thinner, 1.5 g lighter, and features a larger outer ring. It is worth around €1.40.
The Polish 5 złotych coin is also similar in size and appearance; it is currently worth about €1.16.
The Indonesian Rp1000 coin, minted between 1993 and 2000, weighs 0.1 g more, has a diameter 0.25 mm larger and is 0.20 mm thinner. The coin is worth approximately €0.06 (about 1/30 of a €2 coin).
The South African R5 is also similar in appearance and size, and is worth around €0.40.
The Italian 500 lire coin, minted from 1982 to 2001, has a diameter 0.05 mm larger. The coin was worth approximately €0.25.
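The sketch below illustrates the kind of dimensional gating referred to above, using the €2 specification given in the Design section (25.75 mm diameter, 2.20 mm thickness, 8.5 g). The tolerances and the helper name are invented for illustration; real validators also test properties such as alloy conductivity, which is how upgraded machines reject look-alikes.

EU2 = {"diameter": 25.75, "thickness": 2.20, "mass": 8.5}   # specification values from above
TOL = {"diameter": 0.10, "thickness": 0.10, "mass": 0.15}   # illustrative tolerances only

def looks_like_two_euro(diameter_mm, thickness_mm, mass_g):
    # Accept only if every measured parameter falls within its tolerance band.
    sample = {"diameter": diameter_mm, "thickness": thickness_mm, "mass": mass_g}
    return all(abs(sample[k] - EU2[k]) <= TOL[k] for k in EU2)

print(looks_like_two_euro(25.75, 2.20, 8.5))   # True: matches the 2 euro specification
print(looks_like_two_euro(27.75, 1.80, 7.0))   # False: Canadian $2-like dimensions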
References
External links
Euro coins
Bi-metallic coins
Two-base-unit coins
Maps on coins | 2 euro coin | Chemistry | 2,884 |
3,888,930 | https://en.wikipedia.org/wiki/Single-cylinder%20engine | A single-cylinder engine, sometimes called a thumper, is a piston engine with one cylinder. This engine is often used for motorcycles, motor scooters, motorized bicycles, go-karts, all-terrain vehicles, radio-controlled vehicles, power tools and garden machinery (such as chainsaws, lawn mowers, cultivators, and string trimmers). Single-cylinder engines are made both as 4-strokes and 2-strokes.
Characteristics
Compared with multi-cylinder engines, single-cylinder engines are usually simpler and more compact. Due to the greater potential for airflow around all sides of the cylinder, air cooling is often more effective for single-cylinder engines than for multi-cylinder engines. This reduces the weight and complexity of air-cooled single-cylinder engines, compared with liquid-cooled engines.
Drawbacks of single-cylinder engines include a more pulsating power delivery through each cycle and higher levels of vibration. The uneven power delivery means that a single-cylinder engine often requires a heavier flywheel than a comparable multi-cylinder engine, resulting in relatively slower changes in engine speed. To reduce vibration, single-cylinder engines often make greater use of balance shafts than multi-cylinder engines, as well as more extreme methods such as a dummy connecting rod (for example in the Ducati Supermono). These balancing devices can reduce the benefits of single-cylinder engines regarding lower weight and complexity.
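The flywheel argument can be made quantitative with the standard machine-design relation for speed fluctuation; this is a general engineering formula, stated here for illustration rather than taken from any source cited in this article:

\[
\Delta E = I\,\omega^{2} C_{s}, \qquad C_{s} = \frac{\omega_{\max} - \omega_{\min}}{\omega},
\]

where \(\Delta E\) is the energy fluctuation per cycle, \(I\) the flywheel's moment of inertia, \(\omega\) the mean angular speed, and \(C_{s}\) the permissible coefficient of speed fluctuation. A four-stroke single delivers one power stroke every two revolutions, concentrating \(\Delta E\); holding \(C_{s}\) to an acceptable value then forces \(I\), and hence flywheel mass, upward.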
Most single-cylinder engines used in motor vehicles are fueled by petrol (and use a four-stroke cycle), however diesel single-cylinder engines are also used in stationary applications (such as the Lombardini 3LD and 15LD).
A variation known as the split-single makes use of two pistons which share a single combustion chamber.
Uses
Early motorcycles, automobiles and other applications such as marine engines all tended to be single-cylinder. The configuration is almost exclusively used in portable tools, along with garden machinery such as lawn mowers. Single cylinder engines also remain in widespread use in motorcycles, motor scooters, go-karts, auto rickshaws, and radio-controlled models. From 1921 to 1960, the Lanz Bulldog tractor used a large horizontally-mounted single cylinder two-stroke engine. However they are rarely used in modern automobiles and tractors, due to developments in engine technology.
Single cylinder engines remain the most common engine layout in motor scooters and low-powered motorcycles. The Honda Super Cub (the motor vehicle with the highest overall sales since its introduction in 1958) uses a four-stroke single-cylinder engine. There are also several single-cylinder sportbikes (such as the KTM 690 Duke R), dual-sport motorcycles (such as the BMW G650GS) and the classic-styled Royal Enfield 500 Bullet.
The Moto3 class in the MotoGP World Championship has used four-stroke single-cylinder engines since the class replaced two-strokes in 2012.
Other single-cylinder engines
Engines of other sorts, like the beam engine and certain types of Stirling engine, operate using one cylinder and thus can also be considered single-cylinder engines.
References
Motorcycle engines
Engines by cylinder layout | Single-cylinder engine | Technology | 635 |
18,910 | https://en.wikipedia.org/wiki/Markup%20language | A markup language is a text-encoding system which specifies the structure and formatting of a document and potentially the relationships among its parts. Markup can control the display of a document or enrich its content to facilitate automated processing.
A markup language is a set of rules governing what markup information may be included in a document and how it is combined with the content of the document in a way to facilitate use by humans and computer programs. The idea and terminology evolved from the "marking up" of paper manuscripts (e.g., with revision instructions by editors), traditionally written with a red pen or blue pencil on authors' manuscripts.
Older markup languages, which typically focus on typography and presentation, include Troff, TeX, and LaTeX.
Scribe and most modern markup languages, such as XML, identify document components (for example headings, paragraphs, and tables), with the expectation that technology, such as stylesheets, will be used to apply formatting or other processing.
Some markup languages, such as the widely used HTML, have pre-defined presentation semantics, meaning that their specifications prescribe some aspects of how to present the structured data on particular media. HTML, like DocBook, Open eBook, JATS, and many others, is based on the markup meta-languages SGML and XML. That is, SGML and XML allow designers to specify particular schemas, which determine which elements, attributes, and other features are permitted, and where.
A key characteristic of most markup languages is that they allow intermingling markup with document content such as text and pictures. For example, if a few words in a sentence need to be emphasized, or identified as a proper name, defined term, or another special item, the markup may be inserted between the characters of the sentence.
Etymology
The noun markup is derived from the traditional publishing practice called "marking up" a manuscript, which involves adding handwritten annotations in the form of conventional symbolic printer's instructions — in the margins and the text of a paper or a printed manuscript.
For centuries, this task was done primarily by skilled typographers known as "markup men" or "markers" who marked up text to indicate what typeface, style, and size should be applied to each part, and then passed the manuscript to others for typesetting by hand or machine.
The markup was also commonly applied by editors, proofreaders, publishers, and graphic designers, and indeed by document authors, all of whom might also mark other things, such as corrections, changes, etc.
Types of markup language
There are three main categories of electronic markup, articulated in Coombs, Renear, and DeRose (1987), and Bray (2003).
Presentational markup
The kind of markup used by traditional word-processing systems: binary codes embedded within document text that produce the WYSIWYG ("what you see is what you get") effect. Such markup is usually hidden from human users, even authors and editors. Properly speaking, such systems use procedural and/or descriptive markup underneath but convert it to "present" to the user as geometric arrangements of type.
Procedural markup
Markup is embedded in text which provides instructions for programs to process the text. Well-known examples include troff, TeX, and Markdown. It is assumed that software processes the text sequentially from beginning to end, following the instructions as encountered. Such text is often edited with the markup visible and directly manipulated by the author. Popular procedural markup systems usually include programming constructs, especially macros, allowing complex sets of instructions to be invoked by a simple name (and perhaps a few parameters). This is much faster, less error-prone, and more maintenance-friendly than re-stating the same or similar instructions in many places.
Descriptive markup
Markup is specifically used to label parts of the document for what they are, rather than how they should be processed. Well-known systems that provide many such labels include LaTeX, HTML, and XML. The objective is to decouple the structure of the document from any particular treatment or rendition of it. Such markup is often described as "semantic". An example of a descriptive markup would be HTML's <cite> tag, which is used to label a citation. Descriptive markup — sometimes called logical markup or conceptual markup — encourages authors to write in a way that describes the material conceptually, rather than visually.
There is a considerable blurring of the lines between the types of markup. In modern word-processing systems, presentational markup is often saved in descriptive-markup-oriented systems such as XML, and then processed procedurally by implementations. The programming in procedural-markup systems, such as TeX, may be used to create higher-level markup systems that are more descriptive in nature, such as LaTeX.
In recent years, several markup languages have been developed with ease of use as a key goal, and without input from standards organizations, aimed at allowing authors to create formatted text via web browsers, for example in wikis and in web forums. These are sometimes called lightweight markup languages. Markdown, BBCode, and the markup language used by Wikipedia are examples of such languages.
History of markup languages
GenCode
The first well-known public presentation of markup languages in computer text processing was made by William W. Tunnicliffe at a conference in 1967, although he preferred to call it generic coding. It can be seen as a response to the emergence of programs such as RUNOFF that each used their own control notations, often specific to the target typesetting device. In the 1970s, Tunnicliffe led the development of a standard called GenCode for the publishing industry and later was the first chairman of the International Organization for Standardization committee that created SGML, the first standard descriptive markup language. Book designer Stanley Rice published speculation along similar lines in 1970.
Brian Reid, in his 1980 dissertation at Carnegie Mellon University, developed the theory and a working implementation of descriptive markup in actual use. However, IBM researcher Charles Goldfarb is more commonly seen today as the "father" of markup languages. Goldfarb hit upon the basic idea while working on a primitive document management system intended for law firms in 1969, and helped invent IBM GML later that same year. GML was first publicly disclosed in 1973.
In 1975, Goldfarb moved from Cambridge, Massachusetts to Silicon Valley and became a product planner at the IBM Almaden Research Center. There, he convinced IBM's executives to deploy GML commercially in 1978 as part of IBM's Document Composition Facility product, and it was widely used in business within a few years.
SGML, which was based on both GML and GenCode, was an ISO project worked on by Goldfarb beginning in 1974. Goldfarb eventually became chair of the SGML committee. SGML was first released by ISO as the ISO 8879 standard in October 1986.
troff and nroff
Some early examples of computer markup languages available outside the publishing industry can be found in typesetting tools on Unix systems such as troff and nroff. In these systems, formatting commands were inserted into the document text so that typesetting software could format the text according to the editor's specifications. It was a trial and error iterative process to get a document printed correctly. Availability of WYSIWYG ("what you see is what you get") publishing software supplanted much use of these languages among casual users, though serious publishing work still uses markup to specify the non-visual structure of texts, and WYSIWYG editors now usually save documents in a markup-language-based format.
TeX
Another major publishing standard is TeX, created and refined by Donald Knuth in the 1970s and '80s. TeX concentrated on the detailed layout of text and font descriptions to typeset mathematical books. This required Knuth to spend considerable time investigating the art of typesetting. TeX is mainly used in academia, where it is a de facto standard in many scientific disciplines. A TeX macro package known as LaTeX provides a descriptive markup system on top of TeX, and is widely used both among the scientific community and the publishing industry.
Scribe, GML, and SGML
The first language to make a clean distinction between structure and presentation was Scribe, developed by Brian Reid and described in his doctoral thesis in 1980. Scribe was revolutionary in a number of ways, introducing the idea of styles separated from the marked-up document, and a grammar that controlled the usage of descriptive elements. Scribe influenced the development of Generalized Markup Language (later SGML), and is a direct ancestor to HTML and LaTeX.
In the early 1980s, the idea that markup should focus on the structural aspects of a document and leave the visual presentation of that structure to the interpreter led to the creation of SGML. The language was developed by a committee chaired by Goldfarb. It incorporated ideas from many different sources, including Tunnicliffe's project, GenCode. Sharon Adler, Anders Berglund, and James A. Marke were also key members of the SGML committee.
SGML specified a syntax for including the markup in documents, as well as one for separately describing what tags were allowed, and where (the Document Type Definition (DTD), later known as a schema). This allowed authors to create and use any markup they wished, selecting tags that made the most sense to them and were named in their own natural languages, while also allowing automated verification. Thus, SGML is properly a meta-language, and many particular markup languages are derived from it. From the late '80s onward, most substantial new markup languages have been based on the SGML system, including for example TEI and DocBook. SGML was promulgated as an International Standard by International Organization for Standardization, ISO 8879, in 1986.
SGML found wide acceptance and use in fields with very large-scale documentation requirements. However, many found it cumbersome and difficult to learn — a side effect of its design attempting to do too much and being too flexible. For example, SGML made end tags (or start-tags, or even both) optional in certain contexts, because its developers thought markup would be done manually by overworked support staff who would appreciate saving keystrokes.
HTML
In 1989, computer scientist Sir Tim Berners-Lee wrote a memo proposing an Internet-based hypertext system, then specified HTML and wrote the browser and server software in the last part of 1990. The first publicly available description of HTML was a document called "HTML Tags", first mentioned on the Internet by Berners-Lee in late 1991. It describes 18 elements comprising the initial, relatively simple design of HTML. Except for the hyperlink tag, these were strongly influenced by SGMLguid, an in-house SGML-based documentation format at CERN, and very similar to the sample schema in the SGML standard. Eleven of these elements still exist in HTML 4.
Berners-Lee considered HTML an SGML application. The Internet Engineering Task Force (IETF) formally defined it as such with the mid-1993 publication of the first proposal for an HTML specification: "Hypertext Markup Language (HTML)" Internet-Draft by Berners-Lee and Dan Connolly, which included an SGML Document Type Definition to define the grammar. Many of the HTML text elements are found in the 1988 ISO technical report TR 9537 Techniques for using SGML, which in turn covers the features of early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS (Compatible Time-Sharing System) operating system. These formatting commands were derived from those used by typesetters to manually format documents. Steven DeRose argues that HTML's use of descriptive markup (and the influence of SGML in particular) was a major factor in the success of the Web, because of the flexibility and extensibility that it enabled. HTML became the main markup language for creating web pages and other information that can be displayed in a web browser and is likely the most used markup language in the world today.
XML
XML (Extensible Markup Language) is a meta markup language that is very widely used. XML was developed by the World Wide Web Consortium in a committee created and chaired by Jon Bosak. The main purpose of XML was to simplify SGML by focusing on a particular problem — documents on the Internet. XML remains a meta-language like SGML, allowing users to create any tags needed (hence "extensible") and then describing those tags and their permitted uses.
XML adoption was helped because every XML document can be written in such a way that it is also an SGML document, and existing SGML users and software could switch to XML fairly easily. However, XML eliminated many of the more complex features of SGML to simplify implementation environments such as documents and publications. It appeared to strike a happy medium between simplicity and flexibility, as well as supporting very robust schema definition and validation tools, and was rapidly adopted for many other uses. XML is now widely used for communicating data between applications, for serializing program data, for hardware communications protocols, vector graphics, and many other uses as well as documents.
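A small illustration of this extensibility, as a sketch using Python's standard xml.etree.ElementTree module; the tag names are invented for the example, since XML itself imposes only well-formedness, not any particular vocabulary:

import xml.etree.ElementTree as ET

# The <recipe>, <title> and <ingredient> tags below exist only in this
# example; any well-formed tag set would parse the same way.
doc = ET.fromstring(
    "<recipe><title>Flatbread</title>"
    "<ingredient qty='500' unit='g'>flour</ingredient></recipe>"
)
print(doc.find("title").text)                  # Flatbread
print(doc.find("ingredient").attrib["qty"])    # 500

A schema language such as a DTD or XML Schema can then be layered on top to state which of these invented tags are permitted and where.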
XHTML
From January 2000 until HTML 5 was released, all W3C Recommendations for HTML have been based on XML, using the abbreviation XHTML (Extensible HyperText Markup Language). The language specification requires that XHTML Web documents be well-formed XML documents. This allows for more rigorous and robust documents, by avoiding many syntax errors which historically led to incompatible browser behaviors, while still using document components that are familiar with HTML.
One of the most noticeable differences between HTML and XHTML is the rule that all tags must be closed: empty HTML tags such as <br> must either be closed with a regular end-tag, or replaced by a special form: <br /> (the space before the '/' on the end tag is optional, but frequently used because it enables some pre-XML Web browsers, and SGML parsers, to accept the tag). Another difference is that all attribute values in tags must be quoted. Both these differences are commonly criticized as verbose but also praised because they make it far easier to detect, localize, and repair errors. Finally, all tag and attribute names within the XHTML namespace must be lowercase to be valid. HTML, on the other hand, was case-insensitive.
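The practical effect of the well-formedness rule can be demonstrated with any XML parser; here is a minimal sketch using Python's standard library, with invented fragments:

import xml.etree.ElementTree as ET

# Well-formed: the empty element is explicitly closed.
ET.fromstring("<p>line one<br/>line two</p>")

# Legal in HTML, but not well-formed XML: the <br> is never closed.
try:
    ET.fromstring("<p>line one<br>line two</p>")
except ET.ParseError as err:
    print("rejected:", err)   # reported as a mismatched tag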
Other XML-based applications
Many XML-based applications now exist, including the Resource Description Framework as RDF/XML, XForms, DocBook, SOAP, and the Web Ontology Language (OWL). For a partial list of these, see List of XML markup languages.
Features of markup languages
A common feature of many markup languages is that they intermix the text of a document with markup instructions in the same data stream or file. This is not necessary; it is possible to isolate markup from text content, using pointers, offsets, IDs, or other methods to coordinate the two. Such "standoff markup" is typical for the internal representations that programs use to work with marked-up documents (a short sketch of standoff markup follows the discussion of the example below). However, embedded or "inline" markup is much more common elsewhere. Here, for example, is a small section of text marked up in HTML:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>My test page</title>
</head>
<body>
<h1>Mozilla is cool</h1>
<img src="images/firefox-icon.png" alt="The Firefox logo: a flaming fox surrounding the Earth.">
<p>At Mozilla, we’re a global community of</p>
<ul> <!-- changed to list in the tutorial -->
<li>technologists</li>
<li>thinkers</li>
<li>builders</li>
</ul>
<p>working together to keep the Internet alive and accessible, so people worldwide can be informed contributors and creators of the Web. We believe this act of human collaboration across an open platform is essential to individual growth and our collective future.</p>
<p>Read the <a href="https://www.mozilla.org/en-US/about/manifesto/">Mozilla Manifesto</a> to learn even more about the values and principles that guide the pursuit of our mission.</p>
</body>
</html>
The codes enclosed in angle-brackets <like this> are markup instructions (known as tags), while the text between these instructions is the actual text of the document. The codes h1, p, and em are examples of semantic markup, in that they describe the intended purpose or the meaning of the text they include. Specifically, h1 means "this is a first-level heading", p means "this is a paragraph", and em means "this is an emphasized word or phrase". A program interpreting such structural markup may apply its own rules or styles for presenting the various pieces of text, using different typefaces, boldness, font size, indentation, color, or other styles, as desired. For example, a tag such as "h1" (header level 1) might be presented in a large bold sans-serif typeface in an article, or it might be underscored in a monospaced (typewriter-style) document – or it might simply not change the presentation at all.
In contrast, the i tag in HTML 4 is an example of presentational markup, which is generally used to specify a particular characteristic of the text without specifying the reason for that appearance. In this case, the i element dictates the use of an italic typeface. However, in HTML 5, this element has been repurposed with a more semantic usage: to denote a span of text in an alternate voice or mood, or otherwise offset from the normal prose in a manner indicating a different quality of text. For example, it is appropriate to use the i element to indicate a taxonomic designation or a phrase in another language. The change was made to ease the transition from HTML 4 to HTML 5 as smoothly as possible so that deprecated uses of presentational elements would preserve the most likely intended semantics.
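The standoff markup mentioned at the start of this section can also be made concrete. In the sketch below (a hypothetical Python representation, not any particular standard), the text is stored unmodified and the markup lives in a separate list of (start, end, label) records that point into it by character offset:

text = "Mozilla is cool"

# Standoff annotations reference the text by offset rather than being
# embedded between its characters; unlike inline tags, they may overlap.
annotations = [
    (0, 15, "h1"),          # the whole string is a first-level heading
    (0, 7, "proper-name"),  # "Mozilla" is labelled as a proper name
]

for start, end, label in annotations:
    print(f"{label}: {text[start:end]!r}")
# h1: 'Mozilla is cool'
# proper-name: 'Mozilla'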
The Text Encoding Initiative (TEI) has published extensive guidelines for how to encode texts of interest in the humanities and social sciences, developed through years of international cooperative work. These guidelines are used by projects encoding historical documents, the works of particular scholars, periods, genres, and so on.
Language
While the idea of markup language originated with text documents, there is increasing use of markup languages in the presentation of other types of information, including playlists, vector graphics, web services, content syndication, and user interfaces. Most of these are XML applications because XML is a well-defined and extensible language.
The use of XML has also led to the possibility of combining multiple markup languages into a single profile, like XHTML+SMIL and XHTML+MathML+SVG.
See also
ADDML
Comparison of document markup languages
Curl (programming language)
HTML
LaTeX
Lightweight markup language
List of markup languages
Markdown
Programming language
Modeling language
Plain text
Formatted text
ReStructuredText
Style sheet language
Tag (markup)
WYSIWYG
XML
References
External links
Formal languages
American inventions | Markup language | Mathematics | 4,087 |
71,785,549 | https://en.wikipedia.org/wiki/Muellerella%20erratica | Muellerella erratica is a species of lichenicolous fungus in the family Verrucariaceae. It has been reported from numerous countries, including India. It grows on the thalli of hosts including Lecidea lapicida and Lecanora species.
References
Verrucariales
Fungi described in 1855
Fungi of India
Lichenicolous fungi
Taxa named by Abramo Bartolommeo Massalongo
Fungus species | Muellerella erratica | Biology | 87 |
5,862,474 | https://en.wikipedia.org/wiki/Inosperma%20erubescens | Inosperma erubescens (formerly Inocybe erubescens, also formerly named I. patouillardii), and also commonly known as the deadly fibrecap, brick-red tear mushroom or red-staining Inocybe, is a poisonous basidiomycete fungus, one of many in the original genus Inocybe and one of the few known to have caused death. It is found growing in small groups on leaf litter in association with beech. All mushroom guidebooks as well as mushroom hunters advise that the entire Inocybaceae should be avoided for consumption. The fruit bodies (i.e., the mushrooms) appear in spring and summer; the bell-shaped caps are generally pale pinkish in colour with red stains, which can also be seen on the stipe and gills.
Taxonomy and naming
The red-staining inocybe was first described by Norwegian naturalist Axel Gudbrand Blytt in 1904 as Inocybe erubescens. However, it was widely known for many years as I. patouillardii, as named by Italian mycologist Giacomo Bresadola in 1905 in honour of the French botanist Narcisse Théophile Patouillard. However, the former name takes priority due to age.
A 2019 multigene phylogenetic study by Matheny and colleagues found that I. erubescens and its relatives in the subgenus Inosperma were only distantly related to the other members of the genus Inocybe. Inosperma was raised to genus rank and the species became Inosperma erubescens.
Description
The cap is hemispherical before flattening out and can reach 8 cm (3.4 in) in diameter. It is variable in colour, initially white though becoming yellow or brownish with age, and stained with pink-white and red marks or lines. The edge of the cap is often irregular, with split edges and a rough texture. The adnexed gills are reddish-pink. The stipe, dark red-pink, is thin with no ring. The flesh is initially yellowish, later dark pink. The colour tends to fade in direct sunlight. It may be mistaken for Calocybe gambosa (though the latter does not stain red), Agaricus species, or Cortinarius caperatus.
Distribution and habitat
It is commonest in beech woods on chalky soils, but grows in other broad-leaved woodland as well. It mainly grows on leaf litter during the spring and summer. It is found in southern Europe and has been recorded from eastern Anatolia in Turkey. In Israel, I. erubescens grows under Palestine oak (Quercus calliprinos) and pines, with mushrooms still appearing in periods of little or no rain, as they are mycorrhizal.
Toxicity
Inocybe erubescens contains a possibly fatal dose of the toxin muscarine. One fatality was recorded in Surrey in southern England in 1937. In Israel, it is confused with edible mushrooms of the genus Tricholoma, particularly Tricholoma terreum which grows in similar habitat.
High dose intramuscular injections of atropine or diphenhydramine serve as an antidote to Inocybe poisoning.
See also
List of Inocybe species
List of deadly fungi
References
Toxicity, Mushrooms - Muscarine
Poisonous fungi
erubescens, Inosperma
Deadly fungi
Fungi described in 1905
Fungi of Europe
Fungus species | Inosperma erubescens | Biology,Environmental_science | 726 |
2,784,662 | https://en.wikipedia.org/wiki/Markland%20%28Scots%29 | A markland or merkland is an old Scottish unit of land measurement.
There was some local variation in the equivalences; for example, in some places eight ouncelands were equal to one markland, but in others, such as Islay, a markland was twelve ouncelands. The markland derived its name from the old coin, the merk Scots (cognate with the German mark and various other European coinages, see Mark (money)), which was the annual rent paid on it. It was based on this, rather than its actual area. Originally a Scots mark or merk was 13s 4d (160 pence), but the Scottish coinage depreciated against the English, and by the 18th century a Scots merk was worth only 13⅓d sterling – one-twelfth of its original value. Although such coins were abolished by the Acts of Union 1707, some stayed in circulation for decades, and the names themselves remained in common use for centuries.
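The one-twelfth figure follows directly from the quoted values:

\[
1\ \text{merk} = 13\text{s}\,4\text{d} = 13 \times 12 + 4 = 160\ \text{pence Scots},
\qquad
\frac{160\ \text{d Scots}}{12} = 13\tfrac{1}{3}\ \text{d sterling}.
\]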
See also
Obsolete Scottish units of measurement
In the East Highlands:
Rood
Scottish acre = 4 roods
Oxgang (Damh-imir) = the area an ox could plough in a year (around 20 acres)
Ploughgate (?) = 8 oxgangs
Daugh (Dabhach) = 4 ploughgates
In the West Highlands:
Markland (Marg-fhearann) = 8 Ouncelands (varied)
Ounceland (Tir-unga) =20 Pennylands
Pennyland (Peighinn) = basic unit; sub-divided into half penny-land and farthing-land
Other terms in use: Quarterland (Ceathramh), of variable value; Groatland (Còta bàn).
References
Obsolete Scottish units of measurement
Units of area | Markland (Scots) | Mathematics | 387 |
4,153,139 | https://en.wikipedia.org/wiki/Cryptoregiochemistry | Cryptoregiochemistry refers to the site of initial oxidative attack in double bond formation by enzymes such as fatty acid desaturases. This is a mechanistic parameter that is usually determined through the use of kinetic isotope effect experiments, based on the premise that the initial C-H bond cleavage step should be energetically more difficult and therefore more sensitive to isotopic substitution than the second C-H bond breaking step.
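The quantity compared in such experiments is the primary kinetic isotope effect at each carbon of the nascent double bond, conventionally defined as the ratio of rate constants for the protiated and deuterated substrates (a standard definition, stated here for clarity):

\[
\mathrm{KIE} = \frac{k_{\mathrm{H}}}{k_{\mathrm{D}}}.
\]

A large primary KIE at one position, combined with a value near unity at the adjacent position, marks the former as the site of initial oxidative attack, since only the energetically demanding first C-H cleavage is strongly sensitive to deuterium substitution.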
References
Chemical kinetics
Stereochemistry | Cryptoregiochemistry | Physics,Chemistry | 97 |
14,881,966 | https://en.wikipedia.org/wiki/CDC14B | Dual specificity protein phosphatase CDC14B is an enzyme that in humans is encoded by the CDC14B gene.
The protein encoded by this gene is a member of the dual specificity protein tyrosine phosphatase family. This protein is highly similar to Saccharomyces cerevisiae Cdc14, a protein tyrosine phosphatase involved in the exit from mitosis and the initiation of DNA replication, which suggests a role in cell cycle control. Specifically, it is thought to fulfil this role by bundling and stabilising microtubules. This protein has been shown to interact with and dephosphorylate the tumor suppressor protein p53, and is thought to regulate the function of p53. Alternative splicing of this gene results in 3 transcript variants encoding distinct isoforms.
Interactions and Functions
CDC14B has been shown to interact with p53, potentially dephosphorylating p53 at serine 315 and thereby stabilizing it. Phosphorylation at S315, in contrast to phosphorylation at other p53 sites, was shown to facilitate p53 degradation. At the patho-physiological level, mice with CDC14B deletion were shown to exhibit early-onset ageing phenotypes.
References
Further reading
External links | CDC14B | Chemistry | 276 |
58,949,425 | https://en.wikipedia.org/wiki/Perdita%20Barran | Perdita Elizabeth Barran is a Professor of Mass Spectrometry at the University of Manchester. She is Director of the Michael Barber Centre for Collaborative Mass Spectrometry. She develops and applies ion-mobility spectrometry–mass spectrometry to the study of molecular structure and is searching for biomarkers for Parkinson's disease. She is Associate Dean for Research Facility Development at the University of Manchester. In 2020 and 2021 she was seconded to the Department of Health and Social Care as an adviser on the use of mass spectrometry as a diagnostic method for COVID-19 infection.
Education and early career
Barran went to Godolphin and Latymer School. She moved to the University of Manchester to study chemistry, graduating in 1994. She joined the University of Sussex for her graduate studies, working with Harry Kroto and Tony Stace.
Research and career
Barran stayed with Stace for three years after completing her PhD in 1998. In 2001 Barran joined the University of California, Santa Barbara, working as a postdoctoral fellow with Mike Bowers. She was interested in the structure and stability of small molecules in the gas phase, and looked at how ion-mobility spectrometry could be used to identify conformation.
Barran joined the University of Edinburgh as an Engineering and Physical Sciences Research Council (EPSRC) Advanced Research Fellow in 2002. In 2005 she was awarded the 10th Desty Memorial Prize for her innovations in separation science. She was made a Senior Lecturer in 2009. She worked on mass spectrometry techniques that can be used to evaluate conformational change, aggregation and intrinsic conformation, and investigated mass spectrometry as a tool in the search for therapeutics that act on pre-fibrillar aggregation. She helped to establish the Scottish Instrumentation and Resource Centre for Advanced Mass Spectrometry at the University of Edinburgh. This had an initial remit to provide proteomic analysis for the MRC Human Genetics Unit.
In 2013 Barran was appointed to the Manchester Institute of Biotechnology as a Chair in Mass Spectrometry sponsored by Waters Corporation. She led an EPSRC platform grant to study the structure-activity relationships of beta defensins. She works with Cait MacPhee, Garth Cooper and Tilo Kunath on neurodegenerative proteins, and with several groups including Richard Kriwacki, Rohit Pappu and Gary Daughdrill to examine intrinsically disordered proteins. She works with several biopharmaceutical companies to apply new mass spectrometry techniques to new drug modalities including monoclonal antibodies. She also develops new mass spectrometry instrumentation. Her group looks at the structure of biological systems at a molecular level, studying them in the gas and solution phases as well as theoretically. They use electrospray ionization, mass spectrometry, ion mobility mass spectrometry, native mass spectrometry and complementary solution-based biophysical techniques. They are interested in a protein's structure and how it changes, in an effort to relate that to its function. Ion-mobility spectrometry–mass spectrometry can be used to measure the temperature-dependent rotationally averaged collision cross-section of gas-phase protein ions. In 2014 she was awarded a Biotechnology and Biological Sciences Research Council grant to study the interactions of proteins with other proteins. Barran serves on the editorial board of the International Journal of Mass Spectrometry. She was included in the page of Perditas created by Perdita Stevens.
Parkinson's disease
Barran has been working with Joy Milne to search for odorous biomarkers of Parkinson's disease. By smelling skin swabs, Milne says she can differentiate between people with and without Parkinson's disease. She says she identified changes in her husband's scent before he was formally diagnosed with Parkinson's disease, which he died of in 2015. Barran uses mass spectrometry to investigate the biomarkers of Parkinson's disease. The story was made into a BBC documentary The Woman Who Can Smell Parkinson's. Barran received ethical approval for her work of the skin metabolites of Parkinson's in 2015, allowing them to work with Parkinson's UK to conduct a larger study. In 2018 Milne travelled to the Tanzanian training centre APOPO to check whether she could smell Tuberculosis. Barran's work on Parkinson's is sponsored by The Michael J. Fox Foundation.
In 2022, Barran and others published a study of a method to detect Parkinson's disease by analysing sebum using mass spectrometry.
Awards
Barran was awarded the 2009 Joseph Black Award and the 2020 Theophilus Redwood Award from the Royal Society of Chemistry Analytical Division. Along with the 'NosetoDiagnose' team of researchers, she won a Horizon Prize from the Royal Society of Chemistry in 2021. She is ranked #3 in the "Human Health Heroes" field on the 2024 Analytical Scientist Power List.
References
Year of birth missing (living people)
Living people
Alumni of the University of Manchester
Academics of the University of Manchester
Academics of the University of Edinburgh
University of California, Santa Barbara faculty
People educated at Godolphin and Latymer School
Women chemists
Mass spectrometrists
21st-century women scientists | Perdita Barran | Physics,Chemistry | 1,085 |
37,477,684 | https://en.wikipedia.org/wiki/Hip%20roof | A hip roof, hip-roof or hipped roof, is a type of roof where all sides slope downward to the walls, usually with a fairly gentle slope, with variants including tented roofs and others. Thus, a hipped roof has no gables or other vertical sides to the roof.
A square hip roof is shaped like a pyramid. Hip roofs on houses may have two triangular sides and two trapezoidal ones. A hip roof on a rectangular plan has four faces. They are almost always at the same pitch or slope, which makes them symmetrical about the centerlines. Hip roofs often have a consistent level fascia, meaning that a gutter can be fitted all around. Dormers on hip roofs often have slanted (hipped) sides as well.
Construction
Hip roofs can be constructed on a wide variety of plan shapes. Each ridge is central over the rectangle of the building below it. The triangular faces of the roof are called the hip ends, and they are bounded by the hips themselves. The "hips" and hip rafters sit on an external corner of the building and rise to the ridge. Where the building has an internal corner, a valley makes the join between the sloping surfaces (and is underlain by a valley rafter). Hip roofs have the advantage of giving a compact, solid appearance to a structure. The roof pitch (slope) may vary.
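As an illustration of the geometry described above, here is a minimal sketch (the dimensions and pitch are assumed for illustration, not taken from this article) computing common and hip rafter lengths for a rectangular plan with equal pitches and right-angle corners, where a hip rafter's horizontal run is √2 times that of a common rafter:

```python
# Minimal sketch of hip-roof rafter arithmetic (assumed example values).
import math

run = 4.0         # m, half the building width (assumed)
pitch_deg = 35.0  # roof slope from horizontal (assumed)

rise = run * math.tan(math.radians(pitch_deg))
common_rafter = math.hypot(run, rise)              # slope length of a common rafter
hip_rafter = math.hypot(run * math.sqrt(2), rise)  # a hip sits over the corner diagonal

print(f"rise {rise:.2f} m, common {common_rafter:.2f} m, hip {hip_rafter:.2f} m")
```

With these values the hip rafter comes out noticeably longer (about 6.3 m versus 4.9 m) even though both reach the same ridge height.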
Use
In modern domestic architecture, hip roofs are commonly seen in bungalows and cottages, and have been integral to styles such as the American Foursquare. However, they have been used in many styles of architecture and in a wide array of structures.
Advantages and disadvantages
A hip roof is self-bracing, requiring less diagonal bracing than a gable roof. Hip roofs are thus much more resistant to wind damage than gable roofs. Hip roofs have no large, flat, or slab-sided ends to catch wind and are inherently much more stable than gable roofs. However, for a hurricane region, the roof also has to be steep-sloped; at least 35 degrees from horizontal or steeper in slope is preferred. When wind flows over a shallow sloped hip roof, the roof can behave like an airplane wing. Lift is then created on the leeward side. The flatter the roof, the more likely for this to happen. A steeper pitched hip roof tends to cause the wind to stall as it goes over the roof, breaking up the effect. If the roof slopes are less than 35 degrees from horizontal, the roof is subject to uplift. Greater than 35 degrees, and not only does wind blowing over it encounter a stalling effect, but the roof is actually held down on the wall plate by the wind pressure.
A disadvantage of a hip roof, compared with a gable roof on the same plan, is that there is less room inside the roof space; access is more difficult for maintenance; hip roofs are harder to ventilate; and there is not a gable with a window for natural light. Elegant, organic additions are relatively difficult to make on houses with hip roofs.
Variants
Mansard roof
A mansard roof is a variation on a hip roof, with two different roof angles, the lower one much steeper than the upper.
Gablet roof or Dutch gable
Another variation is the gablet (UK terminology) or Dutch gable roof (U.S. and Australasian terminology), which has a hip with a small gable (the gablet) above it. This type simplifies the construction of the roof; no girder trusses are required, but it still has level walls and consistent eaves.
The East Asian hip-and-gable roof is similar in concept to the gablet roof.
Half-hip roof
A half-hip, clipped-gable or jerkin head roof has a gable, but the upper point of the gable is replaced by a small hip, squaring off the top of the gable. The lower edge of the half-hip may have a gutter that leads back on to the remainder of the roof on one or both sides. Both the gablet roof and the half-hipped roof are intermediate between the gabled and fully hipped types: the gablet roof has a gable above a hip, while a half-hipped roof has a hip above a gable.
Half-hipped roofs are common in England, Denmark, Germany and especially in Austria and Slovenia. They are also typical of traditional timber-frame buildings in the Wealden area of South East England.
Half-hip roofs are sometimes referred to as "Dutch hip", but this term is easily confused with "Dutch gable".
Pavilion roof
A roof with equally hipped pitches on a square or regular polygonal plan having a pyramidal or almost pyramidal form. Low variants are typically found topping gazebos and other pavilion structures. Steep tower or church tower variants are known as pyramid roofs.
Rhenish helm or Helm roof
A pointed roof seen on a spire or a tower, oriented so that it has four gable ends. See the Church of St Mary the Blessed Virgin, Sompting in England, or Speyer Cathedral and Limburg Cathedral in Germany.
Tented roof
A tented roof is a type of polygonal hipped roof with steeply pitched slopes rising to a peak or intersection.
See also
List of roof shapes
Domestic roof construction
Finial, or hip-knob
References
External links
Hip Roof - Encyclopædia Britannica
Roofs | Hip roof | Technology,Engineering | 1,098 |
32,610,748 | https://en.wikipedia.org/wiki/KCR%20CRO | KCR is a clinical development provider for the biotechnology and pharmaceutical industries. It has three main service areas: Trial Execution, Consulting and Placement.
KCR operates across four main regions: North America, Western Europe, Central Europe, and Eastern Europe, with its main operational hub in Boston, MA, and other hubs in Berlin, Germany; Warsaw, Poland; Kyiv, Ukraine; and Sydney, Australia. KCR employs over 700 staff.
History
KCR was established in 1997 as Kiecana Clinical Research. The company provided services for clinical monitoring, clinical project management, safety/pharmacovigilance, regulatory affairs and quality assurance.
In April 2014, KCR launched KCR Placement, which offers recruitment and outsourcing for pharma and biotech in Europe. In 2017, KCR opened its headquarters in Boston, US.
In 2017, KCR launched an NGO called Human Behind Every Number, which provides research, insight and education on the first-hand experiences of patients involved in clinical trials. In March 2018, KCR and The Story received an iF Design Award for their work on the NGO's website.
As of 2020, KCR employs over 700 life science professionals and offers end-to-end study execution and consulting services in oncology, immunology, CNS and vaccines.
References
External links
Clinical trial organizations
Life sciences industry
Polish Joint-stock companies | KCR CRO | Biology | 283 |
15,311,568 | https://en.wikipedia.org/wiki/Potting%20%28electronics%29 | In electronics, potting is the process of filling a complete electronic assembly with a solid or gelatinous compound. This is done to exclude water, moisture, or corrosive agents, to increase resistance to shocks and vibrations, or to prevent gaseous phenomena such as corona discharge in high-voltage assemblies. Potting has also been used to protect against reverse engineering or to protect parts of cryptography processing cards. When such materials are used only on single components instead of entire assemblies, the process is referred to as encapsulation.
Thermosetting plastics or silicone rubber gels are often used, though epoxy resins are also very common. When epoxy resins are used, low-chloride grades are usually specified. Potting compounds are also commonly recommended for protecting sensitive electronic components from impact, vibration, and loose wires.
In the potting process, an electronic assembly is placed inside a mold (the "pot") which is then filled with an insulating liquid compound that hardens, permanently protecting the assembly. The mold may be part of the finished article and may provide shielding or heat dissipating functions in addition to acting as a mold. When the mold is removed the potted assembly is described as cast.
As an alternative, many circuit board assembly houses coat assemblies with a layer of transparent conformal coating rather than potting. Conformal coating gives most of the benefits of potting, and is lighter and easier to inspect, test, and repair. Conformal coatings can be applied as liquid or condensed from a vapor phase.
When potting a circuit board that uses surface-mount technology, low glass transition temperature (Tg) potting compounds such as polyurethane or silicone may be used. High Tg potting compounds may break solder bonds through solder fatigue by hardening at a higher temperature because the coating then shrinks as a rigid solid over a larger part of the temperature range, thus developing greater force.
See also
Integrated circuit packaging
Resin dispensing
References
Electronic design
Electronics manufacturing | Potting (electronics) | Engineering | 418 |
9,566,223 | https://en.wikipedia.org/wiki/Temperament%20test | Temperament tests assess dogs for certain behaviors or suitability for dog sports or adoption from an animal shelter by observing the animal for unwanted or potentially dangerous behavioral traits, such as aggressiveness towards other dogs or humans, shyness, or extreme fear.
AKC Temperament Test
In 2019, the American Kennel Club launched its AKC Temperament Test (ATT), a pass-fail evaluation by AKC licensed or member clubs. Evaluators are specially trained AKC Obedience judges, Rally judges and AKC Approved Canine Good Evaluators.
American Temperament Test Society
American Temperament Test Society, Inc. was started by Alfons Ertel in 1977. Ertel created a test for dogs that checks a dog's reaction to strangers, to auditory and visual stimuli (such as the gun shot test), and to unusual situations in an outdoor setting; it does not test indoor or home situation scenarios. It favors a bold, confident dog. To date, the top three dog breeds that have tested with ATTS are the Rottweiler (17% of all tests conducted), the German Shepherd Dog (10%), and the Doberman (5%). The test itself is copyrighted and prospective testers must apply to become official. The test is conducted as a pass-fail by majority rule of three testers, and each individual dog is graded according to its own breed's native aptitudes, taking into account the individual dog's age, health and training. Though the ATTS is the only organization which posts pass rates "by breed", the breeds cannot be compared against each other because the grades are based on each breed's own characteristics. Despite that, attorneys have been encouraged to use the ATTS published "results by breed" to defend their clients in dangerous dog cases by comparing pass rates of the breed of their client's dog against the pass rates of other well-known non-aggressive pet dog breeds. To date, 34,686 tests have been completed; fewer than 1,000 per year.
BH-VT test by FCI
BH-VT, an abbreviation of a German term which roughly translates to "companion dog test with traffic safety part", is governed by rules from Fédération Cynologique Internationale (FCI). The BH-VT has become the prerequisite examination for entry into almost all dog sports in Europe that require off-leash work, such as Schutzhund/IPO/IGP, agility and flyball. It is not required for conformation shows where dogs are always on a leash.
Dogs must be at least 12 months old (older for some breeds). There are two portions: obedience and traffic. For the obedience portion, each of the following is part of the test: heeling on leash, heeling off leash, sit exercise, down with recall, and down under distraction. The traffic portion includes tests for encountering a group of people, bicyclists, cars, joggers, other dogs, and being tethered for a short period alone without its handler, and walking through a group of people that are moving.
Aggression towards other dogs is at all times forbidden in FCI events, and any bite or attempt to bite or attack a human or other dog will immediately disqualify a dog. Any aggression towards another dog will permanently disqualify a dog from any participation until it has proven itself through passing a repeat BH-VT with behavioral test.
An earlier version of the test was called simply "BH", and it was Schutzhund's preliminary test that all dogs must pass before going further in Schutzhund training. With the increase in (non-protection) dog sports for all breeds, the new BH-VT omits the "gun shy" test, which was instead moved to the next higher level of Schutzhund trials.
Canine Good Citizen by AKC
The Canine Good Citizen by the American Kennel Club tests for good behavior in a companion dog. Over 1 million dogs and their owners have participated in CGC since it was started in 1989 (over 30,000 dogs per year).
Puppy aptitude tests
There are numerous puppy aptitude and temperament tests which are used by buyers when selecting a puppy and by breeders when evaluating a litter of puppies.
Shelter evaluations
Shelters use temperament tests to help identify dogs with problem behaviors, including aggression, and to help increase the rate of successful adoptions. For some, these tests are a way to determine if a dog should even be offered for adoption, or to whom they will restrict adoption of an individual dog (adult-only household or sanctuary only, versus family with children). In a time when shelters are trying to improve outcomes for shelter animals, some consider temperament tests to be controversial and result in too many dogs being labeled negatively, leading to euthanasia. As such, some shelters have discontinued using any form of testing for their dogs.
Such tests seek to assess a dog's manners, and its reaction to strangers, small children and other pets. The tests try to identify if a dog has problems with food aggression, resource guarding, or separation anxiety. Tools used for evaluations might include a leash, bowl of food, a lifelike doll, a fake arm, and dog treats or toys.
Assess-a-Pet and Assess-a-Hand
The Assess-a-Pet Temperament Test involves use of the Assess-a-Hand, a vinyl or latex mock hand and arm mounted on a wooden dowel, used to avoid bites to the tester who uses it to approach, pet, and then pull away a bowl or toy from the dog. The device was invented by Sue Sternberg. The test is typically given after a certain number of days at a shelter, with retesting after a failure, and additionally after resolution of illness.
Match-Up II Shelter Dog Rehoming Program
This test requires two people: a handler and a recorder. It has 11 sub-tests and the answers are placed in a computer program. It was designed to "help shelters learn about the personality and needs of each dog so that behavioral interventions can be implemented and successful matches can be made."
SAFER Test
SAFER (Safety Assessment for Evaluating Rehoming) by the ASPCA is used to "help identify the risk of future aggression and individual behavioral support needed before adoption for each dog in a shelter."
Wolfhound testing
Temperament testing in wolfhounds is an old and proven form of mild dog fighting used in young dogs to test their temperament. For example, an American standard for an Irish Wolfhound is defined as "a large, rough-coated, greyhound-like dog, fast enough to catch a wolf and strong enough to kill it." It states that "the breed's well-being demands strong, gentle hounds, never aggressive or shy, not even "edgy" ones. Edgy hounds are presently under control, but without their handler's constant control would surely at least retreat, or perhaps manifest worse characteristics of the weak temperament."
Typically it is practiced with larger breeds known in Russia as волкодав (literally: dogs meant for the hunting of wolves). These large breeds (such as Caucasian Shepherd) in Russia undergo the testing called тестовые испытания волкодавов (i.e. testing/examination of dogs meant for hunting wolves). The breeders believe that males used for breeding have to have preserved fighting ability and dominant tendencies because it is a typical mark of their breed. They also believe that weak dogs without fighting abilities will cause a decrease in the quality of the breed.
As part of the test, breeders release two young dogs and let them behave naturally even if they begin to fight. If the fight looks dangerous, the breeders pull the dogs off each other to prevent their injury. If one of the participating dogs shows fear of the other dog and displays no dominant tendencies, he is removed from breeding to ensure his weak nature is not passed on to his descendants.
See also
Dog behavior
Dog training
Notes
References
Animal emotions
Ethology | Temperament test | Biology | 1,657 |
87,310 | https://en.wikipedia.org/wiki/Supercavitation | Supercavitation is the use of a cavitation bubble to reduce skin friction drag on a submerged object and enable high speeds. Applications include torpedoes and propellers, but in theory, the technique could be extended to an entire underwater vessel.
Physical principle
Cavitation is the formation of vapour bubbles in liquid caused by flow around an object. Bubbles form when water accelerates around sharp corners and the pressure drops below the vapour pressure. Pressure increases upon deceleration, and the water generally reabsorbs the vapour; however, vapour bubbles can implode and apply small concentrated impulses that may damage surfaces like ship propellers and pump impellers.
The potential for vapour bubbles to form in a liquid is given by the nondimensional cavitation number. It equals local pressure minus vapour pressure, divided by dynamic pressure. At increasing depths (or pressures in piping), the potential for cavitation is lower because the difference between local pressure and vapour pressure is greater.
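Written out in conventional fluid-dynamics notation (the symbols are standard usage, assumed here rather than defined in this article):

```latex
\sigma \;=\; \frac{p - p_v}{\tfrac{1}{2}\rho v^2}
```

where p is the local pressure, p_v the vapour pressure of the liquid, ρ the liquid density, and v the flow speed; the smaller σ becomes, the more likely cavitation is.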
A supercavitating object is a high-speed submerged object that is designed to initiate a cavitation bubble at its nose. The bubble extends (either naturally or augmented with internally generated gas) past the aft end of the object and prevents contact between the sides of the object and the liquid. This separation substantially reduces the skin friction drag on the supercavitating object.
A key feature of the supercavitating object is the nose, which typically has a sharp edge around its perimeter to form the cavitation bubble. The nose may be articulated and shaped as a flat disk or cone. The shape of the supercavitating object is generally slender so the cavitation bubble encompasses the object. If the bubble is not long enough to encompass the object, especially at slower speeds, the bubble can be enlarged and extended by injecting high-pressure gas near the object's nose.
The very high speed required for supercavitation can be temporarily reached by underwater-fired projectiles and projectiles entering water. For sustained supercavitation, rocket propulsion is used, and the high-pressure rocket gas can be routed to the nose to enhance the cavitation bubble. In principle, supercavitating objects can be maneuvered using various methods, including the following:
Drag fins that project through the bubble into the surrounding liquid
A tilted object nose
Gas injected asymmetrically near the nose to distort the cavity's geometry
Vectoring rocket thrust through gimbaling for a single nozzle
Differential thrust from multiple nozzles
Applications
The Russian Navy developed the VA-111 Shkval supercavitation torpedo, which uses rocket propulsion and exceeds the speed of conventional torpedoes by at least a factor of five. NII-24 began development in 1960 under the code name "Шквал" (Squall). The VA-111 Shkval has been in service (exclusively in the Russian Navy) since 1977 with mass production starting in 1978. Several models were developed, with the most successful, the M-5, completed by 1972. From 1972 to 1977, over 300 test launches were conducted (95% of them on Issyk Kul lake).
In 2006, German weapons manufacturer Diehl BGT Defence announced their own supercavitating torpedo, the Barracuda, now officially named Superkavitierender Unterwasserlaufkörper ("supercavitating underwater running body"). According to Diehl, it reaches speeds greater than .
In 1994, the United States Navy began development of the Rapid Airborne Mine Clearance System (RAMICS), a sea mine clearance system invented by C Tech Defense Corporation. The system is based on a supercavitating projectile stable in both air and water. RAMICS projectiles have been produced in diameters of , , and . The projectile's terminal ballistic design enables the explosive destruction of sea mines as deep as with a single round. In 2000 at Aberdeen Proving Ground, RAMICS projectiles fired from a hovering Sea Cobra gunship successfully destroyed a range of live underwater mines. As of March 2009, Northrop Grumman completed the initial phase of RAMICS testing for introduction into the fleet.
Iran claimed to have successfully tested its first supercavitation torpedo, the Hoot (Whale), on 2–3 April 2006. Some sources have speculated it is based on the Russian VA-111 Shkval supercavitation torpedo, which travels at the same speed. Russian Foreign Minister Sergey Lavrov denied supplying Iran with the technology.
In 2004, DARPA announced the Underwater Express program, a research and evaluation program to demonstrate the use of supercavitation for a high-speed underwater craft application. The US Navy's ultimate goal is a new class of underwater craft for littoral missions that can transport small groups of navy personnel or specialized military cargo at speeds up to 100 knots. DARPA awarded contracts to Northrop Grumman and General Dynamics Electric Boat in late 2006. In 2009, DARPA announced progress on a new class of submarine.
A prototype ship named the Ghost uses supercavitation to propel itself atop two struts with sharpened edges. It was designed for stealth operations by Gregory Sancoff of Juliet Marine Systems. The vessel rides smoothly in choppy water and has reached speeds of 29 knots.
The Chinese Navy and US Navy are reportedly working on their own supercavitating submarines using technical information obtained on the Russian VA-111 Shkval supercavitation torpedo.
A supercavitating propeller uses supercavitation to reduce water skin friction and increase propeller speed. The design is used in military applications, high-performance racing boats, and model racing boats. It operates fully submerged with wedge-shaped blades to force cavitation on the entire forward face, starting at the leading edge. Since the cavity collapses well behind the blade, the supercavitating propeller avoids spalling damage caused by cavitation, which is a problem with conventional propellers.
Supercavitating ammunition is used with German and Russian underwater firearms, and other similar weapons.
Alleged incidents
The Kursk submarine disaster was initially thought to have been caused by a faulty Shkval supercavitating torpedo, though later evidence points to a faulty 65-76 torpedo.
See also
Supercavitating torpedo
"Shkval" supercavitating torpedo
APS amphibious rifle
SPP-1 underwater pistol
Supercavitating propeller
References
Further reading
Office of Naval Research (2004, June 14). Mechanics and energy conversion: high-speed (supercavitating) undersea weaponry (D&I). Retrieved April 12, 2006, from Office of Naval Research Home Page
Savchenko Y. N. (n.d.). CAV 2001 - Fourth Annual Symposium on Cavitation - California Institute of Technology Retrieved April 9, 2006, archived at Wayback Machine
Hargrove, J. (2003). Supercavitation and aerospace technology in the development of high-speed underwater vehicles. In 42nd AIAA Aerospace Sciences Meeting and Exhibit. Texas A&M University.
Kirschner et al. (2001, October) Supercavitation research and development. Undersea Defense Technologies
Miller, D. (1995). Supercavitation: going to war in a bubble. Jane's Intelligence Review. Retrieved Apr 14, 2006, from Defence & Security Intelligence & Analysis | Jane's 360
Graham-Rowe, & Duncan. (2000). Faster than a speeding bullet. NewScientist, 167(2248), 26–30.
Tulin, M. P. (1963). Supercavitating flows - small perturbation theory. Laurel, Md, Hydronautics Inc.
Niam J W (Dec 2014), Numerical Simulation Of Supercavitation
External links
Supercavitation Research Group at the University of Minnesota
Diehl BGT Defence's "Barracuda" - a German supercavitating Torpedo
DARPA Underwater Express Program
Global Security.org on Supercavitation
How to Build a Supercavitating Weapon, Scientific American
Fluid dynamics | Supercavitation | Chemistry,Engineering | 1,633 |
77,706,916 | https://en.wikipedia.org/wiki/Natural%20Fibre%20Board | Natural Fibre Board (NFB), or otherwise natural fibreboard or natural fiberboard, is a registered European trademark representing wood fibre boards produced without the use of binding agents, for instance, formaldehyde-based resins or other synthetic resins.
It is a European wood-based panel product belonging to the fibreboards; historically, this type of board has typically been called hardboard. Technically it is a wet-process fibreboard bound only by the natural lignin of the wood material itself.
The trademark is licensed to the European Panel Federation (EPF), an organization that represents all European manufacturers of hardboard and softboard using the wet-process system. These manufacturers adhere to stringent product characteristics and production process criteria.
Manufacturing process
NFB hardboard and softboard products are manufactured by a specialized industry using exclusively pure wood fibres. The production process involves a thermo-mechanical treatment that re-polymerizes the lignin (nature's glue found within wood), which naturally binds the wood fibres without the need for synthetic adhesives or resins containing formaldehyde or isocyanate. As a result, NFB products are considered formaldehyde-free, clean, and safe for human health. These fibreboard products are completely recyclable and biodegradable.
The final uses of the NFB boards include, among other things, furniture manufacturing, building construction, and thermal and acoustic insulation.
Products
Two different types of panels are produced today.
NFB hardboard: This fibreboard is formed by high-pressure compression and high temperature during a pressing process. The resulting high-density boards satisfy demanding applications, as tempering or other treatments give them strong physical and mechanical characteristics and excellent dimensional stability.
NFB softboard: a fibreboard that insulates well against the cold. Such wet-process softboard also protects against summer heat; its high heat storage capacity helps prevent indoor overheating during the summer months.
Environmental impact
Natural fibreboard is recognized for its eco-friendly properties, playing a significant role in addressing climate change by capturing and storing carbon dioxide (CO2). According to EPF data, the use of NFB in a single house can save up to 2 tons of CO2 on average, an amount equivalent to that absorbed by a 750 m2 forest area.
NFB serves as a certification brand, ensuring that fibreboards manufactured by EPF members comply with all relevant European health and environmental standards. The licensed manufacturers are committed to the principles of forest sustainability and the responsible management of natural resources. This commitment is reflected in establishing effective wastewater treatment systems and wood dust collection methods and maintaining safe working conditions. As such, the NFB trademark guarantees consumers that the fibreboard used in their homes, furnishings, and automobiles is safe, healthy, and produced with a strong regard for the environment.
See also
Medium-density fibreboard
Hardboard
Solid wood
References
External links
Natural Fibre Board
Engineered wood
Composite materials | Natural Fibre Board | Physics | 615 |
67,873,986 | https://en.wikipedia.org/wiki/Father%20No%C3%ABl | Nicolas Noël (1712-1781) was a Benedictine monk, instrument and telescope maker at the workshop of l'abbaye de Saint-Germain des Prés. He became custodian of Louis XV's telescopes in 1759, having presented an eight-foot (focal length) telescope to the king, under the sponsorship of the duc de Chaulnes, on December 14, 1756. Between 1759 and 1774 Dom Noël assembled a collection of his own instruments and those acquired from others in buildings erected for the purpose adjacent to the Château de la Muette. Noël's position was later held by Abbé Rochon. In 1772 Noël made a 22-inch diameter, 24-foot focus Gregorian, which at the time was the largest telescope in the world. The speculum mirror was re-polished by Carochez in 1787. It was installed in Paris at the Hôtel de la Muette (also known as the Cabinet de Passy). The "Grand Telescope of Passy" was too large and cumbersome to serve as an effective scientific instrument, and after being re-mirrored in 1800 was dismantled in 1841.
References
Benedictine monks
1712 births
1781 deaths
Telescope manufacturers | Father Noël | Astronomy | 236 |
22,022,775 | https://en.wikipedia.org/wiki/International%20Census%20of%20Marine%20Microbes | The International Census of Marine Microbes is a field project of the Census of Marine Life that inventories microbial diversity by cataloging all known diversity of single-cell organisms including bacteria, Archaea, Protista, and associated viruses, exploring and discovering unknown microbial diversity, and placing that knowledge into ecological and evolutionary contexts.
The ICoMM program, led by Mitchell Sogin, has discovered that marine microbial diversity is some 10 to 100 times more than expected, and the vast majority are previously unknown, low abundance organisms thought to play an important role in the oceans.
References
External links
ICoMM Website
Marine biology | International Census of Marine Microbes | Biology | 127 |
76,113,713 | https://en.wikipedia.org/wiki/Seiler%20oscillator | The Seiler oscillator is an LC electronic oscillator. It was presented in 1941 by E. O. Seiler. The original implementation used a vacuum tube in an electron-coupled oscillator circuit. Like the Clapp oscillator and the Vackář oscillator, it is a variation of the Colpitts oscillator. It uses a voltage divider made of two capacitors, named C3 and C4 in the original schematic. The tuning capacitor C1 is parallel to the inductance L1 of the LC circuit. In a Clapp oscillator, by contrast, the tuning capacitor is in series with the inductance. The variable capacitor C2 controls the coupling between the tube and the tank (LC circuit).
Practical example
The schematic shows an example with component values. The Seiler oscillator uses an LC circuit (L1, C1) that is connected via C2 to a capacitive voltage divider (C3, C4) feeding the amplifier Q1. C1 and C2 are calculated for an inductance L1 with an unloaded Q factor of 250. Resistor R1 sets the collector current to 0.5 mA when there is no oscillation. The negative supply voltage V2 allows a direct connection from the base of Q1 to ground. The radio-frequency choke L3 is needed to isolate the LC circuit from the power supply, but introduces a potential problem: L3, C3 and C4 form a parasitic Colpitts oscillator circuit. R2 reduces the Q factor of L3 and prevents oscillation at the wrong frequency. The correct oscillator frequency is 10 MHz. The load resistor RL is part of the simulation, not part of the circuit.
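As a sketch of the frequency arithmetic (the component values below are assumed for illustration, not those of the original schematic): to a first approximation, ignoring transistor loading, the inductor sees C1 in parallel with the series combination of C2, C3 and C4.

```python
# Minimal sketch of the Seiler tank frequency (assumed component values).
import math

L1 = 2.0e-6                                         # H (assumed)
C1, C2, C3, C4 = 100e-12, 33e-12, 220e-12, 220e-12  # F (assumed)

c_series = 1 / (1 / C2 + 1 / C3 + 1 / C4)  # C2, C3, C4 in series
c_tank = C1 + c_series                     # in parallel with the tuning capacitor C1
f0 = 1 / (2 * math.pi * math.sqrt(L1 * c_tank))
print(f"f0 = {f0 / 1e6:.2f} MHz")          # ~10 MHz with these values
```

Because C2 is much smaller than C3 and C4, it dominates the series branch, so the divider loads the tank only lightly; this light coupling is what gives the Seiler (like the Clapp and Vackář) its good frequency stability.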
References
Sources
Electronic oscillators
Electronic design | Seiler oscillator | Engineering | 382 |
16,620,571 | https://en.wikipedia.org/wiki/FTTLA | FTTLA refers to "Fibre to the Last Active". Classic analogue cable television trunks used several amplifiers at intervals in cascade, each of which degrades the signal. FTTLA replaces the coaxial cable all along the line to the last active component (towards the subscriber) with optical fibre, eliminating all distribution amplifiers. It retains the existing most expensive part of the access network, the coaxial cables for the "last mile" or "last metres" connected with the subscriber.
Fibre to the last amplifier improves scalability (performance and reliability) when new services such as triple play are introduced.
From the optical sender to the node, the fibre is split by 4 or by 8, depending on the distance and on the output power of the optical sender (from 6 to 16 dBm).
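As a rough illustration of that power budget (a sketch only; the splitter rule of thumb is generic optics and the loss figures are assumed, none of this is from the article): an ideal 1:N splitter divides the power N ways, costing about 10·log10(N) dB plus some excess loss.

```python
# Rough optical power-budget sketch for FTTLA splits (assumed loss values).
import math

def split_loss_db(n_ways: int, excess_db: float = 1.0) -> float:
    """Ideal 1:N split loss plus an assumed excess loss."""
    return 10 * math.log10(n_ways) + excess_db

FIBRE_LOSS_DB = 2.0  # assumed end-to-end fibre attenuation

for tx_dbm in (6, 16):   # optical sender output range cited above
    for n in (4, 8):
        rx = tx_dbm - split_loss_db(n) - FIBRE_LOSS_DB
        print(f"{tx_dbm:>2} dBm sender, 1:{n} split -> ~{rx:5.1f} dBm at the node")
```

This is why a low-power sender may only support a 1:4 split, while a higher-power one can feed a 1:8 split over the same distance.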
Intermodulation and carrier-to-noise ratio are improved. Other benefits include lower power consumption.
See also
FTTx
FTTH
FTTB
Hybrid fibre-coaxial
Cable Internet access
References
Broadband
Network architecture
Last amplifier
Local loop | FTTLA | Engineering | 220 |
49,380,773 | https://en.wikipedia.org/wiki/Hygrocybe%20appalachianensis | Hygrocybe appalachianensis, commonly known as the Appalachian waxy cap, is a gilled fungus of the waxcap family. It is found in the eastern United States, where it fruits singly, in groups, or clusters on the ground in deciduous and mixed forests. The species, described in 1963 from collections made in the Appalachian Mountains, was originally classified in the related genus Hygrophorus. It was transferred to Hygrocybe in 1998, in which it has been proposed as the type species of section Pseudofirmae.
Fruit bodies of the Appalachian waxy cap are bright purplish-red to reddish-orange. They have convex to somewhat funnel-shaped caps that are in diameter, held up by a cylindrical stipe up to long. The gills are thick and widely spaced, with a color similar to that of the cap or paler, and a whitish-yellow edge. Microscopically, the spores and spore-bearing cells are dimorphic—of two different sizes.
Systematics
The fungus was described as new to science in 1963 by mycologists Lexemuel Ray Hesler and Alexander H. Smith in their monograph on North American species of Hygrophorus. Hesler collected the type on July 28, 1958 in Cades Cove, Great Smoky Mountains National Park (Tennessee). The fungus was recorded from the same location in a fungal survey conducted about 50 years later. It was transferred to the genus Hygrocybe in a 1998 paper by Ingeborg Kronawitter and Andreas Bresinsky. In this publication, the basionym was given as "appalachiensis" instead of the original spelling appalachianensis, and so Hygrocybe appalachiensis is an orthographic variant spelling. A reference to the type locality–the Appalachian Mountains–appears in both the specific epithet and in the common name, Appalachian waxy cap.
Because of its color and habit, Hesler and Smith originally thought the unknown agaric was H. coccinea or perhaps a large form of H. miniata, but study of its microscopic characteristics revealed that it was distinct from these. They noted that the fibrillose-squamulose texture of the cap (i.e. that it appears to be made of thin fibers, or covered with small scales) and the large spores suggested a relationship with H. turundus. The type of Hygrocybe appalachianensis is of an immature specimen, and the description of the basidia only accounted for microbasidia (i.e., the smaller of the two forms of basidia in the hymenium). The immature macrobasidia were described as pleurocystidia (i.e., cystidia arising from the side, or face, of the gill), which Hesler and Smith described as "more or less embedded in the hymenium". Microspores (the smaller of the two spore types produced by the fungus) were not accounted for in their original description, although they are present in the type.
Deborah Jean Lodge and colleagues, in a reorganization of the family Hygrophoraceae based on molecular phylogenetics, proposed that H. appalachianensis should be the type species of the new section Pseudofirmae in genus Hygrocybe. Species in this section, which include Hygrocybe chloochlora, H. rosea, and H. trinitensis, have sticky or glutinous caps that often have perforations in the center. Their spores and basidia are dimorphic (of two sizes), and the development of the microbasidia and macrobasidia is often staggered. The macrobasidia are club shaped and appear as if they have a stalk.
Description
Fruitbodies of H. appalachianensis have convex caps that are in diameter. As the mushroom matures, the cap margins curl upward, and the central depression in the cap deepens, becoming more or less funnel shaped. Its color is bright red to purplish-red, which fades in age. The cap margin is often whitish. The well-spaced gills are initially adnate-decurrent, becoming more decurrent in age. Their color is that of the cap or paler; the gill edges are sometimes whitish-yellow. The cylindrical stipe, which measures long by , is more or less the same width throughout its length. Its surface texture is smooth to slightly scurfy, and it is often whitish at its base. The flesh of the mushroom lacks any distinctive taste or odor. It is yellowish with orange tinges, with reddish color near the cap cuticle. Alan Bessette and colleagues, in their 2012 monograph on eastern North American waxcap mushrooms, note that the mushroom is "reported to be edible".
Hygrocybe appalachianensis mushrooms produce a white spore print. Both the spores and the basidia are dimorphic. The larger spores (macrospores) are smooth, ellipsoid, and measure 11–17.5 by 7–10 μm. They are hyaline (translucent) and inamyloid. The macrobasidia are club-shaped, measuring 38–57 by 8–14 μm, and can be one-, two-, three- or four-spored. The ratio of macrobasidia length to macrospore length is usually less than five to one. Clamp connections are present on the hyphae of several tissues of the mushroom. The hyphae of the gills (the lamellar trama) are arranged in a parallel fashion.
The colors of Hygrocybe mushrooms originate from betalains, a class of red and yellow indole-derived pigments. Specific betalains found in H. appalachianensis include muscaflavin, and a group of compounds called hygroaurins, which are derived from muscaflavin by conjugation with amino acids.
Similar species
There are several lookalike species found in North America with which the Appalachian waxy cap might be confused. Hygrocybe cantharellus is a bright red mushroom that has smaller fruit bodies and a more slender stipe than H. appalachianensis. It also has smaller spores, measuring 7–12 by 4–8 μm. Hygrocybe reidii, found in Europe and northeastern North America, has flesh with a sweet odor that is reminiscent of honey. This smell is sometimes weak and only noticeable when the tissue is rubbed, or when it is drying. Its scarlet cap initially has a narrow yellow-orange margin.
Widespread and common in the Northern Hemisphere, the scarlet waxcap (Hygrocybe coccinea) is most reliably distinguished from H. appalachianensis by its smaller spores, measuring 7–11 by 4–5 μm. The sphagnum waxcap, H. coccineocrenata, also has colors that are similar to H. appalachianensis. In addition to its smaller spores (8–12 by 5.5–8 μm), its fruit bodies have smaller caps, measuring in diameter, and it is typically found fruiting in mosses.
Habitat and distribution
Fruit bodies of Hygrocybe appalachianensis grow singly, in groups, or clusters on the ground. Like all Hygrocybe species, the fungus is believed to be saprophytic, meaning it obtains nutrients by breaking down organic matter. It fruits in deciduous or mixed forest, typically appearing between the months of July and December. Its range covers a region extending from the states Ohio and West Virginia south to South Carolina and Tennessee. Its occurrence is occasional to locally common.
See also
List of Hygrocybe species
References
Cited literature
External links
appalachianensis
Fungi of the United States
Fungi described in 1963
Taxa named by Alexander H. Smith
Fungi without expected TNC conservation status
Fungus species
Taxa named by Lexemuel Ray Hesler | Hygrocybe appalachianensis | Biology | 1,650 |
20,478 | https://en.wikipedia.org/wiki/Magnetopause | The magnetopause is the abrupt boundary between a magnetosphere and the surrounding plasma. For planetary science, the magnetopause is the boundary between the planet's magnetic field and the solar wind. The location of the magnetopause is determined by the balance between the pressure of the dynamic planetary magnetic field and the dynamic pressure of the solar wind. As the solar wind pressure increases and decreases, the magnetopause moves inward and outward in response. Waves (ripples and flapping motion) along the magnetopause move in the direction of the solar wind flow in response to small-scale variations in the solar wind pressure and to Kelvin–Helmholtz instabilities.
The solar wind is supersonic and passes through a bow shock where the direction of flow is changed so that most of the solar wind plasma is deflected to either side of the magnetopause, much like water is deflected before the bow of a ship. The zone of shocked solar wind plasma is the magnetosheath. At Earth and all the other planets with intrinsic magnetic fields, some solar wind plasma succeeds in entering and becoming trapped within the magnetosphere. At Earth, the solar wind plasma which enters the magnetosphere forms the plasma sheet. The amount of solar wind plasma and energy that enters the magnetosphere is regulated by the orientation of the interplanetary magnetic field, which is embedded in the solar wind.
The Sun and other stars with magnetic fields and stellar winds have a solar magnetopause or heliopause where the stellar environment is bounded by the interstellar environment.
Characteristics
Prior to the age of space exploration, interplanetary space was considered to be a vacuum. The coincidence of the first observation of a solar flare and the geomagnetic storm of 1859 was evidence that plasma was ejected from the Sun during the flare event. Chapman and Ferraro proposed that a plasma was emitted by the Sun in a burst as part of a flare event which disturbed the planet's magnetic field in a manner known as a geomagnetic storm. The collision frequency of particles in the plasma of the interplanetary medium is very low and the electrical conductivity is so high that the plasma could be approximated as an infinite conductor.
A magnetic field in a vacuum cannot penetrate a volume with infinite conductivity. Chapman and Bartels (1940) illustrated this concept by postulating a plate with infinite conductivity placed on the dayside of a planet's dipole as shown in the schematic. The field lines on the dayside are bent. At low latitudes, the magnetic field lines are pushed inward. At high latitudes, the magnetic field lines are pushed backwards and over the polar regions. The boundary between the region dominated by the planet's magnetic field (i.e., the magnetosphere) and the plasma in the interplanetary medium is the magnetopause. The configuration equivalent to a flat, infinitely conductive plate is achieved by placing an image dipole (green arrow at left of schematic) at twice the distance from the planet's dipole to the magnetopause along the planet-Sun line. Since the solar wind is continuously flowing outward, the magnetopause above, below and to the sides of the planet are swept backward into the geomagnetic tail as shown in the artist's concept. The region (shown in pink in the schematic) which separates field lines from the planet which are pushed inward from those which are pushed backward over the poles is an area of weak magnetic field or day-side cusp. Solar wind particles can enter the planet's magnetosphere through the cusp region. Because the solar wind exists at all times and not just times of solar flares, the magnetopause is a permanent feature of the space near any planet with a magnetic field.
The magnetic field lines of the planet's magnetic field are not stationary. They are continuously joining or merging with magnetic field lines of the interplanetary magnetic field in a process called magnetic reconnection. The joined field lines are swept back over the poles into the planetary magnetic tail. In the tail, the field lines from the planet's magnetic field are re-joined and start moving toward night-side of the planet. The physics of this process was first explained by Dungey (1961). As such, the process is now referred to as the Dungey Cycle.
If one assumed that the magnetopause were just a boundary between a magnetic field in a vacuum and a plasma with a weak magnetic field embedded in it, then the magnetopause would be defined by electrons and ions penetrating one gyroradius into the magnetic field domain. Since the gyro-motion of electrons and ions is in opposite directions, an electric current flows along the boundary. The actual magnetopause is much more complex.
Estimating the standoff distance to the magnetopause
If the pressure from particles within the magnetosphere is neglected, it is possible to estimate the distance to the part of the magnetosphere that faces the Sun. The condition governing this position is that the dynamic ram pressure from the solar wind is equal to the magnetic pressure from the Earth's magnetic field:

\rho v^2 = \frac{B(r)^2}{2\mu_0}

where \rho and v are the density and velocity of the solar wind, B(r) is the magnetic field strength of the planet in SI units (B in T, \mu_0 in H/m), and \mu_0 is the permeability of free space.

Since the dipole magnetic field strength varies with distance as 1/r^3, the magnetic field strength can be written as B(r) = m/r^3, where m is the planet's magnetic moment, expressed in T·m³.

Solving this equation for r leads to an estimate of the distance

r = \left( \frac{m^2}{2\mu_0\,\rho v^2} \right)^{1/6}
The distance from Earth to the subsolar magnetopause varies over time due to solar activity, but typical distances range from 6–15 Earth radii. Empirical models using real-time solar wind data can provide a real-time estimate of the magnetopause location. A bow shock stands upstream from the magnetopause. It serves to decelerate and deflect the solar wind flow before it reaches the magnetopause.
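A minimal numerical sketch of this estimate (all input values are assumed, typical solar-wind and Earth figures, not taken from this article):

```python
# Subsolar magnetopause standoff from the pressure-balance estimate above.
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
R_EARTH = 6.371e6         # Earth radius, m
M_PROTON = 1.67e-27       # proton mass, kg

rho = 6e6 * M_PROTON      # solar wind mass density (assumed 6 protons/cm^3)
v = 400e3                 # solar wind speed, m/s (assumed)
B0 = 3.1e-5               # Earth's equatorial surface field, T (approx.)
m = B0 * R_EARTH**3       # dipole moment in T*m^3, so that B(r) = m / r^3

r = (m**2 / (2 * MU0 * rho * v**2)) ** (1 / 6)
print(f"standoff distance ~ {r / R_EARTH:.1f} Earth radii")  # ~8 for these inputs
```

The result of roughly 8 Earth radii sits within the 6–15 range quoted above; compression of the dayside field by the boundary currents (neglected here) pushes the observed value somewhat higher.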
Solar System magnetopauses
Research on the magnetopause is conducted using the LMN coordinate system (which is a set of axes like XYZ). N points along the normal to the magnetopause, outward to the magnetosheath; L lies along the projection of the dipole axis onto the magnetopause (positive northward); and M completes the triad by pointing dawnward.
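As a sketch of how such a triad can be constructed (this is standard boundary-normal practice; the construction details and example values are assumptions, not from this article):

```python
# Build LMN unit vectors from an outward magnetopause normal and the
# planetary dipole axis (standard boundary-normal construction).
import numpy as np

def lmn_axes(normal, dipole_axis):
    n = normal / np.linalg.norm(normal)            # N: outward boundary normal
    d = dipole_axis / np.linalg.norm(dipole_axis)
    l = d - np.dot(d, n) * n                       # project dipole axis onto boundary
    l /= np.linalg.norm(l)                         # L: northward along the projection
    m = np.cross(n, l)                             # M: completes the triad (dawnward)
    return l, m, n

# Example: near the subsolar point, normal roughly sunward, dipole axis northward.
L, M, N = lmn_axes(np.array([1.0, 0.0, 0.1]), np.array([0.0, 0.0, 1.0]))
print(L, M, N)
```

At the subsolar point this reduces to N sunward, L northward, and M pointing toward dawn.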
Venus and Mars do not have a planetary magnetic field and do not have a magnetopause. The solar wind interacts with the planet's atmosphere and a void is created behind the planet. In the case of the Earth's moon and other bodies without a magnetic field or atmosphere, the body's surface interacts with the solar wind and a void is created behind the body.
See also
Heliopause
Geopause
Bow shock
Solar System
For applications to spacecraft propulsion, see magnetic sail
Notes
References
Space plasmas | Magnetopause | Physics | 1,399 |
75,962,366 | https://en.wikipedia.org/wiki/Silicon%20Quantum%20Electronics%20Workshop | The Silicon Quantum Electronics Workshop (SiQEW) is a series of workshops on silicon quantum computing that date back to 2007.
References
Workshops
Quantum electronics | Silicon Quantum Electronics Workshop | Physics,Materials_science | 31 |
2,071,932 | https://en.wikipedia.org/wiki/Phosphoric%20acids%20and%20phosphates | In chemistry, a phosphoric acid, in the general sense, is a phosphorus oxoacid in which each phosphorus (P) atom is in the oxidation state +5, and is bonded to four oxygen (O) atoms, one of them through a double bond, arranged as the corners of a tetrahedron. Two or more of these tetrahedra may be connected by shared single-bonded oxygens, forming linear or branched chains, cycles, or more complex structures. The single-bonded oxygen atoms that are not shared are completed with acidic hydrogen atoms. The general formula of a phosphoric acid is H_{n−2x+2}P_nO_{3n−x+1}, where n is the number of phosphorus atoms and x is the number of fundamental cycles in the molecule's structure, between 0 and (n + 2)/2.
Removal of protons (H+) from k hydroxyl groups –OH leaves anions generically called phosphates (if k = n − 2x + 2) or hydrogen phosphates (if k is between 1 and n − 2x + 1), with general formula [H_{n−2x+2−k}P_nO_{3n−x+1}]^{k−}. The fully dissociated anion (k = n − 2x + 2) has formula [P_nO_{3n−x+1}]^{(n−2x+2)−}. The term phosphate is also used in organic chemistry for the functional groups that result when one or more of the hydrogens are replaced by bonds to other groups.
These acids, together with their salts and esters, include some of the best-known compounds of phosphorus, of high importance in biochemistry, mineralogy, agriculture, pharmacy, chemical industry, and chemical research.
Acids
Phosphoric acid
The simplest and most commonly encountered of the phosphoric acids is orthophosphoric acid, H3PO4. Indeed, the term phosphoric acid often means this compound specifically (and this is also the current IUPAC nomenclature).
Oligophosphoric and polyphosphoric acids
Two or more orthophosphoric acid molecules can be joined by condensation into larger molecules by elimination of water. Condensation of a few units yields the oligophosphoric acids, while larger molecules are called polyphosphoric acids. (However, the distinction between the two terms is not well defined.)
For example, pyrophosphoric, triphosphoric and tetraphosphoric acids can be obtained by the reactions

2 H3PO4 → H4P2O7 + H2O
H3PO4 + H4P2O7 → H5P3O10 + H2O
H3PO4 + H5P3O10 → H6P4O13 + H2O

The "backbone" of a polyphosphoric acid molecule is a chain of alternating P and O atoms. Each extra orthophosphoric unit that is condensed adds 1 extra H (hydrogen) atom, 1 extra P (phosphorus) atom, and 3 extra O (oxygen) atoms. The general formula of a polyphosphoric acid is H_{n+2}P_nO_{3n+1} or, equivalently, HO(–PO(OH)–O–)_nH.
Polyphosphoric acids are used in organic synthesis for cyclizations and acylations; an alternative is Eaton's reagent.
Metaphosphoric acid
Metaphosphoric acid ((HPO3)_n) is a colorless, vitreous, deliquescent solid, density 2.2 to 2.5 g/cc, which sublimes upon heating. It is soluble in ethanol.
Cyclic phosphoric acids
Phosphoric acid units can be bonded together in rings (cyclic structures). The simplest such compound is trimetaphosphoric acid or cyclo-triphosphoric acid, having the formula H3P3O9. Its structure is shown in the illustration. Since the ends are condensed, its formula has one less H2O (water) than tripolyphosphoric acid, H5P3O10.
The general formula of a phosphoric acid is H_{n−2x+2}P_nO_{3n−x+1}, where n is the number of phosphorus atoms and x is the number of fundamental cycles in the molecule's structure; that is, the minimum number of bonds that would have to be broken to eliminate all cycles.
The limiting case of internal condensation, where all oxygen atoms are shared and there are no hydrogen atoms (x = (n + 2)/2), is an anhydride of formula P_nO_{5n/2}: phosphorus pentoxide, P4O10.
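A minimal code sketch of this bookkeeping (illustrative only, not from the article), generating the molecular formula for a given number of phosphorus atoms n and cycles x:

```python
# Molecular formula H_(n-2x+2) P_n O_(3n-x+1) of a phosphoric acid.
def phosphoric_acid_formula(n: int, x: int) -> str:
    h = n - 2 * x + 2  # hydrogen count
    o = 3 * n - x + 1  # oxygen count
    if h < 0:
        raise ValueError("x may not exceed (n + 2)/2")
    hydrogen = "" if h == 0 else ("H" if h == 1 else f"H{h}")
    phosphorus = "P" if n == 1 else f"P{n}"
    return f"{hydrogen}{phosphorus}O{o}"

print(phosphoric_acid_formula(1, 0))  # H3PO4  (orthophosphoric)
print(phosphoric_acid_formula(2, 0))  # H4P2O7 (pyrophosphoric)
print(phosphoric_acid_formula(3, 1))  # H3P3O9 (trimetaphosphoric)
```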
Phosphates
Removal of the hydrogen atoms as protons turns a phosphoric acid into a phosphate anion. Partial removal yields various hydrogen phosphate anions.
Orthophosphate
The anions of orthophosphoric acid are orthophosphate (commonly called simply "phosphate") PO4^3−, monohydrogen phosphate HPO4^2−, and dihydrogen phosphate H2PO4^−.
Linear oligophosphates and polyphosphates
Dissociation of pyrophosphoric acid generates four anions, [H_{4−k}P2O7]^{k−}, where the charge k ranges from 1 to 4. The last one is pyrophosphate, P2O7^4−. The pyrophosphates are mostly water-soluble.
Likewise, tripolyphosphoric acid yields at least five anions, [H_{5−k}P3O10]^{k−}, where k ranges from 1 to 5, including tripolyphosphate, P3O10^5−. Tetrapolyphosphoric acid yields at least six anions, including tetrapolyphosphate, P4O13^6−, and so on. Note that each extra phosphoric unit adds one extra P atom, three extra O atoms, and either one extra hydrogen atom or an extra negative charge.
Branched polyphosphoric acids give similarly branched polyphosphate anions. The simplest example of this is triphosphono phosphate, [OP(OPO3)3]^{6−} (the branched isomer of tetrapolyphosphate), and its partially dissociated versions.
The general formula for such (non-cyclic) polyphosphate anions, linear or branched, is [H_{n+2−k}P_nO_{3n+1}]^{k−}, where the charge k may vary from 1 to n + 2. Generally in an aqueous solution, the degree or percentage of dissociation depends on the pH of the solution.
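As a worked illustration of that pH dependence for the simplest case, orthophosphoric acid (a minimal sketch; the pKa values of roughly 2.15, 7.20 and 12.35 are its well-known dissociation constants, not taken from this article):

```python
# Fraction of each orthophosphate species as a function of pH.
def phosphate_fractions(ph, pkas=(2.15, 7.20, 12.35)):
    h = 10.0 ** -ph
    k1, k2, k3 = (10.0 ** -p for p in pkas)
    # relative amounts of H3PO4, H2PO4-, HPO4^2-, PO4^3-
    rel = [1.0, k1 / h, k1 * k2 / h**2, k1 * k2 * k3 / h**3]
    total = sum(rel)
    return [r / total for r in rel]

for species, frac in zip(("H3PO4", "H2PO4-", "HPO4^2-", "PO4^3-"),
                         phosphate_fractions(7.4)):  # physiological pH
    print(f"{species:8s} {frac:.3f}")
```

At pH 7.4 this gives roughly 39% H2PO4− and 61% HPO4^2−, with the fully protonated and fully dissociated forms essentially absent.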
Cyclic polyphosphates
Salts or esters of cyclic polyphosphoric acids are often called "metaphosphates". What are commonly called trimetaphosphates actually have a mixture of ring sizes. A general formula for such cyclic compounds is (HPO3)_x, where x is the number of phosphoric units in the molecule.
When metaphosphoric acids lose their hydrogens as H+, cyclic anions called metaphosphates are formed. An example of a compound with such an anion is sodium hexametaphosphate, (NaPO3)6, used as a sequestrant and a food additive.
Chemical properties
Solubility
The acids in these phosphoric acid series are generally water-soluble, given the polarity of the molecules. Ammonium and alkali phosphates are also quite soluble in water. The alkaline earth salts start becoming less soluble, and phosphate salts of various other metals are even less soluble.
Hydrolysis and condensation
In aqueous solutions (i.e., solutions in water), water gradually (over the course of hours) hydrolyzes polyphosphates into smaller phosphates and finally into orthophosphate, given enough water. Higher temperatures or acidic conditions can speed up the hydrolysis reactions considerably.
Conversely, polyphosphoric acids or polyphosphates are often formed by dehydrating a phosphoric acid solution; in other words, by removing water from it, often by heating and evaporating the water off.
Uses
Ortho-, pyro-, and tripolyphosphate compounds have been commonly used in detergent (i.e., cleaner) formulations. For example, see Sodium tripolyphosphate. Sometimes pyrophosphate, tripolyphosphate, tetrapolyphosphate, etc. are called diphosphate, triphosphate, tetraphosphate, etc., especially when they are part of phosphate esters in biochemistry. They are also used for scale and corrosion control by potable water providers. As a corrosion inhibitor, polyphosphates work by forming a protective film on the interior surface of pipes.
Phosphate esters
The –OH groups in phosphoric acids can also condense with the hydroxyl groups of alcohols to form phosphate esters. Since orthophosphoric acid has three –OH groups, it can esterify with one, two, or three alcohol molecules to form a mono-, di-, or triester. See the general structure image of an ortho- (or mono-) phosphate ester below on the left, where any of the R groups can be a hydrogen or an organic radical. Di- and tripoly- (or tri-) phosphate esters, etc. are also possible. Any –OH groups on the phosphates in these ester molecules may lose H+ ions to form anions, again depending on the pH in a solution. In the biochemistry of living organisms, there are many kinds of (mono)phosphate, diphosphate, and triphosphate compounds (essentially esters), many of which play a significant role in metabolism, such as adenosine diphosphate (ADP) and triphosphate (ATP).
See also
Adenosine monophosphate
Adenosine diphosphate
Adenosine triphosphate
Adenosine tetraphosphate
Nucleoside triphosphate
Organophosphate
Phosphonic acid
Phosphoramidate
Ribonucleoside monophosphate
Superphosphate
References
Further reading
External links
Determination of Polyphosphates Using Ion Chromatography with Suppressed Conductivity Detection, Application Note 71 by Dionex
Dietary minerals
Inorganic compounds
Phosphates
Pyrophosphates
Reagents for organic chemistry | Phosphoric acids and phosphates | Chemistry | 1,875 |
36,522,087 | https://en.wikipedia.org/wiki/Interactive%20Theorem%20Proving%20%28conference%29 | Interactive Theorem Proving (ITP) is an annual international academic conference on the topic of automated theorem proving, proof assistants and related topics, ranging from theoretical foundations to implementation aspects and applications in program verification, security, and formalization of mathematics.
ITP brings together the communities using many systems based on higher-order logic such as ACL2, Coq, Mizar, HOL, Isabelle, Lean, NuPRL, PVS, and Twelf. Individual workshops or meetings devoted to individual systems are usually held concurrently with the conference.
Together with CADE and TABLEAUX, ITP is usually one of the three main conferences of the International Joint Conference on Automated Reasoning (IJCAR) whenever it convenes.
History
The inaugural meeting of ITP was held on 11–14 July 2010 in Edinburgh, Scotland, as part of the Federated Logic Conference. It is the extension of the Theorem Proving in Higher Order Logics (TPHOLs) conference series to the broad field of interactive theorem proving. TPHOLs meetings took place every year from 1988 until 2009.
The first three were informal users' meetings for the HOL system and were the only ones without published papers. Since 1990 TPHOLs has had formal peer-reviewed proceedings, published in Springer's Lecture Notes in Computer Science series. It has also entertained an increasingly wide field of interest.
External links
Automated theorem proving
Theoretical computer science conferences
Logic conferences | Interactive Theorem Proving (conference) | Mathematics,Technology | 288 |
30,805 | https://en.wikipedia.org/wiki/Tape%20bias | Tape bias is the term for two techniques, AC bias and DC bias, that improve the fidelity of analogue tape recorders. DC bias is the addition of direct current to the audio signal that is being recorded. AC bias is the addition of an inaudible high-frequency signal (generally from 40 to 150 kHz) to the audio signal. Most contemporary tape recorders use AC bias.
When recording, magnetic tape has a nonlinear response as determined by its coercivity. Without bias, this response results in poor performance, especially at low signal levels. A recording signal that generates a magnetic field strength less than the tape's coercivity cannot magnetise the tape and produces little playback signal. Bias increases the signal quality of most audio recordings significantly by pushing the signal into more linear zones of the tape's magnetic transfer function.
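The effect can be illustrated numerically. The toy simulation below uses made-up numbers, and its dead-zone-plus-saturation curve is only a crude stand-in for real tape hysteresis; it shows that a quiet tone too small to cross the coercivity threshold records nothing on its own, but is recovered almost linearly when it rides on a high-frequency bias tone.

```python
# Toy illustration (made-up numbers, not a real tape model) of AC bias:
# the tape's response is modeled as a dead zone below the coercivity
# threshold followed by saturation.
import numpy as np

fs = 1_000_000                                   # 1 MHz simulation rate
t = np.arange(0, 0.02, 1 / fs)
audio = 0.2 * np.sin(2 * np.pi * 1_000 * t)      # quiet 1 kHz tone
bias = 1.5 * np.sin(2 * np.pi * 100_000 * t)     # inaudible 100 kHz bias

def tape(h):
    """Dead zone below coercivity (0.5), saturating one unit above it."""
    return np.sign(h) * np.clip(np.abs(h) - 0.5, 0.0, 1.0)

def playback(recorded):
    """Crude playback low-pass removing the bias (nulls at 10 kHz multiples)."""
    kernel = np.ones(100) / 100
    return np.convolve(recorded, kernel, mode="same")

for name, head_signal in [("without bias", audio), ("with AC bias", audio + bias)]:
    out = playback(tape(head_signal))
    level = (out @ audio) / (audio @ audio)      # recovered level of the tone
    print(f"{name}: relative recovered level = {level:.2f}")
# prints ~0.00 without bias and a substantial positive level with bias
```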
History
Magnetic recording was proposed as early as 1878 by Oberlin Smith, who on 4 October 1878 filed, with the U.S. patent office, a caveat regarding the magnetic recording of sound, and who published his ideas on the subject in the 8 September 1888 issue of The Electrical World as "Some possible forms of phonograph". By 1898, Valdemar Poulsen had demonstrated a magnetic recorder and proposed magnetic tape. Fritz Pfleumer was granted a German patent for a non-magnetic "Sound recording carrier" with a magnetic coating on 1 January 1928. Years earlier, Joseph O'Neil had created a similar recording medium, yet had not made a working machine that could record sound.
DC bias
The earliest magnetic recording systems simply applied the unadulterated (baseband) input signal to a recording head, resulting in recordings with poor low-frequency response and high distortion. Within short order, the addition of a suitable direct current to the signal, a DC bias, was found to reduce distortion by operating the tape substantially within its linear-response region. The principal disadvantage of DC bias was that it left the tape with a net magnetization, which generated significant noise on replay because of the grain of the tape particles. However, the earlier wire recorders were largely immune to the problem due to their high running speed and relatively large wire size. Some early DC-bias systems used a permanent magnet placed near the record head, which had to be swung out of the way for replay. DC bias was replaced by AC bias but was later re-adopted by some very low-cost cassette recorders.
AC bias
The original patent for AC bias was filed by Wendell L. Carlson and Glenn L. Carpenter in 1921, eventually resulting in a patent in 1927. The value of AC bias was somewhat masked by the fact that wire recording gained little benefit from the technique and Carlson and Carpenter's achievement was largely ignored. The first rediscovery seems to have been by Dean Wooldridge at Bell Telephone Laboratories, around 1937, but their lawyers found the original patent, and Bell simply kept silent about their rediscovery of AC bias.
Teiji Igarashi, Makoto Ishikawa, and Kenzo Nagai of Japan published a paper on AC biasing in 1938 and received a Japanese patent in 1940. Marvin Camras (USA) also rediscovered high-frequency (AC) bias independently in 1941 and received a patent in 1944.
The reduction in distortion and noise provided by AC bias was accidentally rediscovered in 1940 by Walter Weber while working at the Reichs-Rundfunk-Gesellschaft (RRG) when a DC-biased Magnetophon that he had been working on developed an 'unwanted' oscillation in its record circuitry.
The last production DC-biased Magnetophon machines had harmonic distortion in excess of 10 percent, a dynamic range of 40 dB, and a frequency response of just 50 Hz to 6 kHz at a tape speed slightly in excess of 30 inches per second (76.8 cm/s). The AC-biased Magnetophon machines reduced the harmonic distortion to well under 3 percent, extended the dynamic range to 65 dB, and widened the frequency response to 40 Hz to 15 kHz at the same tape speed. These AC-biased Magnetophons provided a fidelity of recording that outperformed any other recording system of the time.
See also
Barkhausen effect
Dither
Hysteresis
References
Further reading
External links
Biasing in Tape Recording
Audio storage
Tape recording
de:Tonband#Vormagnetisierung | Tape bias | Technology | 885 |
22,359,636 | https://en.wikipedia.org/wiki/Simplicial%20sphere | In geometry and combinatorics, a simplicial (or combinatorial) d-sphere is a simplicial complex homeomorphic to the d-dimensional sphere. Some simplicial spheres arise as the boundaries of convex polytopes; however, in higher dimensions most simplicial spheres cannot be obtained in this way.
One important open problem in the field was the g-conjecture, formulated by Peter McMullen, which asks about possible numbers of faces of different dimensions of a simplicial sphere. In December 2018, the g-conjecture was proven by Karim Adiprasito in the more general context of rational homology spheres.
Examples
For any n ≥ 3, the simple n-cycle Cn is a simplicial circle, i.e. a simplicial sphere of dimension 1. This construction produces all simplicial circles.
The boundary of a convex polyhedron in R3 with triangular faces, such as an octahedron or icosahedron, is a simplicial 2-sphere.
More generally, the boundary of any (d+1)-dimensional compact (or bounded) simplicial convex polytope in the Euclidean space is a simplicial d-sphere.
Properties
It follows from Euler's formula that any simplicial 2-sphere with n vertices has 3n − 6 edges and 2n − 4 faces. The case of n = 4 is realized by the tetrahedron. By repeatedly performing the barycentric subdivision, it is easy to construct a simplicial sphere for any n ≥ 4. Moreover, Ernst Steinitz gave a characterization of 1-skeleta (or edge graphs) of convex polytopes in R3 implying that any simplicial 2-sphere is a boundary of a convex polytope.
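These counts are easy to check numerically; the short script below (vertex, edge, and face counts of three standard polyhedra hardcoded; Python used only for illustration) verifies the relations E = 3n − 6 and F = 2n − 4 together with Euler's formula.

```python
# Check: every simplicial 2-sphere with n vertices has 3n - 6 edges
# and 2n - 4 faces; the boundaries of these polyhedra are examples.
for name, v, e, f in [
    ("tetrahedron", 4, 6, 4),
    ("octahedron", 6, 12, 8),
    ("icosahedron", 12, 30, 20),
]:
    assert e == 3 * v - 6 and f == 2 * v - 4
    assert v - e + f == 2                      # Euler's formula V - E + F = 2
    print(f"{name}: V={v}, E={e}, F={f}")
```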
Branko Grünbaum constructed an example of a non-polytopal simplicial sphere (that is, a simplicial sphere that is not the boundary of a polytope). Gil Kalai proved that, in fact, "most" simplicial spheres are non-polytopal. The smallest example is of dimension d = 4 and has f0 = 8 vertices.
The upper bound theorem gives upper bounds for the numbers fi of i-faces of any simplicial d-sphere with f0 = n vertices. This conjecture was proved for simplicial convex polytopes by Peter McMullen in 1970 and by Richard Stanley for general simplicial spheres in 1975.
The g-conjecture, formulated by McMullen in 1970, asks for a complete characterization of f-vectors of simplicial d-spheres. In other words, what are the possible sequences of numbers of faces of each dimension for a simplicial d-sphere? In the case of polytopal spheres, the answer is given by the g-theorem, proved in 1979 by Billera and Lee (existence) and Stanley (necessity). It has been conjectured that the same conditions are necessary for general simplicial spheres. The conjecture was proved by Karim Adiprasito in December 2018.
See also
Dehn–Sommerville equations
References
Algebraic combinatorics
Topology | Simplicial sphere | Physics,Mathematics | 675 |
8,853,271 | https://en.wikipedia.org/wiki/In-glaze%20decoration | In-glaze or inglaze is a method of decorating pottery, where the materials used allow painted decoration to be applied on the surface of the glaze before the glost firing so that it fuses into the glaze in the course of firing.
It contrasts with the other main methods of adding painted colours to pottery. These are underglaze painting, where the paint is applied before the glaze, which then seals it, and overglaze decoration where the painting is done in enamels after the glazed vessel has been fired, before a second lighter firing to fuse it to the glaze. There is also the use of coloured glazes, which often carry painted designs.
As with underglaze, in-glaze requires pigments that can withstand the high temperatures of the main firing without discolouring. Historically this was a small group. Inglaze works well with tin-glazed pottery, as unlike lead glaze the glaze does not become runny in the course of firing.
Faience
The very wide range of types of European tin-glazed earthenware or "faience" all began using in-glaze or underglaze painting, with overglaze enamels only developing in the 18th century. In French faience, the in-glaze technique is known as grand feu ("big fire") and the one using enamels as petit feu ("little fire"). Most styles in this group, such as Delftware, mostly used blue and white pottery decoration, but Italian maiolica was fully polychrome, using the range of in- and underglaze colours available.
References
Lane, Arthur, French Faïence, 1948, Faber & Faber
Savage, George, and Newman, Harold, An Illustrated Dictionary of Ceramics, 1985, Thames & Hudson.
Ceramic glazes
Types of pottery decoration | In-glaze decoration | Chemistry | 386 |
16,728,071 | https://en.wikipedia.org/wiki/SSTGFLS%20J222557%2B601148 | SSTGFLS J222557+601148 is a planetary nebula in the constellation Cepheus. Located between 2000 and 3000 parsecs from Earth, it was originally classified in 2006 as a supernova remnant. Thought at the time to be the first supernova remnant detected in infrared wavelengths, the object's spectrum and properties did not match those of a typical supernova remnant, and it was reclassified as a planetary nebula in 2010. A candidate central star has been identified, with an apparent infrared magnitude of 22.4.
References
Planetary nebulae
Cepheus (constellation)
| SSTGFLS J222557+601148 | Astronomy | 124
3,973,461 | https://en.wikipedia.org/wiki/Coniferyl%20alcohol | Coniferyl alcohol is an organic compound with the formula HO(CH3O)C6H3CH=CHCH2OH. A colourless or white solid, it is one of the monolignols, produced via the phenylpropanoid biochemical pathway. When copolymerized with related aromatic compounds, coniferyl alcohol forms lignin or lignans. Coniferin is a glucoside of coniferyl alcohol. Coniferyl alcohol is an intermediate in biosynthesis of eugenol and of stilbenoids and coumarin. Gum benzoin contains significant amount of coniferyl alcohol and its esters. It is found in both gymnosperm and angiosperm plants. Sinapyl alcohol and paracoumaryl alcohol, the other two lignin monomers, are found in angiosperm plants and grasses.
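As a quick cross-check of the formula above, the snippet below (RDKit assumed installed; the SMILES string is written here from the structure described and is not taken from the article) confirms the molecular formula and weight.

```python
# Cross-check of coniferyl alcohol's composition using RDKit (assumed
# available); the SMILES encodes 4-hydroxy-3-methoxycinnamyl alcohol.
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

mol = Chem.MolFromSmiles("COc1cc(/C=C/CO)ccc1O")
print(rdMolDescriptors.CalcMolFormula(mol))    # C10H12O3
print(f"{Descriptors.MolWt(mol):.1f} g/mol")   # about 180.2
```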
Occurrence
Coniferyl alcohol is produced from coniferyl aldehyde by the action of dehydrogenase enzymes.
It is a queen retinue pheromone (QRP), a type of honey bee pheromone found in the mandibular glands.
In Forsythia intermedia a dirigent protein was found to direct the stereoselective biosynthesis of (+)-pinoresinol from coniferyl alcohol. Recently, a second, enantiocomplementary dirigent protein was identified in Arabidopsis thaliana, which directs enantioselective synthesis of (−)-pinoresinol.
References
Monolignols
O-methylated phenylpropanoids
Semiochemicals
Insect pheromones | Coniferyl alcohol | Chemistry | 349 |
333,676 | https://en.wikipedia.org/wiki/List%20of%20content%20management%20systems | Content management systems (CMS) are used to organize and facilitate collaborative content creation. Many of them are built on top of separate content management frameworks. The list is limited to notable services.
Open source software
This section lists free and open-source software that can be installed and managed on a web server.
Software as a service (SaaS)
This section lists proprietary software that includes software, hosting, and support with a single vendor. This section includes free services.
Proprietary software
This section lists proprietary software to be installed and managed on a user's own server. This section includes freeware proprietary software.
Other content management frameworks
A content management framework (CMF) is a system that facilitates the use of reusable components or customized software for managing Web content. It shares aspects of a Web application framework and a content management system (CMS).
Below is a list of notable systems that claim to be CMFs.
See also
Comparison of web frameworks
Comparison of wiki software
References
Content management systems | List of content management systems | Technology | 219 |
42,726,919 | https://en.wikipedia.org/wiki/Deferred%20measurement%20principle | The deferred measurement principle is a result in quantum computing which states that delaying measurements until the end of a quantum computation does not affect the probability distribution of outcomes.
A consequence of the deferred measurement principle is that measuring commutes with conditioning.
The choice of whether to measure a qubit before, after, or during an operation conditioned on that qubit will have no observable effect on a circuit's final expected results.
Thanks to the deferred measurement principle, measurements in a quantum circuit can often be shifted around so they happen at better times.
For example, measuring qubits as early as possible can reduce the maximum number of simultaneously stored qubits, potentially enabling an algorithm to be run on a smaller quantum computer or to be simulated more efficiently.
Alternatively, deferring all measurements until the end of circuits allows them to be analyzed using only pure states.
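The principle can be checked directly on a toy circuit. The sketch below (an illustrative example, not drawn from any cited source) prepares a control qubit in |+⟩ and a target in |0⟩, then compares measuring both qubits after a CNOT against measuring the control first and applying a classically conditioned X; both orderings give the same 50/50 split over '00' and '11'.

```python
# Toy check of the principle: a CNOT whose control starts in |+>, with the
# measurement either deferred to the end or performed first (illustrative).
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
SHOTS = 100_000

def deferred():
    """Keep everything quantum; measure both qubits at the very end."""
    plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> control
    zero = np.array([1.0, 0.0])                # |0> target
    state = np.kron(plus, zero)                # amplitudes of 00, 01, 10, 11
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    state = cnot @ state                       # (|00> + |11>) / sqrt(2)
    samples = rng.choice(4, size=SHOTS, p=np.abs(state) ** 2)
    return Counter(format(int(s), "02b") for s in samples)

def early():
    """Measure the control first, then apply a classically conditioned X."""
    controls = rng.integers(0, 2, size=SHOTS)  # measuring |+>: 0 or 1, p = 1/2
    targets = controls                         # X flips |0> iff control == 1
    return Counter(f"{c}{t}" for c, t in zip(controls, targets))

print("deferred:", dict(deferred()))  # ~50% '00', ~50% '11'
print("early:   ", dict(early()))     # same distribution, as the principle states
```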
References
Quantum information science | Deferred measurement principle | Physics | 186 |
3,780,915 | https://en.wikipedia.org/wiki/Table%20manners | Table manners are the rules of etiquette used while eating and drinking together, which may also include the use of utensils. Different cultures observe different rules for table manners. Each family or group sets its own standards for how strictly these rules are to be followed.
Historical
There is a section on table etiquette in the deuterocanonical Book of Sirach, dated to around 200-175 BC.
Europe
Traditionally in Europe, the host or hostess takes the first bite unless he or she instructs otherwise. The host begins after all food for that course has been served and everyone is seated. In religious households, a family meal may commence with saying grace, or at dinner parties the guests might begin the meal by offering some favorable comments on the food and thanks to the host. In a group dining situation it is considered impolite to begin eating before all the group have been served their food and are ready to start.
Napkins should be placed on the lap and not tucked into clothing. They should not be used for anything other than wiping one's mouth and should be placed unfolded on the seat of one's chair should one need to leave the table during the meal, or placed unfolded on the table when the meal is finished.
The fork is held with the left hand and the knife held with the right. The fork is held generally with the tines down, using the knife to cut food or help guide food on to the fork. When no knife is being used, the fork can be held with the tines up. With the tines up, the fork balances on the side of the index finger, held in place with the thumb and index finger. Under no circumstances should the fork be held like a shovel, with all fingers wrapped around the base. A single mouthful of food should be lifted on the fork and one should not chew or bite food from the fork. The knife should be held with the base into the palm of the hand, not like a pen with the base resting between the thumb and forefinger. The knife must never enter the mouth or be licked. When eating soup, the spoon is held in the right hand and the bowl tipped away from the diner, scooping the soup in outward movements. The soup spoon should never be put into the mouth, and soup should be sipped from the side of the spoon, not the end. Food should always be chewed with the mouth closed. Talking with food in one's mouth is seen as very rude. Licking one's fingers and eating slowly can also be considered impolite.
Food should always be tasted before salt and pepper are added. Applying condiments or seasoning before the food is tasted is viewed as an insult to the cook, as it shows a lack of faith in the cook's ability to prepare a meal.
Butter should be cut, not scraped, from the butter dish using a butter knife or side plate knife and put onto a side plate, not spread directly on to the bread. This prevents the butter in the dish from gathering bread crumbs as it is passed around. Bread rolls should be torn with the hands into mouth-sized pieces and buttered individually, from the butter placed on the side plate, using a knife. Bread should not be used to dip into soup or sauces. As with butter, cheese should be cut and placed on the plate before eating.
When eating with other people, pouring one's own drink is acceptable, but it is more polite to offer to pour drinks to the people sitting on either side. Wine bottles should not be upturned in an ice bucket when empty.
It is impolite to reach over someone's plate to pick up food or other items. Diners should always ask for items to be passed along the table to them. In the same vein, diners who are not themselves using the item should pass those items directly to the person who asked, or to someone else who can pass them along to the person. It is also rude to slurp food, eat noisily or make noise with cutlery.
Elbows should remain off the table.
When one has finished eating, regardless of whether the plate is empty or not, this should be communicated to others by placing the knife and fork together on the plate at either the 6 o'clock position (facing upwards), or the 4 o'clock position (facing towards approximately 10 o'clock). The fork tines should face upwards. The napkin, if there is one, should be folded (not too neatly, so it's obvious that it was used) to the left of the plate. This is particularly customary in restaurants, where it is understood as a cue by waiters that one's plate can be collected.
At family meals, children are often expected to ask permission to leave the table at the end of the meal.
Should a mobile telephone (or any other modern device) ring or if a text message is received, the diner should ignore the call. In exceptional cases where the diner feels the call may be of an urgent nature, they should ask to be excused, leave the room and take the call (or read the text message) out of earshot of the other diners. Placing a phone, keys, handbag or wallet on the dinner table is considered rude.
North America
Modern etiquette provides the smallest numbers and types of utensils necessary for dining. Only utensils which are to be used for the planned meal should be set. Even if needed, hosts should not have more than three utensils on either side of the plate before a meal. If extra utensils are needed, they may be brought to the table along with later courses.
A tablecloth extending 10–15 inches past the edge of the table should be used for formal dinners, while placemats may be used for breakfast, lunch, and informal suppers. Candlesticks, even if not lit, should not be on the table while dining during daylight hours. At some restaurants, women may be asked for their orders before men.
Men's and unisex hats should never be worn at the table. Ladies' hats may be worn during the day if visiting others.
Phones and other distracting items should not be used at the dining table. Reading at a table is permitted only at breakfast, unless the diner is alone. Urgent matters should be handled, after an apology, by stepping away from the table.
If food must be removed from the mouth for some reason—a pit, bone, or gristle—the rule of thumb, according to Emily Post, is that it comes out the same way it went in. For example, if olives are eaten by hand, the pit may be removed by hand. If an olive in a salad is eaten with a fork, the pit should be deposited back onto the fork inside one's mouth, and then placed onto a plate. The same applies to any small bone or piece of gristle in food. A diner should never spit things into a napkin, certainly not a cloth napkin. Since the napkin is always laid in the lap and brought up only to wipe one's mouth, hidden food may be accidentally dropped into the lap or onto the host's floor. Food that is simply disliked should be swallowed.
When eating soup or other food served with bowl and spoons, the spoon is always pushed away from oneself, rather than being drawn toward oneself. Food is never slurped. This stems from aristocratic views that drawing the spoon toward oneself portrayed negative images of either hunger or gluttony.
The fork may be used in the American style (in the left hand while cutting and in the right hand to pick up food) or the European Continental style (fork always in the left hand). (See Fork etiquette) The napkin should be left on the seat of a chair only when leaving temporarily. Upon leaving the table at the end of a meal, the napkin is placed loosely on the table to the left of the plate.
India
In formal settings, the host asks the guests to start the meal. Generally, one should not leave the table before the host or the eldest person finishes his or her food. It is also considered impolite to leave the table without asking for the host's or the elder's permission. Normally, whoever finishes first waits for the others, and everyone leaves the table together once everybody has finished.
In a traditional Indian meal setting, the following customs are observed. Normally, the plate is served with small quantities of all the food items.
A cardinal rule of dining is to use the right hand when eating or receiving food. It is inappropriate to touch any communal utensils by the hand used for eating. If the right hand is used for eating, then the left hand should be used for serving oneself from common utensils. Hand washing, both before sitting at a table and after eating, is important.
Small amounts of food are taken at a time, ensuring that food is not wasted. It is considered important to finish each item on the plate out of respect for the food being served. Traditionally, food should be eaten as it is served, without asking for salt or pepper. It is however, now acceptable to express a personal preference for salt or pepper and to ask for it.
Distorting or playing with food is unacceptable. Eating at a moderate pace is important, as eating too slowly may imply a dislike of the food and eating too quickly is considered rude. Generally, it is acceptable to burp or slurp while at the table.
Staring at another diner's plate is also considered rude. It is inappropriate to make sounds while chewing. Certain Indian food items can create sounds, so it is important to close the mouth and chew at a moderate pace.
At the dining table, attention must be paid to specific behaviors that may indicate distraction or rudeness. Answering phone calls, sending messages and using inappropriate language are considered inappropriate while dining and while elders are present.
China
Seating and serving customs play important roles in Chinese dining etiquette. For example, the diners should not sit down or begin to eat before the host (or guest of honor) has done so. When everyone is seated, the host offers to pour tea, beginning with the cup of the eldest person. The youngest person is served last as a gesture of respect for the elders.
Just as in Western cultures, communal utensils (chopsticks and spoons) are used to bring food from communal dishes to an individual's own bowl (or plate). It is considered rude and unhygienic for a diner to use his or her own chopsticks to pick up food from communal bowls and plates when such utensils are present. Other potentially rude behaviors with chopsticks include playing with them, separating them in any way (such as holding one in each hand), piercing food with them, or standing them vertically in a plate of food. (The latter is especially rude, evoking images of incense or 'joss' sticks used ceremoniously at funerals). A rice bowl may be lifted with one hand to scoop rice into the mouth with chopsticks. It is also considered rude to look for a piece one would prefer on the plate instead of picking up the piece that is closest to the diner as symbol of fairness and sharing to the others.
The last piece of food on a communal dish is never served to oneself without asking for permission. When offered the last bit of food, it is considered rude to refuse the offer. It is considered virtuous for diners to not leave any bit of food on their plates or bowls. Condiments, such as soy sauce or duck sauce, may not be routinely provided at high-quality restaurants. The assumption is that perfectly prepared food needs no condiments and the quality of the food can be best appreciated.
Korea
In formal settings, a meal is commenced when the eldest or most senior diner at the table partakes of any of the foods on the table. Before partaking, diners should express their intention to enjoy the meal. Similarly, satisfaction or enjoyment of the meal should be expressed at its completion. On occasion, there are some dishes which require additional cooking or serving at the table. In this case, the youngest or lowest-ranked adult diner should perform this task. When serving, diners are served food and drink in descending order, from the eldest or highest-ranked diner to the youngest or lowest-ranked.
Rice is always consumed with a spoon and never with chopsticks in formal settings. Picking up one's plate or bowl and bringing it to the mouth is considered rude.
Usually, diners will have a bowl of soup on the right with a bowl of rice to its left. Alternatively, soup may be served in a single large communal pot to be consumed directly or ladled into individual bowls. Dining utensils will include a pair of chopsticks and a spoon. Common chopstick etiquette should be followed, but rice is generally eaten with the spoon instead of chopsticks. Often some form of protein (meat, poultry, fish) will be served as a main course and placed at the center of the table within reach of the diners. Banchan will also be distributed throughout the table. If eaten with a spoon, banchan is placed on the spoonful of rice before entering the mouth. With chopsticks, however, it is fed to the mouth directly. The last piece of food on a communal dish should not be served to oneself without first asking for permission, but, if offered the last bit of food in the communal dish, it is considered rude to refuse the offer. Bowls of rice or soup should not be picked up off the table while dining, an exception being made for large bowls of Korean noodle soup. Slurping while eating noodles and soup is generally acceptable. It is not uncommon to chew with the mouth open.
If alcohol is served with the meal, it is common practice that when alcohol is first served for the eldest/highest-ranked diner to make a toast and for diners to clink their glasses together before drinking. The clinking of glasses together is often done throughout the meal. A host should never serve alcohol to themselves. Likewise, it is considered rude to drink alone. Instead, keep pace with other diners and both serve and be served the alcohol. Alcohol should always be served to older and higher-ranked diners with both hands, and younger or lower-ranked diners may turn their face away from other diners when drinking the alcohol.
See also
Cultural competence
Eating utensil etiquette
Montreal–Philippines cutlery controversy
References
External links
Eating behaviors of humans
Etiquette by situation
Dinner | Table manners | Biology | 2,993 |
22,408,334 | https://en.wikipedia.org/wiki/HD%20158633 | HD 158633 is a main sequence star in the northern constellation of Draco. With an apparent visual magnitude of 6.43, this star is a challenge to view with the unaided eye, but it can be seen clearly with a small telescope. Based upon parallax measurements, it is located at a distance of around 42 light years from the Sun. The star is drifting closer to the Sun with a radial velocity of −39 km/s and is predicted to make its closest approach in around 190,400 years.
This is a K-type main sequence star with a spectral classification of K0 V. It has about 79% of the Sun's radius and 73% of the solar mass. It is an estimated 4.3 billion years old and is spinning with a projected rotational velocity of 3.4 km/s. The star is emitting an excess of infrared radiation at a wavelength of 70 μm, suggesting the presence of an orbiting debris disk. It has a low metallicity, with only 37% of the Sun's abundance of elements more massive than helium, and has a relatively high proper motion.
References
K-type main-sequence stars
Solar-type stars
Circumstellar disks
Draco (constellation)
Durchmusterung objects
0675
158633
085235
6518 | HD 158633 | Astronomy | 268 |
44,690,150 | https://en.wikipedia.org/wiki/Chrysomyxa%20neoglandulosi | Chrysomyxa neoglandulosi is a fungus. It likely occurs wherever its telial host, Ledum glandulosum Nutt., is found. The only reported aecial host, Engelmann spruce, occurs in montane to subalpine areas in western Canada and the United States (Crane 2001).
References
Fungal plant pathogens and diseases
neoglandulosi
Fungus species | Chrysomyxa neoglandulosi | Biology | 82 |
54,161,658 | https://en.wikipedia.org/wiki/Jessica%20Lovering | Jessica Lovering is an American astrophysicist and researcher, and Director of Energy at the Breakthrough Institute. She supports the innovative development of new nuclear power plants in response to climate change. She also sits on the Advisory Committee of the Nuclear Innovation Alliance, and was a speaker at Nuclear Innovation Bootcamp at the University of California, Berkeley in 2016. Her biography at ClimateOne states that Lovering "works to change how people think about energy and the environment". Her written work has appeared in various publications, including the journals Issues in Science and Technology, Science and Public Policy, and Energy Policy, as well as the magazine Foreign Affairs. Websites featuring her work include various nuclear energy blogs and EnergyPost.eu. She has worked as a researcher on the documentary film Pandora's Promise and appeared in the TV series Abandoned.
References
Nuclear power
American astrophysicists
Living people
Year of birth missing (living people) | Jessica Lovering | Physics | 177 |
39,871,498 | https://en.wikipedia.org/wiki/Force%20chain | In the study of the physics of granular materials, a force chain consists of a set of particles within a compressed granular material that are held together and jammed into place by a network of mutual compressive forces.
Between these chains are regions of low stress whose grains are shielded from the effects of the grains above by vaulting and arching. A set of interconnected force chains is known as a force network. Force networks visualise inter-particle forces, which is particularly informative for spherical particle systems. For non-spherical particle systems, force chain networks benefit from being supplemented by traction chain networks. Traction chains visualise inter-particle tractions, which give additional insight into inter-particle contact not captured by force chains, in particular the role of the contact area over which inter-particle forces act.
Force networks are an emergent phenomenon created by the complex interaction of the individual grains of material and the patterns of pressure applied within the material. Force chains can be shown to have fractal properties.
Force chains have been investigated both experimentally, through the construction of specially instrumented physical models, and through computer simulation.
References
External links
Force Chains and Distributions in Bead Packs
Granularity of materials
Emergence
Fractals | Force chain | Physics,Chemistry,Mathematics | 248 |
4,354,735 | https://en.wikipedia.org/wiki/Direct%20digital%20control | Direct digital control is the automated control of a condition or process by a digital device (computer). Direct digital control takes a centralized network-oriented approach. All instrumentation is gathered by various analog and digital converters which use the network to transport these signals to the central controller. The centralized computer then follows all of its production rules (which may incorporate sense points anywhere in the structure) and causes actions to be sent via the same network to valves, actuators, and other heating, ventilating, and air conditioning components that can be adjusted.
Overview
Central controllers and most terminal unit controllers are programmable, meaning the direct digital control program code may be customized for the intended use. The program features include time schedules, setpoints, controllers, logic, timers, trend logs, and alarms.
The unit controllers typically have analog and digital inputs, that allow measurement of the variable (temperature, humidity, or pressure) and analog and digital outputs for control of the medium (hot/cold water and/or steam). Digital inputs are typically (dry) contacts from a control device, and analog inputs are typically a voltage or current measurement from a variable (temperature, humidity, velocity, or pressure) sensing device. Digital outputs are typically relay contacts used to start and stop equipment, and analog outputs are typically voltage or current signals to control the movement of the medium (air/water/steam) control devices.
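As a concrete illustration of such control logic, the sketch below is a toy simulation with made-up gains and a crude room model; the I/O functions are hypothetical stand-ins for a controller's analog input and output. It runs a simple proportional-integral loop that modulates a heating valve toward a temperature setpoint.

```python
# Toy PI control loop of the kind a DDC unit controller executes once per
# second; gains, room physics, and I/O functions are all made up for
# illustration and stand in for real analog inputs/outputs.
import random

SETPOINT_C = 21.0
KP, KI = 20.0, 0.5            # illustrative proportional/integral gains
integral = 0.0
room_c = 18.0                 # simulated room temperature

def read_temperature():       # stand-in for the controller's analog input
    return room_c + random.uniform(-0.05, 0.05)

def write_valve(percent):     # stand-in for the 0-100% analog output
    global room_c             # crude physics: heating minus losses outside
    room_c += 0.1 * percent / 100.0 - 0.01 * (room_c - 15.0)

for second in range(600):     # one control pass per simulated second
    error = SETPOINT_C - read_temperature()
    integral = max(-200.0, min(200.0, integral + error))   # anti-windup clamp
    output = max(0.0, min(100.0, KP * error + KI * integral))
    write_valve(output)

print(f"after 10 simulated minutes: {room_c:.1f} C")  # settles near the setpoint
```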
History
An early example of a direct digital control system was completed by the Australian business Midac in 1981-1982 using Australian-designed R-Tec hardware. The system installed at the University of Melbourne used a serial communications network, connecting campus buildings back to a control room "front end" system in the basement of the Old Geology building. Each remote or Satellite Intelligence Unit (SIU) ran two Z80 microprocessors, while the front end ran eleven Z80s in a parallel-processing configuration with paged common memory. The Z80 microprocessors shared the load by passing tasks to each other via the common memory and the communications network. This was possibly the first successful implementation of distributed-processing direct digital control.
Data communication
When direct digital controllers are networked together they can share information through a data bus. The control system may speak 'proprietary' or 'open protocol' language to communicate on the data bus. Examples of open protocol language are Building Automation Control Network (BACnet), LonWorks (Echelon), Modbus TCP and KNX.
Integration
When different direct digital control data networks are linked together they can be controlled from a shared platform. This platform can then share information from one language to another. For example, a LON controller could share a temperature value with a BACnet controller. The integration platform can not only make information shareable, but can interact with all the devices.
Most of the integration platforms are either a PC or a network appliance. In many cases, the HMI (human-machine interface) or SCADA (Supervisory Control And Data Acquisition) system is part of it. Integration platform examples, to name only a few, are Tridium Niagara AX, Trend Controls, TAC Vista, CAN2GO, and OPC (Open Connectivity) Unified Architecture server technology, which is used when direct connectivity is not possible.
Applications
In heating, ventilating, and air conditioning
Direct digital control is often used to control heating, ventilating, and air conditioning devices such as valves via microprocessors using software to perform the control logic. Such systems receive analog and digital inputs from the sensors and devices and, according to the control logic, provide analog or digital outputs.
These systems may be mated with a software package that graphically allows operators to monitor, control, alarm and diagnose building equipment remotely.
Plant growth
Direct digital control can be applied to optimize plant growth in a growth chamber.
Motor
Using an algorithm based on optimal control theory, it is possible to control the speed of an induction motor using a microcontroller.
See also
Building automation
Fieldbus
GE Fanuc Intelligent Platforms
Industrial control systems
Plant process and emergency shutdown systems
Programmable logic controller
Safety instrumented system
References
External links
Role of direct digital control systems in building commissioning
DDCTalk.com - Information, news, and resources related to direct digital control of buildings
www.cipriansusanu.ro
www.directdigital.ro
Computer engineer, Bucharest, Romania.
Heating, ventilation, and air conditioning
Building engineering | Direct digital control | Engineering | 918 |
52,142,704 | https://en.wikipedia.org/wiki/Polygenic%20score | In genetics, a polygenic score (PGS) is a number that summarizes the estimated effect of many genetic variants on an individual's phenotype. The PGS is also called the polygenic index (PGI) or genome-wide score; in the context of disease risk, it is called a polygenic risk score (PRS or PR score) or genetic risk score. The score reflects an individual's estimated genetic predisposition for a given trait and can be used as a predictor for that trait. It gives an estimate of how likely an individual is to have a given trait based only on genetics, without taking environmental factors into account; and it is typically calculated as a weighted sum of trait-associated alleles.
Recent progress in genetics has developed polygenic predictors of complex human traits, including risk for many important complex diseases that are typically affected by many genetic variants, each of which confers a small effect on overall risk. In a polygenic risk predictor the lifetime (or age-range) risk for the disease is a numerical function captured by the score which depends on the states of thousands of individual genetic variants (i.e., single-nucleotide polymorphisms, or SNPs).
Polygenic scores are widely used in animal breeding and plant breeding due to their efficacy in improving livestock breeding and crops. In humans, polygenic scores are typically generated from data of genome-wide association study (GWAS). They are an active area of research spanning topics such as learning algorithms for genomic prediction; new predictor training; validation testing of predictors; and clinical application of PRS. In 2018, the American Heart Association named polygenic risk scores as one of the major breakthroughs in research in heart disease and stroke.
Background
DNA in living organisms is the molecular genetic code for life. Although polygenic risk scores from studies in humans have gained the most attention, the basic idea was first introduced for selective plant and animal breeding. Similar to the latter-day approaches of constructing a polygenic risk score, an individual's (animal or plant) breeding value was calculated as a combination of several single-nucleotide polymorphisms (SNPs) weighted by their individual effects on a trait.
Human DNA contains about 3 billion bases. The human genome can be broadly separated into coding and non-coding sequences, where the coding genome encodes instructions for genes, including some of the sequences that code for proteins. Genome-wide association studies enable mapping phenotypes to the variations in nucleotide bases in human populations. Improvements in methodology and studies with large cohorts have enabled the mapping of many traits (some of which are diseases) to the human genome. Learning which variations influence which specific traits, and how strongly they do so, are the key targets for constructing polygenic scores in humans.
The methods were first considered for humans after the year 2000, and specifically by a proposal in 2007 that such scores could be used in human genetics to identify individuals at high risk for disease. The concept was successfully applied in 2009 by researchers who organized a genome-wide association study (GWAS) of schizophrenia with the objective of constructing scores of risk propensity. That study was the first to use the term polygenic score for a prediction drawn from a linear combination of single-nucleotide polymorphism (SNP) genotypes, a score which was able to explain 3% of the variance in schizophrenia.
Calculation with genome-wide association study
A PRS is constructed from the estimated effect sizes derived from a genome-wide association study (GWAS). In a GWAS, single-nucleotide polymorphisms (SNPs) are tested for an association between cases and controls. The results from a GWAS estimate the strength of the association at each SNP, i.e., the effect size at the SNP, as well as a p-value for statistical significance. A typical score is then calculated by adding up, across a large number of SNPs, the number of risk-modifying alleles at each SNP multiplied by the weight for that SNP.
In mathematical form, the estimated polygenic score is obtained as a sum over the m included SNPs of an individual's risk-allele counts weighted by their estimated effects, i.e., \(\widehat{\mathrm{PGS}} = \sum_{j=1}^{m} x_j \hat{\beta}_j\), where \(x_j \in \{0, 1, 2\}\) is the number of risk-increasing alleles carried at SNP \(j\) and \(\hat{\beta}_j\) is the weight estimated for that SNP in the GWAS.
This idea can be generalized to the study of any trait, and is an example of the more general mathematical term regression analysis.
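The weighted sum itself is a one-line computation. The sketch below (hypothetical genotypes and weights, generated at random purely for illustration) computes a score for each individual from a genotype matrix coded as 0/1/2 risk-allele counts.

```python
# Hypothetical example of the weighted-sum PGS: rows are individuals,
# columns are SNPs coded as 0/1/2 risk-allele counts; weights are the
# beta-hat effect sizes a GWAS would supply (random here for illustration).
import numpy as np

rng = np.random.default_rng(42)
n_people, m_snps = 5, 8
genotypes = rng.integers(0, 3, size=(n_people, m_snps))  # x_ij in {0, 1, 2}
weights = rng.normal(0.0, 0.1, size=m_snps)              # beta_j

pgs = genotypes @ weights      # PGS_i = sum_j x_ij * beta_j
print(np.round(pgs, 3))        # one score per individual
```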
Key considerations
Methods for generating polygenic scores in humans are an active area of research. Two key considerations in developing polygenic scores are which SNPs and the number of SNPs to include. The simplest, the so-called "pruning and thresholding" method, sets weights equal to the coefficient estimates from a regression of the trait on each genetic variant. The included SNPs may be selected using an algorithm that attempts to ensure that each marker is approximately independent.
Independence of each SNP is important for the score's predictive accuracy. SNPs that are physically close to each other are more likely to be in linkage disequilibrium, meaning they are typically inherited together and therefore do not provide independent predictive power; removing such redundant markers is referred to as 'pruning'. The 'thresholding' refers to using only SNPs that meet a specific p-value threshold.
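A rough sketch of pruning and thresholding follows (hypothetical GWAS rows; physical distance is used as a crude stand-in for a real linkage-disequilibrium calculation): keep only genome-wide-significant SNPs and greedily drop any SNP close to one already kept.

```python
# Sketch of pruning and thresholding on hypothetical GWAS rows; physical
# distance is a crude stand-in for a real linkage-disequilibrium check.
P_THRESHOLD = 5e-8       # conventional genome-wide significance level
WINDOW = 250_000         # base pairs; closer SNPs are treated as correlated

snps = [  # (chromosome, position, p-value, effect size)
    ("1", 1_000_000, 1e-12, 0.08),
    ("1", 1_100_000, 3e-9, 0.05),    # near the first hit -> pruned
    ("1", 9_000_000, 4e-8, 0.03),
    ("2", 5_000_000, 2e-10, -0.06),
    ("2", 5_600_000, 1e-3, 0.02),    # fails the p-value threshold
]

kept = []
for chrom, pos, p, beta in sorted(snps, key=lambda s: s[2]):  # strongest first
    if p > P_THRESHOLD:
        continue                     # thresholding
    if any(c == chrom and abs(q - pos) < WINDOW for c, q, _, _ in kept):
        continue                     # pruning
    kept.append((chrom, pos, p, beta))

print(kept)  # the surviving betas become the weights of the polygenic score
```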
Penalized regression can also be used to construct polygenic scores. From prior information, penalized regression assigns probabilities on: 1) how many genetic variants are expected to affect a trait, and 2) the distribution of their effect sizes. These methods in effect "penalize" the large coefficients in a regression model and shrink them conservatively. One popular tool for this approach is "PRS-CS". Another is to use certain Bayesian methods, first proposed in 2001, that directly incorporate genetic features of a given trait as well as genomic features like linkage disequilibrium. (One Bayesian method uses "linkage disequilibrium prediction", or LDpred.)
More approaches for developing polygenic risk scores continue to be described. For example, by incorporating effect sizes from populations of different ancestry, the predictive ability of scores can be improved. Incorporating knowledge of the functional roles of specific genomic chunks can improve the utility of scores. Studies have examined the performances of these methods on standardized dataset.
Application to humans
As the number of genome-wide association studies has exploded, along with rapid advances in methods for calculating polygenic scores, the most obvious application of such scores is in clinical settings for disease prediction or risk stratification. It is important not to over- or under-state the value of polygenic scores. A key advantage of quantifying polygenic contribution for each individual is that the genetic liability does not change over an individual's lifespan. However, while a disease may have strong genetic contributions, the risk arising from one's genetics has to be interpreted in the context of environmental factors. For example, even if an individual has a high genetic risk for alcoholism, that risk is lessened if that individual is never exposed to alcohol.
Predictive performance in humans
For humans, while most polygenic scores are not predictive enough to diagnose disease, they could be used in addition to other covariates (such as age, BMI, smoking status) to improve estimates of disease susceptibility. However, even if a polygenic score cannot make reliable diagnostic predictions across an entire population, it may still make very accurate predictions for outliers at extremely high or low risk. The clinical utility may therefore still be large even if average measures of prediction performance are moderate.
Although issues such as poorer predictive performance in individuals of non-European ancestry limit widespread use, several authors have noted that some causal variants for some conditions, but not others, are shared between Europeans and other groups across different continents, for example for BMI and type 2 diabetes in African populations and for schizophrenia in Chinese populations. Other researchers recognize that polygenic under-prediction in non-European populations should galvanize new GWAS that prioritize greater genetic diversity in order to maximize the potential health benefits brought about by predictive polygenic scores. Significant scientific efforts are being made to this end.
Embryo genetic screening is common, with millions of embryos biopsied and tested each year worldwide. Genotyping methods have been developed so that the embryo genotype can be determined to high precision. Testing for aneuploidy and monogenic diseases has become increasingly established over decades, whereas tests for polygenic diseases began to be employed more recently, having first been used in embryo selection in 2019.
The use of polygenic scores for embryo selection has been criticised due to alleged ethical and safety issues as well as limited practical utility. However, trait-specific evaluations claiming the contrary have been put forth and ethical arguments for PGS-based embryo selection have also been made. The topic continues to be an active area of research not only within genomics but also within clinical applications and ethics.
As of 2019, polygenic scores from well over a hundred phenotypes have been developed from genome-wide association statistics. These include scores that can be categorized as anthropometric, behavioural, cardiovascular, non-cancer illness, psychiatric/neurological, and response to treatment/medication.
Examples of disease prediction performance
When predicting disease risk, a PGS gives a continuous score that estimates the risk of having or getting the disease, within some pre-defined time span. A common metric for evaluating such continuous estimates of yes/no questions (see Binary classification) is the area under the ROC curve (AUC). Some example results of PGS performance, as measured in AUC (0 ≤ AUC ≤ 1 where a larger number implies better prediction), include:
In 2018, AUC ≈ 0.64 for coronary disease using ~120,000 British individuals.
In 2019, AUC ≈ 0.63 for breast cancer, developed from ~95,000 case subjects and ~75,000 controls of European ancestry.
In 2019, AUC ≈ 0.71 for hypothyroidism for ~24,000 case subjects and ~463,000 controls of European ancestry.
In 2020, AUC ≈ 0.71 for schizophrenia, using 90 cohorts including ~67,000 case subjects and ~94,000 controls with ~80% of European ancestry and ~20% of East Asian ancestry. Note that these results use purely genetic information as input; including additional information such as age and sex often greatly improves the predictions. The coronary disease predictor and the hypothyroidism predictor above achieve AUCs of ~0.80 and ~0.78, respectively, when also including age and sex.
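For orientation, the toy example below (simulated liability-threshold data; scikit-learn assumed available; all numbers invented) shows how such AUC values are computed by comparing a continuous score against case/control labels.

```python
# Simulated liability-threshold data showing how a PGS's AUC is computed;
# scikit-learn is assumed available, and all numbers are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 100_000
genetic = rng.normal(size=n)                    # true genetic liability
pgs = genetic + rng.normal(scale=1.2, size=n)   # noisy polygenic estimate
disease = genetic + rng.normal(size=n) > 1.5    # cases above a threshold

print(f"AUC = {roc_auc_score(disease, pgs):.2f}")  # roughly in the 0.6-0.7 range
```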
Importance of sample size
The performance of a polygenic predictor is highly dependent on the size of the dataset that is available for analysis and machine-learning training. Recent scientific progress in prediction power relies heavily on the creation and expansion of large biobanks containing data for both genotypes and phenotypes of very many individuals. As of 2021, there exist several biobanks with hundreds of thousands of samples, i.e., data entries with both genetic and trait information for each individual (see for instance the incomplete list of biobanks).
With the use of these growing biobanks, data from many thousands of individuals are used to detect the relevant variants for a specific trait. Exactly how many are required depends very much on the trait in question. Typically, prediction accuracy increases with sample size until a plateau phase, where performance levels off and changes little as the sample size grows further. This is the limit of how accurate a polygenic predictor that only uses genetic information can be, and it is set by the heritability of the specific trait. The sample size required to reach this performance level for a certain trait is determined by the complexity of the underlying genetic architecture and the distribution of genetic variance in the sampled population. This sample-size dependence has been illustrated for hypothyroidism, hypertension, and type 2 diabetes.
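The plateau behaviour can be reproduced in a toy simulation (synthetic genotypes; a heritability of 0.5 and all other parameters chosen arbitrarily; ridge regression as a simple stand-in for real genomic-prediction methods): predictive accuracy rises with training-set size and then levels off near the heritability-imposed ceiling.

```python
# Synthetic demonstration of the plateau: a ridge-regression PGS trained on
# ever larger samples; heritability h2 = 0.5 and all parameters are made up.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
m = 500                                       # causal SNPs
beta = rng.normal(0, np.sqrt(1 / m), m)       # true effects; genetic var ~0.5

def simulate(n):
    x = rng.binomial(2, 0.5, size=(n, m)).astype(float)    # allele counts
    y = (x - 1.0) @ beta + rng.normal(0, np.sqrt(0.5), n)  # h2 = 0.5
    return x, y

x_test, y_test = simulate(2_000)
for n_train in [250, 1_000, 4_000, 16_000]:
    x, y = simulate(n_train)
    model = Ridge(alpha=m).fit(x, y)
    r = np.corrcoef(model.predict(x_test), y_test)[0, 1]
    print(f"n = {n_train:>6}: predictive correlation = {r:.2f}")
# accuracy climbs with n, then flattens near the ceiling sqrt(h2) ~ 0.71
```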
Note again, that current methods to construct polygenic predictors are sensitive to the ancestries present in the data. As of 2021, most available data have been primarily of populations with European ancestry, which is the reason why PGS generally perform better within this ancestry. The construction of more diverse biobanks with successful recruitment from all ancestries is required to rectify this skewed access to and benefits from PGS-based medicine.
Clinical utility and current usage
A landmark study examining the role of polygenic risk scores in cardiovascular disease invigorated interest in the clinical potential of polygenic scores. This study demonstrated that individuals with the highest polygenic risk scores (top 1%) had a lifetime cardiovascular risk above 10%, comparable to that of carriers of rare genetic variants. This comparison is important because clinical practice can be influenced by knowing which individuals have this rare genetic cause of cardiovascular disease. Since this study, polygenic risk scores have shown promise for disease prediction across other traits. Polygenic risk scores have been studied heavily in obesity, coronary artery disease, diabetes, breast cancer, prostate cancer, Alzheimer's disease, and psychiatric diseases.
As of January 2021, providing PRS directly to individuals was undergoing research trials in health systems around the world but was not yet offered as standard of care. Most use is therefore through consumer genetic testing, where several private companies report PRS for a number of diseases and traits. Consumers download their genotype (genetic variant) data and upload them into online PRS calculators, e.g., Scripps Health, Impute.me, or Color Genomics. The most frequently reported motivation for individuals to seek out PRS reports is general curiosity (98.2%), and the reactions are generally mixed, with common misinterpretations. It is speculated that personal use of PRS could contribute to treatment choices, but more data are needed. As of 2020, a more typical use was that clinicians were presented by individuals with commercially derived disease-specific PRS in the expectation that the clinician would interpret them, something that may create extra burdens for the clinical care system.
Challenges and risks in clinical contexts
At a fundamental level, the use of polygenic scores in clinical contexts will face technical issues similar to those of existing tools. For example, if a tool is not validated in a diverse population, then it may exacerbate disparities with unequal efficacy across populations. This is especially important in genetics where, as of 2018, a majority of the studies to date have been done in Europeans. Other challenges that can arise include how precisely the polygenic risk score can be calculated and how precise it needs to be for clinical utility. Even if a polygenic score is accurately calculated and calibrated for a population, its interpretation must be approached with caution. First, it is important to realize that polygenic traits are different from monogenic traits; the latter stem from fewer genetic loci and can be detected more accurately. Genetic tests are often difficult to interpret and require genetic counseling. Currently, polygenic-score results are being shared with clinicians. Since monogenic genetic testing is far more mature than polygenic scores, we can look there to approximate the clinical impact of polygenic scores. While some studies have found negative effects of returning monogenic genetic results to patients, the majority of studies have found that negative consequences are minor.
Benefits in humans
Unlike many other clinical laboratory or imaging methods, an individual's germ-line genetic risk can be calculated at birth for a variety of diseases after sequencing their DNA once. Thus, polygenic scores may ultimately be a cost-effective measure that can be informative for clinical management. Moreover, the polygenic risk score may be informative across an individual's lifespan, helping to quantify the genetic lifelong risk for certain diseases. For many diseases, having a strong genetic risk can result in an earlier onset of presentation (e.g., familial hypercholesterolemia). Recognizing an increased genetic burden earlier can allow clinicians to intervene earlier and avoid delayed diagnoses. Polygenic scores can be combined with traditional risk factors to increase clinical utility. For example, polygenic risk scores may help improve diagnosis of diseases. This is especially evident in distinguishing Type 1 from Type 2 diabetes. Likewise, a polygenic risk score based approach may reduce invasive diagnostic procedures, as demonstrated in celiac disease. Polygenic scores may also empower individuals to alter their lifestyles to reduce risk for diseases. While there is some evidence for behavior modification as a result of knowing one's genetic predisposition, more work is required to evaluate risk-modifying behaviors across a variety of different disease states. Population-level screening is another use case for polygenic scores. The goal of population-level screening is to identify patients at high risk for a disease who would benefit from an existing treatment. Polygenic scores can identify a subset of the population at high risk that could benefit from screening. Several clinical studies are being done in breast cancer, and heart disease is another area that could benefit from a polygenic-score-based screening program.
Non-predictive applications
A variety of applications exists for polygenic scores. In humans, polygenic scores were originally computed in an effort to predict the prevalence and etiology of complex, heritable diseases, which are typically affected by many genetic variants that individually confer a small effect to overall risk. Additionally, a polygenic score can be used in several different ways: as a lower bound to test whether heritability estimates may be biased; as a measure of genetic overlap of traits (genetic correlation), which might indicate e.g. shared genetic bases for groups of mental disorders; as a means to assess group differences in a trait such as height, or to examine changes in a trait over time due to natural selection indicative of a soft selective sweep (as e.g. for intelligence where the changes in frequency would be too small to detect on each individual hit but not on the overall polygenic score); in Mendelian randomization (assuming no pleiotropy with relevant traits); to detect & control for the presence of genetic confounds in outcomes (e.g. the correlation of schizophrenia with poverty); or to investigate gene–environment interactions and correlations. Polygenic scores also have useful statistical properties in (genomic) association testing, for instance to account for outcome-specific background effects and/or improve statistical power.
Applications in non-human species
The benefit of polygenic scores is that they can be used to predict future phenotypes in crops, livestock, and humans alike. Although the same basic concepts underlie these areas of prediction, they face different challenges that require different methodologies. The ability to produce very large family sizes in nonhuman species, accompanied by deliberate selection, leads to a smaller effective population size, higher degrees of linkage disequilibrium among individuals, and a higher average genetic relatedness among individuals within a population. For example, members of plant and animal breeds that humans have effectively created, such as modern maize or domestic cattle, are all technically "related". In human genomic prediction, by contrast, unrelated individuals in large populations are selected to estimate the effects of common SNPs. Because of the smaller effective population size in livestock, the mean coefficient of relationship between any two individuals is likely high, and common SNPs will tag causal variants at greater physical distance than for humans; this is the major reason for lower SNP-based heritability estimates for humans compared to livestock. In both cases, however, sample size is key for maximizing the accuracy of genomic prediction.
While modern genomic prediction scoring in humans is generally referred to as a "polygenic score" (PGS) or a "polygenic risk score" (PRS), in livestock the more common term is "genomic estimated breeding value", or GEBV (similar to the more familiar "EBV", but with genotypic data). Conceptually, a GEBV is the same as a PGS: a linear function of genetic variants that are each weighted by the apparent effect of the variant. Despite this, polygenic prediction in livestock is useful for a fundamentally different reason than for humans. In humans, a PRS is used for the prediction of individual phenotype, while in livestock a GEBV is typically used to predict the offspring's average value of a phenotype of interest in terms of the genetic material it inherited from a parent. In this way, a GEBV can be understood as the average of the offspring of an individual or pair of individual animals. GEBVs are also typically communicated in the units of the trait of interest. For example, the expected increase in milk production of the offspring of a specific parent compared to the offspring from a reference population might be a typical way of using a GEBV in dairy cow breeding and selection.
Notes
A. The preprint lists the AUC for the pure PRS, while the published version of the paper lists only the AUC for the PGS combined with age, sex, and genotyping-array information.
References
Further reading
Francesca Forzano, Olga Antonova, Angus Clarke, Guido de Wert, Sabine Hentze, Yalda Jamshidi, Yves Moreau, Markus Perola, Inga Prokopenko, Andrew Read, Alexandre Reymond, Vigdis Stefansdottir, Carla van El (2022). "The use of polygenic risk scores in pre-implantation genetic testing: an unproven, unethical practice". European Journal of Human Genetics 30, pages 493–495.
External links
Polygenic Risk Scores
Polygenic Score Atlas
Polygenic Score (PGS) Catalog
Animal breeding
Plant breeding
Regression analysis
Genetics studies
Statistical genetics
Personalized medicine | Polygenic score | Chemistry | 4,471 |
4,363,276 | https://en.wikipedia.org/wiki/Bench-clearing%20brawl | A bench-clearing brawl is a form of fighting that occurs in sports, most notably baseball and ice hockey, where most or all players on both teams leave their dugouts, bullpens, or benches, and charge onto the playing area in order to fight one another or try to break up a fight. Penalties for leaving the bench can range from nothing to severe.
Baseball
In baseball, brawls are usually the result of escalating infractions or indignities, often stemming from a batter being hit by a pitch, especially if the batter then charges the mound. They may also be spurred by an altercation between a baserunner and fielder, such as excessive contact during an attempted tag out.
Few bench-clearing brawls result in serious injury; in most cases, no punches are thrown, and the action is limited to pushing and shoving. Notably, players from opposing bullpens often run onto the field side by side, depending on bullpen locations, to join the brawl (which is usually over by the time they arrive), rather than brawling among themselves. This underscores that the purpose of coming onto the field is to show support rather than to escalate the conflict.
Unlike most other team sports, where teams usually have an equivalent number of players on the field at any given time, in baseball the hitting team is at a numerical disadvantage, with a maximum of five players (the batter, up to three runners, and an on-deck batter) and two base coaches on the field, compared to the fielding team's nine players. For this reason, leaving the dugout to join a fight is generally considered acceptable, as it results in numerical equivalence on the field, a fairer fight, and a typically neutral outcome. In most cases, managers and/or umpires will intervene to restore order and resume the game. In at least one case (the infamous Ten Cent Beer Night promotion), the home team (the Cleveland Indians) left their dugout to defend the visiting team (the Texas Rangers) from fans who had stormed the field.
Penalty
Depending on the severity of the unsportsmanlike conduct, an umpire may or may not eject a brawl's participants. Since a bench-clearing brawl by definition involves everyone on both teams, it is exceedingly unlikely that all participants will be ejected, but the player or players responsible for the precipitating event are often ejected. Fines and suspensions generally result and are issued at a later date.
Ice hockey
Fighting in ice hockey by enforcers is an established, if unofficial, part of the sport (especially in North America, where the penalty rules are more permissive); the general procedure in a one-on-one fight is to let it run to its completion and then send both players to the penalty box with five-minute major penalties. Escalations beyond isolated fights, such as when most or all players on the ice begin to fight (known as a line brawl), are prohibited. Players violating these rules face more serious consequences, such as game misconduct penalties (ejection from the game) and suspensions.
As in baseball, hockey brawls usually result from escalating infractions. Dangerous hits, excessive post-whistle roughness, shots taken after the whistle, attacks on the goaltender, and the animosity bred by competition in a game with a significant amount of inter-player violence can all contribute to bench-clearing brawls.
In the National Hockey League, the first player to leave his bench or the penalty box to participate in a brawl receives, in addition to in-game penalties, an automatic 10-game suspension and a $10,000 fine; each subsequent player to do so receives, in addition to in-game penalties, an automatic five-game suspension and a $5,000 fine.
The International Ice Hockey Federation rules prescribe a double minor penalty plus a game misconduct penalty for the first player to leave the bench during an altercation and a misconduct penalty for other such players; a player who leaves the penalty box during an altercation is assessed a minor penalty plus a game misconduct penalty. In addition to these penalties for leaving the bench, all players engaging in a fight may be penalized.
One of the more notable incidents was the Punch-up in Piestany, a game between Canada and the Soviet Union during the 1987 World Junior Ice Hockey Championships. The game was rougher and more dangerous than is generally accepted, and with 6:07 left in the second period, a fight broke out between Pavel Kostichkin and Theoren Fleury, causing both teams to leave the benches for 20 minutes. The officials ordered that the arena lights be turned out, but to no avail, and the IIHF eventually declared the game void. Both teams were ejected from the tournament, costing Canada a potential gold medal, and the Canadian team, disgusted at what they perceived to be a conspiracy against them, chose to leave rather than stay for the end-of-tournament festivities, from which the Soviet team were banned.
A notable KHL bench-clearing brawl saw all the players of Avangard Omsk and Vityaz Chekhov, except for the goaltenders, begin fighting at 3:34 into the first period. The referees ejected 33 players and both teams' coaches before the game was abandoned as only four players remained; the teams and players were fined a total of 5.6 million rubles ($191,000), with seven players being suspended, and the game was counted as a 5-0 loss for both teams.
Other sports
Bench-clearing brawls have also been known to occur in other sports, and officials in those sports have been cracking down on such brawls. In 1995, the National Basketball Association changed the penalty for leaving the bench to participate in a brawl from a $500 fine to an automatic one-game suspension.
In 2010, the Northern Territory Football League in Australia ruled that any player found to have left the interchange bench to participate in a melee would be ejected from the match. They would also have their melee fine increased by 25% and receive an automatic one-game suspension.
Bench-clearing brawls do not occur very often in gridiron football. All levels of the game penalize any "substitute who leaves the team box during a fight" (as it is worded in the high school rule books) with automatic ejection and possible further sanctions depending on the league, and the amount of equipment a football player wears greatly increases the risk of injury in a brawl. In addition, on-field umpires and referees move in immediately to break up fights, and any contact by a team member against an official will draw ejection from the game, with further sanctions by league officials virtually certain. Brawls also draw on-field penalties that move the ball closer to or further from the goal line depending on the team sanctioned, hurting the team's chances of winning far more than in other sports. One notable brawl at the college level was between the University of Miami and Florida International University, where tough talk between the two crosstown schools escalated into a brawl with severe consequences for FIU. A notable recent bench-clearing brawl in gridiron football occurred after the conclusion of the 2024 edition of "The Game" between the universities of Michigan and Ohio State, when Michigan players attempted to plant their team's flag on the Ohio State logo at midfield following their victory. The subsequent brawl lasted several minutes, involved dozens of players, and required police intervention, including the use of pepper spray against the players, to break it up.
At least two bench-clearing brawls have taken place in the Lingerie Football League, since renamed the Legends Football League. The first came in 2009 between the Miami Caliente and the New York Majesty; that brawl eventually led to the Majesty suspending operations. Another occurred during the December 9, 2011 LFL game between the Toronto Triumph and the Philadelphia Passion. It was unclear what punishment either team would face as Toronto was already using replacement players due to a mass walkout of the original team earlier in the year.
A minor bench-clearing brawl occurred during the 2022 FIFA World Cup in the quarterfinal match between Argentina and the Netherlands, when Argentine player Leandro Paredes kicked a ball directly into the Dutch bench after fouling Nathan Aké. Dutch players surrounded Paredes, which led to a brawl.
High school and scholastic sports
Bench-clearing brawls are prohibited in scholastic competition, with the National Federation of State High School Associations specifying the penalty for leaving the bench area to participate in a fight in any sanctioned sport as an automatic ejection and, if the player is actively involved in the fight, an automatic suspension. In addition, school administrators may impose more severe penalties such as disqualification from activities, academic suspension, or expulsion. In more severe instances, participants and coaches can face criminal charges (for example, assault and battery and endangerment of a minor, respectively), and entire schools can face sanctions from their state's athletic association, ranging from letters of reprimand, forfeiture of contests, withholding of travel expenses, and extended suspensions of players and coaches to, in the most severe cases, cancellation of a team's entire season, prohibition from participating in state tournaments for a period of time, or suspension of a school's entire athletic program.
See also
Violence in sports
The Battle of Candlestick, 1965 brawl between the Dodgers and Giants, lasting 14 minutes
Good Friday Massacre, 1984 NHL playoff game where two brawls led to 11 ejections and 252 penalty minutes
1984 Braves–Padres bean brawl
Colorado Avalanche–Detroit Red Wings brawl
The Malice at the Palace, 2004 brawl between the Indiana Pacers, Detroit Pistons, and NBA fans
Knicks–Nuggets brawl
Philippines–Australia basketball brawl
2011 Crosstown Shootout brawl
References
Aggression
American football terminology
Baseball terminology
Basketball terminology
Ice hockey penalties
Ice hockey terminology
Sports terminology
Violence in ice hockey
Major League Baseball controversies | Bench-clearing brawl | Biology | 2,033 |
84,179 | https://en.wikipedia.org/wiki/Lotus%20Improv | Lotus Improv is a discontinued spreadsheet program from Lotus Development released in 1991 for the NeXTSTEP platform and then for Windows 3.1 in 1993. Development was put on hiatus in 1994 after slow sales on the Windows platform, and officially ended in April 1996 after Lotus was purchased by IBM.
Improv was an attempt to redefine the way a spreadsheet program should work, to make it easier to build new spreadsheets and to modify existing ones. Conventional spreadsheets used on-screen cells to store all data, formulas, and notes. Improv separated these concepts and used the cells only for input and output data. Formulas, macros and other objects existed outside the cells, to simplify editing and reduce errors. Improv used named ranges for all formulas, as opposed to cell addresses.
Although not a commercial success in comparison to mainstream products like Lotus 1-2-3 or Microsoft Excel, Improv found a strong following in certain niche markets, notably financial modeling. It was very influential within these special markets, and spawned a number of clones on different platforms, notably Lighthouse Design's Quantrix.
Apple Inc.'s Numbers combines a formula and naming system similar to Improv's, but running within a conventional spreadsheet.
History
Background
The original spreadsheets were pieces of paper with vertical and horizontal lines on them, customized worksheets intended for accounting use. Users would enter data into rectangular areas on the sheets, known as cells, then apply formulas to the data to produce output values that were written down in other cells. A Berkeley professor, Richard Mattessich, was a proponent of using spreadsheets for financial modeling and "what if" calculations for businesses, but noted that recalculating a sheet by hand to run a different scenario could take so long that the inputs would be out of date by the time the calculation was finished. In 1964 he proposed using a computer to run all of the calculations from the point of the change on, thereby updating the sheet in seconds rather than days.
Teaching the use of spreadsheet modelling was common in business schools, often using chalkboards marked up with a layout similar to the paper versions. Using a chalkboard made it easier to fix errors, and allowed the sheet to be shared with a class. In 1979, Dan Bricklin was using such a device when he decided to attempt to computerize it on the newly introduced personal computers. Joined by Bob Frankston, the two created the first spreadsheet, VisiCalc, and released it for the Apple II in 1979. When Ben Rosen of Morgan Stanley saw the program, he wrote that "VisiCalc might be the software tail that wagged the computer industry dog."
VisiCalc was an enormous success, so much so that a huge number of clones appeared. One of these was written by a former VisiCalc programmer, Mitch Kapor. His version, Lotus 1-2-3, would go on to be an even greater success than VisiCalc, in no small part due to the fact that it ran on, and was tuned for, the new IBM PC. Lotus 1-2-3 shipped 60,000 copies in the first month, and Lotus was soon one of the largest software companies in the world.
ATG
Lotus set up an advanced technology group in 1986. One of their initial tasks was to see if they could simplify the task of setting up a spreadsheet. Completed spreadsheets were easy to use, but many users found it difficult to imagine what the sheet needed to look like in order to get started creating it. Should data be entered down columns, or across rows? Should intermediate values be stored within the sheet, or on a separate one? How much room will we need?
Pito Salas, a developer at ATG, decided to attack this problem. After a few months of studying existing real-world examples, it became clear that the data, views of that data, and the formulas that acted on that data were very separate concepts. Yet in every case, the existing spreadsheet programs required the user to type all of these items into the same (typically single) sheet's cells.
This overlap of functionality led to considerable confusion, because it is not obvious which cells hold what sort of data. Is a given cell an input value that is used elsewhere? Is it an intermediate value used for a calculation? Perhaps it is an output value from a calculation? There is no way to know. This insight led to ideas for a new spreadsheet that would cleanly separate these concepts: data, formulas, and output views that would combine data and formulas in a format suitable for the end user. At the same time, the new product would allow users to group data "by purpose", giving it a name instead of referring to it by its position in the sheet. This meant that moving the data on the sheet would have no effect on calculation.
Salas also noted that the views of output data were often the weakest part of existing spreadsheets. Since the input, calculations, and output were all mixed on a sheet, changing the layout could lead to serious problems if data moved. With the data and formulas separated, this was no longer an issue. Salas demonstrated that this separation meant that a number of common tasks that required lengthy recalculation in existing spreadsheets could be handled almost for free simply by changing the view. For instance, if a spreadsheet contained a list of monthly sales, it was not uncommon to have an output column that summed the sales by month; but if one wanted the sales summed by year, this would normally require another formula column and a different output sheet.
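In modern terms, the month-versus-year totals are just two aggregations of the same underlying data, which a tool that separates data from views can produce without a new formula column. A small pandas sketch with invented data:

```python
import pandas as pd

# Hypothetical monthly sales; the yearly total is another view over the
# same data, not a new formula column.
monthly = pd.DataFrame({
    "month": pd.period_range("1995-01", periods=24, freq="M"),
    "sales": range(24),
})
monthly["year"] = monthly["month"].dt.year

print(monthly.groupby("year")["sales"].sum())
```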
Back Bay
By the end of the summer of 1986, Salas had created a slideshow-like demonstration of a system known as Modeler on the IBM PC. In February 1987 he hired Glenn Edelson to implement a working version in C++. As they worked on the project, it became clear that the basic concept was a good one, and was especially useful for financial modeling. At the end of the spring, they hired Bonnie Sullivan to write up a project specification, and Jeff Anderholm was hired to examine the market for a new program aimed at the financials industry. That summer, the team took Modeler to a number of financials companies, and found an overwhelmingly positive reception.
A year later, in September 1988, the team was finally given the go-ahead to start implementing Modeler. After examining a number of platforms, including DOS and the Macintosh OS, the team decided the target platform would be OS/2, at that time considered an up-and-coming system in the commercial space. The project was given the code name "Back Bay", after the Back Bay neighborhood of Boston, and a mascot, named Fluffy Bunny, was selected.
The next month, in October 1988, Steve Jobs visited Lotus to show them the new NeXT computer. When he saw Back Bay he immediately got excited and started pressing for it to be developed on the NeXT platform. The Lotus team was equally excited about NeXT, but continued work on the OS/2 platform. This proved to be much more difficult than imagined; at the time, OS/2 was very buggy, and its Presentation Manager UI was in its infancy. Development was not proceeding well.
NeXT release
After struggling with OS/2 for months, in February 1989 the team decided to move the project to NeXT. When Jobs learned of the decision he sent an enormous bouquet of flowers to the team. More importantly, he also sent Bruce Blumberg, one of NeXT's software experts, to teach the Lotus team about NeXTSTEP. One worrying problem proved to be an enormous advantage in practice: because the back-end was written in C++ and the front-end in Objective-C, it was very easy to segregate the program and track down bugs. Additionally, NeXT's Interface Builder let the team experiment with different UIs at a rate that was not possible on other platforms, and the system evolved rapidly during this period.
Returning for a visit in April 1989, Jobs took the team to task about their categorization system. He demanded a way to directly manipulate the categories and data on-screen, rather than using menus or separate windows. This led to one of Improv's most noted features, the category "tiles", icons that allowed output sheets to be re-arranged in seconds. Jobs remained a supporter throughout, and constantly drove the team to improve the product in many ways. Blumberg remained on-call to help with technical issues, which became serious as NeXT was in the process of releasing NeXTSTEP 2.0, the first major update to the system.
Improv for NeXT was released in February 1991, resulting in "truckloads" of flowers from Jobs. The program was an immediate hit, receiving praise and excellent reviews from major computer publications, and, unusually, mainstream business magazines as well. Earlier predictions that Improv might be NeXT's killer app proved true, and thousands of machines would eventually be sold into the financials market, initially just to run Improv. This gave NeXT a foothold in this market that lasted into the late 1990s, even after their purchase by Apple Inc.
Windows release
After release on NeXT (a version known as "Black Marlin"), attempts were made to port to Windows ("Blue Marlin") and Macintosh ("Red Marlin"). The APIs and programming language for NeXTSTEP were so different from Windows and Macintosh system software that porting was very difficult. Lotus Improv for Windows v2.0 (there was no 1.0) shipped in May 1993, running on Windows 3.1. Like the NeXT release, the Windows version also garnered critical praise, with Byte magazine noting its "usability is outstanding".
In spite of the positive reviews, sales on Windows were slow. In March 1994, Lotus decided to attack this problem by re-positioning Improv as an add-in for 1-2-3, although the programs had nothing in common other than Improv's ability to read data in 1-2-3 files. This had no effect on the sales, and after the release of the minor 2.1 upgrade, development ended in August 1994. The project was left in limbo until April 1996 when the product was officially killed, shortly after IBM purchased Lotus.
After Improv
Improv's disappointing sales and eventual cancellation on the PC platform have been used as a case study in numerous post-failure analyses of the software market. Sales on the NeXT platform could be explained by NeXT's limited market share, but the failure on the PC was another issue. Among the favored explanations is the fact that, unlike the release on NeXT, the Windows version faced strong internal resistance from 1-2-3, and corporate immune response became an issue: Lotus's sales and marketing teams, well versed in selling 1-2-3, did not know how to sell Improv into the market, so they simply did not, continuing to sell the well-known and well-understood 1-2-3. Other explanations include the fact that Microsoft Excel was being offered as part of the Office bundle at marginal rates that were tiny in comparison, as well as several missteps during introduction, such as the lack of a macro language or undo. Joel Spolsky blames the design itself, claiming it was too perfectly aimed at a specific market and lacked the generality that Excel featured.
Although Improv disappeared in the 1990s, the program is fondly recalled in the industry and continues to be mentioned in books on Excel. After Improv's demise, a number of clones quickly appeared. Notable among these was Lighthouse Design's Quantrix, an almost direct clone aimed at the financial market. Quantrix suffered the same fate as Improv when the company was purchased by Sun Microsystems.
Concepts
The core of what would become Improv was to separate the concepts of data, views of the data, and formulas into three portions. The spreadsheet itself would contain only input data. Instead of referring to the data as, in effect, "the data that happens to be in these cells", each set of data in the sheet was given a name, and could then be grouped into categories. Formulas were typed into a separate section, and referred to data through their range, not their physical position in the sheets. Views of the data, some of which looked like spreadsheets, others like charts, could be created dynamically and were not limited in number.
To illustrate the difference between Improv and other systems, consider the simple task of calculating the total sales for a product, given unit sales per month and unit prices. In a conventional spreadsheet the unit price would be typed into one set of cells, say the "A" column, and the sales into another, say "B". The user would then type a formula into "C" that said "A1 times B1" (typically in a form such as @times(A:1, B:1) or =A1*B1). That formula then had to be copied into every cell in column C, with the reference to A1 changed to A2 and so on for each row. The sheet can automate this to some degree, but the real problem is that it simply has no idea what the formula means. Any change to the layout of the spreadsheet will often make the entire sheet stop working properly.
In Improv, one simply enters the data into columns called "Unit Price" and "Unit Sales". A formula can then be created that says "Total Sales = Unit Price times Unit Sales". If a "Total Sales" view is then added to the workbook, the totals automatically appear there, because the sheet "knows" what that formula is for.
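The layout independence described above can be illustrated with a toy model in Python. This is a hypothetical sketch of the named-range idea only, not Improv's actual formula language or engine:

```python
# Toy model of named ranges: formulas refer to names, never to cell
# addresses, so rearranging the data cannot break them.
data = {
    "Unit Price": [2.50, 2.75, 3.00],
    "Unit Sales": [100, 120, 90],
}

formulas = {
    "Total Sales": lambda d: [p * s for p, s in zip(d["Unit Price"],
                                                    d["Unit Sales"])],
}

# A view asks for results by name; where the columns happen to sit in
# any sheet is irrelevant to the calculation.
view = {name: rule(data) for name, rule in formulas.items()}
print(view["Total Sales"])  # [250.0, 330.0, 270.0]
```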
But the real power of Improv did not become clear until work had already started on the project. With the grouping system, one could collect monthly sales into groups like "1995" and "1996" and call the category "years". The unit prices could likewise be grouped by product type, say "clothing" and "food". Now, by dragging these groups around (represented by small tabs), the view could be changed in seconds. This concept was later implemented in the form of pivot tables in several products.
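For readers more familiar with pivot tables, the effect of dragging a category tile can be approximated in pandas; the records below are invented for illustration:

```python
import pandas as pd

# Hypothetical sales records grouped by year and product category.
sales = pd.DataFrame({
    "year":     ["1995", "1995", "1996", "1996"],
    "category": ["clothing", "food", "clothing", "food"],
    "sales":    [120, 200, 150, 210],
})

# Swapping index and columns is the analogue of dragging Improv's tiles:
by_year = pd.pivot_table(sales, values="sales", index="year",
                         columns="category", aggfunc="sum")
by_category = pd.pivot_table(sales, values="sales", index="category",
                             columns="year", aggfunc="sum")
```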
See also
Trapeze, a classic MacOS program that introduced the idea of named formulas and blocks in 1987
Spreadsheet 2000 further separated data and formulas, representing both graphically on-screen
Javelin, a multi-dimensional spreadsheet/modeling program which may have influenced the design of Improv
Quantrix, a multi-dimensional business modelling and analytics software based on Improv
FlexiSheet for Mac OS X
Flexisheet (source code) an open source clone for GNUstep
Notes
References
Citations
Bibliography
External links
A review from 1993
Improv
NeXTSTEP software
Spreadsheet software | Lotus Improv | Mathematics | 3,077 |